WO2012140468A1 - Method for generating a sound effect in a piece of game software, associated computer program and data processing system for executing instructions of the computer program - Google Patents

Method for generating a sound effect in a piece of game software, associated computer program and data processing system for executing instructions of the computer program

Info

Publication number
WO2012140468A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio data
music
ambient music
key
beat
Prior art date
Application number
PCT/IB2011/003221
Other languages
French (fr)
Inventor
Olivier GILLET
Elhad PIESCZEK-ALI
Original Assignee
Mxp4
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mxp4 filed Critical Mxp4
Priority to US13/264,189 priority Critical patent/US20140128160A1/en
Publication of WO2012140468A1 publication Critical patent/WO2012140468A1/en

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 - Input arrangements for video game devices
    • A63F13/22 - Setup operations, e.g. calibration, key configuration or button assignment
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/0008 - Associated control or indicating means
    • G10H1/0025 - Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/36 - Accompaniment arrangements
    • G10H1/38 - Chord
    • G10H1/383 - Chord detection and/or recognition, e.g. for correction, or automatic bass generation
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/36 - Accompaniment arrangements
    • G10H1/40 - Rhythm
    • G10H1/42 - Rhythm comprising tone forming circuits
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 - Input arrangements for video game devices
    • A63F13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1081 - Input via voice recognition
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/6063 - Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6081 - Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/69 - Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/021 - Background music, e.g. for video sequences or elevator music
    • G10H2210/026 - Background music, e.g. for video sequences or elevator music for games, e.g. videogames
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 - Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 - Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/071 - Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for rhythm pattern analysis or rhythm style recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 - Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/076 - Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of timing, tempo; Beat detection
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101 - Music Composition or musical creation; Tools or processes therefor
    • G10H2210/141 - Riff, i.e. improvisation, e.g. repeated motif or phrase, automatically added to a piece, e.g. in real time
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00 - Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/075 - Musical metadata derived from musical analysis or for use in electrophonic musical instruments
    • G10H2240/081 - Genre classification, i.e. descriptive metadata for classification or selection of musical pieces according to style
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00 - Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/075 - Musical metadata derived from musical analysis or for use in electrophonic musical instruments
    • G10H2240/085 - Mood, i.e. generation, detection or selection of a particular emotional content or atmosphere in a musical piece

Definitions

  • the present disclosure relates to a method and system for generating a sound effect in a piece of game software, and in particular to synchronizing the sound effects of a video game with background music that has been substituted for the original game music.
  • the present disclosure relates to adjusting the sound effects of a video game in such a way that they blend perfectly with whatever piece of music the user has decided to play in place of the original game music.
  • the aim of the disclosure is to allow satisfactory immersion in the game, even when a user is using his own ambient music, by encouraging the user to keep the sound effects provided.
  • the present disclosure discusses a method for generating a sound effect in a piece of game software.
  • the method includes transmitting, to a sound reproduction device, audio data representing a sound effect in response to a request for emission of a sound effect from the game software.
  • the method analyzes audio data representing music in the course of reproduction, referred to as ambient music, in order to determine at least one characteristic of the ambient music.
  • the method then defines at least one characteristic of the transmission from the at least one characteristic of the ambient music.
  • the method includes analyzing the audio data representing the ambient music in order to determine instants at which the ambient music has a rhythmic beat in order to analyze audio data representing the ambient music for determining the at least one characteristic of the ambient music.
  • the method defines an instant at which the transmission starts from the instants at which the ambient music has a rhythmic beat in order to determine the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
  • in order to determine the instant at which the transmission starts from the instants at which the music has a rhythmic beat, the method defines, as the instant at which the transmission starts, an instant that follows the last instant at which the music has a rhythmic beat.
  • this following instant is separated from the last beat by an integer number multiplied by the average time interval separating the instants at which the music has a rhythmic beat. According to some embodiments, this delay is preferably exactly one average time interval.
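The timing rule above can be sketched in a few lines. This is an illustrative helper, not code from the patent; the function and parameter names are assumptions:

```python
def next_transmission_instant(beat_times, k=1):
    """Return the instant at which the sound effect transmission starts:
    the last detected beat plus k times the average inter-beat interval
    (k = 1 corresponds to the preferred "once the average interval" case).
    beat_times is an ascending list of beat instants, in seconds."""
    if len(beat_times) < 2:
        raise ValueError("need at least two beats to estimate the interval")
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    return beat_times[-1] + k * avg
```

For a steady 120 BPM track (beats every 0.5 s), the next transmission instant is simply half a second after the last detected beat.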
  • the method includes analyzing the audio data representing the ambient music in order to determine a musical genre for the ambient music in order to analyze the audio data representing the ambient music in order to determine the at least one characteristic of the ambient music.
  • the method then includes selecting, from among several audio data associated with different musical genres, the audio data which is associated with the genre of the ambient music, where the audio data for the transmission stems from the selected audio data, in order to define the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
  • the method includes analyzing the audio data representing the ambient music in order to determine a key for the ambient music in order to analyze the audio data representing the ambient music for determining the at least one characteristic of the ambient music. The method then determines a desired pitch from the determined key in order to determine the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
  • the method includes analyzing the audio data representing the ambient music in order to determine a bass line and a melody line for the ambient music.
  • the analyzing step is also performed in order to analyze the audio data representing the ambient music in order to determine a key for the ambient music.
  • the method also includes determining the key of the ambient music from the bass line and the melody line that have been determined.
  • the method further includes recovering audio data representing a sound effect having a certain pitch and modifying the recovered audio data so that the sound effect that they represent has the desired pitch, such that the audio data of the transmission stem from the audio data that have been modified in this manner.
  • the method further includes determining parameters of a software synthesizer from, firstly, the at least one characteristic of the ambient music and, secondly, from defined relationships.
  • the present disclosure also provides a computer-readable storage medium for generating a sound effect in a piece of game software.
  • the present disclosure also provides a system for generating a sound effect in a piece of game software.
  • the system includes a data processing system which includes a sound reproduction device, a storage device on which a computer program has been saved, and a central processing unit for executing the instructions of the computer program.
  • Figure 1 is a block diagram of a data processing system in accordance with an embodiment of the present disclosure
  • Figure 2 is a block diagram illustrating instruction blocks in a piece of game software implemented by the data processing system of Figure 1 in accordance with an embodiment of the present disclosure
  • Figure 3 illustrates a flow chart for generating a sound effect in accordance with an embodiment of the present disclosure
  • Figure 4 is a block diagram illustrating an internal architecture of a computing device in accordance with an embodiment of the present disclosure.
  • the principles described herein may be embodied in many different forms.
  • the described systems and methods allow for synchronizing the sound effects of a video game to background music.
  • the described systems and methods adjust the sound effects in such a way that they blend perfectly with whichever piece of music the player has decided to play in place of the original game music.
  • end user should be understood to refer to a consumer of data supplied by a data provider.
  • user can refer to a person who receives data provided by the data provider over the Internet in a browser session, or can refer to an automated software application which receives the data and stores or processes the data.
  • a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form.
  • a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals.
  • Computer readable storage media refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
  • a module is a software, hardware, or firmware (or combinations thereof) system, process or functionality, or component thereof, that performs or facilitates the processes, features, and/or functions described herein (with or without human interaction or augmentation).
  • a module can include sub-modules.
  • Software components of a module may be stored on a computer readable medium.
  • Modules may be integral to one or more computers (or servers), or be loaded and executed by one or more computers (or servers).
  • One or more modules may be grouped into an engine or an application.
  • a background music analyzer, game sound effects analyzer and a sound effect scheduler can be a module that is a software, hardware, or firmware (or combinations thereof) system for automatically synchronizing game sound effects with background music.
  • server should be understood to refer to a service point which provides processing, database, and communication facilities.
  • server can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network and storage devices, as well as operating software and one or more database systems and applications software which support the services provided by the server.
  • the game software may provide an option to use an ambient music file (for example a file in mp3 format) from the user instead of the ambient music initially provided.
  • users simply turn off the ambient music initially provided to replace it with ambient music from a piece of software other than the game software, generally a multimedia player such as the software VLC or the software foobar2000.
  • the background music analyzer is a library integrated into a game responsible for recording the music which is substituted for the original game music, either through direct access to the audio file (at the game level), through OS-level interception of audio buffers (at the system level), or through direct recording with a microphone (at the room level).
  • a recorded signal can be split into overlapping frames, such as 100ms frames.
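The framing step can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the 50 ms hop size (50% overlap) is an assumption, since only the 100 ms frame length is specified above:

```python
import numpy as np

def split_into_frames(signal, sample_rate, frame_ms=100, hop_ms=50):
    """Split a recorded signal into overlapping frames.

    frame_ms: frame length in milliseconds (100 ms, as in the text).
    hop_ms: hop between frame starts (assumed; 50 ms gives 50% overlap).
    Returns an array of shape (n_frames, frame_len)."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop: i * hop + frame_len]
                     for i in range(n_frames)])
```

Each frame is then fed to the beat and key detection functions described next.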
  • the following functions can be used to extract features for each frame: (1) Beat detection function: a function showing sharp peaks at beats; (2) Key detection function: indicating the probability that the music has been, over a past period of time, such as 20s, in a specific tonality.
  • a predetermined number of key detection functions are computed, one for each minor and major tonality. For example, 24 key detection functions are computed, one for each of the 12 minor and 12 major tonalities.
  • the beat detection function is computed by periodicity estimation and tracking of an onset detection function.
  • the key detection function is computed by matching a bass and melody chromagram with note distribution templates computed for each scale.
  • the chromagram is obtained either by binning the frequency spectrum into a number of bins (e.g., 12 bins) mapped to a number of tones (e.g., 12 tones) of the equal temperament scale, or by encoding into a number of pitch classes (e.g., 12 pitch classes) the output of a multi-pitch estimator.
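The first of these two options (binning the spectrum into 12 equal-temperament pitch classes) can be sketched as below. This is an illustrative sketch under stated assumptions: the A = 440 Hz reference and the low-frequency cutoff are choices made for the example, not values from the patent:

```python
import numpy as np

def chromagram(frame, sample_rate, f_ref=440.0):
    """Bin the magnitude spectrum of one frame into 12 pitch classes
    (0 = C, ..., 9 = A) of the equal-temperament scale."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    chroma = np.zeros(12)
    for f, mag in zip(freqs, spectrum):
        if f < 27.5:  # skip DC and sub-audible bins (assumed cutoff)
            continue
        midi = 69 + 12 * np.log2(f / f_ref)  # map frequency to MIDI number
        chroma[int(round(midi)) % 12] += mag  # accumulate into pitch class
    return chroma
```

A pure 440 Hz tone then concentrates its energy in pitch class 9 (A), which is what the key-template matching described above relies on.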
  • Additional genre information can be extracted through the use of standard machine learning techniques, such as but not limited to, SVM or Bayesian classifier using mixtures of Gaussian distributions trained on annotated audio files.
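As a toy illustration of the Bayesian option mentioned above, the sketch below fits one spherical Gaussian per genre, a heavily simplified stand-in for a full mixture-of-Gaussians classifier; the class name, the single-component model, and the unit-variance/equal-prior assumptions are all choices made for this example:

```python
import numpy as np

class GaussianGenreClassifier:
    """One spherical unit-variance Gaussian per genre, equal priors."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.means_ = {c: np.mean([x for x, label in zip(X, y) if label == c],
                                  axis=0)
                       for c in self.classes_}
        return self

    def predict(self, x):
        # With equal priors and unit variance, the most probable genre is
        # the one whose mean is closest in Euclidean distance.
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(np.asarray(x) - self.means_[c]))
```

A real system would instead use multi-component mixtures (or an SVM) trained on feature vectors extracted from annotated audio files, as the text indicates.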
  • a game sound effects analyzer analyzes each of the sound effect samples used in the game to detect their fundamental frequency, using an algorithm such as YIN. It is either used during the game development process, in which case all the sound effect samples produced for the game can be annotated with their pitch, or embedded in the game, in which case the analysis can be performed every time the game is launched. When the analysis is part of the game asset preparation procedure, different sound effects can also be annotated with a specific music genre, or different sets of sound effects can be created that match different music genres. For example, the destruction of an enemy in a game can be sonified by a synthesizer sound in the "electro" sample set, and a brass hit in the "soul" sample set.
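A full YIN implementation also involves the cumulative mean normalised difference function and parabolic interpolation; the sketch below uses plain autocorrelation as a simplified stand-in, with assumed search bounds, just to show the kind of per-sample pitch annotation described above:

```python
import numpy as np

def fundamental_frequency(sample, sample_rate, f_min=50.0, f_max=1000.0):
    """Estimate the fundamental frequency of a sound-effect sample by
    picking the autocorrelation peak within the [f_min, f_max] range
    (a simplified stand-in for YIN)."""
    sample = sample - np.mean(sample)
    # Autocorrelation at non-negative lags only.
    ac = np.correlate(sample, sample, mode="full")[len(sample) - 1:]
    lag_min = int(sample_rate / f_max)
    lag_max = int(sample_rate / f_min)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sample_rate / lag
```

During asset preparation, each sample would be run through such a detector once and stored alongside its estimated pitch.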
  • a sound effect scheduler can be embedded in the game and may be responsible for the playback of the game sound effects. It can operate in two modes. In a normal operating mode, the samples are played at their original pitch immediately after the action that triggers them has taken place. In a music-synchronous mode, the sound effect scheduler queries the background music analyzer to retrieve the times at which the past number of beats (e.g., 4 beats) have been played in the background music, and the most probable tonality of the background music. The position in time of the past number of beats (e.g., 4 beats) can be used to anticipate the time at which the next beat will occur.
  • the sound effect is not played instantly; instead, it is delayed so that its playback will coincide with the next beat in the music. Additionally, the difference in pitch between the original sound effect sample (as computed by the sound effect analyzer) and the tonality of the music is compensated for, using transposition methods such as sample rate conversion or pitch-shifting. In the situation where the game sound effects bank has been annotated by genre, the genre information returned by the analysis module can be used to further restrict the set of sound effects played back.
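The two scheduling decisions described above, delaying playback to the predicted next beat and computing a transposition toward the detected tonality, can be sketched together. Function names, the semitone pitch-class representation, and the "pick the smaller transposition direction" rule are assumptions made for this illustration:

```python
def schedule_effect(beat_times, now, sample_pitch_class, key_tonic_class):
    """Return (delay_seconds, semitone_shift) for music-synchronous playback.

    beat_times: ascending instants of the past beats in the background music.
    now: current time; the effect is delayed until the next predicted beat.
    sample_pitch_class / key_tonic_class: 0-11 (0 = C, ..., 9 = A)."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    next_beat = beat_times[-1] + avg
    while next_beat <= now:      # step forward until the beat is in the future
        next_beat += avg
    shift = (key_tonic_class - sample_pitch_class) % 12
    if shift > 6:                # prefer the smaller transposition direction
        shift -= 12
    return next_beat - now, shift
```

The returned semitone shift would then drive a sample-rate converter or pitch-shifter, while the delay is handed to the audio playback queue.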
  • the data processing system 100 includes a central unit 102 which contains a central processing unit 104, such as a microprocessor, and a storage device 106, such as a hard disk.
  • the data processing system 100 has a man/machine interface 108 comprising input devices, such as for example a keyboard 110 and a mouse 112, and output devices, such as for example a display screen 114 and a sound reproduction device 118, 120.
  • the sound reproduction device can be comprised of a sound card 118 arranged in the central unit 102 and speakers 120 connected to the sound card 118.
  • the data processing system 100 includes a sound capture device 122, such as a microphone connected to the sound card 118.
  • the sound capture device 122 is designed to capture sound from a musical source 124 which can be external to the data processing system 100.
  • a non-limiting example of an external musical source 124 is a hi-fi system.
  • a computing device discussed in the data processing system 100 may be any computing device that may be coupled to a network, including, for example, personal digital assistants, Web-enabled cellular telephones, devices that dial into the network, mobile computers, personal computers, Internet appliances, wireless communication devices, game consoles and the like.
  • Computing devices in data processing system 100 include a program for interfacing with the network.
  • Such program can be a window or browser, or other similar graphical user interface, for visually displaying the game to the end user (or player) on the display 114 of the computing device.
  • servers for providing game software and/or ambient music external to the game software may be of any type, running any software, and the software modules, objects or plug-ins may be written in any suitable programming language.
  • Figure 2 illustrates instruction blocks in a piece of game software implemented by the data processing system 100 of Figure 1 in accordance with some embodiments of the present disclosure.
  • audio data FXA, FXB and FXC are saved in the storage device 106 of the data processing system of Figure 1.
  • the audio data FXA, FXB or FXC represent a sound effect and are associated with respective musical genres GA, GB and GC.
  • a piece of game software 200 allowing a user to play a game is likewise saved in the storage device 106.
  • the game software 200 includes game instructions 202 which are designed to supply game information to a user through the output devices of the man/machine interface 108, in that the game information evolves on the basis of commands input by a user using the input devices (e.g., 110, 112) of the man/machine interface 108.
  • the game instructions 202 are designed to send a request R for emission of a sound effect when the game is being executed.
  • the request R is sent upon every action in the game which is performed by the user using the input devices of the man/machine interface 108, in that said action is associated with a sound effect, as discussed below.
  • the game software 200 includes sound effect analysis instructions 204.
  • the sound effect analysis instructions 204 are designed to analyze each saved instance of audio data FXA, FXB and FXC and to determine the pitches PA, PB and PC thereof.
  • the pitch corresponds to a fundamental frequency for the audio data, as determined by means of, for example, a YIN algorithm.
  • the sound effect analysis instructions 204 are furthermore designed to create associations between the audio data FXA, FXB or FXC and the respective pitch PA, PB or PC thereof.
  • the game software 200 includes instructions 206 for analyzing a piece of music in the course of reproduction either by the reproduction device 118, 120 or by the external reproduction device 124. This music is referred to as ambient music.
  • the ambient music analysis instructions 206 are designed to recover audio data MUS representing the ambient music. In a first case of replacing ambient music, for example, the ambient music analysis instructions 206 are designed to directly access the music file indicated by the user in the game software options.
  • the game software options can be a dialog box, window, menu or any other graphical user interface element through which the user can configure aspects of the game, such as, input controls, sound volume, music selection, etc.
  • the ambient music analysis instructions 206 are designed to intercept the audio buffers of an operating system running on the data processing system 100 and executing the game software.
  • the ambient music analysis instructions 206 are designed to use the sound capture device 122 to convert the ambient music into the audio data MUS.
  • the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine at least one characteristic of the ambient music. More precisely, in an example, three characteristics of the ambient music are determined. Thus, the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine instants, denoted as BEAT in Figure 2, at which the ambient music has a rhythmic beat. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a musical genre, denoted GENRE in Figure 2, for the ambient music. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a key, denoted KEY in Figure 2, for the ambient music. A key is defined as the set of a tonic and a mode.
  • the tonic is one of the twelve notes in the classical scale (C, C sharp, D, D sharp, E, F, F sharp, G, G sharp, A, A sharp, B), and the mode is chosen from among the harmonic major mode and the harmonic minor mode; there are thus twenty-four possible keys.
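The twenty-four possible keys are simply every (tonic, mode) pair; a minimal enumeration (names chosen for this illustration):

```python
TONICS = ["C", "C sharp", "D", "D sharp", "E", "F",
          "F sharp", "G", "G sharp", "A", "A sharp", "B"]
MODES = ["harmonic major", "harmonic minor"]

# Every combination of the 12 tonics with the 2 modes: 24 keys in total.
ALL_KEYS = [(tonic, mode) for tonic in TONICS for mode in MODES]
```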
  • the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine a bass line and a melody line for the ambient music. From this, the key of the music from the bass line and the melody line is determined.
  • the game software 200 has sound effect generation instructions 208. This coincides with the sound effects scheduler discussed above.
  • the sound effect generation instructions 208 are designed to, in response to the sending of the request R, define at least one characteristic for a transmission of audio data, denoted FX in Figure 2.
  • This at least one transmission characteristic is determined from the at least one ambient music characteristic determined by the ambient music analysis instructions 206. More precisely, according to some embodiments, and by way of a non-limiting example, the sound effect generation instructions 208 are designed to define three transmission characteristics from the three characteristics of the ambient music.
  • the sound effect generation instructions 208 are designed to define an instant T0 at which the transmission starts from the instants BEAT at which the ambient music has a rhythmic beat.
  • the sound effect generation instructions 208 are designed to define this instant T0 as following the last rhythmic beat instant by a time interval equal to an integer number of times the average time interval separating the rhythmic beat instants. According to some embodiments, the transmission occurs once this average time interval after the last beat.
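A minimal sketch of how such a transmission instant could be computed from recent beat instants (the function name and structure are illustrative; the patent does not prescribe an implementation):

```python
def transmission_instant(beat_times, n=1):
    """Schedule the transmission an integer number n of average beat
    intervals after the last detected beat (n=1, i.e. once the average
    interval, being the default as in the text)."""
    if len(beat_times) < 2:
        raise ValueError("need at least two beats to estimate the interval")
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    return beat_times[-1] + n * avg

# e.g. beats detected at 0.0, 0.5, 1.0, 1.5 s -> transmission at 2.0 s
t0 = transmission_instant([0.0, 0.5, 1.0, 1.5])
```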
  • the sound effect generation instructions 208 are designed to select, from among the default audio data FXA, FXB and FXC, those which are associated with the musical genre GENRE of the ambient music, as provided by the instructions 206.
  • the selected default audio data will subsequently be denoted FXi, and the pitch thereof Pi.
  • the sound effect generation instructions 208 are designed to determine a desired pitch P from the key KEY of the ambient music MUS as provided by the instructions 206.
  • the desired pitch P is the tonic or the fifth of the key KEY.
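For illustration, the tonic or fifth of a key can be derived as a pitch class (0 to 11); this is a sketch under the assumption that pitches are compared modulo the octave, and the names are not from the patent:

```python
# Map note names to pitch classes (C = 0 ... B = 11).
NOTE_INDEX = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def desired_pitch_class(key_tonic, use_fifth=False):
    """Return the pitch class of the tonic of the key, or of its fifth
    (seven semitones above the tonic) when use_fifth is True."""
    pc = NOTE_INDEX[key_tonic]
    return (pc + 7) % 12 if use_fifth else pc
```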
  • the sound effect generation instructions 208 are designed to recover the selected default audio data FXi which, as indicated previously, have a default pitch Pi.
  • the sound effect generation instructions 208 are designed to modify the recovered default audio data FXi so that the sound effect which they represent has the desired pitch P.
  • the sound effect generation instructions 208 are designed to define the selected and modified audio data as audio data FX which represents the desired sound effect.
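The pitch modification step can be sketched as follows; the naive resampling below stands in for the modification of the default audio data and is an assumption rather than the patent's method, which leaves the transposition technique open:

```python
def transposition_ratio(default_pitch_hz, desired_pitch_hz):
    """Factor by which the default audio data FXi must be transposed so
    that its default pitch Pi becomes the desired pitch P."""
    return desired_pitch_hz / default_pitch_hz

def transpose_by_resampling(samples, ratio):
    """Naive nearest-neighbour resampling: reading the samples 'ratio'
    times faster raises the pitch by that ratio (this also shortens the
    sound; duration-preserving pitch-shifting is an alternative)."""
    n_out = int(len(samples) / ratio)
    return [samples[min(int(i * ratio), len(samples) - 1)]
            for i in range(n_out)]
```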
  • the sound effect generation instructions 208 are designed to implement the transmission having the characteristics defined previously, that is to say: the instant T0 at which transmission starts, and the audio data FX stemming from the default audio data FXi corresponding to the genre of the ambient music and having the desired pitch P.
  • FIG. 3 is a flow chart showing the steps in a method 300 for generating a sound effect, via the data processing system 100 in Figure 1 executing the instructions of the game software in Figure 2, in accordance with an embodiment of the present disclosure.
  • the data processing system 100 receives a request for execution of the game software 200 from the user through the man/machine interface 108.
  • In Step 304, in response to reception of the request, the data processing system 100 launches the game software 200.
  • In Step 305, the processing unit 104 executing the sound effect analysis instructions 204 analyzes the audio data FXA, FXB and FXC, determines the respective pitches PA, PB and PC thereof, in the manner indicated with reference to Figure 2, and creates associations between the audio data FXA, FXB and FXC and the respective pitches PA, PB and PC thereof.
  • In Step 306, the central processing unit 104 executing the game instructions 202 supplies game information to the user through the output devices (screen, sound reproduction device, etc.) of the man/machine interface 108 on the basis of commands which are input by the user using the input devices 110, 112 (keyboard, mouse, etc.) of the man/machine interface 108.
  • In Step 308, in parallel with Step 306, the processing unit 104 executing the ambient music analysis instructions 206 recovers audio data MUS representing the ambient music. Still in parallel with Step 306, in Step 310, the processing unit 104 executing the ambient music analysis instructions 206 analyzes the audio data MUS in order to determine at least one characteristic of the ambient music, for example the three characteristics BEAT, GENRE and KEY indicated previously.
  • In Step 316, the central processing unit 104 executing the game instructions 202 receives a command from the user through the input devices of the man/machine interface 108 in order to perform an action in the game, where the action is associated with a sound effect.
  • In Step 318, in response to reception of the command from the user, the central processing unit 104 executing the game instructions 202 sends a request R for emission of a sound effect.
  • In Step 320, in response to the request R, the central processing unit 104 executing the sound effect generation instructions 208 defines the three characteristics T0, FXi and P on the basis of, respectively, the three characteristics BEAT, GENRE and KEY of the ambient music which were determined during Step 310.
  • In Step 322, the central processing unit 104 executing the sound effect generation instructions 208 recovers the selected default audio data FXi which, as indicated previously, represent a sound effect having the default pitch Pi.
  • In Step 324, the central processing unit 104 executing the sound effect generation instructions 208 modifies the default audio data FXi so that the sound effect which they represent changes from the pitch Pi to the desired pitch P.
  • the audio data modified in this manner are denoted FX.
  • the central processing unit 104 executing the sound effect generation instructions 208 performs the transmission at the instant T0, with the audio data FX which, firstly, represent a sound effect at the pitch P and, secondly, stem from the audio data FXi selected in accordance with the genre of the ambient music.
  • the generated sound effect is harmoniously incorporated into the ambient music on several levels: on a rhythmic level as a result of the transmission instant T0, on a melodic level as a result of the pitch P of said sound effect, and on a stylistic level as a result of the selection of the audio data FXi that matched the genre of the ambient music.
  • the method 300 then returns to Steps 306 and 308.
  • Figure 4 is a block diagram illustrating an internal architecture of an example of a computing device, as discussed in relation to the data processing system 100 of Figures 1-3, in accordance with an embodiment of the present disclosure.
  • a computing device as referred to herein refers to any device with a processor capable of executing logic or coded instructions, and could be, as understood in context, a server, personal computer, game console, set top box, smart phone, pad/tablet computer or media device, to name a few such devices.
  • internal architecture 400 includes one or more processing units (also referred to herein as CPUs) 412, which interface with at least one computer bus 402. Also interfacing with computer bus 402 are persistent storage medium/media 406, network interface 414, memory 404, e.g., random access memory (RAM), run-time transient memory, read only memory (ROM), etc., media disk drive interface 408 as an interface for a drive that can read and/or write to media including removable media such as floppy, CD ROM, DVD, etc.
  • Memory 404 interfaces with computer bus 402 so as to provide information stored in memory 404 to CPU 412 during execution of software programs such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer executable process steps, incorporating functionality described herein, e.g., one or more of process flows described herein.
  • CPU 412 first loads computer executable process steps from storage, e.g., memory 404, storage medium / media 406, removable media drive, and/or other storage device. CPU 412 can then execute the stored process steps in order to execute the loaded computer-executable process steps.
  • Stored data, e.g., data stored by a storage device, can be accessed by CPU 412 during the execution of computer-executable process steps.
  • Persistent storage medium/media 406 is a computer readable storage medium(s) that can be used to store software and data, e.g., an operating system and one or more application programs. Persistent storage medium / media 406 can also be used to store device drivers, such as one or more of a digital camera driver, monitor driver, printer driver, scanner driver, or other device drivers, web pages, content files, playlists and other files. Persistent storage medium / media 406 can further include program modules and data files used to implement one or more embodiments of the present disclosure.
  • the system can be composed of a games console, an input for the music, and an input for introducing the game into the console, the console being provided so as to implement the whole of the method.
  • the input for the music may be a USB port or a digital disk reader.
  • the saved instances of the sound effect audio data could be associated with pitches outside of execution of the game software, either automatically (by software analysis) during development of the game or by the musicians or sound engineers themselves.
  • Step 305 of the method 300 from Figure 3 may be unnecessary.
  • the sound effect audio data could be adapted not only to suit the possible musical genres of the ambient music but also to suit possible keys of the ambient music.
  • the sound effect audio data could be adapted to suit the twenty-four keys corresponding to the twelve possible tonics and to the two possible modes as discussed above.
  • each saved instance of audio data would be associated, in addition to a genre, with a tonic and with a mode.
  • the sound effect generation instructions 208 would be designed to select, from among the default audio data, those which are associated not only with the musical genre of the ambient music but also with the key thereof.
  • Step 322 of the method in Figure 3 would be adapted as a result. Furthermore, it would no longer be necessary to analyze the sound effect audio data in order to determine the pitch thereof, nor to modify them in order to transpose said pitch, so that Steps 305 and 324 of the method in Figure 3 may be unnecessary.
  • the sound effect generation instructions 208 could be designed to synthesize the sound effect, that is, to provide the audio data corresponding to said sound effect on the basis of sound synthesis taking account of the characteristics of the ambient music which are determined by the instructions 206, particularly the characteristics KEY, GENRE and BEAT. There would thus no longer be any need for sound effects to be saved, nor for the analysis instructions 204 illustrated in Figure 2.
  • the sound synthesis could comprise, firstly, a software synthesizer having a certain number of modifiable parameters (for example the fundamental frequency or the waveform from an oscillator, or else the cutoff frequency of a filter) and, secondly, a set of relationships, defined by mathematical expressions, between the parameters of the software synthesizer and the characteristics of the ambient music.
  • Steps 322 and 324 of the method in Figure 3 would be replaced by a step involving determination of the parameters of the software synthesizer from, firstly, the characteristics of the ambient music KEY and GENRE and, secondly, the defined relationships, and by a step involving implementation of the software synthesizer with the determined parameters so that it synthesizes sound effect audio data.
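The "set of relationships" between ambient-music characteristics and synthesizer parameters could be sketched as follows; the genre-to-waveform table and the parameter names are illustrative assumptions, not values from the patent:

```python
# Illustrative mapping from ambient-music characteristics to software
# synthesizer parameters (the relationship table is an assumption).
GENRE_WAVEFORM = {"electro": "square", "soul": "triangle", "rock": "saw"}

def synth_params(key_pitch_class, genre):
    """Derive the oscillator fundamental and waveform from the KEY
    (here reduced to a tonic pitch class, 0 = C) and the GENRE."""
    # Place the tonic in the octave starting at C4 (~261.63 Hz),
    # rising one equal-tempered semitone per pitch class.
    fundamental = 261.63 * (2.0 ** (key_pitch_class / 12.0))
    waveform = GENRE_WAVEFORM.get(genre, "sine")  # fall back to a sine
    return {"fundamental_hz": fundamental, "waveform": waveform}
```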

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

Disclosed is a method and system for generating a sound effect in a piece of game software. In response to a request for emission of a sound effect from the game software, a transmission of audio data representing the sound effect to a sound reproduction device is performed. Further, audio data, referred to as ambient music, representing music in the course of reproduction is analyzed in order to determine at least one characteristic (BEAT, GENRE, KEY) of the ambient music. At least one characteristic of the transmission is defined from the at least one characteristic (BEAT, GENRE, KEY) of the ambient music.

Description

METHOD FOR GENERATING A SOUND EFFECT IN A PIECE OF GAME SOFTWARE, ASSOCIATED COMPUTER PROGRAM AND DATA PROCESSING SYSTEM FOR EXECUTING INSTRUCTIONS OF THE COMPUTER PROGRAM
FIELD
[001] The present disclosure relates to a method and system for generating a sound effect in a piece of game software, and in particular for synchronizing the sound effects of a video game to background music played as a substitute for the original game music.
BACKGROUND
[002] Many video game players prefer to play music from their own collection instead of the original background score authored for the game. As a result, they may switch off the game's original sound effects, which may be perceived as unwanted or even annoying.
SUMMARY
[003] The present disclosure relates to adjusting the sound effects of a video game in such a way that they blend perfectly with whatever piece of music the user has decided to play as a substitute for the original game music. The aim of the disclosure is to allow satisfactory immersion in the game, even when a user is using his own ambient music, by encouraging the user to keep the sound effects provided.
[004] According to some embodiments, the present disclosure discusses a method for generating a sound effect in a piece of game software. The method includes transmitting audio data representing a sound effect to a sound reproduction device in response to a request for emission of a sound effect from the game software. The method analyzes audio data representing music in the course of reproduction, referred to as ambient music, in order to determine at least one characteristic of the ambient music. The method then defines at least one characteristic of the transmission from the at least one characteristic of the ambient music. [005] According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine instants at which the ambient music has a rhythmic beat in order to analyze audio data representing the ambient music for determining the at least one characteristic of the ambient music. The method then defines an instant at which the transmission starts from the instants at which the ambient music has a rhythmic beat in order to determine the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
[006] According to some embodiments, the method includes defining as the instant at which the transmission starts, an instant that follows the last instant at which the music has a rhythmic beat in order to determine the instant at which the transmission starts from the instants at which the music has a rhythmic beat. The instant is defined by an integer number multiplied by the average time interval separating the instants at which the music has a rhythmic beat. According to some embodiments, it is preferable that this be once the average time interval.
[007] According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine a musical genre for the ambient music in order to analyze the audio data representing the ambient music in order to determine the at least one characteristic of the ambient music. The method then includes selecting, from among several audio data associated with different musical genres, the audio data which is associated with the genre of the ambient music, where the audio data for the transmission stem from the selected audio data in order to define the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
[008] According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine a key for the ambient music in order to analyze the audio data representing the ambient music for determining the at least one characteristic of the ambient music. The method then determines a desired pitch from the determined key in order to determine the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
[009] According to some embodiments, the method includes analyzing the audio data representing the ambient music in order to determine a bass line and a melody line for the ambient music. The analyzing step is also performed in order to analyze the audio data representing the ambient music in order to determine a key for the ambient music. The method also includes determining the key of the ambient music from the bass line and the melody line that have been determined.
[0010] According to some embodiments, the method further includes recovering audio data representing a sound effect having a certain pitch, modifying the recovered audio data so that the sound effect that they represent has the desired pitch, in that the audio data of the transmission stem from the audio data that have been modified in this manner.
[0011] According to some embodiments, the method further includes determining parameters of a software synthesizer from, firstly, the at least one characteristic of the ambient music and, secondly, from defined relationships. The method
includes implementing the software synthesizer with the determined parameters so that it synthesizes sound effect audio data, in that the audio data of the transmission stem from the audio data that have been synthesized in this manner.
[0012] In another embodiment, a computer-readable storage medium is disclosed for generating a sound effect in a piece of game software.
[0013] In yet another embodiment, a system is disclosed for generating a sound effect in a piece of game software. The system includes a data processing system which includes a sound reproduction device, a storage device on which a computer program has been saved, and a central processing unit for executing the instructions of the computer program.
[0014] These and other aspects and embodiments will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] In the drawing figures, which are not to scale, and where like reference numerals indicate like elements throughout the several views:
[0016] Figure 1 is a block diagram of a data processing system in accordance with an embodiment of the present disclosure;
[0017] Figure 2 is a block diagram illustrating instruction blocks in a piece of game software implemented by the data processing system of Figure 1 in accordance with an embodiment of the present disclosure;
[0018] Figure 3 illustrates a flow chart for generating a sound effect in accordance with an embodiment of the present disclosure;
[0019] Figure 4 is a block diagram illustrating an internal architecture of a computing device in accordance with an embodiment of the present disclosure.
DESCRIPTION OF EMBODIMENTS
[0020] Embodiments are now discussed in more detail referring to the drawings that accompany the present application. In the accompanying drawings, like and/or corresponding elements are referred to by like reference numbers.
[0021] Various embodiments are disclosed herein; however, it is to be understood that the disclosed embodiments are merely illustrative of the disclosure that can be embodied in various forms. In addition, each of the examples given in connection with the various embodiments is intended to be illustrative, and not restrictive. Further, the figures are not necessarily to scale, some features may be exaggerated to show details of particular components (and any size, material and similar details shown in the figures are intended to be illustrative and not restrictive). Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the disclosed embodiments.
[0022] The present disclosure is described below with reference to block diagrams and operational illustrations of methods and devices. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, can be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks.
[0023] In some alternate implementations, the functions/acts noted in the blocks can occur out of the order noted in the operational illustrations. For example, two blocks shown in succession can in fact be executed substantially concurrently or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Furthermore, the embodiments of methods presented and described as flowcharts in this disclosure are provided by way of example in order to provide a more complete
understanding of the technology. The disclosed methods are not limited to the operations and logical flow presented herein. Alternative embodiments are contemplated in which the order of the various operations is altered and in which sub-operations described as being part of a larger operation are performed independently.
[0024] The principles described herein may be embodied in many different forms. The described systems and methods allow for synchronizing the sound effects of a video game to background music. The described systems and methods adjust the sound effects in such a way that they blend perfectly with whichever piece of music the player has decided to play as a substitute for the original game music. [0025] For the purposes of this disclosure the term "end user", "user" or "player" should be understood to refer to a consumer of data supplied by a data provider. By way of example, and not limitation, the term "user" can refer to a person who receives data provided by the data provider over the Internet in a browser session, or can refer to an automated software application which receives the data and stores or processes the data.
[0026] For the purposes of this disclosure, a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form. By way of example, and not limitation, a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals. Computer readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
[0027] For the purposes of this disclosure a module is a software, hardware, or firmware (or combinations thereof) system, process or functionality, or component thereof, that performs or facilitates the processes, features, and/or functions described herein (with or without human interaction or augmentation). A module can include sub-modules.
Software components of a module may be stored on a computer readable medium.
Modules may be integral to one or more computers (or servers), or be loaded and executed by one or more computers (or servers). One or more modules may be grouped into an engine or an application. As discussed herein, a background music analyzer, game sound effects analyzer and a sound effect scheduler can be a module that is a software, hardware, or firmware (or combinations thereof) system for automatically synchronizing game sound effects with background music.
[0028] For the purposes of this disclosure the term "server" should be understood to refer to a service point which provides processing, database, and communication facilities. By way of example, and not limitation, the term "server" can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network and storage devices, as well as operating software and one or more database systems and applications software which support the services provided by the server.
[0029] As discussed herein, many users of game software prefer to play music from their own music collection rather than the music initially provided with the game software. By way of non-limiting examples, there are several ways of replacing the music initially provided in the game software with other ambient music via a background music analyzer. By way of an example, at the game-level, the game software may provide an option to use an ambient music file (for example a file in mp3 format) from the user instead of the ambient music initially provided. As a non-limiting variant, at the system-level, users simply turn off the ambient music initially provided to replace it with ambient music from a piece of software other than the game software, generally a multimedia player such as the software VLC or the software foobar2000. As a further non-limiting variant, at room-level, users simply turn off the ambient music initially provided to replace it with ambient music from a source other than the data processing system executing the game, for example a hi-fi system. Moreover, it has been noticed that users also often turn off the sound effects provided in the game software because they are perceived as disturbing the ambient music which they have chosen. As a result, they are less immersed in the game and the playing pleasure decreases. The background music analyzer is a library integrated into a game responsible for recording the music which is substituted for the original game music, either through direct access to the audio file (at the game-level), through OS-level interception of audio buffers (at the system-level), or through direct recording with a microphone (at the room-level).
[0030] According to some embodiments, as discussed herein, a recorded signal can be split into overlapping frames, such as 100ms frames. The following functions can be used to extract features for each frame: (1) Beat detection function: a function showing sharp peaks at beats; (2) Key detection function: indicating the probability that the music has been, over a past period of time, such as 20s, in a specific tonality. According to some embodiments, a predetermined number of the key detection functions are computed, one for each of the minor and major tonalities. For example, 24 of the key detection functions are computed for the 12 minor and 12 major tonalities. The beat detection function is computed by a periodicity estimation and tracking of an onset detection function. The key detection function is computed by matching a bass and melody chromagram with note distribution templates computed for each scale. The chromagram is obtained by binning the frequency spectrum into a number of bins (e.g., 12 bins) mapped to a number of tones (e.g., 12 tones) of the equal temperament scale; or by encoding into a number of pitch classes (e.g., 12 pitch classes) the output of a multi-pitch estimator. Additional genre information can be extracted through the use of standard machine learning techniques, such as but not limited to, an SVM or a Bayesian classifier using mixtures of Gaussian distributions trained on annotated audio files.
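The template-matching idea behind the key detection function can be sketched as follows. This is an illustrative simplification: the binary scale-membership templates below stand in for the note distribution templates mentioned in the text, whose actual weights the patent does not give:

```python
MAJOR_TEMPLATE = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]   # major scale
MINOR_TEMPLATE = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1]   # harmonic minor

def rotate(template, tonic):
    """Shift a tonic-on-C template so that it starts on the given tonic
    pitch class (0 = C, 1 = C sharp, ...)."""
    return template[-tonic:] + template[:-tonic] if tonic else list(template)

def best_key(chroma):
    """Match a 12-bin chromagram against the 24 rotated templates and
    return the (tonic_pitch_class, mode) pair with the highest score."""
    best, best_score = None, float("-inf")
    for mode, template in (("major", MAJOR_TEMPLATE), ("minor", MINOR_TEMPLATE)):
        for tonic in range(12):
            score = sum(c * w for c, w in zip(chroma, rotate(template, tonic)))
            if score > best_score:
                best, best_score = (tonic, mode), score
    return best
```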
[0031] As discussed herein, at least in view of the above discussion of the background music analyzer, a game sound effects analyzer analyzes each of the sound effects samples used in the game to detect their fundamental frequency, using an algorithm such as YIN. It is either used during the game development process, in which all the sound effect samples produced for the game can be annotated with their pitch, or embedded in the game, in which case the analysis can be performed every time the game is launched. In the situation where the analysis is part of the game asset preparation procedure, different sound effects can also be annotated with a specific music genre, or different sets of sound effects can be created that match different music genres. For example, the destruction of an enemy in a game can be sonified by a synthesizer sound in the "electro" sample set, and a brass hit in the "soul" sample set.
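YIN itself is more involved; as a hedged illustration of the underlying idea, a plain autocorrelation peak-picker can estimate the fundamental frequency of a sound effect sample (YIN refines this with a cumulative-mean-normalized difference function):

```python
import math

def fundamental_frequency(samples, sample_rate, fmin=80.0, fmax=1000.0):
    """Crude autocorrelation pitch estimate: pick the lag in the
    [1/fmax, 1/fmin] range whose autocorrelation is largest."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(samples) // 2)):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag

# 220 Hz sine sampled at 8 kHz: the estimate should land near 220 Hz
sr = 8000
tone = [math.sin(2 * math.pi * 220 * i / sr) for i in range(2048)]
f0 = fundamental_frequency(tone, sr)
```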
[0032] As discussed herein, at least in view of the above discussion of the background music analyzer and game sound effects analyzer, a sound effect scheduler can be embedded in the game and may be responsible for the playback of the game sound effects. It can operate in two modes. In a normal operating mode, the samples are played at their original pitch immediately after the moment the action that triggers them has taken place. In a music-synchronous mode, the sound effect scheduler queries the background music analyzer to retrieve the times at which the past number of beats (e.g., 4 beats) have been played in the background music, and the most probable tonality of the background music. The position in time of the past number of beats (e.g., 4 beats) can be used to anticipate the time at which the next beat will occur. Every time the player initiates, in or during the game, an action that triggers a sound effect, the sound effect is not played instantly; instead, it is delayed so that its playback will coincide with the next beat in the music. Additionally, the difference in pitch between the original sound effect sample (as computed by the sound effect analyzer) and the tonality of the music is compensated for, using transposition methods such as sample rate conversion or pitch-shifting. In the situation where the game sound effects bank has been annotated by genre, the genre information returned by the analysis module can be used to restrain further the set of sound effects played back.
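The music-synchronous mode can be sketched as a single scheduling step (illustrative structure; the patent leaves the actual scheduling code open):

```python
def schedule_effect(now, beat_times, sample_pitch_hz, music_tonic_hz):
    """Delay playback to the anticipated next beat after 'now' and
    compute the pitch-shift ratio that matches the sample's pitch to
    the music's tonality (e.g., via sample rate conversion)."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    next_beat = beat_times[-1] + avg
    while next_beat <= now:          # anticipate the first beat after 'now'
        next_beat += avg
    ratio = music_tonic_hz / sample_pitch_hz
    return next_beat, ratio

# action at t = 2.1 s, beats every 0.5 s up to 1.5 s -> play at 2.5 s,
# and a 440 Hz sample is halved in pitch to match a 220 Hz tonic
t_play, ratio = schedule_effect(2.1, [0.0, 0.5, 1.0, 1.5], 440.0, 220.0)
```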
[0033] Certain embodiments will now be discussed in greater detail with reference to the figures. In general, with reference to Figure 1, a data processing system 100 in accordance with an embodiment for synchronizing sound effects of a video game with background music is shown. The data processing system 100 includes a central unit 102 which contains a central processing unit 104, such as a microprocessor, and a storage device 106, such as a hard disk. The data processing system 100 has a man/machine interface 108 comprising input devices, such as for example a keyboard 110 and a mouse 112, and output devices, such as for example a display screen 114 and a sound reproduction device 118, 120. By way of example, the sound reproduction device can comprise a sound card 118 arranged in the central unit 102 and speakers 120 connected to the sound card 118.
[0034] The data processing system 100 includes a sound capture device 122, such as a microphone connected to the sound card 118. The sound capture device 122 is designed to capture sound from a musical source 124 which can be external to the data processing system 100. A non-limiting example of an external musical source 124 is a hi-fi system.
[0035] It is to be understood that the present disclosure may be implemented utilizing any number of computer technologies. For example, although certain embodiments relate to providing access to game software and ambient music via a computing device, the disclosure may be utilized over any computer network, including, for example, a wide area network, local area network or corporate intranet. Similarly, a computing device discussed in the data processing system 100 may be any computing device that may be coupled to a network, including, for example, personal digital assistants, Web-enabled cellular telephones, devices that dial into the network, mobile computers, personal computers, Internet appliances, wireless communication devices, game consoles and the like.
Computing devices in data processing system 100 include a program for interfacing with the network. Such program, as understood in the art, can be a window or browser, or other similar graphical user interface, for visually displaying the game to the end user (or player) on the display 114 of the computing device. Furthermore, servers for providing game software and/or ambient music external to the game software may be of any type, running any software, and the software modules, objects or plug-ins may be written in any suitable programming language.
[0036] Figure 2 illustrates instruction blocks in a piece of game software implemented by the data processing system 100 of Figure 1 in accordance with some embodiments of the present disclosure. In Figure 2, audio data FXA, FXB and FXC are saved in the storage device 106 of the data processing system of Figure 1. The audio data FXA, FXB or FXC represent a sound effect and are associated with respective musical genres GA, GB and GC. A piece of game software 200 allowing a user to play a game is likewise saved in the storage device 106.
[0037] The game software 200 includes game instructions 202 which are designed to supply game information to a user through the output devices of the man/machine interface 108, wherein the game information evolves on the basis of commands input by a user using the input devices (e.g., 110, 112) of the man/machine interface 108. The game instructions 202 are designed to send a request R for emission of a sound effect when the game is being executed. By way of example, the request R is sent upon every action in the game which is performed by the user using the input devices of the man/machine interface 108, wherein said action is associated with a sound effect, as discussed below.
[0038] The game software 200 includes sound effect analysis instructions 204. The sound effect analysis instructions 204 are designed to analyze each saved instance of audio data FXA, FXB and FXC and to determine the pitch PA, PB and PC thereof. According to some exemplary embodiments, the pitch corresponds to a fundamental frequency for the audio data, as determined by means of, for example, a YIN algorithm. The sound effect analysis instructions 204 are furthermore designed to create associations between the audio data FXA, FXB or FXC and the respective pitch PA, PB or PC thereof. That is, a pitch value PA, PB or PC is determined from the audio samples FXA, FXB or FXC respectively, and this determination is taken into account for assigning a pitch value to the sound effects.

[0039] The game software 200 includes instructions 206 for analyzing a piece of music in the course of reproduction either by the reproduction device 118, 120 or by the external reproduction device 124. This music is referred to as ambient music. The ambient music analysis instructions 206 are designed to recover audio data MUS representing the ambient music. In a first case of recovering ambient music, for example, the ambient music analysis instructions 206 are designed to directly access the music file indicated by the user in the game software options. The game software options can be a dialog box, window, menu or any other graphical user interface element through which the user can configure aspects of the game, such as input controls, sound volume, music selection, etc. In a second case of recovering ambient music, for example, the ambient music analysis instructions 206 are designed to intercept the audio buffers of an operating system running on the data processing system 100 and executing the game software.
In a third case of recovering ambient music, for example, the ambient music analysis instructions 206 are designed to use the sound capture device 122 to convert the ambient music into the audio data MUS.
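As an illustration of the pitch analysis step mentioned in paragraph [0038], the sketch below estimates a fundamental frequency in the spirit of the YIN algorithm. It is a heavily simplified stand-in (the real YIN algorithm adds parabolic interpolation and several further refinements), and all names and default values are illustrative.

```python
import numpy as np

def estimate_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0,
                   threshold=0.1):
    """Rough fundamental-frequency estimate of a mono audio buffer.

    Simplified YIN-style approach: compute the difference function,
    normalise it by its cumulative mean, and take the bottom of the
    first dip below a threshold.
    """
    x = np.asarray(samples, dtype=float)
    min_lag = max(1, int(sample_rate / fmax))
    max_lag = int(sample_rate / fmin)
    n = len(x) - max_lag
    # Difference function d(lag) for lag = 1 .. max_lag.
    d = np.array([np.sum((x[:n] - x[lag:lag + n]) ** 2)
                  for lag in range(1, max_lag + 1)])
    # Cumulative-mean normalisation keeps small lags from winning.
    cum = np.cumsum(d)
    dn = d * np.arange(1, max_lag + 1) / np.where(cum == 0.0, 1.0, cum)
    lag = min_lag
    while lag <= max_lag:
        if dn[lag - 1] < threshold:
            # Walk down to the bottom of this dip.
            while lag < max_lag and dn[lag] < dn[lag - 1]:
                lag += 1
            return sample_rate / lag
        lag += 1
    # Fallback: global minimum over the searched range.
    return sample_rate / (int(np.argmin(dn[min_lag - 1:])) + min_lag)
```

Applied to a 220 Hz sine wave sampled at 8 kHz, this returns a value within a few hertz of 220 (integer-lag resolution limits the accuracy, which is why the full YIN algorithm interpolates between lags).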
[0040] The ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine at least one characteristic of the ambient music. More precisely, in an example, three characteristics of the ambient music are determined. Thus, the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine instants, denoted as BEAT in Figure 2, at which the ambient music has a rhythmic beat. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a musical genre, denoted GENRE in Figure 2, for the ambient music. The ambient music analysis instructions 206 are also designed to analyze the audio data MUS in order to determine a key, denoted KEY in Figure 2, for the ambient music. A key is defined as the set of a tonic and a mode. By way of example, the tonic is one of the twelve notes in the classical scale (C, C sharp, D, D sharp, E, F, F sharp, G, G sharp, A, A sharp, B), and the mode is chosen from among the harmonic major mode and the harmonic minor mode; there are thus twenty-four possible keys. To perform the analysis, for example, the ambient music analysis instructions 206 are designed to analyze the audio data MUS in order to determine a bass line and a melody line for the ambient music. The key of the music is then determined from the bass line and the melody line.
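One common way to pick one of the twenty-four keys from audio is template matching against a 12-bin pitch-class histogram. The sketch below is one such illustration, not the method claimed here: the binary scale-membership weights and the extra weight on the tonic are simplifying assumptions.

```python
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
# Scale-membership templates relative to the tonic (weight 2 on the
# tonic itself); the minor template uses the harmonic minor scale.
MAJOR = np.array([2, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1], dtype=float)
MINOR = np.array([2, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1], dtype=float)

def estimate_key(chroma):
    """Pick the best-matching of the 24 keys (12 tonics x 2 modes)
    for a 12-bin pitch-class histogram of the ambient music."""
    chroma = np.asarray(chroma, dtype=float)
    best_score, best_key = -np.inf, None
    for tonic in range(12):
        for mode, template in (("major", MAJOR), ("minor", MINOR)):
            # Rotate the template so its tonic lands on `tonic`.
            score = float(np.dot(chroma, np.roll(template, tonic)))
            if score > best_score:
                best_score, best_key = score, f"{NOTES[tonic]} {mode}"
    return best_key
```

For instance, a histogram dominated by the pitch classes C, E and G is matched to C major, while one dominated by A, C and E is matched to A minor.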
[0041] The game software 200 has sound effect generation instructions 208. These correspond to the sound effect scheduler discussed above. The sound effect generation instructions 208 are designed to, in response to the sending of the request R, define at least one characteristic for a transmission of audio data, denoted FX in Figure 2 and representing a sound effect, to the reproduction device 118, 120. This at least one transmission characteristic is determined from the at least one ambient music characteristic determined by the ambient music analysis instructions 206. More precisely, according to some embodiments, and by way of a non-limiting example, the sound effect generation instructions 208 are designed to define three transmission characteristics from, respectively, the three ambient music characteristics BEAT, GENRE and KEY. Thus, the sound effect generation instructions 208 are designed to define an instant T0 at which the transmission starts from the instants BEAT at which the ambient music has a rhythmic beat. By way of example, the sound effect generation instructions 208 are designed to define this instant T0 as following the last rhythmic beat instant by a time interval equal to an integer number of times the average time interval separating the rhythmic beat instants. According to some embodiments, this time interval is preferably equal to once the average time interval.
[0042] Furthermore, the sound effect generation instructions 208 are designed to select, from among the default audio data FXA, FXB and FXC, those which are associated with the musical genre GENRE of the ambient music, as provided by the instructions 206. The selected default audio data will subsequently be denoted FXi and the pitch thereof Pi.
Furthermore, the sound effect generation instructions 208 are designed to determine a desired pitch P from the key KEY of the ambient music MUS as provided by the instructions 206. Preferably, according to some embodiments, the desired pitch P is the tonic or the fifth of the key KEY. The sound effect generation instructions 208 are designed to recover the selected default audio data FXi which, as indicated previously, have a default pitch Pi.
[0043] The sound effect generation instructions 208 are designed to modify the recovered default audio data FXi so that the sound effect which they represent has the desired pitch P. The sound effect generation instructions 208 are designed to define the selected and modified audio data as audio data FX which represents the desired sound effect. The sound effect generation instructions 208 are designed to implement the transmission having the characteristics defined previously, that is to say: the instant T0 at which transmission starts, and the audio data FX stemming from default audio data FXi corresponding to the genre of the ambient music and having the desired pitch P.
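The pitch modification can be illustrated with the sample-rate-conversion variant of transposition mentioned in paragraph [0032]. This naive sketch also changes the duration of the effect, which a time-preserving pitch-shifter would avoid; the function name and interface are illustrative.

```python
import numpy as np

def transpose(samples, current_pitch, desired_pitch):
    """Move a sound-effect sample from its analysed pitch Pi to the
    desired pitch P by naive resampling (sample-rate conversion)."""
    ratio = desired_pitch / current_pitch        # > 1 transposes upward
    x = np.asarray(samples, dtype=float)
    n_out = max(1, int(round(len(x) / ratio)))
    # Read the source at `ratio` times the normal speed, with linear
    # interpolation between source samples.
    positions = np.linspace(0.0, len(x) - 1, n_out)
    return np.interp(positions, np.arange(len(x)), x)
```

Transposing one octave up (ratio 2) halves the sample length: played back at the original sample rate, the waveform's cycles pass twice as fast, doubling the perceived pitch.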
[0044] Having discussed the functional and executable components for generating a sound effect in a piece of game software, its operation will now be described with reference to Figure 3. Figure 3 is a flow chart showing the steps in a method 300 for generating a sound effect, via the data processing system 100 in Figure 1 executing the instructions of the game software in Figure 2, in accordance with an embodiment of the present disclosure. In Step 302, the data processing system 100 receives a request for execution of the game software 200 from the user through the man/machine interface 108. In Step 304, in response to reception of the request, the data processing system 100 launches the game software 200. In Step 305, in which the game is initialized, the central processing unit 104 executing the sound effect analysis instructions 204 analyzes the audio data FXA, FXB and FXC, determines the respective pitch PA, PB and PC thereof, in the manner indicated with reference to Figure 2, and creates associations between the audio data FXA, FXB and FXC and the respective pitch PA, PB and PC thereof.

[0045] In Step 306, the central processing unit 104 executing the game instructions 202 supplies game information to the user through the output devices (screen, sound reproduction device, etc.) of the man/machine interface 108 on the basis of commands which are input by the user using the input devices 110, 112 (keyboard, mouse, etc.) of the man/machine interface 108. In parallel with Step 306, in Step 308, the central processing unit 104 executing the ambient music analysis instructions 206 recovers audio data MUS representing the ambient music. Still in parallel with Step 306, in Step 310, the central processing unit 104 executing the ambient music analysis instructions 206 analyzes the audio data MUS in order to determine at least one characteristic of the ambient music, for example the three characteristics BEAT, GENRE and KEY indicated previously.
[0046] In Step 316, the central processing unit 104 executing the game instructions 202 receives a command from the user through the input devices of the man/machine interface 108 in order to perform an action in the game, where the action is associated with a sound effect. In Step 318, in response to reception of the command from the user, the central processing unit 104 executing the game instructions 202 sends a request R for emission of a sound effect. In Step 320, in response to the request R, the central processing unit 104 executing the sound effect generation instructions 208 defines the three characteristics T0, FXi and P on the basis of, respectively, the three characteristics BEAT, GENRE and KEY of the ambient music which were determined during Step 310. In Step 322, the central processing unit 104 executing the sound effect generation instructions 208 recovers the selected default audio data FXi which, as indicated previously, represent a sound effect having the default pitch Pi. In Step 324, the central processing unit 104 executing the sound effect generation instructions 208 modifies the default audio data FXi so that the sound effect which they represent changes from the pitch Pi to the desired pitch P. The audio data modified in this manner are denoted FX. In Step 326, the central processing unit 104 executing the sound effect generation instructions 208 performs the transmission at the instant T0, with the audio data FX which, firstly, represent a sound effect at the pitch P and, secondly, stem from the audio data FXi selected in accordance with the genre of the ambient music.
[0047] Thus, the generated sound effect is harmoniously incorporated into the ambient music on several levels: on a rhythmic level as a result of the transmission instant T0, on a melodic level as a result of the pitch P of said sound effect, and on a stylistic level as a result of the selection of the audio data FXi matching the genre of the ambient music. The method 300 then returns to Steps 306 and 308.
[0048] Figure 4 is a block diagram illustrating an internal architecture of an example of a computing device, as discussed in data processing system 100 of Figures 1-3, in
accordance with one or more embodiments of the present disclosure.
[0049] A computing device as referred to herein refers to any device with a processor capable of executing logic or coded instructions, and could be, as understood in context, a server, personal computer, game console, set top box, smart phone, pad/tablet computer or media device, to name a few such devices.
[0050] As shown in the example of Fig. 4, internal architecture 400 includes one or more processing units (also referred to herein as CPUs) 412, which interface with at least one computer bus 402. Also interfacing with computer bus 402 are persistent storage medium/media 406, network interface 414, memory 404, e.g., random access memory (RAM), run-time transient memory, read only memory (ROM), etc., media disk drive interface 408 as an interface for a drive that can read and/or write to media including removable media such as floppy disks, CD-ROMs, DVDs, etc., display interface 410 as interface for a monitor or other display device, keyboard interface 416 as interface for a keyboard, pointing device interface 418 as an interface for a mouse or other pointing device, and miscellaneous other interfaces not shown individually, such as parallel and serial port interfaces, a universal serial bus (USB) interface, and the like.

[0051] Memory 404 interfaces with computer bus 402 so as to provide information stored in memory 404 to CPU 412 during execution of software programs such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer executable process steps, incorporating functionality described herein, e.g., one or more of the process flows described herein. CPU 412 first loads computer executable process steps from storage, e.g., memory 404, storage medium/media 406, removable media drive, and/or other storage device. CPU 412 can then execute the stored process steps in order to execute the loaded computer-executable process steps. Stored data, e.g., data stored by a storage device, can be accessed by CPU 412 during the execution of computer-executable process steps.
[0052] Persistent storage medium/media 406 is a computer readable storage medium(s) that can be used to store software and data, e.g., an operating system and one or more application programs. Persistent storage medium / media 406 can also be used to store device drivers, such as one or more of a digital camera driver, monitor driver, printer driver, scanner driver, or other device drivers, web pages, content files, playlists and other files. Persistent storage medium / media 406 can further include program modules and data files used to implement one or more embodiments of the present disclosure.
[0053] Thus, from the above discussion, it is clear that a computer program 200 and a method 300 as described above allow harmonious incorporation of sound effects into any kind of ambient music chosen by a user, or even predefined by the game software.
[0054] Those skilled in the art will recognize that the methods and systems of the present disclosure may be implemented in many manners and as such are not to be limited by the foregoing exemplary embodiments and examples. In other words, functional elements may be performed by single or multiple components, in various combinations of hardware and software or firmware, and individual functions may be distributed among software applications at either the client or server or both.

[0055] Thus, for example, the system can be composed of a games console, an input for music, and an input for introducing the game into the console, the console being provided so as to implement the whole of the method. The input for the music may be a USB port or a digital disk reader.
[0056] In this regard, any number of the features of the different embodiments described herein may be combined into single or multiple embodiments, and alternate embodiments having fewer than, or more than, all of the features described herein are possible.
Functionality may also be, in whole or in part, distributed among multiple components, in manners now known or to become known. Thus, myriad software/hardware/firmware combinations are possible in achieving the functions, features, interfaces and preferences described herein. Moreover, the scope of the present disclosure covers conventionally known manners for carrying out the described features and functions and interfaces, as well as those variations and modifications that may be made to the hardware or software or firmware components described herein as would be understood by those skilled in the art now and hereafter.
[0057] In particular, the saved instances of the sound effect audio data could be associated with pitches outside of execution of the game software, either automatically (with software analysis) during development of the game or by the musicians or sound engineers themselves. In this case, Step 305 of the method 300 in Figure 3 may be unnecessary.
[0058] Furthermore, the sound effect audio data could be adapted not only to suit the possible musical genres of the ambient music but also to suit possible keys of the ambient music. For example, the sound effect audio data could be adapted to suit the twenty-four keys corresponding to the twelve possible tonics and to the two possible modes as discussed above. Thus, each saved instance of audio data would be associated, in addition to a genre, with a tonic and with a mode. According to some embodiments, the sound effect generation instructions 208 would be designed to select, from among the default audio data, those which are associated not only with the musical genre of the ambient music but also with the key thereof. Step 322 of the method in Figure 3 would be adapted as a result. Furthermore, it would no longer be necessary to analyze the sound effect audio data in order to determine the pitch thereof, nor to modify them in order to transpose said pitch, so that Steps 305 and 324 of the method in Figure 3 may be unnecessary.
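In the variant described in this paragraph, where each saved sample is annotated in advance with a genre and a key, sound-effect selection reduces to a lookup. The sketch below illustrates this; the bank contents and file names are hypothetical.

```python
# Hypothetical pre-annotated bank: (genre, tonic, mode) -> sample name.
BANK = {
    ("rock", "C", "major"): "fx_rock_c_major.wav",
    ("rock", "A", "minor"): "fx_rock_a_minor.wav",
    ("jazz", "C", "major"): "fx_jazz_c_major.wav",
}

def select_effect(genre, tonic, mode):
    """Pick the saved sample matching both the genre and the key of
    the ambient music; no pitch analysis or transposition is needed.
    Returns None when the bank holds no matching sample."""
    return BANK.get((genre, tonic, mode))
```

With such a bank, the analysis of Step 305 and the transposition of Step 324 are skipped entirely, at the cost of authoring one sample per supported (genre, key) pair.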
[0059] Furthermore, the sound effect generation instructions 208 could be designed to synthesize the sound effect, that is, to provide the audio data corresponding to said sound effect on the basis of sound synthesis taking account of the characteristics of the ambient music which are determined by the instructions 206, particularly the characteristics KEY, GENRE and BEAT. There would thus no longer be any need for sound effects to be saved, nor for the analysis instructions 204 illustrated in Figure 2. By way of example, the sound synthesis could comprise, firstly, a software synthesizer having a certain number of modifiable parameters (for example the fundamental frequency or the waveform of an oscillator, or else the cutoff frequency of a filter) and, secondly, a set of relationships, defined by mathematical expressions, between the parameters of the software synthesizer and the characteristics of the ambient music. Thus, Steps 322 and 324 of the method in Figure 3 would be replaced by a step involving determination of the parameters of the software synthesizer from, firstly, the characteristics of the ambient music KEY and GENRE and, secondly, the defined relationships, and by a step involving implementation of the software synthesizer with the determined parameters so that it synthesizes sound effect audio data.
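A set of relationships between ambient music characteristics and synthesizer parameters, as described in this paragraph, can be sketched as a simple mapping. All parameter names, genre labels and the mapping rules below are illustrative assumptions, not the claimed relationships.

```python
# Illustrative tonic frequencies in hertz (equal temperament, octave 4);
# a full implementation would cover all twelve tonics.
NOTE_FREQS = {"C": 261.63, "G": 392.00}

def synth_params(key_tonic, genre, beat_period):
    """Derive software-synthesizer settings from the ambient music
    characteristics KEY, GENRE and BEAT (hypothetical relationships)."""
    return {
        # Tune the oscillator to the tonic of the detected key.
        "oscillator_hz": NOTE_FREQS[key_tonic],
        # Choose a waveform according to the detected genre.
        "waveform": "square" if genre == "electronic" else "sine",
        # Let the filter envelope decay within one beat so the effect
        # stays inside the rhythmic grid of the ambient music.
        "filter_decay_s": beat_period,
    }
```

The synthesizer would then be run with these parameters at the transmission instant T0 to produce the sound effect audio data directly.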
[0060] While the system and method have been described in terms of one or more embodiments, it is to be understood that the disclosure need not be limited to the disclosed embodiments. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structures.

Claims

What is claimed is:
1. Method (300) for generating a sound effect in a piece of game software (200), involving:
- in response to a request (R) for emission of a sound effect from the game software (200), performing (326) transmission of audio data (FX) representing a sound effect to a sound reproduction device (118, 120),
characterized in that it furthermore involves:
- analyzing (310) audio data (MUS) representing music in the course of reproduction, called ambient music, in order to determine at least one characteristic (BEAT, GENRE, KEY) of the ambient music,
- defining (320) at least one characteristic (T, FXi, P) of the transmission from the at least one characteristic (BEAT, GENRE, KEY) of the ambient music.
2. Method according to claim 1, involving, firstly, in order to analyze (310) audio data (MUS) representing the ambient music in order to determine the at least one characteristic (BEAT, GENRE, KEY) of the ambient music:
- analyzing the audio data (MUS) representing the ambient music in order to determine instants (BEAT) at which the ambient music has a rhythmic beat,
and, secondly, in order to determine (320) the at least one characteristic (T, FXi, P) of the transmission from the at least one characteristic (BEAT, GENRE, KEY) of the ambient music:
- defining an instant (T) at which the transmission starts from the instants (BEAT) at which the ambient music has a rhythmic beat.
3. Method according to claim 2, involving, in order to determine the instant (T) at which the transmission starts from the instants (BEAT) at which the music has a rhythmic beat:
- defining as the instant (T) at which the transmission starts an instant that follows the last instant at which the music has a rhythmic beat by an integer number of times the average time interval separating the instants (BEAT) at which the music has a rhythmic beat, preferably once this average time interval.
4. Method according to one of claims 1 to 3, involving, firstly, in order to analyze (310) the audio data (MUS) representing the ambient music in order to determine the at least one characteristic (BEAT, GENRE, KEY) of the ambient music:
- analyzing the audio data (MUS) representing the ambient music in order to determine a musical genre (GENRE) for the ambient music,
and, secondly, in order to define (320) the at least one characteristic of the transmission (T, FXi, P) from the at least one characteristic (BEAT, GENRE, KEY) of the ambient music:
- selecting, from among several audio data (FXA, FXB, FXC) associated with different musical genres (GA, GB, GC), the audio data (FXi) which are associated with the genre (GENRE) of the ambient music,
and in which the audio data (FX) for the transmission stem from the selected audio data (FXi).
5. Method according to one of claims 1 to 4, involving, firstly, in order to analyze (310) the audio data (MUS) representing the ambient music in order to determine the at least one characteristic (BEAT, GENRE, KEY) of the ambient music:
- analyzing the audio data (MUS) representing the ambient music in order to determine a key (KEY) for the ambient music, and, secondly, in order to determine (320) the at least one characteristic (T, FXi, P) of the transmission from the at least one characteristic (BEAT, GENRE, KEY) of the ambient music:
- determining a desired pitch (P) from the determined key (KEY).
6. Method according to claim 5, involving, in order to analyze the audio data (MUS) representing the ambient music in order to determine a key (KEY) for the ambient music:
- analyzing the audio data (MUS) representing the ambient music in order to determine a bass line and a melody line for the ambient music,
- determining the key (KEY) of the ambient music from the bass line and the melody line that have been determined.
7. Method according to claim 5 or 6, furthermore involving:
- recovering (322) audio data (FXi) representing a sound effect having a certain pitch (Pi),
- modifying the recovered audio data (FXi) so that the sound effect that they represent has the desired pitch (P),
and in which the audio data (FX) of the transmission stem from the audio data that have been modified in this manner.
8. Method according to claim 1, 2 or 3, furthermore involving:
- determining parameters of a software synthesizer from, firstly, the at least one characteristic (GENRE, KEY) of the ambient music and, secondly, defined relationships, and
- implementing the software synthesizer with the determined parameters so that it synthesizes sound effect audio data,
and in which the audio data (FX) of the transmission stem from the audio data that have been synthesized in this manner.
9. Computer program (200) having instructions (204, 206, 208) which, when executed by a computer, prompt the implementation of a method according to any one of claims 1 to 8 by said computer.
10. Data processing system (100) having:
- a sound reproduction device (118, 120),
- a storage device (106) on which a computer program (200) according to claim 9 has been saved, and
- a central processing unit (104) for executing the instructions of the computer program (200).
PCT/IB2011/003221 2011-04-12 2011-10-12 Method for generating a sound effect in a piece of game software, associated computer program and data processing system for executing instructions of the computer program WO2012140468A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/264,189 US20140128160A1 (en) 2011-04-12 2011-10-12 Method and system for generating a sound effect in a piece of game software

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1153197A FR2974226A1 (en) 2011-04-12 2011-04-12 METHOD FOR GENERATING SOUND EFFECT IN GAME SOFTWARE, ASSOCIATED COMPUTER PROGRAM, AND COMPUTER SYSTEM FOR EXECUTING COMPUTER PROGRAM INSTRUCTIONS.
FR11/53197 2011-04-12

Publications (1)

Publication Number Publication Date
WO2012140468A1 true WO2012140468A1 (en) 2012-10-18

Family

ID=45558781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2011/003221 WO2012140468A1 (en) 2011-04-12 2011-10-12 Method for generating a sound effect in a piece of game software, associated computer program and data processing system for executing instructions of the computer program

Country Status (3)

Country Link
US (1) US20140128160A1 (en)
FR (1) FR2974226A1 (en)
WO (1) WO2012140468A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106935236A (en) * 2017-02-14 2017-07-07 复旦大学 A kind of piano performance appraisal procedure and system
US10453434B1 (en) 2017-05-16 2019-10-22 John William Byrd System for synthesizing sounds from prototypes

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9192857B2 (en) * 2013-07-23 2015-11-24 Igt Beat synchronization in a game
WO2016153004A1 (en) 2015-03-25 2016-09-29 株式会社タクミナ Non-return valve and valve body
US9947170B2 (en) 2015-09-28 2018-04-17 Igt Time synchronization of gaming machines
JP7287826B2 (en) * 2019-04-22 2023-06-06 任天堂株式会社 Speech processing program, speech processing system, speech processing device, and speech processing method
WO2020263073A1 (en) * 2019-06-28 2020-12-30 Ciscomani Davila Geovani Francesco Two-way device for measuring electricity consumption with anti-theft system for monitoring an alternative energy source
CN112863466B (en) * 2021-01-07 2024-05-31 广州欢城文化传媒有限公司 Audio social interaction method and device
EP4105924B1 (en) * 2021-06-15 2024-04-24 Lemon Inc. System and method for selecting points in a music and audio signal for placement of sound effect
US20230128812A1 (en) * 2021-10-21 2023-04-27 Universal International Music B.V. Generating tonally compatible, synchronized neural beats for digital audio files

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1229513A2 (en) * 2001-01-22 2002-08-07 Sega Corporation Audio signal outputting method and BGM generation method
US20030070538A1 (en) * 2001-10-11 2003-04-17 Keiichi Sugiyama Audio signal outputting method, audio signal reproduction method, and computer program product
US20040235564A1 (en) * 2003-05-20 2004-11-25 Turbine Entertainment Software Corporation System and method for enhancing the experience of participant in a massively multiplayer game
GB2425730A (en) * 2005-05-03 2006-11-08 Codemasters Software Co Rhythm action game
WO2009036564A1 (en) * 2007-09-21 2009-03-26 The University Of Western Ontario A flexible music composition engine
US7674966B1 (en) * 2004-05-21 2010-03-09 Pierce Steven M System and method for realtime scoring of games and other applications
WO2010142297A2 (en) * 2009-06-12 2010-12-16 Jam Origin Aps Generative audio matching game system


Also Published As

Publication number Publication date
US20140128160A1 (en) 2014-05-08
FR2974226A1 (en) 2012-10-19


Legal Events

Date Code Title Description
WWE WIPO information: entry into national phase (Ref document number: 13264189; Country of ref document: US)
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 11815573; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN EP: public notification in the EP Bulletin, as the address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.02.2014))
122 EP: PCT application non-entry into the European phase (Ref document number: 11815573; Country of ref document: EP; Kind code of ref document: A1)