US20090259326A1 - Server side audio file beat mixing - Google Patents
Server side audio file beat mixing
- Publication number
- US20090259326A1 (application Ser. No. 12/424,503)
- Authority
- US
- United States
- Prior art keywords
- audio
- audio files
- mixing
- song
- mixed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N7/17309—Transmission or handling of upstream communications
- H04N7/17318—Direct or substantially direct transmission and handling of requests
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2353—Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/26616—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel for merging a unicast channel into a multicast channel, e.g. in a VOD application, when a client served by unicast channel catches up a multicast channel to save bandwidth
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47202—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/64—Addressing
- H04N21/6408—Unicasting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8106—Monomedia components thereof involving special audio data, e.g. different tracks for different languages
- H04N21/8113—Monomedia components thereof involving special audio data, e.g. different tracks for different languages comprising music, e.g. song in MP3 format
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8166—Monomedia components thereof involving executable data, e.g. software
- H04N21/8193—Monomedia components thereof involving executable data, e.g. software dedicated tools, e.g. video decoder software or IPMP tool
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/061—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/076—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of timing, tempo; Beat detection
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
- G10H2240/135—Library retrieval index, i.e. using an indexing scheme to efficiently retrieve a music piece
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/025—Envelope processing of music signals in, e.g. time domain, transform domain or cepstrum domain
- G10H2250/035—Crossfade, i.e. time domain amplitude envelope control of the transition between musical sounds or melodies, obtained for musical purposes, e.g. for ADSR tone generation, articulations, medley, remix
Definitions
- the present invention relates to server-side audio file beat mixing.
- a device for audio file beat mixing may include a website, a plurality of audio files, an audio processing server, audio mixing software (i.e., sound mixing engine), at least one audio processor and an audio encoder.
- the website may be the front end of the invention where a client could create a mixed audio file including a custom play list of recordings that they desire to mix.
- a database of mix-ready audio files (e.g., songs) is provided with corresponding Marker Time Stamp information—a collection of songs that have been tempo adjusted to one or more “base tempos.”
- Base tempos are starting tempos of songs that are to be mixed. For example, a client might choose 128 beats per minute (BPM). The mix would select from songs that have a base tempo of 128 BPM.
- FIG. 1 is a simplified schematic illustrating an example of an embodiment of a breakdown of a song 100; each song is formatted to have a Part 1 108, Part 2 110, and Part 3 112.
- the example shown in FIG. 1 is based on a 4 minute song with a tempo of 120 beats per minute at a 44.1 kHz sample rate. Therefore, the total song file length, including silence at the beginning and end of the song file, is 10,584,000 samples (240 s × 44,100 samples/s).
- Part 1 108 consists of a set number of musical beats (64 beats “intro” at 120 beats per minute equal to 1,411,200 samples for this example).
- the range of Part 1 108 is tagged by Marker A 101 (the start of beat 1 of the range) and Marker B 102 (the end of beat 64).
- Marker B is 64 beats after Marker A.
- the end of the 64 count intro is referred to as Part 1 .
- Part 2 110 is the core sequence of sounds and beats encapsulating the essence or core of the song (can be of any length).
- Part 3 112 is the final substantive section of the song. Like Part 1, Part 3 112 contains an equal number of beats at the same tempo (a 64-beat end-of-song “outro” at 120 beats per minute, equal to 1,411,200 samples in this example). The range is tagged by Marker C 104, located at a time stamp of “X” minus 64 beats, where “X” is the time stamp of the end of the final 64th beat of the 64-beat Part 3 112 section, and Marker D 106 is the time stamp value of “X.” Marker D marks the end of the 64-count outro; a short crash and delay may follow for a number of seconds.
- Markers 101, 102, 104 and 106 are represented in Bit Samples as Time Stamps, as further explained hereunder and in FIG. 1.
- the database has, associated with each song, a record of the bit sample time stamp address (referred to as the “time stamp” throughout) for the start and end points of Part 1 (Markers A and B), Part 2 110, and Part 3 (Markers C and D), so that in the mixing process the “Parts” of a song can be accessed when needed.
- the example Song has a Marker A 101 Time Stamp of 88,200, a Marker B 102 Time Stamp of 1,499,400, a Marker C 104 Time Stamp of 7,585,200, and a Marker D 106 Time Stamp of 10,407,600.
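The time-stamp arithmetic above can be checked with a short sketch (Python used for illustration; the function and variable names are our own, not from the patent):

```python
SAMPLE_RATE = 44_100   # Hz, per the FIG. 1 example
TEMPO_BPM = 120        # base tempo of the example song
SAMPLES_PER_BEAT = SAMPLE_RATE * 60 // TEMPO_BPM  # 22,050 samples per beat

def intro_markers(marker_a: int, beats: int = 64) -> tuple[int, int]:
    """Given Marker A (time stamp of beat 1 of the intro, in samples),
    return (Marker A, Marker B), where Marker B falls `beats` beats later."""
    return marker_a, marker_a + beats * SAMPLES_PER_BEAT

# Marker A at 88,200 samples (2 seconds of leading silence) reproduces
# the FIG. 1 time stamps.
print(intro_markers(88_200))  # (88200, 1499400)
```

The outro markers follow the same per-beat arithmetic, anchored relative to the end of the song file rather than its start.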
- the audio processing server may include, for example, a computer server that processes the bit-by-bit mixing and processing of one song with another in a virtual multi-track environment.
- the audio processing server may also be used to convert the mixed audio file to a compressed format for delivery or pickup by a customer.
- Mixing software may be used to combine the audio files (e.g., songs) and other sounds (e.g., an Audio Bridge), apply audio processors, and convert the resulting file to a new audio format.
- Audio processors may be used to adjust a number of audio attributes including, but not limited to: amplitude (volume) of the incoming signal; frequency response (EQ) of incoming signals; limiting or compression of the signal to reduce or eliminate distortion; phase shifting to remove any “phase cancellation” resulting from the exact placement of similar beats over top of similar beats; automated stereo panning envelopes that adjust the left-right stereo image of one or more separate stereo tracks to provide special effects; and time compression or expansion algorithms to adjust the speed of the audio file on a fixed or gliding/gradual basis.
- An MP3 encoder or other encoder may be used to convert the mixed audio file to a compressed audio format suitable for quick download by a user of the service.
- FIG. 1 is a simplified diagram illustrating an embodiment of a method of breaking down a song according to embodiments of the present invention
- FIGS. 2A to 2C are simplified diagrams illustrating an embodiment of a method of forming a mixed audio file according to embodiments of the present invention.
- FIGS. 3A and 3B are simplified diagrams illustrating an embodiment of a method of forming a mixed audio file according to embodiments of the present invention.
- Step 1: A Project play list including at least one song is created and sent to the mixing software to create a mixed audio file template.
- Step 2: The mixer software compiles song-related data from the database (e.g., Time Stamps of Marker Points, total number of samples in a song) for each song selected for mixing as part of the mixed audio file and translates this into an instruction list that is further processed by the software. Specifically, a map is created by building a time line (in samples) of the entire audio file: the length of Part 1 of the first song is added to Part 2 of each subsequent song, followed by Part 3 of the last song, to determine the overall length of the mixed audio file. Time Stamp Location points of each Mix Region, defined below, are stored on the server to be accessed during the mix process so that Audio Processes can be applied to the Mix Region in real time during the re-sampling process.
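Step 2's length rule can be sketched as follows (illustrative Python, reading the rule literally; the play-list data and names are hypothetical, with per-song part lengths assumed to come from the database):

```python
def mixed_file_length(songs: list[dict]) -> int:
    """Overall time-line length, in samples, per Step 2: Part 1 of the
    first song, plus Part 2 of each subsequent song, plus Part 3 of the
    last song.  Each dict holds the part lengths of one song."""
    total = songs[0]["part1"]
    total += sum(song["part2"] for song in songs[1:])
    total += songs[-1]["part3"]
    return total

# Hypothetical three-song play list; intros/outros are 64 beats
# (1,411,200 samples at 120 BPM), Part 2 lengths vary per song.
playlist = [
    {"part1": 1_411_200, "part2": 6_000_000, "part3": 1_411_200},
    {"part1": 1_411_200, "part2": 5_500_000, "part3": 1_411_200},
    {"part1": 1_411_200, "part2": 7_000_000, "part3": 1_411_200},
]
print(mixed_file_length(playlist))  # 15322400
```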
- a Mix Region is defined as the range of time in the time line where two songs are being combined to create a blended mix of two songs within a mixed audio file, similar to DJ mixing.
- FIG. 2A is a simplified schematic diagram illustrating the position of tracks and an audio bridge in time. As shown in FIG. 2A, the overall start point of each Mix Region is the point where Marker C 208 of Song 1 is aligned with Marker A 202 of Song 2. The end point of a Mix Region is where Marker D 210 of Song 1 meets Marker B 204 of Song 2.
- the Mix Region is illustrated in FIG. 2B and may include EQ filtering, amplitude adjustments/cross-fades, and stereo imaging effects.
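The marker alignment that defines a Mix Region can be sketched as follows (illustrative Python; the names are ours, and we take the region's end to be Song 2's Marker B, which coincides with Song 1's Marker D when both ranges span the same 64 beats at the same tempo):

```python
def place_song2(song1_c: int, song2_a: int, song2_b: int) -> tuple[int, int, int]:
    """Align Song 2's Marker A with Song 1's Marker C on the master time
    line (Song 1 is assumed to start at address 0).  Returns the offset
    added to every Song 2 address, plus the Mix Region's (start, end):
    start is the aligned Marker C / Marker A point, end is Song 2's
    Marker B on the master time line."""
    offset = song1_c - song2_a
    mix_start = song1_c
    mix_end = song2_b + offset
    return offset, mix_start, mix_end

# Using the FIG. 1 example time stamps for both songs:
print(place_song2(song1_c=7_585_200, song2_a=88_200, song2_b=1_499_400))
# (7497000, 7585200, 8996400)
```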
- the first significant audio mixing point begins at Marker C of Song 1, the first track in a Project. At this point, or even slightly prior to it, audio bits from Marker A of Song 2 are mixed with Track A while a series of audio processors are applied for the length of the 64-beat mix, up to and through the end point of the Mix Region where Marker D of Song 1 overlays Marker B of Song 2.
- just before the end point of the Mix Region, where Marker D and Marker B are aligned, a short 16-count “Audio Bridge” is overlaid in the mix instructions to help transition from one song to the next.
- FIG. 3A is a simplified schematic 300 illustrating a method of forming a mixed file by feeding multiple tracks into a mixer with Mix Regions and Audio Bridges.
- audio data is combined bit by bit from the start of Song 1 through the end of Song 3 .
- the first significant audio mixing point begins at Marker C 306 of Song 1 .
- audio bits from Marker A 310 of Song 2 are mixed with Song 1 while a series of audio processors are applied for the length of the 64-beat mix, up to and through the end point of the Mix Region where Marker D 308 of Song 1 overlays Marker B 312 of Song 2.
- FIG. 3B is a simplified schematic illustrating the final mixed file created using the method illustrated in FIG. 3A.
- the Audio Bridge is simply a sound file that, when layered over the file at the end of a “Mix Region,” helps smooth out any noticeable or abrupt transitions from one Song to another, commonly experienced when two songs of different production styles are mixed.
- An Audio Bridge would have one Marker of note, Marker X.
- Marker X is the ninth beat in a 16-count bridge, but since the audio bridge is often non-rhythmic, it can be of any length and the “X” position can be set by the peak in amplitude of the segment.
- the sound prior to the ninth beat (Marker X) would normally increase in amplitude or volume, while the sounds after the peak at Marker X would normally decrease in volume to fade out by the end of the 16-count bridge, as shown in FIGS. 2A, 3A and 3B.
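The amplitude shape described for the Audio Bridge can be sketched as a simple gain envelope (illustrative Python; a real bridge is an audio file, and the linear ramps here are our assumption, not the patent's):

```python
def bridge_gain(pos: int, marker_x: int, length: int) -> float:
    """Gain envelope for an Audio Bridge of `length` units: amplitude
    rises linearly to its peak (1.0) at Marker X, then falls back to
    silence (0.0) by the end of the bridge."""
    if pos <= marker_x:
        return pos / marker_x
    return max(0.0, 1.0 - (pos - marker_x) / (length - marker_x))

# Peak at Marker X (beat 9 of a 16-count bridge), silent at both ends:
print(bridge_gain(0, 9, 16), bridge_gain(9, 9, 16), bridge_gain(16, 9, 16))
# 0.0 1.0 0.0
```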
- Step 4: Once the entire mixed audio file has been processed or mixed, a Time Compression/Expansion process may be called to change the tempo of the mixed audio file from its base tempo (e.g., 128 BPM) to any flat tempo or to a gliding tempo profile selected during the mixed audio file creation process in Step 1.
- a mixed audio file can be gradually shifted from the base tempo to a user-defined or preset tempo higher or lower than the base tempo, or the entire mixed audio file can be shifted up or down in tempo uniformly. This step can also be accomplished during the real-time processing of the audio mixing.
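A gliding tempo profile amounts to a per-block speed-ratio schedule. The sketch below (illustrative Python; the function is ours) computes such a schedule; the patent does not specify the time-stretching algorithm itself, which in practice would be a pitch-preserving method such as a phase vocoder or WSOLA:

```python
def gliding_ratios(base_bpm: float, target_bpm: float, blocks: int) -> list[float]:
    """Per-block playback-speed ratios for a linear tempo glide from
    base_bpm to target_bpm over `blocks` processing blocks.  A ratio of
    1.1 means that block is time-compressed to play 10% faster; each
    ratio would be handed to a pitch-preserving time-stretcher."""
    if blocks == 1:
        return [target_bpm / base_bpm]
    step = (target_bpm - base_bpm) / (blocks - 1)
    return [(base_bpm + i * step) / base_bpm for i in range(blocks)]

# Glide a 128 BPM mix up to 140 BPM across 4 blocks:
print(gliding_ratios(128, 140, 4))  # [1.0, 1.03125, 1.0625, 1.09375]
```

A flat tempo change is the degenerate case where every block gets the same ratio.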
- Step 5: The mixed audio file may be converted to a new compressed format and posted for the customer to download.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Library & Information Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- Electrophonic Musical Instruments (AREA)
Description
- This application is a utility conversion of U.S. Provisional Patent Application Ser. No. 61/045,186, filed Apr. 15, 2008, for “Server-Side Audio File Beat Mixing (SSAFBM).”
- The full 64-count intro begins directly following any ambient, non-rhythmic sounds prior to the first beat (Marker A).
- MIX REGION PROCESSING detailed: The mixer would set up a virtual multi-track workspace equal to the number of songs in a mixed audio file and the number of Audio Bridges required (FIG. 3). Multiple tracks of silence and audio data are combined bit by bit from the start of Song 1 through the end of Song Y (where Y is the number of songs in a Project).
- At Marker C 314 of Song 2, or even slightly prior to this point, audio bits from Marker A 318 of Song 3 are mixed with Song 2 while a series of audio processors are applied for the length of the 64-beat mix, up to and through the end point of the Mix Region where Marker D 316 of Song 2 overlays Marker B 320 of Song 3. Just before that end point, a short 16-count “Audio Bridge” (see FIG. 2C) may be overlaid in the mix instructions to help transition from one song to the next.
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/424,503 US9014831B2 (en) | 2008-04-15 | 2009-04-15 | Server side audio file beat mixing |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US4518608P | 2008-04-15 | 2008-04-15 | |
US12/424,503 US9014831B2 (en) | 2008-04-15 | 2009-04-15 | Server side audio file beat mixing |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US61045186 | Continuation | 2008-04-15 | |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090259326A1 true US20090259326A1 (en) | 2009-10-15 |
US9014831B2 US9014831B2 (en) | 2015-04-21 |
Family ID: 41164637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/424,503 Expired - Fee Related US9014831B2 (en) | 2008-04-15 | 2009-04-15 | Server side audio file beat mixing |
Country Status (1)
Country | Link |
---|---|
US (1) | US9014831B2 (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010039872A1 (en) * | 2000-05-11 | 2001-11-15 | Cliff David Trevor | Automatic compilation of songs |
US20030183064A1 (en) * | 2002-03-28 | 2003-10-02 | Shteyn Eugene | Media player with "DJ" mode |
US20050144016A1 (en) * | 2003-12-03 | 2005-06-30 | Christopher Hewitt | Method, software and apparatus for creating audio compositions |
US20090272253A1 (en) * | 2005-12-09 | 2009-11-05 | Sony Corporation | Music edit device and music edit method |
US20080314232A1 (en) * | 2007-06-25 | 2008-12-25 | Sony Ericsson Mobile Communications Ab | System and method for automatically beat mixing a plurality of songs using an electronic equipment |
US20090049979A1 (en) * | 2007-08-21 | 2009-02-26 | Naik Devang K | Method for Creating a Beat-Synchronized Media Mix |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110011245A1 (en) * | 2009-07-20 | 2011-01-20 | Apple Inc. | Time compression/expansion of selected audio segments in an audio file |
US8153882B2 (en) * | 2009-07-20 | 2012-04-10 | Apple Inc. | Time compression/expansion of selected audio segments in an audio file |
US20120180619A1 (en) * | 2009-07-20 | 2012-07-19 | Apple Inc. | Time compression/expansion of selected audio segments in an audio file |
US8415549B2 (en) * | 2009-07-20 | 2013-04-09 | Apple Inc. | Time compression/expansion of selected audio segments in an audio file |
US20110257772A1 (en) * | 2010-04-15 | 2011-10-20 | William Kerber | Remote Server System for Combining Audio Files and for Managing Combined Audio Files for Downloading by Local Systems |
US9312969B2 (en) * | 2010-04-15 | 2016-04-12 | North Eleven Limited | Remote server system for combining audio files and for managing combined audio files for downloading by local systems |
WO2012089313A1 (en) * | 2010-12-30 | 2012-07-05 | Dolby International Ab | Song transition effects for browsing |
US20130282388A1 (en) * | 2010-12-30 | 2013-10-24 | Dolby International Ab | Song transition effects for browsing |
US9326082B2 (en) * | 2010-12-30 | 2016-04-26 | Dolby International Ab | Song transition effects for browsing |
US20130117248A1 (en) * | 2011-11-07 | 2013-05-09 | International Business Machines Corporation | Adaptive media file rewind |
US9483110B2 (en) * | 2011-11-07 | 2016-11-01 | International Business Machines Corporation | Adaptive media file rewind |
Also Published As
Publication number | Publication date |
---|---|
US9014831B2 (en) | 2015-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6462039B2 (en) | DJ stem system and method | |
JP5243042B2 (en) | Music editing apparatus and music editing method | |
US7973230B2 (en) | Methods and systems for providing real-time feedback for karaoke | |
JP5259083B2 (en) | Mashup data distribution method, mashup method, mashup data server device, and mashup device | |
US7584218B2 (en) | Method and apparatus for attaching metadata | |
CA2477697C (en) | Methods and apparatus for use in sound replacement with automatic synchronization to images | |
US11721312B2 (en) | System, method, and non-transitory computer-readable storage medium for collaborating on a musical composition over a communication network | |
US20140157970A1 (en) | Mobile Music Remixing | |
US20100064882A1 (en) | Mashup data file, mashup apparatus, and content creation method | |
WO2017035471A1 (en) | Looping audio-visual file generation based on audio and video analysis | |
EP2562758A3 (en) | Reproduction device, reproduction method, and program | |
US9014831B2 (en) | Server side audio file beat mixing | |
JP2014520352A (en) | Enhanced media recording and playback | |
JP2007025570A (en) | Karaoke sound-recording and editing device performing cut and paste editing on the basis of lyrics character | |
CN108564973A (en) | A kind of audio file play method and device | |
JP2003255956A (en) | Music providing method and its system, and music production system | |
JP2003241770A (en) | Method and device for providing contents through network and method and device for acquiring contents | |
Petelin et al. | Cool Edit Pro2 in Use | |
JP2005300739A (en) | Device for editing musical performance data | |
JP2005017706A (en) | System and method for sound recording | |
EP2181381A1 (en) | A user interface for handling dj functions | |
TW200426779A (en) | MIDI playing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CASSANOVA GROUP, LLC, DISTRICT OF COLUMBIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PIPITONE, MICHAEL JOSEPH;LEE, JAROM ROGER;BABBITT, MICHAEL DAREN;AND OTHERS;SIGNING DATES FROM 20090625 TO 20090626;REEL/FRAME:022920/0117 Owner name: CASSANOVA GROUP, LLC, DISTRICT OF COLUMBIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PIPITONE, MICHAEL JOSEPH;LEE, JAROM ROGER;BABBITT, MICHAEL DAREN;AND OTHERS;REEL/FRAME:022920/0117;SIGNING DATES FROM 20090625 TO 20090626 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20230421 |