EP2659483A1 - Song transition effects for browsing - Google Patents

Song transition effects for browsing

Info

Publication number
EP2659483A1
Authority
EP
European Patent Office
Prior art keywords
audio
segment
entry
transition effect
exit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP11808580.2A
Other languages
German (de)
French (fr)
Other versions
EP2659483B1 (en)
Inventor
Jonas Engdegard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dolby International AB
Original Assignee
Dolby International AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dolby International AB
Publication of EP2659483A1
Application granted
Publication of EP2659483B1
Current legal status: Not-in-force
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0008 Associated control or indicating means
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/038 Cross-faders therefor
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101 Music Composition or musical creation; Tools or processes therefor
    • G10H2210/125 Medley, i.e. linking parts of different musical pieces in one single piece, e.g. sound collage, DJ mix
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101 Music Composition or musical creation; Tools or processes therefor
    • G10H2210/131 Morphing, i.e. transformation of a musical piece into a new different one, e.g. remix
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101 Music Composition or musical creation; Tools or processes therefor
    • G10H2210/131 Morphing, i.e. transformation of a musical piece into a new different one, e.g. remix
    • G10H2210/136 Morphing interpolation, i.e. interpolating in pitch, harmony or time, tempo or rhythm, between two different musical pieces, e.g. to produce a new musical work
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/02 Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • H04H60/04 Studio equipment; Interconnection of studios
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field

Definitions

  • the invention disclosed herein generally relates to audio signal processing. More precisely, it relates to methods and devices for transitioning between audio signals using transition effects capable of carrying directive information to the listener.
  • the transition effects may be in concord with the temporal structure of the audio content.
  • a first particular object is to propose transitions that can be used to provide useful guidance to the listener, especially directive transitions.
  • a second particular object is to propose transitions more in concord with the structure of the audio content than certain available transitions. For instance, it is desirable to enable such transitions to be positioned in an integrated fashion in view of the temporal structure of a piece of music.
  • the invention proposes methods and devices for transitioning between audio signals in accordance with the independent claims.
  • the dependent claims define advantageous embodiments of the invention.
  • an audio signal is a pure audio signal or an audio component of a video signal or other compound signal.
  • An audio signal in this sense may be in the form of a finite signal portion, such as a song, or may be a streaming signal, such as a radio channel.
  • An audio signal may be encoded in audio data, which may be arranged in accordance with a predetermined format which - in addition to waveform data, transform coefficients and the like - includes audio metadata useful for the playback of the signal. Metadata associated with a song that is distributed as a file may also be supplied from an external database.
  • a finite audio signal portion may be encoded as a computer file of a well-defined size, whereas a streaming signal may be distributed over a packet-switched communications network as a bitstream or distributed over an analogue or digital broadcast network.
  • a song may refer to a unit in which digital audio content is distributed and/or filed in a digital library. It may relate to a piece of vocal or non-vocal music, a segment of speech or other recorded or synthetic sounds.
  • One song in this sense may present an internal structure including at least one section, which may be a verse, refrain, chorus or the like.
  • the structure may be a temporal structure but may also correspond to a listener's perception of the song, the (first) occurrence of singing or a certain instrument, or to its semantic content. For instance, one may identify sections in a spoken radio program on the basis of the topics discussed, possibly in real time.
  • a plurality of songs in this sense may be grouped into an album (or record), and a plurality of albums may form a collection.
  • the adjective directive is sometimes used broadly as a synonym of 'guiding'. In some instances, it may also be employed in a narrower sense, where it may mean 'referring to a direction', 'associated with a direction' or the like.
  • a direction in this sense may be one-dimensional (the property of being forward/backward, upward/downward, positive/negative etc.) or many-dimensional, including spatial directions.
  • a method of providing directive transitions includes associating a first browsing direction, for transitioning from a current audio signal to a first alternative audio signal, with a first transition effect template; and associating a second browsing direction, for transitioning from a current audio signal to a second alternative audio signal, with a second transition effect template, which is perceptually different from the first transition effect template.
  • the method further includes playing a transition, in which an exit segment, extracted from the current audio signal, and an entry segment, extracted from the alternative audio signal, are mixed in accordance with the associated transition effect template. After this, the alternative audio signal is played from the end of the entry segment.
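The mechanics described in the two bullets above can be sketched as follows. This is a minimal illustration assuming a sample-array representation and simple gain-curve templates; the template names ("up"/"down"), the gain curves and the one-second transition length are invented for the example and are not prescribed by the claims.

```python
import numpy as np

# Two browsing directions mapped to perceptually different transition
# effect templates (assumed representation for this sketch).
TEMPLATES = {
    "up":   {"length_s": 1.0,
             "exit_gain": lambda t: 1.0 - t,         # linear fade-out
             "entry_gain": lambda t: t},             # linear fade-in
    "down": {"length_s": 1.0,
             "exit_gain": lambda t: (1.0 - t) ** 2,  # faster fade-out
             "entry_gain": lambda t: t ** 2},        # slower fade-in
}

def play_transition(current, alternative, direction, sr=44100):
    """Mix an exit segment extracted from `current` with an entry
    segment extracted from `alternative` according to the template
    associated with the browsing direction, then continue with the
    alternative signal from the end of the entry segment."""
    tpl = TEMPLATES[direction]
    n = int(tpl["length_s"] * sr)
    exit_seg = current[-n:]        # tail of the current signal
    entry_seg = alternative[:n]    # head of the alternative signal
    t = np.linspace(0.0, 1.0, n)
    mix = exit_seg * tpl["exit_gain"](t) + entry_seg * tpl["entry_gain"](t)
    return np.concatenate([mix, alternative[n:]])
```

Because the two templates shape the gains differently, a listener can tell from the transition itself which browsing direction was taken.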
  • This first aspect also relates to a decoder adapted to perform each step of the method.
  • This aspect achieves the object of providing useful guidance to a listener, since different browsing actions are associated with perceptually distinguishable transitions.
  • the templates are sufficiently different in objective terms that they are auditorily distinguishable.
  • the user is notified during the transition of whether the entity (e.g., a music player) playing the audio signal is effecting a transition in the first or second direction, which is an internal state of the entity. After the transition has been accomplished - possibly by combining the information on the browsing direction with knowledge of an ordered relationship between the available audio signals - the user is able to derive the identity of the alternative audio signal that the entity is playing.
  • the invention provides for automatic indications about conditions prevailing in the entity playing the audio signal.
  • This arrangement may help the listener to use a visual browsing interface for navigating among audio signals or may help replace such an interface by non-visual means.
  • Even where the transitions are not commanded by the listener, (s)he may receive useful information by hearing whether a transition takes place in a forward or a backward direction referring to some physical entity that is not necessarily related to the audio content as such, e.g., auditory traffic signals, elevator announcements and various applications in the entertainment field.
  • transitions that are based on mixing the material playing are likely to be perceived as more agreeable than, for instance, transitions involving a synthetic voice overlay for conveying the same information.
  • a transition effect template is used for generating a transition on the basis of the exit and entry segments.
  • the template may contain information regarding the length of each segment.
  • the template may further control how the segments are to be mixed, such as by specifying the power at which each segment is to be played back on the different channels, possibly in a time-dependent manner (e.g., fade in, fade out), by specifying effects to be applied (e.g., simulated motion, simulated Doppler effect, stereo regeneration, spectral band replication, reverberation) or by specifying predetermined content to be superposed (overlaid) on top of the signal generated from the entry and exit segments.
  • one or both transition effect templates may comprise a channel power distribution to be used for the entry and/or exit segments.
  • each transition effect template may include two channel power distributions, an exit channel distribution and an entry channel distribution, to be applied to the respective segments.
  • the channel power distribution may be time-invariant or time-variant, as will be explained below. It is particularly advantageous to include a time dependence when playback takes place over a single channel. Where several playback channels exist and a first transition effect template has been defined, a second transition effect template can be automatically generated by permuting the power distribution coefficients among the channels within each of the exit and entry channel power distributions.
  • the permutation may correspond to a reflection of the channels in one or more of these directions, e.g., by letting coefficients for right and left channels trade places but leaving the centre channel unchanged. This saves time for the designer, who may conveniently generate a large number of transition effect templates.
  • the symmetry may also have a self-explanatory effect on the listener, so that (s)he realizes that the first and second transition effect templates are related but different.
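The coefficient permutation described above can be sketched with a dict-based template representation using named channels; the representation itself is an illustrative assumption, not the patent's format.

```python
def mirror_template(power_dist):
    """Derive a second transition effect template from a first one by
    permuting the per-channel power coefficients: left and right trade
    places while the centre channel is left unchanged."""
    swap = {"L": "R", "R": "L", "C": "C"}
    return {swap[ch]: gain for ch, gain in power_dist.items()}

# exit-channel distribution of a first template, and its mirrored twin
first_exit = {"L": 0.9, "C": 0.5, "R": 0.1}
second_exit = mirror_template(first_exit)  # {"R": 0.9, "C": 0.5, "L": 0.1}
```

The same permutation would be applied to the entry-channel distribution, giving a full second template from the first at no design cost.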
  • a useful class of transition effect templates can be defined in connection with stereophonic playback over two or more channels, which is generally known to be able to create an illusion of locality, directivity or movement.
  • the first transition effect template is obtainable by simulating a movement of the audio source playing the exit segment or the entry segment in a first spatial direction relative to the intended listening point.
  • both the exit-segment audio source and the entry-segment audio source may be moving. This may entail using a time-dependent channel power distribution, creating a time-dependent time difference (or phase difference) between channels, or the like.
  • the second transition effect template may then correspond to a simulated movement of the same or, preferably, the other audio source in a second, different direction.
  • the first and second directions are perceptually distinguishable and may for example be opposite one another.
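One way to realize such perceptually opposite simulated movements is a time-varying constant-power pan whose trajectory is simply reversed for the second template. The sine/cosine pan law below is a common convention and an assumption of this sketch; the patent does not mandate a specific law.

```python
import numpy as np

def pan_trajectory(n, start=-1.0, end=1.0):
    """Constant-power stereo pan sweeping a virtual source from `start`
    to `end` (-1 = hard left, +1 = hard right). Returns per-sample
    (left, right) gain arrays."""
    pos = np.linspace(start, end, n)       # simulated source position
    theta = (pos + 1.0) * np.pi / 4.0      # map [-1, 1] -> [0, pi/2]
    return np.cos(theta), np.sin(theta)

# First template: source sweeps left-to-right; the second, perceptually
# opposite template just reverses the trajectory.
l1, r1 = pan_trajectory(1000, -1.0, 1.0)
l2, r2 = pan_trajectory(1000, 1.0, -1.0)
```

The constant-power property (left and right gains squared always summing to one) keeps the perceived loudness steady while the apparent direction changes.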
  • a third browsing direction is defined and associated with a third transition effect template, which is perceptually different from the first and second transition effect templates.
  • If the first and second browsing directions refer to the up and down directions in a list of songs in an album, the third browsing direction may correspond to jumping to a different album in a library.
  • This concept may readily be generalized to also comprise a fourth browsing direction, a fifth browsing direction etc.
  • a second aspect of the invention relates to a method of providing a transition between a current and an alternative audio signal decoded from audio data, wherein the audio data include time markers encoded as audio metadata and indicating at least one section of the respective audio signal.
  • the method includes retrieving time marking information in the audio data and extracting an exit segment from the current audio signal and an entry segment from the alternative audio signal, wherein an endpoint of at least one of the segments is synchronized with a time marker.
  • the method then includes playing a transition, in which the exit segment and the entry segment are mixed in accordance with a transition effect template, and subsequently playing (in online or offline mode) the alternative audio signal from the end of the entry segment.
  • the second aspect also relates to a decoder adapted to perform each of the above steps.
  • the decoder may be integrated into some other device, such as a computer, media processing system, mobile telephone or music player. Methods in accordance with the invention may also be performed by processing means provided in a different setting, such as an online music service.
  • a section may be a verse, chorus, refrain or similar portion of an audio signal.
  • an endpoint may be either an initial endpoint or a final endpoint.
  • Said synchronization includes aligning the endpoint and marker in time, by letting them either coincide or define a predetermined time interval.
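The synchronization rule just described can be sketched as a small helper; the function name, its parameters and the seconds-based units are illustrative assumptions of this example.

```python
def extract_entry_segment(signal, marker, length, offset=0.0, sr=44100):
    """Extract an entry segment whose initial endpoint is synchronized
    with a time marker: either coinciding with it (offset=0) or placed
    a predetermined interval before it (offset>0), e.g. so that a
    fade-in finishes exactly where the marked section begins.
    `marker`, `length` and `offset` are in seconds."""
    start = int((marker - offset) * sr)
    end = int((marker - offset + length) * sr)
    return signal[max(start, 0):end]
```

An exit segment could be extracted symmetrically, with its final endpoint aligned on (or a predetermined interval after) a marker indicating the end of a section.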
  • the second aspect achieves the object of providing useful guidance because it is possible to enter the alternative signal at a section of interest, such as the chorus of a song or the announcement of contents in a spoken radio program, to make browsing more efficient. Indeed, a piece of music can often be identified by listening to a characteristic part, such as the chorus or the refrain of the piece of music.
  • hearing a characteristic part of the piece of music may be sufficient for a music consumer to determine whether (s)he likes or dislikes the piece.
  • When a music consumer seeks the characteristic part of a piece of music stored as digital audio data using prior-art technology, (s)he has to fast-forward manually within the piece to find the characteristic part, which is cumbersome.
  • Whether the characteristic part belongs to a piece of music or to audio material of a different type, it acts as an audio thumbnail of the piece.
  • transitions in accordance with the second aspect can also be accommodated into the content more seamlessly by avoiding abrupt or unrhythmic joining of two songs. This possibility can be used to enhance the listening experience.
  • the synchronization may consist in extracting the segment from the respective audio signal in such manner that an endpoint coincides in time with a time marker. This way, an entry or exit segment begins or ends at a time marker, which may in turn denote the beginning or end of a section of the audio signal.
  • the entry and/or exit segment may also be extracted in such manner that it is located some time distance away from a time marker. This allows an upbeat, an intro section, a bridge section, a program signature, a fade-in/fade-out effect or the like to be accommodated.
  • a segment endpoint may be located some distance before a time marker indicating the beginning of a section of the audio signal. If the endpoint refers to an entry segment, then a corresponding transition effect template may include gradually increasing the playback volume up to the beginning of the indicated section, preferably a chorus, which introduces the section without interfering unnecessarily with the content.
  • a segment endpoint may be located some distance after a time marker indicating an end of a section. Similarly, this allows for a smooth fade-out effect initiated at or around the final end- point of the section.
  • a time marker may delineate sections of an audio signal but may alternatively refer to beats, so that the transitions can be given an enhanced rhythmic accuracy.
  • Time markers referring to sections may also be aligned with beat markers or with a beat grid before they are utilized for providing transition effects.
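A nearest-beat snapping rule is one simple way to perform the alignment mentioned above; the patent leaves the alignment method open, so this is only an assumed strategy.

```python
def align_to_beat_grid(marker, beat_times):
    """Snap a section time marker (seconds) to the nearest beat marker
    so that a transition synchronized with it lands with rhythmic
    accuracy. `beat_times` is a non-empty sequence of beat instants."""
    return min(beat_times, key=lambda b: abs(b - marker))
```

A refinement could snap to the nearest downbeat rather than the nearest beat, when bar-level markers are available.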
  • the time markers indicate endpoints of representative segments extracted by the methods disclosed in the applicant's co-pending Provisional U.S. Patent Application No. 61/428,554 filed on 30 December 2010, as well as any related application claiming its priority, which are hereby incorporated by reference in their entirety.
  • the combination of these teachings and the second aspect of the present invention enables browsing directly between representative sections of the audio signals, which saves the listener time and helps him or her retain focus.
  • the time markers may be encoded in one or more header sections of the audio data in the format disclosed in the applicant's co-pending Provisional U.S. Patent Application No. 61/252,788 filed on 19 October 2009, as well as any related application claiming its priority, which are hereby incorporated by reference in their entirety.
  • the encoding formats described therein advantageously package the information together with the waveform data or transform coefficients themselves. Such joint distribution of the audio data in a standalone format provides robustness and uses both transmission bandwidth and storage space efficiently.
  • transition effect templates can be defined by simulating a movement of a virtual audio source playing the exit segment and/or the entry segment relative to the intended listening point.
  • the simulation may be based on a model for sound wave propagation; such models are widely known in the art.
  • the movement of the virtual source may follow a straight line or be curvilinear and may be illustrated by using a time-variable channel power distribution, creating a phase difference between channels and the like.
  • the simulation may in particular illustrate how the virtual audio source travels between different locations in a changing acoustic landscape, which may include closed or semi-closed reverberating spaces defined by walls and differing, possibly, by their volumes, shapes or wall reflectivity values.
  • Since the reverberating spaces may not be sharply delimited and the virtual audio source may be located at a variable distance from the walls on its motion path, there may be a gradual change over time in the reverberation properties, particularly the dry-to-wet signal ratio, i.e., the ratio between direct and reverberated signal levels.
  • the beginning of the entry or exit segment may be subjected to reverberation processing based on a different set of parameter values than the end of the same segment, wherein the change between these is gradual and continuous.
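The gradual, continuous change in the dry-to-wet ratio can be sketched as follows. The single delayed echo is a deliberately crude stand-in for a real reverberator, and the parameter names are assumptions of this example.

```python
import numpy as np

def gradual_dry_wet(segment, sr=44100, start_ratio=1.0, end_ratio=0.3,
                    delay_s=0.05, decay=0.5):
    """Apply reverberation whose dry-to-wet ratio changes gradually
    across the segment, simulating a virtual source moving into a more
    reverberant space."""
    n = len(segment)
    d = int(delay_s * sr)
    wet = np.zeros(n)
    wet[d:] = decay * segment[:n - d]              # one echo tap
    dry_gain = np.linspace(start_ratio, end_ratio, n)  # continuous ramp
    return dry_gain * segment + (1.0 - dry_gain) * wet
```

With a full reverberator, the same ramp would instead interpolate the model's parameter set (room size, wall reflectivity, wet level) sample by sample or block by block.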
  • Doppler shifts may be used to illustrate a constant or variable motion velocity of a virtual audio source.
  • Doppler shifts may be simulated by non-uniform, dynamic re-sampling of an audio signal, so as to achieve a (variable) time stretch.
  • Advanced re-sampling methods are well-known by those skilled in the art and may include spline or Lagrange interpolation, or other methods.
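The non-uniform re-sampling idea can be sketched as below. Linear interpolation is used for brevity, whereas the text above names spline or Lagrange interpolation as higher-quality alternatives; the linear rate ramp is likewise an assumption of this example.

```python
import numpy as np

def doppler_resample(signal, rate_start, rate_end):
    """Simulate a Doppler shift by non-uniform, dynamic re-sampling:
    the read position advances at a smoothly varying rate, producing a
    variable time stretch (and hence a pitch shift on playback)."""
    total = len(signal) - 1
    pos, out = 0.0, []
    while pos < total:
        # read rate ramps linearly from rate_start to rate_end
        rate = rate_start + (rate_end - rate_start) * (pos / total)
        k = int(pos)
        t = pos - k
        out.append((1.0 - t) * signal[k] + t * signal[k + 1])  # linear interp
        pos += rate
    return np.array(out)
```

A rate above 1 compresses the segment in time (raising pitch, as for an approaching source); a rate below 1 stretches it (lowering pitch, as for a receding one).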
  • embodiments of the invention adapted for use with stereophonic playback equipment may also include a transition effect template that applies a different channel power distribution for the exit segment than for the entry segment.
  • One or both channel power distributions may be time variable.
  • the distribution(s) may also be obtainable by moving a virtual audio source, which plays the concerned segment, in a spatial direction relative to a listener. Such simulated movement may entail a change in impact angle, stereo width (if the virtual audio source is a stereo source and has more than one channel), attenuation, directivity etc.
  • Transition effect templates based on any of the concepts discussed above may be developed further by addition (superposition) of a previously obtained audio segment.
  • the previously obtained audio segment is thereby combined with the entry and exit segments by mixing.
  • the segment to be added is preferably independent of the songs between which transition takes place, but may for instance be selected from a list of available options. If a decoder performs the method, the selection may be effectuated by a processing component within the decoder. The selection may be random or be related to properties of the songs to which or from which transition takes place.
  • the segment(s) may have been recorded or sampled, and then encoded and stored in a memory.
  • the segment(s) to be added may also be synthesized in real time on the basis of a predetermined set of parameter values or values corresponding to an entry selected from a list.
  • transition effects are dynamically adapted to suit an actual physical playback configuration.
  • a decoder or other device performing the method may receive an indication of properties of the physical playback sources, either by manual input or automatically.
  • a playback source may include a loudspeaker or a set of headphones.
  • the playback equipment may be characterized by the number of channels, properties of individual physical audio sources, the number of audio sources, the geometric configuration of the audio sources or the like.
  • a simulated motion of a virtual audio source reproducing the entry or exit segment will produce a first pair of waveforms at the points, separated by a first distance, where the physical playback audio sources (e.g., headphones) are intended to be located; this pair is different from a second pair of waveforms occurring at a pair of physical playback audio sources separated by a second distance (e.g., a pair of loudspeakers in a room).
  • a dynamic adaptation of the transition effect template may in this case include varying the settings of an acoustic model for computing what effect the virtual audio source has at the points where the physical audio sources are to be located.
  • the adaptation may instead consist in cascading an original transition effect template with a transfer function representing the path between the original playback sources and the alternative sources, e.g., from loudspeakers to headphones.
  • the adaptation may further involve adapting EQ parameters in accordance with the playback source.
  • Methods and devices known from the field of virtual-source localization and spatial synthesis may be useful in implementations of this embodiment. This includes the use of head-related transfer functions (HRTFs).
  • the transition effects may also be dynamically adapted to properties of the current and/or alternative signal.
  • the properties may be determinable by automatic processing (in real time or at a preliminary stage) of the respective signal. Such automatically determinable properties may include tempo, beatiness, key, timbre and beat strength or - for spoken content - gender of speaker, speed, language etc.
  • the properties may also be of a type for which the classification may require human intervention, such as musical genre, age, mood etc. Classification data for properties of the latter type may be encoded in audio metadata related to the signal, whereas properties of the former type may either be determined in real time on the decoder side or encoded in metadata as a result of a preliminary step either on the encoder or decoder side.
  • the invention and its variations discussed above may be embodied as computer-executable instructions stored on a computer-readable medium.
  • figure 1 schematically shows audio signals of finite duration, between which transitions in an "up" and a "down" direction are possible, and where these transitions have been made distinguishable by being associated with distinct transition effect templates;
  • figure 2 shows, similarly to figure 1, how perceptually different transition effect templates may be used in accordance with the first aspect of the invention to distinguish transitions between streaming audio signals;
  • figure 3 illustrates a database structure in which it is relevant to distinguish between three different transition directions
  • figure 4 illustrates browsing between characteristic sections (audio thumbnails) of audio signals by allowing time markers to guide the extraction of entry segments in accordance with the second aspect
  • figure 5 illustrates, similarly to figure 4, browsing between characteristic sections, wherein a time interval is interposed between a section time marker and an endpoint of an entry segment;
  • figure 6 visualizes a transition effect template in terms of the evolution of respective attenuations applied to the entry segment and exit segment with respect to time (downward direction);
  • figure 7a visualizes, similarly to figure 6, another transition effect template, intended for use with stereo playback equipment and obtainable by simulating movement of virtual audio sources;
  • figure 7b visualizes the transition effect template of figure 7a (as well as a further transition effect template) in terms of a simulation of mobile, virtual audio sources and their geometrical relationship to an intended listener;
  • figure 8 visualizes a further transition effect template obtainable by simulating movement of a virtual audio source through reverberating spaces with different properties
  • figure 9 is a generalized block diagram of an audio player in accordance with the first or second aspect of the invention.
  • figures 10 and 11 are flowcharts of methods in accordance with embodiments of the first and second aspects, respectively;
  • figure 12 is a generalized block diagram of a decoder in accordance with an embodiment of the second aspect of the invention.
  • figure 13 is a generalized block diagram of a component for extracting a representative segment.
  • Figure 1 shows audio entries (or tracks) T1-T8 ordered in a database.
  • the database may or may not have a visual interface for displaying the audio entries and their relationships.
  • the database is located in a database storage means 901, storing either the actual data or pointers (addresses) to a location where they can be accessed.
  • the database storage means 901 is arranged in an audio player 904 together with a decoder 902 for supplying an audio signal or audio signals to a (physical) playback audio source 903 on the basis of one or more of the database entries T1-T8.
  • the same notation will be used for database entries and audio signals.
  • the database 901, decoder 902 and playback source 903 are communicatively coupled.
  • the playback source 903 may accept the audio signal in the format (analogue or digital) in which it is supplied by the decoder 902, or may also include a suitable converter (not shown), such as a digital-to-analogue converter.
  • the playback source 903 may be arranged at a different location than the decoder 902 and may be connected to this by a communications network.
  • the playback process and the decoding process may also be separated in time, wherein the decoder 902 operates in an offline mode and the resulting audio signal is recorded on a storage medium (not shown) for later playback.
  • the audio player 904 may be a dedicated device or integrated in a device, in particular a server accessible via a communications network, such as the World Wide Web.
  • the decoder 902 is currently playing entry T6 and about half of its duration has elapsed.
  • the audio player 904 is associated with a control means (not shown) enabling a user to browse in a first direction A1 and a second direction A2, whereby playback of either entry T5 or T7 is initiated instead of the currently playing entry T6.
  • the control means may for example be embodied as hard or soft keys on a keyboard or keypad, dedicated control buttons, fields in a touch-sensitive screen, haptic control means (possibly including an accelerometer or orientation sensor) or voice-control means.
  • a user may perform a browsing action by selecting, using the control means, a database entry which is to be decoded by the decoder 902 and give rise to an audio signal or audio signals to be supplied to the audio source 903.
  • the control means may for instance control the database 901 directly in order that it supplies the decoder 902 with a requested alternative database entry or entries. It may alternatively cause the decoder 902 to communicate with the database 901 in order that it supplies the information (i.e., database entry or entries) necessary to fulfill a user request.
  • the decoder 902 is configured so that at least one browsing direction is associated with a transition effect template for producing a transition effect to be played before normal playback of the alternative database entry is initiated, which then produces an alternative audio signal to be supplied to the playback source 903.
  • both browsing directions A1, A2 are associated with transition effect templates, according to which an entry segment and an exit segment are extracted and mixed in a specified fashion.
  • a browsing action in the first direction A1 will cause an exit segment T6-out1 to be extracted from the currently playing audio signal T6 and an entry segment T5-in to be extracted from the audio signal T5 located 'before' the currently playing signal T6.
  • the invention is not limited to any particular length of the segments; in a personal music player they may be of the order of a few seconds, whereas in a discotheque more lengthy transitions may be desirable, possibly exceeding one minute in length; transitions that are perceptually very distinctive - as may be the case if they are accompanied by overlaid audio segments - may be chosen to be shorter than a second.
  • the entry segment begins at the beginning of audio signal T5.
  • the decoder 902 includes segment extraction means (not shown) and a mixer (not shown). Information for controlling the mixer forms part of the first transition effect template. As illustrated, the subsequent portion of signal T6 will be completely attenuated or, put differently, will not be used as a basis for providing the transition. As also suggested by the drawing, on which the upper and lower portion of the bars symbolizing the segments are not shaded equally at all points in time, the power distribution applied to each of the entry and exit segment is not symmetric.
  • the asymmetry may for instance refer to the spatial left/right or front/rear directions of a conventional stereo system.
  • as this exemplifying transition illustrates, however, the power distributions of the respective segments T5-in, T6-out1 are symmetric with respect to one another at all points in time.
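The kind of mixing just described — two segments cross-faded while their power is distributed asymmetrically between the left and right channels — can be sketched as follows. The equal-power pan law and the particular gain trajectories are assumptions for illustration, not the template encoding used in the patent:

```python
import math

def crossfade_gains(n):
    """Per-channel (L, R) gain curves sketching a directive template:
    the exit segment fades out with its power sitting toward the left,
    while the entry segment fades in with its power ending up toward the
    right, so the two segments are distributed asymmetrically in space
    at every instant. Returns two lists of (gain_L, gain_R) tuples."""
    exit_g, entry_g = [], []
    for i in range(n):
        t = i / (n - 1)                 # normalized transition time in [0, 1]
        th_out = t * math.pi / 2        # exit pan angle (equal-power law)
        th_in = (1 - t) * math.pi / 2   # entry pan angle
        exit_g.append(((1 - t) * math.cos(th_out), (1 - t) * math.sin(th_out)))
        entry_g.append((t * math.sin(th_in), t * math.cos(th_in)))
    return exit_g, entry_g

def mix_transition(exit_seg, entry_seg):
    """Mix two equal-length mono segments into a stereo (L, R) transition."""
    n = min(len(exit_seg), len(entry_seg))
    g_out, g_in = crossfade_gains(n)
    return [(go[0] * exit_seg[i] + gi[0] * entry_seg[i],
             go[1] * exit_seg[i] + gi[1] * entry_seg[i])
            for i, (go, gi) in enumerate(zip(g_out, g_in))]
```

The exit segment here starts at full power hard left and vanishes; the entry segment rises from silence and ends at full power hard right — one plausible reading of the asymmetric power distribution suggested by the drawing.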
  • a browsing action in the second direction A2 will cause the decoder 902 and database 901 to generate a transition followed by playback of the audio signal T7 located 'after' the currently playing signal.
  • the transition is controlled by instructions contained in the second transition effect template, which differs to such an extent from the first template that an intended user will be able to distinguish them auditorily in normal listening conditions.
  • entry and exit segments having a different, greater duration are extracted from the audio signals.
  • the second template also defines a different time evolution of the power distribution to be applied to the entry and the exit segments, respectively.
  • both time evolutions include a time-invariable intermediate phase. As suggested by the asymmetry, both segments are then played at approximately equal power but from different directions.
  • a listener may experience that a new audio source playing the alternative audio signal T7 enters from one end of the scene while pushing an existing audio source playing the current audio signal T6 towards the other end of the scene; after a short time interval has elapsed (corresponding to the intermediate phase), both audio sources continue their movements so that the existing audio source disappears completely and the new audio source is centered on the scene.
  • an audio signal y01 representing a transition generated on the basis of a first transition effect template Tr01 may be written as y01 = Tr01(τ1, τ1′, σ1, σ1′, d1), where τ1, τ1′ denote the initial endpoints of the exit and entry segments, σ1, σ1′ the respective transition functions and d1 the total duration of the transition.
  • All five components may be independent of the audio signals x0, x1.
  • One or more components may also be dynamically adaptable in accordance with one or both audio signals x0, x1.
  • the initial endpoints may be chosen with regard to the structure of each audio signal, as may the total duration of the transition.
  • the transition functions may be adaptable, either directly in response to properties of the audio signals or indirectly by stretching to match a desirable transition duration.
  • similarly, an audio signal y02 representing a transition based on a second transition effect template Tr02 may be written as y02 = Tr02(τ2, τ2′, σ2, σ2′, d2).
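The template-as-a-function view above can be sketched in code. The component names used here (segment start indices, gain functions, duration) are an assumed reading of the five template components:

```python
def apply_template(x0, x1, tau0, tau1, sigma0, sigma1, n):
    """Sketch of y = Tr(tau0, tau1, sigma0, sigma1, n) applied to a current
    signal x0 and an alternative signal x1: tau0/tau1 are assumed to be the
    initial endpoints (sample indices) of the exit and entry segments,
    sigma0/sigma1 the transition (gain) functions of normalized time, and
    n the duration of the transition in samples."""
    y = []
    for i in range(n):
        t = i / (n - 1)  # normalized time in [0, 1]
        y.append(sigma0(t) * x0[tau0 + i] + sigma1(t) * x1[tau1 + i])
    return y
```

Because the gain functions are passed in as callables, the same machinery serves any template; only the five components change between Tr01 and Tr02.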
  • the process described above is visualized in flowchart form in figure 10.
  • the flowchart illustrates the states of the audio player 904 at different points in time.
  • the process starts in point 1010.
  • the first and second browsing directions A1, A2 are associated with the first and second transition effect templates, respectively.
  • the audio player 904 may receive either a browsing action in the first direction A1, upon which it moves to a first transition state 1041, or a browsing action in the second direction A2, which causes it to move to a second transition state 1042.
  • the audio player 904 plays the transition generated by mixing an entry segment T5-in and an exit segment T6-out1 in accordance with the first transition effect template.
  • the second transition state 1042 is governed by the second transition effect template.
  • the audio player 904 enters a first (second) playback state 1051 (1052), in which playback of the first (second) alternative audio signal continues.
  • the process then either receives new user input, such as a transition command, or moves after the playback has been completed to the first (second) end state 1091 (1092) of the process.
  • This process may be embodied as a computer program.
  • the example of figure 1 can be generalized to audio signals for which either an initial or a final endpoint is undefined (or unknown), as is often the case for streaming broadcast audio or video channels.
  • the invention can be applied to such signals as well with slight modifications, the main difference being the manner in which entry and exit segments are to be extracted.
  • figure 2 shows three audio signals C0, C1, C2, which are received at a playback device continuously and in real time.
  • the audio signals contain timestamps indicating distances 30, 60 and 90 seconds from some reference point in time.
  • the timestamps are either explicit or indirectly derivable, e.g., from metadata in data packets received over a packet-switched network.
  • the exit segments C0-out1, C0-out2 may be extracted from the current audio signal C0 using the current playback point as a starting point.
  • the entry segments C1-in, C2-in may be extracted in a similar fashion while using a time corresponding to the current playing point as an initial endpoint. An approximation of the time of the current playing point may be derived by interpolation between timestamps in a fashion known per se.
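The per-se-known interpolation between timestamps might look like the following; representing each timestamp as a (sample index, stream time) pair is an assumption made for illustration:

```python
def interpolate_playing_point(ts_samples, ts_times, current_sample):
    """Estimate the stream time of the current playback point by linear
    interpolation between the two nearest received timestamps.
    ts_samples and ts_times are parallel, increasing lists pairing a
    sample index in the received stream with a stream time in seconds."""
    for (s0, t0), (s1, t1) in zip(zip(ts_samples, ts_times),
                                  zip(ts_samples[1:], ts_times[1:])):
        if s0 <= current_sample <= s1:
            frac = (current_sample - s0) / (s1 - s0)
            return t0 + frac * (t1 - t0)
    # past the last timestamp: extrapolate at the last known rate
    s0, t0 = ts_samples[-2], ts_times[-2]
    s1, t1 = ts_samples[-1], ts_times[-1]
    return t1 + (current_sample - s1) * (t1 - t0) / (s1 - s0)
```

With, say, timestamps at 30, 40 and 50 seconds, the current playing point can be estimated for any sample position between (or shortly after) them.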
  • Figure 2 illustrates a transition effect template associated with the first browsing direction A1 , wherein attenuation is gradually and symmetrically applied to the exit segment together with an increasing reverberation effect REV.
  • the increase in reverberation may more precisely correspond to an increase of the wet-to-dry ratio of the first exit segment C0-out1.
  • Figure 2 also shows another transition effect template, which is associated with the second browsing direction A2. It includes playing the second exit segment C0-out2 at a power that increases gradually from a reference value (e.g., 100 %) and then goes to zero abruptly.
  • the entry segments C1-in, C2-in are played at gradually increasing power until the reference level is reached.
  • Figure 3 shows an alternative logical structure of the database 901, wherein database entries (audio signals) are arranged in a two-dimensional matrix allowing browsing in upward, downward and lateral directions A1, A2, A3.
  • the logical structure may correspond to conventional audio distribution formats insofar as S41, S42, S43 and S44 may refer to different tracks in an album and S1, S2, S3, S4, S5 may refer to different albums in a collection.
  • An album may be associated with a representative segment further facilitating orientation in the database, such as a well- known portion of a track in the album. As such, browsing in the lateral direction A3 from the current playing point may initiate playing of such representative segment.
  • the inventive concept can be readily extended to include three perceptually distinct transition effect templates for facilitating navigation in the database 901. Extending the inventive concept to four or more distinct browsing directions is also considered within the abilities of the skilled person.
  • Figure 6 illustrates mixing information encoded in a transition effect template which is primarily adapted for one-channel playback equipment.
  • the figure shows the respective playback powers to be applied to the first exit segment C0-out1 (shaded; left is positive direction; the scale may be linear or logarithmic) and the first entry segment C1-in (non-shaded; right is positive direction; the scale may be linear or logarithmic).
  • the time evolution of each playback power is shown normalized with respect to a reference power level. Put differently, this reference level corresponds to no attenuation and zero power corresponds to full attenuation.
  • the exit segment C0-out1 is played at the reference power at the beginning of the transition, whereas the entry segment C1-in is played at the reference power at its end.
  • Each of the power curves increases or decreases in a linear fashion between zero power and the reference power level. In this example, the increase and the decrease phase are not synchronized with each other.
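A minimal sketch of such linear, mutually unsynchronized power curves; the breakpoint positions are hypothetical:

```python
def linear_ramp(t, start, end, rising):
    """Piecewise-linear gain over normalized time t in [0, 1]: flat outside
    [start, end], linear inside. `rising` selects fade-in (0 -> 1) versus
    fade-out (1 -> 0), with 1.0 playing the role of the reference power."""
    if t <= start:
        return 0.0 if rising else 1.0
    if t >= end:
        return 1.0 if rising else 0.0
    frac = (t - start) / (end - start)
    return frac if rising else 1.0 - frac

# Hypothetical figure-6-style template: the exit fade-out and the entry
# fade-in occupy different, unsynchronized portions of the transition.
exit_gain = lambda t: linear_ramp(t, 0.1, 0.7, rising=False)
entry_gain = lambda t: linear_ramp(t, 0.3, 0.9, rising=True)
```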
  • Figure 7a illustrates mixing information relating to another transition effect template, which is primarily adapted for two-channel playback equipment.
  • the figure includes two graphs showing a left (L) and right (R) channel of each of an exit segment S0-out2 (shaded; left is positive direction of left channel, while right is positive direction of right channel; the scales may be linear or logarithmic) and an entry segment S2-in (non-shaded), as well as a common, downward time axis.
  • the playback powers in figure 7a exhibit continuously variable rates of increase and decrease. It will now be explained, with reference to figure 7b, how such mixing and attenuation behavior can be obtained by simulating movement of an audio source in relation to an intended listener position.
  • a virtual listener with left and right ears L, R is initially located opposite a scene with a virtual current audio source S0 reproducing a current audio signal.
  • a first transition effect template Tr01 involves removing the virtual current audio source S0 from the scene in the rightward direction; meanwhile, but not necessarily in synchronicity, a virtual first alternative audio source S1 enters the scene from the right.
  • according to a second transition effect template Tr02, the virtual current audio source S0 exits to the left, while a virtual second alternative audio source S2 enters from the left.
  • the first and second transition effect templates contain information obtainable from simulating the motion of the virtual audio sources as described in connection with figure 7b.
  • Such simulation would include a computation, in accordance with a suitable acoustic model, of the waveforms obtained at the locations of the virtual listener's ears as a result of the superposition of the sound waves emitted by the mobile audio sources.
  • the resulting waveforms are to be reproduced by physical audio sources (e.g., headphones) located approximately at the ear positions.
  • the audio sources S0, S2 are therefore virtual in the sense that they exist in the framework of the simulation, while the headphones, which may be referred to as physical, exist in use situations where the second transition effect template Tr02 is used for providing a song transition.
  • the acoustic model may preferably take into account the attenuation of a sound wave (as a function of distance), the phase difference between the two ear positions (as a function of their spacing and the celerity of the sound wave, and the ensuing time difference) and the Doppler shift (as a function of the velocity).
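A toy version of such an acoustic model — per-ear 1/r attenuation, interaural delay from the path-length difference, and a Doppler factor from the radial velocity — might look like the following. The geometry, ear spacing and 1/r law are illustrative assumptions, not the model prescribed by the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed

def ear_cues(src_x, src_y, vel_x, ear_spacing=0.2):
    """Toy acoustic model for a source at (src_x, src_y) moving along x,
    with a listener's ears on the x axis around the origin. Returns per-ear
    gain (1/r attenuation) and propagation delay, plus a Doppler frequency
    factor computed from the radial velocity toward the listener."""
    cues = {}
    for ear, ex in (("L", -ear_spacing / 2), ("R", ear_spacing / 2)):
        r = math.hypot(src_x - ex, src_y)
        cues[ear] = {"gain": 1.0 / max(r, 1e-6),
                     "delay": r / SPEED_OF_SOUND}
    r = math.hypot(src_x, src_y)
    v_radial = -vel_x * src_x / max(r, 1e-6)  # + when approaching listener
    cues["doppler"] = SPEED_OF_SOUND / (SPEED_OF_SOUND - v_radial)
    return cues
```

Evaluating these cues along the dashed trajectories of figure 7b, frame by frame, would yield the kind of continuously varying channel powers and phase differences shown in figure 7a.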
  • a transition effect template may either be formulated in terms of geometric or kinematic control parameters to a simulation module (e.g., a spatial synthesis engine, such as an HRTF rendering engine) or in terms of channel power distributions, phase difference data and other pre-calculated information resulting from such simulation.
  • in the latter case, the information in the transition effect template itself is independent of the audio signals between which the transition is to take place.
  • the simulation module, which may be implemented in software and/or hardware, is then necessary only at the design stage of the transition effect template, which thus contains parameters intended to control a mixing module or the like.
  • Figure 8 shows an acoustic configuration by which further simulation-based transition effect templates may be obtained. More precisely, the figure shows an audio source 803 adapted to reproduce an entry or exit segment and movable relative to a listener 899 and walls 801, 802 for influencing the reverberation characteristics.
  • a first, semi-closed space is defined by the first set of walls 801 , which are provided with an acoustically damping lining. Thus, the first space will be characterized by a dry impulse response.
  • a second, semi-closed space is defined by the second set of walls 802, which are harder than the first set of walls 801 and also enclose a larger volume. The reverberation in the second space will therefore have a longer response time and slower decay.
  • this acoustic 'landscape' is input to a simulation module for deriving the waveforms resulting at ear positions of a listener when the audio source 803 is moved along the dashed arrow through the different reverberating spaces.
  • a listener will hear a variable degree of reverberation being applied to the audio signal reproduced by the audio source 803, which (s)he may associate with the disappearance of the audio source 803 and hence, with the end of the playback of the corresponding audio signal. It has been noted that a gradual change in the ratio between a dry (direct) audio component and a wet (singly or multiply reflected) audio component is generally associated with movement or change in distance between audio source and listener.
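A sketch of such a distance-driven wet-to-dry crossfade, under the simplifying assumption that the direct component decays as 1/r beyond one metre while the reverberant field stays roughly constant:

```python
def wet_dry_mix(dry, wet, distance, rolloff=1.0):
    """Mix one sample of a dry (direct) component and a wet (reverberated)
    component for a simulated source-listener distance in metres: the
    direct part falls off as 1/r**rolloff beyond 1 m while the reverberant
    field is held constant, so the wet-to-dry ratio grows as the source
    recedes. The 1/r law and the 1 m knee are assumptions."""
    direct = 1.0 / max(distance, 1.0) ** rolloff
    return direct * dry + (1.0 - direct) * wet
```

Applied sample by sample while the simulated distance of the audio source 803 increases, this produces the gradually 'wetter' sound the listener associates with the source disappearing.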
  • figure 4 illustrates how time markers delineating sections of audio signals can be used to enable efficient browsing between the signals by directly jumping to a characteristic portion (audio thumbnail) of a new signal.
  • the figure shows three music signals S1, S2, S3, which have been encoded together with (or have been associated with) time markers in metadata which indicate the locations of choruses (R).
  • An audio player (not shown) currently plays the second audio signal S2 at a point indicated by the triangular play symbol.
  • a user can control the audio player so that it switches to an alternative signal and begins playing this.
  • the user can select a first signal S1 (transition A1) or a third signal S3 (transition A2) as alternatives to the currently playing one.
  • the audio player is adapted to begin playback approximately at the beginning of the first chorus section (R) of the selected alternative signal.
  • this may include playing a transition in which an exit segment (extracted from the currently playing signal S2) and an entry segment (extracted from the alternative signal) are mixed and wherein an initial or final endpoint of the entry segment coincides with or is related to a time marker indicating the beginning of the first chorus section of the entry segment.
  • Figure 5 shows an instance of a transition A2 from the second music signal S2 to the third signal.
  • the music signals have been synchronized in time by laterally moving the bars symbolizing the signals, so that synchronous points in the two segments are located side by side, one directly above the other.
  • the exit segment S2-out and the entry segment S3-in have been indicated by braces.
  • the final (right) endpoint of the entry segment S3-in coincides with the beginning of the first chorus of the third music signal S3. This means, after the transition has been accomplished, that playback of the third music signal S3 will be continued from its first chorus.
  • a transition effect template applying an entry segment extraction of this type may be advantageously combined with a conventional fade-in type of channel power evolution with respect to time, such as the one shown in figure 1.
  • in a template where the entry segment is played at audible power from an early point in time, one may instead synchronize the initial endpoint of the segment with the beginning of the chorus.
  • an entry segment may be extracted in such manner that a time marker is located a predefined time interval Δ from its initial endpoint, this interval being equal to the duration of a previously obtained (e.g., recorded) segment which is to be superposed on the initial portion of the entry and exit segments by mixing.
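The two synchronization variants — final endpoint on the marker (suited to fade-in templates) and initial endpoint on the marker, optionally shifted by the predefined interval (here `delta`) to leave room for an overlaid introduction — can be sketched as:

```python
def entry_segment_bounds(marker, duration, mode="final", delta=0.0):
    """Compute (start, end) times of an entry segment relative to a chorus
    time marker. mode='final': the segment ends at the marker, so the
    fade-in completes exactly as the chorus begins. mode='initial': the
    segment starts at the marker, shifted earlier by `delta` so that an
    overlaid segment of that duration finishes at the marker. Parameter
    names are illustrative, not taken from the patent."""
    if mode == "final":
        start, end = marker - duration, marker
    elif mode == "initial":
        start, end = marker - delta, marker - delta + duration
    else:
        raise ValueError("mode must be 'final' or 'initial'")
    return max(start, 0.0), end
```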
  • the superposed previously obtained segment may then function as an introduction to the most characteristic portion of the alternative audio signal.
  • the term 'synchronized' is intended to cover such a segment extraction procedure as well.
  • synchronizing segment endpoints with time markers is equally applicable to exit segments. This may be used to enable deferred switching, wherein playback of the currently playing signal is continued up to the end of the current section, which may be a song section, a spoken news item, an advertisement or the like.
  • Transitions between musical signals may be further improved by taking beat points into account in addition to time markers delineating sections. For example, while sections in Western music may generally be identified in terms of bars, time markers derived using statistical methods are not necessarily aligned with the bar lines. By extracting entry and/or exit segments beginning or ending at a full bar, the transitions can be made more rhythmical.
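Snapping a statistically derived section marker to the nearest bar line can be sketched as follows, assuming a known beat grid whose first beat falls on a downbeat:

```python
def snap_to_bar(marker, beat_times, beats_per_bar=4):
    """Move a section time marker (seconds) to the nearest bar line.
    beat_times is an increasing list of beat times; every beats_per_bar-th
    beat, starting from beat_times[0], is assumed to be a downbeat."""
    bars = beat_times[::beats_per_bar]
    return min(bars, key=lambda b: abs(b - marker))
```

A segment endpoint synchronized with the snapped marker then begins or ends at a full bar rather than at the raw, possibly mid-bar marker position.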
  • a process in accordance with the second aspect of this invention is illustrated by the flowchart in figure 11.
  • the process retrieves time markers from metadata, either from an audio file or bitstream or by contacting an external database, which constitutes a first step 1120.
  • At least the alternative audio signal is associated with metadata containing time markers.
  • the method extracts an exit segment and an entry segment from the current and al- ternative audio signals, respectively, wherein an endpoint of at least one segment is synchronized with a time marker.
  • a transition is played during which the exit segment and the entry segment are mixed in accordance with a transition effect template.
  • the alternative audio signal is played from a point corresponding to the end (i.e., final endpoint) of the entry segment.
  • the process ends at point 1190.
  • Figure 12 shows a decoder 1200 adapted to receive a first and a second audio signal S0, S1, each of which is associated with metadata (META) defining time markers.
  • the decoder 1200 may be adapted to receive a first and second audio data bitstream containing such metadata.
  • a decoding unit 1205 is operable to play either the first or second audio signal or a transition obtained by mixing segments extracted from these. In the example, this is symbolically indicated by a three-position switch 1204 operable to supply the decoding unit 1205 with either the first S0 or second S1 audio data signal or a transition signal obtained as follows.
  • the first and second audio signals are fed in parallel to the switch 1204, to a time marker extractor 1201 and a segment extractor 1202.
  • the time marker extractor 1201 retrieves the time markers and supplies a signal indicative of these to the segment extractor 1202.
  • the segment extractor 1202 is then able to synchronize one or more time instants in a signal, which are indicated by the time markers, with one or more endpoints of an entry or exit segment.
  • the segment extractor 1202 outputs an entry segment S1-in and an exit segment S0-out to a mixer 1203, which passes the mixed transition signal on to the upstream side of the switch 1204, making it available for playback.
  • the output signal obtained at the downstream side of the switch 1204 may for instance be supplied to a local or remote playback source, or may be recorded for later playback.
  • the time marker extractor 1201 may retrieve the time markers by extracting them from the metadata encoded together with the audio data.
  • the metadata may also be fetched remotely from an external database which hosts the metadata and is accessible via a communications network.
  • An example of an external metadata database is Gracenote's CD Database. This may proceed in accordance with the teachings of the applicant's co-pending Provisional U.S. Patent Application No. 61/252,788, filed on 19 October 2009. Pages 16-25 in this related application are of particular relevance for understanding the present invention, and protection is sought also for combinations with features disclosed therein.
  • the time marker extractor 1201 may be adapted to determine the time markers (or equivalently, the locations of the sections of the signal) on the basis of the audio signal directly.
  • Figure 13 shows a possible internal structure of the time marker extractor 1201 in a simplified example embodiment wherein it is adapted to determine the sections in one single audio signal, and therefore has one input only.
  • such a time marker extractor comprises a feature-extraction component 1301 which outputs a signal indicating features from audio data to each of a repetition detection component 1302, a scene-change detection component (which may be embodied as a portion of a more general refinement component) 1303 and a ranking component 1304.
  • the repetition detection component 1302, the scene-change detection component 1303 and the ranking component 1304 are communicatively coupled.
  • the feature-extraction component 1301 may extract features of various types from media data such as a song.
  • the repetition detection component 1302 may find time-wise sections of the media data that are repetitive, for example, based on certain characteristics of the media data such as the melody, harmonies, lyrics, timbre of the song in these sections as represented in the extracted features of the media data.
  • the repetitive segments may be subjected to a refinement procedure performed by the scene change detection component 1303, which finds the correct start and end time points that delineate segments encompassing selected repetitive sections. These correct start and end time points may comprise beginning and ending scene change points of one or more scenes possessing distinct characteristics in the media data.
  • a pair of a beginning scene-change point and an ending scene-change point may delineate a candidate representative segment.
  • a ranking algorithm performed by the ranking component 1304 may be applied for the purpose of selecting a representative segment from all the candidate representative segments. In a particular embodiment, the representative segment selected may be the chorus of the song.
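A toy stand-in for the repetition-detection idea — scoring per-frame features by how many near-duplicates they have elsewhere in the signal — can be sketched as below. A real system such as the one described operates on whole sections, refines their boundaries at scene changes and then ranks candidates, all of which this sketch omits:

```python
def most_repeated_frame(features, tol=0.1):
    """Given per-frame feature vectors (list of equal-length lists), score
    each frame by the number of other frames that are near-duplicates of it
    (Euclidean distance below tol) and return the best-scoring frame index.
    The tolerance value is an arbitrary assumption."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    counts = [sum(1 for j, g in enumerate(features)
                  if j != i and dist(f, g) < tol)
              for i, f in enumerate(features)]
    return counts.index(max(counts))
```

For a song, the frames scoring highest by such a repetition measure would typically cluster in the chorus, which is why the ranking stage may end up selecting the chorus as the representative segment.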
  • the decoder 902 shown in figure 9, which has so far been discussed primarily in connection with the first aspect, may have an internal structure similar to the decoder 1200 in figure 12, which the skilled person may therefore rely upon for practicing the first aspect of the invention as well.
  • the time marker extractor 1201 of the decoder 1200 may be inactive or even absent.
  • the systems and methods disclosed hereinabove may be implemented as software, firmware, hardware or a combination thereof.
  • the division of tasks between functional units referred to in the above description does not necessarily correspond to the division into physical units; to the contrary, one physical component may have multiple functionalities, and one task may be carried out by several physical components in cooperation.
  • Certain components or all components may be implemented as software executed by a digital signal processor or microprocessor, or be implemented as hardware or as an application-specific integrated circuit.
  • Such software may be distributed on computer readable media, which may comprise computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.

Abstract

In one aspect, a method of providing directive transitions between audio signals comprises associating a first/second browsing direction (A1, A2) with a first/second transition effect template. In response to a browsing action in one of said browsing directions, a transition is played in which an exit segment (S0-out1, S0-out2) and an entry segment (S1-in, S2-in) are mixed in accordance with the associated transition effect template. A further aspect proposes a method of transitioning between audio signals decoded from audio data including time markers encoded as audio metadata and indicating at least one section of the respective audio signal. The method includes retrieving time marking information in the audio data; extracting an exit segment (S0-out1, S0-out2) and an entry segment (S1-in, S2-in), wherein an endpoint of at least one of the segments is synchronized with a time marker; and playing a transition in which the exit and entry segments are mixed in accordance with a transition effect template.

Description

SONG TRANSITION EFFECTS FOR BROWSING
Technical field
The invention disclosed herein generally relates to audio signal processing. It more precisely relates to methods and devices for transitioning between audio signals using transition effects that are capable of carrying directive information to the listener. In particular, the transition effects may be in concord with the temporal structure of the audio content.
Background of the invention
In online media services, standalone music players and other products for reproducing audio content, the switching between two audio signals may or may not be marked out by an audible transition feature. For example, US 7,424,117 B2 discloses a method of creating an illusion of motion when transitioning between two songs played back in a multi-channel system. Existing ways of transitioning between signals are not always very helpful to the listener and may even be experienced as detrimental to the total listening experience. Some approaches may for instance involve joining two songs with no regard to differences in tempo, key, beat number etc. Since the listener is likely to remain mentally in the musical context of the previous song during an initial portion of the next song, (s)he may perceive this portion less attentively. The mental refocusing process that takes place during this portion may also involve some discomfort.
Summary of the invention
It is an object of the present invention to enable transitions between audio signals that are more appealing or at least neutral to the total listening experience. A first particular object is to propose transitions that can be used to provide useful guidance to the listener, especially directive transitions. A second particular object is to propose transitions more in concord with the structure of the audio content than certain available transitions. For instance, it is desirable to enable such transitions to be positioned in an integrated fashion in view of the temporal structure of a piece of music.
To this end, the invention proposes methods and devices for transitioning between audio signals in accordance with the independent claims. The dependent claims define advantageous embodiments of the invention.
As used herein, an audio signal is a pure audio signal or an audio component of a video signal or other compound signal. An audio signal in this sense may be in the form of a finite signal portion, such as a song, or may be a streaming signal, such as a radio channel. An audio signal may be encoded in audio data, which may be arranged in accordance with a predetermined format which - in addition to waveform data, transform coefficients and the like - includes audio metadata useful for the playback of the signal. Metadata associated with a song that is distributed as a file may also be supplied from an external database. A finite audio signal portion may be encoded as a computer file of a well-defined size, whereas a streaming signal may be distributed over a packet-switched communications network as a bitstream or distributed over an analogue or digital broadcast network. For the purposes of this specification, a song may refer to a unit in which digital audio content is distributed and/or filed in a digital library. It may relate to a piece of vocal or non-vocal music, a segment of speech or other recorded or synthetic sounds. One song in this sense may present an internal structure including at least one section, which may be a verse, refrain, chorus or the like. The structure may be a temporal structure but may also correspond to a listener's perception of the song, the (first) occurrence of singing or a certain instrument, or to its semantic content. For instance, one may identify sections in a spoken radio program on the basis of the topics discussed, possibly in real time. A plurality of songs in this sense may be grouped into an album (or record), and a plurality of albums may form a collection. Furthermore, the adjective directive is sometimes used broadly as a synonym of 'guiding'. In some instances, it may also be employed in a narrower sense, where it may mean 'referring to a direction', 'associated with a direction' or the like. 
A direction in this sense may be one- dimensional (the property of being forward/backward, upward/downward, positive/negative etc.) or many-dimensional, including spatial directions.
In a first aspect of the invention, a method of providing directive transitions includes associating a first browsing direction, for transitioning from a current audio signal to a first alternative audio signal, with a first transition effect template; and associating a second browsing direction, for transitioning from a current audio signal to a second alternative audio signal, with a second transition effect template, which is perceptually different from the first transition effect template. When a browsing action in one of said browsing directions is performed, the method further includes playing a transition, in which an exit segment, extracted from the current audio signal, and an entry segment, extracted from the alternative audio signal, are mixed in accordance with the associated transition effect template. After this, the alternative audio signal is played from the end of the entry segment.
This first aspect also relates to a decoder adapted to perform each step of the method.
This aspect achieves the object of providing useful guidance to a listener, since different browsing actions are associated with perceptually distinguishable transitions. This is to say, the templates are so different in objective terms that they are auditorily distinguishable. By identifying transitions of different types, the user is notified during the transition of whether the entity (e.g., a music player) playing the audio signal is effecting a transition in the first or second direction, which is an internal state of the entity. After the transition has been accomplished - possibly by combining the information on the browsing direction with knowledge of an ordered relationship between the available audio signals - the user is able to derive the identity of the alternative audio signal that the entity is playing. Hence, the invention provides for automatic indications about conditions prevailing in the entity playing the audio signal. This arrangement may help the listener to use a visual browsing interface for navigating among audio signals or may help replace such an interface by non-visual means. Where the transitions are not commanded by the listener, (s)he may also receive useful information by hearing whether a transition takes place in a forward or a backward direction referring to some physical entity that is not necessarily related to the audio content as such, e.g., auditory traffic signals, elevator announcements and various applications in the entertainment field. Finally, transitions that are based on mixing the material playing are likely to be perceived as more agreeable than, for instance, transitions involving a synthetic voice overlay for conveying the same information.
A transition effect template is used for generating a transition on the basis of the exit and entry segments. The template may contain information regarding the length of each segment. The template may further control how the segments are to be mixed, such as by specifying the power at which each segment is to be played back on the different channels, possibly in a time-dependent manner (e.g., fade in, fade out), by specifying effects to be applied (e.g., simulated motion, simulated Doppler effect, stereo regeneration, spectral band replication, reverberation) or by specifying predetermined content to be superposed (overlaid) on top of the signal generated from the entry and exit segments. In particular, one or both transition effect templates may comprise a channel power distribution to be used for the entry and/or exit segments. In other words, each transition effect template may include two channel power distributions, an exit channel distribution and an entry channel distribution, to be applied to the respective segments. The channel power distribution may be time-invariant or time-variant, as will be explained below. It is particularly advantageous to include a time dependence when playback takes place over a single channel. Where several playback channels exist and a first transition effect template has been defined, a second transition effect template can be automatically generated by permuting the power distribution coefficients among the channels within each of the exit and entry channel power distributions. In particular, if the playback channels are spatially arranged, such as with respect to a left/right and/or a forward/backward direction, the permutation may correspond to a reflection of the channels in one or more of these directions, e.g., by letting coefficients for right and left channels trade places but leaving the centre channel unchanged.
This saves time for the designer, who may conveniently generate a large number of transition effect templates. The symmetry may also have a self-explanatory effect on the listener, so that (s)he realizes that the first and second transition effect templates are related but different.
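To illustrate the permutation described above, the following sketch derives a second template from a first by mirroring the left and right power coefficients while leaving the centre channel unchanged. The channel order (left, centre, right), the function names and the numeric coefficients are illustrative assumptions, not part of the claimed method:

```python
# Hypothetical sketch: deriving a second transition effect template from a
# first one by mirroring the channel power distributions left/right.
# Assumed channel order: (left, centre, right).

def mirror_power_distribution(dist):
    """Swap left/right power coefficients, leaving the centre unchanged."""
    left, centre, right = dist
    return (right, centre, left)

def mirror_template(template):
    """Derive a second template by mirroring both the exit and the entry
    channel power distribution of a first template."""
    return {
        "exit": mirror_power_distribution(template["exit"]),
        "entry": mirror_power_distribution(template["entry"]),
    }

# A first template: exit fades out toward the left, entry comes in from the right.
template_1 = {"exit": (0.8, 0.2, 0.0), "entry": (0.0, 0.2, 0.8)}
template_2 = mirror_template(template_1)
```

The same permutation could be applied to a time-variant distribution by mirroring each time step in turn.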
A useful class of transition effect templates can be defined in connection with stereophonic playback over two or more channels, which is generally known to be able to create an illusion of locality, directivity or movement. The first transition effect template is obtainable by simulating a movement of the audio source playing the exit segment or the entry segment in a first spatial direction relative to the intended listening point. Optionally, both the exit-segment audio source and the entry-segment audio source may be moving. This may entail using a time-dependent channel power distribution, creating a time-dependent time difference (or phase difference) between channels, or the like. The second transition effect template may then correspond to a simulated movement of the same or, preferably, the other audio source in a second, different direction. The first and second directions are perceptually distinguishable and may for example be opposite one another.
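The simulated movement mentioned above may, for instance, be realized with a time-dependent constant-power pan law. The following sketch is one possible realization; the pan law, the function names and the mapping from elapsed time to scene position are assumptions made for illustration:

```python
import math

def pan_gains(position):
    """Constant-power stereo pan law: position in [0, 1], where 0 is hard
    left and 1 is hard right. Returns (left_gain, right_gain) such that
    left_gain**2 + right_gain**2 == 1 (constant perceived power)."""
    angle = position * math.pi / 2.0
    return (math.cos(angle), math.sin(angle))

def movement_gains(t, duration, direction):
    """Time-dependent channel power distribution simulating an audio source
    that traverses the stereo scene during a transition of the given
    duration. direction=+1 moves left-to-right, direction=-1 right-to-left."""
    progress = min(max(t / duration, 0.0), 1.0)
    position = progress if direction > 0 else 1.0 - progress
    return pan_gains(position)
```

A second, perceptually different template is then obtained simply by calling `movement_gains` with the opposite direction.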
In a further development of the first aspect, a third browsing direction is defined and associated with a third transition effect template, which is perceptually different from the first and second transition effect templates. As an example, if the first and second browsing directions refer to the up and down directions in a list of songs in an album, the third browsing direction may correspond to jumping to a different album in a library. This concept may readily be generalized to also comprise a fourth browsing direction, a fifth browsing direction etc.
A second aspect of the invention relates to a method of providing a transition between a current and an alternative audio signal decoded from audio data, wherein the audio data include time markers encoded as audio metadata and indicating at least one section of the respective audio signal. The method includes retrieving time marking information in the audio data and extracting an exit segment from the current audio signal and an entry segment from the alternative audio signal, wherein an endpoint of at least one of the segments is synchronized with a time marker. The method then includes playing a transition, in which the exit segment and the entry segment are mixed in accordance with a transition effect template, and subsequently playing (in online or offline mode) the alternative audio signal from the end of the entry segment.
The second aspect also relates to a decoder adapted to perform each of the above steps. In either aspect, the decoder may be integrated into some other device, such as a computer, media processing system, mobile telephone or music player. Methods in accordance with the invention may also be performed by processing means provided in a different setting, such as an online music service.
It is recalled that a section may be a verse, chorus, refrain or similar portion of an audio signal. For the purposes of the claims within the second aspect, an endpoint may be either an initial endpoint or a final endpoint. Said synchronization includes aligning the endpoint and marker in time, by letting them either coincide or define a predetermined time interval. The second aspect achieves the object of providing useful guidance because it is possible to enter the alternative signal at a section of interest, such as the chorus of a song or the announcement of contents in a spoken radio program, to make browsing more efficient. Indeed, a piece of music can often be identified by listening to a characteristic part, such as the chorus or the refrain of the piece of music. Also, hearing a characteristic part of the piece of music may be sufficient for a music consumer to determine whether (s)he likes or dislikes the piece. When a music consumer seeks the characteristic part of a piece of music stored as digital audio data when using prior-art technology, (s)he has to fast-forward manually within the piece to find the characteristic part, which is cumbersome. Thus, whether the characteristic part refers to a piece of music or audio material of a different type, it acts as an audio thumbnail of the piece. Further, transitions in accordance with the second aspect can also be accommodated into the content more seamlessly by avoiding abrupt or unrhythmic joining of two songs. This possibility can be used to enhance the listening experience.
The synchronization may consist in extracting the segment from the respective audio signal in such manner that an endpoint coincides in time with a time marker. This way, an entry or exit segment begins or ends at a time marker, which may in turn denote the beginning or end of a section of the audio signal.
The entry and/or exit segment may also be extracted in such manner that it is located some time distance away from a time marker. This allows an upbeat, an intro section, a bridge section, a program signature, a fade-in/fade-out effect or the like to be accommodated. On the one hand, a segment endpoint may be located some distance before a time marker indicating the beginning of a section of the audio signal. If the endpoint refers to an entry segment, then a corresponding transition effect template may include gradually increasing the playback volume up to the beginning of the indicated section, preferably a chorus, which introduces the section without interfering unnecessarily with the content. On the other hand, a segment endpoint may be located some distance after a time marker indicating an end of a section. Similarly, this allows for a smooth fade-out effect initiated at or around the final endpoint of the section.
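A minimal sketch of such marker-synchronized extraction is given below, where an entry segment starts a configurable lead-in distance before a section marker so that a fade-in can complete exactly at the marked section. The function name and the sample-list signal representation are illustrative assumptions:

```python
def extract_entry_segment(signal, sample_rate, marker_time, length, lead_in=0.0):
    """Extract an entry segment whose initial endpoint is placed `lead_in`
    seconds before a time marker (e.g., the start of a chorus), so that a
    fade-in can finish exactly at the marked section. `signal` is a list of
    mono samples; `marker_time` and `length` are in seconds."""
    start = int(round((marker_time - lead_in) * sample_rate))
    start = max(start, 0)  # clamp in case the lead-in precedes the signal
    end = min(start + int(round(length * sample_rate)), len(signal))
    return signal[start:end]
```

With `lead_in=0.0`, the segment endpoint coincides with the time marker, corresponding to the synchronization variant described two paragraphs above.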
A time marker may delineate sections of an audio signal but may alternatively refer to beats, so that the transitions can be given an enhanced rhythmic accuracy. Time markers referring to sections may also be aligned with beat markers or with a beat grid before they are utilized for providing transition effects.
In embodiments of the second aspect of the invention, the time markers indicate endpoints of representative segments extracted by the methods disclosed in the applicant's co-pending Provisional U.S. Patent Application No. 61/428,554 filed on 30 December 2010, as well as any related application claiming its priority, which are hereby incorporated by reference in their entirety. The combination of these teachings and the second aspect of the present invention enables browsing directly between representative sections of the audio signals, which saves the listener time and helps him or her retain focus.
In embodiments of the second aspect of the invention, the time markers may be encoded in one or more header sections of the audio data in the format disclosed in the applicant's co-pending Provisional U.S. Patent Application No. 61/252,788 filed on 19 October 2009, as well as any related application claiming its priority, which are hereby incorporated by reference in their entirety. The encoding formats described therein advantageously package the information together with the waveform data or transform coefficients themselves. Such joint distribution of the audio data in a standalone format provides robustness and uses both transmission bandwidth and storage space efficiently.
In both the first and second aspect, irrespective of the number of playback channels, transition effect templates can be defined by simulating a movement of a virtual audio source playing the exit segment and/or the entry segment relative to the intended listening point. The simulation may be based on a model for sound wave propagation; such models are widely known in the art. The movement of the virtual source may follow a straight line or be curvilinear and may be illustrated by using a time-variable channel power distribution, creating a phase difference between channels and the like. The simulation may in particular illustrate how the virtual audio source travels between different locations in a changing acoustic landscape, which may include closed or semi-closed reverberating spaces defined by walls and differing, possibly, by their volumes, shapes or wall reflectivity values. This enables transition effects that human listeners may associate with the appearing or disappearing of an audio source on the listening scene. As the reverberating spaces may not be sharply delimited and the virtual audio source may be located at variable distance to the walls on its motion path, there may be a gradual change over time in the reverberation properties, particularly the dry-to-wet signal ratio, i.e., the ratio between direct and reverberated signal level. As such, the beginning of the entry or exit segment may be subjected to reverberation processing based on a different set of parameter values than the end of the same segment, wherein the change between these is gradual and continuous. Another advantageous type of transition effect includes (simulated) Doppler shifts, which may be used to illustrate a constant or variable motion velocity of a virtual audio source. Doppler shifts may be simulated by non-uniform, dynamic re-sampling of an audio signal, so as to achieve a (variable) time stretch.
Advanced re-sampling methods are well-known by those skilled in the art and may include spline or Lagrange interpolation, or other methods.
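As one possible realization of the non-uniform re-sampling mentioned above, the sketch below uses simple linear interpolation (spline or Lagrange interpolation, as noted, would give higher quality); the function names and the read-out mapping are illustrative assumptions:

```python
def doppler_resample(signal, stretch_fn, num_out):
    """Non-uniform re-sampling of a mono signal to simulate a Doppler
    shift: stretch_fn maps an output sample index to a (fractional)
    position in the input signal. Linear interpolation is used here for
    simplicity; higher-order interpolation reduces artifacts."""
    out = []
    last = len(signal) - 1
    for n in range(num_out):
        pos = min(max(stretch_fn(n), 0.0), float(last))
        i = int(pos)
        frac = pos - i
        nxt = signal[min(i + 1, last)]
        out.append(signal[i] * (1.0 - frac) + nxt * frac)
    return out

# A slowly accelerating read-out rate approximates a rising pitch, as for a
# virtual source approaching the listener:
accelerating = lambda n: 0.9 * n + 0.0005 * n * n
```

A time-variable `stretch_fn` thus yields the variable time stretch described above.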
Furthermore, embodiments of the invention adapted for use with stereophonic playback equipment may also include a transition effect template that applies a different channel power distribution for the exit segment than for the entry segment. One or both channel power distributions may be time variable. The distribution(s) may also be obtainable by moving a virtual audio source, which plays the concerned segment, in a spatial direction relative to a listener. Such simulated movement may entail a change in impact angle, stereo width (if the virtual audio source is a stereo source and has more than one channel), attenuation, directivity etc.
Transition effect templates based on any of the concepts discussed above may be developed further by addition (superposition) of a previously obtained audio segment. The previously obtained audio segment is thereby combined with the entry and exit segments by mixing. The segment to be added is preferably independent of the songs between which transition takes place, but may for instance be selected from a list of available options. If a decoder performs the method, the selection may be effectuated by a processing component within the decoder. The selection may be random or be related to properties of the songs to which or from which transition takes place. The segment(s) may have been recorded or sampled, and then encoded and stored in a memory. The segment(s) to be added may also be synthesized in real time on the basis of a predetermined set of parameter values or values corresponding to an entry selected from a list.
Advantageously, the transition effects are dynamically adapted to suit an actual physical playback configuration. More precisely, a decoder or other device performing the method may receive an indication of properties of the physical playback sources, either by manual input or automatically. A playback source may include a loudspeaker or a set of headphones. The playback equipment may be characterized by the number of channels, properties of individual physical audio sources, the number of audio sources, the geometric configuration of the audio sources or the like. In a two-channel setting, a simulated motion of a virtual audio source reproducing the entry or exit segment will produce a first pair of waveforms at the points, separated by a first distance, where it is intended to locate physical playback audio sources (e.g., headphones), which is different from a second pair of waveforms occurring at a pair of physical playback audio sources separated by a second distance (e.g., a pair of loudspeakers in a room). A dynamical adaptation of the transition effect template may in this case include varying the settings of an acoustic model for computing what effect the virtual audio source has at the points where the physical audio sources are to be located. The adaptation may as well consist in cascading an original transition effect template with a transfer function representing the path between the original playback sources and the alternative sources, e.g., from loudspeakers to headphones. The adaptation may further involve adapting EQ parameters in accordance with the playback source. Methods and devices known from the field of virtual source localization and spatial synthesis for virtual sources may be useful in implementations of this embodiment. This includes the use of head-related transfer functions (HRTFs).
The transition effects may also be dynamically adapted to properties of the current and/or alternative signal. The properties may be determinable by automatic processing (in real time or at a preliminary stage) of the respective signal. Such automatically determinable properties may include tempo, beatiness, key, timbre and beat strength or - for spoken content - gender of speaker, speed, language etc. The properties may also be of a type for which the classification may require human intervention, such as musical genre, age, mood etc. Classification data for properties of the latter type may be encoded in audio metadata related to the signal, whereas properties of the former type may either be determined in real time on the decoder side or encoded in metadata as a result of a preliminary step either on the encoder or decoder side.
The invention and its variations discussed above may be embodied as computer-executable instructions stored on a computer-readable medium.
It is noted that the invention relates to all combinations of features from both aspects, even if recited in different claims.
Brief description of the drawings
Advantageous embodiments of the invention will now be described with reference to the accompanying drawings, on which:
figure 1 schematically shows audio signals of finite duration, between which transitions in an "up" and a "down" direction are possible, and where these transitions have been made distinguishable by being associated with distinct transition effect templates;
figure 2 shows, similarly to figure 1, how perceptually different transition effect templates may be used in accordance with the first aspect of the invention to distinguish transitions between streaming audio signals;
figure 3 illustrates a database structure in which it is relevant to distinguish between three different transition directions;
figure 4 illustrates browsing between characteristic sections (audio thumbnails) of audio signals by allowing time markers to guide the extraction of entry segments in accordance with the second aspect;
figure 5 illustrates, similarly to figure 4, browsing between characteristic sections, wherein a time interval is interposed between a section time marker and an endpoint of an entry segment;
figure 6 visualizes a transition effect template in terms of the evolution of respective attenuations applied to the entry segment and exit segment with respect to time (downward direction);
figure 7a visualizes, similarly to figure 6, another transition effect template, intended for use with stereo playback equipment and obtainable by simulating movement of virtual audio sources;
figure 7b visualizes the transition effect template of figure 7a (as well as a further transition effect template) in terms of a simulation of mobile, virtual audio sources and their geometrical relationship to an intended listener;
figure 8 visualizes a further transition effect template obtainable by simulating movement of a virtual audio source through reverberating spaces with different properties;
figure 9 is a generalized block diagram of an audio player in accordance with the first or second aspect of the invention;
figures 10 and 11 are flowcharts of methods in accordance with embodiments of the first and second aspects, respectively;
figure 12 is a generalized block diagram of a decoder in accordance with an embodiment of the second aspect of the invention; and
figure 13 is a generalized block diagram of a component for extracting a representative segment.
Detailed description of embodiments
Figure 1 shows audio entries (or tracks) T1-T8 ordered in a database. The database may or may not have a visual interface for displaying the audio entries and their relationships. In this example intended for illustrative purposes, as shown in figure 9, the database is located in a database storage means 901, storing either the actual data or pointers (addresses) to a location where they can be accessed. The database storage means 901 is arranged in an audio player 904 together with a decoder 902 for supplying an audio signal or audio signals to a (physical) playback audio source 903 on the basis of one or more of the database entries T1-T8. For the purposes of this description, the same notation will be used for database entries and audio signals. The database 901, decoder 902 and playback source 903 are communicatively coupled. The playback source 903 may accept the audio signal in the format (analogue or digital) in which it is supplied by the decoder 902, or may also include a suitable converter (not shown), such as a digital-to-analogue converter. The playback source 903 may be arranged at a different location than the decoder 902 and may be connected to this by a communications network. The playback process and the decoding process may also be separated in time, wherein the decoder 902 operates in an offline mode and the resulting audio signal is recorded on a storage medium (not shown) for later playback. The audio player 904 may be a dedicated device or integrated in a device, in particular a server accessible via a communications network, such as the World Wide Web.
As indicated by the triangular play symbol (►), the decoder 902 is currently playing entry T6 and about half of its duration has elapsed. The audio player 904 is associated with a control means (not shown) enabling a user to browse in a first direction A1 and a second direction A2, whereby playback of either entry T5 or T7 is initiated instead of the currently playing entry T6. The control means may for example be embodied as hard or soft keys on a keyboard or keypad, dedicated control buttons, fields in a touch-sensitive screen, haptic control means (possibly including an accelerometer or orientation sensor) or voice-control means. A user may perform a browsing action by selecting, using the control means, a database entry which is to be decoded by the decoder 902 and give rise to an audio signal or audio signals to be supplied to the audio source 903. The control means may for instance control the database 901 directly in order that it supplies the decoder 902 with a requested alternative database entry or entries. It may alternatively cause the decoder 902 to communicate with the database 901 in order that it supplies the information (i.e., database entry or entries) necessary to fulfill a user request.
In accordance with the first aspect of the invention, the decoder 902 is configured so that at least one browsing direction is associated with a transition effect template for producing a transition effect to be played before normal playback of the alternative database entry is initiated, which then produces an alternative audio signal to be supplied to the playback source 903. In the example shown in figure 1, both browsing directions A1, A2 are associated with transition effect templates, according to which an entry segment and an exit segment are extracted and mixed in a specified fashion.
More precisely, a browsing action in the first direction A1 will cause an exit segment T6-out1 to be extracted from the currently playing audio signal T6 and an entry segment T5-in to be extracted from the audio signal T5 located 'before' the currently playing signal T6. The invention is not limited to any particular length of the segments; in a personal music player they may be of the order of a few seconds, whereas in a discotheque more lengthy transitions may be desirable, possibly exceeding one minute in length; transitions that are perceptually very distinctive - as may be the case if they are accompanied by overlaid audio segments - may be chosen to be shorter than a second. In this example, the entry segment begins at the beginning of audio signal T5. As schematically shown in the enlarged portion, the entry and exit segments T5-in, T6-out1 will be mixed in such manner that the total power given to signal T5 is gradually increased and the total power given to signal T6 is gradually decreased. To this end, the decoder 902 includes segment extraction means (not shown) and a mixer (not shown). Information for controlling the mixer forms part of the first transition effect template. As illustrated, the subsequent portion of signal T6 will be completely attenuated or, put differently, will not be used as a basis for providing the transition. As also suggested by the drawing, on which the upper and lower portions of the bars symbolizing the segments are not shaded equally at all points in time, the power distribution applied to each of the entry and exit segments is not symmetric. The asymmetry may for instance refer to the spatial left/right or front/rear directions of a conventional stereo system. In the exemplifying transition illustrated, however, the power distributions of the respective segments T5-in, T6-out1 are symmetric with respect to one another at all points in time.
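The gradual power exchange between exit and entry segments described above can be sketched as a template-controlled mix. Mono, equal-length segments and the linear gain curves are illustrative assumptions:

```python
def mix_transition(exit_seg, entry_seg, gain_out, gain_in):
    """Mix an exit segment and an entry segment sample by sample, with
    time-dependent gains taken from a transition effect template:
    gain_out(u) decreases and gain_in(u) increases over u in [0, 1].
    Mono, equal-length segments are assumed for brevity."""
    n = min(len(exit_seg), len(entry_seg))
    mixed = []
    for k in range(n):
        u = k / (n - 1) if n > 1 else 0.0
        mixed.append(exit_seg[k] * gain_out(u) + entry_seg[k] * gain_in(u))
    return mixed

# Linear fade-out/fade-in, as in the T6-out1 / T5-in example above:
fade_out = lambda u: 1.0 - u
fade_in = lambda u: u
```

A second, distinguishable template would supply different gain functions (or different segment lengths) to the same mixer.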
Similarly, a browsing action in the second direction A2 will cause the decoder 902 and database 901 to generate a transition followed by playback of the audio signal T7 located 'after' the currently playing signal. The transition is controlled by instructions contained in the second transition effect template, which differs to such an extent from the first template that an intended user will be able to distinguish them auditorily in normal listening conditions. According to the second template, entry and exit segments having a different, greater duration are extracted from the audio signals. The second template also defines a different time evolution of the power distribution to be applied to the entry and the exit segments, respectively. Here, both time evolutions include a time-invariable intermediate phase. As suggested by the asymmetry, both segments are then played at approximately equal power but from different directions. In response to this, based on acquired everyday acoustic experience which basically reflects the physical laws governing sound propagation, a listener may experience that a new audio source playing the alternative audio signal T7 enters from one end of the scene while pushing an existing audio source playing the current audio signal T6 towards the other end of the scene; after a short time interval has elapsed (corresponding to the intermediate phase), both audio sources continue their movements so that the existing audio source disappears completely and the new audio source is centered on the scene.
In mathematical notation, an audio signal y01 representing a transition generated on the basis of a first transition effect template Tr01 may be written as

y01(t) = f01(x0(t + σ01), t) + g01(x1(t + τ01), t), 0 ≤ t < L01,

where f01 and g01 are respective transition functions, which are time-variable in the general case and which control applied channel power and mixing behavior etc.; x0 and x1 are the current audio signal and the first alternative audio signal; σ01, τ01 are initial endpoints of the exit and entry segments, respectively; and L01 is the duration of the transition. Hence, the first transition effect template may be identified with the 5-tuple Tr01 = (f01, g01, σ01, τ01, L01). All five components may be independent of the audio signals x0, x1. One or more components may also be dynamically adaptable in accordance with one or both audio signals x0, x1. In particular, the initial endpoints may be chosen with regard to the structure of each audio signal, as may the total duration of the transition. The transition functions may be adaptable, either directly in response to properties of the audio signals or indirectly by stretching to match a desirable transition duration. Similarly to this, an audio signal y02 representing a transition based on a second transition effect template Tr02 may be written as

y02(t) = f02(x0(t + σ02), t) + g02(x1(t + τ02), t), 0 ≤ t < L02,

and continuing the analogy the second template may be identified with Tr02 = (f02, g02, σ02, τ02, L02). Hence, a pair of transition effect templates may be identified with the ordered pair T = (Tr01, Tr02).
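The 5-tuple notation above can be rendered directly in code. The following sketch is an illustrative interpretation, not a reference implementation; nearest-neighbour sampling and the helper names are assumptions:

```python
def render_transition(template, x_cur, x_alt, sample_rate):
    """Render a transition y(t) = f(x0(t + sigma), t) + g(x1(t + tau), t)
    for 0 <= t < L, following the 5-tuple (f, g, sigma, tau, L) notation
    above. Signals are lists of mono samples; times are in seconds."""
    f, g, sigma, tau, L = template
    n = int(round(L * sample_rate))
    y = []
    for k in range(n):
        t = k / sample_rate
        s0 = sample_at(x_cur, (t + sigma) * sample_rate)
        s1 = sample_at(x_alt, (t + tau) * sample_rate)
        y.append(f(s0, t) + g(s1, t))
    return y

def sample_at(signal, index):
    """Nearest-neighbour sample look-up, zero outside the signal."""
    i = int(round(index))
    return signal[i] if 0 <= i < len(signal) else 0.0
```

A linear crossfade corresponds, for example, to f(s, t) = s * (1 - t/L) and g(s, t) = s * t/L.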
It will be obvious to the skilled person having studied this disclosure that a multitude of pairs of transition effect templates can be designed. Whether or not a proposed pair of transition effect templates will produce distinguishable transitions will in many cases be immediately apparent to the skilled person. In more doubtful situations, one may resort to experiments using representative audio signals for the intended application and a suitable group of trial users instructed to try to distinguish transitions. Conventional statistical methods can be applied in order to establish whether the templates within a proposed pair are sufficiently distinguishable.
The process described above is visualized in flowchart form in figure 10. The flowchart illustrates the states of the audio player 904 at different points in time. The process starts in point 1010. In a configuration state 1020, the first and second browsing directions A1, A2 are associated with the first and second transition effect templates, respectively. In a subsequent state 1030, the audio player 904 may receive either a browsing action in the first direction A1, upon which it moves to a first transition state 1041, or a browsing action in the second direction A2, which causes it to move to a second transition state 1042. In the first transition state 1041, the audio player 904 plays the transition generated by mixing an entry segment T5-in and an exit segment T6-out1 in accordance with the first transition effect template. Similarly, the second transition state 1042 is governed by the second transition effect template. After the first (second) transition state 1041 (1042), the audio player 904 enters a first (second) playback state 1051 (1052), in which playback of the first (second) alternative audio signal continues. The process then either receives new user input, such as a transition command, or moves after the playback has been completed to the first (second) end state 1091 (1092) of the process. This process may be embodied as a computer program.
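The state sequence of figure 10 can be sketched as a minimal state machine; the class name, state names and log format are illustrative assumptions:

```python
# Minimal state-machine sketch of the flow in figure 10: a browsing action
# in direction A1 or A2 selects the associated transition effect template,
# after which playback of the corresponding alternative signal continues.

class AudioPlayerStates:
    def __init__(self, templates):
        # e.g. {"A1": "template_1", "A2": "template_2"} (configuration state)
        self.templates = templates
        self.state = "PLAYING"
        self.log = []

    def browse(self, direction):
        """Handle a browsing action: play the transition for the associated
        template, then continue with the alternative signal."""
        template = self.templates[direction]
        self.state = "TRANSITION"
        self.log.append(("play_transition", template))
        self.state = "PLAYING_ALTERNATIVE"
        self.log.append(("play_alternative", direction))

player = AudioPlayerStates({"A1": "template_1", "A2": "template_2"})
player.browse("A1")
```

In a real player the transition and playback states would of course run asynchronously against the audio clock; the sketch only captures the ordering of states.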
The ideas illustrated in figure 1 can be generalized to audio signals for which either an initial or a final endpoint is undefined (or unknown), as is often the case of streaming broadcast audio or video channels. The invention can be applied to such signals as well with slight modifications, the main difference being the manner in which entry and exit segments are to be extracted. To this end, figure 2 shows three audio signals C0, C1, C2, which are received at a playback device continuously and in real time. The audio signals contain timestamps indicating distances 30, 60 and 90 seconds from some reference point in time. The timestamps are either explicit or indirectly derivable, e.g., from metadata in data packets received over a packet-switched network. The exit segments C0-out1, C0-out2 may be extracted from the current audio signal C0 using the current playback point as a starting point. The entry segments C1-in, C2-in may be extracted in a similar fashion while using a time corresponding to the current playing point as an initial endpoint. An approximation of the time of the current playing point may be derived by interpolation between timestamps in a fashion known per se.
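The interpolation between timestamps mentioned at the end of the paragraph above may be sketched as follows; the anchor-pair representation and the function name are illustrative assumptions:

```python
def estimate_stream_time(t_local, anchors):
    """Linearly interpolate the stream time corresponding to a local clock
    time t_local, given (local_time, stream_time) anchor pairs derived from
    received timestamps (e.g., the 30/60/90 s markers in figure 2).
    Only the first and last anchors are used, assuming a steady stream."""
    (l0, s0), (l1, s1) = anchors[0], anchors[-1]
    if l1 == l0:
        return s0  # degenerate case: a single anchor point
    return s0 + (s1 - s0) * (t_local - l0) / (l1 - l0)
```

The estimated stream time can then serve as the initial endpoint when extracting an entry segment from an alternative streaming signal.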
Figure 2 illustrates a transition effect template associated with the first browsing direction A1, wherein attenuation is gradually and symmetrically applied to the exit segment together with an increasing reverberation effect REV. The increase in reverberation may more precisely correspond to an increase of the wet-to-dry ratio of the first exit segment C0-out1. Figure 2 also shows another transition effect template, which is associated with the second browsing direction A2. It includes playing the second exit segment C0-out2 at a power that increases gradually from a reference value (e.g., 100 %) and then goes to zero abruptly. According to both transition effect templates shown in this figure, the entry segments C1-in, C2-in are played at gradually increasing power until the reference level is reached.
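The three power evolutions just described may be sketched as gain curves over a transition of duration T; the linear shapes are illustrative assumptions, as the actual templates may use other curves.

```python
# Sketch of the channel-power evolutions described for figure 2.
# Gain 1.0 is the reference level ("100 %"); curves are assumed linear.
def fade_out_gain(t, T):
    """First template: gradual symmetric attenuation of the exit segment."""
    return max(0.0, 1.0 - t / T)

def rise_then_cut_gain(t, T):
    """Second template: power increases from the reference value,
    then goes to zero abruptly at the end of the transition."""
    return 1.0 + t / T if t < T else 0.0

def fade_in_gain(t, T):
    """Both templates: entry segment at gradually increasing power
    until the reference level is reached."""
    return min(1.0, t / T)
```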
Figure 3 shows an alternative logical structure of the database 901, wherein database entries (audio signals) are arranged in a two-dimensional matrix allowing browsing in upward, downward and lateral directions A1, A2, A3. The logical structure may correspond to conventional audio distribution formats insofar as S41, S42, S43 and S44 may refer to different tracks in an album and S1, S2, S3, S4, S5 may refer to different albums in a collection. An album may be associated with a representative segment further facilitating orientation in the database, such as a well-known portion of a track in the album. As such, browsing in the lateral direction A3 from the current playing point may initiate playing of such a representative segment. After that point, browsing in the upward and downward directions A1, A2 causes switching between representative segments of the respective albums. The inventive concept can be readily extended to include three perceptually distinct transition effect templates for facilitating navigation in the database 901. Extending the inventive concept to four or more distinct browsing directions is also considered within the abilities of the skilled person.
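Navigation in such a two-dimensional matrix may be sketched as follows; the coordinate convention (row = album, column = track) and the clamping at the edges are illustrative assumptions.

```python
# Sketch of browsing in the two-dimensional database of figure 3:
# rows are albums (S1..S5), columns are tracks (e.g., S41..S44);
# A1/A2 browse between albums, A3 browses laterally within one.
def browse(position, direction, n_albums, n_tracks):
    album, track = position
    if direction == "A1":    # upward: previous album
        return (max(0, album - 1), track)
    if direction == "A2":    # downward: next album
        return (min(n_albums - 1, album + 1), track)
    if direction == "A3":    # lateral: next track / representative segment
        return (album, min(n_tracks - 1, track + 1))
    return position
```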
Figure 6 illustrates mixing information encoded in a transition effect template which is primarily adapted for one-channel playback equipment. As functions of time (downward direction), the figure shows the respective playback powers to be applied to the first exit segment C0-out1 (shaded; left is positive direction; the scale may be linear or logarithmic) and the first entry segment C1-in (non-shaded; right is positive direction; the scale may be linear or logarithmic). The time evolution of each playback power is shown normalized with respect to a reference power level. Put differently, this reference level corresponds to no attenuation and zero power corresponds to full attenuation. As the curves show, the exit segment C0-out1 is played at the reference power at the beginning of the transition, whereas the entry segment C1-in is played at the reference power at its end. Each of the power curves increases or decreases in a linear fashion between zero power and the reference power level. In this example, the increase and the decrease phase are not synchronized with each other.
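A one-channel mix with unsynchronized linear ramps, as described for figure 6, may be sketched as follows; the ramp endpoints passed in are illustrative.

```python
# Sketch: one-channel mixing per figure 6. The exit segment's linear
# decrease and the entry segment's linear increase each have their own
# start and end times and need not be synchronized.
def mix_sample(exit_x, entry_x, t, t_out_start, t_out_end, t_in_start, t_in_end):
    def ramp_down(t, a, b):
        if t <= a:
            return 1.0          # reference power (no attenuation)
        if t >= b:
            return 0.0          # full attenuation
        return (b - t) / (b - a)

    def ramp_up(t, a, b):
        if t <= a:
            return 0.0
        if t >= b:
            return 1.0
        return (t - a) / (b - a)

    return (ramp_down(t, t_out_start, t_out_end) * exit_x
            + ramp_up(t, t_in_start, t_in_end) * entry_x)
```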
Figure 7a illustrates mixing information relating to another transition effect template, which is primarily adapted for two-channel playback equipment. The figure includes two graphs showing a left (L) and right (R) channel of each of an exit segment S0-out2 (shaded; left is positive direction of the left channel, while right is positive direction of the right channel; the scales may be linear or logarithmic) and an entry segment S2-in (non-shaded), as well as a common, downward time axis. In addition to the constant and linearly varying behaviors illustrated in figure 6, the playback powers in figure 7a exhibit continuously variable rates of increase and decrease. It will now be explained, with reference to figure 7b, how such mixing and attenuation behavior can be obtained by simulating movement of an audio source in relation to an intended listener position.
In figure 7b, a virtual listener with left and right ears L, R is initially located opposite a scene with a virtual current audio source S0 reproducing a current audio signal. As shown by the corresponding arrows, a first transition effect template Tr01 involves removing the virtual current audio source S0 from the scene in the rightward direction; meanwhile, but not necessarily in synchronicity, a virtual first alternative audio source S1 enters the scene from the right. In a second template Tr02, the virtual current audio source S0 exits to the left, while a virtual second alternative audio source S2 enters from the left.
To be precise, the first and second transition effect templates contain information obtainable from simulating the motion of the virtual audio sources as described in figure 7b. Such simulation would include a computation, in accordance with a suitable acoustic model, of the waveforms obtained at the locations of the virtual listener's ears as a result of the superposition of the sound waves emitted by the mobile audio sources. The resulting waveforms are to be reproduced by physical audio sources (e.g., headphones) located approximately at the ear positions. The audio sources S0, S2 are therefore virtual in the sense that they exist in the framework of the simulation, while the headphones, which may be referred to as physical, exist in use situations where the second transition effect template Tr02 is used for providing a song transition. In this example, the acoustic model may preferably take into account the attenuation of a sound wave (as a function of distance), the phase difference between the two ear positions (as a function of their spacing and the celerity of the sound wave, and the ensuing time difference) and the Doppler shift (as a function of the velocity).
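The three ingredients of the acoustic model named above (distance attenuation, interaural time difference, Doppler shift) may be sketched as follows. The 1/r attenuation law and the geometry conventions are illustrative assumptions; an actual simulation module would be considerably richer.

```python
import math

# Sketch of the acoustic model: attenuation as a function of distance,
# per-ear propagation delay (whose difference between the ears yields the
# phase difference), and the Doppler frequency shift.
SPEED_OF_SOUND = 343.0  # m/s, assumed value in air

def ear_gain(source_pos, ear_pos):
    """1/r amplitude attenuation with distance (assumed law)."""
    d = math.dist(source_pos, ear_pos)
    return 1.0 / max(d, 1e-6)

def ear_delay(source_pos, ear_pos):
    """Propagation delay to one ear position, in seconds."""
    return math.dist(source_pos, ear_pos) / SPEED_OF_SOUND

def doppler_factor(radial_velocity):
    """Frequency scaling for a source moving at radial_velocity
    (positive = receding) relative to a stationary listener."""
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity)
```

The phase difference of the text follows from evaluating `ear_delay` at the two ear positions and taking the difference.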
It is the second transition effect template that is illustrated in figure 7a. The absolute rate of increase or decrease grows gradually and is maximal at the end of the transition. Assuming that the power scale reflects a listener's perception of distance, the transition will suggest that the audio sources undergo a gradually accelerated movement. Figure 7a does not visualize the phase difference between different channels and/or segments, although such information may nevertheless be included in the transition effect template.
In respect of this and other transition effects obtainable by simulation, it is noted that a transition effect template may either be formulated in terms of geometric or kinematic control parameters to a simulation module (e.g., a spatial synthesis engine, such as an HRTF rendering engine) or in terms of channel power distributions, phase difference data and other pre-calculated information resulting from such simulation. Irrespective of the approach, the information in the transition effect template itself is independent of the audio signals between which transition is to take place. In the first approach, the simulation (which may be implemented in software and/or hardware) is to be executed on every occasion where a transition has been requested, using as input these control parameters and the concerned audio signals. According to the second approach, the simulation module is necessary only at the design stage of the transition effect template, which thus contains parameters intended to control a mixing module or the like.
Figure 8 shows an acoustic configuration by which further simulation-based transition effect templates may be obtained. More precisely, the figure shows an audio source 803 adapted to reproduce an entry or exit segment and movable relative to a listener 899 and walls 801, 802 for influencing the reverberation characteristics. A first, semi-closed space is defined by the first set of walls 801, which are provided with an acoustically damping lining. Thus, the first space will be characterized by a dry impulse response. A second, semi-closed space is defined by the second set of walls 802, which are harder than the first set of walls 801 and also enclose a larger volume. The reverberation in the second space will therefore have a longer response time and slower decay. Outside each of the first and second spaces, there remains a third space, which is void of reflective surfaces apart from the walls 801, 802 and which will therefore be more or less reverberation-less. In one embodiment of the invention, this acoustic 'landscape' is input to a simulation module for deriving the waveforms resulting at ear positions of a listener when the audio source 803 is moved along the dashed arrow through the different reverberating spaces. A listener will hear a variable degree of reverberation being applied to the audio signal reproduced by the audio source 803, which (s)he may associate with the disappearance of the audio source 803 and hence, with the end of the playback of the corresponding audio signal. It has been noted that a gradual change in the ratio between a dry (direct) audio component and a wet (singly or multiply reflected) audio component is generally associated with movement or change in distance between audio source and listener.
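The gradual change in wet-to-dry ratio described above may be sketched as a simple crossfade between a dry and a wet (reverberated) component; the linear dependence on the distance travelled is an illustrative assumption.

```python
# Sketch: variable degree of reverberation as the source of figure 8
# moves through spaces with different impulse responses.
def mix_wet_dry(dry, wet, wet_ratio):
    """wet_ratio in [0, 1]; 0 = fully dry (direct) component,
    1 = fully wet (reflected) component."""
    return (1.0 - wet_ratio) * dry + wet_ratio * wet

def wet_ratio_along_path(position, path_length):
    """Illustrative assumption: reverberation increases linearly with
    the distance travelled along the dashed arrow."""
    return min(1.0, max(0.0, position / path_length))
```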
Turning to the second aspect of the invention, figure 4 illustrates how time markers delineating sections of audio signals can be used to enable efficient browsing between the signals by directly jumping to a characteristic portion (audio thumbnail) of a new signal. The figure shows three music signals S1, S2, S3, which have been encoded together with (or have been associated with) time markers in metadata which indicate the locations of choruses (R). An audio player (not shown) currently plays the second audio signal S2 at a point indicated by the triangular play symbol. A user can control the audio player so that it switches to an alternative signal and begins playing this. In the present example, the user can select a first signal S1 (transition A1) or a third signal S3 (transition A2) as alternatives to the currently playing one. The audio player is adapted to begin playback approximately at the beginning of the first chorus section (R) of the selected alternative signal. As will be shown in more detail in figure 5, this may include playing a transition in which an exit segment (extracted from the currently playing signal S2) and an entry segment (extracted from the alternative signal) are mixed and wherein an initial or final endpoint of the entry segment coincides with or is related to a time marker indicating the beginning of the first chorus section of the entry segment.
Figure 5 shows an instance of a transition A2 from the second music signal S2 to the third signal. Unlike in figure 4, the music signals have been synchronized in time by laterally moving the bars symbolizing the signals, so that synchronous points in the two segments are located side by side, one directly above the other. Further, the exit segment S2-out and the entry segment S3-in have been indicated by braces. The final (right) endpoint of the entry segment S3-in coincides with the beginning of the first chorus of the third music signal S3. This means, after the transition has been accomplished, that playback of the third music signal S3 will be continued from its first chorus. A transition effect template applying an entry segment extraction of this type may be advantageously combined with a conventional fade-in type of channel power evolution with respect to time, such as the one shown in figure 1. In a template where the entry segment is played at audible power from an early point in time, one may instead synchronize the initial endpoint of the segment with the beginning of the chorus.
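The extraction of an entry segment whose final endpoint coincides with a chorus marker may be sketched as follows; the `(time, label)` metadata representation and the label "R" for choruses follow figure 4, while the function names are illustrative.

```python
# Sketch: extract an entry segment whose final (right) endpoint coincides
# with a time marker, e.g., the beginning of the first chorus (figure 5).
def first_chorus_marker(markers):
    """markers: list of (time_in_seconds, label) pairs from metadata;
    choruses are labelled "R" as in figure 4."""
    return min(t for t, label in markers if label == "R")

def entry_segment_ending_at(marker_time, segment_duration):
    """Entry segment of the given duration ending exactly at the marker,
    so that playback then continues from the chorus."""
    start = max(0.0, marker_time - segment_duration)
    return (start, marker_time)
```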
It is also envisaged to use time markers for synchronization with points in an entry segment that are not endpoints. As one example, an entry segment may be extracted in such manner that a time marker is located a predefined time interval Δ from its initial endpoint, this interval being equal to the duration of a previously obtained (e.g., recorded) segment which is to be superposed on the initial portion of the entry and exit segments by mixing. The superposed previously obtained segment may then function as an introduction to the most characteristic portion of the alternative audio signal. The term "synchronized" is intended to cover such a segment extraction procedure.
The idea of synchronizing segment endpoints with time markers is equally applicable to exit segments. This may be used to enable deferred switching, wherein playback of the currently playing signal is continued up to the end of the current section, which may be a song section, a spoken news item, an advertisement or the like.
There are known methods for automatically detecting the locations of beats in musical content. Transitions between musical signals may be further improved by taking beat points into account in addition to time markers delineating sections. For example, while sections in Western music may generally be identified in terms of bars, time markers having been derived using statistical methods are not necessarily aligned with the bar lines. By extracting entry and/or exit segments beginning or ending at a full bar, the transitions can be made more rhythmical.
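Aligning a statistically derived time marker with the nearest bar line may be sketched as follows; the 4/4 meter and the representation of detected beats as a plain list of times are illustrative assumptions.

```python
# Sketch: snap a time marker (which may not lie on a bar line) to the
# nearest full bar, given detected beat times. Assumes a constant
# number of beats per bar (4/4 here) and that the first beat starts a bar.
def snap_to_bar(marker, beat_times, beats_per_bar=4):
    bar_lines = beat_times[::beats_per_bar]  # every bar's first beat
    return min(bar_lines, key=lambda b: abs(b - marker))
```

Extracting entry and exit segments whose endpoints are such snapped markers yields the more rhythmical transitions described above.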
A process in accordance with the second aspect of this invention is illustrated by the flowchart in figure 11. Starting from point 1110, the process retrieves time markers from metadata, either from an audio file or bitstream or by contacting an external database, which constitutes a first step 1120. At least the alternative audio signal is associated with metadata containing time markers. In a second step 1130, the method extracts an exit segment and an entry segment from the current and alternative audio signals, respectively, wherein an endpoint of at least one segment is synchronized with a time marker. In a third step 1140, a transition is played during which the exit segment and the entry segment are mixed in accordance with a transition effect template. After this, in a fourth step 1150, the alternative audio signal is played from a point corresponding to the end (i.e., final endpoint) of the entry segment. The process ends at point 1190.
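The four steps of figure 11 may be sketched end to end on toy sample lists; the fixed four-sample segment length and additive mixing are illustrative simplifications, not part of the application.

```python
# Sketch of the process of figure 11 on sample lists.
# markers[0] is assumed to be the sample index where the chorus begins.
def transition_with_markers(current, alternative, markers):
    chorus = markers[0]                              # step 1120: retrieve marker
    exit_seg = current[-4:]                          # step 1130: extract exit
    entry_seg = alternative[max(0, chorus - 4):chorus]  # ... and entry segment
    # step 1140: play a transition mixing the two (simple addition here)
    transition = [a + b for a, b in zip(exit_seg, entry_seg)]
    remainder = alternative[chorus:]                 # step 1150: play from the
    return transition, remainder                     # end of the entry segment
```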
Figure 12 shows a decoder 1200 adapted to receive a first and a second audio signal S0, S1, each of which is associated with metadata (META) defining time markers. In practical circumstances, the decoder 1200 may be adapted to receive a first and second audio data bitstream containing such metadata. Using a (physical) playback audio source 1206, a decoding unit 1205 is operable to play either the first or second audio signal or a transition obtained by mixing segments extracted from these. In the example, this is symbolically indicated by a three-position switch 1204 operable to supply the decoding unit 1205 with either the first S0 or second S1 audio data signal or a transition signal obtained as follows. The first and second audio signals are fed in parallel to the switch 1204, to a time marker extractor 1201 and a segment extractor 1202. The time marker extractor 1201 retrieves the time markers and supplies a signal indicative of these to the segment extractor 1202. The segment extractor 1202 is then able to synchronize one or more time instants in a signal, which are indicated by the time markers, with one or more endpoints of an entry or exit segment. The segment extractor 1202 outputs an entry segment S1-in and an exit segment S0-out to a mixer 1203, which passes this on to the upstream side of the switch 1204, making it available for playback. The output signal obtained at the downstream side of the switch 1204 may for instance be supplied to a local or remote playback source, or may be recorded for later playback.
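The three-position switch 1204 may be sketched as a selector between the two signals and the mixed transition; the position labels and the pluggable `mixer` callable are illustrative assumptions.

```python
# Sketch of the three-position switch 1204 of figure 12: the output is
# either the first signal, the second signal, or the mixed transition.
def decoder_output(s0, s1, position, mixer):
    if position == "S0":
        return s0
    if position == "S1":
        return s1
    # "transition" position: the mixer 1203 combines the exit and entry
    # segments (here the whole toy signals, for brevity).
    return mixer(s0, s1)
```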
The time marker extractor 1201 may retrieve the time markers by extracting them from the metadata encoded together with the audio data. The metadata may also be fetched remotely from an external database which hosts the metadata and is accessible via a communications network. A well-known example of such an external metadata database is Gracenote's CD Database. This may proceed in accordance with the teachings of the applicant's co-pending Provisional U.S. Patent Application No. 61/252,788 filed on 19 October 2009. Pages 16-25 in this related application are of particular relevance for understanding the present invention, and protection is sought also for combinations with features disclosed therein.
Alternatively, the time marker extractor 1201 may be adapted to determine the time markers (or equivalently, the locations of the sections of the signal) on the basis of the audio signal directly. Figure 13 shows a possible internal structure of the time marker extractor 1201 in a simplified example embodiment wherein it is adapted to determine the sections in one single audio signal, and therefore has one input only. Reference is again made to the applicant's co-pending Provisional U.S. Patent
Application No. 61/428,554 filed on 30 December 2010, and in particular to sections 2, 6, 7, 8 and 10, which describe features that can be advantageously combined with the embodiments disclosed herein. In accordance with the teachings of this related application, such a time marker extractor comprises a feature-extraction component 1301 which outputs a signal indicating features from audio data to each of a repetition detection component 1302, a scene-change detection component (which may be embodied as a portion of a more general refinement component) 1303 and a ranking component 1304. In turn, the repetition detection component 1302, the scene-change detection component 1303 and the ranking component 1304 are communicatively coupled. The feature-extraction component 1301 may extract features of various types from media data such as a song. The repetition detection component 1302 may find time-wise sections of the media data that are repetitive, for example, based on certain characteristics of the media data such as the melody, harmonies, lyrics, or timbre of the song in these sections as represented in the extracted features of the media data. In some possible embodiments, the repetitive segments may be subjected to a refinement procedure performed by the scene-change detection component 1303, which finds the correct start and end time points that delineate segments encompassing selected repetitive sections. These correct start and end time points may comprise beginning and ending scene-change points of one or more scenes possessing distinct characteristics in the media data. A pair of a beginning scene-change point and an ending scene-change point may delineate a candidate representative segment. A ranking algorithm performed by the ranking component 1304 may be applied for the purpose of selecting a representative segment from all the candidate representative segments. In a particular embodiment, the representative segment selected may be the chorus of the song.
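The ranking step performed by component 1304 may be sketched as a composite score over candidate segments; the particular score components (repetition count, energy, duration) and their weights are illustrative assumptions, not values from the application.

```python
# Sketch of the ranking component 1304: select one representative segment
# (e.g., the chorus) from candidate segments by a composite score.
# The weights 2.0 / 1.0 / 0.1 are illustrative assumptions.
def select_representative(candidates):
    """candidates: list of dicts with 'start', 'end', 'repetitions'
    and 'energy' keys; returns the highest-ranked candidate."""
    def score(c):
        duration = c["end"] - c["start"]
        return 2.0 * c["repetitions"] + 1.0 * c["energy"] + 0.1 * duration
    return max(candidates, key=score)
```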
It is noted that the decoder 902 shown in figure 9, which has so far been discussed primarily in connection with the first aspect, may have an internal structure similar to the decoder 1200 in figure 12, which the skilled person may therefore rely upon for practicing the first aspect of the invention as well. When used within the first aspect, the time marker extractor 1201 of the decoder 1200 may be inactive or even absent.
Further embodiments of the present invention will become apparent to a person skilled in the art after studying the description above. Even though the present description and drawings disclose embodiments and examples, the invention is not restricted to these specific examples. Numerous modifications and variations can be made without departing from the scope of the present invention, which is defined by the accompanying claims. Any reference signs appearing in the claims are not to be understood as limiting their scope.
The systems and methods disclosed hereinabove may be implemented as software, firmware, hardware or a combination thereof. In a hardware implementation, the division of tasks between functional units referred to in the above description does not necessarily correspond to the division into physical units; to the contrary, one physical component may have multiple functionalities, and one task may be carried out by several physical components in cooperation. Certain components or all components may be implemented as software executed by a digital signal processor or microprocessor, or be implemented as hardware or as an application-specific integrated circuit. Such software may be distributed on computer readable media, which may comprise computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to a person skilled in the art, the term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Further, it is well known to the skilled person that communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.

Claims

1. A method of providing directive transitions between audio signals, comprising the steps of:
associating a first browsing direction (A1), for transitioning from a current audio signal (S0) to a first alternative audio signal (S1), with a first transition effect template;
associating a second browsing direction (A2), for transitioning from a current audio signal to a second alternative audio signal (S2), with a second transition effect template, which is perceptually different from the first transition effect template; and playing, in response to a browsing action in one of said browsing directions, a transition in which an exit segment (S0-out1, S0-out2), extracted from the current audio signal, and an entry segment (S1-in, S2-in), extracted from the alternative audio signal, are mixed in accordance with the associated transition effect template; and
subsequently playing the alternative audio signal from the end of the entry segment.
2. The method of claim 1, wherein:
said steps of playing include reproducing the entry and exit segments stereophonically using at least two channels;
the first transition effect template includes using a first exit channel power distribution for the exit segment (S0-out1) and a first entry channel power distribution for the entry segment (S1-in);
the second transition effect template includes using a second exit channel power distribution for the exit segment (S0-out2) and a second entry channel power distribution for the entry segment (S2-in);
and at least one of the following holds:
(a) the exit channel power distributions differ by a permutation of at least two channels;
(b) the entry channel power distributions differ by a permutation of at least two channels.
3. The method of claim 1, wherein: said steps of playing include reproducing the entry and exit segments stereophonically using at least two channels;
the first transition effect template is obtainable by moving an audio source reproducing one of the segments in a first spatial direction relative to a listener;
the second transition effect template is obtainable by moving an audio source reproducing one of the segments in a second spatial direction relative to a listener; and
the first and second directions are distinct.
4. The method of any of the preceding claims, further comprising the step of associating a third browsing direction (A3), for transitioning from a current audio signal to a third alternative audio signal (S3), with a third transition effect template, which is perceptually different from the first and second transition effect templates.
5. The method of any of the preceding claims, wherein at least one transition effect template is obtainable by spatially moving an audio source reproducing said entry segment and/or an audio source reproducing said exit segment relative to a listener.
6. The method of claim 5, wherein said transition effect template is obtainable by moving at least one of the audio sources into or out of a reverberating space.
7. The method of claim 6, wherein said transition effect template includes a gradual change in wet-to-dry ratio.
8. The method of one of claims 5 to 7, wherein said transition effect template includes a simulated Doppler shift of at least one of the segments.
9. The method of any of the preceding claims, wherein:
said step of playing includes reproducing the entry and exit segments stereophonically using at least two channels; and
said transition effect template includes using an exit channel power distribution for the exit segment and using an entry channel power distribution, different from the exit channel power distribution, for the entry segment.
10. The method of claim 9, wherein at least one channel power distribution is time-variable and obtainable by moving an audio source reproducing the concerned segment in a spatial direction relative to a listener.
11. The method of any of the preceding claims, wherein:
said step of playing includes reproducing the entry and exit segments stereophonically in at least two channels; and
said transition effect template includes a change in stereo width for one of the segments.
12. The method of any of the preceding claims, wherein said transition effect template includes mixing the entry and exit segments with a previously obtained audio segment.
13. The method of any of the preceding claims, further comprising obtaining characteristics of a playback configuration, such as a distance between audio sources reproducing different channels, and adapting said transition effect templates accordingly.
14. The method of any of the preceding claims, further comprising obtaining characteristics of at least one of the audio signals, such as tempo, beatiness and beat strength, and adapting said transition effect templates accordingly, such as by modifying a duration of the transition.
15. A computer-readable medium storing computer-executable instructions for performing the method set forth in any of the preceding claims.
16. A decoder (902) for outputting audio signals by decoding audio entries in an ordered database (901) permitting browsing in at least a first and a second browsing direction (A1, A2), wherein:
the first and second browsing directions are respectively associated with perceptually distinct first and second transition effect templates; and the decoder is configured to react to a browsing action in one of said browsing directions by:
initially outputting a transition signal segment comprising an exit segment (S0-out1, S0-out2), decoded from a current audio entry, and an entry segment (S1-in, S2-in), decoded from an alternative audio entry located in the concerned browsing direction in relation to the current audio entry, mixed in accordance with the associated transition effect template; and
subsequently outputting an alternative signal decoded from the alternative audio entry.
17. An audio player (904) comprising:
an ordered database (901) presenting audio entries and permitting browsing in at least a first and a second browsing direction (A1, A2);
an audio source (903) for reproducing an audio signal; and
the decoder of claim 16, configured to decode entries from the database and to output resulting audio signals to the audio source.
18. A method of providing a transition between a current and an alternative audio signal decoded from audio data,
wherein the audio data include time markers encoded as audio metadata and indicating at least one section of the respective audio signal;
the method comprising the steps of:
retrieving time marking information in the audio data;
extracting an exit segment (S0-out1) from the current audio signal and an entry segment (S1-in) from the alternative audio signal, wherein an endpoint of at least one of the segments is synchronized with a time marker;
playing a transition in which the exit segment and the entry segment are mixed in accordance with a transition effect template; and
subsequently playing the alternative audio signal from the end of the entry segment.
19. The method of claim 18, wherein the endpoint coincides with a time marker.
20. The method of claim 19, wherein the time marker refers to a beginning of a section and the endpoint is located at a time interval before the time marker.
21. The method of claim 18, wherein the time marker refers to an end of a section and the endpoint is located at a time interval after the time marker.
22. The method of any of claims 18-21, wherein the time markers are endpoints of representative segments extracted by:
assigning a plurality of ranking scores to a plurality of candidate representative segments, each individual candidate representative segment comprising at least one scene in one or more statistical patterns in media features of the audio data based on one or more types of features extractable from the audio data, each individual ranking score in the plurality of ranking scores being assigned to an individual candidate representative segment; and
selecting from the candidate representative segments, based on said plurality of ranking scores, a representative segment.
23. The method of claim 22, wherein each individual ranking score in said plurality of ranking scores comprises at least one component score based on one or more of: duration, a measure for overlapping between different candidate representative segments, time-wise positions of candidate representative segments in the media data, chroma distance, MFCC, spectral contrast, spectral centroid, spectral bandwidth, spectral roll-off, spectral flatness, presence of singing voice, absence of singing voice, one or more rhythm patterns, energy, one or more stereo parameters, perceptual entropy, co-modulation, dynamics.
24. The method of any of claims 18-23, wherein the time markers are encoded in a header section of the audio data.
25. The method of any of claims 18-24, wherein each audio signal is encoded as an audio data bitstream and the time markers are encoded in multiple sections of a bitstream, preferably occurring at a predetermined occurrence rate therein.
26. The method of any of claims 18-25, wherein the audio signals are encoded as one or more of: Advanced Audio Coding (AAC) bitstreams, High-Efficiency AAC bitstreams, MPEG-1/2 Audio Layer 3 (MP3) bitstreams, Dolby Digital (AC3) bitstreams, Dolby Digital Plus bitstreams, Dolby Pulse bitstreams, or Dolby TrueHD bitstreams.
27. The method of any of claims 18-26, wherein at least one transition effect template is obtainable by spatially moving an audio source reproducing said entry segment and/or an audio source reproducing said exit segment relative to a listener.
28. The method of claim 27, wherein said transition effect template is obtainable by moving at least one of the audio sources into or out of a reverberating space.
29. The method of claim 28, wherein said transition effect template includes a gradual change in wet-to-dry ratio.
30. The method of one of claims 27 to 29, wherein said transition effect template includes a simulated Doppler shift of at least one of the segments.
31. The method of any of claims 18-30, wherein:
said step of playing includes reproducing the entry and exit segments stereophonically using at least two channels; and
said transition effect template includes using an exit channel power distribution for the exit segment and using an entry channel power distribution, different from the exit channel power distribution, for the entry segment.
32. The method of claim 31, wherein at least one channel power distribution is time-variable and obtainable by moving an audio source reproducing the concerned segment in a spatial direction relative to a listener.
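To illustrate claims 31-32 (this is a sketch under simplifying assumptions, not the claimed method): an equal-power crossfade in which the exit segment sweeps from left to right while the entry segment sweeps the opposite way, giving each segment its own time-variable channel power distribution:

```python
import numpy as np

def pan_gains(theta):
    """Equal-power pan law: theta in [0, pi/2], 0 = hard left, pi/2 = hard right."""
    return np.cos(theta), np.sin(theta)

def transition(exit_seg, entry_seg):
    """Mix exit and entry segments: exit fades out while panning left to right,
    entry fades in while panning right to left. Returns a (2, n) stereo array."""
    n = min(len(exit_seg), len(entry_seg))
    t = np.linspace(0.0, 1.0, n)
    fade_out = np.cos(t * np.pi / 2)           # equal-power fade curves
    fade_in = np.sin(t * np.pi / 2)
    lx, rx = pan_gains(t * np.pi / 2)          # exit sweeps left -> right
    le, re = pan_gains((1 - t) * np.pi / 2)    # entry sweeps right -> left
    left = fade_out * lx * exit_seg[:n] + fade_in * le * entry_seg[:n]
    right = fade_out * rx * exit_seg[:n] + fade_in * re * entry_seg[:n]
    return np.stack([left, right])
```

The cosine/sine pairing keeps the summed power of each segment roughly constant while its channel distribution changes.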
33. The method of any of claims 18-32, wherein:
said step of playing includes reproducing the entry and exit segments stereophonically in at least two channels; and
said transition effect template includes a change in stereo width for one of the segments.
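The stereo width change of claim 33 is commonly realized with a mid/side decomposition; as an illustrative sketch (not necessarily the claimed realization), scaling the side signal controls the width:

```python
import numpy as np

def set_stereo_width(left, right, width):
    """Mid/side width control: width=0 collapses to mono, width=1 leaves the
    signal unchanged, width>1 exaggerates the side (difference) signal."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width
    return mid + side, mid - side
```

Sweeping `width` from 1 toward 0 during the exit segment, for example, would narrow it while the entry segment opens up.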
34. The method of any of claims 18-33, wherein said transition effect template includes mixing the entry and exit segments with a previously obtained audio segment.
35. The method of any of claims 18-34, further comprising obtaining characteristics of a playback configuration, such as a distance between audio sources reproducing different channels, and adapting said transition effect templates accordingly.
36. The method of any of claims 18-35, further comprising obtaining characteristics of at least one of the audio signals, such as tempo, beatiness and beat strength, and adapting said transition effect templates accordingly, such as by modifying a duration of the transition.
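One simple way to adapt the transition duration to an obtained tempo, as contemplated in claim 36, is to make the transition last a whole number of beats, clamped to sensible bounds; the beat count and bounds below are illustrative assumptions:

```python
def transition_duration(tempo_bpm, beats=8, min_s=1.0, max_s=10.0):
    """Pick a transition length spanning `beats` beats at the given tempo,
    clamped to [min_s, max_s] seconds."""
    dur = beats * 60.0 / tempo_bpm
    return max(min_s, min(max_s, dur))
```

At 120 BPM an 8-beat transition lasts 4 seconds; at very fast tempos the lower clamp keeps the effect audible.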
37. A computer-readable medium storing computer-executable instructions for performing the method set forth in any of claims 18-36.
38. A decoder (1200) for outputting audio signals by decoding audio data, comprising:
a time marker extractor (1201) for retrieving time markers indicating at least one section of an audio signal;
a segment extractor (1202) for extracting an exit segment (S0-out1) from a current audio signal and an entry segment (S1-in) from an alternative audio signal, wherein the segment extractor is configured to synchronize an endpoint of at least one of the segments with a time marker;
a decoding unit (1205) operable
i) to play a transition, in which the exit segment and the entry segment are mixed in accordance with a transition effect template; and subsequently
ii) to play the alternative audio signal from the end of the entry segment.
39. The decoder of claim 38, wherein the time marker extractor (1201) comprises:
a feature-extraction component (1301) for extracting features from audio data;
a repetition detection component (1302) for finding time-wise segments of the audio data that are repetitive;
a scene-change detection component (1303) for finding endpoints that delineate segments encompassing selected repetitive sections; and
a ranking component (1304) for selecting a representative segment from the candidate representative segments,
wherein the time marker extractor (1201) is adapted to assign a time marker at least to one endpoint of said representative segment.
40. The decoder of claim 38, wherein the time marker extractor is adapted to extract time markers encoded as audio metadata within the audio data.
41. An audio player (904) comprising:
an ordered database (901 ) for storing audio data including time markers encoded as audio metadata and indicating at least one section of the respective audio signal;
an audio source (903) for reproducing an audio signal; and
the decoder of any of claims 38-40, configured to decode entries from the database and to output resulting audio signals to the audio source.
EP11808580.2A 2010-12-30 2011-12-15 Song transition effects for browsing Not-in-force EP2659483B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201061428554P 2010-12-30 2010-12-30
US201161470604P 2011-04-01 2011-04-01
PCT/EP2011/006346 WO2012089313A1 (en) 2010-12-30 2011-12-15 Song transition effects for browsing

Publications (2)

Publication Number Publication Date
EP2659483A1 true EP2659483A1 (en) 2013-11-06
EP2659483B1 EP2659483B1 (en) 2015-11-25

Family

ID=45491507

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11808580.2A Not-in-force EP2659483B1 (en) 2010-12-30 2011-12-15 Song transition effects for browsing

Country Status (3)

Country Link
US (1) US9326082B2 (en)
EP (1) EP2659483B1 (en)
WO (1) WO2012089313A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6019803B2 (en) * 2012-06-26 2016-11-02 ヤマハ株式会社 Automatic performance device and program
JP5462330B2 (en) * 2012-08-17 2014-04-02 株式会社スクウェア・エニックス Video game processing apparatus and video game processing program
JP6051991B2 (en) * 2013-03-21 2016-12-27 富士通株式会社 Signal processing apparatus, signal processing method, and signal processing program
JP2014182748A (en) * 2013-03-21 2014-09-29 Fujitsu Ltd Signal processing apparatus, signal processing method, and signal processing program
GB201312490D0 (en) * 2013-07-12 2013-08-28 Calrec Audio Ltd Mixer control apparatus and method
US9411882B2 (en) 2013-07-22 2016-08-09 Dolby Laboratories Licensing Corporation Interactive audio content generation, delivery, playback and sharing
US10373611B2 (en) 2014-01-03 2019-08-06 Gracenote, Inc. Modification of electronic system operation based on acoustic ambience classification
EP3108474A1 (en) * 2014-02-18 2016-12-28 Dolby International AB Estimating a tempo metric from an audio bit-stream
US10679407B2 (en) 2014-06-27 2020-06-09 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for modeling interactive diffuse reflections and higher-order diffraction in virtual environment scenes
US9977644B2 (en) * 2014-07-29 2018-05-22 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for conducting interactive sound propagation and rendering for a plurality of sound sources in a virtual environment scene
US10101960B2 (en) * 2015-05-19 2018-10-16 Spotify Ab System for managing transitions between media content items
GB2581032B (en) * 2015-06-22 2020-11-04 Time Machine Capital Ltd System and method for onset detection in a digital signal
FR3038440A1 (en) * 2015-07-02 2017-01-06 Soclip! METHOD OF EXTRACTING AND ASSEMBLING SONGS FROM MUSICAL RECORDINGS
US10248744B2 (en) 2017-02-16 2019-04-02 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for acoustic classification and optimization for multi-modal rendering of real-world scenes
JP7375002B2 (en) * 2019-05-14 2023-11-07 AlphaTheta株式会社 Sound equipment and music playback program
JP7432225B2 (en) 2020-01-22 2024-02-16 クレプシードラ株式会社 Sound playback recording device and program
CN115700870A (en) * 2021-07-31 2023-02-07 华为技术有限公司 Audio data processing method and device
CN114818732A (en) * 2022-05-19 2022-07-29 北京百度网讯科技有限公司 Text content evaluation method, related device and computer program product

Family Cites Families (27)

Publication number Priority date Publication date Assignee Title
JP2927229B2 (en) 1996-01-23 1999-07-28 ヤマハ株式会社 Medley playing equipment
EP1274069B1 (en) 2001-06-08 2013-01-23 Sony France S.A. Automatic music continuation method and device
US6933432B2 (en) * 2002-03-28 2005-08-23 Koninklijke Philips Electronics N.V. Media player with “DJ” mode
US7189913B2 (en) 2003-04-04 2007-03-13 Apple Computer, Inc. Method and apparatus for time compression and expansion of audio data with dynamic tempo change during playback
US7424117B2 (en) * 2003-08-25 2008-09-09 Magix Ag System and method for generating sound transitions in a surround environment
US7081582B2 (en) 2004-06-30 2006-07-25 Microsoft Corporation System and method for aligning and mixing songs of arbitrary genres
US7571016B2 (en) 2005-09-08 2009-08-04 Microsoft Corporation Crossfade of media playback between different media processes
US8239766B2 (en) 2005-09-27 2012-08-07 Qualcomm Incorporated Multimedia coding techniques for transitional effects
KR20080066007A (en) * 2005-09-30 2008-07-15 코닌클리케 필립스 일렉트로닉스 엔.브이. Method and apparatus for processing audio for playback
US7790974B2 (en) 2006-05-01 2010-09-07 Microsoft Corporation Metadata-based song creation and editing
US7842874B2 (en) * 2006-06-15 2010-11-30 Massachusetts Institute Of Technology Creating music by concatenative synthesis
US8564543B2 (en) * 2006-09-11 2013-10-22 Apple Inc. Media player with imaged based browsing
US20080086687A1 (en) * 2006-10-06 2008-04-10 Ryutaro Sakai Graphical User Interface For Audio-Visual Browsing
US8774951B2 (en) 2006-12-18 2014-07-08 Apple Inc. System and method for enhanced media playback
US7888582B2 (en) * 2007-02-08 2011-02-15 Kaleidescape, Inc. Sound sequences with transitions and playlists
US8280539B2 (en) 2007-04-06 2012-10-02 The Echo Nest Corporation Method and apparatus for automatically segueing between audio tracks
WO2008129443A2 (en) 2007-04-20 2008-10-30 Koninklijke Philips Electronics N.V. Audio playing system and method
JP5702599B2 (en) * 2007-05-22 2015-04-15 コーニンクレッカ フィリップス エヌ ヴェ Device and method for processing audio data
US7525037B2 (en) 2007-06-25 2009-04-28 Sony Ericsson Mobile Communications Ab System and method for automatically beat mixing a plurality of songs using an electronic equipment
US8269093B2 (en) * 2007-08-21 2012-09-18 Apple Inc. Method for creating a beat-synchronized media mix
US9014831B2 (en) * 2008-04-15 2015-04-21 Cassanova Group, Llc Server side audio file beat mixing
GB2464545A (en) 2008-10-21 2010-04-28 Jason Burrage Providing and controlling a music playlist, via a communications network such as the internet
US8626322B2 (en) 2008-12-30 2014-01-07 Apple Inc. Multimedia display based on audio and visual complexity
US9105300B2 (en) 2009-10-19 2015-08-11 Dolby International Ab Metadata time marking information for indicating a section of an audio object
US20110231426A1 (en) * 2010-03-22 2011-09-22 Microsoft Corporation Song transition metadata
EP2659482B1 (en) 2010-12-30 2015-12-09 Dolby Laboratories Licensing Corporation Ranking representative segments in media data
US20130290818A1 (en) * 2012-04-27 2013-10-31 Nokia Corporation Method and apparatus for switching between presentations of two media items

Non-Patent Citations (1)

Title
See references of WO2012089313A1 *

Also Published As

Publication number Publication date
EP2659483B1 (en) 2015-11-25
WO2012089313A1 (en) 2012-07-05
US20130282388A1 (en) 2013-10-24
US9326082B2 (en) 2016-04-26

Legal Events

Code Description
PUAI Public reference made under article 153(3) EPC to a published international application that has entered the European phase (ORIGINAL CODE: 0009012)
17P Request for examination filed (effective date: 20130730)
AK Designated contracting states; kind code of ref document: A1; designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent (ORIGINAL CODE: EPIDOSNIGR1)
RIC1 Information provided on IPC code assigned before grant: G11B 17/038 20060101ALI20150624BHEP; G10H 1/00 20060101AFI20150624BHEP; H04S 7/00 20060101ALI20150624BHEP; H04S 1/00 20060101ALI20150624BHEP; G11B 27/038 20060101ALI20150624BHEP; H04H 60/04 20080101ALI20150624BHEP
INTG Intention to grant announced (effective date: 20150713)
GRAS Grant fee paid (ORIGINAL CODE: EPIDOSNIGR3)
GRAA (Expected) grant (ORIGINAL CODE: 0009210)
AK Designated contracting states; kind code of ref document: B1; designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
REG Reference to a national code: GB, legal event code FG4D
REG Reference to a national code: CH, legal event code EP
REG Reference to a national code: AT, legal event code REF; ref document number 762948, kind code T (effective date: 20151215)
REG Reference to a national code: IE, legal event code FG4D
REG Reference to a national code: FR, legal event code PLFP (year of fee payment: 5)
REG Reference to a national code: DE, legal event code R096; ref document number 602011021693
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]: GB (payment date: 20151229, year of fee payment: 5)
PGFP Annual fee paid to national office: FR (payment date: 20151217, year of fee payment: 5)
REG Reference to a national code: LT, legal event code MG4D
REG Reference to a national code: NL, legal event code MP (effective date: 20160225)
REG Reference to a national code: AT, legal event code MK05; ref document number 762948, kind code T (effective date: 20151125)
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo] because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: LT, ES, HR, NL (effective date: 20151125); NO (effective date: 20160225); IS (effective date: 20160325)
PGFP Annual fee paid to national office: DE (payment date: 20151229, year of fee payment: 5)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: RS, LV, FI, SE, AT, PL (effective date: 20151125); PT (effective date: 20160325); GR (effective date: 20160226); BE, lapsed because of non-payment of due fees (effective date: 20151231)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: CZ, IT (effective date: 20151125)
REG Reference to a national code: CH, legal event code PL
REG Reference to a national code: DE, legal event code R097; ref document number 602011021693
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: SK, SM, RO, DK, EE (effective date: 20151125)
REG Reference to a national code: IE, legal event code MM4A
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: MC (effective date: 20151125)
PLBE No opposition filed within time limit (ORIGINAL CODE: 0009261)
STAA Information on the status of an ep patent application or granted ep patent: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT
PG25 Lapsed in a contracting state because of non-payment of due fees: IE (effective date: 20151215); LI, CH (effective date: 20151231)
26N No opposition filed (effective date: 20160826)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: SI (effective date: 20151125)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: BE (effective date: 20151125)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: HU, invalid ab initio (effective date: 20111215); BG (effective date: 20151125)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: CY (effective date: 20151125)
REG Reference to a national code: DE, legal event code R119; ref document number 602011021693
GBPC Gb: european patent ceased through non-payment of renewal fee (effective date: 20161215)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: MT (effective date: 20151125)
REG Reference to a national code: FR, legal event code ST (effective date: 20170831)
PG25 Lapsed in a contracting state because of non-payment of due fees: FR (effective date: 20170102)
PG25 Lapsed in a contracting state because of non-payment of due fees: GB (effective date: 20161215); DE (effective date: 20170701); LU (effective date: 20151215)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: MK (effective date: 20151125)
PG25 Lapsed in a contracting state because of failure to submit a translation or pay the fee: TR, AL (effective date: 20151125)