EP2099024B1 - Method for acoustic object-oriented analysis and note object-oriented processing of polyphonic sound recordings - Google Patents
- Publication number
- EP2099024B1 (application EP09001575.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- note
- objects
- frequency
- sound
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/02—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
- G10H1/06—Circuits for establishing the harmonic content of tones, or other arrangements for changing the tone colour
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/066—Musical analysis for pitch analysis as part of wider processing for musical purposes, e.g. transcription, musical performance evaluation; Pitch recognition, e.g. in polyphonic sounds; Estimation or use of missing fundamental
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/091—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
- G10H2220/101—GUI for graphical creation, edition or control of musical data or parameters
- G10H2220/116—GUI for graphical editing of sound parameters or waveforms, e.g. by graphical interactive control of timbre, partials or envelope
Definitions
- The present invention relates to a method for sound-object-oriented analysis and for note-object-oriented processing of polyphonic sound recordings according to claim 1.
- Chord recognition and approaches for identifying individual notes are already known in the prior art, e.g. for printing sheet music (the WIDI software) or for automatically recognizing titles (DE 10 2004 049 477 A1).
- The method according to the invention automatically identifies musical objects, in the sense of notes, in recorded audio material. In a second step these objects are sonically extracted from the overall recording and thereby made manipulable without noticeable loss of sound quality or audible distortion.
- This sonic extraction is not found in the prior art; it means that a part of the overall sound is assigned to an identified note object.
- The original recording can even be changed musically in such a way that the notes change their relative position to each other in pitch and time while the original sound impression is retained.
- The method further provides that the identified note objects are made available to the user for editing. The user can then change one or several musical objects, e.g. in pitch.
- During sound reproduction, i.e. the resynthesis, the modified object is played together with the unchanged objects, or together with the total signal from which the changed object has been subtracted.
- The starting material can already be present in digital form or as an analog signal. An analog signal may then have to be digitized before analysis.
- The method can find various applications. Individual slips can be deliberately eliminated: if a pianist accidentally struck an extra note, it can be removed in post-production. Another application is re-tuning, i.e. correcting a detuned guitar or an unclean string change. Equal-tempered recordings can be converted to just intonation. Recordings can be re-harmonized, for example turning a guitar riff from C major into F minor. So far, a chord could only be shifted in pitch as a whole; the harmonic relationships between its individual tones could not be changed. The possibilities extend even to recomposition, because the individual tones become accessible.
- The method according to the invention is aimed primarily at individual tracks recorded during music production. It is also applicable to finished mixed titles, to make them appear in a new musical garb.
- The method primarily refers to already recorded musical material, not to analysis and processing in "real time", i.e. at the moment of musical performance. This is because the method according to the invention presupposes a meaningful recognition of "notes" in the sense of closed objects in a certain context. Therefore, while the analysis can run in the background during an ongoing recording, it has to be based on an already recorded time span on the order of, e.g., a few seconds.
- The method is designed to find and distinguish individual pitches or pitch curves in the overall signal, not to separate individual sound sources. It is not about, e.g., separating the voices of one or more speakers in a room from a recording made there.
- Two notes played simultaneously at the same pitch by two different instruments are identified as a single object, just as the sound of many first violins in an orchestra playing the same note is identified as a single note.
- The notion of the note object used predominantly here emphasizes that "notes" in the sense of this invention need not be notes in the actual musical sense, although an identified note object may or may not correspond to one.
- Notes to be found need not have a pitch that is constant over time, like a piano tone; they can also have a pitch progression over time, like a vocal with vibrato and/or portamento. As long as such pitch progressions are consistent in themselves, the notes remain detectable as the signal is tracked over time. In this way, two tones whose pitch curves cross each other can be identified as two different notes.
- The procedure essentially consists of two main steps: a) identification of the individual objects involved in the overall sound, i.e. the notes and, where present, the event-like/percussive sound events; and b) sonic decomposition of the overall sound into the found individual objects, whose sum gives the overall sound, each of which can nevertheless be manipulated separately without affecting the sound of the other objects or of the overall sound in the sense of unwanted audible artifacts.
- In step b) the present invention differs from the prior art.
- The result of identification step a) can also be used on its own, without sound-decomposition step b), if only the recognition and presentation of the musical content matters and the sound itself is not to be touched. This can be the case when a musical notation is to be created from an existing recording, or when the musical content is to be illustrated graphically in some other way. It may also serve to recognize and name the sequence of musical harmonies, e.g. in order to enrich the music with other instruments.
- Step a) also influences the quality of the sound decomposition in step b). If step a) only serves to determine the musical content as mentioned above, it usually suffices to determine that a note object of a certain pitch, for example the note C sharp, occurs at a certain time, and how long it sounds. If, on the other hand, the sonic decomposition in step b) is the goal, it is advantageous to evaluate as much information as possible about the course and parameters of the found note objects, such as the exact course of the pitch curve over time, the amplitude of the object and its change over time, the manner in which a note starts, the consistency of the mixture of its partials relative to notes of similar pitch in the same recording, etc. Depending on the demands, certain information can also be dispensed with.
- The main characteristic of the individual objects to be found in the audio material is that they have a consistent pitch or a consistent pitch progression. Furthermore, they should have a consistent progression of their individual partials. That is, a sound object to be found is assumed to consist of the course of a fundamental tone plus an essentially unlimited number of overtones, whose frequencies should be approximately integer multiples of the fundamental frequency. Furthermore, the course of the partials belonging to a sound object should not exhibit unmotivated jumps. These assumptions derive from the properties of the sounds produced by natural instruments; the method according to the invention may therefore reach its limits where music has been synthesized entirely arbitrarily.
- The objects to be identified in the audio material, as described above, may be called "sonic" objects; in the claims they are named note objects. Their main characteristic is to have a pitch or pitch curve over a perceptible duration, with a time-signal waveform that behaves essentially periodically or quasi-periodically. They are distinguished from the non-sonic, noisy objects. Event-like objects are a subset of the noisy objects.
- The event objects are characterized by a sudden increase of amplitude in the time signal, at which point the signal is not periodic. In addition, they usually decay quickly. Musically, such objects are produced by a sudden event, such as the striking or plucking of a string, or the hit on a percussion instrument such as a drum. They can be further distinguished by the method according to the invention: if an event-like object is immediately followed by the amplitude rise of a sonic object, it can be assumed that the event represents the attack of a note and can therefore be assigned to this note. In this case the sonic and the event-like object can be combined into a single object for presentation and handling, which is then likewise designated a note object. For sound generation in the resynthesis, i.e. in the sound reproduction after editing one or several note objects, it may however be useful for sonic reasons to process the two sub-objects separately.
- If the event-like object cannot be assigned to a subsequent sonic object, it can be assumed to be a purely percussive-rhythmic note without pitch meaning, such as a drum beat. It can be treated accordingly in the further processing.
- Noisy objects have in common with the event-like objects that they have no periodic sound component, but differ from them in that they neither start suddenly nor decay quickly; they can be extended in time.
- Such objects may be, e.g., sibilants of the consonants in the vocals, breathing noises, playing noises such as a finger sliding on a guitar string, or noise that does not belong to the actual music at all.
- Such noisy objects could, e.g., be made available to the user separately for further manipulation, or simply be treated wholesale as a "residual signal".
- Window functions are used to fade a signal in or out when it is processed at a specific time or frequency location. They can thus refer to the time signal or to the frequency spectrum.
- The window shape is not fixed and may in a simple case be a triangle window. Better results are obtained with, e.g., a von Hann window.
- The choice of window shape can be optimized for the respective purpose. For the embodiment of the method according to the invention described below, overlapping windows are used.
- Bins are the frequency bands that result from the FFT. The term also refers to the possibly changed frequencies of a band when the method of the instantaneous frequency is applied.
- The instantaneous frequency of a bin results from considering the phase value of the respective bin. If the analysis windows overlap, the instantaneous frequency of a particular bin can be determined from the difference between the bin's expected phase value and its actual phase value. The greater the overlap, the more adjacent bins can represent a particular frequency, which need not match the nominal frequency of the bin itself.
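- The phase comparison just described can be sketched as a standard phase-vocoder computation (a minimal numpy sketch; the frame size of 2048 samples and the 4-fold overlap anticipate the embodiment described below, and the function names are invented):

```python
import numpy as np

SR, FRAME, HOP = 44100, 2048, 512   # 4-fold overlap of the analysis windows

def instantaneous_frequency(phase_prev, phase_curr):
    """Compare each bin's actual phase advance between two overlapping
    frames with the advance expected from the bin's nominal frequency."""
    k = np.arange(len(phase_curr))
    expected = 2 * np.pi * k * HOP / FRAME                     # expected phase advance
    deviation = (phase_curr - phase_prev) - expected
    deviation = np.mod(deviation + np.pi, 2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return (k + deviation * FRAME / (2 * np.pi * HOP)) * SR / FRAME

# two overlapping frames of a 440 Hz sine: bins near bin 20 report ~440 Hz,
# not their nominal frequencies (bin 20 is nominally ~430.7 Hz)
t = np.arange(FRAME + HOP) / SR
x = np.sin(2 * np.pi * 440.0 * t)
w = np.hanning(FRAME)
p0 = np.angle(np.fft.rfft(x[:FRAME] * w))
p1 = np.angle(np.fft.rfft(x[HOP:HOP + FRAME] * w))
f_inst = instantaneous_frequency(p0, p1)
```

This illustrates why several adjacent bins can "represent" one frequency: every bin dominated by the same sinusoid reports that sinusoid's frequency.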
- Energetic quantities are calculated which receive the short notation E; in the following they are referred to as "energy" and "magnitude".
- Each bin is assigned an energy, calculated from the real and imaginary parts of the Fourier transform as (Re·Re) + (Im·Im), so that a value related to the amplitude of the frequency is produced.
- The magnitude is the square root of the energy.
- The magnitude value can optionally be raised to a suitable power between 1 and 2, so that the resulting quantity represents something between energy and magnitude.
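- As a minimal sketch of these quantities (numpy; the function names are invented):

```python
import numpy as np

def bin_energy(spectrum):
    """Energy per bin: (Re*Re) + (Im*Im), i.e. the squared magnitude."""
    return spectrum.real ** 2 + spectrum.imag ** 2

def bin_magnitude(spectrum, exponent=1.0):
    """Magnitude is the square root of the energy; an exponent between
    1 and 2 applied to it yields a quantity between magnitude and energy."""
    return np.sqrt(bin_energy(spectrum)) ** exponent

# a sine with exactly 8 cycles per frame concentrates its energy in bin 8
spec = np.fft.rfft(np.sin(2 * np.pi * 8 * np.arange(64) / 64))
E = bin_energy(spec)
```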
- Energy is used here in a general sense, not in the sense of sound energy or other physical energy terms.
- The method described makes figurative use of an energy landscape, understood as a three-dimensional mathematical entity F(t, f, E) in which the x-axis represents the time t, the y-axis the frequency f, and the z-axis an energy E at the respective time/frequency location.
- The frequency axis is expediently scaled in cents, so that musical intervals are equally wide in every frequency range.
- In the method to be described, the energy landscape is represented by discrete points: the time axis by measuring points roughly 0.01 s apart, the frequency axis by points at intervals of, e.g., 5 cents. In some process steps the discrete points are converted via window functions into continuous spectrograms; this is to be regarded as optional.
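- Such a discrete grid could be set up as follows (numpy sketch; the 440 Hz reference and the grid limits are arbitrary assumptions, only the 0.01 s and 5-cent spacings come from the text):

```python
import numpy as np

def hz_to_cents(f_hz, f_ref=440.0):
    """Cent scale relative to an arbitrary reference: one octave = 1200 cents,
    so equal musical intervals are equally wide in every frequency range."""
    return 1200.0 * np.log2(np.asarray(f_hz) / f_ref)

times = np.arange(0.0, 2.0, 0.01)                              # points ~0.01 s apart
cents = np.arange(hz_to_cents(55.0), hz_to_cents(1760.0), 5.0) # 5-cent steps
landscape = np.zeros((len(times), len(cents)))                 # E at each (t, f) point
```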
- After recording and analog-to-digital conversion, the examined audio material is present, e.g., as an audio file in PCM format (pulse-code modulation, a time- and value-discrete signal).
- Size specifications in the following text, e.g. for analysis windows, refer to a digital signal with a sample rate of 44100 samples/s. For other sample rates the sizes would have to be adjusted accordingly.
- The method described by way of example works, both in the analysis and in the sound extraction, directly in the time domain for some subtasks (Fig. 1 and 2) and in the frequency domain for others (Fig. 3 to 7).
- Treatment in the time domain is better suited for the event-like objects, treatment in the frequency domain for the sonic objects. Since the search is for note objects that are discrete in time, the signal is not processed continuously; instead, a time span is buffered, both in the time domain and in the frequency domain, and then examined.
- FIG. 1 shows a waveform F(A, t) of an audio recording.
- FIG. 2 shows an enlarged section of it. In these figures, 1 denotes an event object, which can be located at an amplitude jump. In contrast, the area designated 2 shows a significantly more periodic course.
- The signal is read out with uniformly spaced, overlapping window functions and first transformed via an FFT into a complex array for the respective time slice.
- The size of the FFT can be, e.g., 2048 samples; the overlap should be at least 4-fold. This results in intervals between the time slices of, e.g., 512 samples, or around 0.01 s.
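- The windowed read-out with these values can be sketched as follows (numpy; the von Hann window shape follows the earlier remark, the function name is invented):

```python
import numpy as np

SR, FRAME = 44100, 2048
HOP = FRAME // 4          # 4-fold overlap -> 512 samples, roughly 0.01 s

def stft(signal):
    """Read the signal with uniformly spaced, overlapping von Hann windows
    and transform each time slice with an FFT into a complex array."""
    window = np.hanning(FRAME)
    n_frames = 1 + (len(signal) - FRAME) // HOP
    frames = np.stack([signal[i * HOP:i * HOP + FRAME] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)      # one complex row per time slice

x = np.sin(2 * np.pi * 440.0 * np.arange(SR) / SR)   # one second of a 440 Hz tone
S = stft(x)    # 83 time slices of 1025 bins; the energy peaks near bin 20
```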
- FIG. 3 shows an example of a further-processed transformation result. Shown is a function F(f, t, E) which results from the Fourier transform of the time signal F(A, t) shown in Figures 1 and 2 by determining the instantaneous frequencies and energies of the bins. Coded as shades of gray, the energies of the individual bins are plotted at their instantaneous frequencies over time.
- Event objects 3 stand out in this illustration by the uniform distribution of the instantaneous frequencies, which indicates the noisiness of these objects. Note objects are characterized by the concentration of energy at a few instantaneous frequencies.
- The tonality value is an arithmetic variable that stands for the degree of periodicity at the bin frequency. It is calculated for each bin by determining how close the instantaneous frequencies of the neighboring bins are to the instantaneous frequency of the bin in question. The number of neighbor bins included equals the number of window overlaps, as this determines how many bins can represent one frequency. The tonality value of a bin is the higher, the closer together the instantaneous frequencies of the bins in its environment lie.
- A high tonality value indicates that a note object tends to be present, while a low tonality value tends to indicate an event object.
- The tonality values are scaled to a value range between 0 and 1.
- Each bin is also assigned a noise value, which is derived directly from the tonality value and calculated as 1 − tonality value. A sudden increase in the noise values speaks for an event object.
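- A possible reading of this calculation (numpy sketch; the concrete closeness measure 1/(1 + mean deviation) is an assumption, only the neighbour count and the 0..1 scaling come from the text):

```python
import numpy as np

def tonality_values(inst_freq, bin_hz, n_overlap=4):
    """Per-bin tonality in [0, 1]: high when the instantaneous frequencies
    of the neighbouring bins cluster around the bin's own value. The number
    of neighbours equals the number of window overlaps; the closeness
    measure 1 / (1 + mean deviation in bin widths) is an assumption."""
    half = n_overlap // 2
    tv = np.zeros(len(inst_freq))
    for k in range(len(inst_freq)):
        lo, hi = max(0, k - half), min(len(inst_freq), k + half + 1)
        dev = np.abs(inst_freq[lo:hi] - inst_freq[k]) / bin_hz
        tv[k] = 1.0 / (1.0 + dev.mean())
    return tv

bin_hz = 44100 / 2048
inst = np.arange(64) * bin_hz        # noisy region: frequencies spread bin by bin
inst[10:15] = 440.0                  # tonal region: five neighbours agree on 440 Hz
tv = tonality_values(inst, bin_hz)
noise = 1.0 - tv                     # noise value = 1 - tonality value
```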
- An energy landscape, shown by way of example in Fig. 4 as a section along the time t1 and the frequency f1, represents the distribution of the energy E over time t and frequency f; the actual finding of the note objects takes place on ridges in this landscape.
- A smoothing in the time direction can be performed.
- Fig. 5 shows the smoothed function F(f, t, E).
- Objects 5, which receive high energy values, emerge clearly.
- The calculated energies of the bins are weighted with the ascertained tonality values: for each time slice, each bin's energy is multiplied by its tonality value. This weighting changes the result only gradually and can therefore also be omitted.
- The width of the distribution window in cents is expediently on the order of a semitone.
- The landscape can be smoothed in the time direction with a low-pass filter (see FIG. 5). This makes it easier to find connected note objects as ridges.
- The total energy of the landscape is summed and serves as a reference value for the abort criterion of the subsequent iteration.
- The identification of the note objects is done by iteration: the most prominent object, in the sense of the maximum, is traced as a ridge and its energy is then subtracted from the landscape, whereupon the next most clearly protruding object is searched for, and so on.
- The most pronounced sound object is not necessarily identical with the highest ridge in the energy landscape F(f, t, E). This is because a sound object is not defined by a single ridge in the energy landscape; rather, it must be assumed that the energy in ridges at the integer multiples of a fundamental frequency also belongs to the sought sound object, because these are the overtones of the fundamental.
- The energy of the fundamental may well be weaker than that of higher partials; the object is nevertheless to be found and tracked at the frequency of the fundamental.
- The background of these considerations is that while the fundamental frequency determines the pitch of a sound, its timbre is decisively influenced by the overtones.
- Figuratively, a second landscape is therefore created, the so-called relevance landscape, shown by way of example in Fig. 6. It is identical to the energy landscape F(f, t, E) in its x- and y-axes and their size, but its z-values E′ are derived from it: for each x-y coordinate point of the relevance landscape F′(f, t, E′), the z-value E′ is formed as the sum of all z-values E that lie in the energy landscape F(f, t, E) at this x-y point and at all points corresponding to the integer frequency multiples of the starting point. It is expedient to add the energy values found there with a weight that decreases with increasing order of the frequency multiple.
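- The summation over the frequency multiples can be sketched as follows (numpy; the 1/m weight for the m-th multiple and the limit of 8 partials are assumptions, the text only asks for weights decreasing with the order of the multiple):

```python
import numpy as np

def relevance(energy, freqs_hz, n_partials=8):
    """Relevance landscape: for each (t, f) point, sum the energy at f and
    at its integer multiples 2f, 3f, ..., weighted by 1/m for the m-th
    multiple (weighting scheme and partial count are assumptions)."""
    rel = np.zeros_like(energy)
    for m in range(1, n_partials + 1):
        target = m * freqs_hz
        # first frequency point at or above m * f, clipped to the grid
        idx = np.searchsorted(freqs_hz, target).clip(0, len(freqs_hz) - 1)
        contrib = energy[:, idx] / m
        contrib[:, target > freqs_hz[-1]] = 0.0   # multiples above the grid add nothing
        rel += contrib
    return rel

# toy check: the partials at 200 Hz and 300 Hz carry more energy than the
# fundamental at 100 Hz, yet the relevance maximum sits at 100 Hz
freqs = np.arange(1, 201) * 10.0
E = np.zeros((1, 200))
E[0, [9, 19, 29]] = [1.0, 2.0, 2.0]
R = relevance(E, freqs)
```

This reproduces the behaviour described above: the object is found at the fundamental even though the fundamental itself is weaker than its partials.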
- Finding the note objects in the relevance landscape generated as just described, which is basically just a special energy landscape, namely a view of the harmonic energy, is carried out by an iterative mathematical process. It is explained below by way of example, without the invention being restricted to it.
- Without the combination of fundamental and overtones, overtones would be identified as separate notes and would have to be connected with their fundamentals, e.g., by post-processing.
- The task of separating note objects sonically as well is solved well precisely when this combination of fundamental and overtones succeeds. Therefore the maximum search in the relevance landscape is preferred, because it leads to the best results.
- In the example, the energy maximum was found at t1 and f1.
- The crest of the ridge belonging to this maximum is tracked forward and backward in the time direction.
- In the adjacent time slice, the maximum in the landscape closest in pitch direction to the last found point is sought. If the distance to the nearest maximum is such that a continuation of the pitch line as the same object is not plausible, for example a jump of more than 50 cents from one time slice to the next, the search is aborted in the current direction. Likewise, the search is aborted when the found maximum falls below a certain value, such as 10% of the initial value.
- Tracing the ridge is expediently done in the relevance landscape because, owing to the co-weighting of the partials, its course corresponds better to the pitch of the searched object. If the search has been aborted in both directions, a new note object is created and all points of the found ridge crest are added to it as its pitch curve.
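- Ridge tracing of this kind might look as follows (simplified numpy sketch that only tracks forward in time; the jump limit is given in grid points instead of the 50 cents of the text, and the 10% abort value follows the previous paragraph):

```python
import numpy as np

def trace_ridge(landscape, t0, f0, max_jump=10, stop_ratio=0.10):
    """Follow the crest of a ridge forward in time from the maximum at
    (t0, f0): in each next time slice take the strongest point near the
    last pitch. Abort on an implausible jump (max_jump grid points stands
    in for the 50-cent limit) or when the value falls below stop_ratio of
    the starting value. Backward tracking would work symmetrically."""
    ridge = [(t0, f0)]
    start = landscape[t0, f0]
    f = f0
    for t in range(t0 + 1, landscape.shape[0]):
        lo = max(0, f - max_jump)
        hi = min(landscape.shape[1], f + max_jump + 1)
        f_new = lo + int(np.argmax(landscape[t, lo:hi]))
        if landscape[t, f_new] < stop_ratio * start:
            break
        f = f_new
        ridge.append((t, f))
    return ridge

# toy landscape: a ridge drifting upward for 15 time slices, then silence
L = np.zeros((20, 50))
for t in range(15):
    L[t, 10 + t // 2] = 1.0
ridge = trace_ridge(L, 0, 10)
```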
- A value field belonging to the maximum is then determined.
- The determination of this value field can also be carried out differently than described above, for example with other mathematical methods. For instance, the value field could be scanned point by point in all directions from the maximum until the values fall below a threshold in each direction. All points above the threshold would then be assigned to the maximum as its value field.
- The energy landscape F(f, t, E) is then deprived of the computed energy of the found note object, namely at the locations of the course of its fundamental and of all partials, i.e. the integer multiples of the fundamental frequency.
- This could also take place in the relevance landscape F′(f, t, E′), but that would be a poorer embodiment, because possible superimpositions of overtones belonging to different fundamentals are better accounted for by withdrawing energy from the energy landscape.
- Reducing the energy at the given frequency locations in the energy landscape F(f, t, E) advantageously takes place again in the form of a window function that fades in and out in the frequency direction and whose width is on the order of a semitone.
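- The windowed withdrawal of a note's energy could be sketched like this (numpy; the triangular window shape and the partial count are assumptions, the 50% fraction per iteration step follows the text further below):

```python
import numpy as np

def subtract_note(landscape, cent_grid, pitch_curve, fraction=0.5,
                  half_width=50.0, n_partials=8):
    """Withdraw a fraction of a found note's energy at its fundamental and
    at the integer frequency multiples, faded with a triangular window of
    about one semitone (+-50 cents) around each partial. Triangle shape
    and partial count are assumptions."""
    taken = 0.0
    for t, cents in pitch_curve:                    # pitch curve: (time index, cents)
        for m in range(1, n_partials + 1):
            center = cents + 1200.0 * np.log2(m)    # m-th partial on the cent scale
            w = np.clip(1.0 - np.abs(cent_grid - center) / half_width, 0.0, 1.0)
            removed = landscape[t] * w * fraction
            landscape[t] -= removed
            taken += removed.sum()
    return taken    # the note "remembers" the energy withdrawn for it

# toy check: energy at 1200 cents (fundamental) and 2400 cents (octave partial)
grid = np.arange(0.0, 2401.0, 5.0)
L = np.zeros((1, len(grid)))
L[0, 240] = L[0, 480] = 1.0
taken = subtract_note(L, grid, [(0, 1200.0)])
```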
- If a model of the overtone spectrum of the sound is known, e.g. because a reference spectrum exists or is known as a model for the instrument producing the sound, the energy extraction at the harmonics can be carried out according to this reference spectrum or model.
- The note object "remembers" the amount of energy withdrawn for it, for its later evaluation, in that this portion of energy is attributed to it.
- The relevance landscape is then recalculated in the affected time range as described above, since there the energy landscape, as the basis of the relevance landscape, has changed due to the withdrawal of energy.
- For the newly found note object it is determined whether it overlaps in time and in the course of its fundamental frequency with a previously found note object. If it intersects with one of them, or adjoins it so directly that it is plausibly the same note object, it is added to that object (if necessary by extending its pitch progression). Otherwise it is included as a new note object in the set of found note objects. Since in each iteration step only, e.g., 50% of the energy of the note object is removed, every note object is usually found several times in the course of the iteration.
- The iteration continues by searching for the highest point in the changed relevance landscape, and is carried on until an abort criterion is reached.
- An advantageous abort criterion for the iteration is the reduction of the energy relative to the original energy in the energy landscape.
- The iteration can be stopped, for example, when only 10% of the original energy remains in the energy landscape. This can also be made available to the user as a changeable parameter.
- Locating the event-like objects, which are characterized by a sudden increase of the noisy component in the signal, can be done either in the time-domain signal, by tracking the rise of above all high-pass-filtered signal components, or in the frequency domain, using the noise values of the bins, which, weighted with the energies of the respective bins, are summed for each time slice. In both cases one obtains a curve of the noisy component in the overall signal over time. The event-like objects are to be assumed at the points of steepest rise of this curve, if necessary defined by a threshold value for the slope.
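- Locating the steepest rises of such a noise curve can be sketched as follows (numpy; the threshold value is an arbitrary assumption):

```python
import numpy as np

def event_times(noise_curve, slope_threshold=0.5):
    """Event-like objects sit at the steepest rises of the noisy component:
    take the slice-to-slice slope of the energy-weighted noise curve and
    keep the time slices where it exceeds a threshold (the threshold
    value here is an arbitrary assumption)."""
    slope = np.diff(noise_curve)
    return [int(i) + 1 for i in np.flatnonzero(slope > slope_threshold)]

# toy noise curve with two sudden rises, at slices 3 and 8
curve = np.array([0.1, 0.1, 0.1, 0.9, 0.5, 0.3, 0.2, 0.2, 0.8, 0.4])
onsets = event_times(curve)
```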
- the eventual objects found in the previous step may either be isolated in themselves in the signal, as will be the case with purely percussive events, or they may be the note noises of note objects previously found in the iteration as plucked or struck tonal instruments such as guitar, piano, etc. will be the case.
- it is examined whether a significant increase in its energy occurs at one or more of the note objects located there immediately after the occurrence of the event. If this is the case, the event-aware object is understood as a stop of the note object and assigned to this. If the energy increase is the case for multiple notes, the eventual object is assigned to all these notes.
- FIG. 7 shows the note objects found in this example along with the event-like objects marked as vertical dashes that could be assigned to these note objects.
- Finding the note objects should advantageously be followed by an evaluation step.
- Plausibility criteria are, e.g., the relative energy and the occlusion (masking) of a note.
- In the described iterative procedure, too many small note objects with too little energy tend to be found. Therefore, it is examined how much energy a note has in proportion to the total energy within its time range. If it has too little relative energy, it can be removed.
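This plausibility check can be expressed as a simple filter. The data layout is an assumption for illustration: each note object is a dict with an `'energy'` key, `total_energy_in_range` is a hypothetical callable giving the total signal energy over the note's duration, and the 2% default threshold is invented for the sketch.

```python
def filter_notes_by_relative_energy(notes, total_energy_in_range, min_ratio=0.02):
    """Discard note objects whose energy is too small in proportion to
    the total energy present in their time range."""
    return [n for n in notes
            if n['energy'] >= min_ratio * total_energy_in_range(n)]
```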
- This described identification step may also be followed by an intervention by the user, to whom the found note objects are graphically displayed in a suitable manner, e.g. as in Fig. 7, and who can then, for example via mouse-sensitive and/or menu-controlled interaction, split an object identified as one note into several notes, or unite separate notes into one object.
- The user can also delete individual objects or add further objects to be considered. For this purpose, the objects that were previously rated as insufficiently relevant in the automatic evaluation can be offered to him for activation.
- The automatic identification can optionally be optimized by storing the score of the recorded piece of music, so that the above-mentioned method specifically tries to find fundamental tones at the frequencies corresponding to the stored notes. This can be done, e.g., by evaluating a stored MIDI file containing the notes of the recorded composition.
- Support tracks can also be recorded simultaneously with the actually used overall signal, e.g. by individually miking the participating instrumentalists or singers, or, in the case of a guitar, by individual pickups for the strings. In the then largely monophonic signals of the individual voices, the desired notes of the total signal can be identified more clearly, and the overall sound can thus be separated better for processing.
- the sonic decomposition of the overall sound can take place.
- The most accurate possible determination of the existing note objects, their pitch and amplitude courses and the nature of their onsets is an influencing factor for the quality of the result of the sound decomposition.
- The overall sound is only decomposed into individual sound objects as far as is necessary for the resynthesis of the new overall sound. If, for example, the user only touches a single note in a complex chord and modifies its pitch, then only the signal of that single note must be extracted and subtracted from the original signal. Thus, an individual signal is generated only for each note that is modified. Each of these is then a monophonic-periodic signal and can be changed, summed and played back using known methods for time- and pitch-independent playback.
- First, the event-like objects are extracted from the original signal.
- When the original signal is split into individual signals belonging to the individual note objects, the event-like locations in the time signal are smeared by the division of the frequency spectrum. Therefore, it makes sense to first separate the locations of the event-like objects from the time signal and to perform the decomposition into the individual signals of the note objects on the resulting residual signal.
- this optional method step can also be omitted.
- For this purpose, a modified time signal is first generated in which the tonal components are subtracted as completely as possible.
- The magnitudes of all bins are multiplied by the noise values of the bins and, with these new magnitudes and the original phases, a time signal is again generated via the inverse FFT.
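For a single analysis frame, this magnitude reweighting can be sketched as below. This is an assumed single-frame illustration: the `noisiness` values (0..1 per bin) come from the earlier analysis, and a real implementation would repeat this per frame with overlap-add.

```python
import numpy as np

def noise_part_time_signal(frame, noisiness):
    """Multiply each bin's magnitude by its noise value, keep the
    original phase, and transform back to the time domain, yielding a
    frame containing mainly the noisy signal components."""
    spectrum = np.fft.rfft(frame)
    mags = np.abs(spectrum) * noisiness   # damp tonal bins, keep noisy ones
    phases = np.angle(spectrum)
    return np.fft.irfft(mags * np.exp(1j * phases), n=len(frame))
```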
- Factors for damping the lower signal components can be included in the factors for the magnitudes, since the higher-frequency components are often more relevant for the event-like objects.
- The time signal of the event-like objects is cut out with a suitable window function, e.g. with a short rise time of about 0.005 s and a release time of about 0.05 s.
- The division into the individual sounds of the note objects is based on the fact that each note object in each time slice asserts a "claim" on a spectral portion of the total signal F(f, t, E).
- This claim is mathematically represented by spectral proportion factors, which are calculated for each note object from a spectral share function, obtained, for example, from a model of the sound of a single note.
- This model can simply be specified, or it can be modeled on the real sound of an instrument if the instrument on which the note object is based is known.
- The model is based on the following components: it is assumed that, for the fundamental tone progression of the note object, overtones exist as spectral components at integer multiples of the fundamental frequency.
- The amplitudes of the overtones follow an overtone model. In the simplest case this can be the decrease of the amplitude with the reciprocal of the overtone number. The overtone model can, however, also reproduce the overtone amplitude progression derived from an empirical sound.
- The overtone amplitudes are related to the course of the fundamental tone energy of the note object. In the simplest case, they can be assumed to be proportional to the fundamental tone energy, but another relationship can also be derived from an empirical sound.
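The simplest-case model described above (integer-multiple overtones, amplitudes falling off with the reciprocal of the harmonic number, scaled by the fundamental tone energy) reduces to a few lines; the number of harmonics is a free parameter here.

```python
import numpy as np

def harmonic_model(f0, fundamental_energy, n_harmonics=16):
    """Overtone frequencies at integer multiples of the fundamental f0,
    with amplitudes proportional to fundamental_energy / harmonic number."""
    k = np.arange(1, n_harmonics + 1)
    freqs = f0 * k
    amps = fundamental_energy / k
    return freqs, amps
```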
- In this way, a spectral share function is provided, which may, e.g., be different for different instruments, and for each note object in each time slice the spectral proportions are calculated, that is, the claims this object asserts on each bin.
- Several share functions can be stored, and it can, e.g., be provided that the user selects one of the share functions. There can also be an automatic selection, e.g. if the user inputs with which instrument the note object was played, or if it is automatically recognized that the note object was played on a particular instrument, which can result, e.g., from the overtone amplitude ratios of a note object corresponding to a stored share function.
- The magnitude of the calculated proportion factors depends on the partial frequencies and the amplitudes of the partials, which follow, e.g., from the underlying model of the sound of a single note. Furthermore, the magnitude of the proportion factors depends on the proximity or distance of the partial frequency to the instantaneous frequency of the respective bin. The dependence on the distance may, for example, enter via a weighting curve in the frequency domain, which should be wide enough to permit even slight deviations from the given frequency. On the other hand, the weighting curve should be so narrow in its central area that the overtone proportion factors of different concurrent notes with different fundamental pitches are separated sufficiently well and the overtones can be assigned to the correct note.
- A suitable weighting curve for evaluating the distance of the frequencies may be, for example, a von Hann window raised to the fourth power, whose total width corresponds, for example, to two semitones.
- Then the sum of the proportion factors of all notes for each individual bin is normalized to 1.
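The claim calculation for one time slice can be sketched by combining the pieces above: partial amplitudes, a von Hann window raised to the fourth power with a total width of two semitones (200 cents) as the distance weighting, and per-bin normalization. The array layout (frequencies in cents, one list of partials per note) is an assumption for illustration.

```python
import numpy as np

def share_factors(bin_freqs_cents, partials_cents, partial_amps, width_cents=200.0):
    """Proportion factors (rows: notes, columns: bins) of several note
    objects on the bins of one time slice, normalized so that the
    factors of all notes sum to 1 for each bin."""
    bin_freqs_cents = np.asarray(bin_freqs_cents, dtype=float)
    n_notes = len(partials_cents)
    factors = np.zeros((n_notes, len(bin_freqs_cents)))
    for i in range(n_notes):
        for pf, pa in zip(partials_cents[i], partial_amps[i]):
            d = np.abs(bin_freqs_cents - pf)   # distance bin <-> partial
            # von Hann window (value 1 at d=0, 0 at d=width/2), 4th power
            w = np.where(d < width_cents / 2,
                         (0.5 + 0.5 * np.cos(2 * np.pi * d / width_cents)) ** 4,
                         0.0)
            factors[i] += pa * w
    col = factors.sum(axis=0)
    col[col == 0] = 1.0                        # bins claimed by no note stay 0
    return factors / col
```

The bin magnitudes of the time slice can then be multiplied row-wise by these factors to obtain each note object's share of the spectrum.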
- For each note object, a separate time signal with the duration of this note object is created.
- The magnitudes, or other appropriate energetic values, of all bins are divided among the note objects according to the normalized proportion factors.
- Finally, the note objects are merged with the event-like components. It was already described above that the event-like objects were assigned to the note objects, and that a time signal was generated for the event-like objects. Now, for the note objects to which event-like objects have been assigned, this time signal can be added at their beginning. If an event-like object has been assigned to multiple note objects, because it was assumed that these were struck simultaneously, the time signal of the event-like object is distributed in its amplitude among the associated note objects. This may conveniently be done in proportion to the energies of the note objects themselves, or on the basis of the assumed instrument model.
- Event-like objects to which no note objects have been assigned can be made available with their extracted time signal as independent percussive objects.
- The time sequence of the above-mentioned individual method steps can also be chosen differently.
- For example, the assignment of event-like objects to note objects can also be made only immediately before the resynthesis.
Description
The present invention relates to a method for sound object-oriented analysis and for note object-oriented processing of polyphonic sound recordings according to claim 1.
It has long been known to subject sound recordings with musical content to sonic post-processing. While in the studio technology of earlier years this still took place with the aid of expensive hardware components, such as complex filter banks, nowadays computers and special computer programs are used, which are far less expensive and have therefore found wider distribution. The advent of digital recording technology has further contributed to this development. The aim of such post-processing is usually to improve the sound of the recordings or to incorporate sound effects. What this kind of sonic post-processing has in common is that it works purely effect-oriented and is not able to recognize the musical content of the signal, but understands the audio signal only as a signal amplitude changing over time.
In the prior art, methods are known, e.g. from the
What is desirable is a processing of audio material at the level of the individual notes of which the sound recording consists. It is known in the prior art to extract individual notes from an audio recording with regard to their pitch, note length and time of occurrence. Such a note extraction is known, e.g., from the
A particularly critical point in the processing of audio material is that the original sound impression, e.g. of a singing voice, should be preserved even after editing. This is achieved excellently by the prior-art software "Melodyne" of Celemony Software GmbH, which is based on a note-based approach. However, this software requires monophonic material. Chord instruments such as guitar or piano, or choral singing, could previously not be processed satisfactorily on a note basis. So far, such chord recordings could only be cut chord by chord or edited in time or pitch by time-stretching, but without access to the individual notes of a chord. Thus, it has so far not been possible to change a single chord tone (e.g. the E of a C major chord) (e.g. to E flat for C minor) without simultaneously editing the other notes of the chord.
Chord recognition and approaches for identifying individual notes are, however, already known in the prior art, e.g. in order to be able to print scores (software WIDI) or to recognize titles automatically (
It is the object of the present invention to show a way in which note object-oriented processing of polyphonic sound material can take place. This object is achieved by a method according to claim 1. Advantageous embodiments are specified in the subclaims.
In a first step, the method according to the invention automatically identifies musical objects in the sense of notes in recorded audio material. These objects are then sonically extracted from the overall recording in a second step and thereby made manipulable without noticeable loss of sound or noticeable distortion of the sound. This sonic extraction is not found in the prior art and means that a part of the overall sound is assigned to an identified note object. As a result, the original recording can even be freely changed musically in such a way that the notes can change their relative position to one another in pitch and time, while the original sound impression is retained. The method further provides that the identified note objects are made available to the user for editing. The user can then modify one or more musical objects, e.g. shift them in pitch. After such editing by a user, the sound reproduction, the resynthesis, takes place by playing back the modified object together with the unmodified objects or with the total signal reduced by the modified object. The source material can already be present in digital form or as an analog signal. Before the analysis of an analog signal, a digitization may then have to be performed.
The method can find various applications. For example, individual playing mistakes can be selectively eliminated: a pianist has accidentally struck one note too many, which is removed in post-production. Another application concerns re-tuning, i.e. the correction of a detuned guitar or an out-of-tune string section. Recordings in equal temperament can be transferred to just intonation. Recordings can be re-harmonized, for instance the riff of a guitar from C major to F minor. Previously, a chord could only be shifted in pitch as a whole, but the harmonic relationship of its individual tones could not be changed. The possibilities even extend to a new composition, because there is access to the individual tones.
The method according to the invention is aimed primarily at individual tracks recorded during music production. However, it is also applicable to finished mixed titles, in order to make them appear in a new musical guise.
With the previous technology, the above could only be realized for sonically monophonic material, i.e. for processing, for example, vocal or brass parts recorded in isolation, in which only one note with an identifiable pitch sounds at a time. With the method according to the invention, the goal of single-note editing becomes feasible for polyphonically played instruments, i.e. those in which several notes or whole chords usually sound simultaneously, such as piano, guitar, etc., whereby not only does the chord as a whole become transposable (i.e. changing the pitch while retaining the relative pitch relationships within a chord), but also, and in particular, the notes within a chord sound can be changed relative to one another, such as changing a chord sound from major to minor.
The method relates primarily to already recorded musical material, i.e. not to analysis and processing in "real time", that is, at the moment of the musical performance. This is because the method according to the invention presupposes a meaningful recognition of "notes" in the sense of completed objects in a certain context. Therefore, although the analysis can be carried out in the background during an ongoing recording, it must be able to access an already recorded time context on the order of, e.g., a few seconds.
The method is designed to find and distinguish individual pitches or pitch curves in the overall signal, but not to separate individual sound sources. It is thus not a matter of separating the individual sources of sounds or voices, e.g. from a recording of the noises on a street or of several speakers in a room. As a rule, two notes played at the same pitch by two different instruments simultaneously are identified as a single object, just as the sound of many first violins in an orchestra playing the same note is identified as a single note. The term "note object", predominantly used here, emphasizes that "notes" in the sense of this invention need not mean notes in the actual musical sense, although an identified note object may, but need not, correspond to a note in the actual musical sense.
Notes to be found need not have a pitch that is constant over time, such as a piano tone, but can also have an arbitrary pitch progression over time, such as a singing voice with vibrato and/or portamento. If such pitch progressions are consistent in themselves, the notes remain recognizable when the signal is tracked over time. In this respect, even two tones whose differing pitch curves cross can be identified as two different notes.
The method essentially consists of the two main steps a) identification of the individual objects involved in the overall sound, i.e. the notes and possibly also the more event-like/percussive sound events, and b) sonic decomposition of the overall sound into the individual objects found, whose sum yields the overall sound, but each of which thereby becomes separately manipulable without affecting the sound of the other objects or of the overall sound in the sense of unwanted audible artifacts. It is precisely in step b) that the present invention differs from the prior art.
The result of identification step a) can also be used on its own without the sound decomposition step b), if the aim is only the recognition and presentation of the musical content, without intervening in the sound itself. This can be the case if one wants to create a musical notation from an existing recording or to illustrate the musical content graphically in some other way. Or it can serve to recognize and name the sequence of musical harmonies, e.g. in order to enrich the music with further instruments.
The quality of the result of identification step a) also influences the quality of the sonic decomposition in step b). If step a) is only to serve, as mentioned, to determine the musical content, it is usually sufficient to determine that a sound object occurs at a certain time, which has, e.g., the pitch of the note "C sharp", and how long this object sounds. If, on the other hand, the sonic decomposition in step b) is the goal, then it is advantageous to evaluate as much information as possible about the course and the parameters of the note objects found, such as the exact course of the pitch curve over time, the amplitude of the object and its change over time, the manner of onset of a note, the consistency of the mixture of its partials in relation to notes of similar pitch in the same recording, etc. Depending on the requirements, however, certain information can also be dispensed with.
As described, the main property of the individual objects to be found in the audio material is that they have a consistent pitch or a consistent pitch progression. Furthermore, they should have a consistent progression of their individual partials. That is, it is assumed that a sound object to be found consists of the course of a fundamental tone as well as of an in principle unlimited number of overtones, which should be approximately integer multiples of the fundamental frequency. Furthermore, the course of the partials belonging to a sound object should not exhibit any unmotivated jumps. These assumptions are derived from the properties of the sounds produced by natural instruments. The method according to the invention may therefore reach its limits where music has been synthesized in some arbitrary manner.
The objects to be identified in the audio material and described above can be referred to as "tonal" objects; in the claims they are designated as note objects. Their main property is to have a pitch or a pitch progression over a perceptible duration, and the waveform of their time signal behaves essentially periodically or quasi-periodically. They are distinguished from the non-tonal objects, the noise-like objects. Event-like objects are a subset of the noise-like objects.
The event-like objects are characterized by the fact that they exhibit a sudden rise in amplitude in the time signal and are therefore not periodic at this point. In addition, they usually decay quickly. Musically, these objects are also usually produced by a sudden event, such as the striking or plucking of a string, or the striking of a percussion instrument such as a drum. In this context, they can be further distinguished by the method according to the invention: if an event-like object is immediately followed by the amplitude rise of a tonal object, it can be assumed that the event represents the attack of a note, and it can therefore be assigned to this note. In this case, the tonal and the event-like object can be combined for presentation and handling into a single object, which is likewise to be designated as a note object. For sound generation in the resynthesis, i.e. in the sound reproduction after editing one or more note objects, it may however be useful to process the two sub-objects mentioned separately, for sonic reasons.
If the event-like object cannot be assigned to a following tonal object, it can be assumed that it is a purely percussive-rhythmic note without pitch meaning, such as a drum beat. It can accordingly be treated differently in the further processing.
The noise-like objects have in common with the event-like objects that they contain no periodic sound component, but differ from them in that they neither start suddenly nor decay quickly; instead they may be extended in time. Such objects can be, for example, the sibilants of consonants in singing, breathing noises, incidental noises of the player such as a finger sliding on a guitar string, or background noises that do not belong to the actual music at all. Such noise-like objects could, for example, be made available to the user separately for further manipulation, or simply be treated collectively as a "residual signal".
In the following, the method according to the invention is explained by way of an exemplary embodiment with reference to the figures, in which the results of individual method steps are shown. The method was carried out on a 6-second excerpt from a piano recording of the Invention No. 1 in C major by J.S. Bach. The figures show:
- Fig. 1: the audio signal F(A, t) as amplitude A over time t;
- Fig. 2: an enlarged time excerpt of 0.5 seconds from Fig. 1;
- Fig. 3: after transformation of the audio signal of Fig. 1 into the frequency domain, the gray-scale-coded energies of the individual bins at their instantaneous frequencies, dark areas indicating high energy, in a representation of the frequency f in cents over the time t: F(f, t, E);
- Fig. 4: a representation according to Fig. 3 with a section in the x and y directions, showing the calculated energies E at the frequency f1 and the time t1;
- Fig. 5: an energy landscape F(f, t, E) similar to that shown in Fig. 4, in which the energy values have been summed via a window function and smoothed in the time direction, with a section in the x and y directions at the frequency f1 and the time t1;
- Fig. 6: a relevance landscape corresponding to the energy landscape of Fig. 5, with a section in the x and y directions showing the first maximum found; and
- Fig. 7: all note objects found after completion of an iteratively executed identification.
To explain the invention, terms are used whose meaning shall first be defined.
The terms "time domain" and "frequency domain" are used in their customary English sense and thus concern investigations or calculations either in the original time signal F(A, t) itself (= time domain) or in its representation F(f, t) obtained via a discrete Fourier transformation, in particular an FFT (= frequency domain).
"Fensterfunktionen" dienen zum Ein- und Ausblenden eines Signals bei seiner Bearbeitung an einem Zeit- oder Frequenzort. Sie können sich also auf die Zeit oder auf das Frequenzspektrum beziehen. Die Fensterform ist nicht festgelegt und kann in einem einfachen Fall ein Dreiecksfenster sein. Bessere Ergebnisse ergeben z.B. ein von-Hann-Fenster. Die Wahl der Fensterform kann für den jeweiligen Zweck optimiert werden. Für das nachfolgend beschriebene Ausführungsbeispiel eines erfindungsgemäßen Verfahrens werden sich überlappende Fenster verwendet."Window functions" are used to show or hide a signal when it is being processed at a time or frequency location. So you can refer to the time or to the frequency spectrum. The window shape is not fixed and may in a simple case be a triangle window. Better results, e.g. a von Hann window. The choice of the window shape can be optimized for the respective purpose. For the embodiment of a method according to the invention described below, overlapping windows are used.
Als "Bins" werden die Frequenzbänder bezeichnet, die sich aus der FFT ergeben. Damit werden auch die möglicherweise geänderten Frequenzen eines Bandes bezeichnet, wenn die Methode der Augenblicksfrequenz angewendet wird.Bins are the frequency bands that result from the FFT. This also refers to the possibly changed frequencies of a band, if the method of the instantaneous frequency is applied.
The instantaneous frequency of a bin results from taking the phase value of the respective bin into account. If the analysis windows overlap, the instantaneous frequency of a bin can be determined from the difference between the phase value expected from the time advance and the phase value actually present in the bin. The greater the overlap, the more adjacent bins can represent a particular frequency, which need not coincide with the nominal frequency of the bin itself.
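The relationship between expected and measured phase advance can be sketched as follows. This is a minimal illustration of the phase-vocoder principle, not the patented implementation; the window size, hop and single-bin DFT are assumptions chosen for compactness:

```python
import cmath
import math

def bin_phase(x, start, k, n):
    """Phase of DFT bin k of a von-Hann-windowed frame of length n starting at 'start'."""
    acc = 0j
    for i in range(n):
        w = 0.5 - 0.5 * math.cos(2 * math.pi * i / n)  # von Hann window
        acc += x[start + i] * w * cmath.exp(-2j * math.pi * k * i / n)
    return cmath.phase(acc)

def instantaneous_frequency(x, k, n, hop, sr):
    """Estimate the instantaneous frequency of bin k from the phase difference
    between two overlapping analysis frames."""
    phi1 = bin_phase(x, 0, k, n)
    phi2 = bin_phase(x, hop, k, n)
    expected = 2 * math.pi * k * hop / n           # phase advance expected from the bin frequency
    deviation = (phi2 - phi1 - expected) % (2 * math.pi)
    if deviation > math.pi:                        # wrap the deviation to [-pi, pi]
        deviation -= 2 * math.pi
    return (expected + deviation) * sr / (2 * math.pi * hop)

# A 1000 Hz sine falls between bin centres; the phase difference recovers it.
sr, n, hop = 44100, 1024, 256
x = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(n + hop)]
k = round(1000 * n / sr)                           # nearest bin (bin centre ~990.5 Hz)
f_inst = instantaneous_frequency(x, k, n, hop, sr)
```

The estimate lands within a few hertz of 1000 Hz although the bin's nominal frequency is about 990.5 Hz, which is exactly the effect the paragraph above describes.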
For the method according to the invention, energy-like quantities are calculated for the bins; they receive the short notation E and are referred to below as "energy" and "magnitude". After the FFT, each bin is assigned an energy calculated from the real and imaginary parts of the Fourier series as (Re*Re) + (Im*Im), yielding a value related to the amplitude of the frequency. The magnitude is its square root. For optimal scaling of the respective value in the evaluation, however, the magnitude value may be raised to a suitable power between 1 and 2, so that the resulting quantity lies somewhere between energy and magnitude. The term energy is thus used here in a more general sense, not in the sense of acoustic energy or other concepts of energy.
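The three quantities just defined can be written down directly; the exponent 1.5 below is an arbitrary illustrative choice from the stated range between 1 and 2:

```python
import math

def bin_energy(re, im):
    """Energy of a bin: (Re*Re) + (Im*Im), as defined above."""
    return re * re + im * im

def bin_magnitude(re, im):
    """Magnitude: the square root of the energy."""
    return math.sqrt(bin_energy(re, im))

def scaled_value(re, im, p=1.5):
    """Magnitude raised to a power between 1 and 2: p=1 gives the magnitude,
    p=2 the energy, values in between an intermediate scaling."""
    return bin_magnitude(re, im) ** p

# For Re=3, Im=4: energy 25, magnitude 5, and the scaled value lies between them.
```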
The cent is a measure of frequency ratios, i.e. interval sizes, used in the musical context, defined as Cent = log(f1/f2) / log(2) * 1200. In this unit, a musical interval is therefore always the same size regardless of its absolute pitch, namely semitone = 100 cents, octave = 1200 cents.
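The cent definition translates into a one-line conversion:

```python
import math

def cents(f1, f2):
    """Interval size between two frequencies in cents: log(f1/f2)/log(2) * 1200."""
    return math.log(f1 / f2) / math.log(2) * 1200

# An octave is 1200 cents and an equal-tempered semitone 100 cents,
# independent of the absolute pitch of the two frequencies.
```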
To identify note objects, the described method uses, figuratively speaking, an energy landscape, by which is meant a three-dimensional mathematical entity F(t, f, E) in which the x-axis represents the time t, the y-axis the frequency f, and the z-axis an energy E at the respective time/frequency location. The frequency axis is expediently scaled in cents so that tone intervals are always the same size in every frequency range. In the method to be described further, the energy landscape is represented by discrete points: the time axis by measurement points spaced e.g. approximately 0.01 s apart, the frequency axis by points at an interval spacing of e.g. 5 cents. In some method steps the discrete points are converted into continuous spectrograms via window functions, which is, however, to be regarded as optional.
After recording and analog-to-digital conversion, the examined audio material is available e.g. as an audio file in PCM format (pulse-code modulation, a value- and time-discrete signal). Size specifications in the following text, e.g. for analysis windows, refer to a digital signal with a sample rate of 44100 samples/s. For other sample rates the sizes would have to be adjusted accordingly.
The method described by way of example works, both in the analysis and in the sound extraction, directly in the time domain for certain subtasks (
For processing in the frequency domain, the signal is read out with uniformly successive, overlapping window functions and first transformed via an FFT into a complex array for the respective time slice. The size of the FFT can be e.g. 2048 samples; the overlap should be at least fourfold. This results in time-slice spacings of e.g. 512 samples, or around 0.01 s.
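The windowed, overlapping read-out can be sketched as follows, a pure-Python illustration with a textbook radix-2 FFT; the frame size of 2048 and the hop of 512 samples follow the example values in the text, the von Hann window is one of the shapes mentioned earlier:

```python
import cmath
import math

def fft(x):
    """Recursive radix-2 FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def stft_frames(signal, n=2048, hop=512):
    """Read the signal with uniformly spaced, overlapping von Hann windows
    (fourfold overlap for n=2048, hop=512) and FFT each time slice."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
    frames = []
    for start in range(0, len(signal) - n + 1, hop):
        frame = [signal[start + i] * window[i] for i in range(n)]
        frames.append(fft(frame))
    return frames

# A sine placed exactly on bin 64 of a 2048-point FFT.
n, hop = 2048, 512
sig = [math.sin(2 * math.pi * 64 * i / n) for i in range(n + 3 * hop)]
spectra = stft_frames(sig, n, hop)
peak_bin = max(range(n // 2), key=lambda k: abs(spectra[0][k]))
```

At 44100 samples/s the hop of 512 samples corresponds to the roughly 0.01 s time-slice spacing stated above.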
An example of a further-processed transformation result is shown in
For the signal F(f, t, E) in the frequency domain, the following values are determined: the magnitudes of all bins, the instantaneous frequencies of all bins, and the tonality values of all bins. The tonality value is an arithmetic quantity that stands for the degree of periodicity at the bin frequency. It is calculated for each bin by determining how close the instantaneous frequencies of the neighboring bins lie to the instantaneous frequency of the bin in question. The number of neighboring bins included equals the number of window overlaps, since this determines how many bins can represent one frequency. The tonality value of a bin becomes higher the closer together the instantaneous frequencies of the bins in its neighborhood lie. A high tonality value indicates that a note object tends to be present, while a low tonality value tends to indicate an event object. The tonality values are scaled to a range between 0 and 1. In addition, each bin is assigned a noisiness value, derived directly from the tonality value and calculated as 1 - tonality value. A sudden rise in the noisiness values indicates an event object.
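One way to turn the neighbour comparison into a tonality score in [0, 1] is sketched below. The text does not specify the exact scoring function, so the linear mapping and the tolerance scale in Hz used here are illustrative assumptions; only the structure (as many neighbours as window overlaps, noisiness = 1 - tonality) follows the description:

```python
def tonality_values(inst_freqs, overlaps=4, tol=20.0):
    """Assign each bin a tonality value in [0, 1]: the closer the instantaneous
    frequencies of the neighbouring bins (as many neighbours as window
    overlaps) lie to that of the bin itself, the higher the value.
    'tol' is an assumed deviation scale in Hz."""
    scores = []
    for k, f in enumerate(inst_freqs):
        lo = max(0, k - overlaps // 2)
        hi = min(len(inst_freqs), k + overlaps // 2 + 1)
        neighbours = [inst_freqs[j] for j in range(lo, hi) if j != k]
        mean_dev = sum(abs(fn - f) for fn in neighbours) / len(neighbours)
        scores.append(max(0.0, 1.0 - mean_dev / tol))
    return scores

def noisiness_values(tonality):
    """Noisiness is derived directly from the tonality: 1 - tonality value."""
    return [1.0 - t for t in tonality]

# Bins 0-4 agree on ~440 Hz (tonal region); bins 5-9 scatter widely (noisy region).
freqs = [440.2, 440.0, 440.5, 439.8, 440.1, 300.0, 510.0, 620.0, 180.0, 700.0]
ton = tonality_values(freqs)
noi = noisiness_values(ton)
```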
An energy landscape is then generated, shown by way of example in
Since the first concern is finding sound-like objects, the calculated energies of the bins are additionally weighted with the determined tonality values: for each time slice, the energy of each bin is multiplied by its tonality value. This weighting changes the result only gradually and can therefore also be omitted. According to the instantaneous frequency of the bin, its cent position (= y-position) in the landscape is then determined, and starting from this point the energy-tonality product is summed onto the landscape via a window function with a certain distribution width in the cent direction. The width of the distribution window in cents is expediently roughly of the order of a semitone. The representation of
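Summing each bin's energy-tonality product onto a cent grid through a distribution window might look like this. The 5-cent grid step and roughly-a-semitone window width follow the example values above; the triangular window shape and the reference frequency for the cent scale are assumptions:

```python
import math

STEP = 5          # grid resolution in cents (example value from the text)
WIDTH = 100       # distribution window width: roughly one semitone
F_REF = 8.1758    # assumed reference frequency for the cent scale (~MIDI note 0)

def cent_position(freq):
    """Cent position of an instantaneous frequency on the landscape's y-axis."""
    return 1200 * math.log(freq / F_REF, 2)

def add_to_landscape(landscape, freq, energy, tonality):
    """Sum one bin's energy, weighted by its tonality value, onto the landscape
    around its cent position using a triangular window of ~one semitone width."""
    centre = cent_position(freq)
    value = energy * tonality
    first = int((centre - WIDTH) // STEP) + 1
    last = int((centre + WIDTH) // STEP)
    for g in range(first, last + 1):
        weight = max(0.0, 1.0 - abs(g * STEP - centre) / WIDTH)  # triangular fade
        landscape[g] = landscape.get(g, 0.0) + value * weight

landscape = {}
add_to_landscape(landscape, 440.0, 2.0, 0.9)   # one strongly tonal bin near A4
peak = max(landscape, key=landscape.get)
```

The contribution peaks at the grid point closest to the bin's cent position (440 Hz lies at 6900 cents on this reference) and fades out over about a semitone to either side.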
The note objects are identified by an iteration in which the most prominent object, in the sense of the maximum, is traced as a ridge and its energy is then subtracted from the landscape, whereupon the next most prominent object is sought, and so on. The sonically most prominent object, however, is not identical with the highest ridge in the energy landscape F(f, t, E). This is because a sound-like object is not defined by a single ridge in the energy landscape; rather, it must be assumed that the energy in ridges at the integer multiples of a fundamental frequency also belongs to this sought sound-like object, because these are overtones of the fundamental with the fundamental frequency. It may well be that the energy of the fundamental is weaker than that of the higher partials, yet the object is still to be found and traced at the frequency of the fundamental. The background to these considerations is that although the fundamental frequency determines the pitch of a tone, the sound of a tone is decisively shaped by its overtones.
To take this into account, a second landscape is, figuratively speaking, generated, the so-called relevance landscape, shown by way of example in
The note objects are found in the relevance landscape generated as just described, which is basically just a special energy landscape, namely one that takes the overtone energy into account, by an iterative mathematical procedure. How the note objects are found in this relevance landscape is explained below, without the invention being restricted thereto. In principle, the note objects could also be found in one of the other energy landscapes described above or in further modified ones, but this would have the disadvantage that overtones would be identified as separate notes and would, for example, have to be joined to their fundamentals by post-processing. The task of also separating note objects sonically is solved well when the linking of fundamentals and overtones succeeds. The maximum search in the relevance landscape is therefore preferred, because it leads to the best results.
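The idea that a candidate fundamental collects the energy of its integer-multiple partials can be reduced to a toy example. Equal weighting of the first partials and the exact-frequency matching are assumptions; the text leaves the weighting open:

```python
def relevance(energy, f0, partials=6, tol=1.0):
    """Relevance of a candidate fundamental: the summed energy found at the
    integer multiples of f0 (equal weighting of partials is an assumption)."""
    total = 0.0
    for k in range(1, partials + 1):
        target = k * f0
        for f, e in energy.items():
            if abs(f - target) <= tol:
                total += e
    return total

# A note at 220 Hz whose fundamental is weaker than its partials: the highest
# single ridge is at 440 Hz, but the highest relevance is at the fundamental.
energy = {220.0: 1.0, 440.0: 3.0, 660.0: 2.0, 880.0: 1.0}
candidates = [220.0, 440.0, 660.0]
best = max(candidates, key=lambda f: relevance(energy, f))
```

This reproduces the point made above: a plain maximum search in the energy landscape would pick the 440 Hz overtone, whereas the relevance landscape assigns the maximum to the 220 Hz fundamental.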
First, the highest point of the relevance landscape is sought. In
Subsequently, the calculated energy of the found note object is withdrawn from the energy landscape E(f, t, E), namely at the locations of the course of its fundamental and of all partials, i.e. the integer multiples of the fundamental frequency. In principle this could also be done in the relevance landscape E'(f, t, E'), but that would be a poorer embodiment, because the possible superposition of overtones belonging to different fundamentals is better accounted for when energy is withdrawn from the energy landscape.
Advantageously, however, not all of the energy present there is withdrawn, but only a fixed proportion, e.g. 50%. This proportion can, for example, be set to other values as a parameter by the user, because depending on the audio material other proportions may yield better results. In the case of strong superposition in the overtones, for example, a reduction to 25% can lead to better results. Withdrawing only a proportion of the energy is sensible because it is initially not known whether other simultaneously sounding note objects have partials lying close to the partials of the note object found first. Owing to the only partial energy withdrawal, further note objects can still be found in the following iterations.
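Withdrawing only a fixed share of the energy at the fundamental and its partials can be sketched like this; the dictionary landscape and exact-frequency matching are simplifications of the real time/frequency landscape:

```python
def subtract_note(energy, f0, share=0.5, partials=6):
    """Remove a fixed share (default 50%) of the energy at f0 and its integer
    multiples, and return the amount withdrawn, which is credited to the
    note object for its later evaluation."""
    withdrawn = 0.0
    for k in range(1, partials + 1):
        f = k * f0
        if f in energy:
            amount = energy[f] * share
            energy[f] -= amount
            withdrawn += amount
    return withdrawn

energy = {220.0: 4.0, 440.0: 8.0, 660.0: 2.0}
first = subtract_note(energy, 220.0)    # takes 50% everywhere: 2 + 4 + 1 = 7
second = subtract_note(energy, 220.0)   # a later iteration finds the remaining half
```

Because only half the energy is removed per pass, a second pass over the same fundamental still finds energy there, which is why each note object is typically found several times in the iteration.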
Reducing the energy at the given frequency locations in the energy landscape E(f, t, E) is advantageously again done in the form of a window function fading out upwards and downwards in the frequency direction, whose width is of the order of a semitone. If a model of the overtone spectrum of the sound is known, e.g. because a reference spectrum exists for the instrument producing the sound or is known as a model, the energy withdrawal in the overtones can be carried out according to this reference spectrum or model. The note object "remembers" the energy share withdrawn on its account for its later evaluation, in that this energy share is attributed to it.
In the time range affected by the newly found note object, the relevance landscape is recalculated as described above, since in this time range the energy landscape, as the basis for the relevance landscape, has changed as a result of the energy withdrawal.
For the newly found note object, it is determined whether it overlaps in time and in the course of its fundamental frequency with another, previously found note object. If it overlaps with such an object, or adjoins it so directly that it is plausible that it is the same note object, it is merged into that object (where appropriate extending its pitch course). Otherwise it is added as a new note object to the set of found note objects. Since in each iteration step only e.g. 50% of the energy is withdrawn for the note object, each note object is usually found several times in the course of the iteration.
The iteration is continued by again searching for the highest point in the changed relevance landscape. The iteration is continued until a termination criterion is reached. An advantageous termination criterion for the iteration is the reduction of the energy relative to the original energy in the energy landscape. The iteration can, for example, be terminated when only 10% of the original energy remains in the energy landscape. This too can be made available to the user as an adjustable parameter.
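Putting the last steps together, the iteration with its 10% termination criterion can be outlined on a one-dimensional toy landscape: find the maximum, withdraw 50% of the energy there, and repeat until only 10% of the initial total energy remains. The 1-D landscape stands in for the full relevance landscape:

```python
def iterate_notes(landscape, share=0.5, stop_fraction=0.1):
    """Repeatedly pick the highest point, record it as a (location, withdrawn
    energy) pair and withdraw a share of its energy, until the total energy
    has fallen to the stop fraction of its initial value."""
    initial = sum(landscape)
    found = []
    while sum(landscape) > stop_fraction * initial:
        peak = max(range(len(landscape)), key=lambda i: landscape[i])
        amount = landscape[peak] * share
        landscape[peak] -= amount
        found.append((peak, amount))
    return found

original = [0.5, 8.0, 0.5, 4.0, 1.0]
working = list(original)
notes = iterate_notes(working)
locations = {loc for loc, _ in notes}
```

As stated above, the same location is found repeatedly because only part of its energy is withdrawn per step; in a full implementation these repeated finds are merged into one note object.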
Finding the event-like objects, which are characterized by a sudden rise of the noise-like component in the signal, can be done either in the time-domain signal, by tracking the rise of, above all, high-pass-filtered signal components, or in the frequency domain with the aid of the noisiness values of the bins, which for this purpose are summed for each time slice, weighted with the energies of the respective bins. In both cases, a curve of the noise-like component in the overall signal is obtained. The event-like objects are to be assumed at the points of greatest slope of this curve, where appropriate defined by a threshold value for the slope.
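The frequency-domain variant, an energy-weighted noisiness curve with a slope threshold, can be illustrated as follows; the threshold value is an arbitrary illustrative choice:

```python
def noisiness_curve(noisiness, energies):
    """Per time slice, sum the noisiness values of all bins weighted by
    their energies, giving the noise-like share of the overall signal."""
    return [sum(n * e for n, e in zip(ns, es))
            for ns, es in zip(noisiness, energies)]

def detect_events(curve, threshold):
    """Event-like objects are assumed at the points where the slope of the
    curve exceeds the threshold."""
    return [t for t in range(1, len(curve))
            if curve[t] - curve[t - 1] > threshold]

# Three bins per slice; slice 3 shows a sudden jump in the noisy share.
noisiness = [[0.1, 0.1, 0.1]] * 3 + [[0.9, 0.8, 0.9]] + [[0.3, 0.2, 0.3]] * 2
energies = [[1.0, 1.0, 1.0]] * 6
curve = noisiness_curve(noisiness, energies)
events = detect_events(curve, threshold=1.0)
```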
The event-like objects found in the preceding step can either occur in the signal in isolation, as will be the case with purely percussive events, or they can be the attack noises of the note objects previously found in the iteration, as will be the case with plucked or struck tonal instruments such as guitar, piano, etc. To distinguish these cases, for each event-like object found it is examined, at the time of its occurrence, whether one or more of the note objects located there show a significant rise in their energy immediately after the occurrence of the event. If this is the case, the event-like object is understood as the attack of the note object and assigned to it. If the energy rise occurs for several notes, the event-like object is assigned to all of these notes. If the energy rise occurs in the middle of a note object, the note object is split at that point and understood from there on as a new note object. If no corresponding note object is found at the time of the event-like object, the event-like object is understood as an isolated percussive event.
Finding the note-like objects should advantageously be followed by an evaluation step. When searching for note objects in the described iteration, more objects are usually found than are musically plausibly present. Therefore, the set of found note objects is finally checked against various plausibility criteria, and insufficiently plausible note objects are removed where appropriate. One plausibility criterion is, for example, the relative energy and the masking. In general, the described iterative procedure finds too many small note objects with too little energy. It is therefore examined how much energy the note has in relation to the total energy in its time range. If it possesses too little relative energy, it can be removed.
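The relative-energy plausibility check can be expressed as a simple filter; the 5% threshold and the flat data layout are illustrative assumptions, not values from the text:

```python
def filter_by_relative_energy(notes, total_energy_in_range, min_share=0.05):
    """Keep only note objects whose energy is a sufficient share of the total
    energy present in their time range; weak spurious notes produced by the
    iteration are removed. 'notes' maps note id -> (energy, time range id)."""
    kept = {}
    for note_id, (energy, time_range) in notes.items():
        if energy / total_energy_in_range[time_range] >= min_share:
            kept[note_id] = notes[note_id]
    return kept

# "n2" carries only 1% of the energy in its time range and is discarded.
notes = {"n1": (5.0, "t0"), "n2": (0.1, "t0"), "n3": (2.0, "t1")}
totals = {"t0": 10.0, "t1": 10.0}
plausible = filter_by_relative_energy(notes, totals)
```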
Sometimes objects are also identified as independent notes that are actually overtones of another existing note. Here it can be examined, for example, whether the higher note has an independent course in pitch, amplitude and duration, or whether in these parameters it essentially runs like a lower note. If the latter is the case, the object can be removed or merged into the lower note.
Further evaluations can be made on musical grounds. If, for example, a note object stands very isolated in its pitch region (very high or very low, with no other notes nearby), it is musically unlikely. If, for example, a note joins with other notes in its pitch and temporal neighbourhood to form an ascending or descending line, it is musically very probable, even if it is otherwise rather weak, and so on. All these criteria can be modelled mathematically and weighted, for example, in order to arrive at as plausible a set of note objects as possible.
This identification step can also be followed by an intervention by the user, to whom the found note objects are displayed graphically in a suitable manner, e.g. as in
The automatic identification can optionally be improved by storing the notes of the recorded piece of music, so that the above method specifically attempts to find fundamental tones that correspond to the frequencies of the stored notes. This can be done, for example, by evaluating a stored MIDI file containing the notes of the recorded composition. Alternatively, support tracks can be recorded simultaneously with the recording of the overall signal actually used, e.g. by individually miking the participating instrumentalists or singers or, in the case of a guitar, by picking up each string individually. In the resulting, more distinctly monophonic signal of the individual voices, the desired notes of the overall signal can be identified more unambiguously, and the overall sound can thus be better separated for processing.
After the identification of the individual objects involved in the overall sound has been completed, the sonic decomposition of the overall sound can take place in a next step. The most accurate possible determination of the existing note objects, their pitch and amplitude progression and the character of their onset is a factor influencing the quality of the result of the sonic decomposition. In the sonic decomposition described below, the overall sound is decomposed into individual sound objects only as far as is necessary for the resynthesis of the new overall sound. If, for example, the user touches only a single note in a found complex chord and changes its pitch, then only the signal of this one note has to be extracted and subtracted from the original signal. The more notes are changed, the more individual signals are generated. Each is then a monophonic periodic signal and can be modified, summed and played back using known methods for time- and pitch-independent reproduction.
In a first step of the sound decomposition, the event-like objects are extracted from the original signal. If the original signal were decomposed into the individual signals belonging to the individual note objects, the event-like locations in the time signal would be smeared by the division of the frequency spectrum. It therefore makes sense to first separate the locations of the event-like objects from the time signal and to perform the decomposition into the individual signals of the note objects on the resulting residual signal. However, this optional method step can also be omitted.
To separate the event-like objects, a modified time signal is first generated in which the tonal components are subtracted as completely as possible. For this purpose, in the frequency domain, the magnitudes of all bins are multiplied by the noisiness values of the bins, and a time signal is generated again via the FFT from these new magnitudes and the original phases. Optionally, the factors for the magnitudes can also include factors for attenuating the lower signal components, since the higher components are often more relevant for the event-like objects. From this new noise-like time signal, at the points where event-like objects were found in the identification step described above, the time signal of the event-like objects is cut out with a suitable window function, e.g. with a short rise time of about 0.005 s and a decay time of about 0.05 s. These short time signals of the event-like objects are subtracted from the original time signal and, for example, stored separately.
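The asymmetric extraction window (short rise, longer decay) and the cut-out step can be sketched as follows. This is only an illustration of the idea; the raised-cosine ramp shapes, function names and the exact handling of signal boundaries are assumptions:

```python
import numpy as np

# Sketch of the asymmetric extraction window: ~0.005 s rise, ~0.05 s decay.
# Ramp shapes (raised cosine) are an assumption, not specified in the text.
def event_window(sample_rate, rise_s=0.005, decay_s=0.05):
    n_rise = int(rise_s * sample_rate)
    n_decay = int(decay_s * sample_rate)
    rise = 0.5 - 0.5 * np.cos(np.pi * np.arange(n_rise) / n_rise)    # 0 -> ~1
    decay = 0.5 + 0.5 * np.cos(np.pi * np.arange(n_decay) / n_decay) # 1 -> ~0
    return np.concatenate([rise, decay])

# Cut the event out of the noise-like signal at sample position `onset`
# and subtract it from the original time signal.
def extract_event(noisy, original, onset, sample_rate):
    w = event_window(sample_rate)
    w = w[:max(0, len(noisy) - onset)]          # clip at the signal end
    seg = noisy[onset:onset + len(w)] * w
    event = np.zeros_like(original, dtype=float)
    event[onset:onset + len(seg)] = seg
    residual = original - event
    return event, residual
```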
A separation of the note objects from the original signal then follows. The division of the original signal (from which the event-like components have been subtracted) into the individual sounds of the note objects takes place in the frequency domain. For this purpose, the original signal, after its modification by the separation of the event objects (see above), is first transformed anew into the frequency domain.
The division into the individual sounds of the note objects is based on the fact that each note object in each time slice registers a "claim" to a spectral share of the total signal F(f, t, E). This claim is represented mathematically by spectral share factors, which are calculated for each note object from a spectral share function, obtained, for example, from a model of the sound of a single note. This model can simply be predefined, or it can be modelled on the real sound of an instrument if the instrument from which the note object originates is known. In the present example, the model is based on the following components: it is assumed that, for the fundamental progression of the note object, overtones exist as spectral components at integer multiples of the fundamental frequency. It is further assumed that the amplitudes of the overtones of a fundamental follow an overtone model. In the simplest case, this can be the decrease of the amplitude with the reciprocal of the overtone number. However, the overtone model can also reproduce an overtone amplitude progression derived from an empirical sound. Finally, it is assumed that the overtone amplitudes are related to the progression of the fundamental energy of the note object. In the simplest case this can be assumed to be proportional to the fundamental energy, but a different relationship can also be derived from an empirical sound.
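The simplest overtone model mentioned above (partials at integer multiples of the fundamental, amplitudes falling off with the reciprocal of the partial number) can be written down in a few lines. A minimal sketch with illustrative names:

```python
# Minimal sketch of the simplest assumed single-note sound model:
# partials at k * f0 with relative amplitude 1/k.
def note_partials(f0, n_partials=8):
    """Return (frequency, relative amplitude) pairs for one note object."""
    return [(k * f0, 1.0 / k) for k in range(1, n_partials + 1)]
```

An empirically derived model would replace the `1/k` term with measured per-partial amplitudes for the instrument in question.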
Based on these assumptions, a spectral share function is specified, which may, for example, differ for different instruments, and the spectral share factors are calculated for each note object in each time slice, i.e. the claims that this object has on each bin.
If several share functions are stored, it can be provided, for example, that the user selects one of them. An automatic selection can also take place, e.g. when the user enters which instrument the note object was played on, or when it is automatically recognised that the note object was played on a particular instrument, which can follow, for example, from the overtone amplitude ratios of a note object corresponding to a stored share function.
The magnitude of the calculated share factors depends on the partial frequencies and the amplitudes of the partials, which follow, for example, from the underlying model of the sound of a single note. It further depends on the proximity or distance of the partial frequency to the instantaneous frequency of the respective bin. This distance dependence can be expressed, for example, by a weighting curve in the frequency domain, which should be wide enough to permit even slight deviations from the given frequency. On the other hand, the weighting curve should be narrow enough in its central region that the overtone share factors of different simultaneously sounding notes with different fundamental pitches are separated sufficiently well and the overtones can be assigned to the correct note. A suitable weighting curve for evaluating the distance of the frequencies can be, for example, a von Hann window raised to the fourth power whose total width corresponds, for example, to two semitones.
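The described weighting curve can be sketched directly: a Hann window raised to the fourth power, spanning two semitones in total, evaluated at the log-frequency distance between a partial and a bin's instantaneous frequency. The function name and the semitone distance measure are illustrative assumptions:

```python
import math

# Sketch of the weighting curve: Hann window raised to the 4th power,
# total width of two semitones (values as given in the text above).
def partial_weight(partial_freq, bin_freq):
    """Weight for the distance between a partial frequency and a bin's
    instantaneous frequency, measured in semitones."""
    if bin_freq <= 0 or partial_freq <= 0:
        return 0.0
    dist = abs(12.0 * math.log2(bin_freq / partial_freq))  # semitones
    if dist >= 1.0:                        # half of the two-semitone width
        return 0.0
    hann = 0.5 * (1.0 + math.cos(math.pi * dist))  # 1 at centre, 0 at edge
    return hann ** 4
```

Raising the window to the fourth power keeps its full two-semitone support (tolerating slight detuning) while making the central lobe much narrower, which is exactly the trade-off the text describes.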
Once all note objects found to be sounding in the current time slice have registered their claims on all bins via the share factors, the sum of the share factors of all notes is normalised to 1 for each bin. For each note object, a separate time signal with the duration of this note object is created. For each time slice, the magnitudes, or another suitable energy-like value, of all bins are divided among the note objects according to the normalised share factors. These shares of the individual note objects in the frequency domain are transformed back into the time domain with the original phases via the FFT, and the results are accumulated onto the individual time signals of the note objects.
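The per-bin normalisation and division of the magnitudes can be sketched as follows. The data layout (`claims[note][bin]` holding the raw share factors) is an assumption for illustration:

```python
# Sketch: divide one time slice's bin magnitudes among the note objects.
# Each bin's claims are normalised to sum to 1, then the bin magnitude is
# split according to the normalised share factors.
def split_magnitudes(magnitudes, claims):
    """claims: dict mapping note id -> list of raw share factors per bin."""
    n_bins = len(magnitudes)
    shares = {note: [0.0] * n_bins for note in claims}
    for b in range(n_bins):
        total = sum(c[b] for c in claims.values())
        if total <= 0:
            continue  # a bin claimed by no note stays in the residual
        for note, c in claims.items():
            shares[note][b] = magnitudes[b] * c[b] / total
    return shares
```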
Since the magnitude shares or other energy-like shares have previously been changed, the ends of the signal in the time windows are no longer faded out to 0 after back-transformation into the time domain, which leads to undesirable artefacts. The result of the back-transformation should therefore be windowed again. For this purpose, it is expedient to take the square root of the values of the window function actually provided and to apply this root window both before the FFT and after the inverse FFT.
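The square-root windowing trick can be sketched per frame: the product of the two root windows restores the full analysis window, so the frame ends still fade to zero even after the magnitudes were modified in the frequency domain. A Hann window is assumed here purely for illustration:

```python
import numpy as np

# Sketch: apply sqrt(window) before the FFT and again after the inverse FFT,
# so the combined effect equals the intended full window even when the
# spectrum has been modified in between. Hann is an assumed window choice.
def sqrt_hann(n):
    return np.sqrt(np.hanning(n))

def process_frame(frame, modify):
    w = sqrt_hann(len(frame))
    spectrum = np.fft.rfft(frame * w)       # analysis with the root window
    spectrum = modify(spectrum)             # arbitrary magnitude changes
    out = np.fft.irfft(spectrum, len(frame))
    return out * w                          # synthesis with the root window
```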
Finally, the note objects are merged with the event-like components. It was already described above that the event-like objects were assigned to the note objects and that a time signal was generated for the event-like objects. This time signal can now be added to the beginning of the note objects to which event-like objects were assigned. If several note objects have been assigned to one event-like object, because it was assumed that they were struck simultaneously, the amplitude of the time signal of the event-like object is distributed among the assigned note objects. This can expediently be done in proportion to the energies of the note objects themselves, or on the basis of the assumed instrument model.
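Distributing one event signal among its assigned notes in proportion to the note energies can be sketched briefly; the names are illustrative:

```python
# Sketch: split an event object's time signal among its assigned note
# objects, weighted by the notes' energies (the simple case from the text).
def distribute_event(event_signal, note_energies):
    """event_signal: list of samples; note_energies: dict note id -> energy."""
    total = sum(note_energies.values())
    if total <= 0:
        return {}
    return {note: [x * e / total for x in event_signal]
            for note, e in note_energies.items()}
```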
Event-like objects to which no note objects have been assigned can be made available, with their extracted time signal, as independent percussive objects.
Once a time signal has been generated for all found note objects and the event objects assigned to them, these time signals of all notes are subtracted from the original signal. Because the sound model underlying the sound division assumed that the note objects consist essentially of partials that are approximately integer multiples of a fundamental frequency, not the entire sound, and thus not the entire time signal, will have been distributed among the individual objects. Therefore, after subtracting the time signals of all individual objects from the original signal, a residual signal remains that contains the more noise-like components. This residual signal can simply be played back along during resynthesis, or it can be made available to the user, as a whole or decomposed into further individual objects, for further processing.
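The residual is simply the original minus the accumulated object signals; a minimal sketch with assumed names:

```python
import numpy as np

# Sketch: subtract the accumulated time signals of all note and event
# objects from the original; what remains is the noise-like residual.
def residual_signal(original, object_signals):
    rest = np.array(original, dtype=float)
    for sig in object_signals:
        rest -= sig
    return rest
```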
The temporal sequence of the individual method steps mentioned above can also be chosen differently. For example, the assignment of event objects to note objects can also be carried out only immediately before the resynthesis. The same applies analogously to other method steps, such as the identification of the event and note objects or the calculation of the share factors.
Claims (31)
- A method for sound-object-oriented analysis and note-object-oriented processing of a polyphonic digitised sound recording which is provided as a time signal F(A, t), comprising the following analysis and processing steps:- reading the time signal F(A, t) section by section, with use being made of a window function and overlapping windows,- Fourier transform of the read signal to the frequency domain, more particularly by executing a discrete Fourier transform,- calculating an energy value E for each bin (4) from the frequency amplitude resulting from the Fourier transform, more particularly by squaring the real and imaginary contents or forming an energy value derived therefrom,- generating a three-dimensional function F(t, f, E),- identifying event objects (1, 3) which are characterised in that they have an abrupt amplitude rise and are not periodic at this point and which, in most of the cases, are quickly fading,- identifying note objects the main property of which is that they have a tone pitch or a tone pitch pattern for a perceivable duration and that the curve shape of their time signal, in essence, has a periodic or quasi-periodic behaviour,- comparing the point in time of the occurrence of event objects (1, 3) and note objects and allocating event objects (1, 3) to note objects in case of an occurrence that is plausible in relation to time,- calculating spectral content factors for each note object,- allocating signal contents of the frequency signal F(f, t, E) to found note objects by means of the calculated content factors,- reverse transforming the frequency signal contents allocated to a note object into a time signal,- graphical representation of the note objects and/or event objects (1, 3) in a time/frequency graph on a monitor,- processing one or a plurality of note objects under user control or in an automated manner,- storing the time signals of processed note objects,- reproducing the stored time signals of processed note objects together with 
the time signal that is reduced by the time signal allocated to a note object.
- The method according to Claim 1, characterised in that a function F'(t, f, E') is calculated from the function F(t, f, E) with energy values E' that are the sum total of all energy values E at a time t and at a fundamental frequency f and all multiples thereof.
- The method according to Claim 2, characterised in that the energy values with the multiples of the fundamental frequency are added after weighting with a factor other than 1.
- The method according to Claim 2 or 3, characterised in that the following method steps are carried out to identify the note objects:- determining an energy maximum in function F'(f, t, E'),- determining a coherent value field belonging to the maximum,- allocating the found value field to one note object each.
- The method according to Claim 4, characterised in that the energy values E of the value field allocated to the note object are subtracted from function F(t, f, E).
- The method according to Claim 5, characterised in that the energy values E are only subtracted at the level G*E, wherein 0 < G < 1 is applicable to factor G.
- The method according to Claim 6, characterised in that factor G is a parameter that can be modified by the user.
- The method according to Claim 5, 6 or 7, characterised in that the search for the maximum is continued at a function from which the energy values have been subtracted and at a function calculated therefrom.
- The method according to any one of the preceding claims, characterised in that the search for the maximum is carried out iteratively until a termination criterion is reached.
- The method according to Claim 9, characterised in that a total energy value Etot is calculated for function F(t, f, E) and the iteration is terminated as soon as a certain content H * Etot of this total value has been allocated to the found note objects, in particular when this content is in excess of 90 per cent.
- The method according to Claim 10, characterised in that factor H is a parameter that can be modified by the user.
- The method according to any one of the preceding claims, characterised in that, in a subsequent automated rectification process, an identified note object is discarded when one or a plurality of the following criteria is fulfilled:- the energy of the found note object is very low as compared with the total energy,- the tone pitch and amplitude pattern of the note object essentially corresponds to that of another note object of lower frequency, in particular when the lower frequency is possible as the fundamental frequency,- the frequency spacing of a note object from the other note objects is very large.
- The method according to any one of the preceding claims, characterised in that, in a post-processing step, a user separates, connects and/or deletes automatically identified note objects.
- The method according to any one of the preceding claims, characterised in that an instantaneous frequency is calculated for each bin (4) from the phase differences of neighbouring bins (4), said instantaneous frequency being used as the frequency of the bin (4) in function F(t, f, E) or F'(t, f, E').
- The method according to any one of the preceding claims, characterised in that a tonality value and/or a noise value is calculated for each bin (4) in order to find the event objects (1, 3).
- The method according to Claim 15, characterised in that the energy values of each bin (4) are weighted with the tonality value.
- The method according to any one of the preceding claims, characterised in that filed notes are accessed for identification of the note objects.
- The method according to any one of the preceding claims, characterised in that the found event objects (1, 3) are extracted from the time signal F(A, t) and the sound decomposition is carried out at the remaining signal.
- The method according to Claim 18, characterised in that the event objects (1, 3) are stored separately.
- The method according to any one of the preceding claims, characterised in that the spectral content factors of a note object are calculated from a filed spectral content function.
- The method according to Claim 20, characterised in that the filed spectral content function is the mathematical image of a sound model of the note object, the sound model formulating one or a plurality of the following postulates:- there are spectral components on integer multiples of the fundamental tone frequency, i.e., harmonics on harmonic frequencies, the spectral components being associated with a fundamental tone having a fundamental tone frequency,- the amplitude pattern of the harmonics associated with a fundamental tone follows principles which result from a harmonic model or a harmonic amplitude pattern that has been determined empirically,- the harmonic amplitudes have a fixed ratio to the pattern of the fundamental tone energy.
- The method according to Claim 20, characterised in that the filed spectral content function is the mathematical image of the sound of a note played with a specific instrument.
- The method according to Claim 20, characterised in that a plurality of spectral content functions is filed, more particularly different content functions for a plurality of instruments, in particular such content functions according to Claim 22.
- The method according to Claim 23, characterised in that one out of the plurality of content functions is selected by the user.
- The method according to Claim 23, characterised in that one out of the plurality of content functions is automatically allocated to a note object when the user enters the instrument with which the note was played or when the user automatically detects such an instrument.
- The method according to any one of Claims 20 to 25, characterised in that, using a window function, the spectral content function carries out a weighting within the frequency range at a predefined frequency width.
- The method according to any one of the preceding claims, characterised in that a residual signal is calculated by subtracting all of the time signals allocated to the note objects and the event objects (1, 3) from the original time signal.
- The method according to Claim 27, characterised in that the residual signal is subjected to a further identification of note or event objects (1, 3).
- The method according to any one of the preceding claims, characterised in that, for the purpose of sound reproduction after a note object has been processed, the sound content of the note object is subtracted from the total sound and the differential signal thus obtained is played together with the sound content of the note object.
- The method according to Claim 27, characterised in that, during sound reproduction after a note object has been processed, the residual signal is also reproduced by a user.
- A computer programme with a programme code for applying the method according to any one of the preceding claims if the computer programme is running on a computer.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102008013172A DE102008013172B4 (en) | 2008-03-07 | 2008-03-07 | Method for sound-object-oriented analysis and notation-oriented processing of polyphonic sound recordings |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2099024A1 EP2099024A1 (en) | 2009-09-09 |
EP2099024B1 true EP2099024B1 (en) | 2016-11-30 |
Family
ID=40778208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP09001575.1A Active EP2099024B1 (en) | 2008-03-07 | 2009-02-05 | Method for acoustic object-oriented analysis and note object-oriented processing of polyphonic sound recordings |
Country Status (4)
Country | Link |
---|---|
US (1) | US8022286B2 (en) |
EP (1) | EP2099024B1 (en) |
JP (1) | JP2009217260A (en) |
DE (1) | DE102008013172B4 (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7459624B2 (en) | 2006-03-29 | 2008-12-02 | Harmonix Music Systems, Inc. | Game controller simulating a musical instrument |
US8678895B2 (en) | 2007-06-14 | 2014-03-25 | Harmonix Music Systems, Inc. | Systems and methods for online band matching in a rhythm action game |
US8678896B2 (en) | 2007-06-14 | 2014-03-25 | Harmonix Music Systems, Inc. | Systems and methods for asynchronous band interaction in a rhythm action game |
US8494842B2 (en) * | 2007-11-02 | 2013-07-23 | Soundhound, Inc. | Vibrato detection modules in a system for automatic transcription of sung or hummed melodies |
US8465366B2 (en) | 2009-05-29 | 2013-06-18 | Harmonix Music Systems, Inc. | Biasing a musical performance input to a part |
US7923620B2 (en) * | 2009-05-29 | 2011-04-12 | Harmonix Music Systems, Inc. | Practice mode for multiple musical parts |
US8026435B2 (en) * | 2009-05-29 | 2011-09-27 | Harmonix Music Systems, Inc. | Selectively displaying song lyrics |
US20100304811A1 (en) * | 2009-05-29 | 2010-12-02 | Harmonix Music Systems, Inc. | Scoring a Musical Performance Involving Multiple Parts |
US7935880B2 (en) | 2009-05-29 | 2011-05-03 | Harmonix Music Systems, Inc. | Dynamically displaying a pitch range |
US20100304810A1 (en) * | 2009-05-29 | 2010-12-02 | Harmonix Music Systems, Inc. | Displaying A Harmonically Relevant Pitch Guide |
US8076564B2 (en) * | 2009-05-29 | 2011-12-13 | Harmonix Music Systems, Inc. | Scoring a musical performance after a period of ambiguity |
US8449360B2 (en) | 2009-05-29 | 2013-05-28 | Harmonix Music Systems, Inc. | Displaying song lyrics and vocal cues |
US8080722B2 (en) * | 2009-05-29 | 2011-12-20 | Harmonix Music Systems, Inc. | Preventing an unintentional deploy of a bonus in a video game |
US8017854B2 (en) * | 2009-05-29 | 2011-09-13 | Harmonix Music Systems, Inc. | Dynamic musical part determination |
US7982114B2 (en) * | 2009-05-29 | 2011-07-19 | Harmonix Music Systems, Inc. | Displaying an input at multiple octaves |
US9981193B2 (en) | 2009-10-27 | 2018-05-29 | Harmonix Music Systems, Inc. | Movement based recognition and evaluation |
US10357714B2 (en) | 2009-10-27 | 2019-07-23 | Harmonix Music Systems, Inc. | Gesture-based user interface for navigating a menu |
EP2362375A1 (en) | 2010-02-26 | 2011-08-31 | Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. | Apparatus and method for modifying an audio signal using harmonic locking |
US8568234B2 (en) | 2010-03-16 | 2013-10-29 | Harmonix Music Systems, Inc. | Simulating musical instruments |
EP2579955B1 (en) | 2010-06-11 | 2020-07-08 | Harmonix Music Systems, Inc. | Dance game and tutorial |
US8562403B2 (en) | 2010-06-11 | 2013-10-22 | Harmonix Music Systems, Inc. | Prompting a player of a dance game |
US9358456B1 (en) | 2010-06-11 | 2016-06-07 | Harmonix Music Systems, Inc. | Dance competition game |
US9024166B2 (en) | 2010-09-09 | 2015-05-05 | Harmonix Music Systems, Inc. | Preventing subtractive track separation |
US8584198B2 (en) * | 2010-11-12 | 2013-11-12 | Google Inc. | Syndication including melody recognition and opt out |
US8584197B2 (en) * | 2010-11-12 | 2013-11-12 | Google Inc. | Media rights management using melody identification |
US10984768B2 (en) * | 2016-11-04 | 2021-04-20 | International Business Machines Corporation | Detecting vibrato bar technique for string instruments |
CN110428839B (en) * | 2019-07-22 | 2022-05-31 | 深圳市同洲电子股份有限公司 | Content matching method based on voice recognition |
US11798523B2 (en) * | 2020-01-31 | 2023-10-24 | Soundtrap Ab | Systems and methods for generating audio content in a digital audio workstation |
CN112420071B (en) * | 2020-11-09 | 2022-12-02 | 上海交通大学 | Constant Q transformation based polyphonic electronic organ music note identification method |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5248845A (en) * | 1992-03-20 | 1993-09-28 | E-Mu Systems, Inc. | Digital sampling instrument |
US5536902A (en) * | 1993-04-14 | 1996-07-16 | Yamaha Corporation | Method of and apparatus for analyzing and synthesizing a sound by extracting and controlling a sound parameter |
US5567901A (en) | 1995-01-18 | 1996-10-22 | Ivl Technologies Ltd. | Method and apparatus for changing the timbre and/or pitch of audio signals |
US5792971A (en) * | 1995-09-29 | 1998-08-11 | Opcode Systems, Inc. | Method and system for editing digital audio information with music-like parameters |
US5886276A (en) * | 1997-01-16 | 1999-03-23 | The Board Of Trustees Of The Leland Stanford Junior University | System and method for multiresolution scalable audio signal encoding |
US6140568A (en) * | 1997-11-06 | 2000-10-31 | Innovative Music Systems, Inc. | System and method for automatically detecting a set of fundamental frequencies simultaneously present in an audio signal |
US6057502A (en) * | 1999-03-30 | 2000-05-02 | Yamaha Corporation | Apparatus and method for recognizing musical chords |
US6836761B1 (en) * | 1999-10-21 | 2004-12-28 | Yamaha Corporation | Voice converter for assimilation by frame synthesis with temporal alignment |
US6323412B1 (en) * | 2000-08-03 | 2001-11-27 | Mediadome, Inc. | Method and apparatus for real time tempo detection |
DE10117870B4 (en) | 2001-04-10 | 2005-06-09 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and apparatus for transferring a music signal into a score-based description and method and apparatus for referencing a music signal in a database |
JP2005292207A (en) * | 2004-03-31 | 2005-10-20 | Ulead Systems Inc | Method of music analysis |
DE102004049478A1 (en) * | 2004-10-11 | 2006-04-20 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and device for smoothing a melody line segment |
DE102004049457B3 (en) * | 2004-10-11 | 2006-07-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and device for extracting a melody underlying an audio signal |
DE102004049477A1 (en) * | 2004-10-11 | 2006-04-20 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and device for harmonic conditioning of a melody line |
US8093484B2 (en) * | 2004-10-29 | 2012-01-10 | Zenph Sound Innovations, Inc. | Methods, systems and computer program products for regenerating audio performances |
US7598447B2 (en) * | 2004-10-29 | 2009-10-06 | Zenph Studios, Inc. | Methods, systems and computer program products for detecting musical notes in an audio signal |
WO2007119221A2 (en) * | 2006-04-18 | 2007-10-25 | Koninklijke Philips Electronics, N.V. | Method and apparatus for extracting musical score from a musical signal |
- 2008
  - 2008-03-07 DE DE102008013172A patent/DE102008013172B4/en not_active Expired - Fee Related
- 2009
  - 2009-02-05 EP EP09001575.1A patent/EP2099024B1/en active Active
  - 2009-02-20 JP JP2009037291A patent/JP2009217260A/en active Pending
  - 2009-03-05 US US12/398,707 patent/US8022286B2/en active Active
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
US8022286B2 (en) | 2011-09-20 |
DE102008013172B4 (en) | 2010-07-08 |
EP2099024A1 (en) | 2009-09-09 |
US20090241758A1 (en) | 2009-10-01 |
JP2009217260A (en) | 2009-09-24 |
DE102008013172A1 (en) | 2009-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2099024B1 (en) | Method for acoustic object-oriented analysis and note object-oriented processing of polyphonic sound recordings | |
DE10232916B4 (en) | Apparatus and method for characterizing an information signal | |
EP1371055B1 (en) | Device for the analysis of an audio signal with regard to the rhythm information in the audio signal using an auto-correlation function | |
EP1797552B1 (en) | Method and device for the extraction of a melody on which an audio signal is based | |
DE10117870B4 (en) | Method and apparatus for transferring a music signal into a score-based description and method and apparatus for referencing a music signal in a database | |
EP1407446B1 (en) | Method and device for characterising a signal and for producing an indexed signal | |
WO2002073592A2 (en) | Method and device for characterising a signal and method and device for producing an indexed signal | |
EP1388145B1 (en) | Device and method for analysing an audio signal in view of obtaining rhythm information | |
WO2006039995A1 (en) | Method and device for harmonic processing of a melodic line | |
WO2005122135A1 (en) | Device and method for converting an information signal into a spectral representation with variable resolution | |
EP1280138A1 (en) | Method for audio signals analysis | |
DE10157454B4 (en) | A method and apparatus for generating an identifier for an audio signal, method and apparatus for building an instrument database, and method and apparatus for determining the type of instrument | |
EP1787283A1 (en) | Extraction of a melody on which an audio signal is based | |
DE102004028693B4 (en) | Apparatus and method for determining a chord type underlying a test signal | |
EP1758096A1 (en) | Method and Apparatus for Pattern Recognition in Acoustic Recordings | |
EP1377924B1 (en) | Method and device for extracting a signal identifier, method and device for creating a database from signal identifiers and method and device for referencing a search time signal | |
DE102004033867A1 (en) | Method and device for the rhythmic preparation of audio signals | |
EP1743324B1 (en) | Device and method for analysing an information signal | |
DE102009029615B4 (en) | Method and arrangement for processing audio data and a corresponding computer program and a corresponding computer-readable storage medium | |
DE102009019843A1 (en) | Method for synthesizing sound of e.g. musical instrument, based on frequency modulation based synthesis-algorithm, involves selecting values for parameter of synthesis-algorithm, where algorithm is adjusted by selected values for parameter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA RS |
|
17P | Request for examination filed |
Effective date: 20100225 |
|
17Q | First examination report despatched |
Effective date: 20100326 |
|
AKX | Designation fees paid |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20160901 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 850462 Country of ref document: AT Kind code of ref document: T Effective date: 20161215 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 502009013421 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 9 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170228 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170301 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170228 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170330 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170228 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 502009013421 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170228 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170228 |
|
26N | No opposition filed |
Effective date: 20170831 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170205 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 10 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20170228 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170205 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MM01 Ref document number: 850462 Country of ref document: AT Kind code of ref document: T Effective date: 20170205 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170205 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20090205 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161130 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 502009013421 Country of ref document: DE Representative=s name: MEISSNER BOLTE PATENTANWAELTE RECHTSANWAELTE P, DE Ref country code: DE Ref legal event code: R081 Ref document number: 502009013421 Country of ref document: DE Owner name: CELEMONY SOFTWARE GMBH, DE Free format text: FORMER OWNER: NEUBAECKER, PETER, 81371 MUENCHEN, DE Ref country code: DE Ref legal event code: R082 Ref document number: 502009013421 Country of ref document: DE Representative=s name: EISENFUEHR SPEISER PATENTANWAELTE RECHTSANWAEL, DE |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20200528 AND 20200603 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170330 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 502009013421 Country of ref document: DE Representative=s name: EISENFUEHR SPEISER PATENTANWAELTE RECHTSANWAEL, DE |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20230217 Year of fee payment: 15 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240312 Year of fee payment: 16 Ref country code: GB Payment date: 20240222 Year of fee payment: 16 |