GB2560391A - Extracting audio characteristics from audio signals - Google Patents

Extracting audio characteristics from audio signals

Info

Publication number
GB2560391A
GB2560391A GB1708426.0A GB201708426A GB2560391A GB 2560391 A GB2560391 A GB 2560391A GB 201708426 A GB201708426 A GB 201708426A GB 2560391 A GB2560391 A GB 2560391A
Authority
GB
United Kingdom
Prior art keywords
audio
audio signal
filter
database
filter rules
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1708426.0A
Other versions
GB2560391B (en)
GB201708426D0 (en)
Inventor
Clark Robin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Allen & Heath Ltd
Original Assignee
Allen & Heath Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Allen & Heath Ltd filed Critical Allen & Heath Ltd
Priority to GB1708426.0A priority Critical patent/GB2560391B/en
Publication of GB201708426D0 publication Critical patent/GB201708426D0/en
Publication of GB2560391A publication Critical patent/GB2560391A/en
Application granted granted Critical
Publication of GB2560391B publication Critical patent/GB2560391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/02Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H1/06Circuits for establishing the harmonic content of tones, or other arrangements for changing the tone colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/0008Associated control or indicating means
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/36Accompaniment arrangements
    • G10H1/40Rhythm
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/076Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of timing, tempo; Beat detection
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/005Data structures for use in electrophonic musical devices; Data structures including musical parameters derived from musical analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/121Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2250/00Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/025Envelope processing of music signals in, e.g. time domain, transform domain or cepstrum domain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/02Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • H04H60/04Studio equipment; Interconnection of studios

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

A method of operating an audio processing apparatus to automatically extract characteristics of audio signals created by an audio source, comprising the steps of providing an audio processing apparatus which receives 20 an audio signal from an audio source. The apparatus obtains the tempo of the audio signal and the type of audio source which created the signal. The tempo and type may be determined by analysis of the signal or by manual user entry. The type of audio source may be a voice or a musical instrument such as a guitar. Next, the apparatus accesses 40 a database and retrieves from the database one or more filter rules 50, such as a time domain envelope filter, the number and identity of the one or more filter rules being based on the tempo and type of audio source. The one or more filter rules are applied 60 to the audio signal to produce one or more extracted audio characteristics, such as energy characteristics, comprising filtered audio signals 70. An apparatus for carrying out the method is also disclosed. The method may be used when mixing and sound-checking in live performance music environments.

Description

(54) Title of the Invention: Extracting audio characteristics from audio signals
Abstract Title: Extracting audio characteristics from audio signals based on tempo and audio source type (57) A method of operating an audio processing apparatus to 10 automatically extract characteristics of audio signals created by an audio source, comprising the steps of providing an audio processing apparatus which receives 20 an audio signal from an audio source. The apparatus obtains the tempo of the audio signal and the type of audio source which created the signal. The tempo and type may be determined by analysis of the signal or by manual user entry. The type of audio source may be a voice or a musical instrument such as a guitar. Next, the apparatus accesses 40 a database and retrieves from the database one or more filter rules 50, such as a time domain envelope filter, the number and identity of the one or more filter rules being based on the tempo and type of audio source. The one or more filter rules are applied 60 to the audio signal to produce one or more extracted audio characteristics, such as energy characteristics, comprising filtered audio signals 70. An apparatus for carrying out the method is also disclosed. The method may be used when mixing and sound-checking in live performance music environments.
Figure 1 (drawing sheet 1/3)
Figure 2 (drawing sheet 2/3)
Figure 3 (drawing sheet 3/3; Audio Source 210)
Extracting audio characteristics from audio signals
The present invention relates generally to a method of extracting audio characteristics from audio signals and to audio processing apparatus arranged to extract audio characteristics from audio signals, and finds particular, although not exclusive, utility in the processing of live performance music.
Audio processing (mixing) apparatus is used to process incoming audio signals for onward broadcast to an audience. Such apparatus is typically operated by a sound engineer. However, live music mixing during public performance is becoming increasingly complex and demanding for the sound engineer (sound mixer operator).
For instance, the number of audio/instrument channels in typical performances is becoming greater. Furthermore, the number of auditory scene configuration changes during performances is increasing, along with higher expectations of sonic quality (the ‘perfect mix’). In addition, musical equipment technology is becoming more complex and difficult to use, which places more demands on the sound engineer, leading to time pressure and the need to complete tasks in a short space of time during a sound check or during the performance.
It is desirable to have audio mixing apparatus which simplifies the role of the sound engineer. For instance, mixer technology which assists the engineer with some mixing tasks is desirable, freeing the engineer to concentrate on other, possibly more important, creative mixing tasks. It is also advantageous if such apparatus can provide an automatic configuration during sound checks, when time pressure may mean the sound engineer does not have time to configure the more minor instruments.
Finally, there are now musical performance events at which a sound engineer is unable to attend and operate the mixing process. For example, for a mid-week performance in certain venues, or for a rehearsal, it is often the case that an experienced sound engineer is not practically available. Equipment that can provide some form of basic dynamic musical mix (beyond a static configuration) would therefore be a great advantage.
It is therefore desirable to have a mixing audio processing apparatus which may operate autonomously or semi-autonomously such that it may assist a sound engineer.
In order for a mixing audio processing apparatus to operate in this autonomous or semi-autonomous manner, it is necessary to provide it with a set of processing rules so that it processes the incoming signals appropriately. This set of processing rules may be retrievable from a database. However, the audio processing apparatus must be configured such that it may automatically retrieve the appropriate set of processing rules from the database.
The present invention provides a method and apparatus for this purpose.
In a first aspect, the invention provides a method of operating an audio processing apparatus to automatically extract characteristics of audio signals created by an audio source, the method comprising the steps of:
a. providing an audio processing apparatus;
b. the apparatus receiving an audio signal from an audio source;
c. the apparatus determining the tempo of the audio signal;
d. the apparatus determining the type of audio source which created the audio signal;
e. the apparatus accessing a database of filter rules and retrieving from the database one or more filter rules, the number and identity of the one or more filter rules being based on the determination of the tempo and type of audio source;
f. the apparatus applying the one or more filter rules to the audio signal to produce one or more extracted audio characteristics comprising filtered audio signals. In this way, one or more audio signals are filtered such that one or more characteristics are obtainable therefrom. These characteristics may then be analysed by the audio processing apparatus and used as the basis to select processing rules and methods from a database for autonomously processing an original, unfiltered, audio signal.
The tempo may refer to the beats per minute (BPM) of the audio signal. The apparatus may obtain it through manual entry: for instance, the sound engineer may enter a number on a keypad, or use a device which is tapped in time to the beat and which determines the tempo from the frequency of the taps.
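By way of a non-limiting illustration, tempo may be derived from tap timing as in the following Python sketch. The function name and the simple averaging of tap intervals are assumptions made for the example; the disclosure only states that tempo may be determined from the frequency of the taps.

```python
# Illustrative sketch only: estimate beats per minute from the intervals
# between successive taps.  The averaging strategy is an assumption.
def bpm_from_taps(tap_times):
    """Estimate BPM from a list of tap timestamps given in seconds."""
    if len(tap_times) < 2:
        raise ValueError("at least two taps are required")
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    mean_interval = sum(intervals) / len(intervals)  # mean seconds per beat
    return 60.0 / mean_interval

# Example: four taps roughly half a second apart give approximately 120 BPM.
print(bpm_from_taps([0.00, 0.50, 1.01, 1.49]))
```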
Alternatively, the apparatus may obtain the tempo of the audio signal by analysing the received audio signal. In other words, it may autonomously determine the tempo.
Likewise, the apparatus may obtain the type of audio source as a result of it being manually entered, for instance, by an engineer, or it may be obtained by the apparatus analysing the received audio signal.
For instance, such analysis may be used to determine the type of musical instrument, or whether the source is a human voice.
In one example, the name or ID given to the input channel, by a user, may be used by the apparatus to determine the type of audio source.
The audio signal may be split into two or more substantially identical signals and each one of the substantially identical signals is filtered using one of the received filter rules to produce a plurality of extracted audio characteristics comprising a filtered audio signal.
The substantially identical signals may be filtered simultaneously (in parallel).
The one or more filter rules may include a filter envelope controlled by a time constant.
The method may include the step of applying a frequency filter to the audio signal prior to the application of the one or more filter rules. The determination of the frequency filter, and in particular its lower and upper frequency points, may be based on the type of audio source. The apparatus may determine these points autonomously from a database, or a user may enter them manually. This filter may be used to attenuate any low frequencies and/or high frequencies, limiting the frequency bandwidth for that audio source. It is also possible that this filter would not be used for highly transient material, since it would affect the subsequent time domain filtering. In other words, it may be bypassed for particular audio sources/instruments. However, for some audio sources/instruments it would be useful in that it may attenuate ‘out-of-band’ frequency content which may have resulted from another audio source/instrument, such as an adjacent microphone source.
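By way of a non-limiting illustration, such a band-limiting pre-filter might be realised as in the following Python sketch (using SciPy). The per-source cut-off frequencies, the bypass flag and the function names are assumptions made for the example, not values or features taken from this disclosure.

```python
# Illustrative sketch only: an optional band-limiting pre-filter applied
# before the time-domain envelope filters.
from scipy.signal import butter, sosfilt

BAND_LIMITS_HZ = {            # hypothetical lower/upper frequency points
    "vocal":  (80.0, 12000.0),
    "guitar": (70.0, 8000.0),
    "bass":   (30.0, 2500.0),
}

def pre_filter(signal, source_type, fs, bypass=False):
    """Attenuate out-of-band energy for the given audio source type."""
    if bypass or source_type not in BAND_LIMITS_HZ:
        return signal         # e.g. bypassed for highly transient material
    low, high = BAND_LIMITS_HZ[source_type]
    sos = butter(2, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, signal)
```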
In a second aspect, the invention provides audio processing apparatus including one or more audio signal receiving means, audio processing means arranged to process the audio signal, a database of filter rules, and database communication means arranged to communicate with the database, wherein the database is arranged to send one or more filter rules to the processor, the number and identity of the one or more filter rules being based on the tempo and type of audio source, and wherein the audio processing means is arranged to filter the received audio signal according to the one or more filter rules received from the database to produce one or more extracted audio characteristics comprising filtered audio signals.
The apparatus may include audio signal analysing means arranged to determine the tempo of a received audio signal. It may include means for the tempo to be manually input. It may include audio signal analysing means arranged to determine the type of audio source which created the received audio signal. It may include means for determining the type of audio source from the channel ID. It may include means for manually inputting the type of audio source.
The audio processing apparatus may further comprise audio splitting means arranged to split the received audio signal into two or more substantially identical signals.
The audio processing means may be arranged to filter each one of the substantially identical signals according to the one or more filter rules retrieved from the database to produce a plurality of extracted audio characteristics comprising filtered audio signals.
The above and other characteristics, features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention. This description is given for the sake of example only, without limiting the scope of the invention. The reference figures quoted below refer to the attached drawings.
Figure 1 is a flow diagram representing a method of operation;
Figure 2 is a series of traces depicting the result of possible filtering actions; and
Figure 3 is a diagram for a method of extracting audio characteristics from audio signals.
The present invention will be described with respect to certain drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. Each drawing may not include all of the features of the invention and therefore should not necessarily be considered to be an embodiment of the invention. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes. The dimensions and the relative dimensions do not correspond to actual reductions to practice of the invention.
Furthermore, the terms first, second, third and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequence, either temporally, spatially, in ranking or in any other manner. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that operation is capable in other sequences than described or illustrated herein.
Moreover, the terms top, bottom, over, under and the like in the description and the claims are used for descriptive purposes and not necessarily for describing relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that operation is capable in other orientations than described or illustrated herein.
It is to be noticed that the term “comprising”, used in the claims, should not be interpreted as being restricted to the means listed thereafter; it does not exclude other elements or steps. It is thus to be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. Thus, the scope of the expression “a device comprising means A and B” should not be limited to devices consisting only of components A and B. It means that with respect to the present invention, the only relevant components of the device are A and B.
Similarly, it is to be noticed that the term “connected”, used in the description, should not be interpreted as being restricted to direct connections only. Thus, the scope of the expression “a device A connected to a device B” should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. “Connected” may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other. For instance, wireless connectivity is contemplated.
Reference throughout this specification to “an embodiment” or “an aspect” means that a particular feature, structure or characteristic described in connection with the embodiment or aspect is included in at least one embodiment or aspect of the present invention. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, or “in an aspect” in various places throughout this specification are not necessarily all referring to the same embodiment or aspect, but may refer to different embodiments or aspects. Furthermore, the particular features, structures or characteristics of any embodiment or aspect of the invention may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments or aspects.
Similarly, it should be appreciated that in the description various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Moreover, the description of any individual drawing or aspect should not necessarily be considered to be an embodiment of the invention. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Furthermore, while some embodiments described herein include some features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form yet further embodiments, as will be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practised without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
In the discussion of the invention, unless stated to the contrary, the disclosure of alternative values for the upper or lower limit of the permitted range of a parameter, coupled with an indication that one of said values is more highly preferred than the other, is to be construed as an implied statement that each intermediate value of said parameter, lying between the more preferred and the less preferred of said alternatives, is itself preferred to said less preferred value and also to each value lying between said less preferred value and said intermediate value.
The use of the term “at least one” may mean only one in certain circumstances.
The principles of the invention will now be described by a detailed description of at least one drawing relating to exemplary features of the invention. It is clear that other arrangements can be configured according to the knowledge of persons skilled in the art without departing from the underlying concept or technical teaching of the invention, the invention being limited only by the terms of the appended claims.
In Figure 1 a flow 10 of method steps is shown. In the first step 20, an audio signal is received by a mixing audio processing apparatus. In the next step 30, the beats per minute (tempo) and type of audio source are determined. The audio processing apparatus communicates with a database in step 40, to retrieve filter rules in step 50 based on the tempo and type of audio source determined in step 30.
The audio processing apparatus is then able to apply the one or more filters to the raw incoming audio signal to produce audio characteristics “extracted” 70 from the audio signal. In this regard, the incoming audio signal may be split into several identical copies, each of which is filtered using one or more filter rules obtained from the database. The filtering of the various signals may be undertaken simultaneously. One or more signals may have more than one filter rule applied consecutively.
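By way of a non-limiting illustration, the retrieval of filter rules and their application to substantially identical copies of the signal might be sketched as follows in Python. The in-memory dictionary standing in for the database, the rule-factory interface and the function names are assumptions made for the example.

```python
# Illustrative sketch only: look up the filter rules for a source type and
# tempo (steps 40/50) and apply each rule to its own copy of the raw signal
# (steps 60/70).
from typing import Callable, Dict, List
import numpy as np

FilterRule = Callable[[np.ndarray], np.ndarray]   # raw signal in, filtered signal out
RuleFactory = Callable[[float], FilterRule]       # BPM in, filter rule out

def retrieve_filter_rules(rule_db: Dict[str, List[RuleFactory]],
                          source_type: str, bpm: float) -> List[FilterRule]:
    """The number and identity of rules depend on the source type, and each
    rule is parameterised by the tempo."""
    return [make_rule(bpm) for make_rule in rule_db.get(source_type, [])]

def extract_characteristics(signal: np.ndarray,
                            rules: List[FilterRule]) -> List[np.ndarray]:
    """Filter substantially identical copies of the raw signal."""
    copies = [signal.copy() for _ in rules]
    return [rule(copy) for rule, copy in zip(rules, copies)]
```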
An example of the filtering is shown in Figure 2. This figure shows a sequence of traces showing time on the x-axis and amplitude on the y-axis. The raw incoming unfiltered signal is at the top 110. The signal is seen to include peaks of amplitude ranging from small to medium to high.
The series of different filters described below is seen to remove the indication of individual notes, smoothing the signal. This is not a concern, because the filtered results are not intended to be played to an audience.
The second trace 120 shows a time domain (sub-beat/transient envelope) filter having been applied, which smooths out unwanted amplitude energy that is not connected with the sub-beat transient region of the particular instrument type.
The third trace 130 shows a time domain (beat envelope) filter having been applied. This smooths out the amplitude transient energy and extracts beat energy characteristics related to the instrument type and tempo of the music. Any sudden changes (sharp spikes) in amplitude are eliminated.
The fourth trace 140 shows a time domain (bar envelope) filter having been applied. This smooths out the amplitude transient energy and extracts bar energy characteristics related to the instrument type and tempo of the music.
The fifth trace 150 shows a time domain (double bar envelope) filter having been applied. This smooths out the amplitude transient energy and extracts double bar energy characteristics related to the instrument type and tempo of the music. The peaks have been rounded as a result.
The sixth trace 160 shows a time domain (4 bar envelope) filter having been applied. This smooths out the amplitude transient energy and extracts 4 bar energy characteristics related to the instrument type and tempo of the music.
The seventh trace 170 shows a time domain (8 bar envelope) filter having been applied. This smooths out the amplitude transient energy and extracts 8 bar energy characteristics related to the instrument type and tempo of the music.
The eighth trace 180 shows a time domain (passage envelope) filter having been applied. This smooths out the amplitude transient energy and extracts energy characteristics related to passages in the music, taking into account the instrument type and tempo of the music.
Other types of filter rules may be applied.
In Figure 3, an audio signal 210 is fed 205 to a processor 220 which determines the tempo of the signal and the type of audio source which created the signal. This information is passed 225 to a processor 230. The raw signal 210 is also fed 215 to the processor 230. The processor 230 contacts 245 the database 240 and retrieves filter rules using the tempo and type of audio source information. The processor 230 then applies the various filters to the raw signal 210, either consecutively or individually to copies of the signal, to produce filtered signals which may be considered to be extracted characteristics or features thereof. The processor 230 may then send 235 these extracted features to a processor 250, which may use them to control or influence the way the raw signal 210, which it has received 255, is automatically processed for outputting to an audience.
Although shown as separate items it is to be understood that the processors 220, 230, 250 may all be located in a single console, or may even be the same processor.
Likewise, the database 240 may be located in the same console or may be located remotely.
The processor 230 extracts audio characteristics using the set of time domain filters working in parallel. The number of filters is typically between 8 and 18 per channel and depends on the musical instrument type of the input channel. For example, a snare drum is highly percussive and a major contributor to a mix and to the song rhythm, and may therefore require as many as eighteen filters to extract the required features of that particular instrument.
Each filter is fed the raw input channel signal (audio source signal). Each filter outputs a time domain response, or envelope, which is the characteristic feature extraction output passed to the processor 250. Each filter’s envelope may be controlled by a time constant (damping factor). The set of filters resident on the channel has time constants relating to the channel instrument type and to various musical intervals relating to the instrument and the beats per minute (tempo) of the music. For instance, a transient envelope filter may have a time constant of approximately 1 ms, a bar envelope a time constant of approximately 2 seconds, and an 8 bar envelope a time constant of approximately 16 seconds. The time constant in each filter may control which features in the time domain are extracted by that particular filter.
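By way of a non-limiting illustration, one such filter may be sketched in Python as a rectifier followed by a one-pole low-pass smoother whose time constant is derived from the tempo. The mapping of musical intervals to time constants below is an assumption made for the example, although it is consistent with the approximate figures given above (a bar envelope of about 2 seconds and an 8 bar envelope of about 16 seconds correspond to 120 BPM in 4/4 time).

```python
# Illustrative sketch only: a one-pole envelope follower whose time constant
# (damping factor) selects which time-domain features survive, with the
# constants derived from the tempo of the music.
import numpy as np

def envelope_filter(signal: np.ndarray, fs: float, time_constant: float) -> np.ndarray:
    """Rectify the signal and smooth it with a single-pole low-pass filter."""
    alpha = np.exp(-1.0 / (fs * time_constant))   # per-sample damping factor
    env = np.empty(len(signal))
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        level = alpha * level + (1.0 - alpha) * x
        env[i] = level
    return env

def time_constants_for(bpm: float, beats_per_bar: int = 4) -> dict:
    """Hypothetical set of envelope time constants for one input channel."""
    beat = 60.0 / bpm                             # seconds per beat
    bar = beats_per_bar * beat
    return {
        "transient":  0.001,                      # ~1 ms sub-beat/transient envelope
        "beat":       beat,
        "bar":        bar,                        # ~2 s at 120 BPM
        "double_bar": 2 * bar,
        "four_bar":   4 * bar,
        "eight_bar":  8 * bar,                    # ~16 s at 120 BPM
    }
```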

Claims (12)

1. A method of operating an audio processing apparatus to automatically extract characteristics of audio signals created by an audio source, the method comprising the steps of:
a) providing an audio processing apparatus;
b) the apparatus receiving an audio signal from an audio source;
c) the apparatus obtaining the tempo of the audio signal;
d) the apparatus obtaining the type of audio source which created the audio signal;
e) the apparatus accessing a database of filter rules and retrieving from the database one or more filter rules, the number and identity of the one or more filter rules being based on the tempo and type of audio source;
f) the apparatus applying the one or more filter rules to the audio signal to produce one or more extracted audio characteristics comprising filtered audio signals.
2. The method of claim 1, wherein the apparatus obtains the tempo of the audio signal as a result of it being manually entered.
3. The method of claim 1, wherein the apparatus obtains the tempo of the audio signal by analysing the received audio signal.
4. The method of any preceding claim, wherein the apparatus obtains the type of audio source as a result of it being manually entered.
5. The method of any one of claims 1 to 3, wherein the apparatus obtains the type of audio source by analysing the received audio signal.
6. The method of any preceding claim, wherein the audio signal is split into two or more substantially identical signals and each one of the substantially identical signals is filtered using one of the received filter rules to produce a plurality of extracted audio characteristics comprising a filtered audio signal.
7. The method of claim 6, wherein the substantially identical signals are filtered simultaneously.
8. The method of any preceding claim, wherein the one or more filter rules includes a filter envelope controlled by a time constant.
9. The method of any preceding claim, wherein a frequency filter is applied to the audio signal prior to the application of the one or more filter rules.
10. Audio processing apparatus including one or more audio signal receiving means, audio processing means arranged to process the audio signal, a database of filter rules, database communication means arranged to communicate with the database, wherein the database is arranged to send one or more filter rules to the processor, the number and identity of the one or more filter rules being based on the tempo and type of audio source, wherein the audio processing means is arranged to filter the received audio signal according to the one or more filter rules received from the database to produce one or more extracted audio characteristics comprising filtered audio signals.
11. The audio processing apparatus of claim 10, further comprising audio splitting means arranged to split the received audio signal into two or more substantially identical signals.
12. The audio processing means of claim 11, wherein the audio processing means is arranged to filter each one of the substantially identical signals according to the one or more filter rules retrieved from the database to produce a plurality of extracted audio characteristics comprising filtered audio signals.
Intellectual Property Office
Application No: GB1708426.0
Claims searched: 1-12
AMENDMENTS TO THE CLAIMS HAVE BEEN FILED AS FOLLOWS
1. A method of operating an audio processing apparatus to automatically extract characteristics of audio signals created by an audio source, the method comprising the steps of:
a) providing an audio processing apparatus;
b) the apparatus receiving an audio signal from an audio source;
c) the apparatus obtaining the tempo of the audio signal;
d) the apparatus obtaining the type of audio source which created the audio signal;
e) the apparatus accessing a database of filter rules and retrieving from the database one or more filter rules, the number and identity of the one or more filter rules being based on the tempo and type of audio source;
f) the apparatus applying the one or more filter rules to the audio signal to produce one or more extracted audio characteristics comprising filtered audio signals;
g) the apparatus analysing the one or more extracted audio characteristics and using the one or more extracted audio characteristics as a basis to select processing rules and methods from a database for autonomously processing an original, unfiltered, audio signal.
2. The method of claim 1, wherein the apparatus obtains the tempo of the audio signal as a result of it being manually entered.
3. The method of claim 1, wherein the apparatus obtains the tempo of the audio signal by analysing the received audio signal.
4. The method of any preceding claim, wherein the apparatus obtains the type of audio source as a result of it being manually entered.
5. The method of any one of claims 1 to 3, wherein the apparatus obtains the type of audio source by analysing the received audio signal.
6. The method of any preceding claim, wherein the audio signal is split into two or more substantially identical signals and each one of the substantially identical signals is filtered using one of the received filter rules to produce a plurality of extracted audio characteristics comprising a filtered audio signal.
7. The method of claim 6, wherein the substantially identical signals are filtered simultaneously.
8. The method of any preceding claim, wherein the one or more filter rules includes a filter envelope controlled by a time constant.
9. The method of any preceding claim, wherein a frequency filter is applied to the audio signal prior to the application of the one or more filter rules.
10. Audio processing apparatus including one or more audio signal receiving means, audio processing means arranged to process the audio signal, a database of filter rules, database communication means arranged to communicate with the database, wherein the database is arranged to send one or more filter rules to the processor, the number and identity of the one or more filter rules being based on the tempo and type of audio source, wherein the audio processing means is arranged to filter the received audio signal according to the one or more filter rules received from the database to produce one or more extracted audio characteristics comprising filtered audio signals.
11. The audio processing apparatus of claim 10, further comprising audio splitting means arranged to split the received audio signal into two or more substantially identical signals.
GB1708426.0A 2017-05-26 2017-05-26 Extracting audio characteristics from audio signals Active GB2560391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1708426.0A GB2560391B (en) 2017-05-26 2017-05-26 Extracting audio characteristics from audio signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1708426.0A GB2560391B (en) 2017-05-26 2017-05-26 Extracting audio characteristics from audio signals

Publications (3)

Publication Number Publication Date
GB201708426D0 GB201708426D0 (en) 2017-07-12
GB2560391A true GB2560391A (en) 2018-09-12
GB2560391B GB2560391B (en) 2020-09-30

Family

ID=59270948

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1708426.0A Active GB2560391B (en) 2017-05-26 2017-05-26 Extracting audio characteristics from audio signals

Country Status (1)

Country Link
GB (1) GB2560391B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2503867A (en) * 2012-05-08 2014-01-15 Queen Mary & Westfield College Mixing and processing audio signals in accordance with audio features extracted from the audio signals
US20140241538A1 (en) * 2013-02-26 2014-08-28 Harman International Industries, Ltd. Method of retrieving processing properties and audio processing system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2503867A (en) * 2012-05-08 2014-01-15 Queen Mary & Westfield College Mixing and processing audio signals in accordance with audio features extracted from the audio signals
US20140241538A1 (en) * 2013-02-26 2014-08-28 Harman International Industries, Ltd. Method of retrieving processing properties and audio processing system

Also Published As

Publication number Publication date
GB2560391B (en) 2020-09-30
GB201708426D0 (en) 2017-07-12

Similar Documents

Publication Publication Date Title
CN101405717B (en) Audio channel extraction using inter-channel amplitude spectra
CN1747608B (en) Audio signal processing apparatus and method
Grais et al. Raw multi-channel audio source separation using multi-resolution convolutional auto-encoders
JP6027087B2 (en) Acoustic signal processing system and method for performing spectral behavior transformations
US11610593B2 (en) Methods and systems for processing and mixing signals using signal decomposition
DE102012103552A1 (en) AUDIO SYSTEM AND METHOD FOR USING ADAPTIVE INTELLIGENCE TO DISTINCT THE INFORMATION CONTENT OF AUDIO SIGNALS AND TO CONTROL A SIGNAL PROCESSING FUNCTION
Fitzgerald Upmixing from mono-a source separation approach
WO2007041231A2 (en) Method and apparatus for removing or isolating voice or instruments on stereo recordings
JP2017520784A (en) On-the-fly sound source separation method and system
FitzGerald et al. Sound source separation using shifted non-negative tensor factorisation
Gonzalez et al. Automatic mixing: live downmixing stereo panner
EP2770498A1 (en) Method of retrieving processing properties and audio processing system
Sahai et al. Spectrogram feature losses for music source separation
US20230186782A1 (en) Electronic device, method and computer program
GB2560391A (en) Extracting audio characteristics from audio signals
Pishdadian et al. A multi-resolution approach to common fate-based audio separation
JP4274419B2 (en) Acoustic signal removal apparatus, acoustic signal removal method, and acoustic signal removal program
Gillet et al. Extraction and remixing of drum tracks from polyphonic music signals
Tachibana et al. Comparative evaluations of various harmonic/percussive sound separation algorithms based on anisotropic continuity of spectrogram
WO2019063547A1 (en) Method and electronic device for formant attenuation/amplification
WO2017135350A1 (en) Recording medium, acoustic processing device, and acoustic processing method
US10270551B2 (en) Mixing console with solo output
US12051436B2 (en) Signal processing apparatus, signal processing method, and program
CN110278721B (en) Method for outputting an audio signal depicting a musical piece into an interior space via an output device
DE112020002116T5 (en) Information processing device and method and program