WO2013150340A1 - Adaptive audio signal filtering - Google Patents

Adaptive audio signal filtering

Info

Publication number
WO2013150340A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio signal
dynamic range
enhance
parameter
speech
Prior art date
Application number
PCT/IB2012/051689
Other languages
French (fr)
Inventor
Andreas Fromel
Jani Samuli Kuivalainen
Original Assignee
Nokia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to PCT/IB2012/051689 priority Critical patent/WO2013150340A1/en
Priority to EP12873637.8A priority patent/EP2834815A4/en
Priority to US14/388,152 priority patent/US9633667B2/en
Publication of WO2013150340A1 publication Critical patent/WO2013150340A1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03G CONTROL OF AMPLIFICATION
    • H03G9/00 Combinations of two or more types of control, e.g. gain control and tone control
    • H03G9/02 Combinations of two or more types of control, e.g. gain control and tone control in untuned amplifiers
    • H03G9/025 Combinations of two or more types of control, e.g. gain control and tone control in untuned amplifiers frequency-dependent volume compression or expansion, e.g. multiple-band systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316 Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0364 Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals

Definitions

  • the present application relates to adaptive audio processing, and in particular, but not exclusively to an adaptive audio processing for use in portable apparatus.
  • electro-dynamic loudspeakers or earpiece units in apparatus is common.
  • Most electronic devices contain an electro dynamic loudspeaker or transducer configured to convert electrical signals into acoustic waves to be output and heard by the user of the apparatus.
  • mobile or similar telephones can contain an integrated transducer sometimes called an integrated handsfree (IHF) transducer configured to operate as an earpiece for speech and also as a loudspeaker for hands free and audio signal playback.
  • Embodiments attempt to address the above problem.
  • a method comprising: analysing whether an audio signal comprises speech components; signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise.
  • Signal processing the audio signal to enhance the speech component of the audio signal using a first one or more parameter dependent on determining the audio signal comprises speech components may comprise: filtering the audio signal into at least two bands; performing a dynamic range control processing on the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and combining the dynamic range control processed bands into an output audio signal.
  • Performing a dynamic range control processing on the at least two bands may comprise compressing a mid-band frequency range compared to the higher-band frequency range.
  • Signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may comprise at least one of: equalising the audio signal to enhance an intelligibility of the speech component of the audio signal; and filtering the audio signal to enhance an intelligibility of the speech component of the audio signal.
  • Signal processing the audio signal using the second one or more parameter to enhance the audio signal may comprise: filtering audio signal into at least two bands; performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and combining the dynamic range control processed bands into an output audio signal.
  • Performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings may comprise compressing a higher-band frequency range compared to the mid-band frequency range.
  • Signal processing the audio signal using the second one or more parameter to enhance the loudness of the audio signal may comprise at least one of: equalising the audio signal using the second one or more parameter to enhance the loudness of the audio signal; and filtering the audio signal using the second one or more parameter to enhance the loudness of the audio signal.
  • the mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz.
  • Analysing the audio signal may comprise: determining a speech indicator in metadata associated with the audio signal; and determining voice activity in the audio signal.
  • an apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured to, with the at least one processor, cause the apparatus to at least perform: analysing whether an audio signal comprises speech components; signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise.
  • Signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may cause the apparatus to perform: filtering the audio signal into at least two bands; performing a dynamic range control processing on the at least two bands according to the first one or more parameters, the first one or more parameters being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and combining the dynamic range control processed bands into an output audio signal.
  • Performing a dynamic range control processing on the at least two bands may cause the apparatus to perform compressing a mid-band frequency range compared to the higher-band frequency range.
  • Signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may cause the apparatus to perform at least one of: equalising the audio signal to enhance an intelligibility of the speech component of the audio signal; and filtering the audio signal to enhance an intelligibility of the speech component of the audio signal.
  • Signal processing the audio signal using the second one or more parameter to enhance the audio signal may cause the apparatus to perform: filtering audio signal into at least two bands; performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and combining the dynamic range control processed bands into an output audio signal.
  • Performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings may cause the apparatus to perform compressing a higher-band frequency range compared to the mid-band frequency range.
  • Signal processing the audio signal to enhance the loudness of the audio signal may cause the apparatus to perform at least one of: equalising the audio signal using the second one or more parameter to enhance the loudness of the audio signal; and filtering the audio signal using the second one or more parameter to enhance the loudness of the audio signal.
  • the mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz.
  • Analysing the audio signal may cause the apparatus to perform: determining a speech indicator in metadata associated with the audio signal; and determining voice activity in the audio signal.
  • an apparatus comprising: an audio signal analyser configured to analyse an audio signal; an audio signal processor configured to signal process the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and to signal process the audio signal using a second one or more parameter to enhance the audio signal otherwise.
  • the audio signal processor may comprise: a filter configured to filter the audio signal into at least two bands; at least one dynamic range controller configured to dynamic range control the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and a combiner configured to combine the dynamic range control processed bands into an output audio signal.
  • the dynamic range controller may be configured to compress a mid-band frequency range compared to the higher-band frequency range.
  • the audio signal processor may comprise at least one of: an equaliser configured to equalise the audio signal to enhance an intelligibility of the speech component of the audio signal; and a filter configured to filter the audio signal to enhance an intelligibility of the speech component of the audio signal.
  • the audio signal processor may comprise: a filter configured to filter audio signal into at least two bands; at least one dynamic range controller configured to dynamic range control the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and a combiner configured to combine the dynamic range control processed bands into an output audio signal.
  • the dynamic range controller may be configured to compress a higher-band frequency range compared to the mid-band frequency range.
  • the audio signal processor may comprise at least one of: an equaliser using the second one or more parameter configured to equalise the audio signal to enhance the loudness of the audio signal; and a filter using the second one or more parameter configured to filter the audio signal to enhance the loudness of the audio signal.
  • the mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz.
  • the audio signal analyser may comprise: a speech indicator determiner configured to determine a speech indicator in metadata associated with the audio signal; and a voice activity determiner configured to determine voice activity in the audio signal.
  • an apparatus comprising: means for analysing an audio signal; means for signal processing using a first one or more parameter the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and means for signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise.
  • the means for signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may comprise: means for filtering the audio signal into at least two bands; means for performing a dynamic range control processing on the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and means for combining the dynamic range control processed bands into an output audio signal.
  • the means for performing a dynamic range control processing on the at least two bands may comprise means for compressing a mid-band frequency range compared to the higher-band frequency range.
  • the means for signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may comprise at least one of: means for equalising the audio signal using a first one or more parameter to enhance an intelligibility of the speech component of the audio signal; and means for filtering the audio signal using a first one or more parameter to enhance an intelligibility of the speech component of the audio signal.
  • the means for signal processing the audio signal using the second one or more parameter to enhance the audio signal may comprise: means for filtering audio signal into at least two bands; means for performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and means for combining the dynamic range control processed bands into an output audio signal.
  • the means for performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings may comprise means for compressing a higher-band frequency range compared to the mid-band frequency range.
  • the means for signal processing the audio signal using the second one or more parameter to enhance the audio signal may comprise at least one of: means for equalising the audio signal using the second one or more parameter to enhance the loudness of the audio signal; and means for filtering the audio signal using the second one or more parameter to enhance the loudness of the audio signal.
  • the mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz.
  • the means for analysing an audio signal may comprise: means for determining a speech indicator in metadata associated with the audio signal; and means for determining voice activity in the audio signal.
  • An electronic device may comprise apparatus as described above.
  • a chipset may comprise apparatus as described above.
  • Figure 1 shows schematically an electronic device employing some embodiments of the application
  • FIG. 2 shows schematically an audio signal processor according to some embodiments
  • Figure 3a shows schematically an example of an audio signal analyser as shown in Figure 2 in further detail according to some embodiments
  • Figure 3b shows schematically a further example of the audio signal analyser as shown in Figure 2 in further detail according to some embodiments;
  • FIG 4 shows schematically the dynamic range controller as shown in Figure 2 in further detail according to some embodiments
  • FIG 5 shows schematically the operation of the audio signal analyser as shown in Figures 2, 3a and 3b according to some embodiments;
  • Figure 6 shows schematically the operation of the dynamic range controller as shown in Figures 2 and 4 according to some embodiments
  • Figure 7 shows an example of the dynamic range controller input to output settings for example frequency bands for speech or voice audio signals according to some embodiments
  • Figure 8 shows an example of the dynamic range controller input to output settings for example frequency bands for music audio signals according to some embodiments
  • Figure 9 shows an example measured loudness difference between speech and music audio tuning according to some embodiments.
  • Figure 10 shows an example measured frequency response for speech and standard tunings according to some embodiments.
  • Figure 1 shows a schematic block diagram of an exemplary electronic device or apparatus 10, which may incorporate an adaptive speech enhancement signal processing apparatus according to embodiments of the application.
  • the apparatus 10 may for example, as described herein be a mobile terminal or user equipment of a wireless communication system.
  • the apparatus 10 may be an audio-video device such as a video camera, a television (TV) receiver, an audio recorder or audio player such as an mp3 recorder/player, a media recorder (also known as an mp4 recorder/player), or any computer suitable for the processing of audio signals.
  • the electronic device or apparatus 10 in some embodiments comprises a microphone 11, which is linked via an analogue-to-digital converter (ADC) 14 to a processor 21.
  • the processor 21 is further linked via a digital-to-analogue (DAC) converter 32 to loudspeakers 33.
  • the processor 21 is further linked to a transceiver (RX/TX) 13, to a user interface (UI) 15 and to a memory 22.
  • the apparatus 10 comprises a processor 21.
  • the apparatus 10 comprises a memory 22, and further a data storage section 24 and program code section 23.
  • the processor 21 can in some embodiments be configured to execute various program codes.
  • the implemented program codes in some embodiments comprise adaptive speech enhancement signal processing code as described herein.
  • the implemented program codes 23 can in some embodiments be stored for example in the memory 22 for retrieval by the processor 21 whenever needed.
  • the memory 22 could further provide a section 24 for storing data, for example data that has been encoded in accordance with the application.
  • the adaptive speech enhancement signal processing code in some embodiments can be implemented in hardware or firmware.
  • the apparatus 10 comprises a user interface 15.
  • the user interface 15 enables a user to input commands to the electronic device 10, for example via a keypad, and/or to obtain information from the electronic device 10, for example via a display.
  • a touch screen may provide both input and output functions for the user interface.
  • the apparatus 10 in some embodiments comprises a transceiver 13 suitable for enabling communication with other apparatus, for example via a wireless communication network.
  • a user of the apparatus 10 for example can use the microphone 11 for inputting speech or other audio signals that are to be transmitted to some other apparatus or that are to be stored in the data section 24 of the memory 22.
  • the analogue-to-digital converter (ADC) 14 in some embodiments converts the input analogue audio signal into a digital audio signal and provides the digital audio signal to the processor 21.
  • the microphone 11 can comprise an integrated microphone and ADC function and provide digital audio signals directly to the processor for processing.
  • the processor 21 in such embodiments then processes the digital audio signal according to any suitable encoding process, for example a suitable adaptive multi-rate (AMR) coding or codec.
  • the resulting bit stream can in some embodiments be provided to the transceiver 13 for transmission to another apparatus.
  • the coded audio data in some embodiments can be stored in the data section 24 of the memory 22, for instance for a later transmission or for a later presentation by the same apparatus 10.
  • the apparatus 10 in some embodiments can also receive a bit stream with correspondingly encoded data from another apparatus via the transceiver 13.
  • the processor 21 may execute decoding program code stored in the memory 22.
  • the processor 21 in such embodiments decodes the received data.
  • the processor 21 in some embodiments can be configured to apply adaptive speech enhancement audio signal processing as described herein, and provide the signal output to a digital-to-analogue converter 32.
  • the digital-to-analogue converter 32 converts the signal into analogue audio data and can in some embodiments output the analogue audio via the loudspeakers 33.
  • Execution of the decoding and speech enhancement adaptive audio processing program code in some embodiments can be triggered by an application called by the user via the user interface 15.
  • the received encoded data in some embodiments can also be stored instead of an immediate presentation via the loudspeakers 33 in the data section 24 of the memory 22, for instance for later decoding, speech enhancement adaptive audio signal processing and presentation or decoding and forwarding to still another apparatus.
  • the application is to improve the intelligibility of mobile phone speech or audio signal speech in general by implementing speech enhancement adaptive audio signal processing.
  • the concept of the application is to improve the performance of loud speakers and transducers outputting speech audio signals. It is understood that the structure of audio signals containing speech is different from the audio signals containing music. The dynamics of speech are higher than typically found in music. Furthermore some parts of the audio spectrum for speech are not necessary in order to make the speech audio signal understandable. Music however requires a wider bandwidth in order to sound pleasant to the user's ear.
  • an electronic apparatus or device can have an audio output device which attempts to maximise the loudspeaker output for music or speech or attempts to find a compromise between the outputs.
  • an analyser attempts to analyse and recognise speech in an audio signal and therefore determine whether the audio signal contains speech (or for example is music).
  • speech enhancement can be performed.
  • audio signal processing dynamic range control (DRC) tuning is described.
  • the analyser can be configured to determine DRC settings which produce more efficient output of the audio signal. Although the embodiments described herein discuss speech enhancement audio signal processing dynamic range control, it would be understood that in some embodiments a speech adaptive equaliser or filters can also be implemented.
  • the audio signal processor comprises an audio signal analyser 101.
  • the audio signal analyser 101 is configured to receive the audio signal and analyse the audio signal to determine whether or not the audio signal is speech or non-speech (such as music) based.
  • the audio signal analyser 101 can output a set of parameters to control the speech enhanced audio signal processing of the audio signal.
  • the audio signal analyser 101 outputs the analysis result to the speech enhanced audio signal processor to be used to set the parameters within the speech enhanced audio processor.
  • the audio signal analyser 101 outputs the results to a dynamic range controller 103.
  • the audio enhanced speech signal processor comprises a dynamic range controller 103.
  • the dynamic range controller 103 can be configured to receive the audio signal and also the output of the audio signal analyser 101.
  • the dynamic range controller 103 can be configured to adaptively change the dynamic range controller processing of the audio signal dependent on the output of the audio signal analyser 101.
  • the dynamic range controller 103 can then output the processed audio signal to more efficiently drive the transducer.
  • the audio signal analyser 101 comprises a tag analyser 201.
  • the tag analyser 201 is configured to receive the audio signal to determine whether there is a music or speech tag associated with the audio signal.
  • The operation of inputting the audio signal to the speech analyser is shown in Figure 5 by step 401.
  • the audio signal is associated with metadata containing tag or characteristic values identifying the audio as being speech or otherwise (such as music) in nature.
  • the audio signal analyser 101 receives the metadata but not the audio signal to be analysed by the tag analyser 201.
  • the tag analyser 201 can be configured to output the analysis of whether the input audio signal is music or speech audio to the DRC settings generator 202.
  • the audio signal analyser 101 can comprise a voice activity detector (VAD) 203.
  • the voice activity detector 203 can be any suitable voice activity detector configured to determine speech signals.
  • the audio signal analyser in some embodiments as shown in both Figures 3a and 3b, comprises a dynamic range controller (DRC) settings generator 202.
  • the DRC settings generator 202 is configured to receive the output of the tag analyser 201 , (or voice activity detector 203), or any other suitable speech detection analysis output and generate a set of dynamic range controller settings suitable for applying to the dynamic range controller 103 dependent on the output of whether the audio signal is speech or non-speech (such as music).
  • FIG. 7 shows an example set of dynamic range controller settings for speech tuning.
  • In FIG. 7 there is a set of dynamic range control settings for various frequency bands which are passed to the dynamic range controller and applied to each of the bands.
  • dynamic range control settings are shown for similar bands but where the audio signal is determined to be non-speech (for example music).
  • the examples shown in Figures 7 and 8 show that in some embodiments, for a speech audio signal, the dynamic range controller settings compress the mid-range between 700Hz and 4kHz much more, whereas for music only the higher frequencies are compressed. In other words, for low levels of signal there is a much higher compression factor. The impact of this, when applied, is to enhance speech components within audio signals with speech components but to enhance the loudness of the audio signal otherwise (a sketch contrasting these two tunings follows at the end of this list).
  • the DRC settings generator 202 can output the settings to the dynamic range controller 103.
  • the outputting of DRC settings to the DRC is shown in Figure 5 by step 407.
  • the dynamic range controller 103 according to some embodiments is shown in further detail.
  • the operation of the dynamic range controller 103 is described.
  • the dynamic range controller 103 can in some embodiments receive the input audio signal.
  • the dynamic range controller 103 in some embodiments can comprise a sub-band filter 301 configured to filter the input audio signal into a determined number of sub-bands.
  • the sub-bands can be contiguous or overlapping and be linear or non-linear in distribution or frequency range depending on the implementation embodiment.
  • the sub-band filter 301 can be configured to generate 5 sub-bands for the audio signal, band 1 from 0-217Hz, band 2 from 217-727Hz, band 3 from 727-1609Hz, band 4 from 1609-4758Hz, and band 5 from 4758-24000Hz.
  • the sub-band filter can perform such filtering according to any suitable means.
  • the sub-band filter 301 can be configured to output each of the sub-bands to an associated band dynamic range controller.
  • the first band, band 1 is passed to the band 1 DRC 303
  • the second band, band 2 is passed to the band 2 DRC 305
  • the number of sub-bands can be greater than or less than 5 (as represented by the value N).
  • the dynamic range controller 103 comprises a series of band dynamic range controllers.
  • a band 1 dynamic range controller 303 configured to receive the band 1 sub-band audio signal and the band 1 dynamic range control settings from the analyser output
  • a band 2 dynamic range controller 305 configured to receive the second band audio signals from the sub-band filter 301 and the dynamic range controller settings from the analyser output for the second band
  • Each of the band dynamic range controllers 303, 305 and 307 can be configured to receive the audio signal of the sub-band and apply the dynamic range control settings to each band to generate a dynamically range controlled band output signal.
  • the dynamically range controlled band output signals can be passed to a band combiner 309. The operation of applying the dynamic range control settings to each of the sub-bands is shown in Figure 6 by step 507.
  • the dynamic range controller 103 comprises a band combiner 309.
  • the band combiner 309 can be configured to recombine the received band dynamically controlled signals to a single audio signal.
  • The combination of the dynamically range controlled band signals into a single audio signal is shown in Figure 6 by step 509. Furthermore the band combiner 309 can be configured to output the dynamically range controlled band combined signals. The operation of outputting the DRC signals is shown in Figure 6 by step 511.
  • the combiner can be configured to apply interpolation on the audio signals such that where tuning sets are changed there is no sudden change when the dynamic range controller switches between speech and non- speech audio signals.
  • this dynamic switching DRC fading can be implemented within the band DRC components or in the DRC control settings components.
  • Figure 9 shows an example of the measured total loudness output difference between a speech tuned loudness 801 and a standard or conventional tuning 803. In this example it is shown that the measured loudness difference is only 1.5 phon.
  • an example measured frequency response between the speech tuning and a conventional tuning is shown.
  • the frequency responses for the speech tuning 903 and the conventional tuning 901 are shown, where there is much more energy at lower frequencies and the frequency response is flatter with the speech tuning.
  • the voice activity detector can be configured to determine an output such that where there is uncertainty the voice activity detector outputs a non-voice or non-speech result so that the detector does not determine music as being speech. This is because speech audio in the embodiments described herein attempts to achieve the best loudness by driving the speaker as hard as possible where speech audio is detected, while avoiding speaker damage, since operating with speech DRC settings at full volume may produce distorted sound.
  • embodiments of the application operating within a codec within an apparatus 10, it would be appreciated that the invention as described below may be implemented as part of any audio (or speech) codec, including any variable rate/adaptive rate audio (or speech) codec.
  • embodiments of the application may be implemented in an audio codec which may implement audio coding over fixed or wired communication paths.
  • user equipment may comprise an audio codec such as those described in embodiments of the application above. It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
  • elements of a public land mobile network may also comprise audio codecs as described above.
  • the various embodiments of the application may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
  • While various aspects of the application may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: analysing an audio signal; signal processing the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal to enhance a loudness of the audio signal otherwise.
  • any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
  • a computer-readable medium encoded with instructions that, when executed by a computer, perform: analysing whether an audio signal comprises speech components; signal processing the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal to enhance the loudness of the audio signal otherwise.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
  • Embodiments of the application may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
  • Programs such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules.
  • the resultant design in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
  • circuitry refers to all of the following:
  • circuits such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • This definition of 'circuitry' applies to all uses of this term in this application, including any claims.
  • the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
  • the term 'circuitry' would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or similar integrated circuit in a server, a cellular network device, or other network device.
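The dynamic range control behaviour outlined in the items above (Figures 7 and 8: stronger compression of the 700Hz to 4kHz mid band under speech tuning, compression mainly of the band above 4kHz under music tuning, and interpolation between tuning sets so that switching between speech and non-speech audio produces no sudden change) can be pictured with the following minimal Python sketch. The band names, curve parameters and fade length are illustrative assumptions only and are not taken from the patent figures.

```python
# Illustrative per-band static compression settings: (threshold_dB, ratio).
# Speech tuning compresses the mid band (roughly 700 Hz - 4 kHz) hardest;
# music tuning compresses mainly the band above 4 kHz. Values are assumptions.
SPEECH_TUNING = {"low": (-20.0, 1.5), "mid": (-35.0, 4.0), "high": (-20.0, 1.5)}
MUSIC_TUNING = {"low": (-20.0, 1.5), "mid": (-20.0, 1.5), "high": (-35.0, 4.0)}

def drc_gain_db(level_db: float, threshold_db: float, ratio: float) -> float:
    """Gain (dB) applied by a static compressor curve at a given input level."""
    if level_db <= threshold_db:
        return 0.0
    return threshold_db + (level_db - threshold_db) / ratio - level_db

def interpolate_tunings(old: dict, new: dict, alpha: float) -> dict:
    """Cross-fade two tuning sets (alpha: 0.0 = old, 1.0 = new) so switching
    between speech and non-speech settings causes no sudden change."""
    return {band: tuple((1.0 - alpha) * o + alpha * n
                        for o, n in zip(old[band], new[band]))
            for band in old}

if __name__ == "__main__":
    # Ramp from music tuning to speech tuning over ten control updates and
    # report the mid-band gain for a -10 dB input level at each step.
    for step in range(11):
        settings = interpolate_tunings(MUSIC_TUNING, SPEECH_TUNING, step / 10.0)
        print(f"step {step:2d}: mid-band gain = "
              f"{drc_gain_db(-10.0, *settings['mid']):6.2f} dB")
```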

Abstract

An apparatus comprising: an audio signal analyser configured to analyse an audio signal; an audio signal processor configured to signal process the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and to signal process the audio signal to enhance a loudness of the audio signal otherwise.

Description

Adaptive Audio Signal Filtering
Field
The present application relates to adaptive audio processing, and in particular, but not exclusively to an adaptive audio processing for use in portable apparatus.
Background
The use of electro-dynamic loudspeakers or earpiece units in apparatus is common. Most electronic devices contain an electro dynamic loudspeaker or transducer configured to convert electrical signals into acoustic waves to be output and heard by the user of the apparatus. For example mobile or similar telephones can contain an integrated transducer sometimes called an integrated handsfree (IHF) transducer configured to operate as an earpiece for speech and also as a loudspeaker for hands free and audio signal playback.
Summary
Embodiments attempt to address the above problem.
There is provided according to a first aspect a method comprising: analysing whether an audio signal comprises speech components; signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise. Signal processing the audio signal to enhance the speech component of the audio signal using a first one or more parameter dependent on determining the audio signal comprises speech components may comprise: filtering the audio signal into at least two bands; performing a dynamic range control processing on the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and combining the dynamic range control processed bands into an output audio signal.
Performing a dynamic range control processing on the at least two bands may comprise compressing a mid-band frequency range compared to the higher-band frequency range.
Signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may comprise at least one of: equalising the audio signal to enhance an intelligibility of the speech component of the audio signal; and filtering the audio signal to enhance an intelligibility of the speech component of the audio signal.
Signal processing the audio signal using the second one or more parameter to enhance the audio signal may comprise: filtering audio signal into at least two bands; performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and combining the dynamic range control processed bands into an output audio signal.
Performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings may comprise compressing a higher-band frequency range compared to the mid-band frequency range. Signal processing the audio signal using the second one or more parameter to enhance the loudness of the audio signal may comprise at least one of: equalising the audio signal using the second one or more parameter to enhance the loudness of the audio signal; and filtering the audio signal using the second one or more parameter to enhance the loudness of the audio signal.
The mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz. Analysing the audio signal may comprise: determining a speech indicator in metadata associated with the audio signal; and determining voice activity in the audio signal.
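As a rough illustration of the analysis step of the first aspect above (a speech indicator in metadata associated with the audio signal, with voice activity detection as a fallback), a hedged sketch follows. The metadata key name, the frame-based energy and zero-crossing heuristics, and the thresholds are assumptions for illustration only, not the detector described in the embodiments.

```python
from typing import Optional
import numpy as np

def analyse_is_speech(frame: np.ndarray, metadata: Optional[dict] = None) -> bool:
    """Return True if the audio frame should be treated as speech."""
    # 1) Speech indicator in metadata associated with the audio signal, if any.
    #    The key name "content_type" is a hypothetical example.
    if metadata is not None and "content_type" in metadata:
        return metadata["content_type"] == "speech"
    # 2) Fallback: a very crude voice-activity estimate from frame energy and
    #    zero-crossing rate (both thresholds are illustrative assumptions).
    energy = float(np.mean(frame ** 2))
    if energy < 1e-6:
        return False  # near silence: report non-speech (conservative choice)
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame))))) / 2.0
    return 0.02 < zcr < 0.25
```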
According to a second aspect there is provided an apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured to, with the at least one processor, cause the apparatus to at least perform: analysing whether an audio signal comprises speech components; signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise.
Signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may cause the apparatus to perform: filtering the audio signal into at least two bands; performing a dynamic range control processing on the at least two bands according to the first one or more parameters, the first one or more parameters being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and combining the dynamic range control processed bands into an output audio signal. Performing a dynamic range control processing on the at least two bands may cause the apparatus to perform compressing a mid-band frequency range compared to the higher-band frequency range.
Signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may cause the apparatus to perform at least one of: equalising the audio signal to enhance an intelligibility of the speech component of the audio signal; and filtering the audio signal to enhance an intelligibility of the speech component of the audio signal.
Signal processing the audio signal using the second one or more parameter to enhance the audio signal may cause the apparatus to perform: filtering audio signal into at least two bands; performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and combining the dynamic range control processed bands into an output audio signal.
Performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings may cause the apparatus to perform compressing a higher-band frequency range compared to the mid-band frequency range.
Signal processing the audio signal to enhance the loudness of the audio signal may cause the apparatus to perform at least one of: equalising the audio signal using the second one or more parameter to enhance the loudness of the audio signal; and filtering the audio signal using the second one or more parameter to enhance the loudness of the audio signal. The mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz.
Analysing the audio signal may cause the apparatus to perform: determining a speech indicator in metadata associated with the audio signal; and determining voice activity in the audio signal.
According to a third aspect there is provided an apparatus comprising: an audio signal analyser configured to analyse an audio signal; an audio signal processor configured to signal process the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and to signal process the audio signal using a second one or more parameter to enhance the audio signal otherwise.
The audio signal processor may comprise: a filter configured to filter the audio signal into at least two bands; at least one dynamic range controller configured to dynamic range control the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and a combiner configured to combine the dynamic range control processed bands into an output audio signal.
The dynamic range controller may be configured to compress a mid-band frequency range compared to the higher-band frequency range.
The audio signal processor may comprise at least one of: an equaliser configured to equalise the audio signal to enhance an intelligibility of the speech component of the audio signal; and a filter configured to filter the audio signal to enhance an intelligibility of the speech component of the audio signal. The audio signal processor may comprise: a filter configured to filter audio signal into at least two bands; at least one dynamic range controller configured to dynamic range control the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and a combiner configured to combine the dynamic range control processed bands into an output audio signal.
The dynamic range controller may be configured to compress a higher-band frequency range compared to the mid-band frequency range.
The audio signal processor may comprise at least one of: an equaliser using the second one or more parameter configured to equalise the audio signal to enhance the loudness of the audio signal; and a filter using the second one or more parameter configured to filter the audio signal to enhance the loudness of the audio signal.
The mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz.
The audio signal analyser may comprise: a speech indicator determiner configured to determine a speech indicator in metadata associated with the audio signal; and a voice activity determiner configured to determine voice activity in the audio signal.
According to a fourth aspect there is provided an apparatus comprising: means for analysing an audio signal; means for signal processing using a first one or more parameter the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and means for signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise. The means for signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may comprise: means for filtering the audio signal into at least two bands; means for performing a dynamic range control processing on the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and means for combining the dynamic range control processed bands into an output audio signal.
The means for performing a dynamic range control processing on the at least two bands may comprise means for compressing a mid-band frequency range compared to the higher-band frequency range. The means for signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components may comprise at least one of: means for equalising the audio signal using a first one or more parameter to enhance an intelligibility of the speech component of the audio signal; and means for filtering the audio signal using a first one or more parameter to enhance an intelligibility of the speech component of the audio signal.
The means for signal processing the audio signal using the second one or more parameter to enhance the audio signal may comprise: means for filtering audio signal into at least two bands; means for performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and means for combining the dynamic range control processed bands into an output audio signal. The means for performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings may comprise means for compressing a higher-band frequency range compared to the mid-band frequency range.
The means for signal processing the audio signal using the second one or more parameter to enhance the audio signal may comprise at least one of: means for equalising the audio signal using the second one or more parameter to enhance the loudness of the audio signal; and means for filtering the audio signal using the second one or more parameter to enhance the loudness of the audio signal.
The mid-band frequency range may be between 700Hz and 4kHz and the higher-band frequency range may be greater than 4kHz. The means for analysing an audio signal may comprise: means for determining a speech indicator in metadata associated with the audio signal; and means for determining voice activity in the audio signal.
An electronic device may comprise apparatus as described above.
A chipset may comprise apparatus as described above.
Brief Description of Drawings
For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 shows schematically an electronic device employing some embodiments of the application;
Figure 2 shows schematically an audio signal processor according to some embodiments;
Figure 3a shows schematically an example of an audio signal analyser as shown in Figure 2 in further detail according to some embodiments; Figure 3b shows schematically a further example of the audio signal analyser as shown in Figure 2 in further detail according to some embodiments;
Figure 4 shows schematically the dynamic range controller as shown in Figure 2 in further detail according to some embodiments;
Figure 5 shows schematically the operation of the audio signal analyser as shown in Figures 2, 3a and 3b according to some embodiments;
Figure 6 shows schematically the operation of the dynamic range controller as shown in Figures 2 and 4 according to some embodiments;
Figure 7 shows an example of the dynamic range controller input to output settings for example frequency bands for speech or voice audio signals according to some embodiments;
Figure 8 shows an example of the dynamic range controller input to output settings for example frequency bands for music audio signals according to some embodiments;
Figure 9 shows an example measured loudness difference between speech and music audio tuning according to some embodiments; and
Figure 10 shows an example measured frequency response for speech and standard tunings according to some embodiments.
Description of Some Embodiments of the Application
The following describes in more detail possible adaptive audio signal processing for use in speech or speech like audio for the provision of higher quality voice communication. In this regard reference is first made to Figure 1 which shows a schematic block diagram of an exemplary electronic device or apparatus 10, which may incorporate an adaptive speech enhancement signal processing apparatus according to embodiments of the application.
The apparatus 10 may for example, as described herein be a mobile terminal or user equipment of a wireless communication system. In other embodiments the apparatus 10 may be an audio-video device such as a video camera, a television (TV) receiver, an audio recorder or audio player such as an mp3 recorder/player, a media recorder (also known as an mp4 recorder/player), or any computer suitable for the processing of audio signals.
The electronic device or apparatus 10 in some embodiments comprises a microphone 11, which is linked via an analogue-to-digital converter (ADC) 14 to a processor 21. The processor 21 is further linked via a digital-to-analogue (DAC) converter 32 to loudspeakers 33. The processor 21 is further linked to a transceiver (RX/TX) 13, to a user interface (UI) 15 and to a memory 22. In some embodiments the apparatus 10 comprises a processor 21. Furthermore in some embodiments the apparatus 10 comprises a memory 22, and further a data storage section 24 and program code section 23. The processor 21 can in some embodiments be configured to execute various program codes. The implemented program codes in some embodiments comprise adaptive speech enhancement signal processing code as described herein. The implemented program codes 23 can in some embodiments be stored for example in the memory 22 for retrieval by the processor 21 whenever needed. The memory 22 could further provide a section 24 for storing data, for example data that has been encoded in accordance with the application.
The adaptive speech enhancement signal processing code in some embodiments can be implemented in hardware or firmware.
In some embodiments the apparatus 10 comprises a user interface 15. The user interface 15 enables a user to input commands to the electronic device 10, for example via a keypad, and/or to obtain information from the electronic device 10, for example via a display. In some embodiments a touch screen may provide both input and output functions for the user interface. The apparatus 10 in some embodiments comprises a transceiver 13 suitable for enabling communication with other apparatus, for example via a wireless communication network. A user of the apparatus 10 for example can use the microphone 11 for inputting speech or other audio signals that are to be transmitted to some other apparatus or that are to be stored in the data section 24 of the memory 22. The analogue-to-digital converter (ADC) 14 in some embodiments converts the input analogue audio signal into a digital audio signal and provides the digital audio signal to the processor 21. In some embodiments the microphone 11 can comprise an integrated microphone and ADC function and provide digital audio signals directly to the processor for processing.
The processor 21 in such embodiments then processes the digital audio signal according to any suitable encoding process, for example a suitable adaptive multi-rate (AMR) coding or codec. The resulting bit stream can in some embodiments be provided to the transceiver 13 for transmission to another apparatus. Alternatively, the coded audio data in some embodiments can be stored in the data section 24 of the memory 22, for instance for a later transmission or for a later presentation by the same apparatus 10.
The apparatus 10 in some embodiments can also receive a bit stream with correspondingly encoded data from another apparatus via the transceiver 13. In this example, the processor 21 may execute decoding program code stored in the memory 22. The processor 21 in such embodiments decodes the received data. Furthermore the processor 21 in some embodiments can be configured to apply adaptive speech enhancement audio signal processing as described herein, and provide the signal output to a digital-to-analogue converter 32. The digital-to-analogue converter 32 converts the signal into analogue audio data and can in some embodiments output the analogue audio via the loudspeakers 33. Execution of the decoding and speech enhancement adaptive audio processing program code in some embodiments can be triggered by an application called by the user via the user interface 15. The received encoded data in some embodiments can also be stored instead of an immediate presentation via the loudspeakers 33 in the data section 24 of the memory 22, for instance for later decoding, speech enhancement adaptive audio signal processing and presentation or decoding and forwarding to still another apparatus.
It is to be understood again that the structure of the apparatus 10 could be supplemented and varied in many ways.
It would be appreciated that the schematic structures described in Figures 2, 3a, 3b, and 4 and the method steps shown in Figures 5 and 6 represent only a part of the operation of audio signal playback apparatus and specifically adaptive audio signal processing apparatus or methods as exemplarily shown implemented in the apparatus shown in Figure 1.
The concept of the application is to improve the intelligibility of mobile phone speech or audio signal speech in general by implementing speech enhancement adaptive audio signal processing.
In particular the concept of the application is to improve the performance of loudspeakers and transducers outputting speech audio signals. It is understood that the structure of audio signals containing speech is different from that of audio signals containing music. The dynamics of speech are higher than typically found in music. Furthermore some parts of the audio spectrum for speech are not necessary in order to make the speech audio signal understandable. Music however requires a wider bandwidth in order to sound pleasant to the user's ear.
Typically an electronic apparatus or device can have an audio output device which attempts to maximise the loudspeaker output for music or speech, or attempts to find a compromise between the two. In the embodiments described herein an analyser attempts to analyse and recognise speech in an audio signal and therefore determine whether the audio signal contains speech (or, for example, is music). Depending on the analysis, speech enhancement can be performed. In the following examples audio signal processing dynamic range control (DRC) tuning is described. In such embodiments the analyser can be configured to determine DRC settings which produce a more efficient output of the audio signal. Although the embodiments described herein discuss speech enhancement audio signal processing dynamic range control, it would be understood that in some embodiments a speech adaptive equaliser or filters can also be implemented.
With respect to Figure 2 an overview of the audio signal processor according to some embodiments is shown. In some embodiments the audio signal processor comprises an audio signal analyser 101. The audio signal analyser 101 is configured to receive the audio signal and analyse it to determine whether the audio signal is speech or non-speech (such as music) based. In some embodiments the audio signal analyser 101 can output a set of parameters to control the speech enhanced audio signal processing of the audio signal. Furthermore in some embodiments the audio signal analyser 101 outputs the analysis result to the speech enhanced audio signal processor to be used to set the parameters within the speech enhanced audio processor.
In some embodiments the audio signal analyser 101 outputs the results to a dynamic range controller 103. In some embodiments the audio enhanced speech signal processor comprises a dynamic range controller 103. The dynamic range controller 103 can be configured to receive the audio signal and also the output of the audio signal analyser 101. The dynamic range controller 103 can be configured to adaptively change the dynamic range control processing of the audio signal dependent on the output of the audio signal analyser 101. The dynamic range controller 103 can then output the processed audio signal to more efficiently drive the transducer.
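Purely as an illustration of this control flow, and not as the application's implementation, a minimal per-frame sketch might look as follows, where analyse_speech, drc and the two tuning sets are hypothetical stand-ins for the analyser 101, the dynamic range controller 103 and its setting sets:

```python
# Illustrative control-flow sketch only: analyse_speech, drc and the tuning
# sets are hypothetical names standing in for the analyser and the DRC.
def process_frame(frame, analyse_speech, drc, speech_tuning, music_tuning):
    """Pick a tuning set from the analyser decision and run the DRC with it."""
    is_speech = analyse_speech(frame)           # speech / non-speech decision
    settings = speech_tuning if is_speech else music_tuning
    return drc(frame, settings)                 # adaptively tuned DRC output
```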
With respect to Figures 3a and 3b examples of the audio signal analyser 101 are shown in further detail. Furthermore with respect to Figure 5 the operation of the audio signal analyser 101 as shown in Figures 3a and 3b is shown in further detail according to some embodiments.
In some embodiments as shown in Figure 3a the audio signal analyser 101 comprises a tag analyser 201. The tag analyser 201 is configured to receive the audio signal to determine whether there is a music or speech tag associated with the audio signal.
The operation of inputting the audio signal to the speech analyser is shown in Figure 5 by step 401.
In some embodiments the audio signal is associated with metadata containing tag or characteristic values identifying the audio as being speech or otherwise (such as music) in nature. In some embodiments the audio signal analyser 101 receives the metadata but not the audio signal to be analysed by the tag analyser 201.
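As a small, hedged example of such a tag check (the "content_type" field name and its values are assumptions made for illustration; real containers expose content tags in format-specific fields):

```python
def tag_is_speech(metadata: dict) -> bool:
    """Return True when the associated metadata marks the content as speech.

    The "content_type" key and its values are illustrative assumptions only.
    """
    return str(metadata.get("content_type", "")).lower() in ("speech", "voice")
```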
The tag analyser 201 can be configured to output the analysis of whether the input audio signal is music or speech audio to the DRC settings generator 202.
The operation of analysing the audio signals for speech components is shown in Figure 5 by step 403.
As shown in Figure 3b the audio signal analyser 101 can comprise a voice activity detector (VAD) 203. The voice activity detector 203 can be any suitable voice activity detector configured to determine speech signals. The audio signal analyser, in some embodiments as shown in both Figures 3a and 3b, comprises a dynamic range controller (DRC) settings generator 202. The DRC settings generator 202 is configured to receive the output of the tag analyser 201 (or voice activity detector 203), or any other suitable speech detection analysis output, and generate a set of dynamic range controller settings suitable for applying to the dynamic range controller 103 dependent on whether the audio signal is speech or non-speech (such as music).
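Any suitable detector can play the role of the VAD 203. Purely as an over-simplified stand-in, the following sketch flags a frame as speech-like from its short-term energy and zero-crossing rate; the thresholds are arbitrary illustration values, not taken from the application:

```python
import numpy as np

def naive_vad(frame, energy_thresh=1e-4, zcr_lo=0.02, zcr_hi=0.25):
    """Crude frame-level speech/non-speech flag from energy and zero crossings.

    Real voice activity detectors use much richer features and hangover logic;
    this only shows where the speech/non-speech flag comes from.
    """
    energy = float(np.mean(frame ** 2))
    zero_crossings = np.abs(np.diff(np.signbit(frame).astype(float)))
    zcr = float(np.mean(zero_crossings))
    return energy > energy_thresh and zcr_lo < zcr < zcr_hi
```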
With respect to Figure 7 an example set of dynamic range controller settings for speech tuning is shown. In the example shown in Figure 7 there is a set of dynamic range control settings for various frequency bands which are passed to the dynamic range controller and applied to each of the bands. With respect to Figure 8 dynamic range control settings are shown for similar bands but where the audio signal is determined to be non-speech (for example music). The examples shown in Figures 7 and 8 show that in some embodiments for a speech audio signal the dynamic range controller settings compress the mid-range between 700Hz and 4kHz much more, whereas for music only the higher frequencies are compressed. In other words, for low signal levels there is a much higher compression factor. When applied, this enhances the speech components of audio signals containing speech, and enhances the loudness of the audio signal otherwise.
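The figures themselves are not reproduced here; the tuning sets below are invented purely for illustration and only follow the stated pattern (stronger mid-band compression for speech, stronger high-band compression for non-speech). The parameter names match the simplified per-band compressor sketched later in this description and are assumptions, not the settings of Figures 7 and 8:

```python
# Hypothetical per-band settings (threshold_db, ratio, makeup_db); the numeric
# values are illustrative only and are NOT taken from Figures 7 and 8.
SPEECH_TUNING = [
    dict(threshold_db=-30.0, ratio=1.5, makeup_db=0.0),  # band 1: low frequencies
    dict(threshold_db=-35.0, ratio=4.0, makeup_db=3.0),  # band 2: lower mid
    dict(threshold_db=-35.0, ratio=4.0, makeup_db=3.0),  # band 3: mid (speech region)
    dict(threshold_db=-35.0, ratio=4.0, makeup_db=3.0),  # band 4: upper mid
    dict(threshold_db=-30.0, ratio=2.0, makeup_db=0.0),  # band 5: high frequencies
]
MUSIC_TUNING = [
    dict(threshold_db=-25.0, ratio=1.2, makeup_db=0.0),  # band 1
    dict(threshold_db=-25.0, ratio=1.5, makeup_db=0.0),  # band 2
    dict(threshold_db=-25.0, ratio=1.5, makeup_db=0.0),  # band 3
    dict(threshold_db=-25.0, ratio=1.5, makeup_db=0.0),  # band 4
    dict(threshold_db=-30.0, ratio=3.0, makeup_db=0.0),  # band 5: compress highs more
]
```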
With respect to Figure 5 the operation of determining the DRC control settings dependent on whether the audio signal is speech (or voice) is shown in step 405.
Furthermore the DRC settings generator 202 can output the settings to the dynamic range controller 103. The outputting of DRC settings to the DRC is shown in Figure 5 by step 407. With respect to Figure 4 the dynamic range controller 103 according to some embodiments is shown in further detail. Furthermore with respect to Figure 6 the operation of the dynamic range controller 103 is described. The dynamic range controller 103 can in some embodiments receive the input audio signal.
The inputting of the audio signal is shown in Figure 6 by step 501. The dynamic range controller 103 in some embodiments can comprise a sub-band filter 301 configured to filter the input audio signal into a determined number of sub-bands. The sub-bands can be contiguous or overlapping and be linear or non-linear in distribution or frequency range depending on the implementation embodiment. In the following examples the sub-band filter 301 can be configured to generate 5 sub-bands for the audio signal: band 1 from 0-217Hz, band 2 from 217-727Hz, band 3 from 727-1609Hz, band 4 from 1609-4758Hz, and band 5 from 4758-24000Hz. The sub-band filter can perform such filtering according to any suitable means. The sub-band filter 301 can be configured to output each of the sub-bands to an associated band dynamic range controller. Thus in the embodiments as shown in Figure 4 the first band, band 1, is passed to the band 1 DRC 303, the second band, band 2, is passed to the band 2 DRC 305 and the fifth band, band 5, is passed to the band N DRC 307. It would be understood that in some embodiments the number of sub-bands can be greater than or less than 5 (as represented by the value N).
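As noted, any suitable filter bank may be used. A rough sketch of such a split at the example band edges, assuming a 48 kHz sample rate and fourth-order Butterworth filters (both assumptions made for the example, not requirements of the application):

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000  # sample rate in Hz (assumption)
# Example band edges from the text: 0-217, 217-727, 727-1609, 1609-4758, 4758-24000 Hz
EDGES = [0, 217, 727, 1609, 4758, 24000]

def split_into_subbands(x, fs=FS, edges=EDGES, order=4):
    """Filter the signal into contiguous sub-bands (lowpass, bandpass, highpass)."""
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0:
            sos = butter(order, hi, btype="lowpass", fs=fs, output="sos")
        elif hi >= fs / 2:
            sos = butter(order, lo, btype="highpass", fs=fs, output="sos")
        else:
            sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfilt(sos, x))
    return bands
```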
The operation of filtering the audio into sub-bands is shown in Figure 6 by step 503.
In some embodiments the dynamic range controller 103 comprises a series of band dynamic range controllers. In the example shown in Figure 4 there is a band 1 dynamic range controller 303 configured to receive the band 1 sub-band audio signal and the band 1 dynamic range control settings from the analyser output, a band 2 dynamic range controller 305 configured to receive the second band audio signals from the sub-band filter 301 and the dynamic range controller settings from the analyser output for the second band, and a band N dynamic range controller 307, where N=5 in this example, dynamic range controller configured to receive the fifth band sub-band audio signal from the sub-band filter 301 and the fifth band dynamic range controller settings from the analyser output.
The operation of receiving the dynamic range controller settings from the analyser is shown in Figure 6 by step 505.
Each of the band dynamic range controllers 303, 305 and 307 can be configured to receive the audio signal of the sub-band and apply the dynamic range control settings to each band to generate a dynamically range controlled band output signal. The dynamically range controlled band output signals can be passed to a band combiner 309. The operation of applying the dynamic range control settings to each of the sub-bands is shown in Figure 6 by step 507.
In some embodiments the dynamic range controller 103 comprises a band combiner 309. The band combiner 309 can be configured to recombine the received band dynamically controlled signals to a single audio signal.
The combination of the dynamically range controlled band signals into a single audio signal is shown in Figure 6 by step 509. Furthermore the band combiner 309 can be configured to output the dynamically range controlled band combined signals. The operation of outputting the DRC signals is shown in Figure 6 by step 511.
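Putting the band DRCs and the band combiner together, a much simplified static-curve version of this per-band processing might look as follows; a real DRC would additionally smooth the level estimate with attack and release time constants, which is omitted here for brevity:

```python
import numpy as np

def band_drc(band, threshold_db, ratio, makeup_db, eps=1e-12):
    """Static per-band compression: reduce gain above the threshold.

    An instantaneous curve illustrating threshold and ratio only; the settings
    structure matches the illustrative tuning sets given earlier.
    """
    level_db = 20.0 * np.log10(np.abs(band) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return band * 10.0 ** (gain_db / 20.0)

def apply_drc_and_combine(bands, settings):
    """Apply each band's settings and sum the bands back to one signal."""
    processed = [band_drc(b, **s) for b, s in zip(bands, settings)]
    return np.sum(processed, axis=0)
```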
In some embodiments the combiner can be configured to apply interpolation on the audio signals such that where tuning sets are changed there is no sudden change when the dynamic range controller switches between speech and non-speech audio signals. In some embodiments this dynamic switching DRC fading can be implemented within the band DRC components or in the DRC control settings components. Figure 9 shows an example of the measured total loudness output difference between a speech tuned loudness 801 and a standard or conventional tuning 803. In this example it is shown that the measured loudness difference is only 1.5 phon. With respect to Figure 10 an example measured frequency response comparison between the speech tuning and a conventional tuning is shown. In Figure 10 the frequency responses for the speech tuning 903 and the conventional tuning 901 are shown, where with the speech tuning there is much more energy at lower frequencies and the frequency response is flatter. The reason for this is that speech has much less energy at lower frequencies, so it can be compressed more than music. Furthermore it would be understood that because the root mean squared (RMS) level of speech is smaller compared to the measurement signal used for the frequency response measurements, the difference in real life is greater than that seen in Figure 10.
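One way such fading between tuning sets could be realised is by ramping the per-band settings towards the newly selected set over a number of frames; the linear, fixed-step ramp below is simply one illustrative choice, not the application's method:

```python
def interpolate_settings(current, target, step=0.1):
    """Move each per-band setting a fraction of the way towards the target set.

    Called once per frame, this ramps the DRC parameters smoothly so that a
    speech/non-speech switch does not produce an audible jump.
    """
    blended = []
    for cur, tgt in zip(current, target):
        blended.append({k: cur[k] + step * (tgt[k] - cur[k]) for k in cur})
    return blended
```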
It would be understood that in some embodiments the voice activity detector can be configured such that, where there is uncertainty, it outputs a non-voice or non-speech result so that the detector does not determine music as being speech. This is because the embodiments described herein attempt to achieve the best loudness by driving the speaker as hard as possible where speech audio is detected, while avoiding speaker damage, as operating with the speech DRC settings at full volume on non-speech material may produce distorted sound.
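If the detector exposes a confidence score, this bias towards the non-speech tuning could be expressed as a simple asymmetric threshold; the 0.8 value below is an arbitrary illustration:

```python
def biased_decision(speech_confidence, threshold=0.8):
    """Only report speech when the detector is clearly confident.

    Uncertain frames fall back to the non-speech (loudness) tuning so that
    music is not driven with the more aggressive speech DRC settings.
    """
    return speech_confidence >= threshold
```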
Although the above examples describe embodiments of the application operating within a codec within an apparatus 10, it would be appreciated that the invention as described below may be implemented as part of any audio (or speech) codec, including any variable rate/adaptive rate audio (or speech) codec. Thus, for example, embodiments of the application may be implemented in an audio codec which may implement audio coding over fixed or wired communication paths.
Thus user equipment may comprise an audio codec such as those described in embodiments of the application above. It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
Furthermore elements of a public land mobile network (PLMN) may also comprise audio codecs as described above.
In general, the various embodiments of the application may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the application may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Thus in at least some embodiments there may be an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: analysing an audio signal; signal processing the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal to enhance a loudness of the audio signal otherwise.
The embodiments of this application may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
Thus in at least some embodiments there may be a computer-readable medium encoded with instructions that, when executed by a computer, perform: analysing an audio signal to determine whether the audio signal comprises speech components; signal processing the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal to enhance the loudness of the audio signal otherwise.
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
Embodiments of the application may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
As used in this application, the term 'circuitry' refers to all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) to combinations of circuits and software (and/or firmware), such as: (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and
(c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term in this application, including any claims. As a further example, as used in this application, the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term 'circuitry' would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.

The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.

Claims:
1. A method comprising:
analysing an audio signal;
signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and
signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise.
2. The method as claimed in claim 1, wherein signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components comprises:
filtering the audio signal into at least two bands;
performing a dynamic range control processing on the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and
combining the dynamic range control processed bands into an output audio signal.
3. The method as claimed in claim 2, wherein performing a dynamic range control processing on the at least two bands comprises compressing a mid-band frequency range compared to the higher-band frequency range.
4. The method as claimed in claims 1 to 3, wherein signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components comprises at least one of:
equalising the audio signal using a first one or more parameter to enhance an intelligibility of the speech component of the audio signal; and filtering the audio signal using a first one or more parameter to enhance an intelligibility of the speech component of the audio signal.
5. The method as claimed in claims 1 to 4, wherein signal processing the audio signal using the second one or more parameter to enhance the audio signal comprises:
filtering the audio signal into at least two bands;
performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and
combining the dynamic range control processed bands into an output audio signal.
6. The method as claimed in claim 5, wherein performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings comprises compressing a higher-band frequency range compared to the mid-band frequency range.
7. The method as claimed in claims 1 to 6, wherein signal processing the audio signal using the second one or more parameter to enhance audio signal comprises at least one of:
equalising the audio signal using the second one or more parameter to enhance the loudness of the audio signal; and
filtering the audio signal using the second one or more parameter to enhance the loudness of the audio signal.
8. The method as claimed in claims 3 and 6, wherein the mid-band frequency range is between 700Hz and 4kHz and the higher-band frequency range is greater than 4kHz.
9. The method as claimed in claims 1 to 8, wherein analysing the audio signal comprises:
determining a speech indicator in metadata associated with the audio signal; and
determining voice activity in the audio signal.
10. An apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured to with the at least one processor cause the apparatus to at least perform:
analysing an audio signal;
signal processing the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and
signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise.
11. The apparatus as claimed in claim 10, wherein signal processing the audio signal using a first one or more parameters to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components causes the apparatus to perform:
filtering the audio signal into at least two bands;
performing a dynamic range control processing on the at least two bands according to the first one or more parameters, the first one or more parameters being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and
combining the dynamic range control processed bands into an output audio signal.
12. The apparatus as claimed in claim 11, wherein performing a dynamic range control processing on the at least two bands causes the apparatus to perform compressing a mid-band frequency range compared to the higher-band frequency range.
13. The apparatus as claimed in claims 10 to 12, wherein signal processing the audio signal using the second one or more parameter to enhance the audio signal causes the apparatus to perform:
filtering the audio signal into at least two bands;
performing a dynamic range control processing on the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance a loudness of the audio signal; and
combining the dynamic range control processed bands into an output audio signal.
14. The apparatus as claimed in claim 13, wherein performing a dynamic range control processing on the audio signal at least two bands according to a second set of dynamic range control settings causes the apparatus to perform compressing a higher-band frequency range compared to the mid-band frequency range.
15. The apparatus as claimed in claims 10 to 14, wherein determining an audio signal comprises speech components causes the apparatus to perform: determining a speech indicator in metadata associated with the audio signal; and determining voice activity in the audio signal.
16. An apparatus comprising:
an audio signal analyser configured to analyse an audio signal;
an audio signal processor configured to signal process the audio signal using a first one or more parameter to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and signal processing the audio signal using a second one or more parameter to enhance the audio signal otherwise.
17. The apparatus as claimed in claim 16, wherein the audio signal processor comprises:
a filter configured to filter the audio signal into at least two bands;
at least one dynamic range controller configured to dynamic range control the at least two bands according to the first one or more parameter, the first one or more parameter being a first set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and a combiner configured to combine the dynamic range control processed bands into an output audio signal.
18. The apparatus as claimed in claim 17, wherein the dynamic range controller is configured to compress a mid-band frequency range compared to the higher-band frequency range.
19. The apparatus as claimed in claims 16 to 18, wherein the audio signal processor comprises:
a filter configured to filter the audio signal into at least two bands;
at least one dynamic range controller configured to dynamic range control the audio signal at least two bands according to the second one or more parameter, the second one or more parameter being a second set of dynamic range control settings so to enhance an intelligibility of the speech component of the audio signal; and
a combiner configured to combine the dynamic range control processed bands into an output audio signal.
20. The apparatus as claimed in claim 19, wherein the dynamic range controller is configured to compress a higher-band frequency range compared to the mid-band frequency range.
21. The apparatus as claimed in claims 16 to 20, wherein the audio signal analyser comprises: a speech indicator determiner configured to determine a speech indicator in metadata associated with the audio signal; and a voice activity determiner configured to determine voice activity in the audio signal.
22. An apparatus comprising: means for analysing an audio signal; means for signal processing the audio signal to enhance the speech component of the audio signal dependent on determining the audio signal comprises speech components; and means for signal processing the audio signal to enhance the audio signal otherwise.
23. An electronic device comprising apparatus as claimed in claims 10 to 22.
24. A chipset comprising apparatus as claimed in claims 10 to 22.
PCT/IB2012/051689 2012-04-05 2012-04-05 Adaptive audio signal filtering WO2013150340A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/IB2012/051689 WO2013150340A1 (en) 2012-04-05 2012-04-05 Adaptive audio signal filtering
EP12873637.8A EP2834815A4 (en) 2012-04-05 2012-04-05 Adaptive audio signal filtering
US14/388,152 US9633667B2 (en) 2012-04-05 2012-04-05 Adaptive audio signal filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2012/051689 WO2013150340A1 (en) 2012-04-05 2012-04-05 Adaptive audio signal filtering

Publications (1)

Publication Number Publication Date
WO2013150340A1 true WO2013150340A1 (en) 2013-10-10

Family

ID=49300055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2012/051689 WO2013150340A1 (en) 2012-04-05 2012-04-05 Adaptive audio signal filtering

Country Status (3)

Country Link
US (1) US9633667B2 (en)
EP (1) EP2834815A4 (en)
WO (1) WO2013150340A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021133779A1 (en) * 2019-12-27 2021-07-01 Bose Corporation Audio device with speech-based audio signal processing

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9614489B2 (en) * 2012-03-27 2017-04-04 Htc Corporation Sound producing system and audio amplifying method thereof
TWM487509U (en) * 2013-06-19 2014-10-01 杜比實驗室特許公司 Audio processing apparatus and electrical device
CN106297813A (en) * 2015-05-28 2017-01-04 杜比实验室特许公司 The audio analysis separated and process
US10090005B2 (en) * 2016-03-10 2018-10-02 Aspinity, Inc. Analog voice activity detection
US9838737B2 (en) * 2016-05-05 2017-12-05 Google Inc. Filtering wind noises in video content
US10535360B1 (en) * 2017-05-25 2020-01-14 Tp Lab, Inc. Phone stand using a plurality of directional speakers

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5878391A (en) 1993-07-26 1999-03-02 U.S. Philips Corporation Device for indicating a probability that a received signal is a speech signal
DE19848491A1 (en) 1998-10-21 2000-04-27 Bosch Gmbh Robert Radio receiver with audio data system has control unit to allocate sound characteristic according to transferred program type identification adjusted in receiving section
US6098830A (en) 1998-10-05 2000-08-08 Jamieson; Michael Resealable flip-top beverage can lid
US20070078645A1 (en) * 2005-09-30 2007-04-05 Nokia Corporation Filterbank-based processing of speech signals
WO2008106036A2 (en) * 2007-02-26 2008-09-04 Dolby Laboratories Licensing Corporation Speech enhancement in entertainment audio
WO2009011827A1 (en) * 2007-07-13 2009-01-22 Dolby Laboratories Licensing Corporation Audio processing using auditory scene analysis and spectral skewness
US20090299742A1 (en) * 2008-05-29 2009-12-03 Qualcomm Incorporated Systems, methods, apparatus, and computer program products for spectral contrast enhancement
WO2010033384A1 (en) * 2008-09-19 2010-03-25 Dolby Laboratories Licensing Corporation Upstream quality enhancement signal processing for resource constrained client devices
WO2010071521A1 (en) * 2008-12-19 2010-06-24 Telefonaktiebolaget L M Ericsson (Publ) Systems and methods for improving the intelligibility of speech in a noisy environment
WO2010128882A1 (en) 2009-05-07 2010-11-11 General Electric Company Multi-premixer fuel nozzle
WO2012127278A1 (en) * 2011-03-18 2012-09-27 Nokia Corporation Apparatus for audio signal processing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19703228B4 (en) * 1997-01-29 2006-08-03 Siemens Audiologische Technik Gmbh Method for amplifying input signals of a hearing aid and circuit for carrying out the method
US8385561B2 (en) * 2006-03-13 2013-02-26 F. Davis Merrey Digital power link audio distribution system and components thereof
US20120051561A1 (en) * 2006-12-05 2012-03-01 Cohen Alexander J Audio/sound information system and method
JP5038915B2 (en) 2008-01-08 2012-10-03 株式会社松川レピヤン Fabric manufacturing method and fabric
EP2172930B1 (en) * 2008-03-24 2012-02-22 Victor Company Of Japan, Limited Audio signal processing device and audio signal processing method
US9215538B2 (en) 2009-08-04 2015-12-15 Nokia Technologies Oy Method and apparatus for audio signal classification
TWI459828B (en) * 2010-03-08 2014-11-01 Dolby Lab Licensing Corp Method and system for scaling ducking of speech-relevant channels in multi-channel audio
CN103886863A (en) * 2012-12-20 2014-06-25 杜比实验室特许公司 Audio processing device and audio processing method

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5878391A (en) 1993-07-26 1999-03-02 U.S. Philips Corporation Device for indicating a probability that a received signal is a speech signal
US6098830A (en) 1998-10-05 2000-08-08 Jamieson; Michael Resealable flip-top beverage can lid
DE19848491A1 (en) 1998-10-21 2000-04-27 Bosch Gmbh Robert Radio receiver with audio data system has control unit to allocate sound characteristic according to transferred program type identification adjusted in receiving section
US20070078645A1 (en) * 2005-09-30 2007-04-05 Nokia Corporation Filterbank-based processing of speech signals
WO2008106036A2 (en) * 2007-02-26 2008-09-04 Dolby Laboratories Licensing Corporation Speech enhancement in entertainment audio
WO2009011827A1 (en) * 2007-07-13 2009-01-22 Dolby Laboratories Licensing Corporation Audio processing using auditory scene analysis and spectral skewness
US20090299742A1 (en) * 2008-05-29 2009-12-03 Qualcomm Incorporated Systems, methods, apparatus, and computer program products for spectral contrast enhancement
WO2010033384A1 (en) * 2008-09-19 2010-03-25 Dolby Laboratories Licensing Corporation Upstream quality enhancement signal processing for resource constrained client devices
WO2010071521A1 (en) * 2008-12-19 2010-06-24 Telefonaktiebolaget L M Ericsson (Publ) Systems and methods for improving the intelligibility of speech in a noisy environment
WO2010128882A1 (en) 2009-05-07 2010-11-11 General Electric Company Multi-premixer fuel nozzle
WO2012127278A1 (en) * 2011-03-18 2012-09-27 Nokia Corporation Apparatus for audio signal processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2834815A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021133779A1 (en) * 2019-12-27 2021-07-01 Bose Corporation Audio device with speech-based audio signal processing
US11172294B2 (en) 2019-12-27 2021-11-09 Bose Corporation Audio device with speech-based audio signal processing

Also Published As

Publication number Publication date
US20150310874A1 (en) 2015-10-29
US9633667B2 (en) 2017-04-25
EP2834815A4 (en) 2015-10-28
EP2834815A1 (en) 2015-02-11

Similar Documents

Publication Publication Date Title
US9633667B2 (en) Adaptive audio signal filtering
US10523168B2 (en) Method and apparatus for processing an audio signal based on an estimated loudness
US10186276B2 (en) Adaptive noise suppression for super wideband music
US9640187B2 (en) Method and an apparatus for processing an audio signal using noise suppression or echo suppression
US20180220250A1 (en) Audio scene apparatus
KR100800725B1 (en) Automatic volume controlling method for mobile telephony audio player and therefor apparatus
US9576590B2 (en) Noise adaptive post filtering
US9704497B2 (en) Method and system of audio power reduction and thermal mitigation using psychoacoustic techniques
US20150371643A1 (en) Stereo audio signal encoder
KR101647576B1 (en) Stereo audio signal encoder
US9076437B2 (en) Audio signal processing apparatus
CN106293607B (en) Method and system for automatically switching audio output modes
US10897670B1 (en) Excursion and thermal management for audio output devices
RU2648632C2 (en) Multi-channel audio signal classifier
US10559315B2 (en) Extended-range coarse-fine quantization for audio coding
JP2013120961A (en) Acoustic apparatus, sound quality adjustment method, and program
KR20050063262A (en) Method and apparatus for outputting audio signal in mobile communication terminal
JP2013074371A (en) Signal processing device, method, and program
JP2010158044A (en) Signal processing apparatus and signal processing method
JP2010160496A (en) Signal processing device and signal processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12873637

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2012873637

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 14388152

Country of ref document: US