GB2561596A - Audio signal generation for spatial audio mixing


Info

Publication number
GB2561596A
Authority
GB
United Kingdom
Prior art keywords
audio
audio signal
sum
environment
spatial
Prior art date
Legal status
Withdrawn
Application number
GB1706290.2A
Other versions
GB201706290D0 (en)
Inventor
Tapani Johannes Pihlajakuja
Jussi Artturi Leppanen
Antti Johannes Eronen
Arto Juhani Lehtiniemi
Current Assignee
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date
Application filed by Nokia Technologies Oy
Priority to GB1706290.2A
Publication of GB201706290D0
Priority to PCT/FI2018/050275 (WO2018193162A2)
Publication of GB2561596A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04S STEREOPHONIC SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H04R 2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R 2430/20 Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H04S 2400/15 Aspects of sound capture and related signal processing for recording or reproduction
    • H04S 2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
    • H04S 2420/11 Application of ambisonics in stereophonic audio systems
    • H04S 2420/13 Application of wave-field synthesis in stereophonic audio systems


Abstract

Generating an intended spatial audio field by receiving at least two audio signals, wherein each audio signal is received from a separate microphone 201. Each separate microphone is located in the same environment and configured to capture a sound source. Each audio signal is analysed to determine at least in part an ambience audio signal. A sum audio signal is generated 211 from the determined ambience audio signal before being processed to spatially extend the sum audio signal so as to generate the intended spatial audio field 215, wherein the sum audio signal comprises the ambience audio signal for the intended spatial audio field. Preferably reverberation 213 is applied to the sum audio signal before its spatial extension. The spatial extension could comprise any of vector based amplitude panning (VBAP), direct binaural panning, synthesized ambisonics and wavefield synthesis. Preferably, when performing the spatial extension, a spatial extent parameter, a position associated with the microphones, and a frequency band position based on the position associated with the microphones and the spatial extent parameter are determined.

Description

(71) Applicant(s): Nokia Technologies Oy, Karaportti 3, 02610 Espoo, Finland
(72) Inventor(s): Tapani Johannes Pihlajakuja, Jussi Artturi Leppanen, Antti Johannes Eronen, Arto Juhani Lehtiniemi
(56) Documents Cited: GB 2540175 A1, US 20150317981 A1, US 20130202114 A1, US 20160007131 A1, US 20150296319 A1, US 20130044884 A1
(58) Field of Search: INT CL H04S; Other: WPI, EPODOC
(74) Agent and/or Address for Service: Page White & Farrer, Bedford House, John Street, London, WC1N 2BF, United Kingdom
(54) Title of the Invention: Audio signal generation for spatial audio mixing
Abstract Title: Audio Signal Generation for Spatial Audio Mixing
[Drawing sheets 1/5 to 5/5 (images GB2561596A_D0001 to GB2561596A_D0006) contain Figures 1 to 5, described under "Summary of the Figures" below.]
Intellectual Property Office
Application No. GB1706290.2
Date: 19 October 2017
The following terms are registered trade marks (RTM) and should be read as such wherever they occur in this document:
OZO (pages 1, 13, 14, 18)
UMTS (page 30)
Bluetooth (page 30)
Opus (page 32)
Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo
AUDIO SIGNAL GENERATION FOR SPATIAL AUDIO MIXING
Field
The present application relates to apparatus and methods for audio signal generation and ambience audio signal generation for spatial audio mixing.
Background
Capture of audio signals from multiple sources, and mixing of those audio signals when the sources are moving in the spatial field, requires significant effort. For example, capturing and mixing an audio signal source such as a speaker or artist within an audio environment such as a theatre or lecture hall, to be presented to a listener so as to produce an effective audio atmosphere, requires significant investment in equipment and training.
A commonly implemented system is one where one or more 'external' microphones, for example a Lavalier microphone worn by the user or an audio channel associated with an instrument, are mixed with a suitable spatial (or environmental or audio field) audio signal such that the produced sound comes from an intended direction. This system is known in some areas as Spatial Audio Mixing (SAM).
The SAM system enables the creation of immersive sound scenes comprising "background spatial audio" or ambience and sound objects for Virtual Reality (VR) applications. Often the scene is designed such that the overall spatial audio of the scene, such as a concert venue, is captured with a microphone array (such as one contained in the OZO virtual camera) and the most important sources are captured using the 'external' microphones.
However, there are scenarios where spatial audio capture apparatus such as OZO is not available, but a content producer would like to create high quality VR sound scenes with spatial ambience and high quality close-up sources. Thus there is a need for solutions which enable this.
Furthermore, in many live situations a designated spatial audio capture device, such as an OZO device, captures audio that is unusable for professional audio production for several possible reasons. For example, the spatial audio capture device may capture unintended audio, e.g. the live mix for the audience or close-mic ambience. Furthermore, in some circumstances the signal-to-noise ratio at the spatial capture device is not good enough to represent even the ambience of the scene, for example where the capture device is mounted on a moving car. Also, in some circumstances the spatial audio capture device may not, artistically, represent the spatial scene that is desired, even though something similar is the target. Thus, there is a need to develop solutions which can determine these circumstances and enable the provision of alternative ambient or spatial audio signals for the spatial audio mixing and sound track creation process.
Summary
There is provided according to a first aspect an apparatus for generating an intended spatial audio field, the apparatus configured to: receive at least two audio signals, wherein each audio signal is received from a separate microphone, each separate microphone being located in the same environment and configured to capture a sound source; analyse each audio signal to determine at least in part an ambience audio signal; generate a sum audio signal from the determined ambience audio signal based on the at least two audio signals; and process the sum audio signal to spatially extend the sum audio signal so as to generate the intended spatial audio field, wherein the sum audio signal comprises the ambience audio signal for the intended spatial audio field.
The apparatus may further be configured to apply a reverberation to the sum audio signal before the processing of the sum audio signal to spatially extend the sum audio signal.
The apparatus configured to generate a sum audio signal from the determined ambience audio signal based on the at least two audio signals may be configured to generate for and apply to at least one of the at least two audio signals a weighting value before generating the sum audio signal, wherein the weighting value may be based on at least one of: a detection of voice activity within the audio signal; a determination of spectral flatness within the audio signal; a determination of percussiveness within the audio signal; a determination of harmonicity within the audio signal; a determination of content classification type within the audio signal; a determination of silence within the audio signal; a determination of noise within the audio signal; and at least one user generated input associated with the audio signal.
The apparatus configured to generate for at least one of the at least two audio signals a weighting value may be further configured to normalise the weighting value for at least one of the at least two audio signals.
The apparatus configured to process the sum audio signal to spatially extend the sum audio signal may be configured to apply one of: vector base amplitude panning to the sum audio signal; direct binaural panning to the sum audio signal; direct assignment to channel output location to the sum audio signal; synthesized ambisonics to the sum audio signal; and wavefield synthesis to the sum audio signal.
The apparatus configured to process the sum audio signal to spatially extend the sum audio signal may be configured to: determine a spatial extent parameter; determine at least one position associated with the microphones; and determine at least one frequency band position based on the at least one position associated with the microphones and the spatial extent parameter.
The apparatus configured to apply vector base amplitude panning to the sum audio signal may be further configured to generate panning vectors for the application of vector base amplitude panning to frequency bands of the sum audio signal.
The apparatus configured to generate the intended spatial audio field may be configured to generate a plurality of intended spatial audio field parts, wherein at least one part of the intended spatial audio field may be at least one of: partially overlapping a neighbouring part; non-overlapping at least one other part; contained within at least one other part; and containing at least one other part.
The apparatus may be configured to generate: at least one first part of the intended spatial audio field associated with a first part of the environment, the first part of the environment comprising at least one sound source; and at least one second part of the intended spatial audio field associated with a second part of the environment, the second part of the environment comprising at least one further sound source.
The first part of the environment may be a left portion of the environment with respect to the apparatus, and the second part of the environment may be a right portion of the environment with respect to the apparatus.
The first part of the environment may be a front portion of the environment with respect to the apparatus, and the second part of the environment may be a rear portion of the environment with respect to the apparatus.
The apparatus may be further configured to determine a position of the at least one microphone of the microphones relative to the apparatus.
The apparatus may be further configured to: receive at least one audio signal from a capture device comprising a microphone array for capturing audio signals of the sound scene; compare the at least one audio signal from the capture device to the at least one audio signal; control the generation of the sum audio signal from microphones located within the intended spatial audio field; and process the sum audio signal to generate the intended spatial audio field based on the comparison.
The apparatus may be further configured to mix the at least one spatially extended sum audio signal with at least one of the at least two audio signals to generate the intended spatial audio field.
The apparatus configured to process the sum audio signal to spatially extend the sum audio signal may be configured to spatially extend the sum audio signal such that the at least one spatially extended sum audio signal is one of: fully spatially extended to 360 degrees; and partially spatially extended up to 360 degrees.
According to a second aspect there is provided a method for generating an intended spatial audio field, the method comprising: receiving at least two audio signals, wherein each audio signal is received from a separate microphone, each separate microphone being located in the same environment and configured to capture a sound source; analysing each audio signal to determine at least in part an ambience audio signal; generating a sum audio signal from the determined ambience audio signal based on the at least two audio signals; and processing the sum audio signal to spatially extend the sum audio signal so as to generate the intended spatial audio field, wherein the sum audio signal comprises the ambience audio signal for the intended spatial audio field.
The method may further comprise applying a reverberation to the sum audio signal before the processing of the sum audio signal to spatially extend the sum audio signal.
Generating the sum audio signal may comprise: generating for at least one of the at least two audio signals a weighting value; and applying to at least one of the at least two audio signals the weighting value before generating the sum audio signal, wherein the weighting value is based on at least one of: a detection of voice activity within the audio signal; a determination of spectral flatness within the audio signal; a determination of percussiveness within the audio signal; a determination of harmonicity within the audio signal; a determination of silence within the audio signal; a determination of noise within the audio signal; a determination of content classification type within the audio signal; and at least one user generated input associated with the audio signal.
Generating the weighting value may further comprise normalising the weighting value for at least one of the at least two audio signals.
Processing the sum audio signal to spatially extend the sum audio signal may comprise applying one of: vector base amplitude panning to the sum audio signal; direct binaural panning to the sum audio signal; direct assignment to channel output location to the sum audio signal; synthesized ambisonics to the sum audio signal; and wavefield synthesis to the sum audio signal.
Processing the sum audio signal to spatially extend the sum audio signal may comprise: determining a spatial extent parameter; determining at least one position associated with the microphones; and determining at least one frequency band position based on the at least one position associated with the microphones and the spatial extent parameter.
Applying vector base amplitude panning to the sum audio signal may further comprise generating panning vectors for the application of vector base amplitude panning to frequency bands of the weighted sum.
Generating the intended spatial audio field may comprise generating a plurality of intended spatial audio field parts, wherein at least one part is at least one of: partially overlapping a neighbouring part; non-overlapping at least one other part; contained within at least one other part; and containing at least one other part.
The method may comprise: generating at least one first part of the intended spatial audio field associated with a first part of the environment, the first part of the environment comprising at least one sound source; and generating at least one second part of the intended spatial audio field associated with a second part of the environment, the second part of the environment comprising at least one further sound source.
The first part of the environment may be a left portion of the environment, and the second part of the environment may be a right portion of the environment.
The first part of the environment may be a front portion of the environment, and the second part of the environment may be a rear portion of the environment.
The method may further comprise determining a position of the at least one microphone of the microphones relative to the apparatus.
The method may further comprise: receiving at least one audio signal from a capture device comprising a microphone array for capturing audio signals of the sound scene; comparing the at least one audio signal from the capture device to the at least one audio signal; controlling the generation of the sum audio signal from microphones located within the intended spatial audio field; and processing the sum audio signal to generate the intended spatial audio field based on the comparison.
The method may further comprise mixing the at least one spatially extended sum audio signal with at least one of the at least two audio signals to generate the intended spatial audio field.
Processing the sum audio signal to spatially extend the sum audio signal may comprise spatially extending the sum audio signal such that the at least one spatially extended audio signal is one of: fully spatially extended to 360 degrees; and partially spatially extended up to 360 degrees.
According to a third aspect there is provided an apparatus for generating at least one spatially extended audio signal associated with a sound scene, the apparatus configured to: receive at least two audio signals, wherein each audio signal is received from a separate microphone located within the sound scene; generate a sum of the at least two audio signals; and apply a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal, wherein the at least one spatially extended audio signal is an ambience audio signal for mixing with at least one of the at least two audio signals to generate at least one spatial audio field.
The apparatus may be further configured to apply a reverberation to the sum before the application of the spatially extended control.
The apparatus configured to generate a sum may be configured to generate for and apply to at least one of the at least two audio signals a weighting value before generating the sum, wherein the weighting value is based on at least one of: a detection of voice activity within the audio signal; a determination of spectral flatness within the audio signal; a determination of percussiveness within the audio signal; a determination of harmonicity within the audio signal; a determination of content classification type within the audio signal; a determination of silence within the audio signal; a determination of noise within the audio signal; and at least one user generated input associated with the audio signal.
The apparatus configured to generate for at least one of the at least two audio signals a weighting value may be further configured to normalise the weighting value for at least one of the at least two audio signals.
The apparatus configured to apply a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may be configured to apply one of: vector base amplitude panning to the sum of the at least two audio signals; direct binaural panning to the sum of the at least two audio signals; direct assignment to channel output location to the sum of the at least two audio signals; synthesized ambisonics to the sum of the at least two audio signals; and wavefield synthesis to the sum of the at least two audio signals.
The apparatus configured to apply a spatially extended control to the sum of the at least two audio signals may be configured to: determine a spatial extent parameter; determine at least one position associated with the microphones; determine at least one frequency band position based on the at least one position associated with the microphones and the spatial extent parameter; and generate panning vectors for the application of vector base amplitude panning to frequency bands of the sum of the at least two audio signals.
The apparatus may be configured to generate a plurality of audio signals, each of the plurality of audio signals being associated with a portion of the sound scene, wherein at least one portion of the sound scene is at least one of: partially overlapping a neighbouring portion; non-overlapping at least one other portion; contained within at least one other portion; and containing at least one other portion.
The apparatus may be configured to generate: at least one first audio signal associated with a first portion of the sound scene, the first portion of the sound scene comprising at least one sound source; and at least one second audio signal associated with a second portion of the sound scene, the second portion of the sound scene comprising at least one further sound source.
The first portion of the sound scene may be a left portion of the sound scene with respect to the apparatus, and the second portion of the sound scene may be a right portion of the sound scene with respect to the apparatus.
The first portion of the sound scene may be a front portion of the sound scene with respect to the apparatus, and the second portion of the sound scene may be a rear portion of the sound scene with respect to the apparatus.
The apparatus may be further configured to determine a position of the at least one microphone of the microphones relative to the apparatus.
The apparatus may be further configured to: receive at least one audio signal from a capture device comprising a microphone array for capturing audio signals of the sound scene; compare the at least one audio signal from the capture device to the at least one audio signal; control the generation of the sum of the at least two audio signals from microphones located within the sound scene; and apply the spatially extended control to the sum of the at least two audio signals to generate the at least one audio signal based on the comparison.
The apparatus may be located in the sound scene comprising at least one sound source, and at least one of the at least two microphones may be associated with the at least one sound source within the sound scene.
The apparatus may be further configured to mix the at least one spatially extended audio signal with at least one of the at least two audio signals to generate at least one spatial audio field.
The apparatus configured to apply a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may be configured to spatially extend the sum of the at least two audio signals such that the at least one spatially extended audio signal is one of: fully spatially extended to 360 degrees; and partially spatially extended up to 360 degrees.
According to a fourth aspect there is provided a method for generating at least one spatially extended audio signal associated with a sound scene, the method comprising: receiving at least two audio signals, wherein each audio signal is received from a separate microphone located within the sound scene; generating a sum of the at least two audio signals; and applying a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal, wherein the at least one spatially extended audio signal is an ambience audio signal for mixing with at least one of the at least two audio signals to generate at least one spatial audio field.
The method may further comprise applying a reverberation to the sum before the application of the spatially extended control.
Generating a sum may comprise: generating for at least one of the at least two audio signals a weighting value; and applying to at least one of the at least two audio signals the weighting value before generating the sum, wherein the weighting value is based on at least one of: a detection of voice activity within the audio signal; a determination of spectral flatness within the audio signal; a determination of percussiveness within the audio signal; a determination of harmonicity within the audio signal; a determination of content classification type within the audio signal; a determination of silence within the audio signal; a determination of noise within the audio signal; and at least one user generated input associated with the audio signal.
Generating for at least one of the at least two audio signals a weighting value may further comprise normalising the weighting value for at least one of the at least two audio signals.
Applying a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may comprise applying vector base amplitude panning to the sum of the at least two audio signals.
Applying a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may comprise applying direct binaural panning to the sum of the at least two audio signals.
Applying a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may comprise applying direct assignment to channel output location to the sum of the at least two audio signals.
Applying a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may comprise applying synthesized ambisonics to the sum of the at least two audio signals.
Applying a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may comprise applying wavefield synthesis to the sum of the at least two audio signals.
Applying vector base amplitude panning to the sum of the at least two audio signals may comprise: determining a spatial extent parameter; determining at least one position associated with the microphones located within the sound scene; determining at least one frequency band position based on the at least one position associated with the microphones located within the sound scene and the spatial extent parameter; and generating panning vectors for the application of vector base amplitude panning to frequency bands of the sum of the at least two audio signals.
Generating at least one spatially extended audio signal associated with a sound scene may comprise generating a plurality of audio signals, each of the plurality of audio signals being associated with a portion of the sound scene, wherein at least one portion of the sound scene is at least one of: partially overlapping a neighbouring portion; non-overlapping at least one other portion; contained within at least one other portion; and containing at least one other portion.
Generating at least one spatially extended audio signal associated with a sound scene may comprise generating: at least one first audio signal associated with a first portion of the sound scene, the first portion of the sound scene comprising at least one sound source; and at least one second audio signal associated with a second portion of the sound scene, the second portion of the sound scene comprising at least one further sound source.
The first portion of the sound scene may be a left portion of the sound scene with respect to the apparatus, and the second portion of the sound scene may be a right portion of the sound scene with respect to the apparatus.
The first portion of the sound scene may be a front portion of the sound scene with respect to the apparatus, and the second portion of the sound scene may be a rear portion of the sound scene with respect to the apparatus.
The method may further comprise determining a position of the at least one microphone of the microphones relative to the apparatus.
The method may further comprise: receiving at least one audio signal from a capture device comprising a microphone array for capturing audio signals of the sound scene; comparing the at least one audio signal from the capture device to the at least one audio signal; controlling the generation of the sum of the at least two audio signals from microphones located within the sound scene; and applying the spatially extended control to the sum of the at least two audio signals to generate the at least one audio signal based on the comparison.
The method may further comprise mixing the at least one spatially extended audio signal with at least one of the at least two audio signals to generate at least one spatial audio field.
Applying a spatially extended control to the sum of the at least two audio signals to generate the at least one spatially extended audio signal may comprise spatially extending the sum of the at least two audio signals such that the at least one spatially extended audio signal is one of: fully spatially extended to 360 degrees; and partially spatially extended up to 360 degrees.
The apparatus may be located in the sound scene comprising at least one sound source, and at least one of the at least two microphones may be associated with the at least one sound source within the sound scene.
An apparatus may comprise means for implementing the method as described herein.
A computer program product stored on a medium may cause an apparatus to perform the method as described herein.
An electronic device may comprise apparatus as described herein.
A chipset may comprise apparatus as described herein.
Embodiments of the present application aim to address problems associated with the state of the art.
Summary of the Figures
For a better understanding of the present application, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 shows schematically an example known capture and mixing arrangement where the external microphones and the microphone array produce both external and ambient audio signals respectively for mixing;
Figure 2 shows schematically an example capture and mixing arrangement where the external microphones produce both the external and ambient audio signals for mixing according to some embodiments;
Figure 3 shows schematically the example capture and mixing arrangement shown in Figure 2 in further detail according to some embodiments;
Figure 4 shows schematically the spatial extent synthesizer shown in Figures 2 and 3 in further detail according to some embodiments; and
Figure 5 shows schematically an example device suitable for implementing the apparatus shown in Figures 2 to 4.
Embodiments of the Application
The following describes in further detail suitable apparatus and possible mechanisms for the provision of effective ambient (or ambience) audio signal generation from the capture of audio signals from multiple sources. Furthermore the following describes mixing of the ambient and external audio signals. In the following examples, audio signals and audio capture signals are described. However it would be appreciated that in some embodiments the apparatus may be part of any suitable electronic device or apparatus configured to capture an audio signal or receive the audio signals and other information signals.
A conventional approach to the capturing and mixing of sound sources with respect to an audio background or environment audio field signal would be for a professional producer to utilize an external microphone (a close or Lavalier microphone worn by the user, or a microphone attached to an instrument or some other microphone) to capture audio signals close to the sound source, and further utilize a 'background' microphone or microphone array to capture an environmental audio signal. These signals or audio tracks may then be manually mixed to produce an output audio signal such that the produced sound features the sound source coming from an intended (though not necessarily the original) direction.
With respect to Figure 1 there is shown a first known example capture and mixing arrangement. Figure 1 shows an example external or close microphone and tag 101 which is configured to transmit HAIP signals which are received by the microphone array and tag receiver 103 in order to determine the actual position of the external microphone 101 relative to the microphone array 103. The actual position may be passed to a mixer 105. The external microphone may furthermore generate an external audio signal 102 which is passed to the mixer 105.
The microphone array and tag receiver 103 may furthermore generate an ambient or spatial field audio signal 104 which is passed to the mixer 105.
Having received the external microphone audio signal 102 and the microphone array audio signal 104, the mixer can in some embodiments mix the two to determine a mixed audio signal 106. The mixed audio signal 106 may be generated in some embodiments based on a user input such as the positional user input 109. The mixed audio signal may furthermore be passed to a renderer 107 wherein the mixed audio signal is rendered into a format suitable for outputting to a user. The renderer 107 in some embodiments may be configured to use vector base amplitude panning techniques when loudspeaker domain output is desired (e.g. 5.1 channel output) or use head-related transfer-function filtering if binaural output for headphone listening is desired.
However, as discussed above, there are scenarios where either there is no spatial audio capture apparatus (such as Nokia's OZO) available or the spatial audio capture apparatus captures audio that is unusable for professional audio production, but it is still a goal to create high quality VR sound scenes with spatial ambience and high quality close-up sources.
The concept as described herein may be considered to be an enhancement to conventional Spatial Audio Capture (SPAC) technology. Spatial audio capture technology can process audio signals captured via a microphone array into a spatial audio format. In other words, it generates an audio signal format with a spatial perception capacity. The concept may thus be embodied in a form where audio signals may be captured such that, when rendered to a user, the user can experience the sound field as if they were present at the location of the capture device. Spatial audio capture can be implemented for microphone arrays found in mobile devices. In addition, audio processing derived from the spatial audio capture may be employed within a presence-capturing device such as the Nokia OZO (OZO) devices.
In the examples described herein the audio signal is rendered into a suitable binaural form, where the spatial sensation may be created using rendering such as by head-related transfer function (HRTF) filtering a suitable audio signal.
The concept as described with respect to the embodiments herein makes it possible to capture and remix an external and environmental audio signal more effectively and produce a better quality output where the sound or sound sources are more widely distributed.
The concept may for example be embodied as a capture system configured to capture two or more external (speaker, instrument or other source) audio signals, and a processor configured to generate from the two or more external audio signals a spatial or environmental (audio field) audio signal.
Although capture and render systems may be separate, it is understood that they may be implemented with the same apparatus or may be distributed over a series of physically separate but communication capable apparatus. For example, a presence-capturing device such as the OZO device could be equipped with an additional interface for receiving location data and close microphone audio signals, and could be configured to perform the capture part. The output of a capture part of the system may be the microphone audio signals (e.g. as a 5.1 channel downmix), the close microphone audio signals (which may furthermore be time-delay compensated to match the time of the microphone array audio signals), and the position information of the close microphones (such as a time-varying azimuth, elevation, and distance with regard to the microphone array).
The renderer as described herein may be an audio playback device (for example a set of headphones), user input (for example a motion tracker), and software capable of mixing and audio rendering. In some embodiments the user input and audio rendering parts may be implemented within a computing device with display capacity such as a mobile phone, tablet computer, virtual reality headset, augmented reality headset etc.
Furthermore it is understood that at least some elements of the following mixing and rendering may be implemented within a distributed computing system such as that known as the 'cloud'.
In the following concept the apparatus and method utilise external (close-up mic) microphone audio signals and spatial extent processing to create an ambience 'like' signal without a spatial capture device microphone array. In some embodiments the apparatus is configured to sum the signals captured by the external microphones and spatially extend this summed audio signal to cover a full spatial audio field (360 degrees).
In some embodiments, the sum of external microphone signals used for ambience audio signal creation can be weighted. That is, the contribution of each external microphone audio signal used in the creation of the ambience audio signal can be weighted based on various criteria. For example, a weighting criterion may be how likely it is that the microphone audio signal contains silence or just the background audio. Other audio weighting criteria may be based on voice activity detection (VAD) processing (where VAD inactivity implies that a microphone is capturing background noise). A further audio weighting criterion may be noisiness detection, which may indicate that the external microphone signal contains noise; this may be compared against analysis of harmonicity and percussiveness, which would indicate a high likelihood of an actual instrument/voice within the audio signal. Thus, for example, audio signals associated with external microphones with high scores for noise/ambience may receive a larger weighting in the sum used to create an ambience audio signal compared to audio signals from external microphones which have high levels of detected harmonic/percussive components.
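As a concrete illustration of this weighting, the following is a minimal Python/NumPy sketch (not taken from the patent; the function and score names are hypothetical). Each microphone receives a non-negative ambience score, the scores are normalised to sum to one, and the weighted signals are summed into a single ambience signal:

```python
import numpy as np

def ambience_weighted_sum(mic_signals, ambience_scores):
    """Weighted sum of external microphone signals into one ambience signal.

    mic_signals: (N, T) array, one row per external microphone.
    ambience_scores: length-N non-negative scores, larger when analysis
    (VAD inactivity, high spectral flatness, low harmonicity or
    percussiveness) suggests the mic captures background rather than
    a close-up source.
    """
    w = np.asarray(ambience_scores, dtype=float)
    w = w / w.sum()                                    # normalise: sum(w(i)) == 1
    return w @ np.asarray(mic_signals, dtype=float)    # shape (T,)

# Example: the noise-like third microphone dominates the ambience signal.
# mics = np.random.randn(3, 48000)
# ambience = ambience_weighted_sum(mics, [0.1, 0.2, 0.7])
```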
In some embodiments, the ambience signal is created differently for different parts of the sound scene. For example, different left and right scene ambient audio signals may be generated. Furthermore, if the sound scene is acoustically not diffuse (e.g. the scene is outside), then a more natural ambience audio signal may correspond somewhat to the directions of the external audio signal microphones/sources. For example, in a "battle of the bands" situation outside, with two rock bands on opposite sides of the listener, it would be more natural that these two directions would have their own ambience audio signal tracks or spatial audio field parts representing their parts of the environment.
In some embodiments, portions of the automatically generated ambience audio signals may be analysed and used for creating looped, long duration or 'infinitely long' artificial ambience audio signals.
With respect to Figure 2 a schematic view of an example capture and mixing arrangement, where the external microphones produce both the external and ambient audio signals for mixing according to some embodiments, is shown. The system shown in Figure 2 comprises N microphone sources. Specifically Figure 2 shows a first microphone, mic source 1, 201-1, configured to generate a first audio signal 202-1 which is passed to the spatial mixer 205 and the ambience signal generator 203. The system also shows a second microphone, mic source 2, 201-2, configured to generate a second audio signal 202-2 which is passed to the spatial mixer 205 and the ambience signal generator 203. Furthermore the system is shown comprising an N'th microphone source, mic source N, 201-N, configured to generate an N'th audio signal 202-N which is passed to the spatial mixer 205 and the ambience signal generator 203.
The external microphones 201-1 to 201-N can be configured to capture audio signals associated with humans, instruments, or other sound sources of interest.
For example the external microphone 201 may be a Lavalier microphone. The external microphones may be any microphone external or separate to a microphone array which may capture the spatial audio signal. Thus the concept is applicable to any external/additional microphones, be they Lavalier microphones, hand held microphones, mounted mics, or otherwise. The external microphones can be worn/carried by persons or mounted as close-up microphones for instruments, or be a microphone in some relevant location which the designer wishes to capture accurately. A Lavalier microphone typically comprises a small microphone worn around the ear or otherwise close to the mouth. For other sound sources, such as musical instruments, the audio signal may be provided either by a Lavalier microphone or by an internal microphone system of the instrument (e.g. pick-up microphones in the case of an electric guitar) or an internal audio output (e.g. an electric keyboard output). In some embodiments the close microphone may be configured to output the captured audio signals to a mixer. The external microphone may be connected to a transmitter unit (not shown), which wirelessly transmits the audio signal to a receiver unit (not shown).
In some embodiments the positions of the external microphones (mic sources) 201, and thus of the performers and/or the instruments that are being played, may be tracked by using position tags located on or associated with the microphone source. Thus for example the external microphone comprises or is associated with a microphone position tag. The microphone position tag may be configured to transmit a radio signal such that an associated receiver may determine information identifying the position or location of the close microphone. It is important to note that microphones worn by people can be freely moved in the acoustic space, and a system supporting location sensing of wearable microphones has to support continuous sensing of user or microphone location. The close microphone position tag may be configured to output this signal to a position tracker. Although the following examples show the use of the HAIP (high accuracy indoor positioning) radio frequency signal to determine the location of the close microphones, it is understood that any suitable position estimation system may be used (for example satellite-based position estimation systems, inertial position estimation, beacon based position estimation etc.).
In some embodiments the system comprises a spatial mixer 205. The spatial mixer 205 is configured, as in the known spatial audio mixing system shown in Figure 1, to receive the external microphone audio signals, and may be configured to spatially position the audio signals and mix them to create a spatial audio signal.
The spatial positioning may be performed based on the positioning data from the HAIP information received. Alternatively the positioning information may be input manually by a sound engineer, e.g. by providing azimuth/elevation/distance for each sound source, or by any other suitable position tracking method.
The spatial mixer 205 may, from the determined position data, render a positioned monophonic sound signal at a suitable spatial location using head-related transfer function (HRTF) filtering when binaural audio output is desired for headphone listening. The output may be a two channel L+R signal for headphone listening, and the outputs after filtering can be summed for each microphone source to create a spatial mix signal containing all the spatially positioned sources. Correspondingly, in some embodiments when creating a loudspeaker domain output, the positioned monophonic sound signal output (for each sound source) may be panned and the panned sound source audio signals summed to create the spatial mix of sources.
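As an illustrative sketch of this pan-and-sum step (a simple equal-power stereo pan standing in for full VBAP or HRTF rendering; all names are hypothetical, not the patent's implementation):

```python
import numpy as np

def pan_stereo(mono, azimuth_deg):
    """Equal-power pan of a mono source to L/R; azimuth in [-90, 90]
    degrees, -90 = hard left. A stand-in for VBAP/HRTF rendering."""
    theta = (np.clip(azimuth_deg, -90.0, 90.0) + 90.0) * np.pi / 360.0
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono])

def spatial_mix(sources, azimuths):
    """Pan each positioned mono source and sum into one L+R spatial
    mix, as described for the spatial mixer 205."""
    return sum(pan_stereo(s, az) for s, az in zip(sources, azimuths))
```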
Furthermore in some embodiments the spatial mixer 205 is configured to mix at least one ambient or ambience audio signal. In the known systems the ambient or ambience audio signal was generated from a spatial audio signal capture apparatus comprising an array of microphones, for example a Nokia OZO apparatus. However in the embodiments as described hereafter the at least one ambience audio signal is generated by an ambience signal generator 203.
The ambience signal generator 203 is configured to receive the audio signals 202 from the external microphones 201 and from these audio signals generate at least one ambience audio signal which may be passed to the spatial mixer 205 to be mixed with the spatially processed audio signals from the external microphones.
The ambience signal generator 203 in some embodiments comprises a weighted sum 211. The weighted sum 211 is configured to receive the audio signals 202 from the microphone sources and generate a weighted sum of the audio signals. In some embodiments the weighted sum 211 outputs the combined audio signal to a reverberator 213, however in some embodiments the weighted sum 211 outputs a combined audio signal to the spatial extent synthesizer 215 directly.
In some embodiments the ambience signal generator 203 comprises a reverberator 213. The reverberator 213 in some embodiments is configured to receive the output from the weighted sum 211. The reverberator is configured to output a reverberated audio signal to a spatial extent synthesiser 215.
In some embodiments the ambience signal generator 203 comprises a spatial extent synthesizer 215. The spatial extent synthesizer 215 is configured to receive the output from the reverberator 213 (or the weighted sum 211) and generate an ambience signal 204 which is output to the spatial mixer 205.
Figure 3 shows the system shown in figure 2 in further detail. The example in figure 3 shows a single example microphone source 201 which is configured to output an audio signal 202 to the weighted sum 211 and spatial mixer 205 (sound object processor 331).
The weighted sum 211 in some embodiments comprises a signal classifier/characterizer 301 which is configured to receive the audio signal 202 from the microphone source 201 and classify or characterise the audio signal, or otherwise generate parameters which may be used by a weight determiner and normalizer to determine an ambience weighting factor.
In some embodiments the signal classifier/characterizer 301 may comprise a Voice Activity Detector (VAD) 311, such that the categorisation of the audio signal can be performed by the voice activity detector 311. The VAD may in some embodiments first perform a noise reduction stage, calculate some features or quantities from a section of the input signal, and then apply a classification rule to classify the section as speech or non-speech. In some embodiments this classification rule is based on determining whether a value exceeds a threshold. In some embodiments there may be some feedback in this sequence, in which the VAD decision is used to improve the noise estimate in the noise reduction stage, or to adaptively vary the threshold(s). These feedback operations improve the VAD performance in non-stationary noise (i.e. when the noise varies a lot). Some VAD methods may formulate the decision rule on a frame by frame basis using instantaneous measures of the divergence distance between speech and noise. The different measures which are used in VAD methods may include spectral slope, correlation coefficient, log likelihood ratio, cepstral, weighted cepstral, and modified distance measures.
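A toy frame-energy VAD along these lines might look as follows (an illustrative sketch only; real detectors add the noise reduction, spectral features and adaptive thresholds described above):

```python
import numpy as np

def vad_activity(signal, frame_len=1024, threshold_db=-50.0):
    """Frame-wise activity decision: a frame counts as speech/source
    activity when its mean power exceeds a fixed threshold."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    power_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return power_db > threshold_db   # False => likely background noise
```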
In some embodiments the signal classifier/characterizer 301 may comprise a spectral flatness detector 313. Spectral flatness is typically measured in decibels, and provides a way to quantify how noise-like a sound is, as opposed to being tone-like.
The meaning of tonal in this context is in the sense of the amount of peaks or resonant structure in a power spectrum, as opposed to the flat spectrum of a white noise. A high spectral flatness (approaching 1.0 for white noise) indicates that the spectrum has a similar amount of power in all spectral bands. This spectrum would sound similar to white noise, and the graph of the spectrum would appear relatively flat and smooth. A low spectral flatness (approaching 0.0 for a pure tone) indicates that the spectral power is concentrated in a relatively small number of bands. This spectrum would typically sound like a mixture of sine waves, and the spectrum would appear spiky. In some embodiments the spectral flatness is calculated by dividing the geometric mean of the power spectrum by the arithmetic mean of the power spectrum.
The spectral flatness may in some embodiments be measured within a specified sub-band, rather than across the whole band.
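The geometric-mean-over-arithmetic-mean calculation described above can be written directly (a sketch; computed here over the whole band of one frame, though the same ratio can be taken over any sub-band of bins):

```python
import numpy as np

def spectral_flatness(frame, eps=1e-12):
    """Flatness of one audio frame: ~1.0 for white noise, ~0.0 for a
    pure tone. Ratio of the geometric to the arithmetic mean of the
    power spectrum (often then expressed in dB)."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + eps
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return geometric_mean / arithmetic_mean
```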
In some embodiments the signal classifier/characterizer 301 may comprise a percussiveness detector 315. The percussiveness detector 315 may be configured to perform an analysis of percussiveness using, for example, the pulse-metric characterization described in Scheirer & Slaney, "Construction and Evaluation of a Robust Multifeature Speech/Music Discriminator", available from https://www.ee.columbia.edu/~dpwe/papers/ScheiS97-mussp.pdf.
In some embodiments the signal classifier/characterizer 301 may comprise a harmonicity detector 317. The harmonicity detector 317 may for example be similar to the harmonicity detector from Srinivasan & Kankanhalli, "Harmonicity and Dynamics-Based Features for Audio".
In some embodiments the signal classifier/characterizer 301 may comprise a content classifier 319. The content classifier 319 may in some embodiments determine the content of the microphone audio signals, which may be used in determining the weights. For example, a deep neural network may be trained to classify between noise/speech/music/singing, using any of the above features or the signal spectrum directly.
In some embodiments the signal classifier/characterizer 301 may comprise a sound engineer input 321. The sound engineer input 321 may enable weights or weight adjustments to be input by a sound engineer or other user of the system. For example a sound engineer could be offered an option to make adjustments using a graphical user interface (GUI) on a digital audio workstation (DAW).
The classifier 301 can output the results of the analysis to a weight determiner and normaliser 303.
In some embodiments the weighted sum 211 comprises a weight determiner and normaliser 303. The weight determiner and normaliser 303 can be configured to generate weightings for each of the microphone sources based on the characterisation from the classifier/characterizer 301. In one example the weight determiner generates weights for each microphone audio signal so that the weighted sum processor 305 is configured to multiply each microphone signal with an equal weighting (w(i) = 1/N for i = 1 to N microphone sources) to create the ambience audio signal. However, in some embodiments the ambience signal may be created from microphone audio signals that represent the overall ambience instead of active direct sources. The weights w(i) for each microphone audio signal may thus in some embodiments be obtained based on analysis which determines how likely it is that each microphone signal carries ambient background noise instead of dominantly capturing a close-up sound source. The sum of w(i), where i ranges from 1 to N, may in some embodiments be normalized to unity. In other words sum(w(i), i = 1:N) = 1.
The output from the classifier/characterizer 301 may be used to determine the weights applied for a microphone source. In some embodiments the analyses and the weightings may be performed over time in frames, say 1 second long, to enable the weight of a microphone signal in ambience creation to change over time. Thus, this effectively implements time multiplexing, that is, different external microphones can be used at different times for ambience creation.
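A sketch of this frame-wise time multiplexing (hypothetical names; a production version would cross-fade between frames to avoid discontinuities):

```python
import numpy as np

def framewise_ambience(mic_signals, frame_scores, frame_len):
    """mic_signals: (N, T); frame_scores: (F, N) per-frame ambience
    scores. Weights are re-normalised each frame, so different mics
    can dominate the ambience at different times."""
    n_mics, length = mic_signals.shape
    out = np.zeros(length)
    for f, scores in enumerate(np.asarray(frame_scores, dtype=float)):
        w = scores / scores.sum()
        start, stop = f * frame_len, min((f + 1) * frame_len, length)
        out[start:stop] = w @ mic_signals[:, start:stop]
    return out
```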
In some embodiments the input signal is analysed to determine whether there is voice activity. When the VAD indicates inactivity, it is likely that the microphone is capturing just background noise as the audio signal. In this case, the weight for this microphone source w(i) may be increased. The increase may be, for example, proportional to the current weight value.
Also, in some embodiments the input audio signal is analysed for spectral flatness to determine how close to (white) noise the input signal is versus how likely it is that the input signal is tone-like. Any signals which receive spectral flatness measures close to 1 may receive higher weighting in the sum because they are likely to contain noise-like content.
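A minimal sketch of one common spectral flatness measure, the ratio of the geometric to the arithmetic mean of the power spectrum (framing of the input is assumed to be handled elsewhere):

    import numpy as np

    def spectral_flatness(frame, eps=1e-12):
        # Geometric mean / arithmetic mean of the power spectrum.
        # Values near 1 indicate noise-like (flat) content; values near 0
        # indicate tone-like content.
        power = np.abs(np.fft.rfft(frame)) ** 2 + eps
        geometric_mean = np.exp(np.mean(np.log(power)))
        arithmetic_mean = np.mean(power)
        return geometric_mean / arithmetic_mean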
Furthermore, in some embodiments the input audio signal is analysed to determine harmonicity-related features such as fundamental frequency (pitch), harmonic concentration, or harmonicity. These features may indicate that the microphone signal contains harmonic content. Microphone audio signals which likely contain harmonic content may receive lower weights in the summation.
In some embodiments the input audio signal is analysed for percussiveness; microphone signals which likely contain rhythmic content are likely to contain percussion or other rhythmic material rather than background ambience. These audio signals may receive lower weighting factors.
Content classification analysis may determine the content of the microphone signals, the result being used in determining the weighting values. For example, a deep neural network may be trained to classify between noise/speech/music/singing. If the classification indicates noise, the weighting for this audio signal may be increased. Where the classification indicates speech/singing/music, the weighting for this signal may be decreased.
Also the weight determiner 303 may be configured in some embodiments to determine the weighting values based on the input by a sound engineer or other user. For example, the system might calculate initial weights using the logic above, and then a sound engineer could be offered an option to make adjustments.
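Pulling these heuristics together, one possible per-frame weight update is sketched below; the multiplicative form reflects the "proportional to the current weight" behaviour described above, while the assumed 0..1 feature ranges and the 1.5/0.5 scaling constants are illustrative assumptions only:

    def adjust_weight(w, vad_active, flatness, harmonicity, percussiveness):
        # Illustrative update for one microphone source; all features are
        # assumed normalised to the range 0..1.
        if not vad_active:
            w *= 1.5                     # inactivity: likely background noise
        w *= 1.0 + 0.5 * flatness        # noise-like spectrum: raise weight
        w *= 1.0 - 0.5 * harmonicity     # harmonic content: lower weight
        w *= 1.0 - 0.5 * percussiveness  # rhythmic content: lower weight
        return max(w, 0.0)

After such an update, the weights would be renormalised to unity as in the earlier sketch.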
The output of these weighting values can be passed to the weighted sum processor 305.
In some embodiments the weighted sum 211 comprises a weighted sum processor 305 configured to receive the audio signals and the weightings associated with each audio signal. The weighted sum processor 305 may then combine the audio signals according to the weightings generated in the weight determination and normalisation module 303.
The output of the weighted sum processor 305 can be passed to the digital reverberator 213. The digital reverberator 213 may be configured in some embodiments to optionally add reverberation to the combined audio signal. This additional process increases the spaciousness of the ambience signal and helps to separate it from the external microphone audio signals. For this, any suitable digital reverberation method may be applied. For example the combined audio signal may be passed through various delay lines. An example of a suitable reverberator is a Schroeder digital reverberator, details of which may be found at https://ccrma.stanford.edu/~jos/pasp/Schroeder_Reverberators.html. The output of the digital reverberator 213 can be passed to the spatial extent synthesiser 215.
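For illustration, a direct (unoptimised) sketch of a classic Schroeder topology, parallel feedback comb filters followed by series allpass filters, is given below; the delay lengths, gains and dry/wet mix are typical textbook values rather than values from this disclosure:

    import numpy as np

    def feedback_comb(x, delay, g):
        # y[n] = x[n] + g * y[n - delay]
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
        return y

    def allpass(x, delay, g):
        # y[n] = -g * x[n] + x[n - delay] + g * y[n - delay]
        y = np.zeros_like(x)
        for n in range(len(x)):
            xd = x[n - delay] if n >= delay else 0.0
            yd = y[n - delay] if n >= delay else 0.0
            y[n] = -g * x[n] + xd + g * yd
        return y

    def schroeder_reverb(x, fs=48000):
        x = np.asarray(x, dtype=float)
        comb_delays = [int(fs * t) for t in (0.0297, 0.0371, 0.0411, 0.0437)]
        wet = sum(feedback_comb(x, d, 0.84) for d in comb_delays) / 4.0
        for d, g in ((int(fs * 0.005), 0.7), (int(fs * 0.0017), 0.7)):
            wet = allpass(wet, d, g)
        return 0.7 * x + 0.3 * wet  # example dry/wet mix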
The spatial extent synthesiser 215 may receive the output of the reverberator 213 or the weighted sum 211 (305) and output a spatially extended synthesised signal as the ambience signal to a spatial mixer 205 and in some embodiments a spatial mixer processor 333.
In some embodiments the system comprises the spatial mixer 205. The spatial mixer 205 in some embodiments comprises a sound object processor 331. The sound object processor 331 can be configured to analyse the audio signals from the microphone sources and output these to the spatial mixer processor 333. The processing may for example comprise spatial position determining of the microphone sources.
The spatial mixer 205 may receive the ambience audio signals and the processed audio signals from the microphone audio signals and be configured to mix and/or render the audio signals based on the positioning data (which may be from the HAIP information received, input manually by a sound engineer or obtained by any other suitable position tracking method). The spatial mixer processor 333 may thus, from the determined position data, render a positioned monophonic sound signal at a suitable spatial location using head-related-transfer-function (HRTF) filtering when binaural audio output is desired for headphone listening. The output may be a two channel L+R signal for headphone listening, and the outputs after filtering can be summed for each microphone source to create a spatial mix signal containing all the spatially positioned sources. Correspondingly, in some embodiments when creating a loudspeaker domain output, the positioned monophonic sound signal output (for each sound source) may be panned and the panned sound source audio signals summed to create the spatial mix of sources.
With respect to figure 4 an example spatial extent synthesiser 215 is shown in further detail. As described herein the spatial extent synthesiser 215 receives the combined (reverberated) audio signals and spatially extends the audio signal to a defined (for example 360 degree) spatial extent using methods for spatial extent control. In other words it takes as input a mono sound source audio signal and spatial extent parameters (width, height and depth).
In some embodiments where the audio signal input is a time domain signal the spatial extent synthesiser 215 comprises a suitable time to frequency domain transformer. For example as shown in figure 4 the spatial extent synthesiser 215 comprises a Short-Time Fourier Transform (STFT) 401 configured to receive the audio signal and output a suitable frequency domain output. In some embodiments the input is a time-domain signal which is processed with a hop size of 512 samples. A processing frame of 1024 samples is used, formed from the current 512 samples and the previous 512 samples. The processing frame is zero-padded to twice its length (2048 samples) and Hann windowed. The Fourier transform is calculated from the windowed frame producing the Short-Time Fourier Transform (STFT) output. The STFT output is symmetric, thus it is sufficient to process the positive half of 1024 samples including the DC component, totalling 1025 samples. Although the STFT is shown in figure 4, any suitable time to frequency domain transform may be used.
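A minimal sketch of this framing, assuming a one-dimensional time-domain input (function and variable names are illustrative):

    import numpy as np

    def stft_frames(x, hop=512, frame=1024, fft_size=2048):
        # Each 1024-sample frame (current 512 + previous 512 samples) is
        # Hann windowed, zero-padded to 2048 samples and transformed; only
        # the positive half (1025 bins including DC) is kept.
        window = np.hanning(frame)
        spectra = []
        for start in range(0, len(x) - frame + 1, hop):
            seg = x[start:start + frame] * window
            padded = np.concatenate([seg, np.zeros(fft_size - frame)])
            spectra.append(np.fft.rfft(padded))  # 1025 complex bins
        return np.array(spectra)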
In some embodiments the spatial extent synthesiser 215 further comprises a filter bank 403. The filter bank 403 is configured to receive the output of the STFT 401 and, using a set of filters generated based on a Halton sequence (and with some default parameters), generate a number of frequency bands 405. In statistics, Halton sequences are sequences used to generate points in space for numerical methods such as Monte Carlo simulations. Although these sequences are deterministic, they are of low discrepancy, that is, they appear to be random for many purposes. In some embodiments the filter bank 403 comprises a set of 9 different distribution filters, which are used to create 9 different frequency domain signals where the signals do not contain overlapping frequency components. These signals are denoted Band 1 F 4051 to Band 9 F 4059 in figure 4. The filtering can be implemented in the frequency domain by multiplying the STFT output with stored filter coefficients for each band.
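A sketch of how such non-overlapping distribution filters might be derived from a Halton sequence; the exact mapping from sequence values to bands is not specified above, so the floor-scaling rule here is an assumption:

    import numpy as np

    def halton(n, base=2):
        # First n values of the base-2 Halton (van der Corput) sequence.
        seq = np.zeros(n)
        for i in range(1, n + 1):
            f, r, k = 1.0, 0.0, i
            while k > 0:
                f /= base
                r += f * (k % base)
                k //= base
            seq[i - 1] = r
        return seq

    def band_masks(num_bins=1025, num_bands=9):
        # Assign every STFT bin to exactly one band, producing 9
        # non-overlapping 0/1 filter coefficient vectors.
        assignment = np.floor(halton(num_bins) * num_bands).astype(int)
        return [(assignment == b).astype(float) for b in range(num_bands)]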
In some embodiments the spatial extent synthesiser 215 further comprises a spatial extent input 400. The spatial extent input 400 may be configured to define the spatial extent of the audio signal.
Furthermore in some embodiments the spatial extent synthesiser 215 may further comprise an object position input/determiner 402. The object position input/determiner 402 may be configured to determine the spatial position of sound sources. This information may be determined in some embodiments by the sound object processor.
In some embodiments the spatial extent synthesiser 215 may further comprise a band position determiner 404. The band position determiner 404 may be configured to receive the outputs from the object position input/determiner 402 and the spatial extent input 400 and from these generate an output passed to the vector base amplitude panning processor 406. In the following example the spatial extent synthesiser 215 (or spatially extending controller) is implemented using a vector base amplitude panning operation. However it is understood that the spatial extent synthesis or spatially extending control may be implementation agnostic and any suitable implementation used to generate the spatially extending control. For example in some embodiments the spatially extending control may implement direct binaural panning (using head related transfer function filters for directions), direct assignment to the output channel locations (for example direct assignment to the loudspeakers without using any panning), synthesized ambisonics, and wave-field synthesis.
In some embodiments the spatial extent synthesiser 215 may further comprise a vector base amplitude panning (VBAP) processor 406. The VBAP processor 406 may be configured to generate control signals to control the panning of the frequency domain signals to desired spatial positions. Given the spatial position of the sound source (azimuth, elevation) and the desired spatial extent for the source (width in degrees), the system calculates a spatial position for each frequency domain signal. For example, if the spatial position of the sound source is zero degrees azimuth (front), and the spatial extent 90 degrees, the VBAP may position the frequency bands at azimuths 45, 33.75, 22.5, 11.25, 0, -11.25, -22.5, -33.75 and -45 degrees. Thus, a linear allocation of bands around the source position is used, with the span defined by the spatial extent.
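This linear allocation can be sketched directly (the function name is illustrative):

    import numpy as np

    def band_positions(source_azimuth, spatial_extent, num_bands=9):
        # Linearly spread the band azimuths over the extent, centred on
        # the source position.
        half = spatial_extent / 2.0
        return np.linspace(source_azimuth + half,
                           source_azimuth - half, num_bands)

    print(band_positions(0.0, 90.0))
    # [ 45.    33.75  22.5   11.25   0.   -11.25 -22.5  -33.75 -45.  ]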
The VBAP processor 406 may therefore be used to calculate a suitable gain for each signal, given the desired loudspeaker positions, such that the signal can be spatially positioned at a suitable position. These gains may be passed to a series of multipliers 407.
In some embodiments the spatial extent synthesiser 215 may further comprise a series of multipliers 407. In figure 4 one multiplier is shown for each frequency band; thus the series comprises multipliers 4071 to 4079, however any suitable number of multipliers may be used. Each frequency domain band signal may be multiplied in the multiplier 407 with the determined VBAP gains.
The products of the VBAP gains and each frequency band signal may be passed to a series of output channel sum devices 409.
In some embodiments the spatial extent synthesiser 215 may further comprise a series of sum devices 409. The sum devices 409 may receive the outputs from the multipliers and combine them to generate an output channel band signal 411. In the example shown in figure 4, a 4.0 loudspeaker format output is implemented with outputs for front left (Band FL F 4111), front right (Band FR F 4112), rear left (Band RL F 4113), and rear right (Band RR F 4114) channels which are generated by sum devices 4091, 4092, 4093 and 4094 respectively. In some other embodiments other loudspeaker formats or numbers of channels can be supported.
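The multiply-and-sum stage can be compactly expressed as a matrix product; the array shapes below are assumptions for illustration, with the gains taken to come from the VBAP processor described above:

    import numpy as np

    def pan_bands_to_channels(band_spectra, gains):
        # band_spectra: (num_bands, num_bins) frequency domain band signals.
        # gains:        (num_bands, num_channels) panning gains, one row per
        #               band position (e.g. 9 bands to FL/FR/RL/RR channels).
        # Returns (num_channels, num_bins): each output channel is the
        # gain-weighted sum of all band signals.
        return np.asarray(gains).T @ np.asarray(band_spectra)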
Furthermore in some embodiments other panning methods can be used such as panning laws, or the signals could be assigned to the closest loudspeakers directly.
In some embodiments the spatial extent synthesiser 215 may further comprise a series of inverse Short-Time Fourier Transforms (ISTFT) 413. For example as shown in figure 4 there is an ISTFT 4131 associated with the FL signal, an ISTFT 4132 associated with the FR signal, an ISTFT 4133 associated with the RL signal output and an ISTFT 4134 associated with the RR signal. In other words it provides N component audio signals to be played from different directions based on the spatial extent parameters. The signals are subjected to the Inverse Short-Time Fourier Transform (ISTFT) and overlap-added to produce time-domain outputs.
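A sketch of this inverse step, mirroring the earlier framing sketch (the 50% overlap of the Hann-windowed frames makes a plain overlap-add sum approximately reconstruct the signal; window normalisation details are omitted):

    import numpy as np

    def istft_overlap_add(spectra, hop=512, frame=1024, fft_size=2048):
        # Inverse FFT each spectrum, drop the zero-padding tail and
        # overlap-add the frames at the hop size.
        out = np.zeros(len(spectra) * hop + frame)
        for i, spectrum in enumerate(spectra):
            time_frame = np.fft.irfft(spectrum, n=fft_size)[:frame]
            out[i * hop:i * hop + frame] += time_frame
        return out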
These component signals may be provided for rendering and also for analysis for the purpose of ensuring even energy distributions between the components.
In some embodiments, there may be more than one ambiance audio signal, or in other words the ambiance audio signal may be created in two or more parts. For example, microphone audio signals on the left side of a sound scene may contribute to an ambiance audio signal on the left, spatially extended to 180 degrees, and microphone audio signals on the right side of the sound scene may contribute to the ambiance audio signal on the right, also extended to a 180 degree extent.
Thus if it is known that the space is acoustically not diffuse (e.g., outdoors), then it is more natural that the ambience corresponds somewhat to the directions of the sources. For example, in a battle-of-the-bands situation outside with two rock bands on opposite sides of the listener, it is only natural that these two directions would have their own ambience tracks.
Any suitable division of the scene may be used for creating the ambiance signal in different parts. For example, if the microphones are located approximately in a left/right arrangement, then a left/right division for ambiance creation may be suitable. If the microphones are located in a constellation around the stage in a front/back/left/right arrangement, then a division into four 90 degree sectors for ambiance creation may be suitable. The division may in some embodiments be controllable from a graphical user interface of a digital audio workstation (DAW) system taking care of executing at least part of the proposed functionality.
In some embodiments it is possible to create an ambiance audio signal using just one microphone, since the apparatus and method utilizes spatial extent synthesis to expand the scene. However, at least a second microphone some distance away from the first microphone may be required to allow capture of ambience. More microphones may result in a better approximation of the ambient signal. In general, the decision of how many external microphones to use for generating the ambiance may be at least partly based on the number of relatively acoustically homogenous portions of the scene. That is, if the sound scene varies at different locations of the scene, it is natural to place at least one microphone to capture each homogenous portion of the scene.
In some embodiments, audio texture synthesis methods (see US patent 9,528,852) may be used for reusing a portion of the ambiance created this way at other times of the audio capture. For example, an ambiance signal may be created by the proposed method during a quiet section of the event, or during any suitable portion of the event. The ambiance may be stored and looped as proposed in US 9,528,852 to create ambiance for other times in the event. In some embodiments, such a pre-generated ambiance audio signal may be used at times of the event where all microphone signals indicate that they are capturing the external sound sources rather than background sounds. This can be determined by all microphones receiving low weights for the ambiance creation.
The system may also in some embodiments use a weighted sum of the ambiance created from current microphone input and the pre-generated (looped) ambiance.
In some embodiments, the system may perform a pre-calibration phase, during which the ambiance captured with external microphones is matched acoustically to an ambiance captured using a microphone array. For example, the magnitude response or other acoustic properties of the ambiance created from external microphones may be matched to an ambiance captured from a microphone array. This enables substituting a microphone array ambiance more realistically with ambiance captured from external microphones, and may be useful, for example, in situations where the microphone array suddenly becomes unavailable (in breakdowns, for example).
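A sketch of one way such matching could be performed, deriving per-band gains from the average magnitude spectra of the two ambiances and applying them as a static equalisation; the band layout and the equal-length assumption are illustrative, not from this disclosure:

    import numpy as np

    def match_magnitude(ext_ambience, array_ambience, num_bands=32):
        # Assumes both calibration excerpts have the same length.
        ext_spec = np.fft.rfft(ext_ambience)
        ext_mag = np.abs(ext_spec)
        arr_mag = np.abs(np.fft.rfft(array_ambience))
        edges = np.linspace(0, len(ext_mag), num_bands + 1, dtype=int)
        gains = np.ones(len(ext_mag))
        for lo, hi in zip(edges[:-1], edges[1:]):
            gains[lo:hi] = (arr_mag[lo:hi].mean() + 1e-12) / \
                           (ext_mag[lo:hi].mean() + 1e-12)
        # Apply the per-band gains to the external-microphone ambience.
        return np.fft.irfft(ext_spec * gains, n=len(ext_ambience))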
With respect to Figure 5 an example electronic device which may be used as the mixer and/or ambience signal generator is shown. The device may be any suitable electronics device or apparatus. For example in some embodiments the device 1200 is a mobile device, user equipment, tablet computer, computer, audio playback apparatus, etc.
The device 1200 may comprise a microphone 1201. The microphone 1201 may comprise a plurality (for example a number N) of microphones. However it is understood that there may be any suitable configuration of microphones and any suitable number of microphones. In some embodiments the microphone 1201 is separate from the apparatus and the audio signal transmitted to the apparatus by a wired or wireless coupling. The microphone 1201 may in some embodiments be the microphone array as shown in the previous figures.
The microphone may be a transducer configured to convert acoustic waves into suitable electrical audio signals. In some embodiments the microphone can be a solid state microphone. In other words the microphone may be capable of capturing audio signals and outputting a suitable digital format signal. In some other embodiments the microphone 1201 can comprise any suitable microphone or audio capture means, for example a condenser microphone, capacitor microphone, electrostatic microphone, electret condenser microphone, dynamic microphone, ribbon microphone, carbon microphone, piezoelectric microphone, or microelectrical-mechanical system (MEMS) microphone. The microphone can in some embodiments output the captured audio signal to an analogue-to-digital converter (ADC) 1203.
The device 1200 may further comprise an analogue-to-digital converter 1203. The analogue-to-digital converter 1203 may be configured to receive the audio signals from each of the microphones 1201 and convert them into a format suitable for processing. In some embodiments where the microphone is an integrated microphone the analogue-to-digital converter is not required. The analogue-to-digital converter 1203 can be any suitable analogue-to-digital conversion or processing means. The analogue-to-digital converter 1203 may be configured to output the digital representations of the audio signal to a processor 1207 or to a memory 1211.
In some embodiments the device 1200 comprises at least one processor or central processing unit 1207. The processor 1207 can be configured to execute various program codes such as the methods described herein.
In some embodiments the device 1200 comprises a memory 1211. In some embodiments the at least one processor 1207 is coupled to the memory 1211. The memory 1211 can be any suitable storage means. In some embodiments the memory 1211 comprises a program code section for storing program codes implementable upon the processor 1207. Furthermore in some embodiments the memory 1211 can further comprise a stored data section for storing data, for example data that has been processed or is to be processed in accordance with the embodiments as described herein. The implemented program code stored within the program code section and the data stored within the stored data section can be retrieved by the processor 1207 whenever needed via the memory-processor coupling.
In some embodiments the device 1200 comprises a user interface 1205. The user interface 1205 can be coupled in some embodiments to the processor 1207. In some embodiments the processor 1207 can control the operation of the user interface 1205 and receive inputs from the user interface 1205. In some embodiments the user interface 1205 can enable a user to input commands to the device 1200, for example via a keypad. In some embodiments the user interface 1205 can enable the user to obtain information from the device 1200. For example the user interface 1205 may comprise a display configured to display information from the device 1200 to the user. The user interface 1205 can in some embodiments comprise a touch screen or touch interface capable of both enabling information to be entered to the device 1200 and further displaying information to the user of the device 1200. In some embodiments the user interface 1205 may be the user interface for communicating with the position determiner as described herein.
In some embodiments the device 1200 comprises a transceiver 1209. The transceiver 1209 in such embodiments can be coupled to the processor 1207 and configured to enable communication with other apparatus or electronic devices, for example via a wireless communications network. The transceiver 1209 or any suitable transceiver or transmitter and/or receiver means can in some embodiments be configured to communicate with other electronic devices or apparatus via a wire or wired coupling.
For example as shown in Figure 5 the transceiver 1209 may be configured to communicate with the renderer as described herein.
The transceiver 1209 can communicate with further apparatus by any suitable known communications protocol. For example in some embodiments the transceiver 1209 or transceiver means can use a suitable universal mobile telecommunications system (UMTS) protocol, a wireless local area network (WLAN) protocol such as for example IEEE 802.X, a suitable short-range radio frequency communication protocol such as Bluetooth, or infrared data communication pathway (IRDA).
In some embodiments the device 1200 may be employed as at least part of the renderer. As such the transceiver 1209 may be configured to receive the audio signals and positional information from the microphone/close microphones/position determiner as described herein, and generate a suitable audio signal rendering by using the processor 1207 executing suitable code. The device 1200 may comprise a digital-to-analogue converter 1213. The digital-to-analogue converter 1213 may be coupled to the processor 1207 and/or memory 1211 and be configured to convert digital representations of audio signals (such as from the processor 1207 following an audio rendering of the audio signals as described herein) to a suitable analogue format suitable for presentation via an audio subsystem output. The digital-to-analogue converter (DAC) 1213 or signal processing means can in some embodiments be any suitable DAC technology.
Furthermore the device 1200 can comprise in some embodiments an audio subsystem output 1215. An example as shown in Figure 5 shows the audio subsystem output 1215 as an output socket configured to enable a coupling with headphones 121. However the audio subsystem output 1215 may be any suitable audio output or a connection to an audio output. For example the audio subsystem output 1215 may be a connection to a multichannel speaker system.
In some embodiments the digital to analogue converter 1213 and audio subsystem 1215 may be implemented within a physically separate output device. For example the DAC 1213 and audio subsystem 1215 may be implemented as cordless earphones communicating with the device 1200 via the transceiver 1209.
Although the device 1200 is shown having audio capture, audio processing and audio rendering components, it would be understood that in some embodiments the device 1200 can comprise just some of these elements.
In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or fab for fabrication.
The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.


CLAIMS:
1. An apparatus for generating an intended spatial audio field, the apparatus configured to:
receive at least two audio signals, wherein each audio signal is received from a separate microphone, each separate microphone is located in the same environment and configured to capture a sound source;
analyse each audio signal to determine at least in part an ambience audio signal;
generate a sum audio signal from the determined ambience audio signal based on the at least two audio signals; and process the sum audio signal to spatially extend the sum audio signal so as to generate the intended spatial audio field, wherein the sum audio signal comprises the ambience audio signal for the intended spatial audio field.
2. The apparatus as claimed in claim 1, wherein the apparatus is further configured to apply a reverberation to the sum audio signal before the processing of the sum audio signal to spatially extend the sum audio signal.
3. The apparatus as claimed in any of claims 1 and 2, wherein the apparatus configured to generate a sum audio signal from the determined ambience audio signal based on the at least two audio signals is configured to generate for and apply to at least one of the at least two audio signals a weighting value before generating the sum audio signal, wherein the weighting value is based on at least one of:
a detection of voice activity within the audio signal; a determination of spectral flatness within the audio signal; a determination of percussiveness within the audio signal; a determination of harmonicity within the audio signal; a determination of content classification type within the audio signal; a determination of silence within the audio signal; a determination of noise within the audio signal; and at least one user generated input associated with the audio signal.
4. The apparatus as claimed in claim 3, wherein the apparatus configured to generate for at least one of the at least two audio signals a weighting value is further configured to normalise the weighting value for at least one of the at least two audio signals.
5. The apparatus as claimed in any of claims 1 to 4, wherein the apparatus configured to process the sum audio signal to spatially extend the sum audio signal is configured to apply one of:
vector base amplitude panning to the sum audio signal;
direct binaural panning to the sum audio signal;
direct assignment to channel output location to the sum audio signal;
synthesized ambisonics to the sum audio signal; and wavefield synthesis to the sum audio signal.
6. The apparatus as claimed in claim 5, wherein the apparatus configured to process the sum audio signal to spatially extend the sum audio signal is configured to:
determine a spatial extent parameter;
determine at least one position associated with the microphones; determine at least one frequency band position based on the at least one position associated with the microphones and the spatial extent parameter.
7. The apparatus as claimed in claim 6, wherein the apparatus configured to apply vector base amplitude panning to the sum audio signal is further configured to generate panning vectors for the application of vector base amplitude panning to frequency bands of the sum audio signal.
8. The apparatus as claimed in any of claims 1 to 7, wherein the apparatus configured to generate the intended spatial audio field is configured to generate a plurality of intended spatial audio field parts, wherein at least one part of the intended spatial audio field is at least one of:
partially overlapping a neighbouring part;
non-overlapping at least one other part; contained within at least one other part; and containing at least one other part.
9. The apparatus as claimed in any of claims 1 to 7, wherein the apparatus is configured to generate:
at least one first part of the intended spatial audio field associated with a first part of the environment, the first part of the environment comprising at least one sound source; and at least one second part of the intended spatial audio field associated with a second part of the environment, the second part of the environment comprising at least one further sound source.
10. The apparatus as claimed in claim 9, wherein the first part of the environment is a left portion of the environment with respect to the apparatus, and the second part of the environment is a right portion of the environment with respect to the apparatus.
11. The apparatus as claimed in claim 9, wherein the first part of the environment is a front portion of the environment with respect to the apparatus, and the second part of the environment is a rear portion of the environment with respect to the apparatus.
12. The apparatus as claimed in any of claims 1 to 11, further configured to determine a position of the at least one microphone of the microphones relative to the apparatus.
13. The apparatus as claimed in any of claims 1 to 12, further configured to: receive at least one audio signal from a capture device comprising a microphone array for capturing audio signals of the sound scene;
compare the at least one audio signal from the capture device to the at least one audio signal;
control the generation of the sum audio signal from microphones located within the intended spatial audio field; and process the sum audio signal to generate the intended spatial audio field based on the comparison.
14. The apparatus as claimed in any of claims 1 to 13, further configured to mix the at least one spatially extended sum audio signal with at least one of the at least two audio signals to generate the intended spatial audio field.
15. The apparatus as claimed in any of claims 1 to 14, wherein the apparatus configured to process the sum audio signal to spatially extend the sum audio signal is configured to spatially extend the sum audio signal such that the at least one spatially extended sum audio signal is one of:
fully spatially extended to 360 degrees; and partially spatially extended up to 360 degrees.
16. A method for generating an intended spatial audio field, the method comprising:
receiving at least two audio signals, wherein each audio signal is received from a separate microphone, each separate microphone being located in the same environment and configured to capture a sound source;
analysing each audio signal to determine at least in part an ambience audio signal;
generating a sum audio signal from the determined ambience signal based on the at least two audio signals; and processing the sum audio signal to spatially extend the sum audio signal so as to generate the intended spatial audio field, wherein the sum audio signal comprises the ambience audio signal for the intended spatial audio field.
17. The method as claimed in claim 16, further comprising applying a reverberation to the sum audio signal before the processing of the sum audio signal to spatially extend the sum audio signal.
18. The method as claimed in any of claims 16 and 17, wherein generating the sum audio signal comprises:
generating for at least one of the at least two audio signals a weighting value; and applying to at least one of the at least two audio signals the weighting value before generating the sum audio signal, wherein the weighting value is based on at least one of:
a detection of voice activity within the audio signal; a determination of spectral flatness within the audio signal; a determination of percussiveness within the audio signal; a determination of harmonicity within the audio signal; a determination of silence within the audio signal; a determination of noise within the audio signal;
a determination of content classification type within the audio signal; and at least one user generated input associated with the audio signal.
19. The method as claimed in claim 18, wherein generating the weighting value further comprises normalising the weighting value for at least one of the at least two audio signals.
20. The method as claimed in any of claims 16 to 19, wherein processing the sum audio signal to spatially extend the sum audio signal comprises applying one of:
vector base amplitude panning to the sum audio signal;
direct binaural panning to the sum audio signal;
direct assignment to channel output location to the sum audio signal;
synthesized ambisonics to the sum audio signal; and wavefield synthesis to the sum audio signal.
21. The method as claimed in claim 20, wherein processing the sum audio signal to spatially extend the sum audio signal comprises:
determining a spatial extent parameter;
determining at least one position associated with the microphones;
determining at least one frequency band position based on the at least one position associated with the microphones and the spatial extent parameter.
22. The method as claimed in claim 21, wherein applying vector base amplitude panning to the sum audio signal further comprises generating panning vectors for the application of vector base amplitude panning to frequency bands of the weighted sum.
23. The method as claimed in any of claims 16 to 22, wherein generating the intended spatial audio field comprises generating a plurality of intended spatial audio field parts, wherein at least one part is at least one of:
partially overlapping a neighbouring part; non-overlapping at least one other part; contained within at least one other part; and containing at least one other part.
24. The method as claimed in any of claims 16 to 23, comprising: generating at least one first part of the intended spatial audio field associated with a first part of the environment, the first part of the environment comprising at least one sound source; and generating at least one second part of the intended spatial audio field associated with a second part of the environment, the second part of the environment comprising at least one further sound source.
25. The method as claimed in claim 24, wherein the first part of the environment is a left portion of the environment, and the second part of the environment is a right portion of the environment.
26. The method as claimed in claim 24, wherein the first part of the environment is a front portion of the environment, and the second part of the environment is a rear portion of the environment.
27. The method as claimed in any of claims 16 to 26, further comprising determining a position of the at least one microphone of the microphones relative to the apparatus.
28. The method as claimed in any of claims 16 to 26, further comprising: receiving at least one audio signal from a capture device comprising a microphone array for capturing audio signals of the sound scene;
comparing the at least one audio signal from the capture device to the at least one audio signal;
controlling the generation of the sum audio signal from microphones located within the intended spatial audio field; and processing the sum audio signal to generate the intended spatial audio field based on the comparison.
29. The method as claimed in any of claims 16 to 28, further comprising mixing the at least one spatially extended sum audio signal with at least one of the at least two audio signals to generate the intended spatial audio field.
30. The method as claimed in any of claims 16 to 29, wherein processing the sum audio signal to spatially extend the sum audio signal comprises spatially extending the sum audio signal such that the at least one spatially extended audio signal is one of:
fully spatially extended to 360 degrees; and partially spatially extended up to 360 degrees.