WO2017143003A1 - Processing of microphone signals for spatial playback - Google Patents

Processing of microphone signals for spatial playback

Info

Publication number
WO2017143003A1
WO2017143003A1 (PCT/US2017/018082)
Authority
WO
WIPO (PCT)
Prior art keywords
arrival
matrix
microphone input
input signal
vector
Prior art date
Application number
PCT/US2017/018082
Other languages
English (en)
Inventor
David S. McGrath
Original Assignee
Dolby Laboratories Licensing Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dolby Laboratories Licensing Corporation filed Critical Dolby Laboratories Licensing Corporation
Priority to US15/999,764 priority Critical patent/US11234072B2/en
Publication of WO2017143003A1 publication Critical patent/WO2017143003A1/fr
Priority to US17/583,114 priority patent/US11706564B2/en
Priority to US18/352,197 priority patent/US20240015434A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/04Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S5/00Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03Synergistic effects of band splitting and sub-band processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/11Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/03Aspects of down-mixing multi-channel audio to configurations with lower numbers of playback channels, e.g. 7.1 -> 5.1
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15Aspects of sound capture and related signal processing for recording or reproduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/07Synergistic effects of band splitting and sub-band processing

Definitions

  • the present disclosure generally relates to audio signal processing, and more specifically to the creation of multi-channel soundfield signals from a set of input audio signals.
  • Recording devices with two or more microphones are becoming more common.
  • mobile phones as well as tablets and the like commonly contain 2, 3 or 4 microphones, and the need for increased quality audio capture is driving the use of more microphones on recording devices.
  • the recorded input signals may be derived from an original acoustic scene, wherein the source sounds created by one or more acoustic sources are incident on M microphones (where M ≥ 2). Hence, each of the source sounds may be present within the input signals according to the acoustic propagation path from the acoustic source to the microphones.
  • the acoustic propagation path may be altered by the arrangement of the microphones in relation to each other, and in relation to any other acoustically reflecting or acoustically diffracting objects, including the device to which the microphones are attached.
  • the propagation path from a distant acoustic source to each microphone may be approximated by a time-delay and a frequency-dependent gain, and various methods are known for determining the propagation path, including the use of acoustic measurements or numerical calculation techniques.
  • Example embodiments disclosed herein propose an audio signal processing solution which creates multi-channel soundfield signals (composed of N channels, where N ≥ 2) suitable for presentation to a listener, wherein the listener is presented with a playback experience that approximates the original acoustic scene.
  • a method and/or system which converts a multi-microphone input signal to a multichannel output signal makes use of a time- and frequency-varying matrix. For each time and frequency tile, the matrix is derived as a function of a dominant direction of arrival and a steering strength parameter. In turn, the dominant direction and steering strength parameter are derived from characteristics of the multi-microphone signals, where those characteristics include values representative of the inter-channel amplitude and group-delay differences.
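  • By way of a non-limiting sketch (in Python, with numpy; the callables, names and array shapes are illustrative assumptions, not part of this disclosure), the per-tile flow just described may be written as:

        import numpy as np

        def spatialize(mic_stft, estimate_direction, determine_matrix):
            # mic_stft: [blocks, bands, M] frequency-domain microphone tiles.
            # estimate_direction and determine_matrix stand in for the
            # direction-estimating and matrix-determining functions described below.
            K, B, M = mic_stft.shape
            out = []
            for k in range(K):
                out_k = []
                for b in range(B):
                    u, s = estimate_direction(mic_stft[k, b])  # dominant DOA, steering strength
                    A = determine_matrix(u, s, b)              # [N x M] mixing matrix A(k, b)
                    out_k.append(A @ mic_stft[k, b])           # N output-channel samples
                out.append(out_k)
            return np.array(out)                               # [blocks, bands, N]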
  • Embodiments in this regard further provide a corresponding computer program product.
  • FIG. 1 illustrates an example of an acoustic capture device including a plurality of microphones suitable for carrying out example embodiments disclosed herein;
  • FIG. 2 illustrates a top-down view of the acoustic capture device in FIG. 1 showing an incident acoustic signal in accordance with example embodiments disclosed herein;
  • FIG. 3 illustrates a graph of the impulse responses of three microphones in accordance with example embodiments disclosed herein;
  • FIG. 4 illustrates a graph of the frequency response of three microphones in accordance with example embodiments disclosed herein;
  • FIG. 5 illustrates a user's acoustic experience recreated using speakers in accordance with example embodiments disclosed herein;
  • FIG. 6 illustrates an example of processing of one band according to a matrix in accordance with example embodiments disclosed herein;
  • FIG. 7 illustrates an example of processing of one band of the audio signals in a multi- band processing system in accordance with example embodiments disclosed herein;
  • FIG. 8 illustrates an example of processing of one band according to a matrix, including decorrelation in accordance with example embodiments disclosed herein;
  • FIG. 9 illustrates an example of process for computing a matrix according to characteristics determined from microphone input signals in accordance with example embodiments disclosed herein.
  • FIG. 10 is a block diagram of an example computer system suitable for implementing example embodiments disclosed herein.
  • the audio input signals may be derived from microphones arranged to form an acoustic capture device.
  • multi-channel soundfield signals (composed of N channels, where N ≥ 2) may be created so as to be suitable for presentation to a listener.
  • multi-channel soundfield signals may include:
  • An example of an acoustic capture device 10 is shown in Fig. 1.
  • Acoustic capture device 10 may be, for example, a smart phone, tablet or other electronic device.
  • the body, 30, of the acoustic capture device 10 may be oriented as shown in Fig. 1, in order to capture a video recording and an accompanying audio recording.
  • the primary camera 34 is shown.
  • microphones are disposed on or inside the body of the device in Fig. 1, with acoustic openings 31 and 33 indicating the locations of two microphones. That is, the locations of acoustic openings 31 and 33 are merely provided for illustration purposes and are in no way limited to the specific locations shown in Fig. 1.
  • the Forward, Left and Up directions are indicated in Fig. 1.
  • the Forward, Left and Up directions will also be referred to as the X, Y and Z axes, respectively, for the purpose of identifying the location of acoustic sources in Cartesian coordinates relative to the centre of the body of the capture device.
  • Fig. 2 shows a top-down view of the acoustic capture device 10 of Fig. 1, showing example locations of microphones 31, 32 and 33.
  • the acoustic waveform, 36, from an acoustic source is shown, incident from a direction, 37, represented by an azimuth angle θ (where −180° < θ ≤ 180°), measured in a counter-clockwise direction from the Forward (X) axis.
  • the direction of arrival may also be represented by a unit vector: u = (cos θ, sin θ) (1)
  • Each microphone (31, 32 and 33) will respond to the incident acoustic waveform with a varying time-delay and frequency response, according to the direction-of-arrival (θ, φ).
  • Fig. 4 shows the frequency responses (96, 97 and 98), corresponding to the respective impulse responses 91, 92 and 93 of Fig. 3.
  • the signal, 93, incident at microphone 33 can be seen to be delayed relative to the signal, 91, incident at microphone 31.
  • This delay is approximately 0.3 ms, and is a side-effect of the physical placement of the microphones.
  • a device with a maximum inter-microphone spacing of L metres will give rise to inter-microphone delays up to a maximum of Δ ≈ L/c seconds, where c is the speed of sound in metres/second.
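  • As a worked check of this bound (a minimal Python sketch; the 0.1 m spacing is an assumed example value): microphones spaced 0.1 m apart with c ≈ 343 m/s give a maximum delay of about 0.29 ms, consistent with the roughly 0.3 ms delay seen in Fig. 3.

        L = 0.10       # maximum inter-microphone spacing in metres (assumed)
        c = 343.0      # speed of sound in metres/second
        delta = L / c  # maximum inter-microphone delay in seconds
        print(f"max delay = {delta * 1e3:.2f} ms")  # -> max delay = 0.29 ms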
  • the multi-channel soundfield signals, out_1, out_2, ..., out_N, may be presented to a listener, 101, through a set of speakers as shown in Fig. 5, wherein each channel in the set of multi-channel soundfield signals represents the signal emitted by a corresponding speaker.
  • the positioning of the listener, 101, as well as of the set of speakers, is provided for illustrative purposes only, and as such is a non-limiting example embodiment.
  • the listener, 101, may be presented with the impression of an acoustic signal incident from azimuth angle θ, as per Fig. 5, by panning the acoustic source sound to the out_3 and out_4 speaker channels.
  • Some implementations disclosed herein may derive the appropriate speaker signals from the microphone input signals, according to a matrix mixing process.
  • the microphone input signals, such as 13.6, are mixed to form the multi-channel soundfield signals according to the [N × M] matrix, A: out = A · mic
  • the multi-channel soundfield signals are formed as a linear mixture of the microphone input signals. It will be appreciated, by those of ordinary skill in the art, that linear mixtures of audio signals may be implemented according to a variety of different methods, including, but not limited to, the following:
  • Time domain input signals may be split into two or more frequency bands, with each band being processed by a different mixing matrix.
  • This method, whereby the input signals are split into multiple bands, and the processed results of each band are recombined to form the output signals, is illustrated in Fig. 7.
  • a microphone input, 11, is split into multiple bands (13.1, 13.2, ...) by way of one or more filter banks, 12, and each band signal, for example 13.6, is processed by processor block, 14, to create band output signals (141, 142, ...).
  • Band output signals may then be recombined by combiner, 16, to produce the output signals, for example out_1.
  • Processing block, 14, processes one band, by way of example. In general, one such processing block, 14, will be applied for each one of the B bands; however, additional processing blocks may be incorporated into this method.
  • Input signals may be processed according to mixing matrices that are determined from time to time. For example, at periodic intervals (once every T seconds, say), a new value of A may be determined. In this case, the time-varying matrix is implemented by updating the matrix at periodic intervals.
  • the block-based processing may be implemented by determining a frequency-domain representation of the input signal around block number k, and the frequency-domain representation of the multi-channel soundfield signals may be determined according to a matrix operation. If we define the frequency-domain representations of the input signal and the multi-channel soundfield signals to be Mic(k, ω) and Out(k, ω) respectively, then the matrix, A, may also be determined at each block, k, and at each frequency, ω, so that: Out(k, ω) = A(k, ω) · Mic(k, ω)
  • the frequency-domain method may also be implemented in a number of bands (B bands, say), and hence the matrix, A, may be determined at each block, k, and at each band, b, so that for any frequency, ω, that lies within band b: Out(k, ω) = A(k, b) · Mic(k, ω)
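  • A minimal Python sketch of this banded mixing (array shapes and names are assumptions for illustration): one matrix A(k, b) is shared by every frequency bin ω that falls within band b.

        import numpy as np

        def apply_banded_matrices(Mic, A, band_edges):
            # Mic: [K, W, M] (blocks x frequency bins x microphones)
            # A:   [K, B, N, M] mixing matrices, one per block and band
            # band b spans bins band_edges[b] : band_edges[b + 1]
            K, W, M = Mic.shape
            _, B, N, _ = A.shape
            Out = np.zeros((K, W, N), dtype=complex)
            for k in range(K):
                for b in range(B):
                    lo, hi = band_edges[b], band_edges[b + 1]
                    # Out(k, w) = A(k, b) @ Mic(k, w) for every bin w in band b
                    Out[k, lo:hi] = Mic[k, lo:hi] @ A[k, b].T
            return Out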
  • Some example methods defined below may be considered to be applied in the form of mixing matrices that vary in both time and frequency. Without loss of generality, an example of a method will be described wherein a matrix, A(k, b), is determined at block k and band b, as per the banded frequency-domain linear mixing method above. In the following description, as a matter of shorthand, the matrix A(k, b) will be referred to as A. Also, in the following description, let band b be represented by discrete frequency-domain samples: ω ∈ {ω_1, ω_1 + 1, ..., ω_2}.
  • the matrix A(k, b) is determined according to the multichannel microphone input signals, Mic(k, ω), by the procedure illustrated in Fig. 9, and according to the following steps:
  • Input to the process is in the form of multichannel microphone input signals, Mic(k, ω), corresponding to M channels (Mic_1(k, ω), ..., Mic_M(k, ω)), representing the microphone input at time-block k and frequency range ω ∈ {ω_1, ω_1 + 1, ..., ω_2}
  • Mic(k, ω) is shown, 13.6, in Fig. 9 as input to the Covariance process.
  • the Covariance process, 71, first determines the [M × M] instantaneous covariance matrix: Cov′(k, ω) = Mic(k, ω) · Mic(k, ω)^H
  • x^H indicates the conjugate-transpose of a column vector, x
  • the x̄ operation represents the complex conjugate of x
  • the Covariance process, 71, then determines the time-smoothed covariance matrix, Cov(k, ω), 75, according to:
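  • The smoothing rule itself is not reproduced in this extraction; a common first-order (one-pole) smoother, consistent with the recursive form of Equation (14) below, is sketched here in Python with an assumed smoothing coefficient:

        import numpy as np

        def update_covariance(prev_cov, mic, alpha=0.9):
            # mic: [M] complex column of microphone samples at (k, w)
            inst = np.outer(mic, np.conj(mic))  # instantaneous [M x M] covariance Cov'(k, w)
            # one-pole time smoothing (the exact rule and alpha are assumptions)
            return alpha * prev_cov + (1.0 - alpha) * inst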
  • the Extract Characteristics process, 72, determines the delay-matrix, D″(k, ω), according to:
  • ω_Δ is chosen to be approximately ω_Δ ≈ π/(4Δ) radians per second, where Δ is the maximum expected group-delay difference between any two microphone input signals.
  • tr(D') represents the trace of the matrix D' .
  • a different matrix norm may be used instead of the Frobenius norm, e.g. an L_{2,1} norm or a max norm.
  • the [M × M] normalized band-characteristics matrix, D(k, b), will be a Hermitian matrix, as will be familiar to those of ordinary skill in the art. Hence, the information contained within this matrix may be represented in the form of M real elements on the diagonal and M(M − 1)/2 complex elements above the diagonal.
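  • A Python sketch of the Extract Characteristics steps, under stated assumptions (each element of D″ takes its magnitude from Cov(k, ω) and its phase from the cross-spectral phase difference at ω + ω_Δ versus ω − ω_Δ; normalization by the trace and then the Frobenius norm):

        import numpy as np

        def characteristics_vector(cov_plus, cov_minus, cov):
            # cov_plus, cov_minus, cov: [M x M] covariance matrices at
            # w + w_delta, w - w_delta and w respectively (assumed inputs)
            phase = np.angle(cov_plus * np.conj(cov_minus))  # group-delay phase term
            D2 = np.abs(cov) * np.exp(1j * phase)            # delay matrix D''
            D1 = D2 / np.trace(D2).real                      # trace normalization
            D = D1 / np.linalg.norm(D1, 'fro')               # Frobenius normalization
            M = D.shape[0]
            iu = np.triu_indices(M, k=1)
            # Hermitian matrix -> M real diagonal elements plus the real and
            # imaginary parts of the M(M-1)/2 elements above the diagonal
            return np.concatenate([np.diag(D).real, D[iu].real, D[iu].imag])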
  • the Determine Direction process, 73, is provided with the characteristics-vector, C(k, b), 76, as input, and determines the dominant direction of arrival unit-vector, u_b, 77, and a Steering parameter, s_b, 79, representative of the degree to which the microphone input signals appear to contain a single dominant direction of arrival.
  • the function V_b() refers to the function that determines u_b:
  • the Determine Matrix process, 74, determines the [N × M] mixing matrix, A(k, b), 22, as a function of the dominant direction of arrival, u_b, 77, the Steering parameter, s_b, 79, and the parameter, p_b, 78, according to the set of matrix-determining functions:
  • a_{n,m}(k, b) = F_{n,m,b}(u_b, s_b, p_b) (13), where the indices n and m correspond to output channel n and microphone input channel m, respectively, and where 1 ≤ n ≤ N and 1 ≤ m ≤ M.
  • Steps 2-3 are intended to determine the normalized covariance matrix, and may be summarized in the form of a single function, K(), according to:
  • Cov(k, ω) = K(Cov(k − 1, ω), Mic(k, ω)) (14), wherein the function, K(), determines the normalized covariance matrix according to the process detailed in Steps 2-3 above.
  • Steps 4-7 are intended to determine the characteristics-vector for one band, and may be summarized in the form of a single function, J_b(), according to:
  • the Determine Direction process, 73, first determines a direction vector, (x, y), for band b, according to a set of direction-estimating functions, G_{x,b}() and G_{y,b}(), and then determines the dominant direction of arrival unit-vector, u_b, and the Steering parameter, s_b, from (x, y), according to:
  • the dominant direction of arrival is specified as a 2-element unit-vector, u_b, representing the azimuth of arrival of the dominant acoustic component (as shown in Fig. 2), as defined in Equation (1).
  • alternatively, the Determine Direction process, 73, first determines a 3D direction vector, (x, y, z), according to a set of direction-estimating functions, G_{x,b}(), G_{y,b}() and G_{z,b}(), and then determines the dominant direction of arrival unit-vector, u_b, and the Steering parameter, s_b, from (x, y, z), according to:
  • in Equations (17) and (20), the vectors (x, y) and (x, y, z) are multiplied by a normalization factor. This normalization factor is also used to calculate the steering parameter s_b.
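  • A Python sketch of this normalization step (the clipping of s_b to [0, 1] is an added assumption for numerical safety, not taken from the text):

        import numpy as np

        def direction_and_steering(x, y, z=None):
            v = np.array([x, y] if z is None else [x, y, z])
            norm = np.linalg.norm(v)
            u = v / norm if norm > 0 else v      # dominant direction of arrival, unit vector u_b
            s = float(np.clip(norm, 0.0, 1.0))   # steering parameter s_b: near 1 for a single
                                                 # dominant source, near 0 for diffuse sound
            return u, s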
  • G_{x,b}(), G_{y,b}() and/or G_{z,b}() may be implemented as polynomial functions of the elements in C(k).
  • a 2nd-order polynomial may be constructed according to:
  • E^x_b represents a set of M × M polynomial coefficients for each band, b, used in the calculation of G_{x,b}(C(k)), where 1 ≤ j ≤ i ≤ M.
  • G_{y,b}(C(k)) may be calculated according to:
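  • A sketch of such a 2nd-order polynomial in Python (the exact term structure is an assumption; one coefficient E[i, j] per element pair of the characteristics vector, with j ≤ i):

        def poly2_estimate(E, c):
            # c: characteristics vector C(k, b); E: coefficient array for one band
            total = 0.0
            for i in range(len(c)):
                for j in range(i + 1):
                    total += E[i][j] * c[i] * c[j]  # pairwise product term
            return total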
  • the Determine Matrix process, 74, makes use of matrix-determining functions, F_{n,m,b}(u_b, s_b, p_b) (as per Equation (13)), that are formed by combining together a fixed matrix value, Q_{n,m,b}, and a steered matrix function, R_{n,m,b}(u), according to:
  • each steered matrix function, R_{n,m,b}(u_b), represents a polynomial function.
  • u_b is a 2-element vector
  • Equations (25) and (26) specify the behaviour of the matrix-determining functions, F_{n,m,b}(u_b, s_b, p_b).
  • Equation (13) may be re-written in matrix form as:
  • A(k, b) = F_b(u_b, s_b, p_b) (27)
  • Equation (29) may be interpreted as follows:
  • a mixing matrix is formed as the sum of a matrix Q, which is independent of the dominant direction of arrival, multiplied by a first weighting factor, and a matrix R(u), which varies for different vectors u representative of the dominant direction of arrival, multiplied by a second weighting factor.
  • the second weighting factor increases for an increase in the degree to which the multi-microphone input signal can be represented by a single direction of arrival, as represented by the steering strength parameter s
  • the first weighting factor decreases for an increase in the degree to which the multi-microphone input signal can be represented by a single direction of arrival, as represented by the steering strength parameter s.
  • the second weighting factor may be a monotonically increasing function of the steering strength parameter s, while the first weighting factor may be a monotonically decreasing function of the steering strength parameter s.
  • the second weighting factor is a linear function of the steering strength parameter with a positive slope, while the first weighting factor is a linear function of the steering strength parameter with a negative slope.
  • the weighting factors may optionally also depend on the parameter p_b, for example by multiplying the steering strength parameter s_b and the parameter p_b.
  • the R_b matrix dominates the mixing matrix if the soundfield is made up of only one source, so that the microphones are mixed to form a panned output signal. If the soundfield is diffuse, with no dominant direction of arrival, the Q_b matrix dominates the mixing matrix, and the microphones are mixed to spread the signals around the output channels.
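  • A minimal Python sketch of this combination, assuming the simple linear weights named above (other monotonic weightings would fit the same description):

        def determine_mixing_matrix(Q, R, u, s, p=1.0):
            # Q: [N x M] direction-independent matrix Q_b
            # R: callable returning the [N x M] steered matrix R_b(u)
            w = s * p                         # optional dependence on p_b
            return (1.0 - w) * Q + w * R(u)   # A(k, b) = w1 * Q_b + w2 * R_b(u_b)

    For a strongly steered tile (s near 1) the panned R term dominates; for a diffuse tile (s near 0) the spreading Q term dominates, exactly as described above.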
  • Conventional approaches, e.g. blind source separation techniques based on non-negative matrix factorization, try to separate all of the individual sound sources. However, when such techniques are used for diffuse soundfields, the quality of the audio output decreases.
  • the present approach exploits the fact that a human's ability to hear the location of sounds becomes quite poor when the soundfield is highly diffuse, and adapts the mixing matrix in dependence on the degree to which the multi-microphone input signal can be represented by a single direction of arrival. Therefore, sound quality is maintained for diffuse soundfields, while directionality is maintained for soundfields having a single dominant direction of arrival.
  • the mixing matrix, A(k, b), may be determined from the microphone input signals according to a set of functions, K(), J_b(), G_{x,b}(), G_{y,b}(), G_{z,b}() and R_b(), and the matrix Q_b.
  • the implementation of the functions G_{x,b}(), G_{y,b}() and G_{z,b}() may be determined from the acoustic behaviour of the microphone signals.
  • the function R_b() and the matrix Q_b may be determined from the acoustic behaviour of the microphone signals and characteristics of the multi-channel soundfield signals.
  • the function G_{z,b}() is omitted, as the direction of arrival unit-vector, u_b, may be a 2-element vector.
  • the behaviour of these functions is determined by first determining the multi-dimensional arrays u_a, C_{a,b} and A_{a,b} according to:
  • for a ∈ {1, ..., W}.
  • determine an estimated acoustic response signal, Mic_{a,m}(ω), for each microphone, being the estimated signal at each microphone from an acoustic impulse that is incident on the capture device from the direction represented by u_a.
  • the estimate of Mic_{a,m}(ω) may be derived from acoustic measurements, or from numerical simulation/estimation methods.
  • the function V_b(C(k, b)), as used in Equation (12), may be implemented by finding the candidate direction of arrival vector u_a according to:
  • V_b(C(k, b)) = u_a (30)
  • This procedure effectively determines the candidate direction of arrival vector u_a for which the corresponding candidate characteristics vector C_{a,b} matches most closely the actual characteristics vector C(k, b), in band b at a time corresponding to block k.
  • the function V_b(C(k, b)), as used in Equation (12), may be implemented by first evaluating the functions G_{x,b}(), G_{y,b}() and (in instances where the direction of arrival vector u_b is a 3D vector) G_{z,b}().
  • G_{x,b}() may be implemented as a polynomial according to Equation (22).
  • G_{x,b}() may be implemented as a second-order polynomial. This polynomial may be determined so as to provide an optimum approximation to: x_a ≈ G_{x,b}(C_{a,b}) (32) for all a ∈ {1, ..., W} (33)
  • G_{y,b}() and (in instances where the direction of arrival vector u_b is a 3D vector) G_{z,b}() may be determined by polynomial regression, so that the coefficients E^y_b and E^z_b may be determined to allow least-squares optimised approximations to y_a ≈ G_{y,b}(C_{a,b}) and z_a ≈ G_{z,b}(C_{a,b}), respectively.
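  • A Python sketch of this regression step (the quadratic feature expansion mirrors the 2nd-order polynomial form above; names are illustrative):

        import numpy as np

        def fit_direction_polynomial(C_candidates, targets):
            # C_candidates: [W, P] candidate characteristics vectors C_a,b (rows)
            # targets:      [W] matching coordinates, e.g. x_a (or y_a, z_a)
            iu = np.triu_indices(C_candidates.shape[1])       # pairs (i, j), j >= i
            feats = np.stack([C_candidates[:, i] * C_candidates[:, j]
                              for i, j in zip(*iu)], axis=1)  # pairwise products
            coeffs, *_ = np.linalg.lstsq(feats, targets, rcond=None)
            return iu, coeffs  # evaluate as sum(coeffs * c[i] * c[j]) over the pairs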
  • Equation (28) determines F_b(u_b, s_b, p_b) in terms of the matrix Q_b and the function R_b(u_b).
  • R_b(u_b) may be implemented according to:
  • This procedure effectively chooses the candidate mixing matrix A_{a,b} for band b that corresponds to the candidate direction of arrival vector u_a that is closest in direction to the estimated direction of arrival vector u_b.
  • the function R_b(u_b) may be implemented as a polynomial function in terms of the coordinates of the unit-vector, u_b, according to:
  • R_b(u_b) = P_{b,0} + P_{b,1} x_b + P_{b,2} y_b + P_{b,3} x_b² + P_{b,4} x_b y_b + P_{b,5} y_b² (36)
  • The choice of the polynomial coefficient matrices (P_{b,0}, ..., P_{b,5}) may be determined by polynomial regression, in order to achieve the least-squares error in the approximation: A_{a,b} ≈ R_b(u_a) for all a ∈ {1, ..., W} (37); this is equivalent to the least-squares minimisation of:
  • in one example, the matrix Q_b is determined according to the average value of A_{a,b}, according to:
  • in another example, the matrix Q_b is determined according to the average value of A_{a,b}, with an empirically defined scale-factor, β, according to:
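  • Both variants reduce to a simple average in Python (the symbol for the scale-factor is not reproduced cleanly in this extraction; beta below stands in for it):

        import numpy as np

        def fixed_matrix_Q(A_candidates, beta=1.0):
            # A_candidates: [W, N, M] candidate mixing matrices A_a,b
            return beta * A_candidates.mean(axis=0)  # Q_b = beta * average of A_a,b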
  • the matrix A is augmented with a second matrix, A', as shown in Fig. 8.
  • the outputs (for example 141 ... 149) are formed by combining the intermediate signals (151 ... 159) produced by the mixing matrix A, 23, with the intermediate signals (161 ... 169) produced by the mixing matrix A′, 26.
  • Matrix mixer 26 receives inputs from intermediate signals, for example 25, that are output from a decorrelate process, 24.
  • the decorrelation matrix, Q′_b, may be determined by a number of different methods.
  • the columns of the matrix, Q′_b, should be approximately orthogonal to each other, and each column of Q′_b should be approximately orthogonal to each column of Q_b.
  • the elements of Q′_b may be implemented by copying the elements of Q_b with alternate rows negated:
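  • A Python sketch of this alternate-row construction (the orthogonality it yields is approximate, holding when the energy in even and odd rows of each column is roughly balanced):

        import numpy as np

        def decorrelation_matrix(Q):
            Qp = Q.copy()
            Qp[1::2] = -Qp[1::2]  # negate every second row of Q_b
            # column m of Qp dotted with column m of Q is a sum of +/- q^2 terms,
            # which is near zero when even- and odd-row energy balances
            return Qp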
  • the time-smoothed covariance matrix, Cov(k, ω), represents 2nd-order statistical information derived from the microphone input signals.
  • Cov(k, ω) will be an [M × M] matrix.
  • Cov(k, ω)_{1,2} represents the covariance of microphone channel 1 compared to microphone channel 2.
  • this covariance element represents a complex frequency response (a function of ω).
  • a group-delay offset may exist between the signals in the two microphones, as per Fig. 3.
  • This group delay offset will result in a phase difference between the microphones that varies as a linear function of ω.
  • the group-delay between the microphone signals will be a function of the direction of arrival of the wave from the acoustic source.
  • We may therefore represent the group delay between microphones according to the approximation:
  • Equation (7) determines the delay-covariance matrix such that each element of the matrix has its magnitude taken from the magnitude of the time-smoothed covariance matrix
  • ω_Δ is chosen so that, for the expected range of group-delay differences between microphones (for all expected directions of arrival), the quantity arg(Cov(k, ω + ω_Δ)_{1,2} · conj(Cov(k, ω − ω_Δ)_{1,2})) will lie in the approximate range [−π/2, π/2]
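  • Under the sign convention assumed here (phase of Cov growing linearly with delay), the group delay between microphones 1 and 2 can be read off that quantity, as sketched in Python:

        import numpy as np

        def estimate_group_delay(cov_plus_12, cov_minus_12, w_delta):
            # cov_plus_12, cov_minus_12: the (1, 2) covariance element at
            # w + w_delta and w - w_delta; w_delta in radians per second
            phase = np.angle(cov_plus_12 * np.conj(cov_minus_12))
            return phase / (2.0 * w_delta)  # group delay in seconds, valid while
                                            # the phase stays within [-pi/2, pi/2]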
  • the diagonal entries of the delay-covariance matrix will be determined according to the amplitudes of the microphone input signals, without any group-delay information.
  • the group-delay information, as it relates to the relative delay between different microphones, is contained in the off-diagonal entries of the delay-covariance matrix.
  • the off-diagonal entries of the delay-covariance matrix may be determined according to any method whereby the delay between microphones is represented.
  • D″(k, ω) may be computed according to methods that include, but are not limited to, the following:
  • the components of the system 14 shown in FIGs. 6-8 and/or the system 21 shown in FIG. 9 may be implemented as hardware modules or software unit modules.
  • the system may be implemented partially or completely as software and/or in firmware, for example, implemented as a computer program product embodied in a computer readable medium.
  • the system may be implemented partially or completely based on hardware, for example, as an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on chip (SOC), a field programmable gate array (FPGA), and so forth.
  • Fig. 10 depicts a block diagram of an example computer system 1000 suitable for implementing example embodiments disclosed herein; such a computer system may be contained in, for example, the acoustic capture device 10 (e.g., a smart phone, tablet or the like) shown in Fig. 1.
  • the computer system 1000 includes a central processing unit (CPU) 1001 which is capable of performing various processes in accordance with a program stored in a read only memory (ROM) 1002 or a program loaded from a storage unit 1008 to a random access memory (RAM) 1003.
  • In the RAM 1003, data required when the CPU 1001 performs the various processes or the like is also stored as required.
  • the CPU 1001, the ROM 1002 and the RAM 1003 are connected to one another via a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004.
  • the following components are connected to the I/O interface 1005: an input unit 1006 including a keyboard, a mouse, or the like; an output unit 1007 including a display such as a cathode ray tube (CRT), a liquid crystal display (LCD), or the like, and a loudspeaker or the like; the storage unit 1008 including a hard disk or the like; and a communication unit 1009 including a network interface card such as a LAN card, a modem, or the like.
  • the communication unit 1009 performs a communication process via the network such as the internet.
  • a drive 1010 is also connected to the I/O interface 1005 as required.
  • a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1010 as required, so that a computer program read therefrom is installed into the storage unit 1008 as required.
  • example embodiments disclosed herein include a computer program product including a computer program tangibly embodied on a machine readable medium, the computer program including program code for performing the systems or methods.
  • the computer program may be downloaded and mounted from the network via the communication unit 1009, and/or installed from the removable medium 1011.
  • various example embodiments disclosed herein may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of the example embodiments disclosed herein are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it would be appreciated that the blocks, apparatus, systems, techniques or methods disclosed herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • example embodiments disclosed herein include a computer program product including a computer program tangibly embodied on a machine readable medium, the computer program containing program codes configured to carry out the methods as described above.
  • a machine readable medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine readable medium may be a machine readable signal medium or a machine readable storage medium.
  • a machine readable medium may include, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • Computer program code for carrying out methods disclosed herein may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on a computer, partly on the computer, as a standalone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server.
  • the program code may be distributed on specially-programmed devices which may be generally referred to herein as "modules".
  • modules may be written in any computer language and may be a portion of a monolithic code base, or may be developed in more discrete code portions, such as is typical in object-oriented computer languages.
  • the modules may be distributed across a plurality of computer platforms, servers, terminals, mobile devices and the like. A given module may even be implemented such that the described functions are performed by separate processors and/or computing hardware platforms.
  • circuitry refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s), or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • Various aspects of the present disclosure may be appreciated from the following enumerated example embodiments (EEEs):
  • EEE 1. A method for determining a multichannel audio output signal, composed of two or more output audio channels, from a multi-microphone input signal, composed of at least two microphone signals, comprising: determining a mixing matrix, based on characteristics of the multi-microphone input signal, wherein the multi-microphone input signal is mixed according to the mixing matrix to produce the multichannel audio output signal.
  • EEE 2. A method according to EEE 1, wherein determining the mixing matrix further comprises: determining a dominant direction of arrival and a steering strength parameter, based on characteristics of said multi-microphone input signal; and determining the mixing matrix, based on said dominant direction of arrival and said steering strength parameter.
  • EEE 3. A method according to EEE 1 or EEE 2, wherein the characteristics of the multi-microphone input signal include the relative amplitudes between one or more pairs of said microphone signals.
  • EEE 4. A method according to any of the previous EEEs, wherein said characteristics of said multi-microphone input signal include the relative group-delay between one or more pairs of said microphone signals.
  • EEE 5. A method according to any of the previous EEEs, wherein said matrix is modified as a function of time, according to characteristics of said multi-microphone input signal at various times.
  • EEE 6. A method according to any of the previous EEEs, wherein said matrix is modified as a function of frequency, according to characteristics of said multi-microphone input signal in various frequency bands.
  • EEE 7. A computer program product for processing an audio signal, comprising a computer program tangibly embodied on a machine readable medium, the computer program containing program code for performing the method according to any of EEEs 1-6.
  • EEE 8. A device comprising: a processing unit; and a memory storing instructions that, when executed by the processing unit, cause the device to perform the method according to any of EEEs 1-6.
  • EEE 9. An apparatus, comprising: circuitry adapted to cause the apparatus to at least: determine a mixing matrix, based on characteristics of a multi-microphone input signal, wherein the multi-microphone input signal is mixed according to the mixing matrix to produce a multichannel audio output signal.
  • EEE 10. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for causing performance of operations, said operations comprising: determining a mixing matrix, based on characteristics of a multi-microphone input signal, wherein the multi-microphone input signal is mixed according to the mixing matrix to produce a multichannel audio output signal.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Methods and systems are described which convert a multi-microphone input signal to a multichannel output signal making use of a time- and frequency-varying matrix. For each time and frequency tile, the matrix is derived as a function of a dominant direction of arrival and a steering strength parameter. Likewise, the dominant direction and steering strength parameter are derived from characteristics of the multi-microphone signals, where those characteristics include values representative of the inter-channel amplitude and group-delay differences.
PCT/US2017/018082 2016-02-18 2017-02-16 Processing of microphone signals for spatial playback WO2017143003A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/999,764 US11234072B2 (en) 2016-02-18 2017-02-16 Processing of microphone signals for spatial playback
US17/583,114 US11706564B2 (en) 2016-02-18 2022-01-24 Processing of microphone signals for spatial playback
US18/352,197 US20240015434A1 (en) 2016-02-18 2023-07-13 Processing of microphone signals for spatial playback

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662297055P 2016-02-18 2016-02-18
US62/297,055 2016-02-18
EP16169658.8 2016-05-13
EP16169658 2016-05-13

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/999,764 A-371-Of-International US11234072B2 (en) 2016-02-18 2017-02-16 Processing of microphone signals for spatial playback
US17/583,114 Continuation US11706564B2 (en) 2016-02-18 2022-01-24 Processing of microphone signals for spatial playback

Publications (1)

Publication Number Publication Date
WO2017143003A1 true WO2017143003A1 (fr) 2017-08-24

Family

ID=55969043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/018082 2016-02-18 2017-02-16 Processing of microphone signals for spatial playback WO2017143003A1 (fr)

Country Status (1)

Country Link
WO (1) WO2017143003A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007096808A1 (fr) * 2006-02-21 2007-08-30 Koninklijke Philips Electronics N.V. Audio encoding and decoding
WO2010019750A1 (fr) * 2008-08-14 2010-02-18 Dolby Laboratories Licensing Corporation Audio signal transformatting
EP2560161A1 (fr) * 2011-08-17 2013-02-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Optimal mixing matrices and usage of decorrelators in spatial audio processing
WO2014147442A1 (fr) * 2013-03-20 2014-09-25 Nokia Corporation Spatial audio apparatus

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CONSTANTINIDES A G ET AL: "Estimation of Direction of Arrival Using Information Theory", IEEE SIGNAL PROCESSING LETTERS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 12, no. 8, 1 August 2005 (2005-08-01), pages 561 - 564, XP011136206, ISSN: 1070-9908, DOI: 10.1109/LSP.2005.849546 *
EPAIN N ET AL: "Super-resolution sound field imaging with sub-space pre-processing", 2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP); VANCOUCER, BC; 26-31 MAY 2013, INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, PISCATAWAY, NJ, US, 26 May 2013 (2013-05-26), pages 350 - 354, XP032507935, ISSN: 1520-6149, [retrieved on 20131018], DOI: 10.1109/ICASSP.2013.6637667 *
HAOHAI SUN ET AL: "Optimal Higher Order Ambisonics Encoding With Predefined Constraints", IEEE TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, IEEE SERVICE CENTER, NEW YORK, NY, USA, vol. 20, no. 3, 1 March 2012 (2012-03-01), pages 742 - 754, XP011391644, ISSN: 1558-7916, DOI: 10.1109/TASL.2011.2164532 *
KARIM M IBRAHIM ET AL: "PRIMARY-AMBIENT EXTRACTION IN AUDIO SIGNALS USING ADAPTIVE WEIGHTING AND PRINCIPAL COMPONENT ANALYSIS", 13TH SOUND AND MUSIC COMPUTING CONFERENCE AND SUMMER SCHOOL, 31 August 2016 (2016-08-31), Hamburg, XP055366735, Retrieved from the Internet <URL:http://smcnetwork.org/system/files/SMC2016_submission_36.pdf> [retrieved on 20170424] *
NICOLAS EPAIN ET AL: "SPARSE RECOVERY METHOD FOR DEREVERBERATION", REVERB WORKSHOP 2014, 10 May 2014 (2014-05-10), XP055366746, Retrieved from the Internet <URL:https://reverb2014.dereverberation.com/workshop/slides/epain_reverb2014.pdf> [retrieved on 20170424] *
NICOLAS EPAIN ET AL: "SPARSE RECOVERY METHOD FOR DEREVERBERATION", REVERB WORKSHOP, 10 May 2014 (2014-05-10), XP055366745, Retrieved from the Internet <URL:https://reverb2014.dereverberation.com/workshop/reverb2014-papers/1569899015.pdf> [retrieved on 20170424] *
NIKUNEN JOONAS ET AL: "Direction of Arrival Based Spatial Covariance Model for Blind Sound Source Separation", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, IEEE, USA, vol. 22, no. 3, 1 March 2014 (2014-03-01), pages 727 - 739, XP011539739, ISSN: 2329-9290, [retrieved on 20140210], DOI: 10.1109/TASLP.2014.2303576 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749299A (zh) * 2017-09-28 2018-03-02 福州瑞芯微电子股份有限公司 Multi-audio output method and device
CN107749299B (zh) * 2017-09-28 2021-07-09 瑞芯微电子股份有限公司 Multi-audio output method and device
CN112567763A (zh) * 2018-05-09 2021-03-26 诺基亚技术有限公司 Apparatus, method and computer program for audio signal processing
US11457310B2 (en) 2018-05-09 2022-09-27 Nokia Technologies Oy Apparatus, method and computer program for audio signal processing
CN112567763B (zh) * 2018-05-09 2023-03-31 诺基亚技术有限公司 Apparatus and method for audio signal processing
US11950063B2 (en) 2018-05-09 2024-04-02 Nokia Technologies Oy Apparatus, method and computer program for audio signal processing
CN115426223A (zh) * 2022-08-10 2022-12-02 华中科技大学 Low-orbit satellite channel estimation and symbol detection method and system
CN115426223B (zh) * 2022-08-10 2024-04-23 华中科技大学 Low-orbit satellite channel estimation and symbol detection method and system

Similar Documents

Publication Publication Date Title
EP3320692B1 (fr) Spatial audio signal processing apparatus
US8180062B2 (en) Spatial sound zooming
US11832080B2 (en) Spatial audio parameters and associated spatial audio playback
EP2984852B1 (fr) Method and apparatus for recording spatial sound
US20240015434A1 (en) Processing of microphone signals for spatial playback
US20120082322A1 (en) Sound scene manipulation
US10694306B2 (en) Apparatus, method or computer program for generating a sound field description
EP2991382A1 (fr) Apparatus and method for processing sound signals
EP2600637A1 (fr) Apparatus and method for microphone positioning based on a spatial power density
US8041043B2 (en) Processing microphone generated signals to generate surround sound
WO2016172111A1 (fr) Processing audio data to compensate for partial hearing loss or an adverse hearing environment
EP3275208B1 (fr) Sub-band mixing of multiple microphones
US12022276B2 (en) Apparatus, method or computer program for processing a sound field representation in a spatial transform domain
US10798511B1 (en) Processing of audio signals for spatial audio
US20210099795A1 (en) Spatial Audio Capture
WO2017143003A1 (fr) Processing of microphone signals for spatial playback
EP2437517B1 (fr) Sound scene manipulation
KR101779731B1 (ko) Adaptive diffuse signal generation in an upmixer
US9706324B2 (en) Spatial object oriented audio apparatus
Delikaris-Manias et al. Parametric binaural rendering utilizing compact microphone arrays
Stefanakis et al. Foreground suppression for capturing and reproduction of crowded acoustic environments
Herzog et al. Signal-Dependent Mixing for Direction-Preserving Multichannel Noise Reduction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17706659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17706659

Country of ref document: EP

Kind code of ref document: A1