EP3844981B1 - Method for the spatial sound reproduction of a sound field that is audible in a position of a moving listener and system implementing such a method


Info

Publication number
EP3844981B1
Authority
EP
European Patent Office
Prior art keywords
listener
sub
region
loudspeakers
mic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP19778569.4A
Other languages
German (de)
French (fr)
Other versions
EP3844981A1 (en)
Inventor
Georges Roussel
Rozenn Nicol
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
Orange SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orange SA filed Critical Orange SA
Publication of EP3844981A1 publication Critical patent/EP3844981A1/en
Application granted granted Critical
Publication of EP3844981B1 publication Critical patent/EP3844981B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04S7/303: Tracking of listener position or orientation
    • H04S7/302: Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04R1/403: Arrangements for obtaining desired directional characteristics only, by combining a number of identical transducers (loudspeakers)
    • H04R3/005: Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H04S7/301: Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H04S2400/13: Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • H04S2420/01: Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • the invention is placed in the field of spatial audio and sound field control.
  • the method aims to restore at least one sound field in a zone, for a listener, depending on the position of the listener.
  • the process aims to restore the sound field by taking into account the movements of the listener.
  • the area is covered by an array of loudspeakers, powered by respective control signals to each emit a continuous audio signal.
  • a respective weight is applied to each loudspeaker control signal in order to reproduce the sound field according to the listener's position. From the weights, a set of filters is determined, each filter of the filter set corresponding to each loudspeaker. The signal to be distributed to the listener is then filtered by the filter set and broadcast by the loudspeaker corresponding to the filter.
  • the iterative methods used re-use the weights calculated at the previous iteration to calculate the new weights.
  • the set of filters therefore has a memory of previous iterations.
  • part of the sound field that was reproduced at the previous iteration (at the listener's old position) is missing from the new listener position. It is therefore no longer constrained, and the part of the weights that produced this previous reproduction is no longer useful but remains in memory.
  • the sound field reproduced at the previous position of the listener, at the previous iteration, is thus no longer useful for calculating the weights at the current position of the listener, at the current iteration, but remains in memory.
  • the present invention improves the situation.
  • the present method is therefore based directly on the movement of the listener to vary the forgetting factor at each iteration. This helps mitigate the memory effect due to the calculation of weights in previous iterations.
  • the precision of field restitution is greatly improved, while not requiring excessively expensive computing resources.
  • a plurality of points forming the respective positions of a plurality of virtual microphones is defined in the zone, in order to estimate a plurality of respective acoustic pressures in the zone. The estimation takes into account the respective weight applied to each loudspeaker, each weight including a forgetting factor, and the transfer functions specific to each loudspeaker at each virtual microphone, the plurality of points being centered on the position of the listener.
  • the sound pressure is estimated at a plurality of points in the area surrounding the listener.
  • This makes it possible to apply weights to each loudspeaker taking into account the differences in sound pressures that may occur at different points in the area.
  • the estimation of acoustic pressures is therefore carried out in a homogeneous and precise manner around the listener, which makes it possible to increase the precision of the method.
  • the method therefore makes it possible to reproduce different sound fields in the same area, using the same loudspeaker system, depending on the movement of the listener.
  • the sound field actually restored in the two sub-zones is evaluated so that at each movement of the listener, the sound pressure in each of the sub-zones actually reaches the target sound pressure.
  • the position of the listener can help determine the sub-zone in which the sound field is to be made audible.
  • the sub-zone in which the sound field is to be made inaudible is then dynamically defined each time the listener moves.
  • the forgetting factor is therefore calculated iteratively for each of the two sub-zones, so that the sound pressure in each of the sub-zones reaches its target sound pressure.
  • the position of the listener can make it possible to define the sub-zone in which the sound field is to be made inaudible.
  • the sub-zone in which the sound field is to be made audible being defined dynamically as complementary to the other sub-zone.
  • the forgetting factor is therefore calculated iteratively for each of the two sub-zones, so that the sound pressure in each of the sub-zones reaches its target sound pressure.
  • each sub-zone comprises at least one virtual microphone and two speakers, and preferably each sub-zone comprises at least ten virtual microphones and at least ten speakers.
  • the method is therefore capable of operating with a plurality of microphones and speakers.
  • a value of the forgetting factor increases if the listener moves and decreases if the listener does not move.
  • the forgetting factor λ(n) is defined as a function of: n, the current iteration; λ max, the maximum forgetting factor; α, a parameter defined by the designer; μ, an adaptation step; m, a variable defined as a function of a movement of the listener, having α as its maximum; and σ, a variable making it possible to adjust the speed of increase or decrease of the forgetting factor.
  • the forgetting factor is directly estimated as a function of a movement of the listener.
  • the forgetting factor depends on the distance traveled in each iteration by the listener, in other words on the speed of movement of the listener. A different forgetting factor can therefore be estimated for each listener. Variable values can also be adjusted during iterations to truly account for listener movement.
  • the forgetting factor is between 0 and 1.
  • the present invention also relates to a spatialized sound reproduction system according to claim 10.
  • the present invention also relates to a storage medium for a computer program according to claim 11.
  • the SYST system comprises an array of loudspeakers HP comprising N loudspeakers (HP 1 to HP N ), with N at least equal to 2, and preferably at least equal to 10.
  • the loudspeaker array HP covers a zone Z.
  • the loudspeakers HP are powered by respective control signals so as each to emit an audio signal continuously, with a view to spatialized sound diffusion of a chosen sound field in zone Z. More precisely, the chosen sound field is to be reproduced at a position a1 of a listener U.
  • the speakers can be defined by their position in the zone.
  • the position a1 of listener U can be obtained by means of a position sensor POS.
  • the area is additionally covered by MIC microphones.
  • the area is covered by a network of M MIC microphones, with M at least equal to 1 and preferably at least equal to 10.
  • the MIC microphones are virtual microphones. In the remainder of the description, the term “MIC microphone” is used. MIC microphones are identified by their position in zone Z.
  • the virtual microphones are defined as a function of the position a1 of the listener U in zone Z.
  • the virtual microphones MIC can be defined so as to surround the listener U.
  • the position of the virtual microphones MIC changes according to the position a1 of the listener U.
  • the microphone array MIC surrounds position a1 of listener U. Then, when listener U moves to position a2, the array of microphones MIC is redefined to surround the listener's position a2.
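The re-centering of the virtual microphone array on the listener can be sketched as follows. This is a minimal illustration only: the circular layout, the radius, and the microphone count are assumptions, since the description allows any geometry.

```python
import math

def virtual_mic_positions(listener_pos, num_mics=10, radius=0.5):
    """Place num_mics virtual microphones on a circle centered on the listener.

    The array is recomputed each time the listener moves, so the
    microphones always surround the current position (a1, then a2, ...).
    """
    cx, cy = listener_pos
    return [
        (cx + radius * math.cos(2 * math.pi * m / num_mics),
         cy + radius * math.sin(2 * math.pi * m / num_mics))
        for m in range(num_mics)
    ]

# Listener moves from a1 to a2 (arrow F): the array follows.
mics_a1 = virtual_mic_positions((0.0, 0.0))
mics_a2 = virtual_mic_positions((1.0, 2.0))
```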
  • the movement of the listener U is represented schematically by the arrow F.
  • the SYST system further comprises a TRAIT processing unit capable of implementing the steps of the process.
  • the TRAIT processing unit comprises in particular a memory, forming a storage medium for a computer program comprising portions of code for implementing the method described below with reference to the figures 2a and 2b .
  • the TRAIT processing unit further comprises a PROC processor capable of executing the code portions of the computer program.
  • the TRAIT processing unit receives, continuously and in real time, the position of the MIC microphones, the position of the listener U, the positions of each speaker HP, the audio signal S(U) to be reproduced for the listener U, and the target sound field P t to be reached at the listener's position.
  • the TRAIT processing unit also receives the estimated sound pressure P at the position of the listener U. From these data, the TRAIT processing unit calculates the FILT filter to be applied to the signal S in order to reproduce the target sound field P t .
  • the TRAIT processing unit outputs the filtered signals S(HP 1 ...HP N ) to be broadcast respectively on the speakers HP 1 to HP N.
  • Figures 2a and 2b illustrate the main steps of a method for reproducing a chosen sound field at the position of a listener as the listener moves.
  • the steps of the process are implemented by the TRAIT processing unit continuously and in real time.
  • step S1 the position of the listener U in the area is obtained by means of a position sensor. From this geolocation data, a network of virtual microphones MIC is defined in step S2.
  • the network of virtual microphones MIC can take any geometric shape such as a square, a circle, a rectangle, etc.
  • the network of virtual microphones MIC can be centered around the position of the listener U.
  • the network of virtual microphones MIC defines, for example, a perimeter of a few tens of centimeters to a few tens of meters around the listener U.
  • the network of virtual microphones MIC comprises at least two virtual microphones, and preferably at least ten virtual microphones. The number of virtual microphones as well as their arrangement define limits in the quality of restitution of the zone.
  • step S3 the position of each HP speaker is determined.
  • the zone includes a speaker array comprising at least two HP speakers.
  • the speaker network includes around ten HP speakers. The HP speakers can be distributed across the area so that the entire area is covered by the speakers.
  • in step S4, a distance between each loudspeaker HP / microphone MIC pair is calculated. This makes it possible to calculate each of the transfer functions Ftransf, for each loudspeaker/microphone pair, in step S5.
  • the superscript T denotes the transpose operator.
  • in step S6, the acoustic pressure P is determined at the position of the listener U. More precisely, the acoustic pressure P is determined within the perimeter defined by the network of virtual microphones MIC. Even more precisely, the acoustic pressure P is determined at each virtual microphone.
  • the sound pressure P is the sound pressure from the signals broadcast by the speakers in the area.
  • the sound pressure P is determined from the transfer functions Ftransf, calculated in step S5, and a weight applied to the control signals supplying each loudspeaker.
  • the initial weight applied to the control signals of each of the loudspeakers is equal to zero. This corresponds to the weight applied to the first iteration. Then, with each new iteration, the weight applied to the control signals tends to vary, as described below.
  • the acoustic pressure P includes all of the acoustic pressures determined at each of the positions of the virtual microphones.
  • the sound pressure estimated at the listener's position U is more representative. This makes it possible to obtain a homogeneous result at the end of the process.
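In matrix form, the pressure estimate at the M virtual microphones is the product of the transfer-function matrix and the current loudspeaker weights. This is a sketch; the variable names are illustrative, not the patent's notation.

```python
import numpy as np

def estimate_pressure(G, w):
    """Acoustic pressure estimated at each virtual microphone.

    G : (M, N) complex matrix of transfer functions Ftransf
        (microphone m, loudspeaker l)
    w : (N,) complex weights applied to the loudspeaker control signals
    Returns the (M,) vector of estimated pressures P.
    """
    return G @ w

# At the first iteration the weights are initialized to zero,
# so the estimated pressure is zero at every virtual microphone.
M, N = 10, 10
G = np.ones((M, N), dtype=complex)
P = estimate_pressure(G, np.zeros(N, dtype=complex))
```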
  • Step S7 makes it possible to define the value of the target acoustic pressure Pt at the position of the listener U. More precisely, the value of the target acoustic pressure Pt is initialized at this step. The target sound pressure Pt can be chosen by the designer. It is then transmitted to the TRAIT processing unit in the form of the vector defined above.
  • step S8 the error between the target pressure Pt and the estimated pressure P at the listener's position U is calculated.
  • the error may be due to the fact that an adaptation step μ is applied so that the target pressure Pt is not reached immediately.
  • the target pressure Pt is reached after a certain number of iterations of the process. This makes it possible to minimize the computational resources necessary to reach the target pressure at the position of the listener U. This also ensures the stability of the algorithm.
  • the adaptation step μ is also chosen so that the error calculated in step S8 has a small value, in order to stabilize the filter.
  • in step S12, the forgetting factor λ(n) is calculated in order to calculate the weights to be applied to each loudspeaker control signal.
  • the forgetting factor λ(n) has two roles. On the one hand, it makes it possible to regularize the problem; in other words, it prevents the process from diverging when it is in a stationary state.
  • on the other hand, the forgetting factor λ(n) makes it possible to attenuate the weights calculated in previous iterations,
  • so that previous weights do not influence future weights.
  • the forgetting factor λ(n) is determined based directly on a possible movement of the listener. This calculation is illustrated in steps S9 to S11.
  • step S9 the position of the listener in previous iterations is recovered. For example, it is possible to recover the position of the listener in all previous iterations. Alternatively, it is possible to recover the position of the listener only for part of the previous iterations, for example the last ten or the last hundred iterations.
  • a movement speed of the listener is calculated in step S10. Movement speed can be calculated in meters per iteration. The listener's speed may be zero.
  • in step S11, the forgetting factor λ(n) is calculated as a function of: λ, the forgetting factor; n, the current iteration; λ max, the maximum forgetting factor; α, a parameter defined by the designer; μ, the adaptation step; m, a variable defined as a function of a displacement of the listener, having α as its maximum; and σ, a variable making it possible to adjust the speed of increase or decrease of the forgetting factor.
  • the forgetting factor λ is bounded between 0 and λ max. According to this definition, λ max corresponds to a maximum percentage of the weights to be forgotten between each iteration.
  • m varies during the iterations. It is chosen such that, if the listener moves, the forgetting factor increases; when there is no movement, it decreases. In other words, when the listener's speed is positive the forgetting factor increases, and when the listener's speed is zero it decreases.
  • the variable σ mainly influences the speed of convergence of the process. In other words, it makes it possible to choose the number of iterations after which the maximum value λ max and/or the minimum value of the forgetting factor is reached.
  • the variables l u and l d correspond respectively to a rise step and a fall step of the forgetting factor. They are defined according to the speed of movement of the listener and/or according to a modification of the sound field chosen to be reproduced.
  • the rise step l u has a greater value if the previous weights are to be forgotten quickly during movement (for example in the case where the speed of movement of the listener is high).
  • the descent step l d has a greater value if the previous weights are to be completely forgotten at the end of a movement by the listener.
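The behaviour above can be read as a bounded accumulator m(n), driven upward by the rise step l u while the listener moves and downward by the fall step l d when the listener is still, from which λ(n) is derived. The patent's exact formula is not reproduced here; in particular, the linear mapping from m to λ below (and the omission of σ) is an illustrative assumption.

```python
def update_forgetting_factor(m_prev, moving, l_u, l_d, alpha, lambda_max):
    """One iteration of the movement-driven forgetting factor.

    m grows by the rise step l_u when the listener moves and shrinks by
    the fall step l_d when the listener is still; it is bounded by [0, alpha].
    lambda stays within [0, lambda_max], as the method requires.
    ASSUMPTION: the mapping lambda = lambda_max * m / alpha is illustrative.
    """
    m = min(m_prev + l_u, alpha) if moving else max(m_prev - l_d, 0.0)
    return m, lambda_max * m / alpha

# Moving listener: the factor rises toward lambda_max;
# stationary listener: it decays back toward zero.
m, lam = 0.0, 0.0
for _ in range(5):
    m, lam = update_forgetting_factor(m, True, l_u=0.2, l_d=0.1,
                                      alpha=0.5, lambda_max=0.05)
lam_moving = lam
for _ in range(10):
    m, lam = update_forgetting_factor(m, False, l_u=0.2, l_d=0.1,
                                      alpha=0.5, lambda_max=0.05)
lam_still = lam
```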
  • in step S12, the forgetting factor λ is modified if necessary, depending on the result of the calculation of step S11.
  • the calculation and modification of the forgetting factor in step S12 is used to calculate the weights to be applied to the control signals of the HP speakers. More precisely, at the first iteration, the weights are initialized to zero (step S13): each loudspeaker broadcasts an unweighted control signal. Then, at each iteration, the value of the weights varies according to the error and the forgetting factor (step S14). The speakers then broadcast a weighted control signal, which can differ at each new iteration. This modification of the control signals explains in particular why the acoustic pressure P estimated at the position of the listener U can be different at each iteration.
  • with μ(n) the adaptation step, which can vary at each iteration, and λ(n) the forgetting factor, which can also vary.
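A leaky-LMS-style update is consistent with the description: the error between target and estimated pressure drives the correction via the adaptation step, while the forgetting factor attenuates the weights carried over from previous iterations. The exact update rule of the patent is not reproduced, so the form below is an assumption.

```python
import numpy as np

def update_weights(w, G, p_target, mu, lam):
    """One iteration of the weight update (illustrative leaky-LMS form).

    Previous weights are attenuated by (1 - lam): the forgetting factor
    progressively erases contributions computed at old listener positions.
    The pressure error at the virtual microphones, scaled by the
    adaptation step mu, drives the correction.
    """
    e = p_target - G @ w                     # error at the virtual microphones
    return (1.0 - lam) * w + mu * (G.conj().T @ e)

rng = np.random.default_rng(0)
G = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))
p_t = rng.standard_normal(10) + 1j * rng.standard_normal(10)
w = np.zeros(10, dtype=complex)              # weights start at zero (step S13)
for _ in range(200):
    w = update_weights(w, G, p_t, mu=0.01, lam=0.0)
residual = np.linalg.norm(p_t - G @ w)       # error shrinks over the iterations
```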
  • step S15 the FILT filters to be applied to the speakers are calculated. For example, one filter per speaker is calculated. There can therefore be as many filters as speakers.
  • to obtain filters in the time domain from the weights calculated in the previous step, it is possible to mirror the weights calculated in the frequency domain by taking their complex conjugate (Hermitian symmetry). An inverse Fourier transform is then performed to obtain the filters in the time domain.
  • the calculated filters may not respect the principle of causality. A time shift of the filter, corresponding for example to half the filter length, can be carried out. Thus, a plurality of filters, for example one filter per loudspeaker, is obtained.
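The frequency-to-time conversion described above (conjugate-symmetric spectrum, inverse FFT, then a half-length shift to restore causality) can be sketched as follows; the half-spectrum values are illustrative.

```python
import numpy as np

def freq_weights_to_filter(w_half):
    """Build a real, causal FIR filter from half-spectrum weights.

    w_half : complex weights for bins 0..K-1 (positive frequencies,
             with a real-valued Nyquist bin).
    The negative-frequency bins are filled with the complex conjugate
    (Hermitian symmetry), an inverse FFT yields a real impulse response,
    and a circular shift of half the filter length restores causality.
    """
    spectrum = np.concatenate([w_half, np.conj(w_half[-2:0:-1])])
    h = np.real(np.fft.ifft(spectrum))
    return np.roll(h, len(h) // 2)

w_half = np.array([1.0, 0.5 - 0.2j, 0.1])   # illustrative weights for one speaker
h = freq_weights_to_filter(w_half)
```

The circular shift only changes the phase of the filter's spectrum, so the magnitude response defined by the weights is preserved.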
  • step S16 the audio signal to be broadcast to the listener is obtained. It is then possible to carry out real-time filtering of the audio signal S(U) to broadcast the signal to the speakers.
  • the signal S(U) is filtered in step S17 by the filters calculated in step S15 and broadcast by the loudspeaker corresponding to the filter in steps S18 and S19.
  • the FILT filters are calculated as a function of the filtered signals S(HP 1 ,...,HP N ), weighted at the previous iteration and broadcast by the speakers, as perceived by the network of microphones.
  • the FILT filters are applied to the signal S(U) to obtain new control signals S(HP 1 ,...,HP N ) to be broadcast respectively to each loudspeaker of the loudspeaker network.
  • step S6 the sound pressure at the listener's position is determined.
  • the HP loudspeaker network covers an area comprising a first sub-zone SZ1 and a second sub-zone SZ2.
  • the HP speakers are powered by respective control signals to each emit an audio signal continuously, with a view to spatialized sound diffusion of a chosen sound field.
  • the chosen sound field is to be made audible in one of the sub-zones, and to be made inaudible in the other sub-zone.
  • the chosen sound field is audible in the first subzone SZ1.
  • the chosen sound field must be made inaudible in the second sub-zone SZ2.
  • Speakers can be defined by their position in the zone.
  • Each subzone SZ can be defined by the position of the listener U. It is then possible to define, based on the geolocation data of the listener, the first subzone SZ1, in which the listener U hears the chosen sound field.
  • the SZ1 subzone has, for example, predefined dimensions.
  • the first sub-zone can correspond to a surface of a few tens of centimeters to a few tens of meters, of which the listener U is the center.
  • the second subzone SZ2, in which the chosen sound field is to be made inaudible can be defined as the complementary subzone.
  • the position of the listener U can define, in the same manner as described above, the second subzone SZ2.
  • the first subzone SZ1 is defined as complementary to the second subzone SZ2.
  • part of the microphone network MIC covers the first sub-zone SZ1 while the other part covers the second sub-zone SZ2.
  • Each subzone includes at least one virtual microphone.
  • the area is covered by M microphones MIC 1 to MIC M .
  • the first sub-zone is covered by the microphones MIC 1 to MIC N , with N less than M.
  • the second sub-zone is covered by the microphones MIC N+1 to MIC M.
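With the microphones indexed as above, assigning the array to the two sub-zones is a simple partition. A minimal sketch, with illustrative names:

```python
def split_microphones(mics, n_first):
    """Partition the microphone array: MIC_1..MIC_N cover the first
    sub-zone SZ1, MIC_{N+1}..MIC_M cover the second sub-zone SZ2."""
    return mics[:n_first], mics[n_first:]

mics = [f"MIC_{i}" for i in range(1, 11)]   # M = 10 microphones
sz1, sz2 = split_microphones(mics, 4)       # N = 4, with N < M
```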
  • the sub-zones being defined according to the position of the listener, they evolve as the listener moves.
  • the position of the virtual microphones changes in the same way.
  • the first subzone SZ1 is defined by the position a1 of the listener U (represented in solid lines).
  • the MIC microphone array is defined to cover the first sub-zone SZ1.
  • the second subzone SZ2 is complementary to the first subzone SZ1.
  • the arrow F illustrates a movement of the listener U towards a position a2.
  • the first subzone SZ1 is then redefined around the listener U (in dotted lines).
  • the MIC microphone array is redefined to cover the new first subzone SZ1.
  • the rest of the zone represents the new second subzone SZ2.
  • the zone that initially formed the first subzone SZ1, defined by the position a1 of the listener, thus falls within the new second subzone SZ2.
  • the TRAIT processing unit receives as input the position of the microphones MIC, the geolocation data of the listener U, the positions of each loudspeaker HP, the audio signal to be reproduced S(U) intended for the listener U and the target sound fields Pt 1 , Pt 2 to be achieved in each sub-zone. From this data, the TRAIT processing unit calculates the FILT filter to be applied to the signal S(U) in order to restore the target sound fields Pt 1 , Pt 2 in the sub-zones. The TRAIT processing unit also receives the acoustic pressures P 1 , P 2 estimated in each of the sub-zones. The TRAIT processing unit outputs the filtered signals S(HP 1 ...HP N ) to be broadcast respectively on the speakers HP 1 to HP N.
  • Figures 4a and 4b illustrate the main steps of the method according to the invention.
  • the steps of the process are implemented by the TRAIT processing unit continuously and in real time.
  • the purpose of the method is to make the chosen sound field inaudible in one of the sub-zones, for example the second sub-zone SZ2, while following the movement of a listener whose position defines the sub-zones.
  • the method is based on an estimation of acoustic pressures in each of the sub-zones, so as to apply a desired level of sound contrast between the two sub-zones.
  • the audio signal S(U) is filtered as a function of the estimated acoustic pressures and the sound contrast level to obtain the control signals S(HP 1 ...HP N ) to be broadcast on the speakers.
  • step S20 the position of the listener U is determined, for example by means of a position sensor POS. From this position, the two sub-zones SZ1, SZ2 are defined.
  • the first subzone corresponds to the position of the listener U.
  • the first subzone SZ1 is for example defined as being an area of a few tens of centimeters to a few tens of meters in circumference, of which the listener U is the center.
  • the second subzone SZ2 can be defined as being complementary to the first subzone SZ1.
  • alternatively, it is the second subzone SZ2 that is defined by the position of the listener, the first subzone SZ1 then being complementary to the second subzone SZ2.
  • step S21 the network of MIC microphones is defined, at least one microphone covering each of the sub-zones SZ1, SZ2.
  • step S22 the position of each HP speaker is determined, as described above with reference to the figures 2a and 2b .
  • in step S23, a distance between each loudspeaker HP / microphone MIC pair is calculated. This makes it possible to calculate each of the transfer functions Ftransf, for each loudspeaker/microphone pair, in step S24.
  • the superscript T denotes the transpose operator.
  • G ml = (jρck / 4πR ml ) e^(−jkR ml ), with R ml the distance between a loudspeaker and microphone pair, k the wave number, ρ the density of the air and c the speed of sound.
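This free-field monopole transfer function can be evaluated directly. The sketch below assumes SI units; the default air density and speed of sound are standard values, not taken from the patent.

```python
import cmath

def transfer_function(R_ml, k, rho=1.204, c=343.0):
    """Monopole transfer function G_ml = (j*rho*c*k / (4*pi*R_ml)) * exp(-j*k*R_ml).

    R_ml : distance between loudspeaker l and microphone m (meters)
    k    : wave number (rad/m)
    rho  : density of the air (kg/m^3, assumed default)
    c    : speed of sound (m/s, assumed default)
    """
    return (1j * rho * c * k / (4 * cmath.pi * R_ml)) * cmath.exp(-1j * k * R_ml)

# Example: 1 kHz tone, loudspeaker/microphone distance of 2 m.
G = transfer_function(R_ml=2.0, k=2 * cmath.pi * 1000 / 343.0)
```

The magnitude decays as 1/R_ml (spherical spreading) and the exponential term carries the propagation delay as a phase.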
  • step S25 the acoustic pressures P 1 and P 2 are determined respectively in the first sub-zone SZ1 and in the second sub-zone SZ2.
  • the acoustic pressure P 1 in the first sub-zone SZ1 can be the acoustic pressure resulting from the signals broadcast by the speakers in the first sub-zone.
  • the acoustic pressure P 2 in the second sub-zone, in which the sound signals are to be made inaudible, can correspond to the induced acoustic pressure resulting from the signals broadcast by the loudspeakers supplied by the control signals associated with the pressure P 1 induced in the first subzone.
  • the acoustic pressures P 1 , P 2 are determined from the transfer functions Ftransf calculated in step S24, and an initial weight applied to the control signals of each loudspeaker.
  • the initial weight applied to the control signals of each of the loudspeakers is equal to zero. Then, the weight applied to the control signals tends to vary with each iteration, as described below.
  • the acoustic pressures P 1 , P 2 each comprise all of the acoustic pressures determined at each of the positions of the virtual microphones.
  • the estimated sound pressure in the sub-zones is more representative. This makes it possible to obtain a homogeneous result at the end of the process.
  • alternatively, a single acoustic pressure P 1 , P 2 is estimated at one position for the first sub-zone SZ1 and for the second sub-zone SZ2, respectively. This makes it possible to limit the number of calculations, and therefore to reduce the processing time and improve the responsiveness of the system.
  • step S26 the sound levels L 1 and L 2 are determined respectively in the first sub-zone SZ1 and in the second sub-zone SZ2.
  • the sound levels L 1 and L 2 are determined at each position of the MIC microphones.
  • This step makes it possible to convert the values of the estimated sound pressures P 1 , P 2 into measurable values in decibels. In this way, the sound contrast between the first and second sub-zones can be calculated.
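Converting the pressure estimates to levels and taking the difference gives the estimated contrast. In this sketch, the 20 µPa reference pressure and the mean-over-microphones averaging are assumptions, not specified by the description.

```python
import math

P_REF = 20e-6  # reference pressure in pascals (20 micropascals, assumed)

def sound_level_db(p):
    """Sound level in decibels of a (possibly complex) pressure estimate."""
    return 20.0 * math.log10(abs(p) / P_REF)

def contrast(p_sz1, p_sz2):
    """Estimated sound contrast: mean level L1 in SZ1 minus mean level L2 in SZ2."""
    L1 = sum(sound_level_db(p) for p in p_sz1) / len(p_sz1)
    L2 = sum(sound_level_db(p) for p in p_sz2) / len(p_sz2)
    return L1 - L2

# SZ2 pressures are one tenth of SZ1 pressures: a 20 dB contrast.
C_est = contrast([1.0, 1.0], [0.1, 0.1])
```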
  • step S27 a desired sound contrast level C C between the first sub-zone and the second sub-zone is defined.
  • the desired sound contrast C C between the first sub-zone SZ1 and the second sub-zone SZ2 is previously defined by a designer according to the chosen sound field and/or the perception of a listener U.
  • step S28 the difference between the estimated sound contrast between the two sub-zones and the desired sound contrast C C is calculated. From this difference, an attenuation coefficient can be calculated.
  • the attenuation coefficient is calculated and applied to the estimated sound pressure P 2 in the second sub-zone in step S29. More precisely, an attenuation coefficient is calculated and applied to each of the estimated acoustic pressures P 2 in each of the positions of the microphones MIC of the second sub-zone SZ2.
  • the target sound pressure Pt 2 in the second sub-zone then takes the value of the attenuated sound pressure P 2 of the second sub-zone.
  • This coefficient is determined by the amplitude of the acoustic pressure to be given to each microphone so that the sound level in the second sub-zone is homogeneous.
  • if the difference in contrast is close to zero, the coefficient is close to 1. This means that the sound pressure estimated at this microphone corresponds to the target pressure value in the second subzone.
  • the principle is therefore to use the pressure field present in the second sub-zone, which is induced by the acoustic pressure in the first sub-zone, and then to attenuate or amplify the individual acoustic pressure values estimated at each microphone, so that they match the target sound field in the second sub-zone across all microphones.
  • the attenuation coefficients form a vector [β 1 , ..., β m , ..., β M ] T .
  • this coefficient is calculated at each iteration and can therefore change; it can thus be written in the form β(n).
  • a single attenuation coefficient is calculated and applied to the sound pressure P 2 .
  • the attenuation coefficients are calculated so as to meet the contrast criterion defined by the designer.
  • the attenuation coefficient is defined so that the difference between the sound contrast between the two sub-zones SZ1, SZ2 and the desired sound contrast C C is close to zero.
  • Steps S30 to S32 make it possible to define the value of the target acoustic pressures Pt 1 , Pt 2 in the first and second sub-zones SZ1, SZ2.
  • Step S30 includes the initialization of the target acoustic pressures Pt 1 , Pt 2 , respectively in the first and second sub-zones SZ1, SZ2.
  • the target sound pressures Pt 1 , Pt 2 characterize the target sound field to be diffused in the sub-zones.
  • the target sound pressure Pt 1 in the first sub-zone SZ1 is defined as being a target pressure Pt 1 , chosen by the designer. More precisely, the target pressure Pt 1 in the first sub-zone SZ1 is greater than zero, so that the target sound field is audible in this first sub-zone.
  • the target sound pressure Pt 2 in the second sub-zone is initialized at zero.
  • the target pressures Pt 1 , Pt 2 are then transmitted to the processing unit TRAIT in step S31, in the form of a vector Pt.
  • in step S32, new values are assigned to the target pressures Pt 1 , Pt 2 determined in the previous iteration.
  • the value of the target pressure Pt 1 in the first sub-zone is that defined in step S30 by the designer. The designer can change this value at any time.
  • the target sound pressure Pt 2 in the second sub-zone takes the value of the attenuated sound pressure P 2 (step S29). This makes it possible, at each iteration, to redefine the target sound field to be reproduced in the second sub-zone, taking into account the listener's perception and the loudspeaker control signals.
  • the target sound pressure Pt 2 of the second sub-zone is only equal to zero during the first iteration. Indeed, as soon as the speakers broadcast a signal, a sound field is perceived in the first sub-zone, but also in the second sub-zone.
  • the target pressure Pt 2 in the second subzone is calculated as follows.
  • the sound pressure P 2 estimated in the second sub-zone is calculated. This sound pressure corresponds to the sound pressure induced in the second sub-zone by the radiation from the speakers in the first sub-zone.
  • P 2 ( ⁇ , n ) G 2 ( ⁇ , n ) q ( ⁇ , n ), with G 2 ( ⁇ , n ) the matrix of transfer functions in the second subzone at iteration n .
  • in step S33, the error between the target pressure Pt 2 and the estimated pressure P 2 in the second sub-zone is calculated.
  • this error arises because an adaptation step µ is applied, so that the target pressure Pt 2 is not reached immediately.
  • the target pressure Pt 2 is reached after a certain number of iterations of the process. This makes it possible to minimize the calculation resources necessary to reach the target pressure Pt 2 in the second sub-zone SZ2. This also ensures the stability of the algorithm.
  • the adaptation step ⁇ is also chosen so that the error calculated in step S33 has a small value, in order to stabilize the filter.
  • the forgetting factor ⁇ (n) is then calculated in order to calculate the weights to be applied to each loudspeaker control signal.
  • the forgetting factor ⁇ (n) makes it possible to regularize the problem and to attenuate the weights calculated in previous iterations. Thus, when the listener moves, previous weights do not influence future weights.
  • the forgetting factor ⁇ (n) is determined based directly on a possible movement of the listener. This calculation is illustrated in steps S34 to S36.
  • in step S34, the position of the listener in previous iterations is retrieved. For example, it is possible to retrieve the position of the listener in all previous iterations. Alternatively, it is possible to retrieve the position of the listener only for some of the previous iterations, for example the last ten or the last hundred iterations.
  • a movement speed of the listener is calculated in step S35. The movement speed can be expressed in meters per iteration. The listener's speed may be zero.
  • in step S36, the forgetting factor γ(n) is calculated according to the formula described above.
  • in step S37, the forgetting factor γ(n) is modified if necessary, depending on the calculation result of step S36.
  • The calculation and modification of the forgetting factor in step S37 is used to calculate the weights to be applied to the control signals of the loudspeakers HP. More precisely, at the first iteration the weights are initialized to zero (step S38). Each loudspeaker broadcasts an unweighted control signal. Then, at each iteration, the value of the weights varies according to the error and the forgetting factor (step S39). The loudspeakers then broadcast the control signals thus weighted.
  • the FILT filters to be applied to the speakers are then determined in step S40. For example, a filter per HP speaker is calculated. There can therefore be as many filters as speakers.
  • the type of filters applied to each speaker includes, for example, an inverse Fourier transform.
  • Step S41 is an initialization step, implemented only in the first iteration of the method.
  • the audio signal to be reproduced, S(U), is intended for the listener U.
  • the FILT filters are applied to the signal S(U), in order to obtain N filtered control signals S(HP 1 ,...,HP N ) to be broadcast respectively by the loudspeakers (HP 1 ,...,HP N ) in step S43.
  • the control signals S(HP 1 ,...,HP N ) are broadcast respectively by each loudspeaker (HP 1 ,...,HP N ) of the loudspeaker network in step S44.
  • The loudspeakers HP broadcast the control signals continuously.
  • the FILT filters are calculated based on the signals S(HP 1 ,...,HP N ) filtered in the previous iteration and broadcast by the speakers, as perceived by the microphone network.
  • FILT filters are applied to the S(U) signal to obtain new control signals S(HP 1 ,...,HP N ) to be broadcast respectively on each loudspeaker of the loudspeaker network.
  • in step S35, the acoustic pressures P 1 , P 2 of the two sub-zones SZ1, SZ2 are estimated.
  • the method can be implemented for a plurality of listeners U 1 to U N.
  • audio signals S(U 1 ),...,S(U N ) can be provided respectively for each listener.
  • the steps of the method can be implemented for each listener, so that the sound field chosen for each listener is returned to them in their position, and taking into account their movements.
  • a plurality of forgetting factors can be calculated for each of the listeners.
  • the chosen sound field is a first sound field, at least a second chosen sound field being broadcast by the HP loudspeaker network.
  • the second chosen sound field is audible in the second sub-zone for a second listener and is to be made inaudible in the first sub-zone for a first listener.
  • the loudspeakers are powered by the first control signals to each emit a continuous audio signal corresponding to the first chosen sound field, and are also powered by second control signals to each emit a continuous audio signal corresponding to the second chosen sound field.
  • the steps of the method as described above can be applied to the first sub-zone SZ1, so that the second chosen sound field is made inaudible in the first sub-zone SZ1 taking into account the movements of the two listeners.
  • the first and second sub-zones are not complementary.
  • a first subzone can be defined with respect to a first listener U1 and a second subzone can be defined with respect to a second listener U2.
  • the sound field is to be made audible in the first sub-zone and inaudible in the second sub-zone.
  • the sound field in the remainder of the area may not be controlled.
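The contrast-handling steps S27 to S29 above can be sketched in a few lines. This is a minimal illustration rather than the patented implementation: it uses the single-attenuation-coefficient variant mentioned above, assumes the contrast is measured in dB from mean squared pressures at the virtual microphones, and the function names are hypothetical.

```python
import numpy as np

def contrast_db(p1, p2):
    """Acoustic contrast (dB) between the mean squared pressures of two sub-zones."""
    return 10.0 * np.log10(np.mean(np.abs(p1) ** 2) / np.mean(np.abs(p2) ** 2))

def target_pressure_second_zone(p2, p1, c_desired_db):
    """Attenuate the pressures estimated in the second sub-zone so that the
    contrast with the first sub-zone approaches the desired level (step S29)."""
    c_est = contrast_db(p1, p2)       # estimated contrast between the sub-zones
    delta_db = c_desired_db - c_est   # step S28: contrast shortfall
    lam = 10.0 ** (-delta_db / 20.0)  # single attenuation coefficient variant
    return lam * p2                   # attenuated pressures become the target Pt2
```

With per-microphone coefficients λ 1 ,...,λ M the same idea would instead equalize the level across the microphones of the second sub-zone.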


Description

The invention lies in the field of spatial audio and sound field control. The purpose of the method is to reproduce at least one sound field in a zone, for a listener, as a function of the position of the listener. In particular, the method aims to reproduce the sound field while taking into account the movements of the listener.

The zone is covered by an array of loudspeakers, fed by respective control signals so that each emits a continuous audio signal. A respective weight is applied to each loudspeaker control signal in order to reproduce the sound field according to the listener's position. From the weights, a set of filters is determined, each filter of the set corresponding to one loudspeaker. The signal to be delivered to the listener is then filtered by the filter set and broadcast by the loudspeaker corresponding to each filter.

The iterative methods used rely on the weights calculated at the previous iteration to calculate the new weights. The set of filters therefore has a memory of the previous iterations. When the listener moves, part of the sound field that was reproduced at the previous iteration (i.e. at the listener's old position) is absent from the listener's new position. It is therefore no longer constrained, and the part of the weights that produced this previous reproduction is no longer useful but remains in memory. In other words, the sound field reproduced at the previous position of the listener, at the previous iteration, is no longer useful for calculating the weights at the current position of the listener, at the current iteration, but remains in memory.

Document WO2012/068174 A2 discloses a method and a system for producing a localized binaural audio signal for a user. However, this document does not take into account the current position of a listener.

The present invention improves this situation.

To this end, it proposes a computer-assisted method according to claim 1.

The present method therefore relies directly on the movement of the listener to vary the forgetting factor at each iteration. This attenuates the memory effect due to the weights calculated at previous iterations. The accuracy of the sound field reproduction is thereby greatly improved, without requiring excessive computing resources.

According to one embodiment, a plurality of points forming the respective positions of a plurality of virtual microphones is defined in the zone, in order to estimate a plurality of respective acoustic pressures in the zone taking into account the respective weight applied to each loudspeaker, each weight including a forgetting factor, and the transfer functions specific to each loudspeaker at each virtual microphone, the plurality of points being centered on the position of the listener.

In this way, the acoustic pressure is estimated at a plurality of points of the zone surrounding the listener. This makes it possible to apply weights to each loudspeaker while taking into account the acoustic pressure differences that may occur at different points of the zone. The acoustic pressures are therefore estimated homogeneously and precisely around the listener, which increases the accuracy of the method.
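The pressure estimation described above can be sketched as follows. The description only requires that transfer functions between each loudspeaker and each virtual microphone are known; this sketch assumes free-field monopole transfer functions, which is an illustrative choice, and the function names are hypothetical.

```python
import numpy as np

def transfer_matrix(mic_pos, spk_pos, k):
    """G[m, n] = e^{-jkr} / (4*pi*r) between microphone m and loudspeaker n,
    assuming free-field monopole propagation (wavenumber k, distance r)."""
    r = np.linalg.norm(mic_pos[:, None, :] - spk_pos[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

def estimate_pressures(G, q):
    """Acoustic pressures at the virtual microphones: p = G q,
    with q the weighted loudspeaker control signals."""
    return G @ q
```

Since the microphones are virtual, the matrix G is simply recomputed whenever the listener (and hence the microphone array) moves.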

According to one embodiment, the zone comprises a first sub-zone in which the chosen sound field is to be made audible and a second sub-zone in which the chosen sound field is to be made inaudible, the first sub-zone being defined dynamically as corresponding to the position of the listener and of said virtual microphone, the virtual microphone being a first virtual microphone, and the second sub-zone being defined dynamically as being complementary to the first sub-zone, the second sub-zone being covered by at least one second virtual microphone whose position is defined dynamically as a function of said second sub-zone, the method further comprising, iteratively:

  • an estimation of an acoustic pressure in the second sub-zone, at least as a function of the acoustic transfer functions, of the respective control signals of the loudspeakers, and of a respective initial weight of the loudspeaker control signals;
  • a calculation of an error between said estimated acoustic pressure in the second sub-zone and a target acoustic pressure desired in the second sub-zone;
  • a calculation and application of respective weights to the loudspeaker control signals, as a function of said error and of a weight forgetting factor, said forgetting factor being calculated as a function of a movement of the listener, said movement being determined by a comparison between a previous position of the listener and the current position of the listener;

the calculation of the acoustic pressure in the second sub-zone then being carried out again as a function of the respective weighted control signals of the loudspeakers.

The method therefore makes it possible to reproduce different sound fields in the same zone, using the same loudspeaker system, as a function of a movement of the listener. Thus, at each iteration, the sound field actually reproduced in the two sub-zones is evaluated so that, at each movement of the listener, the acoustic pressure in each of the sub-zones actually reaches the target acoustic pressure. The position of the listener can be used to determine the sub-zone in which the sound field is to be made audible. The sub-zone in which the sound field is to be made inaudible is then defined dynamically each time the listener moves. The forgetting factor is therefore calculated iteratively for each of the two sub-zones, so that the acoustic pressure in each of the sub-zones reaches its target acoustic pressure.
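One iteration of the estimate/error/weight-update loop described above might look like the following sketch. The exact update rule is not given in this passage; a leaky-LMS form, in which the forgetting factor gamma attenuates the weights inherited from previous iterations, is assumed here purely for illustration.

```python
import numpy as np

def update_weights(w, G2, pt2, mu, gamma):
    """One iteration of the weight update (assumed leaky-LMS form; the text
    only fixes the dependence on the error and on the forgetting factor).
    w     : current loudspeaker weights
    G2    : transfer matrix towards the second sub-zone microphones
    pt2   : target pressures in the second sub-zone
    mu    : adaptation step
    gamma : forgetting factor, driven by the listener's movement
    """
    p2 = G2 @ w                # estimated pressures in the second sub-zone
    e = p2 - pt2               # error against the target pressures
    # The forgetting factor attenuates the inherited weights before the
    # gradient correction is applied, so stale contributions fade away.
    return (1.0 - gamma) * w - mu * (G2.conj().T @ e)
```

With gamma = 0 this reduces to a plain gradient step; a larger gamma forgets the previous weights faster, as described for a moving listener.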

According to one embodiment, the zone comprises a first sub-zone in which the chosen sound field is to be made audible and a second sub-zone in which the chosen sound field is to be made inaudible, the second sub-zone being defined dynamically as corresponding to the position of the listener and of said virtual microphone, the virtual microphone being a first virtual microphone, and the first sub-zone being defined dynamically as being complementary to the second sub-zone, the first sub-zone being covered by at least one second virtual microphone whose position is defined dynamically as a function of said first sub-zone, the method further comprising, iteratively:

  • an estimation of an acoustic pressure in the second sub-zone, at least as a function of the acoustic transfer functions, of the respective control signals of the loudspeakers, and of a respective initial weight of the loudspeaker control signals;
  • a calculation of an error between said estimated acoustic pressure in the second sub-zone and a target acoustic pressure desired in the second sub-zone;
  • a calculation and application of respective weights to the loudspeaker control signals, as a function of said error and of a weight forgetting factor, said forgetting factor being calculated as a function of a movement of the listener, said movement being determined by a comparison between a previous position of the listener and the current position of the listener;

the calculation of the acoustic pressure in the second sub-zone then being carried out again as a function of the respective weighted control signals of the loudspeakers.

Likewise, the position of the listener can be used to define the sub-zone in which the sound field is to be made inaudible, the sub-zone in which the sound field is to be made audible being defined dynamically as complementary to the other sub-zone. The forgetting factor is therefore calculated iteratively for each of the two sub-zones, so that the acoustic pressure in each of the sub-zones reaches its target acoustic pressure.

According to one embodiment, each sub-zone comprises at least one virtual microphone and two loudspeakers, and preferably each sub-zone comprises at least ten virtual microphones and at least ten loudspeakers.

The method is therefore able to operate with a plurality of microphones and loudspeakers.

According to one embodiment, the value of the forgetting factor increases if the listener moves and decreases if the listener does not move.

Increasing the forgetting factor when the listener moves makes it possible to forget the weights calculated at previous iterations more quickly. Conversely, decreasing the forgetting factor when the listener does not move makes it possible to retain, at least in part, the weights calculated at previous iterations.

According to one embodiment, the forgetting factor is defined by

Figure imgb0001

with γ(n) the forgetting factor, n the current iteration, γ max the maximum forgetting factor, χ a parameter defined by the designer and equal to an adaptation step µ, m a variable defined as a function of a movement of the listener and having χ as its maximum, and α a variable for adjusting the speed of increase or decrease of the forgetting factor.

Thus, the forgetting factor is estimated directly as a function of a movement of the listener. In particular, the forgetting factor depends on the distance traveled by the listener at each iteration, in other words on the listener's movement speed. A different forgetting factor can therefore be estimated for each listener. The values of the variables can also be adjusted over the iterations so as to genuinely take the listener's movement into account.

According to one embodiment, a rise step l u and a fall step l d of the forgetting factor are defined such that:

  • if a movement of the listener is determined, m = min(m + l u , 1)
  • if no movement of the listener is determined, m = max(m - l d , 0),

with 0 < l u < 1 and 0 < l d < 1, the rise and fall steps being defined as a function of the listener's movement speed and/or of a change in the chosen sound field to be reproduced.
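The rise/fall update of m given above, together with a mapping from m to the forgetting factor, can be written as follows. The exact formula for γ(n) is in an elided figure (Figure imgb0001), so the linear mapping below is only a placeholder assumption; the update of m follows the rules stated above.

```python
def update_m(m, moved, l_u, l_d):
    """Rise/fall update of the movement variable m, kept in [0, 1]."""
    if moved:
        return min(m + l_u, 1.0)   # listener moved: m rises by the rise step
    return max(m - l_d, 0.0)       # listener still: m falls by the fall step

def forgetting_factor(m, gamma_max):
    """Monotone mapping from m to gamma(n), bounded by gamma_max.
    The patent's exact formula is elided; a linear map is assumed here."""
    return gamma_max * m
```

Choosing l u larger than l d makes the method react quickly when the listener starts moving while releasing the memory of the old weights more gradually once they stop.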

Defining two distinct variables l u and l d makes it possible to choose the reaction speed of the method at the start and/or at the end of the listener's movement.

According to one embodiment, the forgetting factor is between 0 and 1.

This makes it possible either to forget the previous weights entirely or to retain them entirely.

The present invention also relates to a spatialized sound reproduction system according to claim 10.

The present invention also relates to a storage medium for a computer program according to claim 11.

Other advantages and characteristics of the invention will become apparent on reading the detailed description below of exemplary embodiments of the invention, and on examining the appended drawings, in which:

  • Figure 1 shows an example of a system according to one embodiment of the invention,
  • Figures 2a and 2b illustrate, in the form of a flowchart, the main steps of a particular embodiment of the method,
  • Figure 3 schematically illustrates an embodiment in which two sub-zones are defined dynamically according to the geolocation data of a listener,
  • Figures 4a and 4b illustrate, in the form of a flowchart, the main steps of a second embodiment of the method.

The embodiments described with reference to the figures can be combined.

Figure 1 schematically illustrates a system SYST according to an exemplary embodiment. The system SYST comprises an array of loudspeakers HP comprising N loudspeakers (HP 1 ,...,HP N ), with N at least equal to 2, and preferably at least equal to 10. The loudspeaker array HP covers a zone Z. The loudspeakers HP are fed by respective control signals so that each emits a continuous audio signal, with a view to the spatialized reproduction of a chosen sound field in the zone Z. More precisely, the chosen sound field is to be reproduced at a position a1 of a listener U. The loudspeakers can be identified by their position in the zone. The position a1 of the listener U can be obtained by means of a position sensor POS.

The zone is additionally covered by microphones MIC. In an exemplary embodiment, the zone is covered by an array of M microphones MIC, with M at least equal to 1 and preferably at least equal to 10. The microphones MIC are virtual microphones. The term "microphone MIC" is used in the remainder of the description. The microphones MIC are identified by their position in zone Z.

In an exemplary embodiment, the virtual microphones are defined as a function of the position a1 of the listener U in zone Z. In particular, the virtual microphones MIC can be defined so as to surround the listener U. In this exemplary embodiment, the position of the virtual microphones MIC changes according to the position a1 of the listener U.

As illustrated in figure 1, the microphone array MIC surrounds the position a1 of the listener U. Then, when the listener U moves to the position a2, the microphone array MIC is redefined so as to surround the position a2 of the listener. The movement of the listener U is represented schematically by the arrow F.

The system SYST further comprises a processing unit TRAIT capable of implementing the steps of the method. The processing unit TRAIT notably comprises a memory, forming a storage medium for a computer program comprising portions of code for implementing the method described below with reference to figures 2a and 2b. The processing unit TRAIT further comprises a processor PROC capable of executing the code portions of the computer program.

The processing unit TRAIT receives, continuously and in real time, the position of the microphones MIC, the position of the listener U, the positions of each loudspeaker HP, the audio signal S(U) to be reproduced for the listener U, and the target sound field Pt to be reached at the listener's position. The processing unit TRAIT also receives the estimated acoustic pressure P at the position of the listener U. From these data, the processing unit TRAIT calculates the filter FILT to be applied to the signal S in order to reproduce the target sound field Pt. The processing unit TRAIT outputs the filtered signals S(HP1...HPN) to be played back respectively on the loudspeakers HP1 to HPN.

Figures 2a and 2b illustrate the main steps of a method for reproducing a chosen sound field at the position of a listener, when the listener moves. The steps of the method are implemented by the processing unit TRAIT continuously and in real time.

In step S1, the position of the listener U in the zone is obtained by means of a position sensor. From these geolocation data, an array of virtual microphones MIC is defined in step S2. The array of virtual microphones MIC can take any geometric shape, such as a square, a circle or a rectangle. It can be centered on the position of the listener U and defines, for example, a perimeter of a few tens of centimeters to a few tens of meters around the listener U. The array of virtual microphones MIC comprises at least two virtual microphones, and preferably at least ten virtual microphones. The number of virtual microphones and their arrangement set limits on the reproduction quality within the zone.
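The definition of a listener-centred virtual microphone array in step S2 can be sketched as follows (a minimal illustration: the circular shape, the 0.5 m radius and the function name are arbitrary choices, not imposed by the method):

```python
import math

def virtual_mic_array(listener_pos, radius=0.5, num_mics=10):
    """Place num_mics virtual microphones on a circle of the given
    radius (metres), centred on the listener's (x, y) position."""
    cx, cy = listener_pos
    return [(cx + radius * math.cos(2 * math.pi * i / num_mics),
             cy + radius * math.sin(2 * math.pi * i / num_mics))
            for i in range(num_mics)]

# The array is simply recomputed whenever the position sensor reports
# that the listener has moved (position a1 -> position a2).
mics_a1 = virtual_mic_array((0.0, 0.0))
mics_a2 = virtual_mic_array((2.0, 1.0))
```

Recomputing the array rather than moving individual microphones keeps the geometry identical around every new listener position.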

In step S3, the position of each loudspeaker HP is determined. Notably, the zone comprises a loudspeaker array comprising at least two loudspeakers HP. Preferably, the loudspeaker array comprises around ten loudspeakers HP. The loudspeakers HP can be distributed across the zone so that the entire zone is covered by the loudspeakers.

In step S4, the distance between each pair of loudspeaker HP and microphone MIC is calculated. This makes it possible to calculate, in step S5, each of the transfer functions Ftransf for each loudspeaker HP/microphone MIC pair.

More precisely, the target sound field can be defined as a vector Pt(ω, n) over the set of microphones MIC, at each time instant n and for an angular frequency ω = 2πf, f being the frequency. The virtual microphones MIC1 to MICM of the virtual microphone array are arranged at the positions xMIC = [MIC1, ..., MICM] and capture a set of acoustic pressures grouped in the vector P(ω, n).

The sound field is reproduced by the loudspeakers (HP1,...,HPN), which are fixed and have the respective positions xHP = [HP1,...,HPN]. The loudspeakers (HP1,...,HPN) are driven by a set of weights grouped in the vector q(ω, n) = [q1(ω, n),...,qN(ω, n)]T. The superscript T is the transposition operator.

The propagation path of the sound field between each pair of loudspeaker HP and microphone MIC can be defined by a set of transfer functions G(ω, n) assembled in the matrix

G(ω, n) = [ G11(ω, n) ⋯ G1N(ω, n) ; ⋮ ⋱ ⋮ ; GM1(ω, n) ⋯ GMN(ω, n) ]

with the transfer functions defined as:

Gml = (jρck / 4πRml) e^(−jkRml)

where Rml is the distance between a loudspeaker/microphone pair, k the wave number, ρ the density of air and c the speed of sound.

In step S6, the acoustic pressure P at the position of the listener U is determined. More precisely, the acoustic pressure P is determined within the perimeter defined by the array of virtual microphones MIC, and even more precisely at each virtual microphone. The acoustic pressure P is the pressure resulting from the signals played back by the loudspeakers in the zone. It is determined from the transfer functions Ftransf, calculated in step S5, and from the weight applied to the control signals feeding each loudspeaker. The initial weight applied to the control signals of each of the loudspeakers is equal to zero; this corresponds to the weight applied at the first iteration. Then, at each new iteration, the weight applied to the control signals varies, as described below.

In this example, the acoustic pressure P comprises all of the acoustic pressures determined at each of the positions of the virtual microphones. The acoustic pressure estimated at the listener's position U is thus more representative, which makes it possible to obtain a homogeneous result at the output of the method.

Step S7 defines the value of the target acoustic pressure Pt at the position of the listener U. More precisely, the value of the target acoustic pressure Pt is initialized at this step. The target acoustic pressure Pt can be chosen by the designer. It is then transmitted to the processing unit TRAIT in the form of the vector defined above.

In step S8, the error between the target pressure Pt and the estimated pressure P at the position of the listener U is calculated. The error may stem from the fact that an adaptation step µ is applied, so that the target pressure Pt is not reached immediately but only after a certain number of iterations of the method. This minimizes the computational resources needed to reach the target pressure at the position of the listener U, and also ensures the stability of the algorithm. Likewise, the adaptation step µ is chosen so that the error calculated in step S8 has a small value, in order to stabilize the filter.

The error E(n) is calculated as follows:

E(n) = G(n)q(n) − pT(n) = p(n) − pT(n)
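At a given frequency, steps S6 and S8 reduce to plain matrix-vector operations. A pure-Python sketch (the function names and demo values are illustrative):

```python
def estimate_pressure(G, q):
    """Step S6: p = G q, the pressure captured at each virtual microphone
    for the current loudspeaker weights q."""
    return [sum(g * w for g, w in zip(row, q)) for row in G]

def error(G, q, p_target):
    """Step S8: E(n) = G(n) q(n) - pT(n) = p(n) - pT(n)."""
    return [p_i - t_i for p_i, t_i in zip(estimate_pressure(G, q), p_target)]

# With the weights initialised to zero (first iteration), the estimated
# pressure is zero and the error is simply -pT at every microphone.
G_demo = [[1.0 + 0j, 2.0 + 0j], [1j, 0j]]
e0 = error(G_demo, [0j, 0j], [1.0 + 0j, 1j])
```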

In step S12, the forgetting factor γ(n) is calculated in order to compute the weights to be applied to each loudspeaker control signal.

The forgetting factor γ(n) plays two roles. On the one hand, it regularizes the problem; in other words, it prevents the method from diverging when it is in a steady state.

On the other hand, the forgetting factor γ(n) attenuates the weights calculated at previous iterations. Thus, when the listener moves, the previous weights do not influence the future weights.

The forgetting factor γ(n) is determined directly on the basis of a possible movement of the listener. This calculation is illustrated in steps S9 to S11. In step S9, the position of the listener at previous iterations is retrieved. It is possible, for example, to retrieve the position of the listener at all previous iterations. Alternatively, the position of the listener may be retrieved for only part of the previous iterations, for example the last ten or the last hundred iterations.

From these data, a speed of movement of the listener is calculated in step S10. The speed of movement can be calculated in meters per iteration. The listener's speed may be zero.

In step S11, the forgetting factor γ(n) is calculated according to the formula:

Figure imgb0005

with γ the forgetting factor, n the current iteration, γmax the maximum forgetting factor, χ a parameter defined by the designer equal to the adaptation step µ, m a variable defined as a function of a movement of the listener and having χ as its maximum, and α a variable for adjusting the speed of increase or decrease of the forgetting factor.

The forgetting factor γ is bounded between 0 and γmax. According to this definition, γmax therefore corresponds to a maximum percentage of the weights to be forgotten between each iteration.

The value of m varies over the iterations. It is chosen such that if the listener moves, the forgetting factor increases, and when there is no movement, it decreases. In other words, when the listener's speed is positive the forgetting factor increases, and when the listener's speed is zero it decreases.

The variable α mainly influences the convergence speed of the method. In other words, it makes it possible to choose the number of iterations after which the maximum value γmax and/or the minimum value of the forgetting factor is reached.

The variable m is defined as follows:

  • if a movement of the listener is detected, m = min(m + lu, 1)
  • if no movement of the listener is detected, m = max(m − ld, 0).

The variables lu and ld correspond respectively to a rising step and a falling step of the forgetting factor. They are defined according to the speed of movement of the listener and/or according to a modification of the chosen sound field to be reproduced.

In particular, the rising step lu takes a larger value if the previous weights are to be forgotten quickly during a movement (for example when the listener's speed of movement is high). The falling step ld takes a larger value if the previous weights are to be completely forgotten at the end of a movement of the listener.
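The update of m over the iterations can be sketched as follows (the step values lu and ld are arbitrary here; in the method they would themselves depend on the listener's speed and on changes of the chosen sound field):

```python
def update_m(m, moving, l_u=0.1, l_d=0.05):
    """Raise m (bounded by 1) while the listener moves; lower it
    (bounded by 0) while the listener is still."""
    return min(m + l_u, 1.0) if moving else max(m - l_d, 0.0)

m = 0.0
for _ in range(5):            # listener moving for 5 iterations
    m = update_m(m, moving=True)
for _ in range(3):            # then still for 3 iterations
    m = update_m(m, moving=False)
```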

Defining the two variables lu and ld therefore makes it possible to modulate the system, and to take the listener's movement into account continuously and in real time. Thus, at each iteration, the forgetting factor is calculated as a function of the actual movement of the listener, so as to reproduce the chosen sound field at the listener's position.

In step S12, the forgetting factor γ is modified if necessary, depending on the result of the calculation of step S11.

The calculation and modification of the forgetting factor in step S12 serve to compute the weights to be applied to the control signals of the loudspeakers HP. More precisely, at the first iteration, the weights are initialized to zero (step S13) and each loudspeaker plays an unweighted control signal. Then, at each iteration, the value of the weights varies as a function of the error and of the forgetting factor (step S14). The loudspeakers then play a weighted control signal, which may differ at each new iteration. This modification of the control signals explains in particular why the acoustic pressure P estimated at the position of the listener U can be different at each iteration.

The new weights are calculated in step S14 according to the formula: q(n + 1) = q(n)(1 − µγ(n)) − µGH(n)(G(n)q(n) − Pt(n)), with µ the adaptation step, which can vary at each iteration, and γ(n) the forgetting factor, which can also vary. In order to guarantee the stability of the filter, it is advantageous to keep the adaptation step µ below the inverse of the largest eigenvalue of GHG.
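One iteration of this leaky, forgetting-factor gradient-descent update, at a single frequency, can be sketched with numpy (a sketch under stated assumptions, not the patented implementation; the random test matrix and the 0.9 safety margin on µ are illustrative):

```python
import numpy as np

def update_weights(q, G, p_t, mu, gamma):
    """q(n+1) = q(n)(1 - mu*gamma(n)) - mu * G^H (G q(n) - pT(n))."""
    e = G @ q - p_t                              # error at the virtual microphones
    return q * (1.0 - mu * gamma) - mu * (G.conj().T @ e)

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
p_t = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Keep mu below the inverse of the largest eigenvalue of G^H G.
mu = 0.9 / np.linalg.eigvalsh(G.conj().T @ G).max()

q = np.zeros(3, dtype=complex)                   # weights initialised to zero (S13)
for _ in range(500):
    q = update_weights(q, G, p_t, mu, gamma=0.0)
```

With γ = 0 the iteration converges towards the least-squares weights; a positive γ leaks the old weights away, which is what lets the solution track a moving listener.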

In step S15, the filters FILT to be applied to the loudspeakers are calculated, for example one filter per loudspeaker; there can therefore be as many filters as loudspeakers. To obtain time-domain filters from the weights calculated at the previous step, the frequency-domain weights can be mirrored by taking their complex conjugate (Hermitian symmetry), after which an inverse Fourier transform yields the filters in the time domain. The filters calculated in this way may, however, not respect the principle of causality, in which case a time shift of the filter, corresponding for example to half the filter length, can be applied. A plurality of filters, for example one filter per loudspeaker, is thus obtained.
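This frequency-to-time conversion can be sketched with numpy, whose irfft applies the Hermitian (complex-conjugate) symmetry implicitly; the half-length circular shift then restores causality (a sketch; the filter length and sample weights are illustrative):

```python
import numpy as np

def weights_to_filter(q_f, filter_len):
    """q_f holds one loudspeaker's weights on the positive-frequency bins
    (filter_len // 2 + 1 values).  irfft mirrors them with their complex
    conjugates and returns a real impulse response; rolling by half the
    filter length shifts the response to make it causal."""
    h = np.fft.irfft(q_f, n=filter_len)
    return np.roll(h, filter_len // 2)

q_f = np.array([1.0, 0.5 - 0.5j, 0.25 + 0.1j, 0.1 + 0j, 0.0])  # 5 bins -> 8 taps
h = weights_to_filter(q_f, 8)
```

Undoing the shift and taking the forward real FFT recovers the original weights, which is a convenient sanity check on the symmetry handling.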

In step S16, the audio signal to be played back to the listener is obtained. The audio signal S(U) can then be filtered in real time for playback on the loudspeakers. In particular, the signal S(U) is filtered in step S17 by the filters calculated in step S15 and played back by the loudspeaker corresponding to each filter in steps S18 and S19.

Then, at each iteration, the filters FILT are calculated as a function of the filtered signals S(HP1,...,HPN), weighted at the previous iteration and played back by the loudspeakers, as perceived by the microphone array. The filters FILT are applied to the signal S(U) to obtain new control signals S(HP1,...,HPN) to be played back respectively on each loudspeaker of the loudspeaker array.

The method then resumes from step S6, in which the acoustic pressure at the listener's position is determined.

Another embodiment is described below. The same reference numerals designate the same elements.

In this embodiment, the loudspeaker array HP covers a zone comprising a first sub-zone SZ1 and a second sub-zone SZ2. The loudspeakers HP are fed by respective control signals so that each continuously emits an audio signal, with a view to the spatialized reproduction of a chosen sound field. The chosen sound field is to be made audible in one of the sub-zones and inaudible in the other. For example, the chosen sound field is audible in the first sub-zone SZ1 and is to be made inaudible in the second sub-zone SZ2. The loudspeakers can be defined by their position in the zone.

Each sub-zone SZ can be defined by the position of the listener U. It is then possible to define, on the basis of the listener's geolocation data, the first sub-zone SZ1, in which the listener U hears the chosen sound field. The sub-zone SZ1 has, for example, predefined dimensions. In particular, the first sub-zone can correspond to an area of a few tens of centimeters to a few tens of meters, centered on the listener U. The second sub-zone SZ2, in which the chosen sound field is to be made inaudible, can be defined as the complementary sub-zone.

Alternatively, the position of the listener U can define, in the same manner as described above, the second sub-zone SZ2. The first sub-zone SZ1 is then defined as the complement of the second sub-zone SZ2.

According to this embodiment, part of the microphone array MIC covers the first sub-zone SZ1 while the other part covers the second sub-zone SZ2. Each sub-zone comprises at least one virtual microphone. For example, the zone is covered by M microphones MIC1 to MICM: the first sub-zone is covered by the microphones MIC1 to MICN, with N less than M, and the second sub-zone by the microphones MICN+1 to MICM.
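With such a partition, the target pressure vector over all M microphones is simply the chosen field on the SZ1 microphones concatenated with zeros (silence) on the SZ2 microphones. A sketch (the split index and the sample values are illustrative):

```python
def target_pressures(p_bright, n_dark):
    """Target vector for the M microphones: the chosen field Pt1 at the
    microphones MIC1..MICN of sub-zone SZ1, and Pt2 = 0 (inaudible) at
    the microphones MICN+1..MICM of sub-zone SZ2."""
    return list(p_bright) + [0j] * n_dark

# N = 3 microphones in SZ1, M - N = 4 microphones in SZ2.
p_t = target_pressures([1.0 + 0j, 0.8 + 0.2j, 0.5 - 0.1j], n_dark=4)
```

The same weight-update iteration then drives the loudspeakers towards both targets at once: sound in SZ1, silence in SZ2.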

Since the sub-zones are defined according to the position of the listener, they evolve as the listener moves. The positions of the virtual microphones evolve in the same way.

More precisely, and as illustrated in Figure 3, the first sub-zone SZ1 is defined by the position a1 of the listener U (shown in solid lines). The microphone array MIC is defined so as to cover the first sub-zone SZ1. The second sub-zone SZ2 is the complement of the first sub-zone SZ1. The arrow F illustrates a movement of the listener U towards a position a2. The first sub-zone SZ1 is then redefined around the listener U (shown in dotted lines). The microphone array MIC is redefined so as to cover the new first sub-zone SZ1. The rest of the zone constitutes the new second sub-zone SZ2. Thus, the first sub-zone SZ1 initially defined by the position a1 of the listener now lies within the second sub-zone SZ2.

Thus, in the system illustrated in Figure 3, the processing unit TRAIT receives as input the positions of the microphones MIC, the geolocation data of the listener U, the position of each loudspeaker HP, the audio signal to be reproduced S(U) intended for the listener U, and the target sound fields Pt1, Pt2 to be achieved in each sub-zone. From these data, the processing unit TRAIT computes the filter FILT to be applied to the signal S(U) in order to reproduce the target sound fields Pt1, Pt2 in the sub-zones. The processing unit TRAIT also receives the acoustic pressures P1, P2 estimated in each of the sub-zones. The processing unit TRAIT outputs the filtered signals S(HP1,...,HPN) to be broadcast respectively by the loudspeakers HP1 to HPN.

Figures 4a and 4b illustrate the main steps of the method according to the invention. The steps of the method are implemented by the processing unit TRAIT continuously and in real time.

The purpose of the method is to make the chosen sound field inaudible in one of the sub-zones, for example in the second sub-zone SZ2, while following the movement of a listener whose position defines the sub-zones. The method is based on an estimation of the acoustic pressures in each of the sub-zones, so as to apply a desired sound contrast level between the two sub-zones. At each iteration, the audio signal S(U) is filtered as a function of the estimated acoustic pressures and of the sound contrast level, in order to obtain the control signals S(HP1,...,HPN) to be broadcast by the loudspeakers.

In step S20, the position of the listener U is determined, for example by means of a position sensor POS. From this position, the two sub-zones SZ1, SZ2 are defined. For example, the first sub-zone corresponds to the position of the listener U. The first sub-zone SZ1 is, for example, defined as a zone of a few tens of centimeters to a few tens of meters in circumference, centered on the listener U. The second sub-zone SZ2 can be defined as the complement of the first sub-zone SZ1.

Alternatively, it is the second sub-zone SZ2 that is defined by the position of the listener, the first sub-zone SZ1 being the complement of the second sub-zone SZ2.

In step S21, the microphone array MIC is defined, with at least one microphone covering each of the sub-zones SZ1, SZ2.

In step S22, the position of each loudspeaker HP is determined, as described above with reference to Figures 2a and 2b.

In step S23, the distance between each pair of loudspeaker HP and microphone MIC is calculated. This makes it possible to calculate each of the transfer functions Ftransf, for each loudspeaker HP/microphone MIC pair, in step S24.

More precisely, the target sound field can be defined as a vector

$$P_t(\omega, n) = \begin{bmatrix} P_{t1} \\ P_{t2} \end{bmatrix}$$

for the set of microphones MIC, at each instant n and for an angular frequency ω = 2πf, f being the frequency. The microphones MIC1 to MICM are arranged at positions x_MIC = [MIC1,...,MICM] and capture a set of acoustic pressures grouped in the vector P(ω, n).

The sound field is reproduced by the loudspeakers (HP1,...,HPN), which are fixed and have respective positions x_HP = [HP1,...,HPN]. The loudspeakers (HP1,...,HPN) are driven by a set of weights grouped in the vector q(ω, n) = [q1(ω, n),...,qN(ω, n)]^T, where the superscript T denotes transposition.

The propagation path of the sound field between each pair of loudspeaker HP and microphone MIC can be defined by a set of transfer functions G(ω, n) assembled in the matrix

$$G(\omega, n) = \begin{bmatrix} G_{11}(\omega, n) & \cdots & G_{1N}(\omega, n) \\ \vdots & \ddots & \vdots \\ G_{M1}(\omega, n) & \cdots & G_{MN}(\omega, n) \end{bmatrix}$$

with the transfer functions defined as

$$G_{ml} = \frac{j \rho c k}{4 \pi R_{ml}} \, e^{j k R_{ml}}$$

where R_ml is the distance between a loudspeaker/microphone pair, k the wave number, ρ the density of air and c the speed of sound.

In step S25, the acoustic pressures P1 and P2 are determined, respectively, in the first sub-zone SZ1 and in the second sub-zone SZ2.

According to an exemplary embodiment, the acoustic pressure P1 in the first sub-zone SZ1 can be the acoustic pressure resulting from the signals broadcast by the loudspeakers in the first sub-zone. The acoustic pressure P2 in the second sub-zone, in which the sound signals are to be made inaudible, can correspond to the induced acoustic pressure resulting from the signals broadcast by the loudspeakers driven by the control signals associated with the pressure P1 induced in the first sub-zone.

The acoustic pressures P1, P2 are determined from the transfer functions Ftransf calculated in step S24 and from an initial weight applied to the control signals of each loudspeaker. The initial weight applied to the control signals of each of the loudspeakers is equal to zero. The weight applied to the control signals then varies at each iteration, as described below.

According to this exemplary embodiment, the acoustic pressures P1, P2 each comprise the full set of acoustic pressures determined at each of the virtual microphone positions. The acoustic pressure estimated in the sub-zones is thus more representative, which yields a homogeneous result at the output of the method.

Alternatively, a single acoustic pressure P1, P2 is estimated at a single position for the first sub-zone SZ1 and for the second sub-zone SZ2, respectively. This limits the number of calculations, thereby reducing the processing time and improving the responsiveness of the system.

More precisely, the acoustic pressures P1, P2 in each of the sub-zones can be gathered in the form of a vector defined as

$$p(\omega, n) = \begin{bmatrix} P_1 \\ P_2 \end{bmatrix} = G(\omega, n) \, q(\omega, n)$$
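The transfer matrix G and the pressure estimate p(ω, n) = G(ω, n) q(ω, n) above can be sketched numerically as follows; the geometry, frequency and weights are illustrative assumptions, not values from the patent:

```python
import numpy as np

RHO = 1.2   # density of air (kg/m^3)
C = 343.0   # speed of sound (m/s)

def transfer_matrix(mic_pos, hp_pos, f):
    """Free-field transfer functions G_ml = j*rho*c*k / (4*pi*R_ml) * exp(j*k*R_ml)
    for every microphone/loudspeaker pair, following the formula above."""
    k = 2.0 * np.pi * f / C                       # wave number
    # R[m, l] = distance between microphone m and loudspeaker l
    R = np.linalg.norm(mic_pos[:, None, :] - hp_pos[None, :, :], axis=2)
    return 1j * RHO * C * k / (4.0 * np.pi * R) * np.exp(1j * k * R)

# Illustrative geometry: M = 3 virtual microphones, N = 2 loudspeakers.
mics = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
hps = np.array([[0.0, 0.0], [2.0, 0.0]])
G = transfer_matrix(mics, hps, f=500.0)

# Pressures induced at the microphones by the loudspeaker weights q: p = G q.
q = np.array([1.0 + 0.0j, 0.5 + 0.0j])
p = G @ q
```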

In step S26, the sound levels L1 and L2 are determined, respectively, in the first sub-zone SZ1 and in the second sub-zone SZ2. The sound levels L1 and L2 are determined at each position of the microphones MIC. This step converts the estimated acoustic pressure values P1, P2 into measurable values in decibels. In this way, the sound contrast between the first and the second sub-zone can be calculated. In step S27, a desired sound contrast level CC between the first sub-zone and the second sub-zone is defined. For example, the desired sound contrast CC between the first sub-zone SZ1 and the second sub-zone SZ2 is defined beforehand by a designer according to the chosen sound field and/or the perception of a listener U.

More precisely, the sound level L for a microphone can be defined by

$$L = 20 \log_{10}\!\left(\frac{P}{p_0}\right)$$

where p0 is the reference acoustic pressure, that is to say the threshold of hearing.

Thus, the average sound level in a sub-zone can be defined as

$$L = 10 \log_{10}\!\left(\frac{P^{H} P}{M \, p_0^{2}}\right)$$

where P^H is the conjugate transpose of the vector of acoustic pressures in the sub-zone and M is the number of microphones in that sub-zone.

From the sound levels L1, L2 in the two sub-zones, it is possible to calculate the estimated sound contrast C between the two sub-zones: C = L1 - L2.
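The level and contrast computations of steps S26 and S27 can be sketched as follows, assuming the usual 20 µPa reference pressure; the pressure values themselves are illustrative:

```python
import numpy as np

P0 = 20e-6  # assumed reference acoustic pressure (Pa), threshold of hearing

def mean_level_db(p):
    """Average sound level of a sub-zone, L = 10*log10(p^H p / (M * p0^2)),
    with p the vector of complex pressures at the M microphones of the zone."""
    M = len(p)
    return 10.0 * np.log10(np.real(np.vdot(p, p)) / (M * P0 ** 2))

# Illustrative pressure estimates in each sub-zone (arbitrary values).
p1 = np.array([0.2 + 0.1j, 0.15 - 0.05j])    # first sub-zone SZ1
p2 = np.array([0.02 + 0.0j, 0.01 + 0.01j])   # second sub-zone SZ2

L1 = mean_level_db(p1)
L2 = mean_level_db(p2)
C = L1 - L2  # estimated sound contrast between the two sub-zones, in dB
```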

In step S28, the difference between the estimated sound contrast between the two sub-zones and the desired sound contrast CC is calculated. From this difference, an attenuation coefficient can be calculated. The attenuation coefficient is calculated and applied to the estimated acoustic pressure P2 in the second sub-zone in step S29. More precisely, an attenuation coefficient is calculated and applied to each of the estimated acoustic pressures P2 at each of the positions of the microphones MIC of the second sub-zone SZ2. The target acoustic pressure Pt2 in the second sub-zone then takes the value of the attenuated acoustic pressure P2 of the second sub-zone.

Mathematically, the difference Cξ between the estimated sound contrast C and the desired sound contrast CC can be calculated as

$$C_{\xi} = C - C_C = L_1 - L_2 - C_C$$

It is then possible to calculate the attenuation coefficient

$$\xi = 10^{\frac{C_{\xi}}{20}}$$

This coefficient sets the amplitude of the acoustic pressure to be assigned to each microphone so that the sound level in the second sub-zone is homogeneous. When the contrast at a microphone in the second sub-zone is equivalent to the desired sound contrast CC, then Cξ ≈ 0 and therefore ξ ≈ 1. This means that the acoustic pressure estimated at this microphone corresponds to the target pressure value in the second sub-zone.

When the difference between the estimated sound contrast C and the desired sound contrast CC is negative, Cξ < 0, the desired contrast CC has not yet been reached, and a lower pressure amplitude must therefore be obtained at this microphone.

When the difference between the estimated sound contrast C and the desired sound contrast CC is positive, Cξ > 0, the acoustic pressure at this point is too low. It must therefore be increased to match the desired sound contrast in the second sub-zone.

The principle is therefore to use the pressure field present in the second sub-zone, which is induced by the acoustic pressure in the first sub-zone, and then to attenuate or amplify the individual acoustic pressure values estimated at each microphone, so that they match the target sound field in the second sub-zone over the whole set of microphones. For all the microphones, the following vector is defined: ξ = [ξ1,...,ξm,...,ξM]^T.
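The vector ξ can be sketched as follows; evaluating the contrast formula per microphone of the second sub-zone is an assumption consistent with the description, and the numeric levels are illustrative:

```python
import numpy as np

def attenuation_coefficients(L1, L2_per_mic, C_target):
    """Per-microphone coefficients xi_m = 10^(C_xi/20), with
    C_xi = (L1 - L2_m) - C_target, following the formulas above."""
    C_xi = (L1 - np.asarray(L2_per_mic)) - C_target
    return 10.0 ** (C_xi / 20.0)

# Illustrative values: 80 dB in SZ1, desired contrast C_C = 30 dB.
L2_per_mic = np.array([50.0, 45.0, 55.0])  # levels at the SZ2 microphones (dB)
xi = attenuation_coefficients(80.0, L2_per_mic, C_target=30.0)
# xi[0] = 1 (contrast met), xi[1] > 1 (pressure too low), xi[2] < 1 (attenuate)
```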

This coefficient is calculated at each iteration and can therefore evolve. It can thus be written in the form ξ(n).

Alternatively, in the case where a single acoustic pressure P2 is estimated for the second sub-zone SZ2, a single attenuation coefficient is calculated and applied to the acoustic pressure P2.

The attenuation coefficients are calculated so as to meet the contrast criterion defined by the designer. In other words, the attenuation coefficient is defined so that the difference between the sound contrast between the two sub-zones and the desired sound contrast CC is close to zero.

Steps S30 to S32 make it possible to define the values of the target acoustic pressures Pt1, Pt2 in the first and second sub-zones SZ1, SZ2.

Step S30 comprises the initialization of the target acoustic pressures Pt1, Pt2, respectively in the first and second sub-zones SZ1, SZ2. The target acoustic pressures Pt1, Pt2 characterize the target sound field to be broadcast in the sub-zones. The target acoustic pressure Pt1 in the first sub-zone SZ1 is defined as a target pressure chosen by the designer. More precisely, the target pressure Pt1 in the first sub-zone SZ1 is greater than zero, so that the target sound field is audible in this first sub-zone. The target acoustic pressure Pt2 in the second sub-zone is initialized to zero. The target pressures Pt1, Pt2 are then transmitted to the processing unit TRAIT in step S31, in the form of a vector Pt.

At each iteration, new target pressure values are assigned to the target pressures Pt1, Pt2 determined at the previous iteration. This corresponds to step S32. More precisely, the value of the target pressure Pt1 in the first sub-zone is the one defined in step S30 by the designer. The designer can modify this value at any time. The target acoustic pressure Pt2 in the second sub-zone takes the value of the attenuated acoustic pressure P2 (step S29). This makes it possible, at each iteration, to redefine the target sound field to be reproduced in the second sub-zone, taking into account the perception of the listener and the loudspeaker control signals. Thus, the target acoustic pressure Pt2 of the second sub-zone is equal to zero only at the first iteration. Indeed, as soon as the loudspeakers broadcast a signal, a sound field is perceived in the first sub-zone, but also in the second sub-zone.

Mathematically, the target pressure Pt2 in the second sub-zone is calculated as follows.

At the first iteration, Pt2 is equal to zero: Pt2(0) = 0.

At each iteration, the acoustic pressure P2 estimated in the second sub-zone is calculated. This acoustic pressure corresponds to the acoustic pressure induced in the second sub-zone by the radiation of the loudspeakers into the first sub-zone. Thus, at each iteration: P2(ω, n) = G2(ω, n) q(ω, n), where G2(ω, n) is the matrix of transfer functions in the second sub-zone at iteration n.

The target pressure Pt2 at iteration n + 1 can therefore be calculated as Pt2(n + 1) = ξ(n) × P2.

In step S33, the error between the target pressure Pt2 and the estimated pressure P2 in the second sub-zone is calculated. This error is due to the fact that an adaptation step µ is applied so that the target pressure Pt2 is not reached immediately. The target pressure Pt2 is reached after a certain number of iterations of the method. This minimizes the computing resources required to reach the target pressure Pt2 in the second sub-zone SZ2, and also ensures the stability of the algorithm. Likewise, the adaptation step µ is chosen so that the error calculated in step S33 has a small value, in order to stabilize the filter.

The forgetting factor γ(n) is then calculated in order to compute the weights to be applied to each loudspeaker control signal.

As described above, the forgetting factor γ(n) makes it possible to regularize the problem and to attenuate the weights calculated at the previous iterations. Thus, when the listener moves, the previous weights do not influence the future weights.

The forgetting factor γ(n) is determined directly on the basis of any movement of the listener. This calculation is illustrated in steps S34 to S36. In step S34, the positions of the listener at the previous iterations are retrieved. It is, for example, possible to retrieve the position of the listener at all the previous iterations. Alternatively, the position of the listener may be retrieved only for part of the previous iterations, for example the last ten or the last hundred iterations.

From these data, a movement speed of the listener is calculated in step S35. The movement speed can be expressed in meters per iteration. The speed of the listener may be zero.
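Step S35 can be sketched as below; averaging the displacement over the retrieved history is an assumption, and the function name is illustrative:

```python
import numpy as np

def speed_per_iteration(positions):
    """Average displacement of the listener per iteration (meters/iteration),
    computed from the positions recorded at the previous iterations."""
    positions = np.asarray(positions, dtype=float)
    if len(positions) < 2:
        return 0.0  # no history yet: the listener is considered static
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return float(steps.mean())

# Illustrative history over the last few iterations (x, y in meters).
history = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.2, 0.0)]
v = speed_per_iteration(history)
```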

In step S36, the forgetting factor γ(n) is calculated according to the formula described above.

In step S37, the forgetting factor γ(n) is modified if necessary, depending on the result of the calculation of step S36.

The calculation and modification of the forgetting factor in step S37 are used to calculate the weights to be applied to the control signals of the loudspeakers HP. More precisely, at the first iteration the weights are initialized to zero (step S38). Each loudspeaker broadcasts an unweighted control signal. Then, at each iteration, the value of the weights varies as a function of the error and of the forgetting factor (step S39). The loudspeakers then broadcast the control signal weighted in this way.

The weights are calculated as described above with reference to Figures 2a and 2b, according to the formula:

q(n+1) = q(n)(1 - μγ(n)) - μ G^H(n)(G(n)q(n) - Pt(n))
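A minimal sketch of this update, q(n+1) = q(n)(1 - μγ(n)) - μ G^H(n)(G(n)q(n) - Pt(n)), assuming q is the vector of complex loudspeaker weights, G the matrix of acoustic transfer functions between the N loudspeakers and the M virtual microphones, and Pt the vector of target pressures. The sign of the gradient term is an assumption, the extracted formula being ambiguous on this point.

```python
import numpy as np

def update_weights(q, G, Pt, mu, gamma):
    """One iteration of the weighted update:
    q(n+1) = q(n)(1 - mu*gamma(n)) - mu * G^H(n) (G(n) q(n) - Pt(n)).
    q: (N,) complex loudspeaker weights; G: (M, N) transfer matrix
    (M virtual microphones, N loudspeakers); Pt: (M,) target pressures."""
    error = G @ q - Pt                    # estimated minus target pressure
    return q * (1.0 - mu * gamma) - mu * (G.conj().T @ error)
```

Iterating this update drives the estimated pressures G q towards the targets Pt, the leakage term μγ(n)q(n) pulling the weights back towards zero when the listener moves.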

The filters FILT to be applied to the loudspeakers are then determined in step S40. For example, one filter is calculated per loudspeaker HP; there may therefore be as many filters as loudspeakers. The computation of the filters applied to each loudspeaker comprises, for example, an inverse Fourier transform.

The filters are then applied to the audio signal to be reproduced, S(U), obtained in step S41. Step S41 is an initialization step, implemented only at the first iteration of the method. The audio signal S(U) is the signal intended for the listener U. In step S42, the filters FILT are applied to the signal S(U) in order to obtain N filtered command signals S(HP1,...,HPN) to be broadcast respectively by the loudspeakers (HP1,...,HPN) in step S43. The command signals S(HP1,...,HPN) are broadcast respectively by each loudspeaker (HP1,...,HPN) of the loudspeaker array in step S44. Generally speaking, the loudspeakers HP broadcast the command signals continuously.
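Steps S40 to S43 can be sketched as follows: the frequency-domain weights of each loudspeaker are turned into a time-domain impulse response by an inverse Fourier transform, and the audio signal S(U) is filtered by it to produce that loudspeaker's command signal. This is a minimal sketch; the patent does not fix the filter length or the exact transform, so the use of `irfft` and `convolve` is an assumption.

```python
import numpy as np

def weights_to_filters(Q):
    """Q: (N, K) array of per-loudspeaker weights on K positive-frequency
    bins, one row per loudspeaker (step S39). Returns one real FIR impulse
    response per loudspeaker via an inverse FFT (step S40)."""
    return np.fft.irfft(Q, axis=1)

def command_signals(s_u, filters):
    """Apply each loudspeaker's filter FILT to the audio signal S(U) to
    obtain the N filtered command signals S(HP1,...,HPN) (step S42)."""
    return [np.convolve(s_u, h) for h in filters]
```

A flat (all-ones) spectrum yields a unit impulse, so the corresponding command signal is simply the input signal, which makes the pair easy to sanity-check.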

Then, at each iteration, the filters FILT are computed from the signals S(HP1,...,HPN) filtered at the previous iteration and broadcast by the loudspeakers, as perceived by the microphone array. The filters FILT are applied to the signal S(U) to obtain new command signals S(HP1,...,HPN) to be broadcast respectively by each loudspeaker of the loudspeaker array.

The method is then restarted from step S35, in which the acoustic pressures P1, P2 of the two sub-zones SZ1, SZ2 are estimated.

Of course, the present invention is not limited to the embodiments described above; it extends to other variants.

For example, the method can be implemented for a plurality of listeners U1 to UN. In this embodiment, an audio signal S(U1,...,UN) can be provided respectively for each listener. The steps of the method can thus be carried out for each listener, so that the sound field chosen by each listener is rendered at his or her position, taking his or her movements into account. A plurality of forgetting factors can thus be computed, one for each listener.
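For a plurality of listeners, the per-listener state can simply be kept separately, each listener having his or her own movement variable and hence his or her own forgetting factor. The sketch below is hypothetical: the `ListenerState` container and its fields are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ListenerState:
    """Hypothetical per-listener state; fields are illustrative."""
    position: np.ndarray   # current position from the sensor CAPT
    m: float = 0.0         # movement variable of claims 7 and 8

def forgetting_factors(states, gamma_max=0.9, chi=1.0, alpha=2.0):
    """One forgetting factor per listener, computed from that
    listener's own movement variable m (cf. claim 7)."""
    return [gamma_max * (s.m / chi) ** alpha for s in states]
```

Each listener's weight update then uses his or her own forgetting factor, so a moving listener does not degrade the convergence achieved for a static one.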

According to another variant, the chosen sound field is a first sound field, and at least one second chosen sound field is broadcast by the loudspeaker array HP. The second chosen sound field is audible in the second sub-zone for a second listener and is to be made inaudible in the first sub-zone for a first listener. The loudspeakers are fed with the first command signals so as to each continuously emit an audio signal corresponding to the first chosen sound field, and are also fed with second command signals so as to each continuously emit an audio signal corresponding to the second chosen sound field. The steps of the method as described above can be applied to the first sub-zone SZ1, so that the second chosen sound field is made inaudible in the first sub-zone SZ1 while taking the movements of both listeners into account.

According to another exemplary embodiment, the first and second sub-zones are not complementary. For example, within a zone, a first sub-zone may be defined with respect to a first listener U1 and a second sub-zone with respect to a second listener U2. The sound field is to be made audible in the first sub-zone and inaudible in the second sub-zone; the sound field in the remainder of the zone may be left uncontrolled.

Claims (11)

  1. Method implemented by computational means, for spatialized audio rendering using an array of loudspeakers (HP1, HPN) covering a region (Z), with a view to diffusing a chosen audio field, audible in at least one position of at least one listener (U) in the region, wherein the loudspeakers (HP1, HPN) are fed with respective command signals (S(HP1,...,HPN)) so as to each continuously emit an audio signal, the method comprises, iteratively and continuously for each listener:
    - obtaining the current position of a listener (U) in the region (Z) by means of a position sensor (CAPT);
    - determining distances between at least one point of the region and the respective positions of the loudspeakers (HP1,...,HPN), with a view to deducing therefrom respective acoustic transfer functions of the loudspeakers at said point, said point corresponding to a position of a virtual microphone (MIC),
    - estimating an acoustic pressure (P) at said virtual microphone (MIC) at least depending on the acoustic transfer functions and on an initial respective weight of the command signals (S (HP1, ..., HPN)) for the loudspeakers (HP1,...,HPN),
    - computing an error between said estimated acoustic pressure (P) and a target acoustic pressure (Pt), desired at said virtual microphone (MIC);
    - computing and applying respective weights to the command signals (S(HP1,...,HPN)) for the loudspeakers (HP1, ..., HPN), depending on said error,
    characterized in that respective weights are furthermore computed and applied to the command signals (S(HP1,...,HPN)) for the loudspeakers (HP1,...,HPN) depending on a weight forgetting factor, said forgetting factor being computed depending on a movement of the listener, said movement being determined by comparison between a previous position of the listener and the current position of the listener;
    and in that the acoustic pressure (P) at the current position of the listener is computed again depending on the respective command signals (S(HP1,...,HPN)), thus weighted, for the loudspeakers (HP1,...,HPN),
    and in that an acoustic pressure (P) at said virtual microphone (MIC) is also estimated depending on the respective command signals (S(HP1,...,HPN)) for the loudspeakers (HP1,...,HPN),
    and in that a current position of said point is defined dynamically depending on the current position of the listener.
  2. Method according to Claim 1, wherein a plurality of points forming the respective positions of a plurality of virtual microphones (MIC) is defined in the region (Z) with a view to estimating a plurality of respective acoustic pressures (P) in the region taking into account the respective weight applied to each loudspeaker (HP1,...,HPN), each respectively comprising a forgetting factor, and transfer functions specific to each loudspeaker (HP1,...,HPN) at each virtual microphone (MIC), the plurality of points being centred on the position of the listener.
  3. Method according to either of Claims 1 and 2, wherein the region (Z) comprises a first sub-region (SZ1) in which the chosen audio field is to be made audible and a second sub-region (SZ2) in which the chosen audio field is to be made inaudible, the first sub-region (SZ1) being defined dynamically by the position of the listener and of said virtual microphone (MIC), the virtual microphone (MIC) being a first virtual microphone, and the second sub-region (SZ2) being defined dynamically as being complementary to the first sub-region, the second sub-region (SZ2) being covered by at least one second virtual microphone a position of which is defined dynamically depending on said second sub-region (SZ2), the method furthermore comprising, iteratively:
    - estimating an acoustic pressure (P2) in the second sub-region, at least depending on the acoustic transfer functions, on the respective command signals (S(HP1, ..., HPN)) for the loudspeakers and on an initial respective weight of the command signals (S(HP1,...,HPN)) for the loudspeakers;
    - computing an error between said estimated acoustic pressure (P2) in the second sub-region and a target acoustic pressure (Pt2), desired in the second sub-region;
    - computing and applying respective weights to the command signals (S(HP1,...,HPN)) for the loudspeakers, depending on said error and on a weight forgetting factor, said forgetting factor being computed depending on a movement of the listener, said movement being determined by comparison between a previous position of the listener and the current position of the listener;
    - the acoustic pressure (P2) in the second sub-region being computed again depending on the respective command signals (S(HP1,...,HPN)), thus weighted, for the loudspeakers.
  4. Method according to either of Claims 1 and 2, wherein the region (Z) comprises a first sub-region (SZ1) in which the chosen audio field is to be made audible and a second sub-region (SZ2) in which the chosen audio field is to be made inaudible, the second sub-region (SZ2) being defined dynamically by the position of the listener and of said virtual microphone (MIC), the virtual microphone (MIC) being a first virtual microphone, and the first sub-region (SZ1) being defined dynamically as being complementary to the second sub-region (SZ2), the first sub-region (SZ1) being covered by at least one second virtual microphone (MIC) a position of which is defined dynamically depending on said first sub-region (SZ1), the method furthermore comprising, iteratively:
    - estimating an acoustic pressure (P2) in the second sub-region, at least depending on the acoustic transfer functions, on the respective command signals (S(HP1,...,HPN)) for the loudspeakers and on an initial respective weight of the command signals (S(HP1,...,HPN)) for the loudspeakers;
    - computing an error between said estimated acoustic pressure (P2) in the second sub-region and a target acoustic pressure (Pt2), desired in the second sub-region (SZ2) ;
    - computing and applying respective weights to the command signals (S(HP1,...,HPN)) for the loudspeakers, depending on said error and on a weight forgetting factor, said forgetting factor being computed depending on a movement of the listener, said movement being determined by comparison between a previous position of the listener and the current position of the listener; the acoustic pressure in the second sub-region (SZ2) being computed again depending on the respective, weighted, command signals (S(HP1,...,HPN)) for the loudspeakers.
  5. Method according to either of Claims 3 and 4, wherein each sub-region comprises at least one virtual microphone (MIC) and two loudspeakers (HP1,...,HPN); preferably, each sub-region comprises at least ten virtual microphones (MIC) and at least ten loudspeakers (HP1,...,HPN).
  6. Method according to one of Claims 1 to 5, wherein a value of the forgetting factor:
    - increases if the listener moves;
    - decreases if the listener does not move.
  7. Method according to one of Claims 1 to 6, wherein the forgetting factor is defined by: γ(n) = γmax × (m/χ)^α,
    where γ(n) is the forgetting factor, n is the current iteration, γmax is the maximum forgetting factor, χ is a parameter defined equal to µ, an adaptation step, m is a variable defined depending on a movement of the listener having χ as its maximum, and α is a variable for adjusting the rate of increase or decrease of the forgetting factor.
  8. Method according to Claim 7, wherein a rise step lu and a fall step ld of the forgetting factor are defined such that:
    - if movement of the listener is determined, m = min(m + lu, 1);
    - if no movement of the listener is determined, m = max(m - ld, 0),
    where 0 < lu < 1 and 0 < ld < 1, the rise and fall steps being defined depending on a speed of movement of a listener and/or a change in the chosen audio field to be rendered.
  9. Method according to one of Claims 1 to 8, wherein the forgetting factor is between 0 and 1.
  10. System for spatialized audio rendering using an array of loudspeakers covering a region, with a view to diffusing a chosen audio field, selectively audible in a position of a listener in the region, characterized in that it comprises a position sensor (CAPT) and a processing unit that are designed for processing and implementing the method according to any one of Claims 1 to 9.
  11. Medium for storing a computer program, loadable into a memory associated with a processor in a system according to Claim 10, and comprising code segments for implementing a method according to any one of Claims 1 to 9 on execution of said program by the processor.
EP19778569.4A 2018-08-29 2019-08-22 Method for the spatial sound reproduction of a sound field that is audible in a position of a moving listener and system implementing such a method Active EP3844981B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1857774A FR3085572A1 (en) 2018-08-29 2018-08-29 METHOD FOR A SPATIALIZED SOUND RESTORATION OF AN AUDIBLE FIELD IN A POSITION OF A MOVING AUDITOR AND SYSTEM IMPLEMENTING SUCH A METHOD
PCT/FR2019/051952 WO2020043979A1 (en) 2018-08-29 2019-08-22 Method for the spatial sound reproduction of a sound field that is audible in a position of a moving listener and system implementing such a method

Publications (2)

Publication Number Publication Date
EP3844981A1 EP3844981A1 (en) 2021-07-07
EP3844981B1 true EP3844981B1 (en) 2023-09-27

Family

ID=65951625

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19778569.4A Active EP3844981B1 (en) 2018-08-29 2019-08-22 Method for the spatial sound reproduction of a sound field that is audible in a position of a moving listener and system implementing such a method

Country Status (5)

Country Link
US (1) US11432100B2 (en)
EP (1) EP3844981B1 (en)
CN (1) CN112840679B (en)
FR (1) FR3085572A1 (en)
WO (1) WO2020043979A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11417351B2 (en) * 2018-06-26 2022-08-16 Google Llc Multi-channel echo cancellation with scenario memory
CN114199368B (en) * 2021-11-30 2024-04-26 北京工商大学 Full-band PP sound intensity automatic measurement device and measurement method
CN116489573A (en) * 2022-12-21 2023-07-25 瑞声科技(南京)有限公司 Sound field control method, device, equipment and readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPR647501A0 (en) 2001-07-19 2001-08-09 Vast Audio Pty Ltd Recording a three dimensional auditory scene and reproducing it for the individual listener
EP2056627A1 (en) 2007-10-30 2009-05-06 SonicEmotion AG Method and device for improved sound field rendering accuracy within a preferred listening area
GB2457508B * 2008-02-18 2010-06-09 Sony Computer Entertainment Ltd System and method of audio adaptation
US9578440B2 (en) * 2010-11-15 2017-02-21 The Regents Of The University Of California Method for controlling a speaker array to provide spatialized, localized, and binaural virtual surround sound
US9031268B2 (en) * 2011-05-09 2015-05-12 Dts, Inc. Room characterization and correction for multi-channel audio
US20150131824A1 (en) 2012-04-02 2015-05-14 Sonicemotion Ag Method for high quality efficient 3d sound reproduction
EP2891338B1 (en) * 2012-08-31 2017-10-25 Dolby Laboratories Licensing Corporation System for rendering and playback of object based audio in various listening environments
JP2015206989A (en) 2014-04-23 2015-11-19 ソニー株式会社 Information processing device, information processing method, and program
CN108141691B (en) * 2015-10-14 2020-12-01 华为技术有限公司 Adaptive reverberation cancellation system
US10979843B2 (en) * 2016-04-08 2021-04-13 Qualcomm Incorporated Spatialized audio output based on predicted position data

Also Published As

Publication number Publication date
FR3085572A1 (en) 2020-03-06
US11432100B2 (en) 2022-08-30
EP3844981A1 (en) 2021-07-07
WO2020043979A1 (en) 2020-03-05
CN112840679A (en) 2021-05-25
US20210360363A1 (en) 2021-11-18
CN112840679B (en) 2022-07-12


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20210216

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ORANGE

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20230404

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602019038275

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20231227

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20230927

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1616558

Country of ref document: AT

Kind code of ref document: T

Effective date: 20230927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240127

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240129

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602019038275

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20240628

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240723

Year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240723

Year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240723

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230927