EP2901718A1 - Procédé et système de restitution d'un signal audio (Method and system for rendering an audio signal) - Google Patents
Procédé et système de restitution d'un signal audio (Method and system for rendering an audio signal)
- Publication number
- EP2901718A1 (application EP13779299.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sound
- spatial
- restitution
- window
- reproduction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 97
- 230000005236 sound signal Effects 0.000 title claims abstract description 83
- 238000012732 spatial analysis Methods 0.000 claims abstract description 62
- 238000011282 treatment Methods 0.000 claims abstract description 24
- 238000009877 rendering Methods 0.000 claims description 169
- 239000013598 vector Substances 0.000 claims description 57
- 238000012545 processing Methods 0.000 claims description 42
- 230000008569 process Effects 0.000 claims description 24
- 238000000354 decomposition reaction Methods 0.000 claims description 21
- 230000015572 biosynthetic process Effects 0.000 claims description 14
- 230000000694 effects Effects 0.000 claims description 10
- 238000000605 extraction Methods 0.000 claims description 7
- 238000001914 filtration Methods 0.000 description 13
- 238000004458 analytical method Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 6
- 238000003786 synthesis reaction Methods 0.000 description 6
- 238000012731 temporal analysis Methods 0.000 description 5
- 238000009792 diffusion process Methods 0.000 description 4
- 238000004091 panning Methods 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 238000004590 computer program Methods 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 238000012886 linear function Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000008447 perception Effects 0.000 description 2
- 230000005855 radiation Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000021615 conjugation Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 238000009432 framing Methods 0.000 description 1
- 238000001093 holography Methods 0.000 description 1
- 238000004377 microelectronic Methods 0.000 description 1
- 238000002156 mixing Methods 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/13—Application of wave-field synthesis in stereophonic audio systems
Definitions
- the invention relates to the general field of acoustic processing and sound spatialization.
- It relates more particularly to the rendering of a multichannel audio signal on a determined rendering device, equipped with a plurality of loudspeakers arranged at fixed locations of the rendering device.
- the invention applies in a preferred but non-limiting manner to a rendering device of the acoustic-enclosure type, also known as a loudspeaker enclosure.
- an acoustic enclosure consists, in a known manner, of a single, monobloc structure integrating the various loudspeakers used for reproducing the audio signal (the loudspeakers cannot be separated from the enclosure).
- An example of an acoustic enclosure is in particular a sound bar in which the various loudspeakers are integrated.
- the invention is also of particular interest when applied to a so-called compact acoustic enclosure or, more generally, to a compact rendering device.
- a compact rendering device is a device of small dimensions (in particular with respect to the dimensions of the room or hall in which the rendering device is intended to be placed), in which the loudspeakers are mounted relatively close to each other.
- this device can be monobloc (as an acoustic speaker) or alternatively be composed of several elements, grouped together to form a compact assembly, each element being equipped with one or more speakers.
- the largest dimension of a compact rendering device generally does not exceed 2 meters, while the spacing between the loudspeakers, two by two, is less than 50 centimeters.
- This method is based on a spatial analysis of the multichannel audio signal to be reproduced, making it possible to extract and locate the sound objects of the audio signal situated inside a sound reproduction window defined from the physical position of the loudspeakers of the playback device and the extended listening area.
- the extracted sound objects are restored inside the sound reproduction window, according to their location in this window, using a first rendering process.
- This first rendering process is, for example, a synthesis of the acoustic field (WFS processing, for "Wave Field Synthesis"), known per se.
- the other components of the multichannel audio signal are also restored inside the sound reproduction window, according to a second rendering process (such as, for example, an intensity panoramic effect).
- a compact reproduction device is subject to certain constraints, particularly as regards the size of the usable listening area and of the sound reproduction window tied to the physical arrangement of the loudspeakers on the device, both of which are generally smaller than with a reproduction device composed of several entities scattered throughout the room, as envisaged in document WO 2012/025580.
- the invention responds in particular to this need by proposing a method of rendering a multichannel audio signal on a playback device equipped with a plurality of loudspeakers, these loudspeakers being arranged at fixed locations of the playback device and defining a sound reproduction spatial window with respect to a so-called reference spatial position.
- the restitution process according to the invention is remarkable in that it comprises:
- a step of spatial analysis of the multichannel audio signal comprising:
- o a step of extracting at least one sound object from the signal, and o a step of estimating, for each extracted sound object, a diffuse or localized character of this sound object, and a position of this sound object with respect to the spatial sound reproduction window of the rendering device;
- a step of reproducing the audio signal on the plurality of loudspeakers of the rendering device, applying to each sound object extracted from the audio signal a rendering processing on at least one loudspeaker, this rendering processing depending on the diffuse or localized character of the sound object and on its position relative to the spatial sound reproduction window estimated during the spatial analysis step, the rendering processing comprising the creation of at least one virtual source outside the spatial reproduction window of the rendering device, from the loudspeakers of the rendering device, when the sound object is estimated during the spatial analysis step as being diffuse or positioned outside the restitution space window of the rendering device.
- the invention also relates to a system for rendering a multichannel audio signal on a rendering device equipped with a plurality of loudspeakers, these loudspeakers being arranged at fixed locations of the rendering device and defining a spatial sound reproduction window with respect to a reference position, this restitution system comprising:
- Means for spatial analysis of the multichannel audio signal comprising:
- o means for extracting at least one sound object from the signal, and o estimating means for estimating, for each extracted sound object, a diffuse or localized character of this sound object, and a position of this sound object with respect to the spatial sound reproduction window of the rendering device;
- Means for reproducing the audio signal on the plurality of loudspeakers of the rendering device, able to apply to each sound object extracted from the audio signal a rendering processing on at least one loudspeaker of the plurality, this rendering processing depending on the diffuse or localized character of the sound object and on its position relative to the spatial sound reproduction window estimated during the spatial analysis step, the rendering processing comprising the creation of at least one virtual source outside the spatial reproduction window of the rendering device, from the loudspeakers of the rendering device, when the sound object is estimated by the spatial analysis means as being diffuse or positioned outside the restitution space window of the rendering device.
- By step (respectively means) of restitution on loudspeakers is meant here the step (respectively the means) consisting in generating and supplying signals intended to feed the loudspeakers of the rendering device. These signals are then broadcast (i.e. transmitted) by the loudspeakers of the playback device so as to reproduce the multichannel audio signal.
- By reference spatial position is meant here both a point in space characterizing the position of a target listener of the audio signal, and a larger area of space in which one or more listeners are likely to be found.
- the invention therefore proposes to implement a spatial analysis of the multichannel audio signal to be reproduced in order to separate the sound objects composing the audio signal as a function, on the one hand, of their localized character in space (i.e. discrete, generated by a localizable source) or diffuse character, and on the other hand, of their position relative to the sound reproduction window defined by the reference spatial position and the physical location of the loudspeakers on (or in) the rendering device with respect to this reference spatial position.
- This separation of sound objects is exploited, in accordance with the invention, by applying to the extracted objects rendering processes which take into account their localized or diffuse character, as well as the positions, inside or outside the sound reproduction window, of the sources at the origin of these objects.
- the invention thus links the rendering processes applied to the sound objects of the multichannel signal to be reproduced directly to the spatial characteristics of these objects extracted during the spatial analysis of the multichannel signal. More precisely, the sound objects identified during the spatial analysis step as being diffuse or positioned outside the reproduction space window of the rendering device are advantageously reproduced via the loudspeakers of the rendering device, outside this window, through a rendering processing including the creation of virtual sources outside this window.
- the restitution processing applied to this sound object during the restitution step is preferentially able to restore this sound object within the sound reproduction space window of the rendering device, at the location of the source at the origin of this sound object.
- This restitution inside the spatial sound reproduction window can be done directly, by broadcasting the sound objects on the loudspeakers of the rendering device without resorting to complex spatial filtering processes: for example, by broadcasting the object as is on one or more loudspeakers, or by simply applying an intensity panning effect. Such techniques are known per se and relatively simple to implement.
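As an illustrative sketch (not the patent's specific implementation; the function name and the two-speaker case are assumptions), a constant-power intensity pan spreading a mono sound object between two adjacent loudspeakers can be written as:

```python
import math

def intensity_pan(sample: float, position: float) -> tuple[float, float]:
    """Constant-power pan of a mono sample between two adjacent speakers.

    position: -1.0 (fully on the left speaker) .. +1.0 (fully on the right).
    Returns the (left, right) signals fed to the two speakers.
    """
    # Map position to an angle in [0, pi/2] and use sine/cosine gains,
    # so that gL^2 + gR^2 == 1 for every position (constant power).
    angle = (position + 1.0) * math.pi / 4.0
    return sample * math.cos(angle), sample * math.sin(angle)

# A centred object (position 0) is split equally between the two speakers.
left, right = intensity_pan(1.0, 0.0)
```

The same gain law extends to more than two loudspeakers by panning between the pair that brackets the target position.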
- the rendering processing inside the reproduction space window can comprise the creation of one or more virtual sources from the loudspeakers of the rendering device, inside the spatial sound reproduction window of the rendering device. This may be a WFS-type or derived processing.
- the direction or position of the virtual sources, as well as, where appropriate, their amplitude, are then determined from the estimated position of the sources at the origin of the localized sound objects extracted from the multichannel signal and from their contribution (e.g. in terms of sound level) to the multichannel signal.
- the application, during the restitution step, of the aforementioned rendering processes, chosen according to the characteristics of the sound objects determined during the spatial analysis step, makes it possible to separate the objects that are diffuse or coming from outside the rendering window from the objects located inside the window (such objects typically include voices or dialogue).
- This benefits the listener located at the reference spatial position in relation to the sound reproduction window offered by the rendering device, a window that is particularly limited in the case of a compact reproduction device.
- the listener has the feeling of being immersed in the sound stage (perception of envelopment in the sound stage).
- the invention takes advantage of a phenomenon well known in psychoacoustics under the name of the "cocktail party effect", which reflects the ability of the human auditory system to select a sound source in a noisy environment and to process sounds even when they are not at the heart of human attention.
- the invention thus allows a rendering of the multichannel audio signal of very good quality, including on a compact playback device, while preserving the accuracy and clarity of the sound objects of the signal that are localized and coming from within the rendering window. It can be applied to any multichannel signal format, such as a stereo, 5.1, 7.1, 10.2 or Higher Order Ambisonics (HOA) signal, and so on.
- the processing carried out by the invention does not in itself aim to modify the characteristics of the sound scene of the multichannel audio signal, but promotes the intelligibility of the sound objects located in the sound reproduction window and immerses the listener in the sound stage.
- the spatial analysis step further comprises estimating the position of the sound object with respect to the center of the spatial sound reproduction window of the rendering device.
- the invention has a preferred application, but not limited to, when the rendering device is an acoustic chamber in which the plurality of loudspeakers is arranged.
- Such an acoustic speaker is for example a sound bar equipped with several speakers.
- the spatial analysis step comprises a decomposition of the received audio signal into a plurality of frequency sub-bands (e.g. octave, third-octave or auditory bands), the extraction of said at least one sound object being performed on at least one frequency sub-band.
- the spatial analysis of the audio signal is in fact carried out by frequency subband: it is thus possible to better isolate the sound objects composing the multichannel audio signal. In particular, it is possible to isolate several sound objects in the multichannel audio signal, for example one per frequency subband.
- the diffuse or localized nature of the extracted sound object is estimated from at least one evaluated correlation between two distinct channels of the multichannel audio signal.
- the position of the extracted sound object with respect to the sound reproduction spatial window can be estimated from at least one evaluated difference in levels between two distinct channels of the multichannel audio signal.
- the determination of the characteristics associated with each sound object extracted from the multichannel audio signal can therefore be performed very simply, by calculating correlations and level differences between the signals carried on the different channels of the multichannel signal.
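As an illustrative sketch of the level-difference part (the tangent pan law and the 30-degree half-window below are assumptions; the patent does not prescribe a specific mapping), an inter-channel level difference can be converted into a position estimate inside the reproduction window:

```python
import math

def level_difference_db(x, y):
    """Level difference (dB) between two channels over one analysis frame."""
    ex = sum(s * s for s in x)
    ey = sum(s * s for s in y)
    return 10.0 * math.log10(ex / ey)

def pan_angle_from_ild(ild_db, max_angle_deg=30.0):
    """Map an inter-channel level difference to an angle in the window.

    Uses the classic tangent pan law (a hypothetical choice here):
    tan(theta)/tan(theta_max) = (gL - gR) / (gL + gR).
    """
    g = 10.0 ** (ild_db / 20.0)            # amplitude ratio between channels
    t = math.tan(math.radians(max_angle_deg))
    return math.degrees(math.atan(t * (g - 1.0) / (g + 1.0)))

# Equal levels on both channels place the object at the window centre.
angle = pan_angle_from_ild(0.0)
```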
- the spatial analysis step comprises the determination of a Gerzon vector representative of the multichannel audio signal.
- the Gerzon vector of a multichannel audio signal is derived from the respective contributions (direction and intensity or energy) of the different channels of the multichannel signal to the sound scene perceived by the listener at the reference position.
- the determination of such a vector for a multichannel audio signal is described for example in the document US 2007/0269063.
- the Gerzon vector of a multichannel audio signal reflects the spatial location of the multichannel audio signal as perceived by the listener from the reference position. The determination of this Gerzon vector makes it possible to dispense with the calculation of correlations between the different channels of the multichannel signal in order to determine the diffuse or localized nature of the sound objects extracted from the signal.
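As a sketch of one common formulation (the channel azimuths, gains and function name below are illustrative assumptions, not the patent's exact definition, which follows US 2007/0269063), the velocity- and energy-type Gerzon vectors of one analysis frame can be computed as:

```python
import math

def gerzon_vectors(gains, azimuths_deg):
    """Gerzon velocity and energy vectors of a multichannel frame.

    gains: amplitude per channel; azimuths_deg: channel directions as seen
    from the reference position. Returns (velocity, energy) as (x, y)
    tuples. A vector norm close to 1 indicates a well-localized object,
    while a small norm indicates diffuse content.
    """
    ux = [math.cos(math.radians(a)) for a in azimuths_deg]
    uy = [math.sin(math.radians(a)) for a in azimuths_deg]
    gsum = sum(gains)                      # velocity vector: linear weights
    esum = sum(g * g for g in gains)       # energy vector: squared weights
    velocity = (sum(g * x for g, x in zip(gains, ux)) / gsum,
                sum(g * y for g, y in zip(gains, uy)) / gsum)
    energy = (sum(g * g * x for g, x in zip(gains, ux)) / esum,
              sum(g * g * y for g, y in zip(gains, uy)) / esum)
    return velocity, energy

# A source fed only to the channel at +30 degrees yields a unit vector there.
(vx, vy), _ = gerzon_vectors([0.0, 1.0], [-30.0, 30.0])
```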
- the spatial analysis step comprises a spatial decomposition of the multichannel signal into spherical harmonics.
- Such spatial decomposition is known to those skilled in the art and described for example in WO 2012/025580. It allows a very precise spatial analysis of the multichannel audio signal and the sound objects composing it. Thus, in particular, several sound objects can be determined for the same frequency subband.
- the restitution processing applied to this sound object uses a transaural technique for reproducing this sound object on the side loudspeakers of the rendering device.
- This first embodiment has a preferred application in the case of a playback device equipped with a reduced number of speakers, for example a central speaker and two side speakers.
- the plurality of speakers of the playback device comprises a central speaker and side speakers
- this sound object is broadcast, during the restitution step, by the rendering processing, on the central loudspeaker of the rendering device.
- a sound object centered with respect to the reference spatial position is thus attached to the center of the rendering device so as to optimize its intelligibility. It is preferably reproduced in a direct way (that is to say without spatial filtering) on the central loudspeaker of the playback device, so as to benefit from the natural directivity properties of the center speaker.
- the rendering process applied during the rendering step broadcasts this sound object on the loudspeakers of the rendering device using an intensity panning effect.
- the sound objects localized and positioned inside the acoustic window are also attached to the playback device, and reproduced directly (that is to say without spatial filtering) within the reproduction window through the intensity panning effect applied to the loudspeakers.
- This intensity panning effect applied to all the loudspeakers of the rendering device makes it possible to better distinguish the sound objects localized and positioned inside the acoustic window from the sound objects located at the center of the window.
- the invention is however not limited to the application of the aforementioned restitution treatments; it is also possible to resort to more complex rendering processes, in particular implementing a spatial filtering of the sound objects on the speakers of the rendering device.
- the creation of at least one virtual source outside the reproduction space window of the rendering device may comprise the formation (beamforming) of at least one beam directed towards the outside of the reproduction space window.
- the rendering processing applied to this sound object during the restitution step may comprise the formation of a beam directed towards the reference spatial position.
- the creation of virtual sources allows better control and better accuracy of the sound reproduction of an audio signal than a "direct" reproduction (i.e. without spatial filtering) on the loudspeakers of the playback device, which is inherently limited by the capabilities of those loudspeakers. It offers the possibility of better controlling the directivity of the reconstructed sound sources.
- beamforming is particularly well suited to the reproduction of signals on dense loudspeaker arrays (e.g. a playback device equipped with 6 or more loudspeakers), for which the virtual sources can be created more precisely owing to the existence of a larger number of degrees of freedom (related to the presence of a larger number of loudspeakers).
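A textbook delay-and-sum sketch of such beam formation (the speaker positions, steering angle and names are hypothetical; the patent does not fix a particular filter design):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum_gains(speaker_x, steer_angle_deg, freq_hz):
    """Complex weights steering a linear speaker array toward an angle.

    speaker_x: speaker abscissae (metres) along the bar. Each weight
    cancels the phase advance k * x * sin(theta) toward the steering
    direction, so the speaker contributions add coherently there.
    """
    k = 2.0 * math.pi * freq_hz / SPEED_OF_SOUND   # wavenumber
    sin_t = math.sin(math.radians(steer_angle_deg))
    n = len(speaker_x)
    return [complex(math.cos(-k * x * sin_t), math.sin(-k * x * sin_t)) / n
            for x in speaker_x]

# Toward broadside (0 degrees) all weights are equal and purely real.
weights = delay_and_sum_gains([-0.2, 0.0, 0.2], 0.0, 1000.0)
```

In practice the weights are computed per frequency band, which is why a denser array (more degrees of freedom) allows narrower, better-controlled beams.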
- the various steps of the rendering method are determined by computer program instructions.
- the invention also relates to a program on an information medium, this program being capable of being implemented in a rendering system or more generally in a computer, this program comprising instructions adapted to the implementation steps of a restitution process as described above.
- This program can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as in a partially compiled form, or in any other desirable form.
- the invention also relates to a computer-readable or microprocessor-readable information medium, and comprising instructions of a program as mentioned above.
- the information carrier may be any entity or device capable of storing the program.
- the medium may comprise a means of storage, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a floppy disk or a hard disk.
- the information medium may be a transmissible medium such as an electrical or optical signal, which may be conveyed via an electrical or optical cable, by radio or by other means.
- the program according to the invention can be downloaded in particular on an Internet type network.
- the information carrier may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the method in question.
- the invention also relates to an acoustic enclosure comprising a restitution system according to the invention.
- the method, the restitution system and the acoustic enclosure according to the invention present in combination all or part of the aforementioned characteristics.
- FIG. 1 represents a reproduction system according to the invention, in a particular embodiment
- FIGS. 2, 3A and 3B illustrate examples of spatial windows of sound reproduction for various restitution devices and reference positions
- FIG. 4 diagrammatically represents the hardware architecture of the rendering system of FIG. 1;
- FIG. 5 represents the main steps of a rendering method according to the invention, as they are implemented, in a particular embodiment, by the rendering system of FIG. 1.
- FIG. 1 represents, in its environment, a system 1 for rendering a multi-channel audio signal S on a reproduction device 2, in accordance with the invention, in a particular embodiment.
- the playback device 2 is equipped with a plurality of loudspeakers 2-1, 2-2, ..., 2-N (N > 1). In the example shown in Figure 1, this is a compact reproduction device.
- the rendering device 2 is here a compact acoustic enclosure, in other words a monobloc structure or single closed box, incorporating all the loudspeakers 2-1, 2-2, ..., 2-N.
- the rendering device 2 is for example a horizontal sound bar, of length not exceeding one or two metres, inside (or on) which the loudspeakers 2-1, 2-2, ..., 2-N are arranged in fixed and close positions (within 50 cm of each other).
- the invention also applies to other types of rendering devices.
- the invention also applies to a modular compact reproduction device consisting of several separate elements each integrating one or more speakers.
- a compact restitution device designates in fact a device of small dimensions, especially with respect to the dimensions of the room or hall in which reproduction of the audio signal using this device is envisaged, and on or in which the loudspeakers are mounted relatively close to each other.
- the largest dimension of a compact rendering device does not generally exceed 2 meters, while the loudspeakers are mounted on the rendering device with a spacing of less than 50 cm.
- the physical location of the loudspeakers 2-1, 2-2, ..., 2-N defines, in a known manner, a spatial window W of sound reproduction with respect to a so-called reference position Pref, placed in front of the reproduction device 2 (in particular with regard to the orientation of all or part of the loudspeakers and the diffusion of sounds), and modelling the position in space of a listener taken as a reference to optimize the reproduction of the audio signal S.
- The actual choice of the reference position Pref depends on several factors known to those skilled in the art, and will not be described here. For a compact rendering device, this reference position Pref is generally chosen to be a point.
- FIG. 2 illustrates the spatial window W of sound reproduction defined by the loudspeakers 2-1, 2-2, 2-N of the reproduction device 2 and the reference position Pref.
- the physical location of the loudspeakers 2-1, 2-2, ..., 2-N on the rendering device 2 (and more precisely of the two loudspeakers 2-1 and 2-N located at the ends of the device 2), together with the reference position Pref, defines an angular aperture θ of sound reproduction.
- the subspace delimited by this angular aperture θ corresponds to the spatial window W of sound reproduction associated with the reproduction device 2 and the reference position Pref.
- the window W depends on the reference position Pref.
- the position Pref is aligned with respect to the center of the reproduction device 2, so that the spatial window W is defined by the angular excursion ±θ/2 with respect to the axis connecting the center of the playback device 2 to the reference position Pref;
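The aperture θ can be sketched as the angle subtended at Pref by the two end loudspeakers (the coordinates and function name below are hypothetical, for a device facing the listener):

```python
import math

def reproduction_window_deg(left_speaker, right_speaker, p_ref):
    """Angular aperture (degrees) of the sound reproduction window W.

    left_speaker / right_speaker: (x, y) positions of the two end
    loudspeakers of the device; p_ref: (x, y) reference listening
    position. The window is the angle the device subtends at Pref.
    """
    def azimuth(p):
        # Angle from the forward (y) axis as seen from the reference point.
        return math.atan2(p[0] - p_ref[0], p[1] - p_ref[1])
    return math.degrees(abs(azimuth(right_speaker) - azimuth(left_speaker)))

# A 1 m sound bar seen head-on from 1 m away subtends about 53 degrees.
theta = reproduction_window_deg((-0.5, 1.0), (0.5, 1.0), (0.0, 0.0))
```

This makes concrete why the window shrinks for a compact device: moving Pref further away, or shortening the bar, reduces θ.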
- FIGS. 3A and 3B respectively illustrate, as examples:
- the spatial window W′ of sound reproduction of a horizontal sound-bar-type rendering device 2′ provided with three loudspeakers 2-1′, 2-2′, 2-3′ with respect to an extended reference spatial position Pref′;
- the spatial window W″ of sound reproduction of a reproduction device 2″ provided with 8 loudspeakers 2-1″, ..., 2-8″ with respect to a point-like reference spatial position Pref″, the loudspeakers 2-1″ to 2-4″ being located at the front while the loudspeakers 2-5″, 2-6″ and 2-7″, 2-8″ are located on each side of the playback device 2″.
- the invention proposes a processing of a multichannel audio signal in two stages: firstly, the multichannel audio signal to be restored is analyzed spatially; then, the spatial characteristics of the signal resulting from this spatial analysis are used to optimize the restitution of the signal on the rendering device 2.
- the system 1 of restitution according to the invention comprises:
- Means 3 for spatial analysis of the multichannel audio signal S, including in particular means for extracting at least one sound object from the signal, and means for estimating, for each extracted sound object, a diffuse or localized character of this sound object and a position of this sound object with respect to the spatial window W of sound reproduction of the playback device 2 (the extraction of the sound objects and the estimation of their characteristics are generally carried out jointly); and
- the rendering means 4 are able to apply the T-A1, T-A2, T-B and T-C rendering processes to the sound objects extracted from the signal S, as a function of the characteristics determined by the spatial analysis means 3.
- No limitation is attached to the number of different treatments that can be applied by the rendering system 1.
- the T-A1, T-A2, T-B and T-C treatments may be of the same kind (i.e. based on the same techniques, such as, for example, a WFS or beamforming technique). However, they are adapted to the spatial characteristics of the sound objects to which they are applied and differ from each other in that sense. For example, they do not broadcast the signals on the same loudspeakers, do not create virtual sources in the same subspaces (or with similar characteristics in terms of position/direction and/or amplitude), the created beams can be dimensioned differently (e.g. with different widths), etc.
- Processing means 4A capable of applying one or more rendering processes to the sound objects of the audio signal S determined to be localized and inside the sound reproduction window W.
- the processing means 4A are able to apply a T-A1 processing to the sound objects generated by sources placed at the center of the window W, and a T-A2 processing to the sound objects placed inside the window W at a position distinct from the center;
- Processing means 4B capable of applying a T-B processing to the sound objects of the audio signal S determined to be diffuse;
- Processing means 4C capable of applying a T-C processing to the sound objects of the audio signal S determined to be localized and outside the sound reproduction window W.
- The T-A1, T-A2, T-B and T-C rendering treatments will be described in more detail later and illustrated by examples.
- the spatial analysis means 3 and the audio signal reproduction means 4 are software means.
- the rendering system 1 has the hardware architecture of a computer, as illustrated in FIG. 4.
- It comprises in particular a processor (or microprocessor) 5, a random access memory 6, a read-only memory 7, a non-volatile flash memory 8 as well as communication means 9 able to transmit and receive signals.
- processor or microprocessor
- the communication means 9 comprise, on the one hand, an interface (wired or wireless) with the loudspeakers 2-1, ..., 2-N of the reproduction device 2, and, on the other hand, means for receiving a multichannel audio signal, such as the signal S for example. These means are known to those skilled in the art and will not be described further here.
- the read-only memory 7 of the reproduction system 1 constitutes a recording medium in accordance with the invention, readable by the (micro) processor 5 and on which is recorded a computer program according to the invention, comprising instructions for performing the steps of a rendering process described later with reference to Figure 5.
- the reproduction system 1 may be in the form of a computer or alternatively of an electronic chip or of an integrated circuit, in which the computer program comprising the instructions for the execution of the method of restitution according to the invention is incorporated.
- the restitution system 1 may be an entity separate from the rendering device 2, or conversely, be integrated within the rendering device 2.
- the multi-channel audio signal S is supplied to the rendering system 1 via its communication means 9.
- the format and structure of such an audio signal are known to those skilled in the art and will not be described here.
- Upon reception of the signal S (step E10), the rendering system 1 initiates a first phase of spatial analysis of the signal S, carried out using its spatial analysis means 3.
- the signal denoted Si, resulting from the decomposition of the signal S and associated with the frequency sub-band BWi, is itself a multichannel signal.
- No limitation is attached to the width of each sub-band: one may, for example, consider a decomposition into octaves, into thirds of octave, or into auditory bands (i.e. adapted to human hearing), depending in particular on a complexity/accuracy trade-off.
- the frequency sub-band decomposition of the signal S is carried out via a Fourier transform applied to the signal S, and presents no difficulty per se for the skilled person.
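By way of illustration, such a sub-band decomposition via the Fourier transform may be sketched as follows — a minimal FFT-masking filterbank with octave-spaced bands (the function name and parameters are illustrative, not taken from the patent):

```python
import numpy as np

def octave_band_signals(s, fs, f_low=62.5, n_bands=8):
    """Split a (possibly multichannel) signal into octave sub-bands BWi
    by masking its FFT spectrum. `s` has shape (n_channels, n_samples).
    Returns a list of sub-band signals Si, each shaped like `s`."""
    n = s.shape[-1]
    spectrum = np.fft.rfft(s, axis=-1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bands = []
    for i in range(n_bands):
        lo, hi = f_low * 2 ** i, f_low * 2 ** (i + 1)
        mask = (freqs >= lo) & (freqs < hi)  # keep only bins of band BWi
        bands.append(np.fft.irfft(spectrum * mask, n=n, axis=-1))
    return bands
```

A pure tone then appears only in the sub-band containing its frequency, which is what allows each Si to be analyzed independently.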
- the amplitudes of the extracted sound objects are contained directly in the signals Si, and correspond respectively to the levels of the frequency subbands.
- the extraction of the sound objects and the estimation of the aforementioned characteristics of each object are performed jointly by the means 3 of spatial analysis.
- the spatial analysis means 3 of the rendering system 1 implement a temporal analysis of the multichannel signal Si.
- the rendering system 1 evaluates, for each pair of distinct channels of the multichannel signal Si, the normalized correlation between these channels (i.e. between the signals representative of the channels), defined by the following equation:
- R_xy(p) = [ Σ_{m=0}^{M−1} x[m]·y*[m+p] ] / √[ Σ_{m=0}^{M−1} |x[m]|² · Σ_{m=0}^{M−1} |y[m]|² ] for p ≥ 0, and R_xy(p) = R*_yx(−p) for p < 0, where x and y respectively denote two distinct channels of the multichannel signal Si, [.]* denotes the complex conjugation operator, and M is a constant defining the number of signal samples over which the correlation is evaluated.
- the rendering system 1 can simply evaluate a normalized correlation between two distinct channels of the multichannel signal Si for only a selection of predetermined pairs of channels of the signal Si.
- this selection may include only four channel pairs, namely, the pair consisting of L and R channels, the pair consisting of Ls and Rs channels, the pair consisting of L and Ls channels and the pair consisting of R and Rs channels.
- Each correlation R xy thus evaluated is then compared with a predefined threshold denoted THR.
- if the correlation R_xy exceeds the threshold THR, the reproduction system 1 estimates that the signal Si (and thus a fortiori the signal S) contains a localized sound object.
- otherwise, the reproduction system 1 estimates that the signal Si contains a diffuse sound object.
- the value of the threshold THR is determined empirically: it is preferably chosen between 0.5 and 0.8. It is thus possible to extract as many sound objects from the signal Si as there are pairs of channels examined or, equivalently, as there are correlations evaluated between the channels of the signal Si.
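The correlation-and-threshold test described above can be sketched as follows — an illustrative zero-lag version (the function names and the particular THR value of 0.65 are assumptions within the stated 0.5–0.8 range):

```python
import numpy as np

def normalized_correlation(x, y):
    """Zero-lag normalized cross-correlation between two channels x and y,
    a simplified instance of the R_xy measure."""
    num = np.abs(np.sum(x * np.conj(y)))
    den = np.sqrt(np.sum(np.abs(x) ** 2) * np.sum(np.abs(y) ** 2))
    return float(num / den) if den > 0 else 0.0

def classify_object(x, y, thr=0.65):
    """Correlated channels carry a localized object; weakly correlated
    channels carry a diffuse one. THR is chosen empirically in [0.5, 0.8]."""
    return "localized" if normalized_correlation(x, y) > thr else "diffuse"
```

Two copies of the same signal correlate perfectly (localized), while independent noise on the two channels yields a near-zero correlation (diffuse).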
- When a sound object is estimated to be localized by the playback system 1, the system estimates the position of this sound object with respect to the sound reproduction window W (by definition, a diffuse object has no precise or identifiable position in space, so it is not necessary to estimate its position with respect to the spatial restitution window W).
- the reproduction system 1 here estimates the reproduction window W from the reference position Pref and the physical locations of the speakers of the playback device 2.
- the spatial window W can be determined geometrically by the reproduction system 1, in terms of angular excursion with respect to the axis Δ passing through the center of the rendering device 2 and the reference position Pref, from knowledge of the position Pref and of the physical locations of the loudspeakers of the device 2 placed at its ends (i.e. 2-1 and 2-N).
- the spatial window W is here associated by the reproduction system 1 with an angular excursion of ±θ/2 with respect to the axis Δ.
- the position Pref and the physical locations of the loudspeakers of the device 2 can be previously configured in the non-volatile flash memory 8 of the reproduction system 1, for example during the manufacture of the reproduction system 1 if it is integrated in the device 2, or during a preliminary step of setting up the reproduction system 1.
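The geometric determination of the angular excursion θ/2 described above reduces to simple trigonometry when Pref is a point on the axis Δ facing the center of the device — a minimal sketch (the function name is illustrative):

```python
import math

def window_half_angle(device_width_m, distance_m):
    """Angular excursion theta/2 (in degrees) of the spatial window W,
    for a point reference position Pref at distance_m from the center of
    a device whose end loudspeakers are device_width_m apart."""
    return math.degrees(math.atan((device_width_m / 2.0) / distance_m))
```

For a 1 m wide device viewed from 2 to 4 m, this gives roughly 7° to 14°, consistent with the range quoted later in the first example.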
- the window W may be estimated by the reproduction system 1 using a technique similar or identical to that described in the document by E. Corteel entitled "Equalization in an extended area using multichannel inversion and wave field synthesis", Journal of the Audio Engineering Society, No. 54 (12), December 2006, when the position Pref is an extended area.
- the spatial window W may be predetermined, and stored for example in the non-volatile flash memory 8 of the reproduction system 1.
- the reproduction system 1 also evaluates, for each pair of distinct channels of the signal Si, the difference in levels (or energies) between these channels, for example in decibels, according to the following equation: ΔL_xy = 10·log10( Σ_{m=0}^{M−1} |x[m]|² / Σ_{m=0}^{M−1} |y[m]|² ), where x and y respectively denote two distinct channels of the multichannel signal Si.
- This direction is evaluated here in terms of angular excursion with respect to the axis Δ.
- the reproduction system 1 associates with a predefined level difference between two channels, for example −30 dB (respectively +30 dB), a direction of the sound object of 90° (respectively −90°) with respect to the axis Δ.
- the directions between −90° and 90° are then estimated from an increasing interpolation function (e.g. an increasing linear function) defined between the two values −90° and 90°.
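The level-difference-to-direction mapping above can be sketched with a linear interpolation (the clipping beyond ±30 dB is an assumption; function names are illustrative):

```python
import math

def level_difference_db(ex, ey):
    """Level (energy) difference in dB between two channels:
    Delta_L = 10*log10(ex/ey), with ex and ey the channel energies."""
    return 10.0 * math.log10(ex / ey)

def direction_from_level_difference(dl_db, dl_max=30.0):
    """Linear interpolation mapping the level difference to a direction:
    -30 dB -> +90 deg, +30 dB -> -90 deg, 0 dB -> 0 deg.
    Values beyond +/-30 dB are clipped to the endpoints (assumption)."""
    dl = max(-dl_max, min(dl_max, dl_db))
    return -90.0 * dl / dl_max
```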
- the reproduction system 1 compares the direction of the sound object thus evaluated with the angular excursion θ/2 defining the spatial window W, in order to determine whether the object is inside or outside the spatial window W.
- Thus, a sound object whose estimated direction has an absolute value greater than θ/2 with respect to the axis Δ is considered by the system 1 to be outside the spatial window W, while a sound object whose estimated direction has an absolute value less than or equal to θ/2 with respect to the axis Δ is considered by the system 1 to be positioned inside the spatial window W.
- the rendering system 1 also uses the estimated direction of the sound object to determine whether this object is in the center of the spatial window W (to within a given precision), in order to better distinguish, during the restitution, the objects located in the center of the window W from the other objects located in the window W (step E40).
- an object is considered by the rendering system 1 to be positioned in the center of the spatial window W if the absolute value of its direction lies within the interval [0; δ] around the axis Δ, where δ denotes a predefined angle, for example 2.5°.
- This step is however optional.
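Taken together, the two comparisons above amount to a three-way classification of each localized object against the window W — a minimal sketch with illustrative names:

```python
def classify_position(direction_deg, half_window_deg, center_delta_deg=2.5):
    """Classify a localized sound object against the spatial window W:
    'center'  if |direction| <= delta (here 2.5 deg, the example value),
    'inside'  if |direction| <= theta/2,
    'outside' otherwise."""
    a = abs(direction_deg)
    if a <= center_delta_deg:
        return "center"
    if a <= half_window_deg:
        return "inside"
    return "outside"
```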
- the spatial analysis phase I comprises the determination of a Gerzon vector representative of each multichannel audio signal Si (one vector is estimated per frequency sub-band BWi).
- the Gerzon vector of a multichannel audio signal is derived from the respective contributions (direction and intensity or energy) of the different channels of the multichannel signal to the sound scene perceived by the listener located at the reference position Pref.
- the determination of such a vector for a multichannel audio signal is described in US 2007/0269063 and will not be described in more detail here. It is assumed here that in the second variant embodiment, the reproduction system 1 proceeds in the same manner as described in this document.
- the Gerzon vector of a multichannel audio signal reflects the spatial location of the multichannel audio signal as perceived by the listener from the reference position.
- the determination of this Gerzon vector makes it possible to dispense with the calculation of correlations between the different channels of the multichannel signal in order to determine the diffuse or localized nature of the sound objects extracted from the signal, and the position of these objects with respect to the spatial window W .
- The Gerzon vector associated with a multichannel signal Si can be written in the form of a directional vector, giving the direction of the sound object associated with the frequency sub-band BWi, and a non-directive (i.e. diffuse) vector.
- the sound reproduction system 1 is thus able to extract the localized and diffuse sound objects composing the signal S, to determine the position of the localized objects with respect to the spatial window W from the direction of the Gerzon vectors (and in particular of their directional components), and to determine their amplitude from the norm of the Gerzon vectors and from the contribution of the directional/non-directive vectors.
- in one embodiment, the contributions of the directional and non-directive vectors are compared with two predefined thresholds, a so-called lower threshold THR_inf and a so-called upper threshold THR_sup, in order to decide whether the two sound objects (i.e. the localized object corresponding to the directional vector and the diffuse object corresponding to the non-directive vector) are to be restored.
- the amplitude associated with each sound object thus extracted is then derived from the amplitude of the corresponding directional or non-directive vector.
- in a variant, the diffuse and localized objects given by the non-directive vector and the directional vector derived from the Gerzon vector are both extracted (without prior comparison with a threshold to estimate whether the contribution of one and/or the other is significant enough to be restored) in order to be restored on the loudspeakers of the restitution device 2.
- the direction of the directional vectors corresponding to the extracted sound objects is then compared with the angular excursion θ/2, in order to determine their position with respect to the window W.
- the rendering system 1 can identify the objects located in the center of the spatial window W, so as to better distinguish them during the restitution from the other objects located inside the spatial window W.
- Gerzon vectors do not provide the ability to extract more than one localized sound object per frequency subband.
- the spatial analysis means 3 of the reproduction system 1 implement, in order to extract the sound objects from the signals Si and estimate their characteristics during steps E30 and E40, a technique based on a spatial decomposition of each multichannel signal Si into spherical harmonics.
- the sound field p(r, θ, φ, ω) derived from each multichannel signal Si can be decomposed according to the formalism of spherical harmonics, as follows: p(r, θ, φ, ω) = Σ_{n=0}^{∞} i^n j_n(kr) Σ_{m=−n}^{n} B_mn(ω) Y_mn(θ, φ), where:
- B_mn(ω) denotes the coefficient (at the frequency ω) associated with the spherical harmonic Y_mn(θ, φ) in the decomposition, and:
- j_n(kr) is the spherical Bessel function of the first kind of order n, k denoting the wave number.
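The low-order terms of this expansion have simple closed forms that can serve to sanity-check an implementation — a sketch using only the standard identities j_0(x) = sin(x)/x, j_1(x) = sin(x)/x² − cos(x)/x and Y_00 = 1/√(4π) (function names chosen here for illustration):

```python
import math

def spherical_j0(x):
    """Spherical Bessel function of the first kind, order 0: sin(x)/x."""
    return math.sin(x) / x if x != 0 else 1.0

def spherical_j1(x):
    """Spherical Bessel function of the first kind, order 1:
    sin(x)/x**2 - cos(x)/x (behaves like x/3 for small x)."""
    return math.sin(x) / x ** 2 - math.cos(x) / x

# Zeroth-order spherical harmonic: the omnidirectional component.
Y00 = 1.0 / math.sqrt(4.0 * math.pi)
```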
- the spatial analysis means 3 apply, for example, the technique for extracting sound objects from a multichannel signal from its spatial decomposition into spherical harmonics described in document WO 2012/025580.
- This technique is based on a representation of the matrix B(ω, t), constructed from the coefficients B_mn(ω) of the decomposition into spherical harmonics to which a Short Time Fourier Transform (STFT) has been applied at time t, in the form of a sum of two terms: a first term modeling the localized sound objects included in the signal Si, and a second term modeling the diffuse sound objects.
- the amplitude associated with the localized sound objects is determined from the sum of the spherical harmonic coefficients associated with these objects as a function of the estimated direction.
- the amplitude of diffuse objects is estimated from residual spherical harmonic coefficients obtained after subtracting the contribution of localized sound objects.
- the reproduction system 1 proceeds in a manner similar to that described in the first variant for the temporal analysis of the signals Si, by comparing their direction with the angular excursion θ/2.
- the rendering system 1 can identify the objects located in the center of the spatial window W, so as to better distinguish them during the restitution from the other objects located inside the spatial window W.
- strictly speaking, the system 1 of restitution does not concern itself with the position of the sound objects extracted from the signals Si relative to the rendering device 2, i.e. it does not distinguish between the sound objects according to whether they are behind or in front of the playback device 2 with respect to the reference position Pref.
- the spatial analysis performed by the rendering system 1 may be limited to sound objects located behind the rendering device 2, regardless of the spatial analysis technique selected among the aforementioned techniques in particular.
- a frequency sub-band decomposition of the multichannel signal S is carried out, then the reproduction system 1 examines each frequency sub-band to extract the sound objects from the multichannel signal S. This makes it possible to extract the sound objects constituting the signal S more precisely (more particular sound objects can be identified).
- this hypothesis is not limiting, and one could envisage, in the context of the invention, working directly on the multichannel signal S without performing a decomposition into frequency sub-bands.
- At this stage, the reproduction system 1 has extracted and identified several categories of sound objects in the multichannel signal S, namely:
- a first category of sound objects, denoted OBJLocIntW, grouping the localized sound objects positioned inside the spatial window W;
- a second category of sound objects, denoted OBJLocExtW, grouping the localized sound objects positioned outside the spatial window W;
- a third category of sound objects, denoted OBJDiff, grouping the diffuse sound objects.
- the system 1 of restitution also has, for the first and second categories of sound objects, the position of these objects with respect to the spatial window W.
- the reproduction system 1 has also identified, within the category of sound objects OBJLocIntW, the sound objects coming from sources positioned in the center of the spatial window W.
- All of this information is, for example, stored in the RAM 6 or in the non-volatile flash memory 8 of the rendering system 1 in order to be used in real time.
- the system 1 will restore the sound objects extracted from the signal S according to their category and the characteristics of these objects determined during steps E30 and E40.
- the means 4 for restitution of the rendering system 1 apply four distinct processes T-A1, T-A2, T-B and T-C, selected according to the characteristics of the sound objects extracted by the spatial analysis means 3 of the rendering system 1 during phase I (step E50).
- the sound objects identified as belonging to the first category OBJLocIntW are restored by the means 4 of restitution (and more precisely by the means 4-A), by applying the treatments T-A1 or T-A2 according to whether they are located in the center of the spatial window W or not (step E51).
- the treatments T-A1 and T-A2 restore the sound objects of the category OBJLocIntW inside the spatial window W.
- Different types of T-A1 and T-A2 treatments can be envisaged for such a reproduction. These treatments may or may not implement filtering of the sound objects before they are broadcast on all or part of the speakers of the playback device 2.
- the playback device 2 comprises a central loudspeaker and side loudspeakers:
- the processing T-A1 may broadcast the sound objects extracted from the signal S and identified at the center of the spatial window W directly on the central loudspeaker of the device 2;
- the reproduction processing T-A2 may broadcast the sound objects extracted from the signal S and positioned at a position distinct from the center of the spatial window W on the set of loudspeakers of the rendering device 2, using an intensity-panning effect chosen so as to preserve the position of the sound objects as perceived by the listener at the reference position.
- the T-A1 and/or T-A2 rendering processes applied to the sound objects located inside the spatial window W may be more complex spatial filtering processes, including for example the creation of virtual sources 10 from the speakers of the rendering device 2 inside the spatial window W, the virtual sources being positioned in accordance with the characteristics of the sound objects estimated at steps E30 and/or E40 (that is, in the directions and, where appropriate, with the amplitudes estimated in steps E30 and E40).
- a rendering process including the creation of virtual sources at the positions identified during steps E30 and/or E40 is, for example, an acoustic field synthesis processing (also known as WFS, for Wave Field Synthesis) known to those skilled in the art, or a beamforming technique, the beam being directed for example towards the reference position.
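The intensity-panning effect of the T-A2 processing can be illustrated over a loudspeaker pair spanning the window W with a constant-power pan law — a common choice, assumed here since the text does not specify the exact law (names are illustrative):

```python
import math

def intensity_pan_gains(direction_deg, half_window_deg):
    """Constant-power panning of a source at direction_deg over a stereo
    pair spanning [-theta/2, +theta/2]. Returns (g_left, g_right) with
    g_left**2 + g_right**2 == 1, preserving perceived level at Pref.
    Positive directions pan towards the left loudspeaker (convention
    chosen here)."""
    x = (direction_deg / half_window_deg + 1.0) / 2.0  # 0..1 across the pair
    x = min(1.0, max(0.0, x))
    phi = x * math.pi / 2.0
    return math.sin(phi), math.cos(phi)
```

A centered object gets equal gains of 1/√2 on both loudspeakers, and the total radiated power stays constant as the object moves across the window.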
- the sound objects belonging respectively to the categories OBJLocExtW and OBJDiff are restored outside the spatial window W by the means 4 of restitution (respectively by the means 4-B and 4-C), by applying the treatments T-B and T-C (steps E52 and E53).
- the rendering processes T-B and T-C comprise the creation of at least one virtual source 11, 12 outside the spatial window W for restitution by the rendering device 2.
- these virtual sources 11 are reconstituted from the positions of the sound objects identified in step E30, for example via a transaural technique (particularly well suited to a configuration of the playback device 2 with a center speaker and two side speakers), a WFS or derived technique, as described for example in the unpublished European patent application EP 1 116 572.0, or the formation of a beam directed towards the outside of the spatial restitution window, whose width can be configured so as to optimize the sound reproduction.
- the T-C treatment makes it possible to create diffuse virtual sources 12.
- beamforming techniques will preferably be used for the T-C treatment to create these virtual sources, since the orientation and the width of the beams are easily controlled so as to create reflections on the walls of the room in which the restitution device 2 is positioned, and thus create a more enveloping feeling for the listener placed at the reference position.
- the playback device 2 is a horizontal soundbar-type loudspeaker equipped with three loudspeakers 2-1, 2-2 and 2-3 (a central loudspeaker and two side loudspeakers).
- the position Pref is chosen as a point, centered with respect to the restitution device 2.
- the multichannel signal S supplied to the playback system 1 during step E10 is a stereo audio signal, that is, composed of two separate channels.
- the sound reproduction window W (and the angular excursion associated with this window) is defined by the reference position Pref and the lateral loudspeakers of the playback device 2.
- for a reference position Pref placed at a distance of 2 to 4 m from the playback device 2 and a playback device of width 1 m, the side loudspeakers of this device being placed at its ends, the angular excursion θ/2 corresponding to the spatial window W is between 7° and 15°; and
- the amplitude of each sound object extracted on each frequency subband is given by the level of the signal Si on this subband.
- the spatial analysis of the signal S also comprises, in the first example considered here, the identification E40 of the sound objects located at the center of the spatial window W by comparing the angular excursion associated with each sound object extracted from the signals Si with the interval [0; 2.5°], a sound object being considered as being in the center of the window if its angular excursion is between 0 and 2.5° (in absolute value).
- step E51: restitution inside the spatial window W of the localized sound objects estimated to be positioned inside the spatial window W (category OBJLocIntW), by means of the following restitution treatments T-A1 and T-A2:
- the T-A1 treatment, applied to the sound objects estimated at the center of the spatial window W: diffusion of the sound objects directly (i.e. without spatial filtering) on the central speaker of the rendering device 2; in other words, the sound objects thus restored are attached to the center of the restitution device 2;
- step E52: restitution outside the spatial window W of the localized sound objects estimated to be positioned outside the spatial window W (category OBJLocExtW), using a transaural restitution technique T-B. More precisely, using the two lateral loudspeakers of the rendering device 2, transaural virtual sources placed outside the window W are created, for example at 30° and 60° (respectively at −30° and −60°) with respect to the axis Δ. The sound objects of the OBJLocExtW category are then broadcast through these virtual sources, in the directions determined in step E30;
- step E53: restitution outside the spatial window W of the diffuse sound objects (category OBJDiff), using a transaural rendering technique T-C. More precisely, using the two lateral loudspeakers of the rendering device 2, transaural virtual sources placed outside the window W are created at an angle greater than 60° (respectively less than −60°) relative to the axis Δ. The sound objects of the category OBJDiff are then diffused through these virtual sources.
- Such transaural techniques consist in applying a filter to each of the lateral loudspeakers of the rendering device 2, each filter comprising a spatialization filter and a filter cancelling the cross-propagation between the two loudspeakers.
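The cross-propagation cancellation stage can be illustrated at a single frequency: the 2×2 acoustic transfer matrix between the two lateral loudspeakers and the listener's two ears is inverted, so that each ear receives only its intended signal. A minimal sketch with hypothetical complex path values (not taken from the patent):

```python
import numpy as np

def crosstalk_canceller(h_ipsi, h_contra):
    """2x2 transaural crosstalk-cancellation matrix at one frequency.
    h_ipsi is the (complex) direct path from each lateral loudspeaker to
    the same-side ear, h_contra the cross path to the opposite ear; by
    symmetry the acoustic matrix H is symmetric, and the canceller is
    its inverse."""
    H = np.array([[h_ipsi, h_contra],
                  [h_contra, h_ipsi]], dtype=complex)
    return np.linalg.inv(H)
```

Applying the canceller before the acoustic paths yields the identity: the left binaural signal reaches only the left ear, and symmetrically for the right.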
- the rendering device 2 is a compact acoustic loudspeaker of the horizontal soundbar type equipped with 15 loudspeakers 2-1, 2-2, ..., 2-15, of a length of approximately 1 m.
- the position Pref is chosen as a point, centered with respect to the restitution device 2.
- the multichannel signal S supplied to the rendering system 1 during step E10 is a 5.1 audio signal.
- such a signal already intrinsically contains spatialization information.
- the standard ITU-R BS.775-1 defining the format of 5.1 signals implies a center channel located at 0°, left L and right R channels located at ±30° with respect to the center, and rear left Ls and rear right Rs channels located at ±110° from the center.
- the sound objects located in the center of the spatial window W are present in the central channel by definition of the format 5.1. They are therefore "extracted” easily from this already isolated central channel.
- the reproduction system 1 then considers the signal Si 'composed of the four channels L, R, Ls and Rs of the signal Si, and the four "channel" vectors connecting the reference position Pref to the four channels L, R, Ls and Rs. It assigns each channel vector a weight corresponding to the energy of the associated channel.
- the Gerzon vector associated with the signal Si' (or, equivalently, with the signal Si) is defined as the centroid of the points L, R, Ls and Rs thus weighted.
- the Gerzon vector thus defined can be written in the form of a directional vector (equal to the sum of the two channel vectors adjacent to the Gerzon vector: for example, if the direction of the Gerzon vector is 15° relative to the axis Δ, the directional vector is the sum of the channel vectors associated respectively with the channels L and R), and a non-directive vector.
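The energy-weighted centroid construction described above can be sketched as follows — an illustrative function (names chosen here) using the ITU-R BS.775-1 channel angles:

```python
import math

def gerzon_vector(channel_angles_deg, channel_energies):
    """Energy-weighted centroid of the unit 'channel' vectors from Pref.
    Returns (direction in degrees w.r.t. the axis, magnitude in [0, 1]);
    a magnitude near 1 suggests a localized object, near 0 a diffuse one."""
    total = sum(channel_energies)
    gx = sum(e * math.cos(math.radians(a))
             for a, e in zip(channel_angles_deg, channel_energies)) / total
    gy = sum(e * math.sin(math.radians(a))
             for a, e in zip(channel_angles_deg, channel_energies)) / total
    return math.degrees(math.atan2(gy, gx)), math.hypot(gx, gy)
```

For example, equal energy in the L (+30°) and R (−30°) channels yields a direction of 0° with magnitude cos 30° ≈ 0.87: a phantom center image.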
- the directional vector characterizes a localized sound object of the signal Si and its position (given by the direction of the vector) with respect to the window W.
- the reproduction system 1 compares this position with the angular excursion θ/2, in a similar way to Example 1, to estimate whether the sound object thus identified belongs to the category OBJLocIntW or to the category OBJLocExtW.
- the non-directional vector characterizes a diffuse sound object of the signal Si, classified by the reproduction system 1 in the OBJDiff category.
- the reproduction system 1 associates with each extracted sound object an amplitude evaluated from the amplitude of the corresponding vector (directional or non-directive, composing the Gerzon vector).
- step E51: restitution inside the spatial window W of the localized sound objects estimated to be positioned inside the spatial window W (category OBJLocIntW), by means of the following restitution treatments T-A1 and T-A2:
- the T-A2 processing, applied to the non-centered sound objects located in the spatial window W: diffusion of the sound objects using a WFS sound field synthesis technique including the creation of virtual sources via the speakers of the restitution device 2, these virtual sources being positioned (by acting on the delays and the gains applied to each speaker) in the directions estimated by the directional vectors extracted from the Gerzon vectors derived during the spatial analysis, so as to respect the same spatial organization as at the mixing of the multichannel signal.
- the amplitudes of the restored sound objects are consistent with the amplitudes evaluated in step E30;
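Acting on the delays and gains applied to each loudspeaker, as mentioned above, may be sketched with a simplified point-source driving model (not the full WFS operator; the 1/√r gain law and the function name are assumptions for illustration):

```python
import math

def wfs_delays_gains(source_xy, speaker_xys, c=343.0):
    """Per-loudspeaker (delay, gain) pairs placing a virtual point source
    at source_xy behind the rendering device: delay = propagation time
    from the virtual source to the loudspeaker, gain ~ 1/sqrt(distance).
    Delays are normalized so the earliest loudspeaker fires at t = 0."""
    raw = []
    for sx, sy in speaker_xys:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        raw.append((r / c, 1.0 / math.sqrt(r)))
    t0 = min(d for d, _ in raw)
    return [(d - t0, g) for d, g in raw]
```

For a virtual source centered behind a three-speaker array, the central loudspeaker fires first and loudest, and the lateral ones follow symmetrically: the curvature of the emitted wavefront then matches that of the virtual source.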
- step E52: restitution outside the spatial window W of the localized sound objects estimated to be positioned outside the spatial window W (category OBJLocExtW), using a WFS technique including the creation of six virtual sources surrounding the reference position Pref:
- the virtual sources are positioned outside the spatial window W, among which: two virtual sources are positioned between 30° and 60° with respect to the axis Δ, and between −30° and −60°, for example with the aid of two plane waves directed towards the side walls of the room in which the restitution device 2 is placed; and two virtual sources are positioned between
- the virtual sources thus positioned are used to restore the sound objects of the OBJLocExtW category according to the directions and amplitudes estimated in step E30;
- step E53: restitution outside the spatial window W of the diffuse sound objects (category OBJDiff), using a WFS rendering technique T-C, comprising the creation of four virtual sources outside the window W using, for example, four plane waves directed towards the walls of the room in which the rendering device 2 is placed, so as to create two reflections on the lateral walls situated between 60° and 80° (respectively −60° and −80°) with respect to the axis Δ.
- the rendering device 2 is a compact acoustic loudspeaker equipped with 8 loudspeakers 2-1, 2-2, ..., 2-8, of width approximately 80 cm, with four frontal loudspeakers 2-1, ..., 2-4, and two loudspeakers 2-5 and 2-6, respectively 2-7 and 2-8, located on each side of the device 2 (a device similar to the device 2" shown in Figure 3B).
- the position Pref is chosen as a point, centered with respect to the restitution device 2.
- the multichannel signal S supplied to the rendering system 1 during step E10 is an audio signal composed of four distinct channels.
- this step may optionally include the coding of the signal Si in an audio format of the HOA type, known per se;
- step E51: restitution inside the spatial window W of the localized sound objects estimated to be positioned inside the spatial window W (category OBJLocIntW), by means of a restitution processing T-A combining a WFS technique and radiation control, taking into account the radiation of each loudspeaker and the influence of the enclosure containing the different loudspeakers.
- the sound reproduction field of each object is controlled via filtering.
- the processing T-A comprises the creation of virtual sources behind the rendering device 2 via the WFS technique, and the application of a filtering to the loudspeakers 2-1, ..., 2-8 of the device 2, determined so that the energy of the sound objects restored by these virtual sources is directed towards the reference position and is in agreement with the amplitudes determined in step E30;
- step E52: restitution outside the spatial window W of the localized sound objects estimated to be positioned outside the spatial window W (category OBJLocExtW), by means of a T-B rendering processing as described in the not yet published European patent application EP 1116572.0, combining:
- a WFS technique comprising the creation of virtual sources outside the spatial window W via the formation of two thin beams reflecting on the side walls of the room in which the rendering device 2 is installed at a predetermined point position;
- a filtering applied to the loudspeakers 2-1, ..., 2-8 of the device 2, determined so that the energy of the sound objects restored by these virtual sources is concentrated towards the lateral walls of the room.
- the virtual sources thus positioned are used to restore the sound objects of the OBJLocExtW category according to the directions and amplitudes estimated in step E30;
- step E53: restitution outside the spatial window W of the diffuse sound objects (category OBJDiff), using a T-C rendering processing as described in the not yet published European patent application EP 1116572.0, combining:
- a WFS technique comprising the creation of virtual sources outside the spatial window W by forming two large beams reflecting on a predetermined wide area of the side walls of the room in which the restitution device 2 is installed;
- a filtering applied to the loudspeakers 2-1, ..., 2-8 of the device 2, determined so that the energy of the sound objects restored by these virtual sources is concentrated towards the side walls of the room.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1259132A FR2996094B1 (fr) | 2012-09-27 | 2012-09-27 | Procede et systeme de restitution d'un signal audio |
PCT/FR2013/052254 WO2014049267A1 (fr) | 2012-09-27 | 2013-09-25 | Procede et systeme de restitution d'un signal audio |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2901718A1 true EP2901718A1 (fr) | 2015-08-05 |
EP2901718B1 EP2901718B1 (fr) | 2016-12-21 |
Family
ID=47594912
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP13779299.0A Active EP2901718B1 (fr) | 2012-09-27 | 2013-09-25 | Procede et systeme de restitution d'un signal audio |
Country Status (5)
Country | Link |
---|---|
US (1) | US9426597B2 (fr) |
EP (1) | EP2901718B1 (fr) |
CN (1) | CN104919821B (fr) |
FR (1) | FR2996094B1 (fr) |
WO (1) | WO2014049267A1 (fr) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105448312B (zh) * | 2014-06-12 | 2019-02-19 | 华为技术有限公司 | 音频同步播放方法、装置及系统 |
CN105992120B (zh) | 2015-02-09 | 2019-12-31 | 杜比实验室特许公司 | 音频信号的上混音 |
EP3357259B1 (fr) * | 2015-09-30 | 2020-09-23 | Dolby International AB | Procédé et appareil de génération de contenu audio 3d provenant de contenu stéréo à deux canaux |
EP3239981B1 (fr) * | 2016-04-26 | 2018-12-12 | Nokia Technologies Oy | Procédés, appareils et programmes d'ordinateur pour la modification d'une caractéristique associée avec un signal séparé |
US10728691B2 (en) * | 2016-08-29 | 2020-07-28 | Harman International Industries, Incorporated | Apparatus and method for generating virtual venues for a listening room |
EP3297298B1 (fr) * | 2016-09-19 | 2020-05-06 | A-Volute | Procédé de reproduction de sons répartis dans l'espace |
WO2019023853A1 (fr) * | 2017-07-31 | 2019-02-07 | 华为技术有限公司 | Procédé de traitement audio, et dispositif de traitement audio |
CN114009064A (zh) * | 2019-03-04 | 2022-02-01 | 斯蒂尔赛瑞斯法国公司 | 用于音频分析的装置和方法 |
CN109978034B (zh) * | 2019-03-18 | 2020-12-22 | 华南理工大学 | 一种基于数据增强的声场景辨识方法 |
GB2584630A (en) * | 2019-05-29 | 2020-12-16 | Nokia Technologies Oy | Audio processing |
KR20210017169A (ko) | 2019-08-07 | 2021-02-17 | 주식회사 엘지화학 | 표면 요철 구조를 갖는 전지팩 커버 및 이를 포함하는 전지팩 |
CN113068056B (zh) * | 2021-03-18 | 2023-08-22 | 广州虎牙科技有限公司 | 音频播放方法、装置、电子设备和计算机可读存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001001388A (ja) | 1999-06-24 | 2001-01-09 | Idemitsu Petrochem Co Ltd | Blow molding method, blow-molded article and blow mold |
US8379868B2 (en) * | 2006-05-17 | 2013-02-19 | Creative Technology Ltd | Spatial audio coding based on universal spatial cues |
US9271081B2 (en) * | 2010-08-27 | 2016-02-23 | Sonicemotion Ag | Method and device for enhanced sound field reproduction of spatially encoded audio input signals |
EP2485504B1 (fr) * | 2011-02-07 | 2013-10-09 | Deutsche Telekom AG | Generation of quiet zones within the listener area of a multichannel reproduction system |
- 2012
  - 2012-09-27 FR FR1259132A patent/FR2996094B1/fr active Active
- 2013
  - 2013-09-25 CN CN201380056226.7A patent/CN104919821B/zh active Active
  - 2013-09-25 US US14/431,926 patent/US9426597B2/en active Active
  - 2013-09-25 EP EP13779299.0A patent/EP2901718B1/fr active Active
  - 2013-09-25 WO PCT/FR2013/052254 patent/WO2014049267A1/fr active Application Filing
Non-Patent Citations (1)
Title |
---|
See references of WO2014049267A1 * |
Also Published As
Publication number | Publication date |
---|---|
CN104919821B (zh) | 2017-04-05 |
US20150256958A1 (en) | 2015-09-10 |
EP2901718B1 (fr) | 2016-12-21 |
FR2996094A1 (fr) | 2014-03-28 |
CN104919821A (zh) | 2015-09-16 |
FR2996094B1 (fr) | 2014-10-17 |
US9426597B2 (en) | 2016-08-23 |
WO2014049267A1 (fr) | 2014-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2901718B1 (fr) | | Method and system for reproducing an audio signal |
EP1992198B1 (fr) | | Optimization of binaural sound spatialization from a multichannel encoding |
EP1836876B1 (fr) | | Method and device for individualizing HRTFs by modeling |
EP1999998B1 (fr) | | Binaural synthesis method taking account of a room effect |
EP2898707B1 (fr) | | Optimized calibration of a multi-loudspeaker sound reproduction system |
EP2042001B1 (fr) | | Binaural spatialization of compression-encoded sound data |
WO2010076460A1 (fr) | | Improved coding of multichannel digital audio signals |
FR2992459A1 (fr) | | Method for denoising an acoustic signal for a multi-microphone audio device operating in a noisy environment |
WO2004086818A1 (fr) | | Method for processing an electrical sound signal |
EP3475943A1 (fr) | | Method for converting, stereophonically encoding, decoding and transcoding a three-dimensional audio signal |
FR2776461A1 (fr) | | Method for improving three-dimensional sound reproduction |
EP3559947B1 (fr) | | Sub-band processing of real ambisonic content for improved decoding |
EP3025514B1 (fr) | | Sound spatialization with room effect |
FR3065137A1 (fr) | | Sound spatialization method |
EP3384688B1 (fr) | | Successive decompositions of audio filters |
EP2901717B1 (fr) | | Method and device for generating audio signals to be supplied to a sound reproduction system |
EP2957110B1 (fr) | | Method and device for generating drive signals for a sound reproduction system |
EP3108670B1 (fr) | | Method and device for reproducing a multichannel audio signal in a listening zone |
WO2005096268A2 (fr) | | Method for processing sound data, in particular in an ambiophonic context |
WO2009081002A1 (fr) | | Processing of a 3D audio stream as a function of a level of presence of spatial components |
FR3136072A1 (fr) | | Signal processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20150312 |
|
AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: NGUYEN, KHOA-VAN
Inventor name: CORTEEL, ETIENNE |
|
DAX | Request for extension of the european patent (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20160629 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB
Ref legal event code: FG4D
Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH
Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE
Ref legal event code: FG4D
Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
REG | Reference to a national code |
Ref country code: AT
Ref legal event code: REF
Ref document number: 856405
Country of ref document: AT
Kind code of ref document: T
Effective date: 20170115 |
|
REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R096
Ref document number: 602013015671
Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: LT
Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: NL
Ref legal event code: MP
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: NO
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20170321
Ref country code: GR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20170322
Ref country code: SE
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: AT
Ref legal event code: MK05
Ref document number: 856405
Country of ref document: AT
Kind code of ref document: T
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: FI
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: RS
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: FR
Ref legal event code: PLFP
Year of fee payment: 5 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: EE
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: RO
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: IS
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20170421
Ref country code: SK
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: AT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: BG
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20170321
Ref country code: ES
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: PT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20170421
Ref country code: IT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221
Ref country code: PL
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R097
Ref document number: 602013015671
Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20170922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: CH
Ref legal event code: PL |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20170925 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: IE
Ref legal event code: MM4A |
|
REG | Reference to a national code |
Ref country code: BE
Ref legal event code: MM
Effective date: 20170930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20170925 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20170930
Ref country code: IE
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20170925
Ref country code: LI
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20170930
Ref country code: GB
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20170925 |
|
REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R081
Ref document number: 602013015671
Country of ref document: DE
Owner name: SENNHEISER ELECTRONIC GMBH & CO. KG, DE
Free format text: FORMER OWNER: SONIC EMOTION LABS, PARIS, FR |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170930 |
|
REG | Reference to a national code |
Ref country code: FR
Ref legal event code: PLFP
Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO
Effective date: 20130925 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20161221 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR
Payment date: 20230919
Year of fee payment: 11
Ref country code: DE
Payment date: 20230906
Year of fee payment: 11 |