WO2022219100A1 - Spatially-bounded audio elements with derived interior representation - Google Patents
Spatially-bounded audio elements with derived interior representation
- Publication number
- WO2022219100A1 WO2022219100A1 PCT/EP2022/059973 EP2022059973W WO2022219100A1 WO 2022219100 A1 WO2022219100 A1 WO 2022219100A1 EP 2022059973 W EP2022059973 W EP 2022059973W WO 2022219100 A1 WO2022219100 A1 WO 2022219100A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio
- interior
- representation
- exterior
- signal
- Prior art date
Links
- 238000000034 method Methods 0.000 claims abstract description 69
- 238000009877 rendering Methods 0.000 claims abstract description 52
- 230000005236 sound signal Effects 0.000 claims description 151
- 238000012545 processing Methods 0.000 claims description 11
- 230000008447 perception Effects 0.000 claims description 8
- 238000004590 computer program Methods 0.000 claims description 7
- 230000003287 optical effect Effects 0.000 claims description 2
- 239000000203 mixture Substances 0.000 description 12
- 239000003607 modifier Substances 0.000 description 6
- 238000013507 mapping Methods 0.000 description 5
- 238000007781 pre-processing Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000001427 coherent effect Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 238000003491 array Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000004091 panning Methods 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 238000010561 standard procedure Methods 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/027—Spatial or constructional arrangements of microphones, e.g. in dummy heads
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/01—Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/03—Aspects of down-mixing multi-channel audio to configurations with lower numbers of playback channels, e.g. 7.1 -> 5.1
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/11—Application of ambisonics in stereophonic audio systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
- H04S7/304—For headphones
Definitions
- This disclosure relates to derived interior representation of spatially-bounded audio elements.
- Spatial audio rendering is a process used for presenting audio within an extended reality (XR) (e.g., virtual reality (VR), augmented reality (AR), or mixed reality (MR)) environment in order to give the listener the impression that the audio is coming from physical audio sources at certain position(s) and/or from physical audio sources that have a particular extent (e.g., the size and/or shape of the audio sources).
- XR extended reality
- VR virtual reality
- AR augmented reality
- MR mixed reality
- the audio presentation can be made through speakers (e.g., headphones, tabletop speakers).
- “sound” and “audio” are used interchangeably.
- when the presentation is made through headphones, the process for presenting the audio is called binaural rendering.
- the binaural rendering uses spatial cues of human spatial hearing, enabling the listener to perceive the directions the sounds are coming from. These cues include the Inter-aural Time Difference (ITD), the Inter-aural Level Difference (ILD), and/or spectral difference.
- ITD Inter-aural Time Difference
- ILD Inter-aural Level Difference
- a point-source is defined to emanate audio from one specific point, and thus it does not have any extent.
- To render audio elements that have a spatial extent, different audio rendering methods have been developed.
- One of such audio rendering methods is to create multiple duplicates of a mono audio object at positions around the mono object’s position. This creates the perception of a spatially homogeneous object with a certain size.
- This concept is used, for example, in the “object spread” and “object divergence” features of the MPEG-H 3D Audio standard [1] and [2], and in the “object divergence” feature of the EBU Audio Definition Model (ADM) standard [4].
- This approach uses a mono audio object (i.e., a single source) as its input.
- HR head-related
- Another of the audio rendering methods is to render a spatially diffuse component in addition to the mono audio object, which creates the perception of a somewhat diffuse audio object (in contrast to the original mono audio object which has no distinct pin-point location).
- This method (or concept) is used, for example, in the “object diffuseness” feature of the MPEG-H 3D Audio standard [3] and the EBU ADM “object diffuseness” feature [5].
- The EBU ADM “object extent” feature [6] combines the creation of multiple copies of a mono audio object with the addition of diffuse components.
- Sometimes an audio element can be described well enough with a basic shape (e.g., a sphere or a box), but sometimes the extent (or shape) of the audio element is more complicated and needs to be described in a more detailed form (e.g., with a mesh structure or a parametric description format).
- Some audio elements are of the nature that the listener can move inside the audio elements and can hear a plausible audio representation inside the audio elements.
- The extent of the audio elements acts as a spatial boundary that defines the edge between the interior and the exterior of the audio elements. Examples of such audio elements could be (i) a forest (with the sound of birds and of wind in the trees), (ii) a crowd of people (the sound of people clapping hands or cheering), and (iii) the background sound of a city square (sounds of traffic, birds, and/or people walking).
- the audio representation should be immersive and surround the listener.
- the audio should appear to come from the extent of the audio element.
- Listener-centric formats include channel-based formats such as 5.1 and 7.1 and scene-based formats such as Ambisonics. Listener-centric formats are typically rendered using several speakers positioned around the listener.
- A stereo signal, for example, depicts the left and right parts of the audio element. When the listener is located at the side of the audio element, the depth information of the audio element needed for an adequate representation of the audio element is not described by the stereo signal.
- Figures 14(a) and 14(b) illustrate this problem: the problem of rendering the exterior representation for a listener position that is to the side of the audio element.
- When the listener is in front of the audio element, the left and right audio signals can be used directly for the speakers SpL and SpR.
- a method for rendering an audio element comprises obtaining an exterior representation of the audio element and based on the obtained exterior representation, generating an interior representation of the audio element.
- a computer program comprising instructions which when executed by processing circuitry of a device, causes the device to perform the method described above.
- a device comprising processing circuitry and a memory.
- the memory contains instructions executable by the processing circuitry.
- the device is configured to perform the method described above.
- a device configured to obtain an exterior representation of the audio element and based on the obtained exterior representation, generate an interior representation of the audio element.
- Embodiments of this disclosure provide a method of deriving the interior representation of an audio element from the exterior representation of the audio element.
- the method provides a unified solution that is applicable for most kinds of spatially-bounded audio elements in cases where only the exterior representations of the audio elements are given.
- the same rendering principles can be used for audio elements in cases where the exterior representations of the audio elements are specified in different formats.
- the method for rendering the audio elements is highly efficient and can easily be adapted for the best trade-off of high-quality and low complexity.
- the method of synthesizing parts of the interior representation makes it possible to achieve good control over the process of generating missing spatial information.
- Figure 1 shows an example of a spatially-bounded audio element.
- Figure 2 illustrates the concept of an interior representation of an audio element.
- Figure 3(a) illustrates an exemplary exterior representation of an audio element.
- Figure 3(b) illustrates an exemplary interior representation of an audio element.
- Figure 4 shows an exemplary setup of virtual loudspeakers.
- Figure 5 illustrates a method of rendering an exterior representation of an audio element according to an embodiment.
- Figure 6 illustrates a method of rendering an exterior representation of an audio element according to an embodiment.
- Figure 7 illustrates a rendering setup according to an embodiment.
- Figures 8A and 8B show an XR system according to an embodiment.
- Figure 9 shows an audio renderer according to an embodiment.
- Figure 10(a) shows a signal modifier according to an embodiment.
- Figure 10(b) shows a deriver according to an embodiment.
- Figure 11 shows a process of rendering an audio element according to an embodiment.
- Figure 12 shows an apparatus for implementing an audio renderer according to an embodiment.
- Figure 13 illustrates how different elevation layers of an interior representation can be used in case the spatial information in different elevation angles needs to be generated.
- Figures 14(a) and 14(b) illustrate a problem in rendering the exterior representation to a listener position that is to the side of the audio element.
- Figure 1 shows an example of a spatially-bounded audio element 102 in an XR environment 100.
- the audio element 102 represents a choir where the group of singing people is located within a volume S which is defined as the spatial boundary of the audio element 102.
- a listener-centric audio format may be suitable for representing the audio element 102 since the listener-centric format is designed to present audio that surrounds the listener 104.
- the representation of the audio element 102 in the listener-centric format may be an interior representation of the audio element 102.
- Interior representation of an audio element is a representation that can be used to produce an audio experience for a listener in which the listener will have the perception of being within the boundary of the audio element.
- Data used for the interior representation may comprise one or more interior representation audio signals (hereinafter, “interior audio signals”) that may be used to generate audio for the audio element.
- the listener 104 may expect to hear audio of the audio element 102 as if the audio is emanating from the volume (defined by the boundary S) of the audio element 102.
- the perceived angle, distance, size, and shape of the audio element 102 should correspond to the defined boundary S as perceived by the listener 104 at the position B.
- a source-centric audio format may be more suitable than the listener-centric audio format since the listener 104 should no longer be surrounded by the choir.
- the representation of the audio element in the source-centric format may be an exterior representation of the audio element 102.
- Exterior representation of an audio element is a representation that can be used to produce an audio experience for a listener in which the listener will have the perception of being outside the boundary of the audio element.
- Exterior representation of an audio element may comprise one or more exterior representation audio signals (hereinafter, “exterior audio signals”) that may be used to generate audio for the audio element.
- the listener 104 at the position B may also expect to obtain some spatial information from the audio element 102 (by hearing the audio from the audio element 102) such that the listener 104 can acoustically perceive that the choir is made up of many individual voices rather than just one diffuse audio source.
- the audio element 102 may correspond to a spatially heterogeneous audio element.
- the listener 104 can be provided with a convincing spatial experience even at the listening positions outside of the boundary S.
- the concept of a spatially heterogenous audio element and the method of rendering the spatially heterogeneous audio element are described in International Patent Application Publication No.
- both of the exterior and interior representations of an audio element may be needed.
- the listener 104 may change the listener’s position from the exterior listening position B to the interior listening position A.
- the expected audio experience will be different.
- an interior representation of an audio element is derived using an exterior representation of the audio element.
- both the interior and exterior representations may be rendered.
- a spatially heterogeneous audio element may be defined with a set of audio signals that are meant to represent the spatial information of the audio element in a certain dimension.
- the two channels of a stereo recording may be used to represent the audio element in the left-to-right dimension.
- the audio element may be represented in other dimensions.
- a 4-channel recording may be used such that the four channels represent the top-left, top-right, bottom-left, and bottom-right of the audio element as perceived at a certain listening location.
- Although the above recordings are examples of multi-channel recordings, they are still a source-centric representation, since they describe a sound source (i.e., an audio element) that is at some distance from the listener rather than a sound source surrounding the listener. Thus, the above recordings may not be suitable for an interior representation of the audio element. Accordingly, it is desirable to derive an interior representation of an audio element from an exterior representation of the audio element such that the audio element can be rendered in a listener-centric representation.
- the exterior representation of the audio element may not be able to represent the spatial information of the audio element in all dimensions. For example, when the listening position is within the boundary of the audio element, it is desirable to render the audio element in the depth dimension in a plausible way. For an audio element of which the exterior representation is based on a stereo recording, however, the depth information is not defined. Therefore, to provide spatial information for the depth dimension, new signals need to be generated. Since the real spatial information is not known, the generation of the missing information needs to be done using some general assumptions about the audio element.
- the interior representation of an audio element can be based on different listener-centric audio formats. Examples of such listener-centric audio formats are Ambisonics and any one out of a large variety of channel-based formats such as quadraphonic, cubic octophonic, 5.1, 7.1, 22.2, a VBAP format, or a DirAC format. In those listener-centric audio formats, a number of audio channels are used to describe the spatial sound field inside the boundary of the audio element.
- Some of the listener-centric audio formats describe the spatial information of the audio element in all directions with respect to the listening position inside the boundary of the audio element whereas others (e.g., 5.1 and 7.1) only describe the spatial information of the audio element in the horizontal plane.
- the spatial information of the audio element in the vertical plane is not as important as the spatial information of the audio element in the horizontal plane.
- the human auditory system is less sensitive to spatial audio information in the vertical plane as compared to spatial information in the horizontal plane due to how the spatial cues (e.g., ITD and ILD) work. Therefore, sometimes it may be enough to describe the spatial information of an audio element in the horizontal plane only.
- the format of an interior representation of an audio element may be selected based on signals available in a given exterior representation of the audio element. For example, if the given exterior representation of the audio element is based on a stereo recording of which the two channels represent the audio element in the left-to-right dimension, the interior representation format (e.g., a quadraphonic format) that only describes the horizontal plane may be selected. On the other hand, if the exterior representation is based on a multi-channel format (e.g., see figure 3(a)) in which both horizontal and vertical spatial information is described, the interior representation format that describes the audio element in both dimensions may be selected.
- If the exterior representation is given in a multi-channel format where signals represent the top-left, top, top-right, left, center, right, bottom-left, bottom, and bottom-right of an audio element, or a subset of these, a multi-layered quadraphonic format may be used for the interior representation.
- all of the given audio signals may be directly reused for the interior representation and only the audio signals representing the back of each elevation layer may need to be generated. This is illustrated in figures 3(a) and 3(b).
- Alternatively, an interior representation format with fewer channels may be selected.
- some spatial information in the exterior representation may be neglected in the interior representation in order to minimize the rendering complexity.
- a simple horizontal-only quadraphonic format may be used as the format of the interior representation.
- Figure 2 illustrates an exemplary interior representation of the audio element 102.
- the interior representation is based on a quadraphonic audio format.
- four audio channels are used to represent the left, right, front, and back of the audio element 102.
- the exemplary interior representation only describes the spatial information of the audio element 102 in the horizontal plane.
- the stereo signal including a left signal and a right signal can be reused as the signals representing the left and right of the audio element 102 (a.k.a., left and right interior representation signals) in the interior representation.
- The signals representing the front and back of the audio element 102 (a.k.a., missing interior representation signals) need to be generated for the interior representation.
- those signals are generated based on the signal(s) for the exterior representation (i.e., the stereo signal in the above example).
- Herein, an audio signal may simply be referred to as a “signal” for simplification.
- The signal representing the front of the audio element 102 in the interior representation (a.k.a., the front interior representation signal) may be generated based on a combination (e.g., a sum or a weighted sum) of the left and right exterior representation signals.
- the front interior representation signal is a mean of the left and right exterior representation signals.
- the signal representing the back of the audio element 102 in the interior representation may be generated in the same way. Then, however, the audio element 102 would have no spatial information in the front-back dimension since the front and back interior representation signals would be the same. In such case, the audio element 102 would behave more like a coherent source in the front-back dimension.
- the back interior representation signal may be generated as a decorrelated version of the front interior representation signal.
- the audio element 102 would behave more like a diffuse source in the front-back dimension.
- the front interior representation signal may be generated as a decorrelated version of a mix of the left and right exterior representation signals.
- the audio element 102 in the interior representation would sound more diffuse when the listener is positioned in front of the audio element 102. This may not be desirable, however, if the audio element 102 is intended to sound similar to the left and right exterior representation signals when the listener is in front of the audio element 102.
- using a decorrelated version of a mix of the left and right exterior representation signals may increase the width and/or diffuseness of the audio element 102 perceived by the listener. Such an increase in the perceived width and/or diffuseness may be desirable for certain audio elements.
- the back interior representation signal may be generated as a mix of the left and right exterior representation signals, a decorrelated version of the front interior representation audio signal, or another decorrelated version of a mix of the left and right exterior representation signals.
- In some embodiments, a decorrelated signal (i.e., the decorrelated version of another signal) is generated in a way that takes certain aspects of the signal into account. For example, there may be special handling of transients, harmonic components, and noise components of the audio.
- the process of decorrelation is for creating a signal that shares high-level properties with the original signal (e.g., having the same timbre, magnitude spectrum, time envelope, etc.) but has no, or a very low, degree of correlation with the original signal (e.g., in the sense that the cross correlation of the two signals is close to zero).
- Classic methods for implementing a decorrelator use one of a large variety of fixed or dynamic delay line structures (which may be configured to delay the original signal), while more advanced implementations may use optimized (e.g., FIR) filter structures. More general information on decorrelation can be found at: https://en.wikipedia.org/wiki/Decorrelation. An example of a more advanced implementation of a decorrelator can be found at: https://www.audiolabs-erlangen.de/resources/2018-DAFx-VND.
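As a minimal sketch of how a front interior signal and a decorrelated back interior signal might be derived from a stereo exterior pair (the function name, the fixed delay length, and the use of a plain delay-line decorrelator, the simplest of the classic structures mentioned above, are illustrative assumptions rather than details taken from this disclosure):

```python
import numpy as np

def derive_front_back(left, right, delay_samples=997):
    """Derive front/back interior signals from left/right exterior signals.

    The front signal is the mean of the exterior channels (the simplest
    case described in the text); the back signal is a delay-line
    decorrelation of the front signal. A plain delay is a crude
    decorrelator; production renderers would rather use allpass or
    optimized FIR structures.
    """
    front = 0.5 * (left + right)
    back = np.zeros_like(front)
    back[delay_samples:] = front[:-delay_samples]  # roughly 20 ms at 48 kHz
    return front, back

# Example with two independent noise channels as the exterior stereo pair.
rng = np.random.default_rng(0)
L_ext = rng.standard_normal(48000)
R_ext = rng.standard_normal(48000)
F_int, B_int = derive_front_back(L_ext, R_ext)
```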
- The generated audio signal(s) (e.g., the back interior audio signal) should have a certain degree of decorrelation from the other signal(s) (e.g., the front interior audio signal) in order to provide the spatial information in the dimension that the generated audio signal(s) represent (e.g., the front-back dimension).
- the level of decorrelation needed may depend on the characteristics of the audio element.
- In some embodiments, the amount of correlation needs to be less than a threshold of 50% in order to provide a perceptual width that corresponds to the extent of the audio element when rendering the audio element.
- the process of generating audio signals in the interior representation needs to be based on certain assumptions of what is expected for a certain audio source. It is, however, possible to use certain aspects of the exterior audio signals themselves as guidance for these assumptions. For example, measuring the correlation between different signals in the exterior representation may give a good indication of what level of correlation signals generated for the interior representation should have with other interior representation audio signals that are reused from the exterior representation. Measuring the variance, diffuseness, presence of transients, etc. can be used in similar ways to help in generating the missing interior representation signals.
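Continuing the sketch above, the guidance described here could be realized by measuring the zero-lag correlation coefficient of the exterior channels and using it as a target for the generated interior signals; the helper name and the exact check are assumptions (the 0.5 bound reflects the 50% threshold mentioned earlier):

```python
import numpy as np

def correlation_coefficient(x, y):
    """Zero-lag normalized cross-correlation of two signals, in [-1, 1]."""
    x = x - x.mean()
    y = y - y.mean()
    denom = np.sqrt((x * x).sum() * (y * y).sum())
    return float((x * y).sum() / denom) if denom > 0.0 else 0.0

# Correlation measured between the given exterior channels can serve as a
# target for how correlated a generated interior signal should be with the
# interior signals that are reused from the exterior representation.
rho_target = correlation_coefficient(L_ext, R_ext)

# 50% threshold from the text for a sufficiently wide percept.
wide_enough = abs(correlation_coefficient(F_int, B_int)) < 0.5
```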
- extra metadata can be provided to represent an audio element. The metadata may define the expected behavior of the audio element.
- One example of such metadata is the audio element’s diffuseness in different dimensions: a value of how diffuse the audio element should appear in each dimension (e.g., the left-right dimension, the up-down dimension, the front-back dimension, etc.).
- Metadata may specify a desired degree of correlation between one or more of the provided (known) exterior representation audio signals (a.k.a., exterior audio signals) and one or more of the interior representation audio signals (a.k.a., interior audio signals) to be generated.
- the metadata may specify that the back interior audio signal to be derived should have a correlation of 0.6 with the provided left exterior audio signal and a correlation of 0.2 with the provided right exterior audio signal.
- the metadata may comprise an upmix matrix that fully specifies how the interior representation is to be derived from the exterior representation.
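As a hypothetical illustration of such metadata, the sketch below applies an invented 4x2 upmix matrix to a stereo exterior pair to obtain a quadraphonic interior representation; the coefficient values are placeholders, and in practice the matrix would be read from the metadata (L_ext and R_ext are reused from the sketches above):

```python
import numpy as np

# Hypothetical upmix matrix mapping the stereo exterior representation
# [L, R] to a quadraphonic interior representation [L, R, F, B].
upmix = np.array([
    [1.0, 0.0],  # interior L <- exterior L (reused directly)
    [0.0, 1.0],  # interior R <- exterior R (reused directly)
    [0.5, 0.5],  # interior F <- mean of exterior L and R
    [0.5, 0.5],  # interior B <- same mix (typically decorrelated afterwards)
])

exterior = np.stack([L_ext, R_ext])  # shape (2, num_samples)
interior = upmix @ exterior          # shape (4, num_samples)
```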
- When an audio format for the interior representation is selected that only describes the spatial information of an audio element in the horizontal plane, the audio element will behave like a coherent source in the vertical dimension, since the same audio signals will be used to describe different parts of the audio element in the vertical dimension.
- the audio format used for the interior representation can be expanded, e.g., to a six-channel format where the extra two channels may be used to represent the bottom and top of the audio element. These two extra channels may be generated in a similar way as the front and back interior representation signals are generated.
- Figure 3(a) illustrates an exemplary exterior representation 300 of an audio element.
- the exterior representation 300 is based on a 9-channel audio format in which nine different channels represent the audio element’s top-left, top, top-right, left, center, right, bottom-left, bottom, and bottom-right respectively. More specifically, the nine channels may correspond to the nine audio signals associated with nine different portions of the audio element in a vertical planar representation.
- the interior representation of the audio element may need to be based on a rich audio format.
- the interior representation can be a three-tier quadraphonic format 350 as illustrated in figure 3(b).
- In the three-tier quadraphonic format 350, each of three different elevation levels is represented by left, right, front, and back signals.
- In this case, all of the available signals for the exterior representation (i.e., the exterior signals for the audio element’s top-left, top, top-right, left, center, right, bottom-left, bottom, and bottom-right) can be directly reused for the interior representation.
- an Ambisonics representation may be used for the interior representation. While in principle this may be an Ambisonics representation of any order, including first order, preferably a representation of at least 2nd order is used in order to preserve the spatial resolution contained in the exterior representation.
- The Ambisonics format signals (i.e., the interior audio signals in the Ambisonics format) may then be derived from the exterior audio signals.
- the interior audio signals may be generated as a pre-processing step before a real-time rendering begins. There are some cases where this is not possible, for example, if the audio signals representing the audio element are not available before the rendering starts. This could be the case if the signals are generated in real time, either because they are the result of a real-time capture or because they are generated by a real-time process, such as in the case of procedural audio.
- the generation of the interior audio signals may not be performed as a pre-processing step before the real-time rendering begins when the generation of the interior audio signals that are not defined in the exterior representation depends on parameters that are not available before the rendering begins. For example, if the generation of the interior representation depends on the momentary CPU load of an audio rendering device in such a way that a simpler interior representation is used when the CPU load needs to be limited, the generation of the interior audio signal may not be performed before the rendering begins. Another example is the case where the generation of the interior representation depends on the relative position of the audio element with respect to the listening position, e.g., in a way that a simpler interior representation is selected when the audio element is far away from the listening position.
- the methods for rendering an interior representation may depend on the kind of audio format that is selected for the interior representation.
- one way to render the interior representation is to represent each channel of the interior representation with a virtual loudspeaker placed at an angle relative to the listener.
- the angle may correspond to the direction that each channel represents with respect to the front vector of the audio element.
- a front interior audio signal may be rendered to come from a direction that is aligned with the front vector of the audio element (shown in figure 4), and a left interior audio signal may be rendered to come from a direction that is at a 90-degree angle with respect to the front vector.
- This rendering largely corresponds to a virtual listening room where the listener is surrounded by the speaker setup and there is a direct and exclusive mapping between the interior audio signals and the virtual loudspeakers. In this case, the audio rendering does not depend on the head rotation of the listener.
- the setup of virtual loudspeakers is decoupled from the orientation of the audio element and instead depends on some other reference direction, such as the head rotation of the listener.
- Figure 4 shows a setup of virtual loudspeakers that can be used to render the horizontal plane of the interior representation.
- the signals to each virtual loudspeaker can be derived from a virtual microphone placed in the center of the interior representation at an angle that corresponds to the angle of the virtual loudspeaker.
- the signal going to the left virtual loudspeaker may be derived using a virtual microphone that is pointing in the direction of the virtual loudspeaker. In this case this virtual microphone would capture a mix of mostly the left and back signals.
- In other words, the input audio signal for each virtual loudspeaker can be derived by directional mixing of the signals of the interior representation (i.e., the interior audio signals).
- the audio output associated with the directional mixing may correspond to a virtual microphone that is angled in a way that it captures audio in a certain direction of the interior representation.
- Figure 4 shows an example of how the signal for the left virtual loudspeaker can be derived.
- the direction in which audio should be captured is 90 degrees to the left.
- the virtual microphone is directed in this direction in relation to the observation vector.
- the signal, Ml, captured by this virtual microphone can, in one embodiment, be derived according to Equation 1 as a directional mix of the interior audio signals, where θ is the angle between the listener’s head direction and the front vector of the audio element, and α is the angle of the virtual microphone in relation to the listener’s head direction.
- In this example, the interior representation only describes the spatial information of the audio element in the horizontal plane, and thus all angles can be projected onto the horizontal plane.
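The body of Equation 1 is not reproduced in this text, so the sketch below uses one plausible directional-mixing law, a non-negative cosine lobe per channel, as an assumption of how such a virtual microphone signal could be computed:

```python
import numpy as np

def virtual_mic_signal(channels, theta_deg, alpha_deg):
    """Directional mix of horizontal interior channels for one virtual mic.

    channels:  dict {channel azimuth in degrees, relative to the audio
               element's front vector: signal array}; for the quadraphonic
               case, front=0, left=90, back=180, right=270.
    theta_deg: angle between the listener's head direction and the front
               vector of the audio element.
    alpha_deg: angle of the virtual microphone relative to the listener's
               head direction.
    """
    mic_az = theta_deg + alpha_deg  # mic direction relative to the front vector
    out = 0.0
    for az, sig in channels.items():
        # Non-negative cosine lobe: only channels within 90 degrees of the
        # mic direction contribute, and nearer channels contribute more.
        w = max(0.0, np.cos(np.deg2rad(mic_az - az)))
        out = out + w * sig
    return out

# Left virtual microphone of figure 4 (alpha = 90 degrees):
# channels = {0: F, 90: L, 180: B, 270: R}
# Ml = virtual_mic_signal(channels, theta, 90.0)
```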
- an audio signal may be generated based on a combination of at least two interior audio signals. More specifically, the audio signal may be generated based on a weighted sum of at least two interior audio signals.
- the weights used for the weighted sum may be determined based on a listener’s orientation (e.g., obtained by one or more sensors). However, in other embodiments, the weights may be determined based on some other reference orientation, such as an orientation of the audio element (for example, in the above described embodiment where the audio rendering does not depend on the head rotation of the listener).
- In some embodiments, an audio format for the interior representation that has signals representing the audio element in the up-down dimension may be used; for example, the three-layer quadraphonic audio format as shown in figures 3(a) and 3(b) may be used.
- the vertical angle of each virtual microphone may also be taken into account. This vertical angle may be used for making a directional mix of the elevation layers, where the signal of each layer is calculated using the horizontal directional mixing described above.
- Figure 13 shows how different elevation layers of an interior representation can be used when the spatial information in different elevation angles needs to be generated.
- In this example, the listener’s head is directed upwards at an angle φ.
- a directional mixing of the signals from the three layers can be used.
- In this case, the directional mix would consist of a mix of the upper and middle elevation layers.
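Continuing the sketch above for a layered interior format, the vertical angle can select and blend the elevation layers; the linear interpolation between the two layers bracketing the angle is an assumption (virtual_mic_signal and numpy are reused from the previous block):

```python
def layered_mic_signal(layers, layer_elevations_deg, phi_deg, theta_deg, alpha_deg):
    """Directional mix across elevation layers.

    layers:               list of channel dicts (e.g., bottom, middle, top),
                          each mixed horizontally with virtual_mic_signal().
    layer_elevations_deg: elevation of each layer, ascending, e.g., [-45, 0, 45].
    phi_deg:              vertical angle of the virtual microphone.
    """
    horizontal = [virtual_mic_signal(ch, theta_deg, alpha_deg) for ch in layers]
    elev = np.asarray(layer_elevations_deg, dtype=float)
    if phi_deg <= elev[0]:
        return horizontal[0]
    if phi_deg >= elev[-1]:
        return horizontal[-1]
    i = int(np.searchsorted(elev, phi_deg, side="right")) - 1
    t = (phi_deg - elev[i]) / (elev[i + 1] - elev[i])
    # A head directed upwards between the middle and top layers yields a
    # mix of just those two layers, as described in the text.
    return (1.0 - t) * horizontal[i] + t * horizontal[i + 1]
```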
- any of the available standard methods for rendering Ambisonics can be used, such as those based on the use of a number of virtual loudspeakers or those that render the spherical harmonics directly using an HRTF set that has been converted to the spherical harmonics domain.
- the derived interior representation may also be used advantageously to enable an improved rendering at listening positions outside the audio element.
- The provided exterior representation (e.g., a stereo signal) represents the audio element for one specific listening position (“reference position”), for example, a center position in front of the audio element, and may not be directly suitable to render the audio element for other exterior listening positions, for example, to the side or back of the audio element.
- the derived interior representation may be used to provide a very flexible rendering mechanism that provides a listener with a full 6DoF experience in exploring the sound around an extended audio element.
- Although the exterior representation is given, in some situations it may be beneficial to first derive an interior representation from the given exterior representation and then to derive a new exterior representation from the derived interior representation.
- The given exterior representation typically does not describe the spatial character in all dimensions of the audio element. Instead, the given exterior representation typically only describes the audio element as heard from the front of the audio element. If the listener is situated to the side of, above, or below the audio element then, similar to rendering the interior representation, a representation of the audio element in the depth dimension, which is not defined, may be needed.
- Figure 5 illustrates an exemplary method of rendering an exterior representation of an audio element based on an interior representation of the audio element.
- two virtual loudspeakers SpL and SpR are used to represent the audio element.
- the interior representation of the audio element is based on interior audio signals F, B, L, and R.
- the observation vector between the listener and the spatial extent of the audio element is used as a basis for determining the orientation (i.e., the angle) of a virtual microphone MicL which captures audio signals for the virtual loudspeakers.
- the audio signal for the virtual loudspeaker SpL (representing the left side of the audio element 602 as acoustically perceived at the listener’s position) may be derived from the interior representation (e.g., using Equation 1 above).
- θ is the angle between the observation vector and the front vector of the audio element, and α (90 degrees in figure 5) is the orientation of the microphone MicL with respect to the observation vector.
- the virtual microphone is oriented in a direction between the directions represented by the interior audio signals L and B, and thus may capture a mixture of the two interior audio signals L and B.
- Figure 6 illustrates another exemplary method of rendering an exterior representation of an audio element using an interior representation.
- a simplified extent of the audio element in the form of a plane is used to represent the spatial extent of the audio element acoustically perceived at the listening position.
- the angle used for deriving the exterior audio signal representing the left part of the extent of the audio element is based on the normal vector of the plane, instead of the observation vector.
- the angle θ is the angle between the normal vector of the plane and the front vector of the audio element.
- the angle θ should be seen as defining the perspective that is to be represented by the virtual loudspeakers of the exterior rendering.
- the angle θ may be related to the observation vector but does not always directly follow it.
- Figure 7 shows an example of a rendering setup for an exterior representation.
- the audio signal provided to the speaker SpC may represent the audio coming from the center of an audio element.
- the audio coming from the center may include audio from the front and back of the audio element acoustically perceived at the listening position.
- An extra distance gain factor may control the mix of the two microphone signals so that the signal from MicF is louder than the signal from MicB.
- only those components of the interior representation that are audible directly from the listener’s current position may be included in the downmix.
- When the listener is right in front of the audio element, only the left, right, and front audio components of the interior representation may be included in the downmix, and not the back audio component (which represents the back side of the audio element, from which no sound may reach the listener directly).
- Conceptually, this implies that the extent of the audio element is an acoustically opaque surface, such that no direct sound energy reaches the listener from the part(s) of the audio element that are acoustically occluded from the listener at the listener’s position.
- the contribution of the different components of the interior representation to the downmix can be controlled by specifying an “acoustic opacity factor” (in analogy with the opacity property in optics) for the audio element (for example, by including the acoustic opacity factor in metadata that accompanies the audio element or by setting a switch in the renderer and configuring the switch to operate based on the acoustic opacity factor).
- At one extreme setting of the acoustic opacity factor, the audio element is acoustically “transparent” and all elements of the interior representation contribute equally to the downmix (aside from the possible distance gain as described above (e.g., see paragraph [0097])).
- At the other extreme, the audio element is acoustically fully opaque and thus only the components of the interior representation that reach the listener directly, i.e., without passing through the audio element, would be included in the downmix.
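A minimal sketch of how an acoustic opacity factor might control the downmix gains; the function name and the linear 1 - opacity law for occluded components are assumptions:

```python
def opacity_gains(direct_visibility, opacity):
    """Downmix gains for interior components under an acoustic opacity factor.

    direct_visibility: dict {component name: True if sound from it reaches
                       the listener directly from the current position}.
    opacity:           0.0 = fully transparent, all components contribute
                       equally; 1.0 = fully opaque, occluded components are
                       excluded from the downmix.
    """
    return {name: 1.0 if visible else 1.0 - opacity
            for name, visible in direct_visibility.items()}

# Listener right in front of the audio element: the back component is occluded.
gains = opacity_gains({"L": True, "R": True, "F": True, "B": False}, opacity=1.0)
# -> {"L": 1.0, "R": 1.0, "F": 1.0, "B": 0.0}
```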
- Channel-based audio signals of one format may be mapped to an interior representation using either the same format or a different format such as Ambisonics or some other channel-based formats, using any of the many corresponding mapping methods known to the skilled person.
- An Ambisonics signal may also be mapped to an interior representation of an audio element based on a channel-based format using any of the many corresponding mapping methods known to the skilled person.
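As one concrete instance of such a mapping, the sketch below encodes a channel-based interior representation into first-order Ambisonics; the ACN channel order and SN3D normalization are assumed conventions, and first order is used only for brevity (the text above suggests at least second order to preserve spatial resolution):

```python
import numpy as np

def encode_foa(signal, azimuth_deg, elevation_deg=0.0):
    """Encode one channel signal into first-order Ambisonics (ACN/SN3D).

    Returns the four FOA channels in ACN order [W, Y, Z, X]. Summing the
    encodings of all interior channels at their layout azimuths maps a
    channel-based interior representation to an Ambisonics one.
    """
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    w = np.asarray(signal, dtype=float)   # omnidirectional component
    y = w * np.sin(az) * np.cos(el)
    z = w * np.sin(el)
    x = w * np.cos(az) * np.cos(el)
    return np.stack([w, y, z, x])

# Quadraphonic interior channels at front=0, left=90, back=180, right=270:
# foa = sum(encode_foa(sig, az) for az, sig in channels.items())
```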
- FIG 8A illustrates an XR system 800 in which the embodiments may be applied.
- XR system 800 includes speakers 804 and 805 (which may be speakers of headphones worn by the listener) and a display device 810 that is configured to be worn by the listener.
- XR system 800 may comprise an orientation sensing unit 801, a position sensing unit 802, and a processing unit 803 coupled (directly or indirectly) to an audio renderer 851 for producing output audio signals (e.g., a left audio signal for a left speaker and a right audio signal for a right speaker as shown).
- Audio renderer 851 produces the output signals based on input audio signals, metadata regarding the XR scene the listener is experiencing, and information about the location and orientation of the listener. Audio renderer 851 may be a component of display device 810 or it may be remote from the listener (e.g., renderer 851 may be implemented in the “cloud”).
- Orientation sensing unit 801 is configured to detect a change in the orientation of the listener and provides information regarding the detected change to processing unit 803. In some embodiments, processing unit 803 determines the absolute orientation (in relation to some coordinate system) given the detected change in orientation detected by orientation sensing unit 801. There could also be different systems for determination of orientation and position, e.g., a system using lighthouse trackers (LIDAR).
- orientation sensing unit 801 may determine the absolute orientation (in relation to some coordinate system) given the detected change in orientation. In this case the processing unit 803 may simply multiplex the absolute orientation data from orientation sensing unit 801 and positional data from position sensing unit 802. In some embodiments, orientation sensing unit 801 may comprise one or more accelerometers and/or one or more gyroscopes.
- FIG. 9 shows an example implementation of audio renderer 851 for producing sound for the XR scene.
- Audio renderer 851 includes a controller 901 and a signal modifier 902 for modifying audio input 861 (e.g., a multi-channel audio signal) based on control information 910 from controller 901.
- Controller 901 may be configured to receive one or more parameters and to trigger modifier 902 to perform modifications on audio input 861 based on the received parameters (e.g., increasing or decreasing the volume level).
- the received parameters include (1) information 863 regarding the position and/or orientation of the listener (e.g., direction and distance to an audio element) and (2) metadata 862 regarding an audio element in the XR scene (e.g., audio element 102).
- controller 901 and signal modifier 902 are two different entities, in some embodiments, they may be a single entity.
- FIG. 10(a) shows an example implementation of signal modifier 902 according to one embodiment.
- Signal modifier 902 includes a deriver 1002, a directional mixer 1004, and a speaker signal producer 1006.
- Deriver 1002 receives audio input 861, which in this example includes a pair of exterior audio signals 1010 and 1012. Exterior audio signals 1010 and 1012 are for an exterior representation of an audio element. Using exterior audio signals 1010 and 1012, deriver 1002 derives an interior representation of the audio element from the exterior representation of the audio element. The deriving operation of deriver 1002 may be performed as a pre-processing step or in real-time. More specifically, deriver 1002 derives interior audio signals 1014 which are for the interior representation of the audio element. In figure 10(a), the interior audio signals 1014 comprise a left interior audio signal (L), a right interior audio signal (R), a front interior audio signal (F), and a back interior audio signal (B).
- L left interior audio signal
- R right interior audio signal
- F front interior audio signal
- B back interior audio signal
- FIG. 10(b) shows an example of deriver 1002 according to an embodiment.
- deriver 1002 may comprise a combiner 1062 and a decorrelator 1064.
- Combiner 1062 is configured to combine (or mix) the exterior audio signals 1010 and 1012, thereby generating a new interior audio signal (e.g., the front interior audio signal F).
- Decorrelator 1064 is configured to perform a decorrelation on a received audio signal.
- decorrelator 1064 is configured to perform a decorrelation on the front interior audio signal F, thereby generating a back interior audio signal B.
- Detailed explanation about the combination (or mixing) and the decorrelation is provided in Section 2 of this disclosure above.
- Directional mixer 1004 receives the interior audio signals 1014, and produces a set of n virtual speaker signals (M1, M2, ..., Mn) (i.e., audio signals for virtual loudspeakers, representing a spatial extent of an audio element) based on the received interior audio signals 1014 and control information 910.
- n virtual speaker signals i.e., audio signals for virtual loudspeakers, representing a spatial extent of an audio element
- For example, for the rendering setup shown in figure 7, n will equal 3 for the audio element, and M1 may correspond to SpL, M2 may correspond to SpC, and M3 may correspond to SpR.
- the control information 910 used by directional mixer 1004 to produce the virtual speaker signals may include, or may be based on, the positions of each virtual speaker relative to the audio element, and/or the position and/or orientation of the listener (e.g., direction and distance to an audio element).
- Detailed information about directional mixing is described in Section 3.1 of this disclosure above.
- the virtual speaker signal M1 may be generated using Equation 1 disclosed in Section 3.1 of this disclosure.
- Speaker signal producer 1006 uses the virtual speaker signals (M1, M2, ..., Mn) to produce output signals (e.g., output signal 881 and output signal 882) for driving speakers (e.g., headphone speakers or other speakers).
- speaker signal producer 1006 may perform conventional binaural rendering to produce the output signals.
- speaker signal producer 1006 may perform conventional speaker panning to produce the output signals.
- the operations of directional mixer 1004 and speaker signal producer 1006 may be performed in real-time.
- FIG 11 shows a process 1100 for rendering an audio element.
- the process 1100 may begin with step s1102.
- Step s1102 comprises obtaining an exterior representation of the audio element.
- Step s1104 comprises, based on the obtained exterior representation, generating an interior representation of the audio element.
- the exterior representation of the audio element comprises one or more exterior audio signals for producing an audio experience in which a listener of the audio element has the perception of being outside a boundary of the audio element
- the interior representation of the audio element comprises one or more interior audio signals for producing an audio experience in which the listener has the perception of being inside the boundary of the audio element.
- the exterior representation of the audio element comprises an exterior audio signal
- the interior representation of the audio element comprises an interior audio signal, wherein the interior audio signal is not a component of the exterior representation.
- the exterior representation of the audio element comprises a first exterior audio signal and a second exterior audio signal
- the interior representation of the audio element comprises a first interior audio signal and a second interior audio signal
- the first interior audio signal is generated using the first exterior audio signal and the second exterior audio signal.
- the first interior audio signal is generated based on a mean of the first and second exterior audio signals.
- the mean of the first and second exterior audio signals is a weighted mean of the first and second exterior audio signals.
- a degree of correlation between the first interior audio signal and the second interior audio signal is less than a threshold.
- the second interior audio signal is generated by performing a decorrelation on the first interior audio signal or a combined signal of the first and second exterior audio signals.
- the decorrelation comprises changing the phase of the first interior audio signal at one or more frequencies or changing the phase of the combined signal at one or more frequencies.
- the decorrelation comprises delaying the first interior audio signal or delaying the combined signal.
- the decorrelation is performed based on metadata associated with the audio element, and the metadata comprises diffuseness information indicating diffuseness of the audio element in one or more dimensions.
- the exterior representation of the audio element comprises an exterior audio signal
- the interior representation of the audio element comprises an interior audio signal
- a degree of correlation between the exterior audio signal and the interior audio signal is less than a threshold
- the interior representation of the audio element comprises at least two interior audio signals
- the method further comprises combining said at least two interior audio signals, thereby generating an audio output signal
- the method further comprises obtaining a listener’s orientation with respect to the audio element, wherein said at least two interior audio signals are combined based on the obtained listener’s orientation.
- the method further comprises obtaining an orientation of the audio element, wherein said at least two interior audio signals are combined based on the obtained orientation of the audio element.
- the combination of said at least two interior audio signals is a weighted sum of said at least two interior audio signals.
- In some embodiments, weights for the weighted sum are determined based on the obtained listener’s orientation.
- In some embodiments, weights for the weighted sum are determined based on the obtained orientation of the audio element.
- FIG 12 is a block diagram of an apparatus 1200, according to some embodiments, for performing the methods disclosed herein (e.g., audio renderer 851 may be implemented using apparatus 1200).
- apparatus 1200 may comprise: processing circuitry (PC) 1202, which may include one or more processors (P) 1255 (e.g., a general purpose microprocessor and/or one or more other processors, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like), which processors may be co-located in a single housing or in a single data center or may be geographically distributed (i.e., apparatus 1200 may be a distributed computing apparatus); at least one network interface 1248 comprising a transmitter (Tx) 1245 and a receiver (Rx) 1247 for enabling apparatus 1200 to transmit data to and receive data from other nodes connected to a network 110 (e.g., an Internet Protocol (IP) network) to which network interface 1248 is connected (directly or indirectly)
- IP Internet Protocol
- CPP 1241 includes a computer readable medium (CRM) 1242 storing a computer program (CP) 1243 comprising computer readable instructions (CRI) 1244.
- CRM 1242 may be a non-transitory computer readable medium, such as, magnetic media (e.g., a hard disk), optical media, memory devices (e.g., random access memory, flash memory), and the like.
- the CRI 1244 of computer program 1243 is configured such that when executed by PC 1202, the CRI causes apparatus 1200 to perform steps described herein (e.g., steps described herein with reference to the flow charts).
- apparatus 1200 may be configured to perform steps described herein without the need for code. That is, for example, PC 1202 may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22717414.1A EP4324224A1 (fr) | 2021-04-14 | 2022-04-14 | Spatially-bounded audio elements with derived interior representation |
KR1020237034165A KR20230153470A (ko) | 2021-04-14 | 2022-04-14 | Spatially-bounded audio elements with derived interior representation |
AU2022258764A AU2022258764A1 (en) | 2021-04-14 | 2022-04-14 | Spatially-bounded audio elements with derived interior representation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163174889P | 2021-04-14 | 2021-04-14 | |
US63/174,889 | 2021-04-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022219100A1 true WO2022219100A1 (fr) | 2022-10-20 |
Family
ID=81325776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/059973 WO2022219100A1 (fr) | Spatially-bounded audio elements with derived interior representation | 2021-04-14 | 2022-04-14 |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4324224A1 (fr) |
KR (1) | KR20230153470A (fr) |
AU (1) | AU2022258764A1 (fr) |
WO (1) | WO2022219100A1 (fr) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0966179A2 (fr) * | 1998-06-20 | 1999-12-22 | Central Research Laboratories Limited | A method of synthesising an audio signal |
US20140010372A1 (en) * | 2002-10-15 | 2014-01-09 | Electronics And Telecommunications Research Institute | Method for generating and consuming 3-d audio scene with extended spatiality of sound source |
US20140119581A1 (en) * | 2011-07-01 | 2014-05-01 | Dolby Laboratories Licensing Corporation | System and Tools for Enhanced 3D Audio Authoring and Rendering |
US20200128347A1 (en) * | 2018-10-19 | 2020-04-23 | Facebook Technologies, Llc | Head-Related Impulse Responses for Area Sound Sources Located in the Near Field |
WO2020127329A1 (fr) * | 2018-12-19 | 2020-06-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for reproducing a spatially extended sound source, or apparatus and method for generating a bitstream from a spatially extended sound source |
WO2020144061A1 (fr) | 2019-01-08 | 2020-07-16 | Telefonaktiebolaget Lm Ericsson (Publ) | Spatially-bounded audio elements with interior and exterior representations |
WO2020144062A1 (fr) | 2019-01-08 | 2020-07-16 | Telefonaktiebolaget Lm Ericsson (Publ) | Efficient spatially heterogeneous audio elements for virtual reality |
WO2021180820A1 (fr) | 2020-03-13 | 2021-09-16 | Telefonaktiebolaget Lm Ericsson (Publ) | Rendering of audio objects with a complex shape |
-
2022
- 2022-04-14 EP EP22717414.1A patent/EP4324224A1/fr active Pending
- 2022-04-14 KR KR1020237034165A patent/KR20230153470A/ko active Search and Examination
- 2022-04-14 AU AU2022258764A patent/AU2022258764A1/en active Pending
- 2022-04-14 WO PCT/EP2022/059973 patent/WO2022219100A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
EP4324224A1 (fr) | 2024-02-21 |
AU2022258764A1 (en) | 2023-10-12 |
KR20230153470A (ko) | 2023-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7470695B2 (ja) | Efficient spatially heterogeneous audio elements for virtual reality | |
TWI684978B (zh) | Apparatus and method for generating an enhanced sound field description, apparatus and method for generating a modified sound field description, and computer programs and recording media therefor | |
Hacihabiboglu et al. | Perceptual spatial audio recording, simulation, and rendering: An overview of spatial-audio techniques based on psychoacoustics | |
US20190230436A1 (en) | Method, systems and apparatus for determining audio representation(s) of one or more audio sources | |
JP2022167932A (ja) | Immersive audio reproduction system | |
TWI686794B (zh) | Method and apparatus for decoding an audio signal encoded in an Ambisonics format for L loudspeakers at known positions, and computer-readable storage medium | |
TWI692753B (zh) | Apparatus and method for generating an enhanced sound field description, apparatus and method for generating a modified sound field description, and computer programs and recording media therefor | |
CN113170271B (zh) | 用于处理立体声信号的方法和装置 | |
JP2014506416A (ja) | オーディオ空間化および環境シミュレーション | |
KR102119240B1 (ko) | 스테레오 오디오를 바이노럴 오디오로 업 믹스하는 방법 및 이를 위한 장치 | |
AU2022256751A1 (en) | Rendering of occluded audio elements | |
Xie | Spatial sound: Principles and applications | |
US20230262405A1 (en) | Seamless rendering of audio elements with both interior and exterior representations | |
WO2022219100A1 (fr) | Spatially-bounded audio elements with derived interior representation | |
Picinali et al. | Chapter Reverberation and its Binaural Reproduction: The Trade-off between Computational Efficiency and Perceived Quality | |
US20240340606A1 (en) | Spatial rendering of audio elements having an extent | |
AU2022378526A1 (en) | Rendering of audio elements | |
Deppisch et al. | Browser Application for Virtual Audio Walkthrough. | |
Llopis et al. | Effects of the order of Ambisonics on localization for different reverberant conditions in a novel 3D acoustic virtual reality system | |
WO2024012867A1 (fr) | Rendu d'éléments audio occlus | |
WO2024121188A1 (fr) | Restitution d'éléments audio occlus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22717414 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202317064562 Country of ref document: IN Ref document number: 2022258764 Country of ref document: AU Ref document number: AU2022258764 Country of ref document: AU |
|
ENP | Entry into the national phase |
Ref document number: 20237034165 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020237034165 Country of ref document: KR |
|
ENP | Entry into the national phase |
Ref document number: 2022258764 Country of ref document: AU Date of ref document: 20220414 Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022717414 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022717414 Country of ref document: EP Effective date: 20231114 |