EP3642957A1 - Processing of audio signals (Verarbeitung von Audiosignalen) - Google Patents
- Publication number
- EP3642957A1 (application EP18821106.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- far-field audio signal
- audio signal
- impulse response
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/326—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only for microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/40—Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
- H04R2201/401—2D or 3D arrays of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2420/00—Details of connection covered by H04R, not provided for in its groups
- H04R2420/07—Applications of wireless loudspeakers or wireless microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
Definitions
- This specification relates to processing audio signals and, more specifically, to processing audio signals for mixing audio signals.
- A stereo or multi-channel recording can be passed from the recording or capture apparatus to a listening apparatus and replayed using a suitable multi-channel output, such as a multi-channel loudspeaker arrangement or, with virtual surround processing, a pair of stereo headphones or a headset.
- This specification describes a method comprising: receiving, from a far-field microphone device, at least one far-field audio signal in a time domain corresponding to a mobile source located within a recording space, the or each far-field audio signal corresponding to a respective channel of an audio mixture of the far-field microphone device; receiving, from a near-field microphone, a first near-field audio signal in a time domain corresponding to the mobile source; determining location information relating to the mobile source; transforming the or each far-field audio signal and the first near-field audio signal from the time domain to a time-frequency domain; and using the transformations of the far-field audio signal and the first near-field audio signal and the location information of the mobile source to determine a set of room impulse response filters of the recording space.
- The method may further comprise: receiving a selection of a position within the recording space; receiving a second near-field audio signal associated with a source; identifying a room impulse response filter relating to the selected position within the recording space from the set of room impulse response filters of the recording space; applying the selected room impulse response filter to the second near-field audio signal to obtain a projected second near-field audio signal; and augmenting the audio mixture of the far-field microphone device by adding the projected second near-field audio signal to the audio mixture.
- the room impulse response filter applied to the second near-field audio signal may be retrieved from a room impulse response filter database.
- the room impulse response filter database may contain room impulse response filters obtained from a broadband signal within the recording space.
- The method may further comprise: receiving a selection of a position within the recording space; receiving a third near-field audio signal, the third near-field audio signal being associated with the selected position; identifying a room impulse response filter relating to the selected position within the recording space from the set of room impulse response filters of the recording space; and applying the identified room impulse response filter to the third near-field audio signal.
- the room impulse response filter applied to the third near-field audio signal may be calculated using the first near-field audio signal and the far-field audio mixture.
- the set of room impulse response filters may be determined using a block-wise linear least squares projection algorithm applied to a broadband calibration signal.
- the set of room impulse response filters may be collected during a calibration phase and stored in a room impulse response database.
- the set of room impulse response filters may be determined using far-field and near- field audio signals obtained in real-time.
- the mobile source may move around the recording space either manually or automatically.
- the set of room impulse response filters may be obtained using a recursive least squares algorithm.
- The near-field microphone may be provided with a location tag, and the location information may be received from the location tag.
- the location information may be determined using multilateration.
- the method may further comprise determining a signal activity detection signal.
- this specification describes an apparatus configured to perform a method according to any preceding claim.
- this specification describes computer-readable instructions which when executed by computing apparatus cause the computing apparatus to perform a method according to the first aspect of the specification.
- This specification describes apparatus comprising: at least one processor; and at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to: receive, from a far-field microphone device, at least one far-field audio signal in a time domain corresponding to a mobile source located within a recording space, the or each far-field audio signal corresponding to a respective channel of an audio mixture of the far-field microphone device; receive, from a near-field microphone, a first near-field audio signal in a time domain corresponding to the mobile source; determine location information relating to the mobile source; transform the or each far-field audio signal and the first near-field audio signal from the time domain to a time-frequency domain; and use the transformations of the far-field audio signal and the first near-field audio signal and the location information of the mobile source to determine a set of room impulse response filters of the recording space.
- This specification describes a computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causing performance of at least: receiving, from a far-field microphone device, at least one far-field audio signal in a time domain corresponding to a mobile source located within a recording space, the or each far-field audio signal corresponding to a respective channel of an audio mixture of the far-field microphone device; receiving, from a near-field microphone, a first near-field audio signal in a time domain corresponding to the mobile source; determining location information relating to the mobile source; transforming the or each far-field audio signal and the first near-field audio signal from the time domain to a time-frequency domain; and using the transformations of the far-field audio signal and the first near-field audio signal and the location information of the mobile source to determine a set of room impulse response filters of the recording space.
- This specification describes apparatus comprising: means for receiving, from a far-field microphone device, at least one far-field audio signal in a time domain corresponding to a mobile source located within a recording space, the or each far-field audio signal corresponding to a respective channel of an audio mixture of the far-field microphone device; means for receiving, from a near-field microphone, a first near-field audio signal in a time domain corresponding to the mobile source; means for determining location information relating to the mobile source; means for transforming the or each far-field audio signal and the first near-field audio signal from the time domain to a time-frequency domain; and means for using the transformations of the far-field audio signal and the first near-field audio signal and the location information of the mobile source to determine a set of room impulse response filters of the recording space.
- Figure 1 is a schematic diagram of an audio mixing system and a recording space
- Figure 2 is a schematic block diagram of elements of certain embodiments
- Figure 3 is a flow chart illustrating operations carried out in certain embodiments
- Figure 4 is an illustration of a recording space
- Figure 5 is a schematic diagram of an audio mixing system and a recording space
- Figure 6 is a schematic diagram of an audio mixing system and a recording space as a target source is replaced with a replacement source
- Figure 7 is a schematic diagram of an audio mixing system and a recording space as a new source is introduced to an audio mixture.
- Embodiments of the present invention relate to mixing audio signals received from both a near-field microphone and from a far-field microphone.
- Example near-field microphones include Lavalier microphones, which may be worn by a user to allow hands-free operation, and handheld microphones.
- the near-field microphone may be location tagged.
- the near-field signals obtained from near-field microphones may be termed "dry signals", in that they have little influence from the recording space and have relatively high signal-to-noise ratio (SNR).
- Far-field microphones are microphones that are located relatively far away from a sound source.
- an array of far-field microphones may be provided, for example in a mobile phone or in a Nokia Ozo (RTM) or similar audio recording apparatus.
- Devices having multiple microphones may be termed multichannel devices and can detect an audio mixture comprising audio components received from the respective channels.
- The microphone signals from far-field microphones may be termed "wet signals", in that they have significant influence from the recording space (for example from ambience, reflections, echoes, reverberation, and other sound sources). Wet signals tend to have relatively low SNR. In essence, the near-field and far-field signals are in different "spaces": near-field signals in a "dry space" and far-field signals in a "wet space".
- When the originally "dry" audio content from the sound sources reaches the far-field microphone array, the audio signals have changed because of the effect of the recording space. That is to say, the signal becomes "wet" and has a relatively low SNR.
- the near- field microphones are much closer to the sound sources than the far-field microphone array. This means that the audio signals received at the near-field microphones are much less affected by the recording space.
- the dry signal has much higher signal to noise ratio and lower cross talk with respect to other sound sources. Therefore, the near-field and far-field signals are very different and mixing the two (“dry” and "wet") results in audible artefacts or non-natural sounding audio content.
- a signal outside the system needs to be inserted into the audio mixture.
- an audio stream from an external player such as a professional audio recorder may be mixed with audio content recorded in a particular recording space.
- These signals need to be mixed together because only the microphone array can provide spatial audio content, for example for a virtual reality (VR) or augmented reality (AR) audio delivery system.
- Embodiments of this invention provide a database where estimated RIR values are collected around the place of performance based on the captured "dry" and "wet" signals as well as available position data of the near-field microphones (which correspond to the position of the sound source).
- the RIR data are estimated based on the dry to wet signal transfer function at every relevant position within the recording space.
- the RIR database may be collected during an initial calibration phase where a sound source (for example, white noise, talking human, acoustic instrument, a flying drone with speaker, etc) is moving or is moved around the recording space either manually or automatically.
- the RIR database can be used during the performance to insert additional sound sources to the audio mix in real-time.
- In some circumstances the recording space might have a higher SNR available, for example when a studio audience is missing; special signals such as white noise may also be used to provide more accurate room impulse responses for the whole frequency range.
- continuous collection of new RIR data is performed during the recording itself, the new RIR data being inserted into the database as the actual performance occurs.
- Additional RIR data that is inserted into a pre-existing RIR database can also be collected during the actual performance. Collection of RIR data during a performance can be made in order to add more data points to make the database denser.
- the position grid can be made denser. For instance, data may be acquired for a 10 centimetre (cm) grid instead of an originally calibrated 20 cm grid so that more data points can be gathered.
- The RIR database can contain time-varying RIR values. To capture time-varying responses, RIR measurements need to be made over an extended period of time for optimal quality. For example, when more people enter the recording space, damping of the recording space occurs, which affects its acoustic properties.
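The position-gridded RIR database described above can be sketched as follows (a minimal illustration in Python; the class name, the 2-D grid and the nearest-cell fallback are assumptions for illustration, not details from the specification):

```python
class RIRDatabase:
    """Sketch of an RIR database keyed by a quantized position grid,
    populated during calibration and densified during the performance."""

    def __init__(self, grid_cm=20.0):
        self.grid = grid_cm / 100.0  # grid spacing in metres
        self.entries = {}            # (ix, iy) -> RIR filter

    def _key(self, pos):
        x, y = pos
        return (round(x / self.grid), round(y / self.grid))

    def insert(self, pos, rir):
        """Store an RIR measured at position `pos` (metres)."""
        self.entries[self._key(pos)] = rir

    def lookup(self, pos):
        """Return the RIR at the grid cell nearest to `pos`,
        falling back to the closest populated cell."""
        key = self._key(pos)
        if key in self.entries:
            return self.entries[key]
        best = min(self.entries,
                   key=lambda k: (k[0] - key[0]) ** 2 + (k[1] - key[1]) ** 2)
        return self.entries[best]
```

Re-running the calibration with `grid_cm=10.0` would correspond to the denser 10 cm grid mentioned above.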
- Figure 1 shows an audio mixing system 100 which comprises a far-field audio recording device 101, such as a video/audio capture device, and one or more near-field audio recording devices 102, such as Lavalier microphones.
- the far-field audio recording device 101 comprises an array of far-field microphones and may be a mobile phone, a stereoscopic video/audio capture device or similar recording apparatus such as the Nokia Ozo (RTM).
- the near-field audio recording devices 102 may be worn by a user, for example a singer or actor.
- the far-field audio recording device 101 and the near- field audio recording devices 102 are located within a recording space 103.
- the far-field audio recording device 101 is in communication with an RIR processing apparatus 104 either via a wired or wireless connection.
- the RIR processing apparatus 104 may be located within the recording space 103 or outside the recording space 103.
- the RIR processing apparatus 104 has access to an RIR database 105 containing RIR data relating to the recording space 103.
- The RIR database 105 may be physically incorporated with the RIR processing apparatus 104. Alternatively, the RIR database 105 may be maintained remotely with respect to the RIR processing apparatus 104.
- FIG. 2 is a schematic block diagram of the RIR processing apparatus 104.
- the RIR processing apparatus 104 may be incorporated within a general purpose computer. Alternatively, the RIR processing apparatus 104 may be a standalone apparatus.
- the RIR processing apparatus 104 may comprise a short-time Fourier transform (STFT) module 201 for determining short-time Fourier transforms of received audio signals.
- the RIR processing apparatus 104 comprises an RIR estimator 202 and a projection module 203.
- the RIR processing apparatus 104 comprises a processor 204 which controls the STFT module 201, the RIR estimator 202 and the projection module 203.
- the RIR processing apparatus 104 comprises a memory 205.
- the memory comprises a volatile memory 206 such as random access memory (RAM).
- The memory also comprises non-volatile memory 207, such as read-only memory (ROM).
- the RIR processing apparatus 104 further comprises input/output 208 to enable communication with the far-field audio recording device 101 and with the RIR database 105 as well as any other remote entities.
- The input/output 208 comprises hardware, software and/or firmware that allows the RIR processing apparatus 104 to communicate with these entities.
- the RIR processing apparatus 104 comprises a processor 204 communicatively coupled with memory 205.
- the memory 205 has computer readable instructions stored thereon, which when executed by the processor 204 causes the processor 204 to cause performance of various ones of the operations described with reference to Figure 3.
- The RIR processing apparatus 104 may in some instances be referred to, in general terms, as "apparatus".
- the RIR processing apparatus 104 may be of any suitable composition.
- the processor 204 may be a programmable processor that interprets computer program instructions and processes data.
- the processor 204 may include plural programmable processors.
- the processor 204 may be, for example, programmable hardware with embedded firmware.
- the processor 204 may be termed processing means.
- The processor 204 may alternatively or additionally include one or more Application Specific Integrated Circuits (ASICs). In some instances, the processor 204 may be referred to as computing apparatus.
- the processor 204 is coupled to the memory (or one or more storage devices) 205 and is operable to read/write data to/from the memory 205.
- the memory 205 may comprise a single memory unit or a plurality of memory units, upon which the computer readable instructions (or code) is stored.
- the memory 205 may comprise both volatile memory and non-volatile memory.
- the computer readable instructions/program code may be stored in the non-volatile memory and may be executed by the processor 204 using the volatile memory for temporary storage of data or data and instructions. Examples of volatile memory include RAM, DRAM, and SDRAM etc. Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage, magnetic storage, etc.
- the memories in general may be referred to as non-transitory computer readable memory media.
- the term 'memory' in addition to covering memory comprising both non-volatile memory and volatile memory, may also cover one or more volatile memories only, one or more non-volatile memories only, or one or more volatile memories and one or more non-volatile memories.
- the computer readable instructions/program code may be pre-programmed into the RIR processing apparatus 104.
- the computer readable instructions may arrive at the RIR processing apparatus 104 via an electromagnetic carrier signal or may be copied from a physical entity such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD.
- the computer readable instructions may provide the logic and routines that enables the devices/apparatuses to perform the functionality described above.
- the combination of computer-readable instructions stored on memory (of any of the types described above) may be referred to as a computer program product.
- Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
- the software, application logic and/or hardware may reside on memory, or any computer media.
- the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
- a "memory" or “computer-readable medium” may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
- Examples of such hardware include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), signal processing devices and other devices. References to computer program, instructions, code, etc. should be understood to encompass software for a programmable processor, or firmware such as the programmable content of a hardware device, whether instructions for a processor or configuration settings for a fixed-function device, gate array or programmable logic device.
- The far-field audio recording device 101 comprises a microphone array composed of far-field microphones.
- The sound sources may be moving and have time-varying mixing properties, denoted by a room impulse response (RIR) h_c,p(n, τ) for each channel c at each time index n.
- Some of the sound sources (e.g. a speaker, a car, a piano or any other sound source) may be provided with a near-field audio recording device 102.
- The resulting mixture signal can be given as:

  y_c(n) = Σ_p Σ_τ h_c,p(n, τ) s_p(n − τ) + n_c(n)   (Equation 1)

- wherein: y_c(n) is the audio mixture in the time domain for each channel index c of the far-field audio recording device 101, i.e. the signal received at each far-field microphone;
- h_c,p(n, τ) is the impulse response in the time domain (sample delay index τ) for source p, i.e. the room impulse response;
- s_p(n) is the near-field signal of source p; and
- n_c(n) is the noise signal in the time domain.
- Applying the STFT to each array signal allows expressing the capture in the time-frequency domain:

  y_f,t = Σ_p Σ_d h^p_f,d s^p_f,t−d + n_f,t = Σ_p x^p_f,t + n_f,t   (Equation 2)

- wherein: y_f,t is the STFT of the array mixture (frequency and frame index f, t);
- s^p_f,t is the STFT of the pth near-field source signal;
- h^p_f,d is the room impulse response (RIR) in the STFT domain (frame delay index d);
- x^p_f,t is the STFT of the pth reverberated (filtered/projected) source signal; and
- n_f,t is the STFT of the noise signal.
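The convolutive time-frequency mixing of Equation 2 can be illustrated for a single channel as follows (a sketch assuming NumPy arrays; the function name and array layout are illustrative, not from the specification):

```python
import numpy as np

def mix_convolutive_stft(h, s, noise=None):
    """Convolutive time-frequency mixing of Equation 2 for one channel:
    y[f, t] = sum_p sum_d h[p][f, d] * s[p][f, t - d] (+ noise).
    Each RIR h[p] has shape (F, D); each source STFT s[p] has shape (F, T),
    with D <= T assumed."""
    F, T = s[0].shape
    y = np.zeros((F, T), dtype=complex)
    for hp, sp in zip(h, s):
        D = hp.shape[1]
        for d in range(min(D, T)):
            # RIR tap d weights the source frames delayed by d
            y[:, d:] += hp[:, d:d + 1] * sp[:, :T - d]
    if noise is not None:
        y += noise
    return y
```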
- the length of the convolutive frequency domain RIR is D timeframes which can vary from a few timeframes to several tens of frames depending on the STFT window length and maximum effective amount of reverberation components in the recording environment.
- This model differs from the usual assumption of instantaneous mixing in frequency domain with mixing consisting of complex valued weights only for the current timeframe.
- An audio signal y_c(n) is received from the far-field audio recording device 101.
- An audio signal s_p(n) is received from the near-field audio recording device 102 for those sound sources provided with a near-field audio recording device 102.
- the location of the mobile source is determined.
- the location can be determined using information received from a tag with which the mobile source is provided. Alternatively, the location may be calculated using multilateration techniques described below.
- a short-time Fourier transform (STFT) is applied to both far-field and near- field audio signals.
- Alternative transforms may be applied to the audio signals as described below.
- time differences between the near-field and far-field audio signals can be taken into account. However, if the time differences are large (several hundreds of milliseconds or more) a rough alignment may be done prior to the process commencing. For example, if a wireless connection between a near-field microphone and RIR processor causes a delay, the delay may be manually fixed by delaying the other signals in the RIR processor or by an external delay processor which may be implemented as hardware or software.
- A signal activity detection signal may be estimated from the near-field signal in order to determine when the RIR estimate is to be updated. For example, if a source does not emit any signal over a time period, its RIR value does not need to be estimated.
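A minimal energy-based detector in that spirit (an assumption for illustration; the specification does not fix the detection method or its threshold):

```python
import numpy as np

def signal_active(frame, noise_floor, threshold_db=10.0):
    """Declare a near-field frame 'active' when its mean energy exceeds
    the noise floor by threshold_db decibels; RIR updates would be
    skipped for inactive frames."""
    energy = np.mean(np.abs(frame) ** 2)
    return 10.0 * np.log10(energy / noise_floor + 1e-12) > threshold_db
```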
- the RIR estimation may be performed using a block-wise linear least squares (LS) projection in offline operation mode, that is where the RIR estimation is performed as part of a calibration operation.
- In an online operation mode, where RIR values are estimated in real-time, the RIR estimation may be performed using a recursive least squares (RLS) algorithm.
- The RLS algorithm may also be used in offline operation instead of the block-wise linear LS algorithm. In any case, as a result, a set of RIR filters in the time-frequency domain is obtained.
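One RLS iteration can be sketched as follows (shown real-valued for clarity; complex STFT data requires the usual conjugations, and the function and variable names are illustrative, not from the specification):

```python
import numpy as np

def rls_update(h, P, x, y, lam=0.99):
    """One recursive least squares (RLS) step estimating h with y ≈ h @ x.
    h: current RIR estimate (length D); P: inverse correlation matrix (D x D);
    x: regressor holding the current and D-1 delayed near-field coefficients;
    y: the corresponding far-field mixture coefficient; lam: forgetting factor."""
    Px = P @ x
    k = Px / (lam + x @ Px)             # gain vector
    e = y - h @ x                       # a priori error
    h = h + k * e                       # coefficient update
    P = (P - np.outer(k, x @ P)) / lam  # inverse correlation update
    return h, P
```

The forgetting factor lam lets the estimate track the time-varying RIR of a moving source.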
- The RIR h^p_f,d can be thought of as a projection operator from the near-field signal space to the far-field signal space.
- the projection is time, frequency and channel dependent.
- The parameters of the RIR h^p_f,d can be estimated using linear least squares (LS) regression, which is equivalent to finding the projection between the near-field and far-field signal spaces.
- LS regression for estimating RIR values may be applied for moving sound sources by processing the input signal in blocks of approximately 500ms and the RIR values may be assumed to be stationary within each block. Block-wise processing with moving sources assumes that the difference between RIR values associated with adjacent frames is relatively small and remains stable within the analysed block. This is valid for sound sources that move at low speeds in an acoustic environment where small changes in source position with respect to the receiver do not cause substantial change in the RIR value.
- The method of LS regression is applied individually for each source signal in each channel of the array. Additionally, the RIR values are frequency dependent and each frequency bin of the STFT is processed individually. Thus, in the following discussion it should be understood that the processing is repeated for all channels and all frequencies. Assuming a block of STFT frames with indices t, ..., t + T, where the RIR is assumed stationary inside the block, the mixture signal STFT with the convolutive frequency-domain mixing can be given in matrix form as y = X h, wherein X is a matrix containing the near-field STFT coefficients starting from frame t − 0 and the delayed versions starting from t − 1, ..., t − (D − 1); and h is the RIR to be estimated.
- The length of the RIR filter to be estimated is D STFT frames.
- The block length is T + 1 frames, with T + 1 > D so that the model is overdetermined and overfitting is avoided.
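The block-wise estimate for one frequency bin and one channel can be sketched as follows (a minimal NumPy least squares solve; the function name and array layout are illustrative, not from the specification):

```python
import numpy as np

def estimate_rir_block(s, y, D):
    """Block-wise linear LS estimate of a D-frame RIR for one frequency
    bin and one channel. `s` and `y` are length-(T+1) arrays of near-field
    and far-field STFT coefficients for the block, with T + 1 > D."""
    T1 = len(s)
    # X[t, d] = s[t - d]: current frames in column 0, delayed copies after
    X = np.zeros((T1, D), dtype=complex)
    for d in range(D):
        X[d:, d] = s[:T1 - d]
    # least squares solve of y ≈ X h
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h, X
```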
- The projected source signal for a single block can be trivially obtained as x̂ = X ĥ, where ĥ is the RIR estimated for the block.
- Equation 9 demonstrates the removal of a particular source signal from the audio mixture. As well as removing a source from the audio mixture, it is also possible to add the effect of a source to the audio mix. This may be done by using addition instead of subtraction, with a user-specified gain.
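In code terms, removal and addition differ only in the sign of the gain (a sketch using the block notation X and h above; the function name is illustrative):

```python
import numpy as np

def project_and_augment(y_mix, X, h, gain=1.0):
    """Project a near-field block X into the 'wet' far-field space with
    the estimated RIR h, then add the result to the mixture with a
    user-specified gain (gain = -1 removes the source)."""
    x_proj = X @ h            # reverberated (projected) source signal
    return y_mix + gain * x_proj
```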
- the RIR estimation presented in embodiments of the present invention allows removal of a target source from the audio mixture or addition of a source to the audio mixture of the far-field audio recording device 101.
- the signal emitted by the source can be replaced by augmenting separate content to the array mixture of the far-field audio recording device 101.
- the problem of augmenting separate signals using the RIR values estimated from the target source in prior approaches lies in the fact that the source signal is not broadband and estimates of RIR values from frequencies with no signal energy emitted are unreliable. Having different spectral content (source signal frequency occupancy in each frame) leads to poor subjective quality of the synthesized augmented source since accurate RIR data for all frequencies are not available.
- embodiments herein described provide a calibration method with a constant broadband signal which is used to estimate and store RIR values from substantially all possible locations of the recording space.
- the purpose of the calibration stage is to capture reliable broadband RIR data from all positions of the recording space before the actual operation (i.e. before an audio recording or broadcast).
- the location data may be either relative or absolute such as GPS coordinates.
- the target source is removed from the mixture using the block-wise LS or RLS method described above.
- the direction of arrival (DOA) is estimated either acoustically or using other localization techniques.
- for example, the DOA may be estimated acoustically as follows.
- the estimated RIR value in the time domain relating to each channel of the array of the far-field audio device 101 is analysed.
- the first received RIR sample that is above a threshold gives an estimate of the delay at which the sound arrives at the nearest microphone of the far-field audio device 101.
- Comparing the delays from all microphones of the far-field audio device 101 provides the time differences of arrival (TDOA) between microphones in the array of the far-field audio device 101. From these values the direction can be calculated using multilateration methods that are known in the art.
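The threshold-based delay detection and a single-pair far-field DOA computation can be sketched as follows. This is an assumption-laden illustration (names, threshold, speed of sound are ours); a real array would combine several microphone pairs with the multilateration methods the text refers to:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, an assumed room-temperature value

def first_arrival(rir, threshold):
    """Index of the first time-domain RIR sample whose magnitude
    exceeds the threshold, i.e. the direct-path delay estimate."""
    idx = np.flatnonzero(np.abs(rir) > threshold)
    return int(idx[0]) if idx.size else None

def pairwise_doa(delay_a, delay_b, fs, mic_distance):
    """Far-field DOA angle (radians) for one microphone pair from the
    sample-delay difference between its two channels."""
    tdoa = (delay_b - delay_a) / fs
    cos_theta = np.clip(SPEED_OF_SOUND * tdoa / mic_distance, -1.0, 1.0)
    return float(np.arccos(cos_theta))
```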
- the augmented source is synthesized using the target source DOA estimates for retrieving the RIR corresponding to each DOA from the database generated in the calibration stage.
- the length of the calibration stage depends on the size of the recording space and the required density of the database. The length of the calibration stage may vary from around 10 seconds to several minutes.
- FIG 4 is a plan view of a recording space 103 in accordance with an embodiment whereby audio data is recorded as part of a calibration stage.
- a speaker 400 is provided with a near-field microphone 102 such as a Lavalier microphone or a handheld microphone.
- the speaker 400 may also be provided with a location tag 401.
- a far-field audio recording device 101 is provided towards the centre of the recording space 103.
- the speaker 400 walks around the recording space 103 along a trajectory T.
- the speaker 400 speaks so that audio data is recorded by both the far-field audio recording device 101 and the near-field microphone 102.
- the person may also be playing an instrument or carrying a sound producing loudspeaker.
- the room impulse response (RIR) data are collected around the place of performance based on the captured "dry" and "wet" signals as well as available position data from the location tag 401.
- the RIR data are estimated based on the dry-to-wet signal transfer function at every relevant position with a processing unit using one of the algorithms described above.
- Figure 5 is a plan view of a recording space 103 in accordance with another embodiment, whereby audio data is recorded as part of a calibration stage using drones 500.
- each drone 500 is provided with a near-field microphone 102.
- Each of the drones 500 emits a noise, either through a loudspeaker or merely from the drone rotors.
- Two or more far-field audio recording devices 101 are also provided.
- the RIR database 105 may be collected during an initial calibration phase where an audio source of wideband noise, for example white noise, MLSA sequence, pseudo random noise, or a talking human, an acoustic instrument, a flying drone with speaker or a ground based robot, is moving or is moved around the recording space 103 either manually or automatically.
- the benefit of having some calibration recordings and database collection prior to an actual performance is that the pre-existing RIR database 105 can be used during the performance to insert additional sound sources to the audio mix in real-time.
- when wideband noise is used for calibration, the RIR data are more accurate over the whole spectrum.
- the recording stage will also have higher SNR available, for example when the audience is missing from the recording space 103. This may provide more accurate and/or faster RIR measurements.
- RIR data may be collected during the performance itself. This may be instead of the calibration phase described above or in addition to the calibration phase. In the latter scenario, the reliability of the RIR data captured during the calibration process described above using the block-wise linear least squares projection may be improved by capturing further RIR data during the performance itself.
- RIR data estimated are generally valid only for the frequency indices at which the source produced meaningful acoustic output.
- RIR data are applied to the same close-field signal and no mismatch between time-frequency content and RIR data occurs.
- the RIR data need to be broadband and valid at least for the STFT frequency indices where the augmented signal has significant energy.
- RIR data estimated at each position of the recording space 103 are used to gradually build a database of broadband RIR data by combining estimates at different times from the same location within the recording space 103.
- the recent magnitude spectrum of the near-field signal can be used as an indicator of reliability of the RIR data and only frequency indices with substantial signal energy are updated in the database.
- the database update can vary from a simple weighted average to more advanced methods.
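A frequency-selective weighted-average update of this kind can be sketched as follows. The function name, the alpha coefficient and the energy floor are illustrative assumptions, not values from the patent:

```python
import numpy as np

def update_rir_entry(stored_rir, new_rir, near_mag, energy_floor, alpha=0.8):
    """Weighted-average update of one stored per-position RIR.
    Only frequency bins where the near-field signal carried substantial
    energy -- and the new estimate is therefore reliable -- are updated;
    the rest keep their previously stored values."""
    reliable = near_mag > energy_floor
    updated = stored_rir.copy()
    updated[reliable] = (alpha * stored_rir[reliable]
                         + (1.0 - alpha) * new_rir[reliable])
    return updated
```

Repeating this over many passes through the same location gradually fills in a broadband RIR even when each individual source signal was narrowband.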
- real-time RIR estimation may be performed by using a recursive least squares (RLS) algorithm.
- the signal model, consisting of convolutive mixing in the time-frequency domain, may be defined as: x_t = Σ_{p=1}^{P} Σ_{d=0}^{D−1} h_{d,t}^{(p)} y_{t−d}^{(p)} + n_t
- the filter weights vary for each time frame t and, again dropping the frequency index f and the channel dimension, the filtering equation for a single source at time frame t may be specified as: x̂_t = Σ_{d=0}^{D−1} h_{d,t} y_{t−d}
- Efficient real-time operation can be achieved with recursive estimation of the RIR filter weights h using the recursive least squares (RLS) algorithm.
- the cost function to be minimized with respect to filter weights may be expressed as:
- C(h_t) = Σ_{i=0}^{t} λ^{t−i} |x_i − h_tᴴ y_i|² (Equation 13), which accumulates the estimation error from past frames with exponential weight λ^{t−i}.
- The RLS algorithm minimizing Equation 13 is based on recursive estimation of the inverse correlation matrix P_t of the close-field signal and the optimal filter weights h_t, and can be summarized as:
- the initial regularization of the inverse autocorrelation matrix is achieved by defining δ using a small positive constant, typically from 10⁻² to 10¹.
- a small δ value causes faster convergence, whereas a larger δ value constrains the initial convergence to happen over a longer time period (for example, over a few seconds).
- the contribution of past frames to the RIR filter estimate at current frame t may be varied over frequency.
- the forgetting factor λ acts in a similar way as the analysis window shape in the truncated block-wise least squares algorithm.
- small changes in source position can cause substantial changes in the RIR filter values at high frequencies due to highly reflected and more diffuse sound propagation paths. Therefore, the contribution of past frames at high frequencies needs to be lower than at low frequencies. It is assumed that the RIR parameters change slowly at lower frequencies and source evidence can be integrated over longer periods, meaning that the exponential weight λ^{t−i} can have substantial values for frames up to 1.5 seconds in the past.
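The recursive estimation for one frequency bin can be sketched with the standard complex RLS update equations as follows. The function name, variable names and the default λ and δ values are illustrative, not from the patent:

```python
import numpy as np

def rls_rir(y_near, x_far, D, lam=0.995, delta=0.1):
    """Recursive least-squares RIR estimation for one frequency bin
    (sketch of the standard exponentially weighted RLS recursion).

    lam:   forgetting factor; per the text it may be chosen smaller at
           high frequencies than at low frequencies.
    delta: initial regularization, P_0 = I / delta.
    """
    h = np.zeros(D, dtype=complex)
    P = np.eye(D, dtype=complex) / delta
    for t in range(len(y_near)):
        # Tap-input vector: current and D-1 delayed near-field frames.
        u = np.array([y_near[t - d] if t >= d else 0.0 for d in range(D)],
                     dtype=complex)
        Pu = P @ u
        k = Pu / (lam + np.conj(u) @ Pu)   # gain vector
        e = x_far[t] - np.conj(h) @ u      # a-priori estimation error
        h = h + k * np.conj(e)             # filter weight update
        P = (P - np.outer(k, np.conj(u) @ P)) / lam
    # Return taps such that x[t] ~ sum_d h[d] * y[t - d].
    return np.conj(h)
```

The estimated taps are then used exactly as in the block-wise case to project the near-field signal and remove it from, or add it to, the mixture.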
- a similar regularization as described above with reference to block-wise LS may also be adopted for the RLS algorithm.
- the regularization is done to achieve a similar effect as in block-wise LS to improve robustness towards low-frequency crosstalk between near- field signals and avoid excessively large RIR weights.
- the near-field microphones are generally not directive at low frequencies and can pick up a fair amount of low-frequency signal content generated by noise sources, for example traffic, loudspeakers, etc.
- the RLS algorithm is given in a direct form.
- the formulation can be found for example from T. van Waterschoot, G. Rombouts, and M. Moonen, "Optimally regularized recursive least squares for acoustic echo cancellation," in Proceedings of The second annual IEEE BENELUX/DSP Valley Processing Symposium (SPS-DARTS 2006), Antwerp, Belgium, 2005, pp. 28-29.
- the direct form RLS algorithm updates are specified as,
- This algorithm would give the same result as the RLS algorithm discussed above but requires an explicit inversion of the autocorrelation matrix, and is thus computationally more expensive; however, it does allow regularization of that matrix.
- TR: Tikhonov regularization
- β_TR is based on the regularization kernel and the inverse average log-spectrum of the close-field signal. It should be noted that the kernel k_f needs to be modified to account for the differences between the block-wise LS and RLS algorithms, and can depend on the level difference between the close-field signal and the far-field mixtures.
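One direct-form update with a Tikhonov term can be sketched as follows. For simplicity the sketch uses a scalar regularization weight times the identity; the frequency-dependent kernel-based weight described in the text would replace it. All names and default values are illustrative:

```python
import numpy as np

def direct_rls_step(R, r, u, x, lam=0.995, reg=1e-3):
    """One direct-form RLS update with Tikhonov regularization.
    R: autocorrelation matrix of the tap-input vectors.
    r: cross-correlation vector.
    u: tap-input vector of close-field frames.
    x: far-field observation for the current frame."""
    R = lam * R + np.outer(u, np.conj(u))
    r = lam * r + u * np.conj(x)
    # The explicit regularized solve is what makes the direct form more
    # expensive than the P-matrix recursion, but it permits regularization.
    h = np.linalg.solve(R + reg * np.eye(len(u)), r)
    return R, r, h
```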
- In addition to the regularization weight being adjusted based on the average log-spectrum, it can also be varied based on the RMS level difference between near-field and far-field signals.
- the RMS levels of these signals might not be calibrated in real-time operation and thus an additional regularization weight strategy is required.
- a trivial low-pass filter applied to the RMS of each individual STFT frame can be used to track the varying RMS level of close-field and far-field signals.
- the estimated RMS level is used to adjust the regularization weights β_LMR or β_TR in order to achieve a similar regularization impact as with the RMS-calibrated signals assumed in the earlier equations.
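The RMS tracking step can be sketched as a one-pole low-pass over per-frame RMS values. The function name and the smoothing coefficient are assumptions; the smoothed near-field and far-field levels would then be compared to scale the regularization weights:

```python
def track_rms(frame_rms, smooth=0.9):
    """One-pole low-pass over per-frame STFT RMS values, as a simple
    tracker of the slowly varying close-field / far-field levels.
    Returns the smoothed level for every frame."""
    level = frame_rms[0]
    smoothed = []
    for r in frame_rms:
        level = smooth * level + (1.0 - smooth) * r
        smoothed.append(level)
    return smoothed
```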
- Additional RIR data to be inserted into the RIR database 105 may be collected during the actual performance. This may be done in order to add more data points, for example to make the RIR position database grid denser, or for sensing time-varying responses, for example when a larger crowd comes inside the room and dampens it. Time-varying responses may also be useful in post-production if some original performances are edited and later added back to the original recording space 103.
- Figure 6 illustrates a recording environment whereby a target source 601 is removed from the audio mixture and replaced with a replacement source 602 at the same position. Based on target source DOA trajectory or location estimates obtained from a location tag of the target source 601, the signal emitted by the target source 601 can be replaced by augmenting separate content to the array mixture.
- An example scenario of this simple method to replace a speaker inside a room with another person is shown in Figure 6.
- the replacement of a target source may be done using real-time RIR estimation, where no RIR database 105 need be used.
- a calibration phase may be performed with respect to the recording space 103, as described above.
- a drawback of augmenting separate signals using the RIR data estimated from the target source 601 in real time lies in the fact that the target source signal may not be broadband and estimates of RIR data from frequencies with no signal energy emitted may be unreliable. Where the target source 601 and the replacement source 602 have different spectral content (i.e. source signal frequency occupancy in each frame) poor subjective quality of the synthesized augmented source may result since accurate RIR data for all frequencies may not be available.
- a calibration phase is used to build up a RIR database 105, as described above.
- the RIR data in the RIR database 105 that is collected with wideband noise is accurate and reliable over the whole frequency spectrum. Using this pre-collected RIR data enables higher quality replacement of the audio source.
- a selection of a position within the recording space is received. This may be the position of the target source 601 received from any location determination method described above.
- a near-field audio signal is received from the target source 601.
- a RIR filter related to the position of the target source is identified.
- the identified room impulse response filter is then applied to the near-field audio signal of the target source to project the near-field audio signal of the target source into a far-field space.
- this RIR filter may be calculated in real-time.
- the projected near-field audio signal may then be removed from the audio mixture, as shown in Equation 9 above.
- a near-field audio signal from the replacement source 602 is received.
- a room impulse response filter relating to the position within the recording space is identified. This may be the same room impulse response filter used to remove the target source. Alternatively, the room impulse response filter applied to the near-field audio signal of the replacement source 602 may be retrieved from a room impulse response filter database collected during a calibration phase.
- the selected room impulse response filter is then applied to the near-field audio signal of the replacement source 602 to obtain a projected near-field audio signal of the replacement source 602.
- the audio mixture of the far-field microphone device may then be augmented by adding the projected near-field audio signal of the replacement source 602 to the audio mixture.
- the target source 601 is removed and replaced with the replacement source 602.
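The replacement flow above can be sketched per frequency bin as a projection, a subtraction (cf. Equation 9) and a gained addition. All names are illustrative, and applying the D-tap RIR filter along the frame axis is modeled as a convolution:

```python
import numpy as np

def project(near_stft, h):
    """Apply a D-tap RIR filter along the frame axis of one frequency
    bin, projecting a near-field signal into the far-field channel."""
    return np.convolve(near_stft, h)[:len(near_stft)]

def replace_source(mix, near_target, near_repl, h_target, h_repl, gain=1.0):
    """Subtract the projected target source from the far-field mixture,
    then add the projected replacement source with a user gain."""
    out = mix - project(near_target, h_target)
    return out + gain * project(near_repl, h_repl)
```

Adding a completely new source at a virtual position, as described below with reference to Figure 7, is the same operation with only the addition term.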
- Figure 7 illustrates a recording environment whereby a completely new near-field signal recorded from a new source 701 located outside the recording space 103 is inserted into the audio mix of the far-field audio recording device 101.
- the RIR data need to be broadband and valid at least for the STFT frequency indices where the augmenting signal has significant energy.
- a user may wish for the new source 701 to be added to the recording space 103 at a particular virtual location within the recording space 103. Based on this specified virtual location, the new signal can be used to augment the content to the audio mixture recorded by the far-field microphone array of the far-field audio recording device 101.
- a virtual person can be visually rendered to an AR view and at the same time the audio can be rendered in such a way that it sounds as though the new source 701 is standing at the location at which the source appears visually in AR.
- An example scenario of this method to add a virtual speaker to a room is shown in Figure 7.
- Rendering a virtual speaker to a room using advanced AR, VR or 6DoF rendering may require a large amount of RIR data. For example, there may be more than one far-field audio recording device 101-1 and 101-2 and several new sound sources 701 to be rendered at the same time.
- the 6DoF usage scenario requires that rendering from a first position to a second position is possible (in 6DoF the listener, for whom the audio is being rendered in playback, can move freely anywhere in the virtual environment).
- Embodiments of the invention use the RIR data from the RIR database 105 to render the audio objects with naturally sounding presence in any location within the scene.
- time varying RIR responses may be useful in post-production if some original performances are edited and later added back to the original recording space 103. In practice this requires that the most recent time stamped RIR data is obtained from the RIR database 105 in addition to the selected position.
- a near-field audio signal from the new source 701 is received to be added to a far-field audio mixture at a selected position of a recording space 103.
- a room impulse response filter relating to the selected position within the recording space is identified.
- the room impulse response filter applied to the near-field audio signal of the new source 701 may be retrieved from a room impulse response filter database collected during a calibration phase.
- the selected room impulse response filter is then applied to the near-field audio signal of the new source 701 to obtain a projected near-field audio signal of the new source 701.
- the audio mixture of the far-field microphone devices 101 may then be augmented by adding the projected near-field audio signal of the new source 701 to the audio mixture.
- the different functions discussed herein may be performed in a different order and/or concurrently with each other.
- one or more of the above-described functions may be optional or may be combined.
- the flow diagram of Figure 3 is an example only and various operations depicted therein may be omitted, reordered and/or combined.
Landscapes
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Circuit For Audible Band Transducer (AREA)
- Stereophonic System (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB1709846.8A GB201709846D0 (en) | 2017-06-20 | 2017-06-20 | Processing audio signals |
PCT/FI2018/050395 WO2018234617A1 (en) | 2017-06-20 | 2018-05-25 | AUDIO SIGNAL PROCESSING |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3642957A1 true EP3642957A1 (de) | 2020-04-29 |
EP3642957A4 EP3642957A4 (de) | 2021-03-17 |
EP3642957B1 EP3642957B1 (de) | 2023-07-19 |
Family
ID=59462425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18821106.4A Active EP3642957B1 (de) | 2017-06-20 | 2018-05-25 | Verarbeitung von audiosignalen |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3642957B1 (de) |
GB (1) | GB201709846D0 (de) |
WO (1) | WO2018234617A1 (de) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113032721A (zh) * | 2021-03-11 | 2021-06-25 | 哈尔滨工程大学 | 一种低计算复杂度的远场和近场混合信号源参数估计方法 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2577905A (en) | 2018-10-10 | 2020-04-15 | Nokia Technologies Oy | Processing audio signals |
CN111414669B (zh) * | 2018-12-19 | 2023-11-14 | 北京猎户星空科技有限公司 | 一种音频数据处理的方法及装置 |
CN111951786A (zh) * | 2019-05-16 | 2020-11-17 | 武汉Tcl集团工业研究院有限公司 | 声音识别模型的训练方法、装置、终端设备及介质 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9449613B2 (en) * | 2012-12-06 | 2016-09-20 | Audeme Llc | Room identification using acoustic features in a recording |
GB2543276A (en) * | 2015-10-12 | 2017-04-19 | Nokia Technologies Oy | Distributed audio capture and mixing |
-
2017
- 2017-06-20 GB GBGB1709846.8A patent/GB201709846D0/en not_active Ceased
-
2018
- 2018-05-25 WO PCT/FI2018/050395 patent/WO2018234617A1/en unknown
- 2018-05-25 EP EP18821106.4A patent/EP3642957B1/de active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113032721A (zh) * | 2021-03-11 | 2021-06-25 | 哈尔滨工程大学 | 一种低计算复杂度的远场和近场混合信号源参数估计方法 |
CN113032721B (zh) * | 2021-03-11 | 2022-11-01 | 哈尔滨工程大学 | 一种低计算复杂度的远场和近场混合信号源参数估计方法 |
Also Published As
Publication number | Publication date |
---|---|
EP3642957A4 (de) | 2021-03-17 |
GB201709846D0 (en) | 2017-08-02 |
EP3642957B1 (de) | 2023-07-19 |
WO2018234617A1 (en) | 2018-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3642957B1 (de) | Verarbeitung von audiosignalen | |
JP6637014B2 (ja) | 音声信号処理のためのマルチチャネル直接・環境分解のための装置及び方法 | |
JP5635669B2 (ja) | オーディオ入力信号の反響コンテンツを抽出および変更するためのシステム | |
KR102008771B1 (ko) | 청각-공간-최적화 전달 함수들의 결정 및 사용 | |
KR102470962B1 (ko) | 사운드 소스들을 향상시키기 위한 방법 및 장치 | |
WO2018234619A2 (en) | AUDIO SIGNAL PROCESSING | |
JP5857071B2 (ja) | オーディオ・システムおよびその動作方法 | |
US9552840B2 (en) | Three-dimensional sound capturing and reproducing with multi-microphones | |
US11317233B2 (en) | Acoustic program, acoustic device, and acoustic system | |
US11483651B2 (en) | Processing audio signals | |
KR101934999B1 (ko) | 잡음을 제거하는 장치 및 이를 수행하는 방법 | |
US10979846B2 (en) | Audio signal rendering | |
US11122381B2 (en) | Spatial audio signal processing | |
WO2022014326A1 (ja) | 信号処理装置および方法、並びにプログラム | |
WO2018234618A1 (en) | AUDIO SIGNAL PROCESSING | |
EP3643083A1 (de) | Räumliche audioverarbeitung | |
JP2008022069A (ja) | 音声収録装置および音声収録方法 | |
KR20240097694A (ko) | 임펄스 응답 결정 방법 및 상기 방법을 수행하는 전자 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20200120 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20210212 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04R 1/32 20060101ALN20210208BHEP Ipc: H04R 3/00 20060101AFI20210208BHEP Ipc: H04S 7/00 20060101ALN20210208BHEP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602018053743 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H03H0021000000 Ipc: H04R0003000000 Ref country code: DE Ref legal event code: R079 Free format text: PREVIOUS MAIN CLASS: H03H0021000000 Ipc: H04R0003000000 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20230217 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04S 7/00 20060101ALN20230206BHEP Ipc: H04R 1/32 20060101ALN20230206BHEP Ipc: H04R 3/00 20060101AFI20230206BHEP |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602018053743 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20230719 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1590677 Country of ref document: AT Kind code of ref document: T Effective date: 20230719 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231020 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231119 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231120 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231019 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231119 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231020 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602018053743 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230719 |
|
26N | No opposition filed |
Effective date: 20240422 |