WO2022190245A1 - Noise Suppression Device, Noise Suppression Method, and Noise Suppression Program - Google Patents
- Publication number: WO2022190245A1 (application PCT/JP2021/009490)
- Authority: WO (WIPO PCT)
- Prior art keywords
- noise
- data
- noise suppression
- input data
- weighting factor
- Prior art date
Classifications
- G—PHYSICS; G10—MUSICAL INSTRUMENTS; ACOUSTICS; G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0224—Processing in the time domain
- G10L25/03—Speech or voice analysis techniques characterised by the type of extracted parameters
- G10L25/18—the extracted parameters being spectral information of each sub-band
- G10L25/21—the extracted parameters being power information
- G10L25/51—specially adapted for comparison or discrimination
- G10L25/60—for measuring the quality of voice signals
Definitions
- the present disclosure relates to a noise suppression device, a noise suppression method, and a noise suppression program.
- The Wiener method is known as a technique for reducing noise components from a sound signal in which noise is mixed with speech. However, when noise reduction processing is performed in an environment with a low signal-to-noise (SN) ratio, the speech component is degraded. Therefore, a method has been proposed that suppresses deterioration of the speech component while improving the SN ratio by performing noise reduction processing according to the SN ratio (see, for example, Non-Patent Document 1).
- The present disclosure has been made to solve the above-described problems, and its purpose is to provide a noise suppression device, a noise suppression method, and a noise suppression program.
- A noise suppression device includes: a noise suppression unit that performs noise suppression processing on input data to generate noise-suppressed data; a weighting factor calculation unit that determines a weighting factor based on the input data in a predetermined section on the time series and the noise-suppressed data in that section; and a weighted sum unit that generates output data by performing weighted addition of the input data and the noise-suppressed data using a value based on the weighting factor as a weight.
- Another noise suppression device includes: a noise suppression unit that performs noise suppression processing on input data to generate noise-suppressed data; a weighting factor calculation unit that divides the data into a plurality of short intervals and determines a weighting factor for each of the plurality of short intervals based on the input data in the plurality of short intervals and the noise-suppressed data in the plurality of short intervals; and a weighted sum unit that generates output data by performing weighted addition of the input data and the noise-suppressed data in each of the plurality of short intervals using a value based on the weighting factor as a weight.
- FIG. 1 is a diagram showing an example of the hardware configuration of the noise suppression devices according to Embodiments 1 to 3;
- FIG. 2 is a functional block diagram schematically showing the configuration of the noise suppression device according to Embodiment 1;
- FIG. 3 is a flowchart showing the operation of the noise suppression device according to Embodiment 1;
- FIG. 4 is a functional block diagram schematically showing the configuration of the noise suppression device according to Embodiment 2;
- FIG. 5 is a diagram showing an example of the weighting factor table;
- FIG. 6 is a flowchart showing the operation of the noise suppression device according to Embodiment 2;
- FIG. 7 is a functional block diagram schematically showing the configuration of the noise suppression device according to Embodiment 3;
- FIG. 8 is a flowchart showing the operation of the noise suppression device according to Embodiment 3;
- FIG. 9 is a flowchart showing a method of calculating weighting factors in the noise suppression device according to Embodiment 3.
- A noise suppression device, a noise suppression method, and a noise suppression program according to embodiments will be described below with reference to the drawings.
- the following embodiments are merely examples, and the embodiments can be combined as appropriate and each embodiment can be modified as appropriate.
- FIG. 1 shows an example of a hardware configuration of a noise suppression device 1 according to Embodiment 1.
- the noise suppression device 1 is a device capable of executing the noise suppression method according to the first embodiment.
- the noise suppression device 1 is, for example, a computer that executes the noise suppression program according to the first embodiment.
- The noise suppression device 1 includes a processor 101 as an information processing section that processes information, a memory 102 as a volatile storage device, a nonvolatile storage device 103 as a storage section that stores information, and an input/output interface 104 used to transmit and receive data to and from external devices.
- the nonvolatile storage device 103 may be part of another device that can communicate with the noise suppression device 1 via a network.
- the noise suppression program can be obtained by downloading over a network or reading from a recording medium such as an optical disc storing information.
- the hardware configuration of FIG. 1 can also be applied to noise suppression devices 2 and 3 according to Embodiments 2 and 3, which will be described later.
- a processor 101 controls the overall operation of the noise suppression device 1 .
- the processor 101 is, for example, a CPU (Central Processing Unit) or an FPGA (Field Programmable Gate Array).
- the noise suppression device 1 may be realized by a processing circuit. Also, the noise suppression device 1 may be realized by software, firmware, or a combination thereof.
- the memory 102 is the main storage device of the noise suppression device 1 .
- the memory 102 is, for example, a RAM (Random Access Memory).
- the nonvolatile storage device 103 is an auxiliary storage device of the noise suppression device 1 .
- The nonvolatile storage device 103 is, for example, an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
- the input/output interface 104 inputs input data Si(t) and outputs output data So(t).
- the input data Si(t) is, for example, data input from a microphone and converted into digital data.
- the input/output interface 104 is used for receiving operation signals based on user operations by a user operation unit (for example, voice input start button, keyboard, mouse, touch panel, etc.), communication with other devices, and the like.
- t is an index indicating a position in the time series, and T denotes the total number of data samples. A larger value of t indicates a later time on the time axis.
- FIG. 2 is a functional block diagram schematically showing the configuration of the noise suppression device 1 according to Embodiment 1.
- the noise suppression device 1 includes a noise suppressor 11 , a weighted coefficient calculator 12 , and a weighted summation unit 13 .
- the input data Si(t) of the noise suppression device 1 is PCM (pulse code modulation) data obtained by A/D (analog/digital) conversion of a signal in which a noise component is superimposed on a speech component to be recognized.
- the output data So(t) is data in which the noise component in the input data Si(t) is suppressed.
- the output data So(t) is sent to, for example, a known speech recognition device.
- the meanings of t and T are as already explained.
- The noise suppression unit 11 receives the input data Si(t), suppresses the noise components in the input data Si(t), and outputs the noise-suppressed data Ss(t), which is PCM data after the noise suppression processing has been performed.
- Depending on the noise suppression method, the amount of noise component suppression may be insufficient, or the speech component to be recognized may sometimes be distorted or lost.
- the noise suppression unit 11 can use any noise suppression method.
- the noise suppression unit 11 performs noise suppression processing using a neural network (NN).
- The noise suppression unit 11 trains a neural network before performing noise suppression processing. Training can be performed, for example, by the error backpropagation method, using PCM data of speech with noise superimposed as input data and PCM data of the same speech without noise superimposed as teacher data.
- The weighting factor calculator 12 determines (that is, calculates) the weighting factor α based on the input data Si(t) in a predetermined section on the time series and the noise-suppressed data Ss(t) in that section.
- The weighted sum unit 13 generates output data So(t) by performing weighted addition of the input data Si(t) and the noise-suppressed data Ss(t) using a value based on the weighting factor α as a weight.
- FIG. 3 is a flowchart showing the operation of the noise suppression device 1.
- In step ST11 of FIG. 3, when the noise suppression device 1 starts receiving the input data Si(t), the noise suppression unit 11 performs noise suppression processing on the input data Si(t) to generate the noise-suppressed data Ss(t).
- The weighting factor calculation unit 12 receives the input data Si(t), which is the data before noise suppression, and the noise-suppressed data Ss(t), and calculates the power P1 of the input data Si(t) and the power P2 of the noise-suppressed data Ss(t) in a predetermined section (for example, a short section such as 0.5 seconds) from the beginning of the data. The data in this predetermined section is considered to contain only the noise component and no speech component to be recognized, because speech rarely starts immediately after the noise suppression device 1 is activated (for example, immediately after a voice input start operation is performed).
- This is because the speaker (that is, the user who utters the speech to be recognized) performs the voice input start operation on the device, inhales, and then speaks while exhaling from the lungs; the speaker does not utter speech at the very start of input. For this reason, the predetermined section at the start of voice input is usually a noise-only section that does not include the speaker's voice, that is, a noise section.
- The noise section is denoted by E.
- the noise section E is not limited to a section of 0.5 seconds from the beginning of the input data, and may be a section of other length such as a section of 1 second or a section of 0.75 seconds.
- If the noise section E is too long, the possibility that speech components are mixed in increases, but the reliability of the weighting factor α improves.
- If the noise section E is too short, the possibility that speech components are mixed in is low, but the reliability of the weighting factor α decreases. Therefore, it is desirable to set the noise section E appropriately according to the usage environment, user requirements, and the like.
- The weighting factor calculator 12 uses the power P1 of the input data Si(t) in the noise section E and the power P2 of the noise-suppressed data Ss(t) in the noise section E to calculate the noise suppression amount R as the decibel value of their ratio. That is, the weighting factor calculator 12 calculates the noise suppression amount R based on the ratio of the power P1 of the input data Si(t) in the noise section E to the power P2 of the noise-suppressed data Ss(t) in the noise section E, and determines the value of the weighting factor α based on the noise suppression amount R.
- The noise suppression amount R is calculated, for example, by the following formula (1).
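The formula itself does not survive in this text. From the description, R being the decibel value of the ratio of the power P1 of the input data to the power P2 of the noise-suppressed data in the noise section E, a plausible reconstruction is:

```latex
R = 10 \log_{10} \frac{P_1}{P_2} \qquad \text{(1)}
```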
- The noise suppression amount R calculated by formula (1) indicates the degree of noise suppression performed by the noise suppressor 11, comparing the input data Si(t) in the noise section E with the noise-suppressed data Ss(t) in the noise section E. As the noise suppression amount R increases, the degree of noise suppression by the noise suppressor 11 increases.
- The weighting factor calculation unit 12 determines the value of the weighting factor α based on the calculated noise suppression amount R. That is, it compares the calculated noise suppression amount R with a predetermined threshold TH_R and determines the value of the weighting factor α based on the result of this comparison.
- If the noise suppression amount R is less than the threshold TH_R (YES in step ST13), the weighting factor calculator 12 outputs a predetermined value α1 as the weighting factor α in step ST14.
- If the noise suppression amount R is equal to or greater than the threshold TH_R (NO in step ST13), the weighting factor calculator 12 outputs a predetermined value α2 as the weighting factor α in step ST15.
- α1 and α2 are constants between 0 and 1 inclusive that satisfy α1 > α2.
- The weighting factor calculation unit 12 calculates the weighting factor α in this way because, when the noise suppression amount R is small, the noise suppression effect is small and the adverse effects of speech distortion or loss are considered relatively large; in this case it increases the weighting factor α for the input data Si(t) to reduce the adverse effects of noise suppression.
- Conversely, when the noise suppression amount R is large, the noise suppression effect is considered large, so the weighting factor calculation unit 12 decreases the weighting factor α for the input data Si(t).
- The weighted sum unit 13 calculates and outputs the output data So(t) from the input data Si(t), the noise-suppressed data Ss(t), and the weighting factor α, using the following equation (2).
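Equation (2) is likewise missing from this text. From the description of the weighted addition with weight α on the input data, it is presumably:

```latex
S_o(t) = \alpha \, S_i(t) + (1 - \alpha) \, S_s(t) \qquad \text{(2)}
```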
- According to the noise suppression device 1 or the noise suppression method of Embodiment 1, in a noisy environment where the noise suppression amount R is small, the weighting factor α multiplied by the input data Si(t) is increased and the coefficient (1 - α) indicating the noise suppression effect is decreased.
- Conversely, in an environment where the noise suppression amount R is large, the weighting factor α multiplied by the input data Si(t) is decreased and the coefficient (1 - α) indicating the noise suppression effect is increased.
- As a result, the output data So(t) is speech data that is less adversely affected by distortion or loss of the speech to be recognized, without excessively reducing the noise suppression effect. That is, in Embodiment 1, the noise component and the deterioration of the speech component in the input data Si(t) can both be appropriately suppressed.
- Furthermore, the value of the weighting factor α is determined using the input data Si(t) in the noise section E, which is a short period from the start of voice input to the noise suppression device 1. Therefore, unlike techniques that determine the weighting factor α from the SN ratio of the input data, there is no need to use the speech power, which is difficult to measure in a noisy environment. This improves the calculation accuracy of the weighting factor α and makes it possible to appropriately suppress both the noise component and the deterioration of the speech component in the input data Si(t). In addition, the weighting factor α can be determined without delay relative to the input data Si(t).
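The Embodiment 1 flow (estimate R over the noise section E, pick α, weighted-add) can be sketched as follows. This is a minimal illustration, not the patented implementation: the threshold TH_R and the constants α1 and α2 are placeholder values, and the noise suppressor is assumed to have already produced Ss(t).

```python
import math

def suppression_amount_db(si, ss):
    """Noise suppression amount R: decibel value of the ratio of input
    power P1 to noise-suppressed power P2 over the noise section E."""
    p1 = sum(x * x for x in si) / len(si)
    p2 = sum(x * x for x in ss) / len(ss)
    return 10.0 * math.log10(p1 / p2)

def choose_alpha(r, th_r=6.0, alpha1=0.8, alpha2=0.2):
    """Small R: little suppression benefit, keep more of the input
    (alpha1). Large R: effective suppression, keep less (alpha2)."""
    return alpha1 if r < th_r else alpha2

def weighted_sum(si, ss, alpha):
    """Equation (2): So(t) = alpha*Si(t) + (1 - alpha)*Ss(t)."""
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(si, ss)]

# Toy noise section where the suppressor attenuated amplitude by 10x,
# i.e. a power ratio of 100, so R = 20 dB.
si_noise_section = [0.5, -0.4, 0.45, -0.5]
ss_noise_section = [0.05, -0.04, 0.045, -0.05]

r = suppression_amount_db(si_noise_section, ss_noise_section)
alpha = choose_alpha(r)
so = weighted_sum(si_noise_section, ss_noise_section, alpha)
print(round(r, 1), alpha)
```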
- FIG. 4 is a block diagram schematically showing the configuration of the noise suppression device 2 according to Embodiment 2.
- the noise suppression device 2 includes a noise suppressor 11, a weighted coefficient calculator 12a, a weighted summation unit 13, a weighted coefficient table 14, and a noise type determination model 15.
- the hardware configuration of the noise suppression device 2 is the same as that shown in FIG.
- the weighting factor table 14 and the noise type determination model 15 are obtained in advance by learning, for example, and stored in the nonvolatile storage device 103 .
- the weighting factor table 14 holds predetermined weighting factor candidates in association with noise identification numbers assigned to each of a plurality of types of noise.
- the noise type determination model 15 is used to determine which of the multiple types of noise in the weighting factor table 14 the noise component included in the input data is based on the spectral feature amount of the input data.
- The weighting factor calculation unit 12a uses the noise type determination model 15 to determine which of the plurality of types of noise is most similar to the data in the predetermined section (the noise section E) of the input data, and then outputs, from the weighting factor table 14, the weighting factor candidate associated with the noise identification number of the determined noise as the weighting factor α.
- FIG. 5 is a diagram showing an example of the weighting factor table 14.
- In the weighting factor table 14, candidates for the optimum weighting factor α (that is, weighting factor candidates), predetermined in association with the noise identification number of each of a plurality of types of noise to which noise identification numbers have been assigned in advance, are retained.
- the weighting coefficient table 14 is created in advance using multiple types of noise data and voice data for evaluation.
- Specifically, noise-superimposed speech data is created by superimposing one of the plurality of types of noise data on the evaluation speech data, this data is input to the noise suppression unit 11, and its output is the noise-suppressed data. This processing is performed for each of the plurality of types of noise data to obtain a plurality of items of noise-suppressed data.
- For each noise type, a speech recognition experiment is performed on the recognition-rate evaluation data for each of a plurality of candidate weighting factors, and the weighting factor with the highest recognition rate is stored in the weighting factor table 14 together with the noise identification number of the noise data.
- the speech recognition experiment is performed by a speech recognition engine that recognizes speech.
- a speech recognition engine recognizes human speech and converts it to text.
- the speech recognition experiment is desirably performed using a speech recognition engine used in combination with the noise suppression device 2, but a known speech recognition engine can be used.
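The table-construction procedure amounts to a per-noise-type grid search over candidate weighting factors. A hedged sketch, in which `recognition_rate` is a stand-in for the actual speech recognition experiment and all data, noise IDs, and candidate values are invented for illustration:

```python
def build_weighting_factor_table(noise_types, si_by_noise, ss_by_noise,
                                 recognition_rate,
                                 candidates=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """For each noise identification number, pick the candidate alpha whose
    weighted sum a*Si + (1-a)*Ss yields the highest recognition rate."""
    table = {}
    for noise_id in noise_types:
        si, ss = si_by_noise[noise_id], ss_by_noise[noise_id]
        table[noise_id] = max(
            candidates,
            key=lambda a: recognition_rate(
                [a * x + (1.0 - a) * y for x, y in zip(si, ss)]
            ),
        )
    return table

# Stub scorer: pretends recognition improves as residual power shrinks.
def stub_rate(signal):
    return -sum(x * x for x in signal)

table = build_weighting_factor_table(
    noise_types=[0, 1],
    si_by_noise={0: [1.0, -1.0], 1: [0.5, 0.5]},
    ss_by_noise={0: [0.1, -0.1], 1: [0.05, 0.05]},
    recognition_rate=stub_rate,
)
print(table)
```

With this stub scorer, the lowest-power mix (pure noise-suppressed data, alpha = 0.0) always wins; a real recognition engine would of course pick different values per noise type.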
- The noise type determination model 15 is a model used to determine which of the plurality of types of noise, to which noise identification numbers have been assigned in advance, is most similar to the noise component included in the input data Si(t).
- the noise type determination model 15 is created in advance using a plurality of types of noise data to which noise identification numbers are assigned in advance.
- the spectral feature amount of multiple types of noise data to which noise identification numbers are assigned in advance is calculated, and the noise type determination model 15 is created using the calculated spectral feature amount.
- the noise type determination model 15 can be constructed from a known pattern recognition model such as a neural network or GMM (Gaussian Mixture Model).
- a neural network is used as the noise type determination model 15 .
- the number of output units of the neural network is the number of types of noise to which noise identification numbers are given in advance. Each output unit is associated with a noise identification number.
- a mel filter bank feature amount is used as the spectrum feature amount.
- It is necessary to train the neural network, which is the noise type determination model 15, before performing noise suppression. Training can be performed using the error backpropagation method, with the mel filter bank feature amount as input data and, as teacher data, the output value of the output unit corresponding to the noise identification number of the input data set to 1 and the output values of the other output units set to 0.
- The noise type determination model 15 is trained such that, when the mel filter bank feature amount of a noise is input, the output value of the output unit of the corresponding noise identification number becomes higher than the output values of the other output units. Therefore, when judging the type of noise, the noise identification number associated with the output unit that outputs the highest value for the input mel filter bank feature amount is used as the judgment result.
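The judgment rule described above, taking the noise identification number of the output unit with the highest value and then looking up its weighting factor candidate, reduces to an argmax plus a table lookup. A sketch in which the model outputs are assumed to be given as a plain list of scores:

```python
def determine_noise_id(model_outputs):
    """Noise identification number = index of the output unit with the
    highest value (output unit i is associated with noise ID i)."""
    return max(range(len(model_outputs)), key=lambda i: model_outputs[i])

def weighting_factor_for(model_outputs, weighting_factor_table):
    """Look up the weighting factor candidate for the judged noise type."""
    noise_id = determine_noise_id(model_outputs)
    return weighting_factor_table[noise_id]

# Hypothetical example: 3 noise types, output unit 2 fires highest.
outputs = [0.1, 0.3, 0.6]
table = {0: 0.7, 1: 0.5, 2: 0.2}
alpha = weighting_factor_for(outputs, table)
print(alpha)  # 0.2
```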
- FIG. 6 is a flowchart showing the operation of the noise suppression device 2.
- When the input data Si(t) is input to the noise suppression device 2, the noise suppression unit 11 performs noise suppression processing on the input data Si(t) in step ST21 of FIG. 6 and outputs the noise-suppressed data Ss(t).
- In step ST22 of FIG. 6, when the weighting factor calculator 12a receives the input data Si(t), it calculates the mel filter bank feature amount, which is a spectral feature amount of the input data Si(t), for the noise section E (for example, a short section of 0.5 seconds), and obtains a noise identification number using the noise type determination model 15. That is, the weighting factor calculation unit 12a inputs the mel filter bank feature amount to the noise type determination model 15 and obtains the noise identification number associated with the output unit that outputs the highest value among the output units of the noise type determination model 15. Then, referring to the weighting factor table 14, it outputs the weighting factor candidate corresponding to the noise identification number as the weighting factor α.
- The weighted sum unit 13 receives the input data Si(t), the noise-suppressed data Ss(t) output from the noise suppression unit 11, and the weighting factor α, and calculates and outputs the output data So(t) by equation (2).
- the operation of the weighted sum unit 13 is the same as that of the first embodiment.
- As described above, in Embodiment 2, the weighting factor calculation unit 12a uses the noise type determination model 15 to determine the type of noise included in the input data Si(t), and based on this determination result, obtains from the weighting factor table 14 a weighting factor candidate appropriate for the noise environment as the weighting factor α. Therefore, noise suppression performance can be improved.
- In other respects, the second embodiment is the same as the first embodiment.
- FIG. 7 is a functional block diagram schematically showing the configuration of the noise suppression device 3 according to Embodiment 3.
- The noise suppression device 3 includes a noise suppressor 11, a weighting factor calculator 12b, a weighted sum unit 13b, and a speech noise determination model 16.
- the hardware configuration of the noise suppression device 3 is the same as that shown in FIG.
- the audio noise determination model 16 is stored in the non-volatile storage device 103, for example.
- The speech noise determination model 16 is a model that determines whether speech is included in the data of the input data Si(t).
- the voice/noise determination model 16 is created in advance using voice data and multiple types of noise data.
- Specifically, the spectral feature amount is calculated for the speech data, for data obtained by superimposing the multiple types of noise on the speech data, and for the multiple types of noise data, and the speech noise determination model 16 is created using the calculated spectral feature amounts.
- the speech noise determination model 16 can be constructed with any pattern recognition model such as a neural network or GMM.
- a neural network is used to create the speech noise determination model 16 .
- the number of output units of the neural network is assumed to be two, which are associated with speech and noise.
- a mel filter bank feature amount is used as the spectrum feature amount. Before implementing noise suppression, it is necessary to train the neural network, which is the speech noise determination model 16 .
- Training can be performed using the error backpropagation method, with the mel filter bank feature amount as input data and the following teacher data: if the input data includes speech (that is, speech data or speech data on which noise is superimposed), the output value of the output unit corresponding to speech is set to 1 and the output value of the output unit corresponding to noise is set to 0; if the input data is noise data, the output value of the output unit corresponding to speech is set to 0 and the output value of the output unit corresponding to noise is set to 1.
- The speech noise determination model 16 is trained such that, when the mel filter bank feature amount of speech data or noise-superimposed speech data is input, the output value of the output unit corresponding to speech becomes high, and when the mel filter bank feature amount of noise data is input, the output value of the output unit corresponding to noise becomes high.
- Therefore, if the output unit that outputs the highest value for the input mel filter bank feature amount is the one associated with speech, the weighting factor calculation unit 12b can determine that the data contains speech; if it is the one associated with noise, it can determine that the data is noise.
- FIG. 8 is a flowchart showing the operation of the noise suppression device 3.
- when the input data Si(t) (t = 1, …, T) is input to the noise suppression device 3, the noise suppression unit 11 performs noise suppression processing on Si(t) in step ST31 of FIG. 8 and outputs the noise-suppressed data Ss(t).
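The publication does not tie the noise suppression unit 11 to a particular algorithm. As one common choice, the sketch below applies magnitude spectral subtraction, estimating the noise spectrum from the first few frames; every parameter here (frame size, number of noise frames, spectral floor) is an illustrative assumption:

```python
import numpy as np

def suppress_noise(si, frame=256, noise_frames=4, floor=0.05):
    """Magnitude spectral subtraction: one common realization of a noise
    suppression unit (the publication does not fix the algorithm).
    The noise magnitude spectrum is estimated from the first few frames,
    which are assumed to contain noise only."""
    pad = (-len(si)) % frame
    x = np.concatenate([np.asarray(si, float), np.zeros(pad)])
    frames = x.reshape(-1, frame)
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    noise_mag = mag[:noise_frames].mean(axis=0)        # noise estimate
    clean = np.maximum(mag - noise_mag, floor * mag)   # subtract with a floor
    ss = np.fft.irfft(clean * np.exp(1j * phase), n=frame, axis=1)
    return ss.reshape(-1)[:len(si)]
```

The suppressed output Ss(t) then feeds the weighted-sum stage exactly as the input Si(t) does.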
- each short section D j contains the number of data samples corresponding to the time length d, and the J short sections D 1 to D J together contain the T data samples.
- J is an integer obtained by the following formula (3).
- the symbol [ ] is an operator that truncates the fractional part of the numerical value inside it, converting that value to an integer.
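Formula (3) itself is not legible in this extraction, so the sketch below encodes one plausible reading: since D 1 to D J cover all T samples and D J may end early at T, J is the ceiling of T/d, which can be written with the truncation operator [ ] as J = [(T + d - 1)/d]. Treat the exact form as an assumption:

```python
def num_short_sections(T: int, d: int) -> int:
    """Number of short sections J covering T samples with sections of
    length d (the last one may be shorter).  With the truncation
    operator [x] = int(x) of the publication, one plausible reading of
    formula (3) is J = [(T + d - 1) / d], i.e. the ceiling of T / d."""
    return (T + d - 1) // d
```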
- in step ST33, a weighting factor α j is calculated for each short section D j and output together with the value of the time length d of the short sections.
- a specific method for calculating the weighting factor ⁇ j will be described later.
- in step ST34, the weighted sum unit 13b receives the input data Si(t), the noise-suppressed data Ss(t), the weighting factors α j , and the time length d of the short sections, and obtains and outputs the output data So(t) by the following equation (4).
- the index j is calculated by the following formula (5).
- the symbol [ ] is an operator that truncates the fractional part of the numerical value inside it, converting that value to an integer.
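Equations (4) and (5) are not reproduced legibly here, so the following sketch encodes the reading suggested by the surrounding description: the output is a per-sample blend So(t) = α j · Si(t) + (1 - α j ) · Ss(t), with the section index j recovered from t by the truncation operator. The exact published form of equation (4) may differ:

```python
def weighted_sum(si, ss, alphas, d):
    """Sketch of equations (4)-(5): for 1-based sample index t, the
    short-section index is j = [(t - 1) / d] + 1, and the output is
    So(t) = alpha_j * Si(t) + (1 - alpha_j) * Ss(t)."""
    out = []
    for t in range(1, len(si) + 1):
        j = (t - 1) // d + 1              # formula (5) with truncation [ ]
        a = alphas[j - 1]
        out.append(a * si[t - 1] + (1 - a) * ss[t - 1])
    return out
```

With alpha = 1 a section passes the raw input through; with alpha = 0 it passes the noise-suppressed data through.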
- FIG. 9 is a flowchart showing a method of calculating the weighting factor α j .
- the weighting factor calculation unit 12b uses the speech/noise determination model 16 to determine whether the mel filter bank features correspond to data containing speech or to noise data.
- that is, the weighting factor calculation unit 12b inputs the mel filter bank features to the speech/noise determination model 16; if the output unit producing the highest value among the model's output units is the unit associated with speech, the data is determined to contain speech, and otherwise it is determined to be noise.
- the weighting factor calculation unit 12b branches the processing depending on whether the determination result for the short section Dj indicates that it contains speech. If the determination result indicates speech, the weighting factor calculation unit 12b determines in step ST44 whether the noise suppression amount Rj is equal to or greater than a predetermined threshold TH_Rs (also referred to as the "first threshold"). If it is, a predetermined value A1 (also referred to as the "first value") is set as the weighting factor α j in step ST45. On the other hand, if the noise suppression amount Rj is less than the threshold TH_Rs , the weighting factor calculation unit 12b sets a predetermined value A2 (also referred to as the "second value") as the weighting factor α j in step ST46.
- the value A1 and the value A2 are constants between 0 and 1 inclusive that satisfy A1 > A2.
- by calculating the weighting factor α j in this way, when the noise suppression amount R j is large for a short section D j whose data is determined to contain speech, the speech may have been lost from the noise-suppressed data Ss(t); increasing the weighting factor α j for the input data Si(t) therefore reduces adverse effects of noise suppression such as speech loss. On the other hand, when the noise suppression amount R j is small, the adverse effect of speech loss is considered small, so keeping the weight of the noise-suppressed data Ss(t) high reduces the adverse effects of speech distortion or loss without significantly reducing the noise suppression effect.
- if the determination result indicates noise, the weighting factor calculation unit 12b determines in step ST47 whether the noise suppression amount Rj is less than a predetermined threshold TH_Rn (also referred to as the "second threshold"). If the noise suppression amount Rj is less than the threshold TH_Rn , a predetermined value A3 (also referred to as the "third value") is set as the weighting factor α j in step ST48. Otherwise, the weighting factor calculation unit 12b sets a predetermined value A4 (also referred to as the "fourth value") as the weighting factor α j in step ST49.
- the value A3 and the value A4 are constants between 0 and 1 inclusive that satisfy A3 ≤ A4.
- by calculating the weighting factor α j in this way for a short section whose data is determined to be noise, the following effects are obtained. When the noise suppression amount R j is large, the noise suppression effect is large, but there is also a possibility that speech has been mistakenly removed from the noise-suppressed data Ss(t); setting the weighting factor α j for the input data Si(t) to the larger value A4 therefore reduces adverse effects of noise suppression such as speech distortion or loss. Conversely, when the noise suppression amount R j is small, the noise suppression has removed little to begin with and the risk of speech loss is likewise small, so the smaller value A3 keeps the weight of the noise-suppressed data Ss(t) high without introducing significant adverse effects.
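The branching of steps ST44 to ST49 described above can be collected into one small function. The thresholds TH_Rs and TH_Rn and the values A1 to A4 are design parameters the publication leaves open; the numbers in the test below are placeholders:

```python
def weighting_factor(contains_speech, R, TH_Rs, TH_Rn, A1, A2, A3, A4):
    """Decision logic of steps ST44-ST49 for one short section.
    R is the noise suppression amount; A1 > A2 (speech branch) and
    A3 <= A4 (noise branch), all in [0, 1]."""
    if contains_speech:
        return A1 if R >= TH_Rs else A2   # ST44 -> ST45 / ST46
    return A3 if R < TH_Rn else A4        # ST47 -> ST48 / ST49
```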
- in other respects, the third embodiment is the same as the first embodiment.
- a speech recognition device can be configured by connecting a known speech recognition engine that converts speech data into text data downstream of any one of the noise suppression devices 1 to 3, improving the speech recognition accuracy of the speech recognition device. For example, when a user uses such a speech recognition device outdoors or in a factory to input the results of equipment inspections by voice, speech recognition can be performed with high accuracy even in the presence of noise such as the operating sound of the equipment.
- reference numerals: 1 to 3 noise suppression device; 11 noise suppression unit; 12, 12a, 12b weighting factor calculation unit; 13, 13b weighted sum unit; 14 weighting factor table; 15 noise type determination model; 16 speech/noise determination model; 101 processor; 102 memory; 103 non-volatile storage device; 104 input/output interface; Si(t) input data; Ss(t) noise-suppressed data; So(t) output data; D j short section; α, α j weighting factor; R, R j noise suppression amount.
Abstract
Description
FIG. 1 shows an example of the hardware configuration of the noise suppression device 1 according to the first embodiment. The noise suppression device 1 is a device capable of executing the noise suppression method according to the first embodiment, for example a computer that executes the noise suppression program according to the first embodiment. As shown in FIG. 1, the noise suppression device 1 includes a processor 101 as an information processing unit, a memory 102 as a volatile storage device, a non-volatile storage device 103 as a storage unit that stores information, and an input/output interface 104 used for exchanging data with external devices. The non-volatile storage device 103 may be part of another device capable of communicating with the noise suppression device 1 via a network. The noise suppression program can be obtained by downloading via a network or by reading from a recording medium that stores information, such as an optical disc. The hardware configuration of FIG. 1 is also applicable to the noise suppression devices 2 and 3 according to the second and third embodiments described later.
FIG. 4 is a block diagram schematically showing the configuration of the noise suppression device 2 according to the second embodiment. In FIG. 4, components identical or corresponding to those shown in FIG. 2 are given the same reference numerals as in FIG. 2. As shown in FIG. 4, the noise suppression device 2 includes a noise suppression unit 11, a weighting factor calculation unit 12a, a weighted sum unit 13, a weighting factor table 14, and a noise type determination model 15. The hardware configuration of the noise suppression device 2 is the same as that shown in FIG. 1. The weighting factor table 14 and the noise type determination model 15 are, for example, obtained in advance by training and stored in the non-volatile storage device 103.
FIG. 7 is a functional block diagram schematically showing the configuration of the noise suppression device 3 according to the third embodiment. In FIG. 7, components identical or corresponding to those shown in FIG. 2 are given the same reference numerals as in FIG. 2. As shown in FIG. 7, the noise suppression device 3 includes a noise suppression unit 11, a weighting factor calculation unit 12b, a weighted sum unit 13b, and a speech/noise determination model 16. The hardware configuration of the noise suppression device 3 is the same as that shown in FIG. 1. The speech/noise determination model 16 is stored, for example, in the non-volatile storage device 103.
When a short section is written as
Dj={t=(j-1)*d+1,(j-1)*d+2,…,j*d},
the sections D1 to DJ are written as follows:
D1={t=1,2,…,d}
D2={t=d+1,d+2,…,2d}
D3={t=2d+1,2d+2,…,3d}
…
Dj={t=(j-1)*d+1,(j-1)*d+2,…,j*d}
…
DJ={t=(J-1)*d+1,(J-1)*d+2,…,T}
The weighting factor calculation unit 12b receives the input data
Si(t), (t=(j-1)*d+1,(j-1)*d+2,…,j*d)
and the noise-suppressed data
Ss(t), (t=(j-1)*d+1,(j-1)*d+2,…,j*d)
in the short section Dj={t=(j-1)*d+1,(j-1)*d+2,…,j*d}, calculates the power Pij of the input data Si(t) in the short section Dj and the power Psj of the noise-suppressed data Ss(t) in the short section Dj, and calculates the noise suppression amount Rj, which is the decibel value of the ratio of the two, by the following equation (6).
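The power-ratio computation of equation (6) can be sketched directly. Only the conventional 10·log10 decibel form for a power ratio is assumed here; whether the powers Pij and Psj are sums or means of squares does not affect the ratio:

```python
import math

def noise_suppression_amount(si_seg, ss_seg):
    """Equation (6): R_j = 10 * log10(Pi_j / Ps_j), where Pi_j and Ps_j
    are the powers of the input data and the noise-suppressed data in
    the short section D_j.  Power is taken here as the mean square."""
    pi = sum(x * x for x in si_seg) / len(si_seg)
    ps = sum(x * x for x in ss_seg) / len(ss_seg)
    return 10.0 * math.log10(pi / ps)
```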
For the input data
Si(t), (t=(j-1)*d+1,(j-1)*d+2,…,j*d)
in the short section Dj={t=(j-1)*d+1,(j-1)*d+2,…,j*d}, the mel filter bank features, which are spectral features, are calculated. The weighting factor calculation unit 12b uses the speech/noise determination model 16 to determine whether the mel filter bank features are those of data containing speech or those of noise data. That is, the weighting factor calculation unit 12b inputs the mel filter bank features to the speech/noise determination model 16; if the output unit producing the highest value among the output units of the speech/noise determination model 16 is the unit associated with speech, the data is determined to contain speech, and otherwise it is determined to be noise.
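A mel filter bank feature of the kind used as input to the speech/noise determination model 16 can be computed with the common power-spectrum and triangular-filter recipe sketched below. The sample rate, filter count, and single-frame handling are assumptions; the publication does not specify them:

```python
import numpy as np

def mel_filterbank_features(frame, sr=16000, n_mels=24, eps=1e-10):
    """Log mel filter bank feature of one frame: power spectrum ->
    triangular mel filters -> log.  Parameters are illustrative."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2

    # Mel scale conversion and triangular filter edge frequencies.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)

    fbank = np.zeros((n_mels, len(power)))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):                       # rising slope
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                       # falling slope
            fbank[m - 1, k] = (r - k) / max(r - c, 1)
    return np.log(fbank @ power + eps)
```

These per-section feature vectors are what the model 16 classifies with its two output units.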
A speech recognition device can be configured by connecting a known speech recognition engine that converts speech data into text data downstream of any one of the noise suppression devices 1 to 3, improving the speech recognition accuracy of the speech recognition device. For example, when a user uses the speech recognition device outdoors or in a factory to input the results of equipment inspections by voice, speech recognition can be performed with high accuracy even in the presence of noise such as the operating sound of the equipment.
Claims (10)
- 1. A noise suppression device comprising:
a noise suppression unit to perform noise suppression processing on input data to generate noise-suppressed data;
a weighting factor calculation unit to determine a weighting factor based on the input data in a predetermined section on a time series and the noise-suppressed data in the predetermined section; and
a weighted sum unit to generate output data by weighted addition of the input data and the noise-suppressed data, using a value based on the weighting factor as a weight.
- 2. The noise suppression device according to claim 1, wherein the weighting factor calculation unit uses, as the predetermined section, a period from the time when input of the input data starts until a predetermined time elapses.
- 3. The noise suppression device according to claim 1 or 2, wherein the weighting factor calculation unit calculates the weighting factor based on a ratio between the power of the input data in the predetermined section and the power of the noise-suppressed data in the predetermined section.
- 4. The noise suppression device according to any one of claims 1 to 3, further comprising:
a weighting factor table to hold predetermined weighting factor candidates in association with noise identification numbers assigned to a plurality of types of noise; and
a noise type determination model used to determine, based on a spectral feature of the input data, which of the plurality of types of noise in the weighting factor table a noise component included in the input data is,
wherein the weighting factor calculation unit
uses the noise type determination model to find, among the plurality of types of noise, the noise most similar to the data of the predetermined section in the input data, and
outputs, as the weighting factor, the weighting factor candidate associated with the noise identification number of the found noise from the weighting factor table.
- 5. A noise suppression device comprising:
a noise suppression unit to perform noise suppression processing on input data to generate noise-suppressed data;
a weighting factor calculation unit to divide the data of the entire section of the input data into a plurality of predetermined short sections on a time series and to determine a weighting factor in each of the plurality of short sections, based on the input data in the plurality of short sections and the noise-suppressed data in the plurality of short sections; and
a weighted sum unit to generate, in each of the plurality of short sections, output data by weighted addition of the input data and the noise-suppressed data, using a value based on the weighting factor as a weight.
- 6. The noise suppression device according to claim 5, further comprising a speech/noise determination model for determining, based on a spectral feature of the input data, whether the input data is speech or noise,
wherein the weighting factor calculation unit
divides the data of the entire section of the input data into short sections of a predetermined time length,
calculates, for each short section, a noise suppression amount that is a power ratio between the input data and the noise-suppressed data, and determines, using the speech/noise determination model, whether the input data is speech or noise,
sets, when the input data is determined to be speech, the weighting factor to a predetermined first value if the noise suppression amount is equal to or greater than a predetermined first threshold, and to a predetermined second value smaller than the first value if the noise suppression amount is less than the first threshold,
sets, when the input data is determined to be noise, the weighting factor to a predetermined third value if the noise suppression amount is less than a predetermined second threshold, and to a predetermined fourth value equal to or greater than the third value if the noise suppression amount is equal to or greater than the second threshold, and
outputs the weighting factor to the weighted sum unit for each short section.
- 7. A noise suppression method executed by a computer, comprising:
performing noise suppression processing on input data to generate noise-suppressed data;
determining a weighting factor based on the input data in a predetermined section on a time series and the noise-suppressed data in the predetermined section; and
generating output data by weighted addition of the input data and the noise-suppressed data, using a value based on the weighting factor as a weight.
- 8. A noise suppression program causing a computer to execute the noise suppression method according to claim 7.
- 9. A noise suppression method executed by a computer, comprising:
performing noise suppression processing on input data to generate noise-suppressed data;
dividing the data of the entire section of the input data into a plurality of predetermined short sections on a time series and determining a weighting factor in each of the plurality of short sections, based on the input data in the plurality of short sections and the noise-suppressed data in the plurality of short sections; and
generating, in each of the plurality of short sections, output data by weighted addition of the input data and the noise-suppressed data, using a value based on the weighting factor as a weight.
- 10. A noise suppression program causing a computer to execute the noise suppression method according to claim 9.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21930102.5A EP4297028A4 (en) | 2021-03-10 | 2021-03-10 | NOISE CANCELLATION DEVICE, NOISE CANCELLATION METHOD, AND NOISE CANCELLATION PROGRAM |
JP2023504950A JP7345702B2 (ja) | 2021-03-10 | 2021-03-10 | 騒音抑圧装置、騒音抑圧方法、及び騒音抑圧プログラム |
CN202180094907.7A CN116964664A (zh) | 2021-03-10 | 2021-03-10 | 噪声抑制装置、噪声抑制方法以及噪声抑制程序 |
PCT/JP2021/009490 WO2022190245A1 (ja) | 2021-03-10 | 2021-03-10 | 騒音抑圧装置、騒音抑圧方法、及び騒音抑圧プログラム |
US18/233,476 US20230386493A1 (en) | 2021-03-10 | 2023-08-14 | Noise suppression device, noise suppression method, and storage medium storing noise suppression program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/009490 WO2022190245A1 (ja) | 2021-03-10 | 2021-03-10 | 騒音抑圧装置、騒音抑圧方法、及び騒音抑圧プログラム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/233,476 Continuation US20230386493A1 (en) | 2021-03-10 | 2023-08-14 | Noise suppression device, noise suppression method, and storage medium storing noise suppression program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022190245A1 true WO2022190245A1 (ja) | 2022-09-15 |
Family
ID=83226425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/009490 WO2022190245A1 (ja) | 2021-03-10 | 2021-03-10 | 騒音抑圧装置、騒音抑圧方法、及び騒音抑圧プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230386493A1 (ja) |
EP (1) | EP4297028A4 (ja) |
JP (1) | JP7345702B2 (ja) |
CN (1) | CN116964664A (ja) |
WO (1) | WO2022190245A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001024167A1 (fr) * | 1999-09-30 | 2001-04-05 | Fujitsu Limited | Dispositif antiparasite |
JP2010160246A (ja) * | 2009-01-07 | 2010-07-22 | Nara Institute Of Science & Technology | 雑音抑圧装置およびプログラム |
WO2017065092A1 (ja) * | 2015-10-13 | 2017-04-20 | ソニー株式会社 | 情報処理装置 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07193548A (ja) * | 1993-12-25 | 1995-07-28 | Sony Corp | 雑音低減処理方法 |
CN1192358C (zh) * | 1997-12-08 | 2005-03-09 | 三菱电机株式会社 | 声音信号加工方法和声音信号加工装置 |
2021
- 2021-03-10 CN CN202180094907.7A patent/CN116964664A/zh active Pending
- 2021-03-10 JP JP2023504950A patent/JP7345702B2/ja active Active
- 2021-03-10 WO PCT/JP2021/009490 patent/WO2022190245A1/ja active Application Filing
- 2021-03-10 EP EP21930102.5A patent/EP4297028A4/en active Pending
2023
- 2023-08-14 US US18/233,476 patent/US20230386493A1/en active Pending
Non-Patent Citations (2)
Title |
---|
JUNKO SASAKI: "Study on the Effective Ratio of Adding Original Source Signal in Low-distortion Noise Reduction Method Using Masking Effect", PROCEEDINGS OF THE AUTUMN MEETING OF THE ACOUSTICAL SOCIETY OF JAPAN, September 1998 (1998-09-01), pages 503 - 504 |
See also references of EP4297028A4 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022190245A1 (ja) | 2022-09-15 |
US20230386493A1 (en) | 2023-11-30 |
EP4297028A4 (en) | 2024-03-20 |
JP7345702B2 (ja) | 2023-09-15 |
EP4297028A1 (en) | 2023-12-27 |
CN116964664A (zh) | 2023-10-27 |
Legal Events
Code | Description
---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 21930102; Country of ref document: EP; Kind code of ref document: A1
ENP | Entry into the national phase. Ref document number: 2023504950; Country of ref document: JP; Kind code of ref document: A
WWE | Wipo information: entry into national phase. Ref document number: 202180094907.7; Country of ref document: CN
WWE | Wipo information: entry into national phase. Ref document number: 2021930102; Country of ref document: EP
ENP | Entry into the national phase. Ref document number: 2021930102; Country of ref document: EP; Effective date: 20230919
NENP | Non-entry into the national phase. Ref country code: DE