EP0852052B1 - System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions - Google Patents
System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions
- Publication number
- EP0852052B1 EP0852052B1 EP96931552A EP96931552A EP0852052B1 EP 0852052 B1 EP0852052 B1 EP 0852052B1 EP 96931552 A EP96931552 A EP 96931552A EP 96931552 A EP96931552 A EP 96931552A EP 0852052 B1 EP0852052 B1 EP 0852052B1
- Authority
- EP
- European Patent Office
- Prior art keywords
- noise
- frame
- filter
- speech
- estimate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Links
- 230000005236 sound signal Effects 0.000 title claims description 36
- 238000001914 filtration Methods 0.000 title claims description 11
- 230000007613 environmental effect Effects 0.000 title 1
- 230000004044 response Effects 0.000 claims description 30
- 238000000034 method Methods 0.000 claims description 23
- 230000001747 exhibiting effect Effects 0.000 claims 1
- 230000006870 function Effects 0.000 description 23
- 230000003595 spectral effect Effects 0.000 description 16
- 238000012545 processing Methods 0.000 description 15
- 230000001413 cellular effect Effects 0.000 description 12
- 230000003044 adaptive effect Effects 0.000 description 9
- 230000009467 reduction Effects 0.000 description 9
- 238000001514 detection method Methods 0.000 description 7
- 230000005540 biological transmission Effects 0.000 description 6
- 238000004364 calculation method Methods 0.000 description 5
- 238000004891 communication Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 230000006835 compression Effects 0.000 description 3
- 238000007906 compression Methods 0.000 description 3
- 230000003247 decreasing effect Effects 0.000 description 3
- 238000006243 chemical reaction Methods 0.000 description 2
- 230000007423 decrease Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000002238 attenuated effect Effects 0.000 description 1
- 239000000969 carrier Substances 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 230000036039 immunity Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 230000001052 transient effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02168—Noise filtering characterised by the method used for estimating noise the estimation exclusively taking place during speech pauses
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
- G10L2025/783—Detection of presence or absence of voice signals based on threshold decision
- G10L2025/786—Adaptive threshold
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0232—Processing in the frequency domain
Definitions
- the present invention relates to noise reduction systems, and in particular, to an adaptive speech intelligibility enhancement system for use in portable digital radio telephones.
- PCNs personal communication networks
- Digital communication systems take advantage of powerful digital signal processing techniques.
- Digital signal processing refers generally to mathematical and other manipulation of digitized signals. For example, after converting (digitizing) an analog signal into digital form, that digital signal may be filtered, amplified, and attenuated using simple mathematical routines in a digital signal processor (DSP).
- DSPs are manufactured as high speed integrated circuits so that data processing operations can be performed essentially in real time. DSPs may also be used to reduce the bit transmission rate of digitized speech which translates into reduced spectral occupancy of the transmitted radio signals and increased system capacity.
- sampling an analog speech signal at 8,000 samples per second and quantizing each sample to 14 bits, for example, produces a serial bit rate of 112 Kbits/sec.
- voice coding techniques can be used to compress the serial bit rate from 112 Kbits/sec to 7.95 Kbits/sec to achieve a 14:1 reduction in bit transmission rate. Reduced transmission rates translate into more available bandwidth.
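- As a rough check of these figures (a sketch using the 8,000 samples-per-second, 14-bit linear PCM format described later for the codec):

```python
# Back-of-the-envelope arithmetic for the bit rates quoted above.
sample_rate_hz = 8_000            # codec sampling rate (samples per second)
bits_per_sample = 14              # 14-bit linear PCM
pcm_bit_rate = sample_rate_hz * bits_per_sample     # 112,000 bit/s = 112 Kbits/sec
vselp_bit_rate = 7_950                              # 7.95 Kbits/sec after voice coding
print(pcm_bit_rate, round(pcm_bit_rate / vselp_bit_rate, 1))  # 112000, 14.1 -> roughly 14:1
```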
- VSELP vector sum excited linear predictive coding
- the distortion is caused in large part by the environment in which the mobile telephones are used.
- Mobile telephones are typically used in a vehicle's interior where there is often ambient noise produced by the vehicle's engine and surrounding vehicular traffic.
- This ambient noise in the vehicle's interior is typically concentrated in the low audible frequency range and the magnitude of the noise can vary due to such factors as the speed and acceleration of the vehicle and the extent of the surrounding vehicular traffic.
- This type of low frequency noise also has the tendency of significantly decreasing the intelligibility of the speech coming from the speaking person in the car environment.
- the decrease in speech intelligibility caused by low frequency noise can be particularly significant in communication systems deploying a VSELP vocoder, but can also occur in communication systems that do not include a VSELP vocoder.
- the influence of the ambient noise on the mobile telephone can also be affected by the manner in which the mobile telephone is used.
- the mobile telephone may be used in a hands-free mode where the telephone user talks on the telephone while the mobile telephone is in a cradle. This frees the telephone user's hands to drive but also increases the distance that the telephone user's audible words must travel before reaching the microphone input of the mobile telephone. This increased distance between the user and the mobile telephone, along with the varying ambient noise, can result in noise being a significant portion of the total power spectral energy of the audio signal inputted into the mobile telephone.
- the present invention provides a method and an apparatus for selectively altering a frame of a digital signal according to claims 1 and 9.
- the present invention provides an adaptive noise reduction system that reduces the undesirable contributions of encoded background noise while both minimizing any negative impact on the quality of the encoded speech and minimizing any increased drain on digital signal processor resources.
- the method and system of the present invention increases the intelligibility of the speech in a digitized audio signal by passing frames of the digitized audio signal through a filter circuit.
- the filter circuit functions as an adjustable, high-pass filter which filters a portion of the digitized signal in a low audible frequency range and passes the portion of the digitized signal falling in higher frequency ranges.
- the filter circuit filters a large segment of the noise in the digitized audio signal while only filtering less important segments of the speech. This results in a relatively larger portion of the noise energy being removed compared to the portion of the speech energy removed.
- a filter control circuit is used to adjust the filter circuit to exhibit different frequency response curves as a function of a noise estimate and/or a spectral profile result corresponding to the noise in the audio signal.
- the noise estimate and/or the spectral profile result are adjusted on a frame-by-frame basis for the digital signal and as a function of speech detection. If speech is not detected, the noise estimate and/or spectral profile result is updated for the current frame. If speech is detected, the noise estimate and/or spectral profile result is left unadjusted.
- the filter circuit calculates noise estimates for the frames of the digitized audio signals.
- the noise estimates correspond to the amount of background noise in the frames of the digitized audio signals.
- the filter control circuit uses the noise estimates to adjust the filter circuit to filter larger portions of the low frequency range of speech as the relative amount of background noise to speech in a low frequency range of speech increases.
- When no background noise is present, no portion of the speech signal is filtered. Larger portions of noise and speech information are extracted when there is a higher level of background noise.
- the overall intelligibility of the audio signal can be increased by increasing the portion of low frequency energy being filtered as the noise estimates increase.
- a modified filter control circuit is used to adjust the filter circuit to exhibit different frequency response curves as a function of a noise profile of the noise estimate over a selected frequency range in the audio signal.
- the filter control circuit includes a spectral analyzer for determining a noise profile estimate as a function of speech detection. A noise profile estimate is determined for a current frame and compared to a reference noise profile. Based on this comparison, the filter circuit is adaptively adjusted to extract varying amounts of low frequency energy from the current frame.
- the adaptive noise reduction system may be advantageously applied to telecommunication systems in which portable/mobile radio transceivers communicate over RF channels with each other or with fixed telephone line subscribers.
- Each transceiver includes an antenna, a receiver for converting radio signals received over an RF channel via the antenna into analog audio signals, and a transmitter.
- the transmitter includes a coder-decoder (codec) for digitizing analog audio signals to be transmitted into frames of digitized speech information, the speech information including both speech and background noise.
- codec coder-decoder
- a digital signal processor processes a current frame based on an estimate of the background noise and the detection of speech in the current frame to minimize background noise.
- a modulator modulates an RF carrier with the processed frame of digitized speech information for subsequent transmission via the antenna.
- FIG. 1 is a general block diagram of the adaptive noise reduction system 100 according to the present invention.
- Adaptive noise reduction system 100 includes a filter control circuit 105 connected to a filter circuit 115.
- Filter control circuit 105 generates a filter control signal for a current frame of a digitized audio signal.
- the filter control signal is outputted to the filter circuit 115, and the filter circuit 115 adjusts in response to the filter control signal to exhibit a high-pass frequency response curve selected based on the filter control signal.
- the adjusted filter circuit 115 filters the current frame of the digitized audio signal.
- the filtered signal is processed by a voice coder 120 to produce a coded signal representing the digitized audio signal.
- Figure 2 illustrates the time division multiple access (TDMA) frame structure employed by the IS-54 standard for digital cellular telecommunications.
- a "frame” is a twenty millisecond time period which includes one transmit block TX, one receive block RX, and a signal strength measurement block used for mobile-assisted hand-off (MAHO).
- the two consecutive frames shown in Figure 2 are transmitted in a forty millisecond time period. Digitized speech and background noise information is processed and filtered on a frame-by-frame basis as further described below.
- the functions of the filter control circuit 105, filter circuit 115, and voice coder 120 shown in Figure 1 are implemented with a high speed digital signal processor.
- One suitable digital signal processor is the TMS320C53 DSP available from Texas Instruments.
- the TMS320C53 DSP includes on a single integrated chip a sixteen-bit microprocessor, on-chip RAM for storing data such as speech frames to be processed, ROM for storing various data processing algorithms including the VSELP speech compression algorithm, and other algorithms to be described below for implementing the functions performed by the filter control circuit 105 and the filter circuit 115.
- a first embodiment of the present invention is shown in Figure 3.
- the filter circuit 115 is adjusted as a function of background noise estimates determined by the filter control circuit.
- Frames of pulse code modulated (PCM) audio information are sequentially stored in the DSP's on-chip RAM.
- the audio information could be digitized using other digitization techniques.
- Each PCM frame is retrieved from the DSP's on-chip RAM, processed by frame energy estimator 210, and stored temporarily in temporary frame store 220.
- the energy of the current frame determined by frame energy estimator 210 is provided to noise estimator 230 and speech detector 240 function blocks.
- Speech detector 240 indicates that speech is present in the current frame when the frame energy estimate exceeds the sum of the previous noise estimate and a speech threshold. If the speech detector 240 determines that no speech is present, the digital signal processor 200 calculates an updated noise estimate as a function of the previous noise estimate and the current frame energy (block 230).
- the updated noise estimate is outputted to a filter selector 235.
- Filter selector 235 generates a filter control signal based on the noise estimate.
- the filter selector 235 accesses a look-up table in generating the filter control signal.
- the look-up table includes a series of filter control values that are each matched with a noise estimate or range of noise estimates.
- a filter control value from a look-up table is selected based on the updated noise estimate and this filter control value is represented by a filter control signal outputted to a filter bank 265 for the filter circuit 115.
- a hangover time of N frames is set upon the selection of a new filter.
- a new filter can only be selected every N frames, where N is an integer greater than one and preferably greater than 10.
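- For illustration, a minimal Python sketch of such a filter selector follows; the noise-estimate range boundaries, the hangover length N, and the class structure are illustrative assumptions, not values taken from the patent:

```python
# Sketch of filter selector 235: a look-up table maps noise-estimate ranges to
# filter control values F1-F4, and a hangover counter allows a new filter to be
# selected only every N frames.
NOISE_BOUNDARIES_DB = [30.0, 40.0, 50.0]   # hypothetical boundaries of ranges N1..Nn
FILTER_CONTROL_VALUES = [1, 2, 3, 4]       # F1..F4 (indices into the filter bank)
HANGOVER_FRAMES = 12                       # N > 10 per the description above

class FilterSelector:
    def __init__(self):
        self.current_filter = FILTER_CONTROL_VALUES[0]
        self.frames_since_change = HANGOVER_FRAMES

    def select(self, noise_estimate_db):
        """Return the filter control value to apply to the current frame."""
        self.frames_since_change += 1
        if self.frames_since_change < HANGOVER_FRAMES:
            return self.current_filter              # hangover still active
        # Pick the filter whose noise range contains the current estimate.
        index = sum(noise_estimate_db > b for b in NOISE_BOUNDARIES_DB)
        candidate = FILTER_CONTROL_VALUES[index]
        if candidate != self.current_filter:
            self.current_filter = candidate
            self.frames_since_change = 0            # restart the hangover timer
        return self.current_filter
```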
- the filter circuit 115 is adjusted in response to the filter control signal to exhibit a high-pass frequency response curve that corresponds with the inputted filter control signal and noise estimate.
- Various different types of filter circuits well known in the prior art can be utilized to exhibit selected frequency response curves in response to the filter control signal.
- These prior-art filters include IIR filters such as Butterworth, Chebyshev (Tschebyscheff), or elliptic filters. IIR filters are preferable to FIR filters, which can also be used, because of their lower processing requirements.
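- As a sketch of one way such a selectable IIR high-pass filter bank could be realized (the specific cut-off frequencies, the filter order, and the use of scipy are assumptions for illustration, not details specified by the patent):

```python
# A bank of Butterworth high-pass filters, one per selectable response curve.
import numpy as np
from scipy.signal import butter, lfilter

FS = 8_000                             # codec sampling rate, samples per second
CUTOFFS_HZ = [300, 450, 600, 800]      # assumed cut-offs F1c..F4c within 300-800 Hz
ORDER = 2                              # low order keeps the DSP load small

# Pre-computed (b, a) coefficient pairs, one per filter control value.
FILTER_BANK = [butter(ORDER, fc / (FS / 2), btype="highpass") for fc in CUTOFFS_HZ]

def filter_frame(frame, control_value):
    """High-pass filter one 160-sample frame with the selected response curve.
    (A real implementation would carry the filter state across frames.)"""
    b, a = FILTER_BANK[control_value - 1]   # control values F1..F4 -> indices 0..3
    return lfilter(b, a, np.asarray(frame, dtype=np.float64))
```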
- the filtered signal is processed by a voice coder 120 which is used to compress the bit rate of the filtered signal.
- the voice coder 120 uses vector sum excited linear predictive coding (VSELP) to code the audio signal.
- VSELP vector sum excited linear predictive coding
- CELP code excited linear predictive
- RPE-LTP regular pulse excitation with long term prediction
- IMBE improved multiband excited
- the digital signal processor 200 described in conjunction with Figure 3 can be used, for example, in the transceiver of a digital portable/mobile radiotelephone used in a radio telecommunications system.
- Figure 4 illustrates one such digital radio transceiver which may be used in a cellular telecommunications network.
- Audio signals including speech and background noise are input in a microphone 400 to a coder-decoder (codec) 402 which preferably is an application specific integrated circuit (ASIC).
- codec coder-decoder
- ASIC application specific integrated circuit
- the band limited audio signals detected at microphone 400 are sampled by the codec 402 at a rate of 8,000 samples per second and blocked into frames. Accordingly, each twenty millisecond frame includes 160 speech samples. These samples are quantized and converted into a coded digital format such as 14-bit linear PCM.
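- A small sketch of this blocking step (the function name and generator form are illustrative assumptions):

```python
# Block a stream of 14-bit linear PCM samples into 20 ms frames of 160 samples.
import numpy as np

SAMPLES_PER_FRAME = 160        # 8,000 samples/s * 0.020 s

def frames_from_pcm(pcm_samples):
    """Yield consecutive 160-sample frames from the codec's PCM output."""
    pcm = np.asarray(pcm_samples)
    n_frames = len(pcm) // SAMPLES_PER_FRAME
    for k in range(n_frames):
        yield pcm[k * SAMPLES_PER_FRAME:(k + 1) * SAMPLES_PER_FRAME]
```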
- the transmit DSP 200 performs channel encoding functions, the frame energy estimation, noise estimation, speech detection, FFT, filter functions and digital speech coding/compression in accordance with the VSELP algorithm, as described above in conjunction with Figure 3.
- a supervisory microprocessor 432 controls the overall operation of all of the components in the transceiver shown in Figure 4.
- the filtered PCM data stream generated by transmit DSP 200 is provided for quadrature modulation and transmission.
- an ASIC gate array 404 generates in-phase (I) and quadrature (Q) channels of information based upon the filtered PCM data stream from DSP 200.
- the I and Q bit streams are processed by matched, low pass filters 406 and 408 and passed onto IQ mixers in balanced modulator 410.
- a reference oscillator 412 and a multiplier 414 provide a transmit intermediate frequency (IF).
- the I signal is mixed with in-phase IF, and the Q signal is mixed with quadrature IF (i.e., the in-phase IF delayed by 90 degrees by phase shifter 416).
- the mixed I and Q signals are summed, converted "up" to an RF channel frequency selected by channel synthesizer 430, and transmitted via duplexer 420 and antenna 422 over the selected radio frequency channel.
- signals received via antenna 422 and duplexer 420 are down converted from the selected receive channel frequency in a mixer 424 to a first IF frequency using a local oscillator signal synthesized by channel synthesizer 430 based on the output of reference oscillator 428.
- the output of the first IF mixer 424 is filtered and down converted in frequency to a second IF frequency based on another output from channel synthesizer 430 and demodulator 426.
- a receive gate array 434 then converts the second IF signal into a series of phase samples and a series of frequency samples.
- the receive DSP 436 performs demodulation, filtering, gain/attenuation, channel decoding, and speech expansion on the received signals.
- the processed speech data are then sent to codec 402 and converted to baseband audio signals for driving loudspeaker 438.
- Frame energy estimator 210 determines the energy in each frame of audio signals.
- Frame energy estimator 210 determines the energy of the current frame by calculating the sum of the squared values of each PCM sample in the frame (step 505). Since there are 160 samples per twenty millisecond frame for an 8000 samples per second sampling rate, 160 squared PCM samples are summed.
- the frame energy estimate is determined according to equation 1 below: frame energy = Σ x(i)², summed over the 160 PCM samples x(i) in the current frame. The frame energy value calculated for the current frame is stored in the on-chip RAM 202 of DSP 200 (step 510).
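- A minimal sketch of this frame energy calculation (the conversion to dB is an assumption, made so the result can be compared directly against the dB-valued thresholds discussed below):

```python
# Equation 1: frame energy = sum of the squared PCM samples in the 20 ms frame.
import numpy as np

def frame_energy_db(frame):
    """Sum of squared samples of one 160-sample frame, expressed in dB."""
    energy = float(np.sum(np.asarray(frame, dtype=np.float64) ** 2))
    return 10.0 * np.log10(energy + 1e-12)   # small floor avoids log(0) for silent frames
```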
- the functions of speech detector 240 include fetching a noise estimate previously determined by noise estimator 230 from the on-chip RAM of DSP 200 (step 515). Of course, when the transceiver is initially powered up, no noise estimate will exist. Decision block 520 anticipates this situation and assigns a noise estimate in step 525. Preferably, an arbitrarily high value, e.g. 20 dB above normal speech levels, is assigned as the noise estimate in order to force an update of the noise estimate value as will be described below.
- the frame energy determined by frame energy estimator 210 is retrieved from the on-chip RAM 202 of DSP 200 (block 530). A decision is made in block 535 as to whether the frame energy estimate exceeds the sum of the retrieved noise estimate plus a predetermined speech threshold value, as shown in equation 2 below: frame energy estimate > (noise estimate + speech threshold)
- the speech threshold value may be a fixed value determined empirically to be larger than short term energy variations of typical background noise and may, for example, be set to 9 dB. In addition, the speech threshold value may be adaptively modified to reflect changing speech conditions such as when the speaker enters a noisier or quieter environment. If the frame energy estimate exceeds the sum in equation 2, a flag is set in block 570 that speech exists. If speech detector 240 detects that speech exists, then noise estimator 230 is bypassed and the noise estimate calculated for the previous frame in the digitized audio is retrieved and used as the current noise estimate. Conversely, if the frame energy estimate is less than the sum in equation 2, the speech flag is reset in block 540.
- ETSI European Telecommunications Standards Institute
- GSM Global System for Mobile communications
- RE/SMG-020632P RE/SMG-020632P
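- A short sketch of the decision in equation 2 above (the 9 dB default is the example value cited in the description; the dB units are an assumption consistent with the frame-energy sketch):

```python
# Block 535: flag speech when the frame energy exceeds the previous noise
# estimate by more than the speech threshold.
SPEECH_THRESHOLD_DB = 9.0

def speech_present(frame_energy_db, noise_estimate_db,
                   speech_threshold_db=SPEECH_THRESHOLD_DB):
    """Return True if the current frame is judged to contain speech."""
    return frame_energy_db > noise_estimate_db + speech_threshold_db
```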
- the noise estimation update routine of noise estimator 230 is executed.
- the noise estimate is a running average of the frame energy during periods of no speech. As described above, if the initial start-up noise estimate is chosen sufficiently high, speech is not detected, and the speech flag will be reset thereby forcing an update of the noise estimate.
- the noise estimate is then updated according to: noise estimate = previous noise estimate + Δ/256, where Δ is the difference between the current frame energy and the previous noise estimate. Since Δ is positive in this case, the noise estimate must be increased. However, a smaller step size of Δ/256 (as compared to Δ/2) is chosen to gradually increase the noise estimate and provide substantial immunity to transient noise.
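- A sketch of this update rule follows; the Δ/2 step for decreasing the estimate is inferred from the "(as compared to Δ/2)" remark above and should be read as an assumption:

```python
# Running-average noise estimate, updated only for frames with no detected speech.
def update_noise_estimate(previous_noise_db, frame_energy_db):
    """Rise slowly (delta/256) and fall quickly (assumed delta/2)."""
    delta = frame_energy_db - previous_noise_db
    if delta >= 0:
        # Slow upward step gives immunity to transient noise bursts.
        return previous_noise_db + delta / 256.0
    # Faster downward step lets the estimate track drops in background noise
    # (and recover from the deliberately high start-up value).
    return previous_noise_db + delta / 2.0
```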
- the noise estimate calculated for the current frame is outputted to the filter selector 235.
- filter selector 235 accesses a look-up table and uses the current noise estimate to select a filter control value (Step 572).
- the filter circuit 115 (in Step 574) is then adjusted as a function of the selected filter control value to exhibit a frequency response curve intended to increase the amount of noise filtered as the noise estimate and background noise increases.
- the PCM samples stored in DSP RAM are then passed through the adjusted filter circuit 265 to filter the PCM samples in order to remove noise (Step 576).
- the filtered PCM samples are then processed by voice coder 120 (step 578), and the coded samples are then outputted to RF transmit circuits (Step 580).
- Figures 6A and 6B show examples of how the filter circuit 115 adjusts to exhibit different frequency response curves F1-F4 for different filter control signals inputted to the filter circuit 115.
- the filter circuit 115 can be selected to exhibit a series of different frequency response curves with the frequency response curves F1-F4 having cut-off frequencies F1c-F4c, respectively.
- the cut-off frequencies of filter circuit 115 may range in the preferred embodiment from 300 Hz to 800 Hz.
- As the noise estimate increases, the filter circuit 115 is designed to exhibit frequency response curves having higher cut-off frequencies. The higher cut-off frequencies result in a larger portion of frame energy falling within the lower frequency range of speech being extracted by the filter circuit 115.
- the filter circuit 115 can be selected to exhibit a series of different frequency response curves F1-F4 with each frequency response curve having a different slope and the same cut-off frequencies.
- the cut-off frequency for frequency response curves F1-F4 is in the above-mentioned range.
- As the noise estimate increases, the filter circuit 115 is adjusted to exhibit frequency response curves having steeper slopes. The steeper slopes result in a larger portion of frame energy falling within the lower frequency range of speech being extracted by the filter circuit 115.
- the filter circuit 115 filters the current frames as a function of the noise estimate calculated for the current frame.
- the current frame is filtered so that the noise is reduced and a major portion of the speech is passed.
- the major portion of speech which is passed unfiltered provides for recognizable speech output with only a minimal reduction in the quality of the speech signal.
- a combination of different cutoff frequencies and different slopes could be used for adaptively extracting selected portions of frame energy falling within a low frequency range of speech.
- Figure 7 depicts an example look-up table accessed by filter selector 235 in order to select one of the filter response curves F1-F4 for filter circuit 115.
- the look-up table includes a series of potential noise estimates N1-Nn and filter control values F1-Fn that correspond with potential response curves that are exhibitable by the filter circuit 115.
- Noise estimates N1-Nn can each represent a range of noise estimates and are each matched with a particular filter control value F1-F4.
- the filter control circuit 105 generates a filter control signal by calculating a noise estimate and retrieving from the look-up table the filter control value associated therewith.
- Figures 8A & B and 9A & B show how the audio signal for two frames are each adaptively filtered to provide an improved audio signal outputted to the RF transmitter.
- Figures 8A and 8B show a first frame and a second frame of an audio signal containing speech components s1 and s2 and noise components n1 and n2, respectively. As shown, the noise energy n1 and n2 in both frames is concentrated in a low audible frequency range, while the speech energy s1 and s2 is concentrated in a higher audible frequency range.
- Figure 9A shows the noise signal n1 and speech signal s1 for the first frame after filtering.
- Figure 9B shows the noise signal n2 and speech signal s2 for the second frame after filtering.
- the adaptive audio noise reduction system 100 is designed to account for the difference in noise level between the first frame and the second frame by adjusting the filter control circuit 105 based on a calculated noise estimate for the current frame. For example, a noise estimate N1 and a spectral profile S1 are calculated by filter control circuit 105, and a filter control value of F1 is selected for the first frame.
- the filter circuit 115 is adjusted based on filter control value F1 and exhibits a frequency response curve F1 having a cut-off frequency F1c, as shown in Figure 6A. The first frame is passed through this adjusted filter circuit 115.
- the filter circuit 115 is selected so that a large portion of the noise n1 and only a small portion of speech s1 falls below the cut-off frequency F1c of the frequency response curve F1. This results in noise n1 being effectively filtered and only a relatively insignificant portion of speech s1 being filtered.
- the filtered audio signal of the first frame is shown in Figure 9A.
- for the second frame, a higher background noise is present and, assuming speech is not detected, a higher noise estimate n2 is calculated by filter control circuit 105.
- a higher corresponding filter control value F2 is determined for the second frame based on the higher noise estimate.
- the filter circuit 115 is adjusted in response to the higher filter control value F2 to exhibit a frequency response curve having a higher cut-off frequency F2c, as shown in Figure 6A.
- the subsequent frame of audio signal is passed through the adjusted filter circuit 115. Because the cut-off frequency F2c of the frequency response curve F2 is higher for the subsequent frame, a larger portion of both the noise n2 and speech s2 is filtered.
- the portion of speech s2 filtered is still relatively insignificant with respect to the intelligibility information contained in the frame, so that there is only a minimal effect on the speech.
- the disadvantage of filtering a larger portion of the speech s2 is offset by the advantage of the increased removal of noise n2 from the second frame.
- the filtered spectral portion of the speech does not significantly contribute to the intelligibility of the speech.
- the filtered audio signal of the second frame is shown in Figure 9B.
- a second preferred embodiment of adaptive noise reduction system 100 is shown in Figures 10-12.
- the filter control circuit 105 adjusts the filter circuit 115 as a function of noise profile estimates.
- a noise profile estimate is calculated for each frame and is compared to a reference noise profile. Based on this comparison, the filter circuit 115 is adaptively adjusted to extract varying amounts of low frequency energy from the current frame.
- the filter control circuit 105 includes a spectral analyzer 270, in addition to frame energy estimator 210, noise estimator 230, speech detector 240, and filter selector 235 which are described with respect to the first preferred embodiment.
- the filter control circuit 105 determines noise estimates and detects speech for the received frames as described for the first embodiment and shown in the flow charts of Figures 5A and 5B.
- the spectral analyzer 270 updates the noise profile estimate and uses the noise profile estimate in adjusting the filter circuit 115.
- Figure 11 shows the steps performed by spectral analyzer 270 incorporated into the overall process previously described in the flow charts of Figures 5A and 5B for the first preferred embodiment.
- the spectral analyzer 270 first determines a noise profile for the current frame (step 600).
- the noise profile determined for the current frame includes energy calculations for different frequencies (i.e., frequency bins) within a selected low frequency range of speech for the current frame.
- the selected frequency range is approximately 300 to 800 hertz.
- the noise profile of the current frame can be determined by processing the current frame using a Fast Fourier Transform (FFT) having N frequency bins. Processing digital signals using an FFT is well-known in the prior art and is advantageous in that very little processing power is required where the FFT is limited to a relatively small number of frequency bins such as 32.
- An FFT having N frequency bins produces energy calculations at N different frequencies. The energy calculations for the frequency bins falling within the selected frequency range form the noise profile for the current frame.
- FFT Fast Fourier Transform
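- A compact sketch of this per-frame noise profile; how the 32-bin FFT is applied to the 160-sample frame (here by averaging 32-sample segments) is an assumption, while the bin count and band limits follow the description:

```python
# Spectral analyzer 270: per-bin energy of the current frame within 300-800 Hz.
import numpy as np

FS = 8_000
N_BINS = 32
FREQS = np.fft.rfftfreq(N_BINS, d=1.0 / FS)      # centre frequency of each FFT bin
LOW_BAND = (FREQS >= 300) & (FREQS <= 800)       # bins inside the selected range

def noise_profile(frame):
    """Average per-bin energy of the frame in the selected low-frequency band."""
    x = np.asarray(frame, dtype=np.float64)
    segments = x[: (len(x) // N_BINS) * N_BINS].reshape(-1, N_BINS)  # 5 x 32 for 160 samples
    energy = np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0)
    return energy[LOW_BAND]
```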
- the noise profile for the current frame is averaged with a noise profile estimate determined for the previous frame of the audio signal.
- a stored, initial noise profile estimate can be used.
- each noise energy estimate e_i corresponds to an average of the energy calculations at a particular frequency in the selected frequency range over a plurality of successive frames in which no speech was detected.
- the filter circuit 115 is adjusted on a more gradual basis.
- the noise profile estimate can be equated to the noise profile of the current frame.
- the energy estimates e_i of the noise profile estimate are then compared with a reference noise profile (step 604).
- the reference energy thresholds e_ri can be determined empirically.
- the noise energy estimates e_i are successively compared to corresponding reference energy thresholds e_ri, from the highest frequency energy estimate e_1 to the lowest frequency energy estimate e_n.
- the filter selector 235 uses the determined comparison value c_i to determine a filter control value.
- the filter control value is selected from a look-up table such as that shown in Figure 12.
- the look-up table includes a series of comparison values c_i and corresponding filter control values F_i.
- the filter circuit 115 is adjusted as a function of the selected filter control value.
- the filter circuit 115 is adjusted to exhibit a frequency response curve for extracting low frequency energy from the current frame.
- the filter circuit 115 is adjusted to extract increasing amounts of low frequency energy as noise energy estimates at successively higher frequencies surpass their corresponding reference energy thresholds.
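- One plausible reading of this comparison, sketched below; the reference thresholds, the averaging weight, and the mapping from comparison value to filter control value are all illustrative assumptions (the patent only states that the thresholds are determined empirically):

```python
# Compare the running noise profile estimate against a reference profile and
# map the resulting comparison value c to a filter control value F.
import numpy as np

REFERENCE_PROFILE = np.array([4.0e5, 6.0e5])   # e_r1..e_rn, one threshold per low-band bin
FILTER_BY_COMPARISON = {0: 1, 1: 2, 2: 3}      # c -> F (illustrative look-up table)

def update_profile_estimate(previous_estimate, current_profile, weight=0.9):
    """Running average of the noise profile over frames with no detected speech."""
    return weight * np.asarray(previous_estimate) + (1.0 - weight) * np.asarray(current_profile)

def select_filter_from_profile(profile_estimate):
    """Comparison value = highest-frequency bin whose estimate exceeds its threshold."""
    c = 0
    for i, (e, e_ref) in enumerate(zip(profile_estimate, REFERENCE_PROFILE), start=1):
        if e > e_ref:
            c = i            # bins assumed ordered from lowest to highest frequency
    return FILTER_BY_COMPARISON[c]
```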
- Figure 6A and 6B show example frequency response curves for selected filter control values.
- Using noise profile estimates helps improve the ability to adaptively adjust the filter circuit to extract low frequency energy in a manner that improves the overall quality of speech. Since the car environment is not the only environment in which a mobile telecommunications device is used, and the noise profile in certain situations could therefore be tilted more towards higher frequencies, the spectral analyzer 270 can be selectively disabled when the noise energy in the low frequencies is small. Also, when a significant portion of the noise frequency spectrum resides in lower frequencies, a steeper filtering slope could be applied even though some processing power may be sacrificed. This extra processing requirement is still fairly small.
- the adaptive noise filter system of the present invention is implemented simply and without a significant increase in DSP calculations. More complex methods of reducing noise, such as "spectral subtraction," require several additional MIPS of calculation and a large amount of memory for data and program code storage. By comparison, the present invention may be implemented using only a fraction of the MIPS and memory required for the "spectral subtraction" algorithm, which also introduces more speech distortion. Reduced memory reduces the size of the DSP integrated circuits; decreased MIPS decreases power consumption. Both of these attributes are desirable for battery-powered portable/mobile radiotelephones.
- Although a DSP is disclosed as performing the functions of the frame energy estimator 210, noise estimator 230, speech detector 240, filter selector 235 and filter circuit 265, these functions could also be implemented using other digital and/or analog components.
- For example, an adaptive filtering system 100 could be implemented in which the filter circuit 115 is adjusted as a function of both noise estimates and noise profile estimates.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Noise Elimination (AREA)
- Filters That Use Time-Delay Elements (AREA)
- Signal Processing Not Specific To The Method Of Recording And Reproducing (AREA)
- Tone Control, Compression And Expansion, Limiting Amplitude (AREA)
Claims (10)
- A method for selectively altering a frame of a digital signal composed of a plurality of successive frames, the digital signal being representative of an audio signal received at a transmitter, the audio signal consisting alternately of a speech component, a noise component, and the speech component together with the noise component, the method comprising the steps of: estimating an energy level (505) of a frame of the digital signal; determining (535), responsive to the estimate made during the estimating step, whether the frame of the digital signal contains a speech component; updating a noise estimate as a function of a previous noise estimate and of the energy level estimated during the estimating step when the determining step determines that the frame contains no speech component; accessing (572) an entry in a look-up table that lists filter characteristics against levels of noise estimates, the accessed entry being related to the noise estimate updated during the updating step; selecting (574) filter characteristics for a filter such that the filter exhibits a frequency response curve having a variable gain over different frequency ranges, the filter characteristics being selected in response to the stored filter characteristics of the entry accessed during the accessing step; and filtering (576) the frame of digital data with the filter exhibiting the filter characteristics, thereby altering the frame of digital data in response to the filter characteristics.
- A method according to claim 1, further characterized by the additional intermediate step of determining (600) a noise profile estimate of the frame of the digital signal when it is determined that the frame of digital data does not contain the speech component.
- A method according to claim 2, wherein the noise profile estimate determined during the step of determining (600) the noise profile estimate is used during the updating step to update the noise estimate.
- A method according to claim 1, wherein the look-up table accessed during the accessing step is characterized by a plurality of entries (C1-CN, F4-FN), each entry of the plurality containing a separate filter characteristic.
- A method according to claim 4, wherein the separate filter characteristics of the plurality of entries of the look-up table comprise separate high-pass filter characteristics, each high-pass filter characteristic being characterized by a separate cut-off frequency (F1c, F2c, F3c, F4c).
- A method according to claim 4, wherein the separate filter characteristics of the plurality of entries of the look-up table comprise separate high-pass filter characteristics, each high-pass filter characteristic being defined by a separate slope (F1, F2, F3, F4) of the frequency response curve.
- A method according to claim 1, characterized by the further step of incrementing a counter value to count each frame for which an energy level is estimated during the estimating step.
- A method according to claim 7, wherein the step of selecting the filter circuit filter characteristics is performed each Nth time the counter value is incremented, where N is an integer value greater than 1.
- An apparatus (100; 200) for selectively altering a frame of a digital signal composed of a plurality of successive frames, the digital signal being representative of an audio signal received at a transmitter, the audio signal consisting alternately of a speech component, a noise component, and the speech component together with the noise component, the apparatus comprising: an energy level estimator (210) connected to receive indications of a frame of the digital signal, the energy level estimator serving to estimate an energy level of the frame of the digital signal; a speech detector (240) connected to the energy level estimator, the speech detector serving to determine whether the frame of the digital signal contains a speech component; a noise estimator (230) operable when the speech detector determines that a frame contains no speech component, the noise estimator serving to update a noise estimate as a function of a previous noise estimate and of the energy level estimated by the estimator; a look-up table (Fig. 12) containing a plurality of entries, each entry indicating levels of noise estimates, an entry of the look-up table being accessed in response to a noise estimate formed by the noise estimator; and a filter (265) connected to receive the frame of digital data, the filter having selectable filter characteristics that enable the filter to exhibit a frequency response curve having a variable gain over different frequency ranges, the selection of the filter characteristics of the filter being determined in response to the entry of the look-up table accessed in response to the noise estimate updated by the noise estimator.
- An apparatus (100; 200) according to claim 9, further characterized by a noise profile estimator (270) for determining a noise profile estimate of the frame of digital data when the speech component determining means determines that the frame of digital data does not contain a speech component.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US52800595A | 1995-09-14 | 1995-09-14 | |
US528005 | 1995-09-14 | ||
PCT/US1996/014665 WO1997010586A1 (en) | 1995-09-14 | 1996-09-13 | System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0852052A1 EP0852052A1 (de) | 1998-07-08 |
EP0852052B1 true EP0852052B1 (de) | 2001-06-13 |
Family
ID=24103874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP96931552A Expired - Lifetime EP0852052B1 (de) | System for adaptively filtering audio signals to enhance speech intelligibility in noisy environmental conditions
Country Status (15)
Country | Link |
---|---|
EP (1) | EP0852052B1 (de) |
JP (1) | JPH11514453A (de) |
KR (1) | KR100423029B1 (de) |
CN (1) | CN1121684C (de) |
AU (1) | AU724111B2 (de) |
BR (1) | BR9610290A (de) |
CA (1) | CA2231107A1 (de) |
DE (1) | DE69613380D1 (de) |
EE (1) | EE03456B1 (de) |
MX (1) | MX9801857A (de) |
NO (1) | NO981074L (de) |
PL (1) | PL185513B1 (de) |
RU (1) | RU2163032C2 (de) |
TR (1) | TR199800475T1 (de) |
WO (1) | WO1997010586A1 (de) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102128976A (zh) * | 2011-01-07 | 2011-07-20 | 钜泉光电科技(上海)股份有限公司 | 电能表的能量脉冲输出方法、装置及电能表 |
WO2021179045A1 (en) * | 2020-03-13 | 2021-09-16 | University Of South Australia | A data processing method |
Families Citing this family (169)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE19747885B4 (de) * | 1997-10-30 | 2009-04-23 | Harman Becker Automotive Systems Gmbh | Verfahren zur Reduktion von Störungen akustischer Signale mittels der adaptiven Filter-Methode der spektralen Subtraktion |
JP2001508197A (ja) * | 1997-10-31 | 2001-06-19 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 構成信号にノイズを加算してlpc原理により符号化された音声のオーディオ再生のための方法及び装置 |
KR20000074236A (ko) * | 1999-05-19 | 2000-12-15 | 정몽규 | 오토 오디오 볼륨 제어장치 |
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
JP2001318694A (ja) * | 2000-05-10 | 2001-11-16 | Toshiba Corp | 信号処理装置、信号処理方法および記録媒体 |
US6983242B1 (en) * | 2000-08-21 | 2006-01-03 | Mindspeed Technologies, Inc. | Method for robust classification in speech coding |
KR20030010432A (ko) * | 2001-07-28 | 2003-02-05 | 주식회사 엑스텔테크놀러지 | 잡음환경에서의 음성인식장치 |
IL148592A0 (en) | 2002-03-10 | 2002-09-12 | Ycd Multimedia Ltd | Dynamic normalizing |
KR100978015B1 (ko) * | 2002-07-01 | 2010-08-25 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | 고정 스펙트럼 전력 의존 오디오 강화 시스템 |
WO2004004297A2 (en) * | 2002-07-01 | 2004-01-08 | Koninklijke Philips Electronics N.V. | Stationary spectral power dependent audio enhancement system |
WO2004008801A1 (en) * | 2002-07-12 | 2004-01-22 | Widex A/S | Hearing aid and a method for enhancing speech intelligibility |
US7242763B2 (en) | 2002-11-26 | 2007-07-10 | Lucent Technologies Inc. | Systems and methods for far-end noise reduction and near-end noise compensation in a mixed time-frequency domain compander to improve signal quality in communications systems |
DE10305369B4 (de) * | 2003-02-10 | 2005-05-19 | Siemens Ag | Benutzeradaptives Verfahren zur Geräuschmodellierung |
US7127076B2 (en) | 2003-03-03 | 2006-10-24 | Phonak Ag | Method for manufacturing acoustical devices and for reducing especially wind disturbances |
EP2254352A3 (de) * | 2003-03-03 | 2012-06-13 | Phonak AG | Verfahren zur Herstellung von akustischen Geräten und zur Verringerung von Windstörungen |
CA2691762C (en) | 2004-08-30 | 2012-04-03 | Qualcomm Incorporated | Method and apparatus for an adaptive de-jitter buffer |
KR100640865B1 (ko) | 2004-09-07 | 2006-11-02 | 엘지전자 주식회사 | 음성 품질 향상 방법 및 장치 |
US8085678B2 (en) | 2004-10-13 | 2011-12-27 | Qualcomm Incorporated | Media (voice) playback (de-jitter) buffer adjustments based on air interface |
US8082156B2 (en) | 2005-01-11 | 2011-12-20 | Nec Corporation | Audio encoding device, audio encoding method, and audio encoding program for encoding a wide-band audio signal |
GB2429139B (en) * | 2005-08-10 | 2010-06-16 | Zarlink Semiconductor Inc | A low complexity noise reduction method |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
KR100667852B1 (ko) * | 2006-01-13 | 2007-01-11 | 삼성전자주식회사 | 휴대용 레코더 기기의 잡음 제거 장치 및 그 방법 |
EP4178110B1 (de) * | 2006-01-27 | 2024-04-24 | Dolby International AB | Effiziente filterung mit einer komplex modulierten filterbank |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
KR101414233B1 (ko) | 2007-01-05 | 2014-07-02 | 삼성전자 주식회사 | 음성 신호의 명료도를 향상시키는 장치 및 방법 |
KR100883896B1 (ko) * | 2007-01-19 | 2009-02-17 | 엘지전자 주식회사 | 음성명료도 향상장치 및 방법 |
KR100876794B1 (ko) * | 2007-04-03 | 2009-01-09 | 삼성전자주식회사 | 이동 단말에서 음성의 명료도 향상 장치 및 방법 |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
EP2191466B1 (de) * | 2007-09-12 | 2013-05-22 | Dolby Laboratories Licensing Corporation | Spracherweiterung mit stimmklarheit |
CN101904097B (zh) | 2007-12-20 | 2015-05-13 | 艾利森电话股份有限公司 | 噪声抑制方法和设备 |
EP2232704A4 (de) * | 2007-12-20 | 2010-12-01 | Ericsson Telefon Ab L M | Rauschunterdrückungsverfahren und vorrichtung |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
CN101221767B (zh) * | 2008-01-23 | 2012-05-30 | 晨星半导体股份有限公司 | 人声语音加强装置与应用于其上的方法 |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
EP2373067B1 (de) * | 2008-04-18 | 2013-04-17 | Dolby Laboratories Licensing Corporation | Verfahren und Vorrichtung zum Aufrechterhalten der Sprachhörbarkeit in einem Mehrkanalaudiosystem mit minimalem Einfluss auf die Surround-Hörerfahrung |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
WO2010067118A1 (en) | 2008-12-11 | 2010-06-17 | Novauris Technologies Limited | Speech recognition involving a mobile device |
DE102009011583A1 (de) | 2009-03-06 | 2010-09-09 | Krones Ag | Verfahren und Vorrichtung zum Herstellen und Befüllen von dünnwandigen Getränkebehältern |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
DE202011111062U1 (de) | 2010-01-25 | 2019-02-19 | Newvaluexchange Ltd. | Vorrichtung und System für eine Digitalkonversationsmanagementplattform |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
CN102202038B (zh) * | 2010-03-24 | 2015-05-06 | 华为技术有限公司 | 一种实现语音能量显示的方法、系统、会议服务器和终端 |
US9837097B2 (en) | 2010-05-24 | 2017-12-05 | Nec Corporation | Single processing method, information processing apparatus and signal processing program |
CN101859569B (zh) * | 2010-05-27 | 2012-08-15 | 上海朗谷电子科技有限公司 | 数字音频信号处理降噪的方法 |
US8639516B2 (en) | 2010-06-04 | 2014-01-28 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US8994660B2 (en) | 2011-08-29 | 2015-03-31 | Apple Inc. | Text correction processing |
AU2012232977A1 (en) * | 2011-09-30 | 2013-04-18 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
CN102737646A (zh) * | 2012-06-21 | 2012-10-17 | 佛山市瀚芯电子科技有限公司 | 单一麦克风的实时动态语音降噪方法 |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
KR20240132105A (ko) | 2013-02-07 | 2024-09-02 | 애플 인크. | 디지털 어시스턴트를 위한 음성 트리거 |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
AU2014233517B2 (en) | 2013-03-15 | 2017-05-25 | Apple Inc. | Training an at least partial voice command system |
WO2014144579A1 (en) | 2013-03-15 | 2014-09-18 | Apple Inc. | System and method for updating an adaptive speech recognition model |
CN104095640A (zh) * | 2013-04-03 | 2014-10-15 | 达尔生技股份有限公司 | 血氧饱和度检测方法及装置 |
WO2014197336A1 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
KR101772152B1 (ko) | 2013-06-09 | 2017-08-28 | 애플 인크. | 디지털 어시스턴트의 둘 이상의 인스턴스들에 걸친 대화 지속성을 가능하게 하기 위한 디바이스, 방법 및 그래픽 사용자 인터페이스 |
EP3008964B1 (de) | 2013-06-13 | 2019-09-25 | Apple Inc. | System und verfahren für durch sprachsteuerung ausgelöste notrufe |
EP2816557B1 (de) * | 2013-06-20 | 2015-11-04 | Harman Becker Automotive Systems GmbH | Identifizierung von Störsignalen in Audiosignalen |
US9697831B2 (en) * | 2013-06-26 | 2017-07-04 | Cirrus Logic, Inc. | Speech recognition |
DE112014003653B4 (de) | 2013-08-06 | 2024-04-18 | Apple Inc. | Automatisch aktivierende intelligente Antworten auf der Grundlage von Aktivitäten von entfernt angeordneten Vorrichtungen |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
CN110797019B (zh) | 2014-05-30 | 2023-08-29 | 苹果公司 | 多命令单一话语输入方法 |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
EP2980801A1 (de) | 2014-07-28 | 2016-02-03 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Verfahren zur Schätzung des Rauschens in einem Audiosignal, Rauschschätzer, Audiocodierer, Audiodecodierer und System zur Übertragung von Audiosignalen |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
RU2589298C1 (ru) * | 2014-12-29 | 2016-07-10 | Александр Юрьевич Бредихин | Method for increasing the intelligibility and informativeness of audio signals in a noisy environment |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
EP3374990B1 (de) | 2015-11-09 | 2019-09-04 | Nextlink IPR AB | Method and system for noise suppression |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
CN105869650B (zh) * | 2015-12-28 | 2020-03-06 | 乐融致新电子科技(天津)有限公司 | Digital audio data playback method and device |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
CN106060717A (zh) * | 2016-05-26 | 2016-10-26 | 广东睿盟计算机科技有限公司 | High-definition dynamic noise-reduction sound pickup |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179588B1 (en) | 2016-06-09 | 2019-02-22 | Apple Inc. | INTELLIGENT AUTOMATED ASSISTANT IN A HOME ENVIRONMENT |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
US9748929B1 (en) * | 2016-10-24 | 2017-08-29 | Analog Devices, Inc. | Envelope-dependent order-varying filter control |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
CN107039044B (zh) * | 2017-03-08 | 2020-04-21 | Oppo广东移动通信有限公司 | Speech signal processing method and mobile terminal |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10157627B1 (en) | 2017-06-02 | 2018-12-18 | Bose Corporation | Dynamic spectral filtering |
WO2019187841A1 (ja) * | 2018-03-30 | 2019-10-03 | パナソニックIpマネジメント株式会社 | Noise reduction device |
RU2680735C1 (ru) * | 2018-10-15 | 2019-02-26 | Акционерное общество "Концерн "Созвездие" | Method for separating speech and pauses by analyzing the phase values of the frequency components of noise and signal |
CN109643554B (zh) * | 2018-11-28 | 2023-07-21 | 深圳市汇顶科技股份有限公司 | Adaptive speech enhancement method and electronic device |
US11438452B1 (en) | 2019-08-09 | 2022-09-06 | Apple Inc. | Propagating context information in a privacy preserving manner |
CN112581935B (zh) | 2019-09-27 | 2024-09-06 | 苹果公司 | Environment aware voice-assistant devices, and related systems and methods |
US11501758B2 (en) | 2019-09-27 | 2022-11-15 | Apple Inc. | Environment aware voice-assistant devices, and related systems and methods |
CN111370033B (zh) * | 2020-03-13 | 2023-09-22 | 北京字节跳动网络技术有限公司 | Keyboard sound processing method, apparatus, terminal device and storage medium |
CN111402916B (zh) * | 2020-03-24 | 2023-08-04 | 青岛罗博智慧教育技术有限公司 | Speech enhancement system, method and handwriting tablet |
CN114093391A (zh) * | 2020-07-29 | 2022-02-25 | 华为技术有限公司 | Method and apparatus for filtering abnormal signals |
CN111916106B (zh) * | 2020-08-17 | 2021-06-15 | 牡丹江医学院 | Method for improving pronunciation quality in English teaching |
CN112927715B (zh) * | 2021-02-26 | 2024-06-14 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method, device and computer-readable storage medium |
CN114550740B (zh) * | 2022-04-26 | 2022-07-15 | 天津市北海通信技术有限公司 | Speech intelligibility algorithm under noise, and train audio playback method and system based on it |
CN118411998B (zh) * | 2024-07-02 | 2024-09-24 | 杭州知聊信息技术有限公司 | Audio noise processing method and system based on big data |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4461025A (en) * | 1982-06-22 | 1984-07-17 | Audiological Engineering Corporation | Automatic background noise suppressor |
US4630305A (en) * | 1985-07-01 | 1986-12-16 | Motorola, Inc. | Automatic gain selector for a noise suppression system |
US4811404A (en) * | 1987-10-01 | 1989-03-07 | Motorola, Inc. | Noise suppression system |
DE4012349A1 (de) * | 1989-04-19 | 1990-10-25 | Ricoh Kk | Device for eliminating noise |
JP3065739B2 (ja) * | 1991-10-14 | 2000-07-17 | 三菱電機株式会社 | Speech section detection device |
US5412735A (en) * | 1992-02-27 | 1995-05-02 | Central Institute For The Deaf | Adaptive noise reduction circuit for a sound reproduction system |
JPH05259928A (ja) * | 1992-03-09 | 1993-10-08 | Oki Electric Ind Co Ltd | Adaptive control noise canceller device and adaptive control noise cancelling method |
US5251263A (en) * | 1992-05-22 | 1993-10-05 | Andrea Electronics Corporation | Adaptive noise cancellation and speech enhancement system and apparatus therefor |
JPH0695693A (ja) * | 1992-09-09 | 1994-04-08 | Fujitsu Ten Ltd | Noise reduction circuit for speech recognition device |
JP3270866B2 (ja) * | 1993-03-23 | 2002-04-02 | ソニー株式会社 | Noise removal method and noise removal device |
US5485522A (en) * | 1993-09-29 | 1996-01-16 | Ericsson Ge Mobile Communications, Inc. | System for adaptively reducing noise in speech signals |
US5657422A (en) * | 1994-01-28 | 1997-08-12 | Lucent Technologies Inc. | Voice activity detection driven noise remediator |
- 1996
- 1996-09-13 PL PL96325532A patent/PL185513B1/pl not_active IP Right Cessation
- 1996-09-13 RU RU98107313/09A patent/RU2163032C2/ru not_active IP Right Cessation
- 1996-09-13 BR BR9610290A patent/BR9610290A/pt not_active IP Right Cessation
- 1996-09-13 CA CA002231107A patent/CA2231107A1/en not_active Abandoned
- 1996-09-13 JP JP9512112A patent/JPH11514453A/ja not_active Ceased
- 1996-09-13 DE DE69613380T patent/DE69613380D1/de not_active Expired - Lifetime
- 1996-09-13 TR TR1998/00475T patent/TR199800475T1/xx unknown
- 1996-09-13 KR KR10-1998-0701913A patent/KR100423029B1/ko not_active IP Right Cessation
- 1996-09-13 CN CN96198008A patent/CN1121684C/zh not_active Expired - Fee Related
- 1996-09-13 WO PCT/US1996/014665 patent/WO1997010586A1/en active IP Right Grant
- 1996-09-13 EE EE9800068A patent/EE03456B1/xx not_active IP Right Cessation
- 1996-09-13 AU AU70784/96A patent/AU724111B2/en not_active Ceased
- 1996-09-13 EP EP96931552A patent/EP0852052B1/de not_active Expired - Lifetime
- 1998
- 1998-03-09 MX MX9801857A patent/MX9801857A/es unknown
- 1998-03-11 NO NO981074A patent/NO981074L/no not_active Application Discontinuation
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102128976A (zh) * | 2011-01-07 | 2011-07-20 | 钜泉光电科技(上海)股份有限公司 | Energy pulse output method and device for an electric energy meter, and electric energy meter |
CN102128976B (zh) * | 2011-01-07 | 2013-05-15 | 钜泉光电科技(上海)股份有限公司 | Energy pulse output method and device for an electric energy meter, and electric energy meter |
WO2021179045A1 (en) * | 2020-03-13 | 2021-09-16 | University Of South Australia | A data processing method |
Also Published As
Publication number | Publication date |
---|---|
PL185513B1 (pl) | 2003-05-30 |
KR19990044659A (ko) | 1999-06-25 |
CN1201547A (zh) | 1998-12-09 |
PL325532A1 (en) | 1998-08-03 |
EE9800068A (et) | 1998-08-17 |
AU7078496A (en) | 1997-04-01 |
NO981074L (no) | 1998-05-13 |
KR100423029B1 (ko) | 2004-07-01 |
WO1997010586A1 (en) | 1997-03-20 |
EE03456B1 (et) | 2001-06-15 |
NO981074D0 (no) | 1998-03-11 |
CA2231107A1 (en) | 1997-03-20 |
CN1121684C (zh) | 2003-09-17 |
BR9610290A (pt) | 1999-03-16 |
RU2163032C2 (ru) | 2001-02-10 |
DE69613380D1 (de) | 2001-07-19 |
TR199800475T1 (xx) | 1998-06-22 |
EP0852052A1 (de) | 1998-07-08 |
AU724111B2 (en) | 2000-09-14 |
JPH11514453A (ja) | 1999-12-07 |
MX9801857A (es) | 1998-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0852052B1 (de) | System for adaptive filtering of audio signals to improve speech intelligibility in ambient noise | |
EP0645756B1 (de) | System for adaptively reducing noise in speech signals | |
EP1017042B1 (de) | Noise suppression controlled by voice activity detection | |
EP0786760B1 (de) | Speech coding | |
US5544250A (en) | Noise suppression system and method therefor | |
US8977556B2 (en) | Voice detector and a method for suppressing sub-bands in a voice detector | |
EP0784311B1 (de) | Method and device for detecting voice activity in a speech signal, and a communication device | |
FI116643B (fi) | Noise suppression | |
KR20010078401A (ko) | Complex signal activity detection for improved speech/noise classification of an audio signal | |
EP0599664B1 (de) | Speech coder and method for speech coding | |
US5666429A (en) | Energy estimator and method therefor | |
US7889874B1 (en) | Noise suppressor | |
US5710862A (en) | Method and apparatus for reducing an undesirable characteristic of a spectral estimate of a noise signal between occurrences of voice signals | |
EP1040467A1 (de) | Telephone terminal | |
JP2002076960A (ja) | Noise suppression method and mobile telephone | |
WO2001022401A1 (en) | Processing circuit for correcting audio signals, receiver, communication system, mobile apparatus and related method | |
EP1238479A1 (de) | Method and device for suppressing acoustic background noise in a communication system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 19980210 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): BE DE DK ES FI FR GB GR IT NL PT SE |
|
17Q | First examination report despatched |
Effective date: 19990517 |
|
RIC1 | Information provided on ipc code assigned before grant |
Free format text: 7G 10L 21/02 A |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): BE DE DK ES FI FR GB GR IT NL PT SE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010613 |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010613 |
Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010613 |
|
REF | Corresponds to: |
Ref document number: 69613380 Country of ref document: DE Date of ref document: 20010719 |
|
ITF | It: translation for a ep patent filed | ||
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010913 |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20010913 |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010913 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010914 |
Ref country code: DE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010914 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20010917 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20011220 |
|
EN | Fr: translation not filed | ||
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20010913 |
|
26N | No opposition filed | ||
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: NL Payment date: 20051229 Year of fee payment: 11 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20070926 Year of fee payment: 12 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20080401 |
|
NLV4 | Nl: lapsed or annulled due to non-payment of the annual fee |
Effective date: 20080401 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20080913 |