EP4113516A1 - Method and system for speech detection and speech enhancement - Google Patents

Method and system for speech detection and speech enhancement

Info

Publication number
EP4113516A1
Authority
EP
European Patent Office
Prior art keywords
speech
domain features
noise
input audio
noisy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22176918.5A
Other languages
German (de)
English (en)
Inventor
Anna Kim
Eamonn Shaw
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pexip AS
Original Assignee
Pexip AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pexip AS filed Critical Pexip AS
Publication of EP4113516A1 (fr)
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0224 Processing in the time domain
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 Processing in the frequency domain
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/57 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals
    • G10L25/84 Detection of presence or absence of voice signals for discriminating voice from noise

Definitions

  • the present invention relates to detecting and enhancing speech in a multipoint videoconferencing session, in particular a method of speech detection and speech enhancement in a speech detection and speech enhancement unit of a Multipoint Conferencing Node (MCN) and a method of training the same.
  • MCN Multipoint Conferencing Node
  • Terminals and endpoints able to participate in a conference may be traditional stationary video conferencing endpoints; external devices such as mobile and computer devices, smartphones, tablets, personal devices and PCs; and browser-based video conferencing terminals.
  • Video conferencing systems allow for simultaneous exchange of audio, video and data information among multiple conferencing sites.
  • MCUs Multipoint Control Units
  • MCI Multi Control Infrastructure
  • CNs Collaboration Nodes
  • MCU is the most commonly used term and has traditionally been associated with dedicated hardware; however, the functions of an MCN could just as well be implemented in software installed on general-purpose servers and computers. In the following, all kinds of nodes, devices and software implementing features, services and functions providing switching and layout functions to allow the endpoints and terminals of multiple sites to intercommunicate in a conference, including (but not limited to) MCUs, MCIs and CNs, are referred to as MCNs.
  • Audio quality represents a key aspect of the video conferencing experience.
  • One major challenge is the heterogeneous and dynamic nature of the audio environment of the various conference participants.
  • some participants may use headsets with directional microphones that are close to the mouth, while others may use the built-in speaker and microphone on a laptop computer.
  • a speakerphone with multiple microphones is often placed in the middle of the table, with participants sitting at different distances from the shared audio unit.
  • the noise contributions may be stationary or non-stationary, bursty, narrow or wideband, or contain speech-like harmonics.
  • One challenge is to reduce or eliminate noises that are disturbances in the meeting. Some noise can be minimized by audio devices on the client side which have active background noise cancellation. Non-verbal sounds produced by the speaker, such as coughing, sneezing and heavy breathing, are generally undesirable but cannot be removed in the same manner. A means to differentiate speech from noise, i.e. reliable speech detection, is therefore needed. In addition, speech corrupted by noise becomes less intelligible. The extent of degradation depends on the amount and the type of noise contributions. How to enhance the quality and intelligibility of speech when noise is present is the other challenge to be addressed.
  • VAD voice activity detection
  • VAD and speech enhancement are typically implemented on the client side, i.e. where speech is generated or perceived.
  • the GSM standard features audio codecs that support VAD for better bandwidth utilization.
  • Various implementations of enhancement algorithms can be found in high-performance headsets.
  • DNN deep neural network
  • the Opus audio codec, for example, improves VAD and supports classification of speech and music using recurrent neural networks (RNNs) in its 1.3 release.
  • Microsoft has also been actively pursuing real-time de-noising of speech using neural networks (venturebeat.com/2020/04/03/microsoft-teams-ai-machine-learning-real-time-noisesuppression-typing/) implemented on the client side.
  • Virtual microphone and speaker systems, such as the Nvidia RTX Voice noise cancellation plugin and Krisp, can be installed on a computer and used in conjunction with conferencing and streaming services to improve audio quality, handling both inbound (speaker) and outbound (microphone) noise.
  • In Fig. 1, audio and video streams from conference participants are received and processed in parallel in the Multipoint Conferencing Node (MCN).
  • Such an architecture may ensure end-to-end quality-of-service, balance network loads, and provide advanced processing and services. Speech detection and enhancement done in the MCN would provide the participants with better audio quality irrespective of the client-side implementation. The user is completely freed of the burden to decide if inbound or outbound noise reduction is needed.
  • an object of the present invention is to overcome or at least mitigate drawbacks of prior art video conferencing systems.
  • In a first aspect, the invention provides a method of training a speech detection and speech enhancement unit of a Multipoint Conferencing Node (MCN), comprising
  • Training the speech detection and speech enhancement unit separately for different acoustic environments has several advantages.
  • One advantage is that when the machine learning models are optimized for specific categories of acoustic environment, the models are smaller and require less time and fewer resources to train.
  • Another advantage is that the trained machine learning models do not require large-scale DNNs, which greatly reduces the computation footprint and resource utilization and makes the system scalable.
  • Another advantage of training the speech detection and speech enhancement unit separately for different acoustic environments is that the approach can easily be extended to accommodate new acoustic environments of interest.
  • the plurality of acoustic environments may comprise meeting room with video conferencing endpoint, home office, and public space.
  • the statistical generative model may be a Gaussian Mixture Model.
  • In a second aspect, the invention provides a method of speech detection and speech enhancement in a speech detection and speech enhancement unit of a Multipoint Conferencing Node (MCN), comprising
  • An advantage of the second aspect is that noise is suppressed but not eliminated, such that speech is more easily comprehensible while the underlying acoustic background is not lost.
  • the auxiliary information of the at least one videoconferencing participant may comprise at least one of a number of participants in a video image received from the at least one videoconferencing participant, and a specification of a videoconferencing endpoint received from the at least one videoconferencing participant.
  • the acoustic environment may comprise meeting room with video conferencing endpoint, home office, and public space.
  • the speech detection and speech enhancement unit may be trained according to the method of the first aspect.
  • the noise classifier may be a Bayesian classifier.
  • the noise reduction mask may be a composite noise reduction mask.
  • the composite noise reduction mask may be based on an estimated binary mask (EBM) generated using the Bayesian classifier.
  • EBM estimated binary mask
  • the method of the second aspect may comprise updating the statistical generative model representing the probability distributions of the T-F domain features of noisy speech trained for the determined acoustic environment when the estimated probability that one of the received input audio segments is noisy speech is close to the estimated probability that the same segment is clean speech.
  • Fig. 1 schematically illustrates multi-point videoconferencing system 100 with three videoconferencing endpoints 101a, 101b, 101c in communication with a multipoint conferencing node (MCN) 104.
  • Input audio 102a, 102b, 102c captured at the videoconferencing endpoints 101a, 101b, 101c is transmitted to the MCN 104, where the input audio 102a, 102b, 102c is mixed with audio from the other videoconferencing endpoints 101a, 101b, 101c, and output audio 103a, 103b, 103c is transmitted back out to the videoconferencing endpoints 101a, 101b, 101c.
  • Fig. 2 is a flowchart illustrating an exemplary method of providing speech detection and speech enhancement in a multi-point videoconferencing system.
  • the first step of a method of training a speech detection and speech enhancement unit of a Multipoint Conferencing Node is to provide an acoustic-environment-dependent dataset 1.
  • for each of a plurality of acoustic environments, a plurality of audio samples consisting of noise, clean speech and noisy speech is collected.
  • the plurality of acoustic environments may comprise three broad acoustic environment categories, such as meeting room with video conferencing endpoint, home office, and public space.
  • the dataset consists of noise n(k), clean speech x(k) and noisy speech z(k) samples, where k denotes the sampling instances in time.
  • the relationship function g(·), with z(k) = g(x(k), n(k)), may be linear (e.g. additive) or nonlinear (in the case of reverberation). It is not required that g(·) be known; it is sufficient that each noisy speech sample has a known source of noise and speech from which it is generated.
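For the additive case of g(·), the dataset generation can be illustrated by mixing clean speech and noise at a target SNR. This is a sketch under assumptions: the helper name `mix_at_snr`, the synthetic signals and the SNR value are illustrative and not specified in the text.

```python
import numpy as np

def mix_at_snr(x, n, snr_db):
    """Additive mixing z(k) = x(k) + a*n(k), with the noise scaled
    so that the clean-speech-to-noise power ratio matches snr_db."""
    p_x = np.mean(x ** 2)
    p_n = np.mean(n ** 2)
    a = np.sqrt(p_x / (p_n * 10 ** (snr_db / 10.0)))
    return x + a * n

# Example: one second of a synthetic tone ("clean speech" placeholder)
# mixed with white noise at 10 dB SNR, sampled at 16 kHz.
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
n = rng.standard_normal(16000)
z = mix_at_snr(x, n, snr_db=10.0)
```

In a real dataset the clean speech and noise recordings would come from the collected acoustic-environment samples rather than synthetic signals.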
  • the dataset is preferably made to capture main sources of noise and their relations to clean speech in a videoconferencing setting.
  • the dataset should be sufficiently large to avoid overfitting of machine learning models.
  • the dataset may preferably contain samples of different languages and utterances of people of various gender, age, and accents.
  • the dataset may preferably contain common noise contributions from the considered acoustic environments.
  • each of the plurality of samples is labeled with an acoustic environmental label corresponding to one of the plurality of acoustic environments, such as meeting room with video conferencing endpoint, home office, and public space.
  • Each of the plurality of audio samples consisting of clean speech is labeled with clean speech label.
  • the labels may be provided in the form of timestamps indicating when clean speech starts and stops.
  • the next step of the training method is to extract Time-Frequency (T-F) domain features 2 from the plurality of audio samples consisting of noise, clean speech and noisy speech.
  • T-F Time-Frequency
  • Extracting features from a speech signal in the T-F domain may be performed by several possible methodologies known to the skilled person. Different choices of methodology will lead to different machine learning models and variations in performance. However, there is no specific feature extraction method that is optimal in all use case scenarios.
  • One exemplary, commonly used feature extraction method, which generates Mel-frequency coefficients, is described herein with reference to Fig. 3. Audio samples are packed into fixed frame sizes in the milliseconds range (e.g. 10 ms). A Short-Time Fourier Transform is applied to the resulting audio frames in a fixed batch size (e.g. 4) to move into the frequency domain, with overlap between the frames (e.g. 50%). A windowing function (e.g. Hanning) is applied to minimize the edge effects between frames. A 128-channel Mel filterbank is applied, followed by a Discrete Cosine Transform. This results in n x M features, where n is the number of frames processed in a batch and M is the number of Mel-frequency cepstral coefficients (MFCCs).
  • MFCC Mel-frequency cepstral coefficient
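The extraction pipeline described above can be sketched roughly as follows. The 10 ms frames, 50% overlap, Hanning window and 128-channel Mel filterbank follow the examples in the text; the sample rate, the number of cepstral coefficients and all function names are illustrative assumptions.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular Mel filterbank, shape (n_mels, n_fft//2 + 1)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):       # rising edge of the triangle
            fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):      # falling edge of the triangle
            fb[i - 1, k] = (right - k) / (right - center)
    return fb

def mfcc_features(audio, sr=16000, frame_ms=10, overlap=0.5, n_mels=128, n_mfcc=13):
    frame_len = int(sr * frame_ms / 1000)   # 160 samples at 16 kHz
    hop = int(frame_len * (1 - overlap))    # 50% overlap between frames
    window = np.hanning(frame_len)          # Hanning window against edge effects
    fb = mel_filterbank(n_mels, frame_len, sr)
    m = np.arange(n_mels)                   # DCT-II basis over the Mel channels
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * m + 1) / (2 * n_mels)))
    feats = []
    for start in range(0, len(audio) - frame_len + 1, hop):
        frame = audio[start:start + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2   # short-time power spectrum
        log_mel = np.log(fb @ power + 1e-10)      # log Mel filterbank energies
        feats.append(dct @ log_mel)               # cepstral coefficients
    return np.array(feats)                        # shape (n_frames, n_mfcc)

# Example: features for one second of a 220 Hz tone at 16 kHz.
audio = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
feats = mfcc_features(audio)
```

Batching several frames before the transform, as the text suggests, amounts to stacking consecutive rows of the returned feature matrix.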
  • the next step of the training method is training, for each of the plurality of acoustic environments, one speech detection classifier 3 by inputting the T-F domain features of the plurality of audio samples consisting of noise, clean speech and noisy speech, the acoustic environmental labels and the clean speech labels to a first deep neural network (DNN). That is, one classifier is trained per acoustic environment category, or its subcategories if needed. Weights of the trained classifier may be quantized to support, for example, 16-bit precision or lower.
  • the trained DNN model typically contains an input layer, hidden layers and output layer.
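A minimal sketch of such a network, and of quantizing its weights to 16-bit precision, is shown below. The layer sizes and random weights are placeholders; a real classifier would be trained on the labeled T-F features rather than initialized randomly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal MLP: input layer (e.g. 13 MFCC features) -> one hidden layer
# -> output layer with two classes (speech / not speech).
W1, b1 = rng.standard_normal((13, 32)), np.zeros(32)
W2, b2 = rng.standard_normal((32, 2)), np.zeros(2)

def classify(features):
    h = np.maximum(features @ W1 + b1, 0.0)          # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)         # softmax class probabilities

# Quantize trained weights to 16-bit precision to shrink the deployed model.
W1_q = W1.astype(np.float16)

probs = classify(rng.standard_normal((4, 13)))       # 4 example feature frames
```

The quantization step here simply casts to `float16`; integer quantization schemes would also fit the "16-bit precision or lower" description.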
  • the next step of the training method comprises training, for each of the plurality of environments, one statistical generative model 5 representing the probability distributions of the T-F domain features of noisy speech, by inputting the T-F domain features of the plurality of audio samples comprising clean speech, the T-F domain features of the plurality of audio samples comprising noise, and the T-F domain features of the plurality of audio samples comprising noisy speech.
  • the statistical generative model may in one embodiment be a Gaussian Mixture Model (GMM).
  • the ideal binary mask (IBM) is a known technique for noise suppression in speech.
  • the binary mask must be estimated in practice, as the IBM is not known a priori.
  • a statistical generative model may be trained in an unsupervised manner to represent the probability distributions of the T-F features of noisy speech.
  • the T-F features of the acoustic training samples are divided into two classes: Cl_s, where speech is dominating, i.e. clean speech, and Cl_n, where noise is dominating, i.e. noisy speech.
  • the a priori SNR is defined as SNR_aprio(τ,k) = E[|X(τ,k)|²] / E[|N(τ,k)|²], where X(τ,k) is the T-F feature vector of the clean speech signal, N(τ,k) is that of the noise signal, and E[·] denotes the expectation operator.
  • GMM Gaussian Mixture Model
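A compact sketch of fitting such a generative model is given below, using a diagonal-covariance GMM trained with the EM algorithm. The number of mixture components, the feature dimensionality and the synthetic training data are illustrative assumptions; the text does not fix these choices.

```python
import numpy as np

def fit_gmm(X, n_comp=2, n_iter=50, seed=0):
    """Fit a diagonal-covariance GMM to the rows of X with the EM algorithm."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, n_comp, replace=False)]     # init means from samples
    var = np.ones((n_comp, d)) * X.var(axis=0)       # init variances from data
    w = np.full(n_comp, 1.0 / n_comp)                # uniform mixture weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                 + np.log(2 * np.pi * var)).sum(-1) + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)    # numerical stability
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return w, mu, var

# One GMM per class: speech-dominant (Cl_s) and noise-dominant (Cl_n),
# here fitted on synthetic stand-ins for the extracted T-F features.
rng = np.random.default_rng(2)
feats_speech = rng.normal(0.0, 1.0, (500, 4))
feats_noisy = rng.normal(3.0, 1.0, (500, 4))
gmm_s = fit_gmm(feats_speech)
gmm_n = fit_gmm(feats_noisy)
```

The small variance floor (`1e-6`) is one possible constraint of the kind the text mentions for keeping the EM iterations well behaved.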
  • the MCN receives input audio segments from at least one videoconferencing participant 101a, 101b, 101c.
  • the MCN determines an acoustic environment 4 based on auxiliary information of the at least one videoconferencing participant.
  • the auxiliary information of the at least one videoconferencing participant may comprise a number of participants in a video image received from the at least one videoconferencing participant, e.g. determined via face count by analyzing the video image.
  • Home offices are typically limited to one participant, meeting rooms may have multiple face counts, whereas public spaces can have multiple face counts but will not be identified as a meeting room.
  • the auxiliary information of the at least one videoconferencing participant may also comprise a specification of a videoconferencing endpoint received from the at least one videoconferencing participant, such as identification of the type of videoconferencing endpoint, e.g. room system, desktop system, PC-client etc.
  • the auxiliary information is preferably not stored, nor may it be used to identify specific participants.
  • the next step of online operation of the speech detection and speech enhancement unit is to extract Time-Frequency (T-F) domain features 2 from the received input audio segments. This may be done as described above with reference to the methodology of training the system.
  • the next step is to input the extracted T-F domain features into a speech detection classifier 3 trained for the determined acoustic environment to determine if each of the received input audio segments is speech.
  • when one of the received input audio segments is determined to be speech, a noise reduction mask 6 is applied to the received input audio segment according to the determination that the received audio segment is noisy speech, whereby enhanced speech is obtained in the audio segments.
  • the noise classifier is a Bayesian classifier: P(Cl_s | Z(τ,k)) = P(Z(τ,k) | Cl_s) P(Cl_s) / P(Z(τ,k)), where P(Cl_s | Z(τ,k)) is the a posteriori probability that the noisy speech feature vectors Z(τ,k) belong to the speech-dominant class Cl_s, P(Cl_s) is the corresponding a priori probability of the class, and the denominator P(Z(τ,k)), i.e. the probability of the features Z(τ,k), is a constant offset.
  • A corresponding a posteriori probability estimation is done for the noise-dominant class, P(Cl_n | Z(τ,k)).
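The Bayesian classification step can be sketched as follows, with single diagonal Gaussians standing in for the trained per-class GMMs; the class parameters and priors below are illustrative.

```python
import numpy as np

def log_gauss(z, mu, var):
    """Log-likelihood of feature vector z under a diagonal Gaussian."""
    return -0.5 * np.sum((z - mu) ** 2 / var + np.log(2 * np.pi * var))

# Illustrative class models: speech-dominant Cl_s and noise-dominant Cl_n.
mu_s, var_s = np.zeros(4), np.ones(4)
mu_n, var_n = np.full(4, 3.0), np.ones(4)
prior_s, prior_n = 0.5, 0.5

def posterior_speech(z):
    """P(Cl_s | Z) = P(Z | Cl_s) P(Cl_s) / P(Z) via Bayes' rule."""
    ls = np.exp(log_gauss(z, mu_s, var_s)) * prior_s
    ln = np.exp(log_gauss(z, mu_n, var_n)) * prior_n
    return ls / (ls + ln)   # the denominator plays the role of P(Z)

p = posterior_speech(np.zeros(4))   # feature vector near the speech model
```

The noise-class posterior is the complement, P(Cl_n | Z) = 1 - P(Cl_s | Z), since the two classes are exhaustive here.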
  • the noise reduction mask may in one embodiment be a composite noise reduction mask, in particular based on an estimated binary mask (EBM) generated using the Bayesian classifier.
  • While noise reduction based on a spectral gain defined in terms of the a priori SNR may effectively limit musical noise (harmonic distortions), part of the speech harmonics might also be lost in the process.
  • Hard-decision noise masks such as the ideal binary mask (IBM) suffer from this effect.
  • the ideal ratio mask, on the other hand, is a soft-decision mask made by applying the energy ratio between the target clean speech and the received noisy speech. Since it is based on the a posteriori SNR (received noisy speech), it has the drawback of inducing harmonic distortions and can result in an undesirable ringing sound. On the other hand, removing less noise retains the acoustic context. A combination of the two masks may therefore be used to achieve a better balance between noise reduction and speech enhancement.
  • |X̂(τ,k)|² is the estimated clean speech power. The estimated ratio mask (ERM) can be determined as the energy ratio ERM(τ,k) = |X̂(τ,k)|² / |Z(τ,k)|², and the composite mask (CM) as follows: CM(τ,k) = α ERM(τ,k) + β EBM(τ,k)
  • the weights α and β are tuned for each acoustic environment class.
  • the ERM contribution is expected to be considerably smaller than the EBM contribution, as some noise is expected to be retained to provide the acoustic context.
  • the estimated CM is applied to each T-F region of the noisy speech spectrum, the components from the different filter bands are summed, and an inverse Fourier transform is then applied to construct the resulting enhanced speech signal.
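Applying the composite mask per T-F region can be sketched as below. The mask values, the weights α and β, and the spectrum are illustrative stand-ins for the trained quantities; only the combination and per-cell application follow the text.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_frames = 128, 50

# Illustrative per-T-F-region mask estimates.
ebm = (rng.random((n_bands, n_frames)) > 0.5).astype(float)  # hard 0/1 decisions
erm = rng.random((n_bands, n_frames))                        # soft energy ratios in [0, 1)

# Composite mask CM = alpha * ERM + beta * EBM, with the ERM weight kept
# small so that some noise is retained to preserve the acoustic context.
alpha, beta = 0.2, 0.8
cm = alpha * erm + beta * ebm

# Apply the mask to each T-F region of a (synthetic) noisy spectrum.
noisy_spec = (rng.standard_normal((n_bands, n_frames))
              + 1j * rng.standard_normal((n_bands, n_frames)))
enhanced_spec = cm * noisy_spec
# Summing the filter-band components and taking an inverse Fourier transform
# per frame (with overlap-add) would then reconstruct the enhanced signal.
```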
  • the method may further comprise updating 7 the statistical generative model representing the probability distributions of the T-F domain features of noisy speech trained for the determined acoustic environment when the estimated probability that one of the received input audio segments is noisy speech is close to the estimated probability that the same segment is clean speech.
  • new data can be included in the trained GMM to provide better confidence in the EBM estimation.
  • New weights of the model can be estimated using the same EM algorithm used for training the GMM offline, along with new means and variances for the mixture. Constraints may be applied to ensure convergence of the EM algorithm.
  • T-F features of the current audio frame are extracted. Then a posteriori probability estimation is performed using the trained GMM. The ratio of the number of ambiguous T-F instances relative to the total number of T-F features is calculated, and the calculated ratio is compared with a threshold. If the ratio is below the threshold, the composite noise mask is calculated. If the ratio is above the threshold, the means, variances and weights are recalculated and then used to update the GMM for the next audio frame.
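The per-frame update decision can be sketched as follows. The definition of an "ambiguous" T-F instance (posterior near 0.5) and the threshold value are illustrative assumptions, since the text leaves the closeness criterion and threshold unspecified.

```python
import numpy as np

def should_update_gmm(p_speech, margin=0.1, ratio_threshold=0.3):
    """Decide whether to refresh the GMM for the next frame.

    p_speech: per-T-F-feature posterior probabilities of the speech class.
    A feature is 'ambiguous' when its speech and noise posteriors are
    close, i.e. p_speech is near 0.5.
    """
    ambiguous = np.abs(p_speech - 0.5) < margin
    ratio = ambiguous.mean()     # ambiguous instances / total T-F features
    # Below the threshold: keep the model and compute the composite mask.
    # Above it: re-estimate means, variances and weights to update the GMM.
    return ratio > ratio_threshold

frame_posteriors = np.array([0.95, 0.9, 0.52, 0.48, 0.1, 0.05])
update = should_update_gmm(frame_posteriors)
```

With two of six posteriors inside the ambiguity margin, the ratio (about 0.33) exceeds the example threshold, so this frame would trigger a model update.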

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Telephonic Communication Services (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Radio Relay Systems (AREA)
EP22176918.5A 2021-06-30 2022-06-02 Method and system for speech detection and speech enhancement Pending EP4113516A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NO20210874A NO347277B1 (en) 2021-06-30 2021-06-30 Method and system for speech detection and speech enhancement

Publications (1)

Publication Number Publication Date
EP4113516A1 (fr) 2023-01-04

Family

ID=77265171

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22176918.5A Pending EP4113516A1 (fr) Method and system for speech detection and speech enhancement

Country Status (3)

Country Link
US (1) US20230005469A1 (fr)
EP (1) EP4113516A1 (fr)
NO (1) NO347277B1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130022223A1 (en) * 2011-01-25 2013-01-24 The Board Of Regents Of The University Of Texas System Automated method of classifying and suppressing noise in hearing devices
WO2017119901A1 (fr) * 2016-01-08 2017-07-13 Nuance Communications, Inc. System and method for speech detection adaptation
US20200184987A1 (en) * 2020-02-10 2020-06-11 Intel Corporation Noise reduction using specific disturbance models
US20210074282A1 (en) * 2019-09-11 2021-03-11 Massachusetts Institute Of Technology Systems and methods for improving model-based speech enhancement with neural networks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10546593B2 (en) * 2017-12-04 2020-01-28 Apple Inc. Deep learning driven multi-channel filtering for speech enhancement
CN111613243B (zh) * 2020-04-26 2023-04-18 云知声智能科技股份有限公司 A speech detection method and apparatus
CN112581973B (zh) * 2020-11-27 2022-04-29 深圳大学 A speech enhancement method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BAO FENG ET AL: "Signal power estimation based on convex optimization for speech enhancement", 2017 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), IEEE, 12 December 2017 (2017-12-12), pages 483 - 487, XP033315466, DOI: 10.1109/APSIPA.2017.8282080 *
ZHAO XIAOJIA ET AL: "Robust Speaker Identification in Noisy and Reverberant Conditions", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, IEEE, USA, vol. 22, no. 4, 1 April 2014 (2014-04-01), pages 836 - 845, XP011542426, ISSN: 2329-9290, [retrieved on 20140306], DOI: 10.1109/TASLP.2014.2308398 *

Also Published As

Publication number Publication date
NO347277B1 (en) 2023-08-21
NO20210874A1 (en) 2023-01-02
US20230005469A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
US11894014B2 (en) Audio-visual speech separation
US10678501B2 (en) Context based identification of non-relevant verbal communications
US10511718B2 (en) Post-teleconference playback using non-destructive audio transport
US8676572B2 (en) Computer-implemented system and method for enhancing audio to individuals participating in a conversation
WO2021196905A1 (fr) Speech signal dereverberation processing method and apparatus, computer device and storage medium
US20170270930A1 (en) Voice tallying system
WO2021179651A1 (fr) Call audio mixing processing method and apparatus, storage medium and computer device
US7698141B2 (en) Methods, apparatus, and products for automatically managing conversational floors in computer-mediated communications
JP2024507916A (ja) Audio signal processing method, apparatus, electronic device, and computer program
CN117121103A (zh) Method and apparatus for real-time sound enhancement
EP4113516A1 (fr) Method and system for speech detection and speech enhancement
CN117079661A (zh) A sound source processing method and related apparatus
EP1453287B1 (fr) Automatic management of conversation groups
Jahanirad et al. Blind source computer device identification from recorded VoIP calls for forensic investigation
EP4300490B1 (fr) Audio processing method and device for voice anonymization
CN116866321B (zh) A centerless multi-channel sound consistency selection method and system
WO2023249786A1 (fr) Distributed teleconferencing using personalized enhancement models
Nag et al. Non-negative matrix factorization on a multi-lingual overlapped speech signal: a signal and perception level analysis
US20240203438A1 (en) Noise suppression for speech data with reduced power consumption
Gogate et al. Application for Real-time Audio-Visual Speech Enhancement
Ronssin et al. Application for Real-time Personalized Speaker Extraction.
Rustrana et al. Spectral Methods for Single Channel Speech Enhancement in Multi-Source Environment
Kim et al. A main speaker decision for a distributed telepresence system
WO2019003131A1 (fr) Method for digital processing of an audio signal and associated system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230704

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240523