WO2014076129A1 - Method for operating a telephone conference system, and telephone conference system - Google Patents

Method for operating a telephone conference system, and telephone conference system

Info

Publication number
WO2014076129A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
audio signals
audio signal
activity
linguistic
Prior art date
Application number
PCT/EP2013/073720
Other languages
German (de)
English (en)
Other versions
WO2014076129A8 (fr)
Inventor
Christian Hoene
Michael Haun
Patrick SCHREINER
Original Assignee
Symonics GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Symonics GmbH filed Critical Symonics GmbH
Publication of WO2014076129A1 publication Critical patent/WO2014076129A1/fr
Publication of WO2014076129A8 publication Critical patent/WO2014076129A8/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/56Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M3/568Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2203/00Aspects of automatic or semi-automatic exchanges
    • H04M2203/50Aspects of automatic or semi-automatic exchanges related to audio conference
    • H04M2203/5072Multiple active speakers

Definitions

  • The invention relates to a method for operating a telephone conference system and to a telephone conference system.
  • In the course of a telephone conference, the speech-based audio signals of the respective call participants are mixed, so that each call participant can hear the other call participants.
  • A system that interconnects several call participants in this way is called a telephone conference system.
  • In a traditional telephone conference based on single-channel audio signal transmission, a moderator typically decides who is allowed to speak. As a rule, all conversation participants try to avoid interrupting others.
  • A technologically more sophisticated alternative for dealing with multiple simultaneous conversation participants or speakers is the use of surround-sound or stereo transmission technologies, since a listener can separate voices that come from different directions and focus on one of them. This is also known as the cocktail party effect.
  • The object of the invention is to provide a method for operating a telephone conference system, and a telephone conference system, which allow the use of single-channel (mono) transmission channels while preserving the separability of the individual conversation contributions, without, for example, a moderator having to decide who may speak. The invention solves this problem by a method according to claim 1 and a telephone conference system according to claim 3.
  • The method is used to operate a telephone conference system for at least two (call) participants.
  • The method comprises the following steps:
  • A speech activity of each conversation participant is determined, i.e. it is determined whether a respective conversation participant is speaking or silent. In the event that speech activity is detected for more than one conversation participant, i.e. if several call participants are speaking at the same time, only a single speech activity (i.e. the contribution of a single conversation participant) is played back at a time, and the remaining speech activities or conversation contributions are played back one after the other with a time delay.
  • The speech activities can be reproduced sequentially in the order of their temporal origin.
  • The determination of the speech activity may include so-called Voice Activity Detection (VAD).
  • By means of VAD, the presence or absence of human speech or of a conversation contribution is determined, i.e. it is determined whether a conversation participant is silent or speaking. Consequently, silence and speech activity can be detected by means of VAD (a minimal energy-based VAD sketch is given at the end of this section). Switching between conversation participants or conversation contributions can be carried out, for example, at the beginning of a speech pause. For further details, reference is made to the relevant specialist literature.
  • The determination of speech activity can further be based on whether important or unimportant speech segments are involved. For an audio signal or speech segment classified as important, speech activity is recognized. For an audio signal or speech segment classified as unimportant, no speech activity is recognized.
  • Audio signals from different conversation participants that contain speech segments classified as unimportant, and thus not as speech activity, can be mixed with one another and played back simultaneously.
  • Unimportant parts would be, for example, speech segments in which the speech characteristics do not change, for example during an "aaah".
  • If a so-called codec is used that compresses an audio signal in the form of an audio stream at a variable bit rate, speech activity can be ruled out with high probability whenever the codec generates a compressed data stream with a low bit rate.
  • The telephone conference system is intended for at least two call participants and is preferably designed to carry out the above-mentioned method.
  • The telephone conference system comprises at least two audio signal sources, wherein a respective audio signal source is assigned to a conversation participant and designed to generate an audio signal as a function of a speech activity of that conversation participant.
  • The audio signal may be an analog audio signal. Typically, however, it is a digital audio signal.
  • At least one buffer is provided, which is designed to temporarily store at least a part of a respective audio signal.
  • The buffer can be designed as a first-in-first-out (FIFO) memory (a minimal sketch of such a per-participant frame buffer is given at the end of this section).
  • The buffer can also be designed as a ring buffer.
  • The buffer may store the (digital) audio signal in response to a control signal generated by a control device when buffering is required.
  • The buffer may be an electronic memory such as RAM, a magnetic memory, etc. Exactly one buffer can be provided, or one buffer can be provided per audio signal.
  • At least one voice activity recognition device is provided, which is designed to determine a speech activity in a respective audio signal.
  • The voice activity recognition device can be designed as a microprocessor on which suitable voice activity recognition software runs. Exactly one voice activity recognition device can be provided, or one voice activity recognition device can be provided per audio signal.
  • A (digital) audio signal mixing device is provided, configured to mix input (digital) audio signals with each other and to output the mixed input audio signals as the output audio signal.
  • The output audio signal can be output simultaneously to a plurality of outputs of the audio signal mixing device.
  • At least two loudspeakers are provided, wherein each loudspeaker is assigned to a conversation participant, receives the output audio signal of the audio signal mixing device and outputs it as a corresponding sound signal.
  • A control device of the telephone conference system is adapted, in the event that speech activity is detected in more than one of the audio signals, to output only a single one of those audio signals to the audio signal mixing device as an audio input signal, and to buffer the remaining audio signals in which speech activity is detected, i.e. to control the buffer so that those audio signals are temporarily stored and output to the audio signal mixing device as audio input signals with a time delay, so that they are reproduced with a time delay.
  • The audio signals can be buffered prior to their reproduction exclusively during their detected speech activity.
  • At least one speaker collision detection device can be provided.
  • The speaker collision detection device is designed to detect or recognize simultaneous speech activities in a plurality of associated audio signals.
  • The speaker collision detection device can be part of the voice activity recognition device or can be formed separately from it. If the voice activity recognition device and the speaker collision detection device are implemented separately from one another, the speaker collision detection device can be arranged, for example, between the voice activity recognition device and the control device, wherein the speaker collision detection device recognizes, for example, multiple simultaneous speakers and passes this information to the control device.
  • The control device may be designed to check, in the event that no speech activity is detected in any of the audio signals, whether buffered audio signals are present and, if buffered audio signals are present, to output the buffered audio signals, one after the other or mixed, to the audio signal mixing device as audio input signals until no buffered audio signals remain.
  • The control device can be configured to output all audio signals as audio input signals to the audio signal mixing device in the event that no speech activity is detected in any of the audio signals and no buffered audio signals are present.
  • The control device may be configured, in the event that speech activity is detected in only one of the audio signals and no buffered audio signals are present, to output only the audio signal in which speech activity is detected to the audio signal mixing device as an audio input signal.
  • The audio signals can be mono audio signals.
  • The audio signal sources may be microphones of headsets and/or telephone handsets, and the loudspeakers may be the speakers of the headsets or handsets, i.e. they may be integrated into the headsets and/or the telephone handsets.
  • The voice activity recognition device or the speaker collision detection device can be designed to transform the audio signals into the frequency domain.
  • The voice activity recognition device or the speaker collision detection device can also be designed to examine whether the transformed audio signals are superposed in one or more frequency bands, i.e. to investigate whether speech activities can be recognized in a plurality of frequency bands.
  • The voice activity recognition device or the speaker collision detection device can be designed to transmit the speaker collision determined in this way to the control device.
  • Fig. 1 shows a telephone conference system for several conversation participants.
  • The telephone conference system has, by way of example, two audio signal sources in the form of microphones 1_1 and 1_2 of two telephone handsets or headsets, and two loudspeakers 5_1 and 5_2, which are part of the telephone handsets or headsets.
  • The microphones 1_1 and 1_2 are each assigned to a conversation participant and designed to generate a mono audio signal AS_1 or AS_2 depending on a speech activity of the respective conversation participant.
  • The audio signals AS_1 and AS_2 are digital audio signals; the digitization of the speech activity can take place in the microphones or in a downstream A/D converter (not shown).
  • The telephone conference system further has two buffers 2_1 and 2_2, wherein a respective buffer 2_1, 2_2 is configured to buffer an associated audio signal AS_1 or AS_2 in response to a memory control signal generated by a control device 6a, 6b when buffering is required.
  • The telephone conference system further has a voice activity recognition device 3, which is designed to detect or recognize a speech activity in a respective audio signal AS_1 and AS_2 and to transmit the result of the determination to the control device 6a, 6b.
  • The telephone conference system further has a (digital) audio signal mixing device 4, which is designed to mix input audio signals with one another and to output the mixed input audio signals as output audio signal OS, by way of example to two output terminals.
  • The control device has components 6a and 6b.
  • The control device component 6a is in data connection with the voice activity recognition device 3 and receives information regarding the speech activity of the audio signals AS_1 and AS_2 from the voice activity recognition device 3.
  • The control device component 6a is further connected to the buffers 2_1 and 2_2 and, if necessary, controls them to buffer the associated audio signals AS_1 or AS_2.
  • The control device component 6b is connected on the input side to the microphones 1_1 and 1_2 and the buffers 2_1 and 2_2, and on the output side to the audio signal mixing device 4.
  • The control device component 6b has internal switching logic (not shown), whose switching positions are determined by the control device component 6a. The switching logic determines which input or inputs are passed through to the audio signal mixing device 4.
  • The control device, or its components 6a and 6b, is/are designed, for the case in which speech activity is detected in both audio signals AS_1 and AS_2, i.e. both conversation participants are speaking, to output only one of the two audio signals AS_1 or AS_2 via the control device component 6b to the audio signal mixing device 4 as the audio input signal, to buffer the other audio signal AS_1 or AS_2 in the associated buffer 2_1 or 2_2, and then to output it as an audio input signal to the audio signal mixing device 4 with a time delay.
  • The audio signal AS_1 or AS_2 that is not output can be delayed until no speech activity is detected any longer in the other audio signal AS_1 or AS_2.
  • The decision as to which of the two audio signals AS_1 and AS_2 is output first to the audio signal mixing device 4 may be based on which of the audio signals AS_1 or AS_2 was first recognized as containing speech activity.
  • The control device 6a, 6b is further adapted, in the event that no speech activity is detected in any of the audio signals AS_1 and AS_2, to check whether buffered audio signals are present and, if buffered audio signals are present, to output the buffered audio signals one after the other to the audio signal mixing device 4 as audio input signals until no buffered audio signals remain.
  • The audio signal mixing device 4 mixes two or more audio signals AS_1 and AS_2 in order to generate one or more identical output audio signals OS from them.
  • A conventional voice activity detector, as used in telephone systems, can be used to distinguish phases of silence from phases of active speech.
  • The audio signals AS_1 and AS_2 can be reproduced with a delay. This means that they are not forwarded directly to the loudspeakers 5_1 and 5_2 or to the audio signal mixing device 4, but can be buffered for an arbitrary period of time in order to be forwarded to the audio signal mixing device 4 later.
  • The buffers 2_1 and 2_2 may be implemented as FIFO memories. For reasons of algorithmic efficiency, it may be useful to store not only the audio signals AS_1 and AS_2 in the buffers 2_1 and 2_2, but also the associated speech activity information.
  • The control device 6a, 6b monitors whether the voice activity recognition device 3 detects active or relevant signals and whether acoustic signals are stored in the buffers 2_1 and 2_2.
  • All audio signals AS_1 and AS_2 can be mixed in the audio signal mixing device 4. This is the case, for example, when all conference or conversation participants are silent.
  • If only one relevant audio signal AS_1 or AS_2 is present, it is forwarded to the audio signal mixing device 4. This is the case, for example, when one conversation participant begins to talk. If further audio signals then become relevant or active, they are not forwarded to the audio signal mixing device 4 without delay, but are stored in the buffers 2_1 and 2_2. This avoids a second conversation participant interrupting the first. As soon as the currently reproduced audio signal no longer contains any relevant information, i.e. is no longer speech-active, one of the stored audio signals is retrieved from the associated buffer 2_1 or 2_2 and routed to the audio signal mixing device 4 and thus to the loudspeakers 5_1 and 5_2. The second caller is thus played back with a delay, after the first has fallen silent. This process continues until none of the buffers 2_1 and 2_2 holds any more audio signals (a minimal control-loop sketch of this behavior is given at the end of this section).
  • The two audio signal sources 1_1 and 1_2, the two buffers 2_1 and 2_2 and the two loudspeakers 5_1 and 5_2 represent only an exemplary number. Of course, any number of these components can be used.
  • The voice activity recognition device 3 can be designed to transform the audio signals AS_1 and AS_2 into the frequency domain, to examine whether the transformed audio signals are superposed in one or more frequency bands and, in the event that they overlap in one or more frequency bands, to determine that speech activity is present in more than a single one of the audio signals AS_1 and AS_2 (a minimal frequency-domain collision check is sketched at the end of this section).
  • The voice activity recognition device 3 may transmit the result of this determination to the control device component 6a.
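
The following is a minimal, hedged sketch of the energy-based voice activity detection referred to above. The patent does not prescribe a particular VAD algorithm; the frame length, energy threshold and hangover count are illustrative assumptions, and the function names are hypothetical.

```python
# Minimal energy-based VAD sketch; thresholds and frame sizes are assumptions.
import numpy as np

FRAME_MS = 20             # frame length in milliseconds (assumed)
ENERGY_THRESHOLD = 1e-4   # mean squared amplitude threshold, signal in [-1, 1] (assumed)
HANGOVER_FRAMES = 10      # keep "active" briefly after speech stops (assumed)

def frame_energy(frame: np.ndarray) -> float:
    """Mean squared amplitude of one audio frame."""
    return float(np.mean(frame ** 2))

def detect_speech_activity(samples: np.ndarray, sample_rate: int) -> list[bool]:
    """Return one boolean per frame: True where speech activity is assumed."""
    frame_len = int(sample_rate * FRAME_MS / 1000)
    flags, hangover = [], 0
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        if frame_energy(frame) > ENERGY_THRESHOLD:
            hangover = HANGOVER_FRAMES
        else:
            hangover = max(0, hangover - 1)
        flags.append(hangover > 0)
    return flags

if __name__ == "__main__":
    rate = 8000
    t = np.arange(rate) / rate
    speech_like = 0.1 * np.sin(2 * np.pi * 200 * t)   # 1 s of "speech"
    silence = np.zeros(rate)                          # 1 s of silence
    flags = detect_speech_activity(np.concatenate([speech_like, silence]), rate)
    print(f"active frames: {sum(flags)} of {len(flags)}")
```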
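
The per-participant buffer that stores audio frames together with their speech-activity information could look roughly as follows. This is only a sketch under the assumption of fixed-size frames; the class and method names (FrameBuffer, push, pop) are invented for illustration and do not appear in the patent.

```python
# Sketch of a per-participant FIFO frame buffer; all names are illustrative.
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class BufferedFrame:
    samples: bytes       # one frame of (digital) audio
    speech_active: bool  # VAD result for this frame

class FrameBuffer:
    """First-in-first-out buffer for the audio frames of one conversation participant."""

    def __init__(self, max_frames: Optional[int] = None):
        # A bounded deque behaves like a ring buffer: once full, the oldest
        # frame is dropped when a new one is pushed.
        self._frames = deque(maxlen=max_frames)

    def push(self, samples: bytes, speech_active: bool) -> None:
        """Append one frame together with its speech-activity flag."""
        self._frames.append(BufferedFrame(samples, speech_active))

    def pop(self) -> Optional[BufferedFrame]:
        """Remove and return the oldest frame, or None if the buffer is empty."""
        return self._frames.popleft() if self._frames else None

    def __len__(self) -> int:
        return len(self._frames)

if __name__ == "__main__":
    buf = FrameBuffer(max_frames=500)            # roughly 10 s at 20 ms frames (assumed)
    buf.push(b"\x00" * 320, speech_active=True)  # 20 ms of 16-bit mono audio at 8 kHz
    buf.push(b"\x00" * 320, speech_active=False)
    print(len(buf), buf.pop().speech_active)     # -> 2 True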
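
The control behavior described above (pass one active speaker to the mixer, buffer colliding speakers, drain the buffers once the current speaker falls silent, and mix everything when nobody speaks and nothing is buffered) can be sketched as a simple per-frame loop. The structure below is an interpretation, not the patent's implementation; the names and the lockstep one-frame-per-participant model are assumptions.

```python
# Sketch of the control loop: at most one speech-active participant is forwarded
# to the mixer at a time; colliding speakers are buffered and drained later.
from collections import deque

def mix(frames):
    """Stand-in for the audio signal mixing device: average the given frames."""
    return [sum(s) / len(frames) for s in zip(*frames)] if frames else []

def conference_tick(active_flags, frames, buffers, state):
    """
    active_flags: dict participant -> bool (speech activity in this tick)
    frames:       dict participant -> list[float] (one audio frame each)
    buffers:      dict participant -> deque of delayed frames
    state:        dict with key "floor" (participant currently passed through, or None)
    Returns the mixed output frame for this tick.
    """
    floor = state.get("floor")
    if floor is not None and not active_flags[floor] and not buffers[floor]:
        floor = None                              # current speaker fell silent, no backlog

    if floor is None:
        # Prefer draining a buffered (delayed) contribution before a live one.
        floor = next((p for p, b in buffers.items() if b), None)
        if floor is None:
            floor = next((p for p, a in active_flags.items() if a), None)
    state["floor"] = floor

    to_mix = []
    for p, frame in frames.items():
        if p == floor:
            if buffers[p]:
                # Drain the backlog first; a live frame joins the queue so that
                # nothing is lost, it is simply played back time-delayed.
                if active_flags[p]:
                    buffers[p].append(frame)
                to_mix.append(buffers[p].popleft())
            else:
                to_mix.append(frame)
        elif active_flags[p]:
            buffers[p].append(frame)              # delay colliding speakers
    if floor is None:
        to_mix = list(frames.values())            # nobody speaks, nothing buffered: mix all
    return mix(to_mix)

if __name__ == "__main__":
    buffers = {"A": deque(), "B": deque()}
    state = {"floor": None}
    out = conference_tick({"A": True, "B": True},
                          {"A": [0.2, 0.2], "B": [0.5, 0.5]},
                          buffers, state)
    print(state["floor"], out, len(buffers["B"]))  # A holds the floor, B is buffered
```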
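
Finally, a sketch of the frequency-domain speaker-collision check: frames are transformed with an FFT, and a collision is assumed when two signals carry significant energy in at least one common frequency band. The band edges, the energy threshold and the use of plain FFT magnitudes are illustrative assumptions, not values from the patent.

```python
# Sketch of a frequency-domain speaker-collision check; thresholds are assumptions.
import numpy as np

BAND_EDGES_HZ = [300, 600, 1200, 2400, 3400]  # coarse speech bands (assumed)
BAND_ENERGY_THRESHOLD = 1e-3                  # per-band mean energy threshold (assumed)

def active_bands(frame: np.ndarray, sample_rate: int) -> set[int]:
    """Indices of the frequency bands in which the frame carries significant energy."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    bands = set()
    for i in range(len(BAND_EDGES_HZ) - 1):
        mask = (freqs >= BAND_EDGES_HZ[i]) & (freqs < BAND_EDGES_HZ[i + 1])
        if mask.any() and spectrum[mask].mean() > BAND_ENERGY_THRESHOLD:
            bands.add(i)
    return bands

def speakers_collide(frame_a: np.ndarray, frame_b: np.ndarray, sample_rate: int) -> bool:
    """True if both frames are active in at least one common frequency band."""
    return bool(active_bands(frame_a, sample_rate) & active_bands(frame_b, sample_rate))

if __name__ == "__main__":
    rate = 8000
    t = np.arange(rate // 50) / rate              # one 20 ms frame (50 Hz bin spacing)
    a = 0.1 * np.sin(2 * np.pi * 450 * t)         # energy in the 300-600 Hz band
    b = 0.1 * np.sin(2 * np.pi * 500 * t)         # overlapping band
    c = 0.1 * np.sin(2 * np.pi * 2000 * t)        # different band
    print(speakers_collide(a, b, rate), speakers_collide(a, c, rate))  # -> True False
```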

Abstract

The invention relates to a method for operating a telephone conference system for at least two conversation participants, the method comprising the following steps: determining a speech activity of a conversation participant and, in the event that speech activity is determined for more than one conversation participant, reproducing only a single speech activity and reproducing the remaining speech activities with a time offset.
PCT/EP2013/073720 2012-11-13 2013-11-13 Procédé pour faire fonctionner un système de conférence téléphonique et système de conférence téléphonique WO2014076129A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE201210220688 DE102012220688A1 (de) 2012-11-13 2012-11-13 Verfahren zum Betreiben eines Telefonkonferenzsystems und Telefonkonferenzsystem
DE102012220688.4 2012-11-13

Publications (2)

Publication Number Publication Date
WO2014076129A1 (fr) 2014-05-22
WO2014076129A8 WO2014076129A8 (fr) 2014-07-31

Family

ID=49578306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/073720 WO2014076129A1 (fr) 2012-11-13 2013-11-13 Procédé pour faire fonctionner un système de conférence téléphonique et système de conférence téléphonique

Country Status (2)

Country Link
DE (1) DE102012220688A1 (fr)
WO (1) WO2014076129A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015001492A1 (fr) * 2013-07-02 2015-01-08 Family Systems, Limited Systèmes et procédés d'amélioration de services d'audioconférence
US11017790B2 (en) 2018-11-30 2021-05-25 International Business Machines Corporation Avoiding speech collisions among participants during teleconferences

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009001035A2 (fr) * 2007-06-22 2008-12-31 Wivenhoe Technology Ltd Transmission d'informations audio

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4420212A1 (de) * 1994-06-04 1995-12-07 Deutsche Bundespost Telekom Übertragungssystem für gleichzeitige Mehrfachübertragung von mehreren Bild- und Tonsignalen
US20050210394A1 (en) * 2004-03-16 2005-09-22 Crandall Evan S Method for providing concurrent audio-video and audio instant messaging sessions

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009001035A2 (fr) * 2007-06-22 2008-12-31 Wivenhoe Technology Ltd Transmission d'informations audio

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015001492A1 (fr) * 2013-07-02 2015-01-08 Family Systems, Limited Systèmes et procédés d'amélioration de services d'audioconférence
US9087521B2 (en) 2013-07-02 2015-07-21 Family Systems, Ltd. Systems and methods for improving audio conferencing services
US9538129B2 (en) 2013-07-02 2017-01-03 Family Systems, Ltd. Systems and methods for improving audio conferencing services
US10553239B2 (en) 2013-07-02 2020-02-04 Family Systems, Ltd. Systems and methods for improving audio conferencing services
US11017790B2 (en) 2018-11-30 2021-05-25 International Business Machines Corporation Avoiding speech collisions among participants during teleconferences

Also Published As

Publication number Publication date
WO2014076129A8 (fr) 2014-07-31
DE102012220688A1 (de) 2014-05-15

Similar Documents

Publication Publication Date Title
DE60209637T2 (de) Steuerung eines Konferenzgespräches
EP1613124A2 (fr) Traitement des signaux stéréo de microphone pour le système de téléconférences
DE102021204829A1 (de) Automatische korrektur fehlerhafter audioeinstellungen
DE112015006800T5 (de) Verfahren und Kopfhörersatz zur Verbesserung einer Tonqualität
DE10251113A1 (de) Verfahren zum Betrieb eines Spracherkennungssystems
Carlile ACTIVE LISTENING: SPEECH INTELLIGIBILITY IN NOISY ENVIRONMENTS.
EP1912472A1 (fr) Procédé pour le fonctionnement d'une prothèse auditive and prothèse auditive
DE102014214052A1 (de) Virtuelle Verdeckungsmethoden
EP2077059B1 (fr) Procédé de fonctionnement d'une aide auditive et aide auditive
DE102012214611B4 (de) Verbesserte Tonqualität bei Telefonkonferenzen
EP2047668B1 (fr) Procédé, système de dialogue vocal et terminal de télécommunication pour la reproduction vocale multilingue
Schoenmaker et al. The multiple contributions of interaural differences to improved speech intelligibility in multitalker scenarios
DE102009035796B4 (de) Benachrichtigung über Audio-Ausfall bei einer Telekonferenzverbindung
EP1438833A2 (fr) Dispositif et procede d'annulation d'echo acoustique multicanal a nombre de canaux variable
WO2014076129A1 (fr) Procédé pour faire fonctionner un système de conférence téléphonique et système de conférence téléphonique
EP1808853A1 (fr) Dispositif et procédé pour améliorer l'intelligibilité d'un système de sonorisation, et programme informatique
EP2047632B1 (fr) Procédé pour réaliser une conférence vocale, et système de conférence vocale
EP1126687A2 (fr) Procede pour la reduction coordonnée de bruit et/ou d'écho
WO2008043758A1 (fr) Procédé d'utilisation d'une aide auditive et aide auditive
DE102014210760B4 (de) Betrieb einer Kommunikationsanlage
EP1062487B1 (fr) Dispositif a microphone pour la reconnaissance vocale dans des conditions spatiales variables
US10237413B2 (en) Methods for the encoding of participants in a conference
Schoenmaker et al. Better-ear rating based on glimpsing
JP2007096555A (ja) 音声会議システム、端末装置及びそれに用いる話者優先レベル制御方法並びにそのプログラム
Liang et al. Cat-astrophic effects of sudden interruptions on spatial auditory attention

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13789567

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13789567

Country of ref document: EP

Kind code of ref document: A1