SE1450295A1 - System and method of simultaneous interpretation - Google Patents

System and method of simultaneous interpretation Download PDF

Info

Publication number
SE1450295A1
Authority
SE
Sweden
Prior art keywords
sound interface
participant
sound
interface
interpretation system
Prior art date
Application number
SE1450295A
Other languages
Swedish (sv)
Inventor
Pär Stihl
Martin Hammarström
Original Assignee
Simultanex Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Simultanex Ab filed Critical Simultanex Ab
Priority to SE1450295A priority Critical patent/SE1450295A1/en
Priority to EP15765582.0A priority patent/EP3120534A4/en
Priority to PCT/SE2015/050284 priority patent/WO2015142249A2/en
Publication of SE1450295A1 publication Critical patent/SE1450295A1/en

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/42: Systems providing special services or facilities to subscribers
    • H04M 3/56: Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 1/00: Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B 1/06: Receivers
    • H04B 1/16: Circuits
    • H04B 1/30: Circuits for homodyne or synchrodyne receivers
    • H04B 2001/305: Circuits for homodyne or synchrodyne receivers using DC offset compensation techniques
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 2203/00: Aspects of automatic or semi-automatic exchanges
    • H04M 2203/20: Aspects of automatic or semi-automatic exchanges related to features of supplementary services
    • H04M 2203/2061: Language aspects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 2242/00: Special services or facilities
    • H04M 2242/12: Language recognition, selection or translation arrangements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 9/00: Arrangements for interconnection not involving centralised switching
    • H04M 9/08: Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
    • H04M 9/10: Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic, with switching of direction of transmission by voice frequency
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/15: Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Telephonic Communication Services (AREA)

Abstract

ABSTRACT The present disclosure relates to an interpretation system comprising a first sound interface (3), for instance for an interviewer, a second sound interface (5), for instance for an interviewee, and a third sound interface (7) for an interpreter. The interpretation system comprises a switching subsystem (13) which can be switched between a first setting, interviewer-interpreter-interviewee, and a second setting, interviewee-interpreter-interviewer. The system comprises a processing unit (15) which is devised to detect speech originating from the first and second sound interfaces, and to automatically control the switching subsystem depending on this detection. Intended for publication: Fig 1

Description

INTERPRETATION SYSTEM AND METHOD

Technical field

The present disclosure relates to an interpretation system with a first sound interface, for a first participant such as an interviewer, a second sound interface, for a second participant such as an interviewee, and a third sound interface for a third participant such as an interpreter. The system has a switching subsystem that can be switched between at least a first setting, where a voice signal generated at the first sound interface is connected primarily to the third sound interface and a voice signal generated at the third interface is connected primarily to the second sound interface, and a second setting, where a voice signal generated at the second sound interface is connected primarily to the third sound interface and a voice signal generated at the third sound interface is connected primarily to the first sound interface.
The disclosure further relates to a corresponding method.
Background

Such a system is described in EP-1545111-A, which provides for bidirectional simultaneous interpretation services in connection with an interpretation assistance device. An interpreter may be in a remote location, and the users may generate switch signals by pressing buttons. The switch signals are detected by the system, which directs sound to and from different users and the interpreter in such a way that unwanted sound is cancelled or attenuated.
One problem associated with such systems is how to make them user-friendly, so as to provide smooth and clear interview sessions.
Summary

An object of the present invention is therefore to provide an improved system that is reliable and easy to use.
This object is achieved in an interpretation system of the initially mentioned kind which is provided with a processing unit capable of detecting speech originating from the first and second sound interfaces, and of controlling the switching subsystem depending on this detection, such that the system switches between the first and second settings. This means that the system can operate automatically and adapt to the interview situation. The participants, particularly the interviewee, who may be inexperienced, need not control the system manually, and a clear and undisturbed interview may nevertheless be produced and optionally recorded. The interpreter also obtains a better working situation, as he or she may concentrate on translating in one direction at a time in an orderly manner. The interpreter need not be concerned with controlling the system.
The detection of the beginning and termination of speech may be carried out by comparing a parameter corresponding to the first-order derivative of the RMS of the AC component in a voice signal to a positive and a negative threshold, respectively. This has been shown to provide reliable detection also in cases where there is background noise. Such detection may be carried out by detecting and removing a DC component from a voice signal, resulting in an AC signal, rectifying and low-pass filtering the AC signal to obtain a detection signal, and comparing a first-order derivative of the detection signal to a positive and a negative threshold.
The interpretation system may be adapted to switch between an idle state and at least a first active state corresponding to the first setting, in which the first participant is active, and a second active state corresponding to the second setting, in which the second participant is active. This means that the system adapts naturally to an interview situation. The system may be adapted to remain in the first active state for a predetermined time after it is detected that the first participant stops talking, and may further be adapted to remain in the first active state for a predetermined time after it is detected that the third participant, interpreting the first participant, stops talking. This makes sure that the system does not switch in an undesired way because of the first participant pausing, e.g. to allow the interpreter to catch up in the interview process.

It is possible to gradually adjust the gain of an amplifier of at least one of the sound interfaces in response to a switching of the switching subsystem. This avoids disturbing clicks during switching.
The system may be adapted to provide a visual feedback signal in response to a switching of the switching subsystem, such as, for instance, changing the backlight colour of a display in the system.
The present disclosure further relates to a corresponding method. That method generally involves steps corresponding to the measures carried out by the different features of the system, and the method may be varied in correspondence with the system.
Brief description of the drawings

Fig 1 illustrates a system overview. A switching device is in an interviewer-to-interviewee setting.
Fig 2 shows the switching device in fig 1 in an interviewee-to-interviewer setting.
Fig 3 shows a flow chart of a process for detection of speech.
Figs 4a-4d schematically show waveforms corresponding to the first four steps of fig 3, and fig 4e illustrates an envelope, with a time frame larger than the waveforms in figs 4a-4d, where detection of speech takes place.
Fig 5 shows a flow chart for a switching procedure.
Detailed description

Fig 1 schematically illustrates an overview of a simultaneous interpretation system 1 according to the present disclosure. The system is intended for use in a situation where a first person, hereinafter called the interviewer, talks to a second person, hereinafter called the interviewee. This naming of the first and second persons is done to simplify the following disclosure and does not limit the scope of the present disclosure. In fact, the interviewer and the interviewee may have completely symmetrical roles, simply as persons talking to each other.
Typically, the system may be used in situations such as police, customs and immigration investigations, as well as healthcare procedures and other procedures.
The interviewer and the interviewee do not share a common language, or may at least not be capable of communicating in a common language with sufficient quality to ensure, depending on the situation, for instance legal certainty or medical safety.
Usually, the interviewer and the interviewee may be present in the same room, although this is not necessary. The interpreter may also be present, or may be available via a telephone line, a mobile telephone connection, a video conference system, or the like. In another example, the interpreter is present but placed e.g. in a neighbouring room, simply to maintain the interpreter's anonymity. The system may be capable of dealing with all such configurations by applying different settings, as will be discussed later. It should be noted that the interviewer or interviewee may be remote with regard to the system as well.

In summary, and as an example, the system may comprise a first 3, a second 5 and a third 7 sound interface, each providing a sound input 9, for feeding sound to a user loudspeaker or, more likely, headphones, and a sound output 11 providing an output from a user microphone.
The system may further comprise a switching subsystem 13 that directs the flow of sound in a path that is appropriate in the current situation. For instance, if the interviewer speaks, his or her microphone signal is transferred to the interpreter's headphones, and the signal from the latter's microphone is transferred to the interviewee's headphones. This path is achieved with the connection pattern indicated with black filled dots in the switching subsystem of fig 1. When the interviewer stops speaking and the interviewee begins to speak, this path is altered by the switching subsystem by changing the connection pattern as indicated with dashed rings, as will be discussed later.

The operation of the system is controlled by a processor unit 15, which may be a central processing unit, CPU, a digital signal processor, DSP, a dedicated application-specific integrated circuit, ASIC, or a collection of circuits, optionally comprising both analog and digital signal processing means, as will be discussed further later.
Additionally, the system may include I/O processing means 17, a user interface 19, and additional storage means 21, as will be discussed in more detail later.

In order to achieve good sound quality, the input and output of each sound interface may be provided with an amplifier 23 that the processor unit can adjust.
Sound interface

The sound interfaces may be adaptable to the configuration currently used. For instance, the system may allow, in one configuration, the interviewer and the interviewee to be connected directly to the system by means of a headset with earphones and a microphone, and the interpreter to be connected via a video conference system or a fixed telephone line. In another configuration, all three parties may be connected directly to the system via a headset. Other configurations, e.g. using cellphones, may be considered, and it has also been considered to use more than three sound interfaces. The latter may be useful e.g. to allow having two interpreters interpreting via an intermediate language, or interpreting only in one direction, from a first to a second language.
While unbalanced microphones can be used, it may be preferred to use balanced microphones, e.g. using XLR connectors, to provide improved sound quality and lower susceptibility to interference. TRS (tip/ring/sleeve) connectors may be used as well. Further, phantom powering may be used, which provides a power source if a condenser microphone is used. Balanced headphones may be used as well.
Other standardized line in/out connectors may be used to connect the sound interface to a videoconference system.

Each sound interface may also be connected to an internal mobile telephone system to connect one of the interfaces to e.g. a GSM-compliant cell phone, at least as an emergency solution. Other options are available for wireless connection of a sound interface to a headset or the like, such as a wireless LAN, Bluetooth, etc.
Regardless of which solution is used to connect the sound interfaces to interviewers, interviewees and interpreters, it may be useful to allow the processor unit to control the amplitude of the incoming and outgoing signals of each interface, which may be done by means of controlling each line's amplifier, as will be discussed later.
Switching subsystem

The switching subsystem may be accomplished with different means.
First, it should be noted that conveying both digital and analog sound signals has been considered. While employing electronics well known for decades, analog signal transmission may be considered, as the interpretation system 1 may be used in an environment with low interference and may be rather compact. Further, analog systems can sometimes provide superior sound quality, thanks to the absence of quantization noise, etc.
Needless to say, corresponding entirely digital systems may be employed as well. In fact, the switching subsystem may, as the skilled person understands, be realized with anything from a set of mechanical relays to a software routine executed in a processor, as long as it is capable of switching between different connection patterns that connect the microphone of one speaker to the headphones of another, as necessary in the circumstances and as decided adaptively by the system. The system may be integrated in an IP (Internet Protocol) telephony system using the session initiation protocol (SIP) and real-time transport protocol (RTP).
As mentioned, the configuration indicated with black filled dots in the switching subsystem of fig 1 is used when the interviewer speaks. The microphone signal from the interviewer's sound interface 3 is connected by the switching subsystem to the input/headphone line of the interpreter's sound interface 7, such that the interpreter hears the interviewer's voice. The signal from the interpreter's microphone is similarly transferred to the interviewee's headphones by the switching system.
When the interviewer stops speaking and the interviewee begins to speak, the path reverses, from the interviewee to the interpreter to the interviewer, by changing the connection pattern as indicated with dashed rings, and as indicated in fig 2.
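The two connection patterns of figs 1 and 2 can be sketched as simple microphone-to-headphone routing tables. The identifiers below are invented for illustration; the patent describes the routes themselves but assigns them no names:

```python
# Connection patterns of the switching subsystem, expressed as
# microphone -> headphone routing tables (names are illustrative).
SETTING_1 = {  # interviewer speaks (fig 1, black filled dots)
    "interviewer_mic": "interpreter_phones",
    "interpreter_mic": "interviewee_phones",
}
SETTING_2 = {  # interviewee speaks (fig 2, dashed rings)
    "interviewee_mic": "interpreter_phones",
    "interpreter_mic": "interviewer_phones",
}

def route(setting, mic):
    """Return the headphone line a microphone is connected to, if any."""
    return setting.get(mic)
```

Switching between the settings then amounts to swapping which table is consulted, whether the table is realized as relays, an analog matrix, or a software mixer.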
Other configurations are also possible. For instance, the system may be set in a conference mode, where each participant hears the others and can speak to the others. Also, even if indicated as such in fig 1, the connections need not switch between on and off. For instance, the interviewer may, in the configuration indicated in fig 2, hear the voice of the interviewee, at a low volume, together with the voice of the interpreter, at a higher volume. This may, even though the interviewer and interviewee may not share a common language, improve the mutual understanding, as the original speech, together with eye contact, body language, etc., can contribute nuances and the like.
Processor unit

The processor unit may, as mentioned earlier, be a CPU, a DSP or an application-specific circuit. It should further be noted that the switching subunit, the amplifiers, and at least parts of the sound interfaces, etc. may be integrated with the processing unit. Although the illustrated schematic configuration may be realised, it is primarily an example useful for understanding the overall disclosure of the system.
Speech detection

One way of triggering the switching from one configuration to another is to detect when one party, typically the interviewer or the interviewee, begins to speak. An example of a method for accomplishing this speech detection is described with reference to the flow chart of fig 3 and the corresponding waveforms shown in figs 4a-4d.

An analog voice signal is shown, very schematically, in fig 4a. This signal has an AC component and a DC component 27. In a first step, the DC component is detected 25, and in a second step the DC component is removed 29, leaving only the AC component in the signal, as illustrated in fig 4b. In a DSP this could be carried out with suitable subroutines, and in an analog system an operational amplifier or even a simple capacitor circuit may be used to remove the DC component directly.

In a third step, the signal is rectified 31, resulting in the waveform shown in fig 4c. This signal is in turn low-pass (LP) filtered 33 in a fourth step, resulting in the waveform of fig 4d. This resulting signal shows the instantaneous changes in voice signal amplitude, and in a fifth step a detection is carried out, which determines 35 whether the first-order derivative of the amplitude, ΔA/Δt, exceeds a predetermined positive or negative threshold, cf. fig 4e. If a positive threshold is exceeded, it is determined that speech has begun, and if a negative threshold is exceeded, it is determined that speech has ended. This to a great extent corresponds to comparing a parameter corresponding to the first-order derivative of the RMS of the AC component in a voice signal to a positive and a negative threshold, respectively. The system may react on this as will now be described.
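The five steps of fig 3 can be sketched in a few lines of code. The filter coefficient, thresholds, and signal values below are illustrative assumptions (the patent specifies no numeric values), and a single-pole IIR filter stands in for the low-pass stage:

```python
def detect_speech_events(samples, alpha=0.05, pos_thr=0.02, neg_thr=-0.02):
    """Return a list of ("start"|"stop", sample_index) events.

    Follows the flow chart of fig 3:
    1-2. estimate the DC component and remove it,
    3.   rectify the remaining AC signal,
    4.   low-pass filter (single-pole IIR) to obtain an envelope,
    5.   compare the envelope's first-order difference to a positive
         and a negative threshold to detect speech onset and offset.
    """
    dc = sum(samples) / len(samples)   # steps 1-2: DC estimate
    env_prev = 0.0
    speaking = False
    events = []
    for i, s in enumerate(samples):
        rect = abs(s - dc)                           # step 3: remove DC, rectify
        env = env_prev + alpha * (rect - env_prev)   # step 4: low-pass filter
        d = env - env_prev                           # step 5: first-order derivative
        if not speaking and d > pos_thr:
            events.append(("start", i))
            speaking = True
        elif speaking and d < neg_thr:
            events.append(("stop", i))
            speaking = False
        env_prev = env
    return events
```

Feeding the detector a signal that is silent, then loud, then silent again yields a start event at the onset and a stop event at the offset, mirroring figs 4a-4e.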
Switching procedure

The disclosed features allow for automatic switching between interviewer and interviewee, and vice versa. This implies an improvement, as a conversation can flow much more freely than if manual control, e.g. by the interviewer, were used. Needless to say, it is possible to override this automatic switching and carry out such manual control if needed in a specific interview situation.
Further, as compared to regular conference calls, the speech quality will be much improved, as one party (interviewer/interviewee) at a time talks. This is particularly useful if the conversation is recorded, e.g. as evidence. In that case it may also be possible to analyse at a later stage how the interpretation affects e.g. questions raised and answers produced, in order to achieve higher legal certainty.
The system remains in one connection pattern, e.g. interviewee to interviewer, as long as the interviewee speaks.
When the system detects, for instance as described above, that the interviewee stops talking, the system may wait for a short waiting time and then switch to the reversed connection pattern in order to allow the interviewer to talk. The system may then produce optical and/or acoustic feedback to the users to indicate that switching has taken place and that the previously silent party can begin to talk. Different feedback features are discussed later.

If, on the other hand, the interviewee resumes talking during the waiting time, the system may remain in the first connection pattern until the interviewee is ready.
This procedure can be summarized in an example flow chart as shown in fig 5. Starting out from a state where the system is idle 37, it is continuously, or at regular intervals, tested whether interviewer speech is detected 39 or whether interviewee speech is detected 41. If, for instance, interviewer speech is detected, the system switches 43 to an interviewer-translator-interviewee pattern as described before, and provides feedback via the user interface, as will be discussed, such that the interviewer and interviewee become aware of the switching. As the processor unit (cf. 15 in fig 1) may be able to control the amplifiers (cf. 23 in fig 1) for the sound inputs and outputs of each interface, it is possible to make the switching smooth by allowing the amplifier gains to ramp up and down rather than just switching on and off. This means that uncomfortable and disturbing clicking in the switching transitions can be avoided.
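A minimal sketch of such a gain ramp, applied here in software as a linear crossfade between the channel being switched off and the one being switched on; the ramp length and linear shape are assumptions, since the patent only says the gains ramp gradually:

```python
def crossfade(off_chan, on_chan, ramp_len):
    """Blend two equal-length sample lists: the outgoing channel's gain
    ramps linearly from 1 to 0 while the incoming channel's ramps from
    0 to 1 over ramp_len samples, avoiding the click a hard switch
    would produce. After the ramp, only the incoming channel is heard."""
    assert len(off_chan) == len(on_chan) >= ramp_len > 1
    mixed = []
    for i, (a, b) in enumerate(zip(off_chan, on_chan)):
        g = min(i / (ramp_len - 1), 1.0)   # gain rises 0 -> 1, then holds
        mixed.append((1.0 - g) * a + g * b)
    return mixed
```

In the actual system the same effect would be obtained by ramping the hardware amplifiers 23 under processor control rather than mixing samples.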
The system is thus in an interviewer-active state 45, where preferably any voice signals from the interviewee are shut down or at least substantially attenuated. If the interviewee attempts to talk, a feedback signal, e.g. optical or acoustic, may further be provided to the interviewee to inform the interviewee that he should wait.

In the interviewer-active state, the interviewer may thus talk for as long as needed without being interrupted. In the interviewer-active state 45, it is regularly tested 47 whether the interviewer has become inactive, as discussed before. If the interviewer is inactive for a predetermined time period T, where T is typically in the range 0.5-5 s and preferably about 1 s, it is assumed that the interviewer has stopped talking. However, it may be the case that the interpreter lags a few seconds. It is therefore optionally also tested 49 whether the interpreter becomes inactive for a time period that may also be T, even if this is not necessary. If this does not happen, it is assumed that the interviewer has begun talking again, and the system remains in the interviewer-active state 45. If the interpreter however is silent long enough, the system returns to the idle state 37, and this is indicated by the user interface as feedback to the participants.
As illustrated in fig 5, the system may operate in the same way if, in the idle state 37, it is determined that the interviewee begins to talk, and the system enters an interviewee-active state 51. In this way, an interview situation can be handled very smoothly, and can be readily dealt with by the interpreter.
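The flow chart of fig 5 can be condensed into a small state machine. This is a simplified sketch: the state names and default hold time are assumptions (the patent suggests T in the range 0.5-5 s, preferably about 1 s), and the separate interpreter-lag test 49 is folded into a single "stay active while the active party or the interpreter talks" rule:

```python
IDLE = "idle"                              # state 37
INTERVIEWER_ACTIVE = "interviewer-active"  # state 45
INTERVIEWEE_ACTIVE = "interviewee-active"  # state 51

class InterpretationSwitch:
    def __init__(self, hold_time=1.0):
        self.hold = hold_time      # time T the system waits before going idle
        self.state = IDLE
        self.silent_since = None   # moment all relevant parties fell silent

    def update(self, t, interviewer_talks, interviewee_talks, interpreter_talks):
        """Advance the state machine; t is the current time in seconds."""
        if self.state == IDLE:
            if interviewer_talks:            # tests 39 / 41
                self.state = INTERVIEWER_ACTIVE
            elif interviewee_talks:
                self.state = INTERVIEWEE_ACTIVE
            self.silent_since = None
        else:
            active_talks = (interviewer_talks
                            if self.state == INTERVIEWER_ACTIVE
                            else interviewee_talks)
            if active_talks or interpreter_talks:
                self.silent_since = None     # speech resumed: remain active
            elif self.silent_since is None:
                self.silent_since = t        # silence begins: start hold timer
            elif t - self.silent_since >= self.hold:
                self.state = IDLE            # silent for T: return to idle
        return self.state
```

Note how a talking interpreter keeps the system in the active state, matching the observation that the interpreter may lag a few seconds behind the speaker.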
User interface

Again with reference to fig 1, the user interface 19 may typically include a keyboard 53, a screen 55, such as an LCD screen, and some indicator lamps 57. The keyboard 53 may be used to select different settings, such as the above-described automatic switching or the previously mentioned conference mode. It can also be used to manually control switching if needed.
Feedback to the users regarding which state the system is in (e.g. interviewer-active or interviewee-active, as described above) may be provided in different ways, e.g. using the screen 55 or the indicator lamps 57. One efficient way of giving feedback is to use the screen's backlight colour. For instance, in the interviewer-active mode the backlight may be red, while it is green in the idle mode. Other variations of course exist.

A user interface may also be useful for choosing the language e.g. the interviewee wishes to speak. For instance, a pressure-sensitive screen may initially show a number of nations' flags, each representing a specific language. The interviewee may then tap a desired flag/language, and a suitable interpreter is connected to the system accordingly.
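The flag-tap selection can be sketched as a simple lookup. The flag codes, language names, and interpreter records below are hypothetical; the patent describes the interaction but no data model:

```python
# Hypothetical flag-to-language lookup for the touch-screen menu.
LANGUAGE_BY_FLAG = {"SE": "Swedish", "FR": "French", "AR": "Arabic"}

def select_interpreter(flag_code, available_interpreters):
    """Return the first available interpreter for the tapped flag's
    language, or None if no interpreter for that language is free."""
    language = LANGUAGE_BY_FLAG.get(flag_code)
    return next((p for p in available_interpreters
                 if p["language"] == language), None)
```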
I/O subsystem and memory

The I/O subsystem 17 may connect the system to other functions. For instance, it is possible to provide additional feedback lights on each user's headset or the like to enhance the feedback function. Further, connections to storage solutions such as a hard drive, etc. may be provided to store interview sound data produced during an interview. It is possible to store voice data in a number of separate channels.
Additionally, it is possible to provide local storage, such as indicated with an SD card.
The present disclosure is not limited by the examples given above, andmay be varied in different ways within the scope of the appended claims.

Claims (10)

1. An interpretation system comprising a first sound interface (3), for a first participant such as an interviewer, a second sound interface (5), for a second participant such as an interviewee, and a third sound interface (7) for a third participant such as an interpreter, wherein the interpretation system comprises a switching subsystem (13) that can be switched between at least a first setting (45), where a voice signal generated at the first sound interface (3) is connected primarily to the third sound interface (7) and a voice signal generated at the third interface is connected primarily to the second sound interface (5), and a second setting (51), where a voice signal generated at the second sound interface (5) is connected primarily to the third sound interface (7) and a voice signal generated at the third sound interface (7) is connected primarily to the first sound interface (3), characterised by a processing unit (15) which is devised to detect speech originating from the first and second sound interfaces, and to control the switching subsystem depending on this detection, such that the system switches between the first and second settings.
2. An interpretation system according to claim 1, which is adapted to detect beginning and termination of speech by comparing a parameter corresponding to the first-order derivative of the RMS of the AC component in a voice signal to a positive and a negative threshold, respectively.
3. An interpretation system according to claim 2, which is adapted to detect beginning and termination of speech by:
- detecting and removing a DC component from a voice signal, resulting in an AC signal,
- rectifying and low-pass filtering the AC signal to obtain a detection signal, and
- comparing a first-order derivative of the detection signal to a positive and a negative threshold.
4. An interpretation system according to any of the preceding claims, which is adapted to switch between an idle state and at least a first active state corresponding to the first setting, in which the first participant is active, and a second active state corresponding to the second setting, wherein the second participant is active.
5. An interpretation system according to claim 4, which is adapted to remain in the first active state for a predetermined time after it is detected that the first participant stops talking.
6. An interpretation system according to claim 5, which is further adapted to remain in the first active state for a predetermined time after it is detected that the third participant, interpreting the first participant, stops talking.
7. An interpretation system according to any of the preceding claims, which is adapted to gradually adjust the gain of an amplifier of at least one of the sound interfaces in response to a switching of the switching subsystem.
8. An interpretation system according to any of the preceding claims, which is adapted to provide a visual feedback signal in response to a switching of the switching subsystem.
9. An interpretation system according to claim 8, wherein the visual feedback signal includes changing the backlight colour of a display.
10. A method for controlling an interpretation system, the system comprising a first sound interface (3), for a first participant such as an interviewer, a second sound interface (5), for a second participant such as an interviewee, and a third sound interface (7) for a third participant such as an interpreter, wherein the interpretation system comprises a switching subsystem (13) that can be switched between at least a first setting (45), where a voice signal generated at the first sound interface (3) is connected primarily to the third sound interface (7) and a voice signal generated at the third interface is connected primarily to the second sound interface (5), and a second setting (51), where a voice signal generated at the second sound interface (5) is connected primarily to the third sound interface (7) and a voice signal generated at the third sound interface (7) is connected primarily to the first sound interface (3), characterised by detecting speech originating from the first and second sound interfaces, and controlling the switching subsystem depending on this detection, such that the system switches between the first and second settings.
SE1450295A 2014-03-17 2014-03-17 System and method of simultaneous interpretation SE1450295A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SE1450295A SE1450295A1 (en) 2014-03-17 2014-03-17 System and method of simultaneous interpretation
EP15765582.0A EP3120534A4 (en) 2014-03-17 2015-03-13 Interpretation system and method
PCT/SE2015/050284 WO2015142249A2 (en) 2014-03-17 2015-03-13 Interpretation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE1450295A SE1450295A1 (en) 2014-03-17 2014-03-17 System and method of simultaneous interpretation

Publications (1)

Publication Number Publication Date
SE1450295A1 true SE1450295A1 (en) 2015-09-18

Family

ID=54145455

Family Applications (1)

Application Number Title Priority Date Filing Date
SE1450295A SE1450295A1 (en) 2014-03-17 2014-03-17 System and method of simultaneous interpretation

Country Status (3)

Country Link
EP (1) EP3120534A4 (en)
SE (1) SE1450295A1 (en)
WO (1) WO2015142249A2 (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5867574A (en) * 1997-05-19 1999-02-02 Lucent Technologies Inc. Voice activity detection system and method
US20030088622A1 (en) * 2001-11-04 2003-05-08 Jenq-Neng Hwang Efficient and robust adaptive algorithm for silence detection in real-time conferencing
US20060126821A1 (en) * 2002-09-27 2006-06-15 Nozomu Sahashi Telephone interpretation assistance device and telephone interpretation system using the same
CN1685696A (en) * 2002-09-27 2005-10-19 银河网路股份有限公司 Telephone interpretation system
WO2004030328A1 (en) * 2002-09-27 2004-04-08 Ginganet Corporation Video telephone interpretation system and video telephone interpretation method
WO2005048574A1 (en) * 2003-11-11 2005-05-26 Matech, Inc. Automatic-switching wireless communication device
CN1937664B (en) * 2006-09-30 2010-11-10 华为技术有限公司 System and method for realizing multi-language conference
WO2009073194A1 * 2007-12-03 2009-06-11 Samuel Joseph Wald System and method for establishing a conference in two or more different languages
GB2469329A (en) * 2009-04-09 2010-10-13 Webinterpret Sas Combining an interpreted voice signal with the original voice signal at a sound level lower than the original sound level before sending to the other user

Also Published As

Publication number Publication date
EP3120534A4 (en) 2017-10-25
WO2015142249A2 (en) 2015-09-24
WO2015142249A3 (en) 2015-11-12
EP3120534A2 (en) 2017-01-25

Similar Documents

Publication Publication Date Title
US10499136B2 (en) Providing isolation from distractions
US10560774B2 (en) Headset mode selection
US9253303B2 (en) Signal processing apparatus and storage medium
US9380150B1 (en) Methods and devices for automatic volume control of a far-end voice signal provided to a captioning communication service
WO2018148762A3 (en) Method for user voice activity detection in a communication assembly, communication assembly thereof
EP2815566B1 (en) Audio signal processing in a communication system
US9661139B2 (en) Conversation detection in an ambient telephony system
US11089541B2 (en) Managing communication sessions with respect to multiple transport media
US20070099651A1 (en) Hold on telephony feature
US20140257799A1 (en) Shout mitigating communication device
US20180269842A1 (en) Volume-dependent automatic gain control
US20120140918A1 (en) System and method for echo reduction in audio and video telecommunications over a network
US20090097625A1 (en) Method of and System for Controlling Conference Calls
GB2492103A (en) Interrupting a Multi-party teleconference call in favour of an incoming call and combining teleconference call audio streams using a mixing mode
SE1450295A1 (en) System and method of simultaneous interpretation
US10483933B2 (en) Amplification adjustment in communication devices
EP3900315B1 (en) Microphone control based on speech direction
CN104301564A (en) Intelligent conference telephone with mouth shape identification
DE3426815A1 (en) Level adjustment for a telephone station with a hands-free facility
CN204231481U (en) A kind of intelligent meeting telephone set with nozzle type identification
US11832062B1 (en) Neural-network based denoising of audio signals received by an ear-worn device controlled based on activation of a user input device on the ear-worn device
JPH11275243A (en) Loud speaker type interphone system
JP2010034815A (en) Sound output device and communication system
JPH0435144A (en) Telephone system
JPH10290282A (en) Hands-free control circuit

Legal Events

Date Code Title Description
NAV Patent application has lapsed