GB2569650A - Managing streamed audio communication sessions - Google Patents


Info

Publication number
GB2569650A
Authority
GB
United Kingdom
Prior art keywords
audio
participant
make
participants
beginning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1721775.3A
Other versions
GB2569650B (en)
GB201721775D0 (en)
Inventor
Kegel Ian
Bailey Karis
Reed Martin
Hughes Peter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Telecommunications PLC
Original Assignee
British Telecommunications PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Telecommunications PLC filed Critical British Telecommunications PLC
Priority to GB1721775.3A
Publication of GB201721775D0
Publication of GB2569650A
Application granted
Publication of GB2569650B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04M TELEPHONIC COMMUNICATION
          • H04M 3/00 Automatic or semi-automatic exchanges
            • H04M 3/42 Systems providing special services or facilities to subscribers
              • H04M 3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
                • H04M 3/568 Audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
                  • H04M 3/569 Using the instant speaker's algorithm
          • H04M 2203/00 Aspects of automatic or semi-automatic exchanges
            • H04M 2203/30 Aspects related to audio recordings in general
              • H04M 2203/301 Management of recordings
              • H04M 2203/303 Marking
            • H04M 2203/50 Aspects related to audio conference
              • H04M 2203/5072 Multiple active speakers
    • G PHYSICS
      • G10 MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
            • G10L 25/78 Detection of presence or absence of voice signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A multi-party streamed audio communication system reduces instances of cross-talk and double-talk (caused, e.g., by delay/latency) by detecting the beginning of speech or voice activity (49a, fig. 4) from one participant 50a and providing a signal to a conference session controller 72 so that the other user devices 50b-d indicate that the first participant has begun to make an audio contribution (e.g. by playing a pre-stored audible sound such as an “err” from that participant). Slightly later contributions from other participants may be suppressed so that participants do not talk over one another, and phatic or filler expressions or speech patterns (“err”, “umm”, etc.) may be stored locally (48a & 48ba, fig. 4) to further assist the arbitration decision and provide audible indicators.

Description

The present invention relates to streamed audio communication between participants, and in particular to methods, apparatus and systems for managing digitally-streamed audio communication sessions such as audio-calls and audio-conferences between two or more participants using user-devices or user terminals such as telephones or computing devices with audio (and possibly video) input and output modules.
Background
Conversation Analysis (CA) is a branch of linguistics which studies the way humans interact. Since the invention is based on an understanding of interactions between participants in conversations, and how the quality of the interactions is degraded by transmission delay, we first note some of the knowledge from Conversation Analysis.
In a free conversation the organisation of the conversation, in terms of who speaks when, is referred to as ‘turn-taking’. This is implicitly negotiated by a multitude of verbal cues within the conversation and also by non-verbal cues such as physical motion and eye contact. This behaviour has been extensively studied in the discipline of Conversation Analysis and leads to useful concepts such as:
- The Turn Constructional Unit (TCU), which is the fundamental segment of speech in a conversation - essentially a piece of speech that constitutes an entire ‘turn’.
- The Transition Relevance Place (TRP), which indicates where a turn or floor exchange can take place between speakers. TCUs are separated by TRPs.
These enable what is referred to as a basic turn-taking process to take place, a simplified example of which is shown in Figure 1, which will be discussed in more detail later. Briefly, in this example, as a TCU comes to an end the next talker is essentially determined by the next participant to start talking. Changes of talker take place at a TRP, though if no other participant starts talking, the original talker can continue after the TCU. This decision process has been observed to lead generally to the following conference characteristics:
(i) Overwhelmingly, only one participant talks at a time.
(ii) Occurrences of more than one talker at a time are common, but brief.
(iii) Transitions from one turn to the next - with no gap or overlap - are common.
(iv) The most frequent gaps between talkers are in the region of 200ms. Gaps of more than 1 second are rare.
(v) It takes talkers at least 600ms to plan a one-word utterance and somewhat longer for sentences with multiple words. Combining this figure with the typical gap length implies that listeners are generally very good at predicting an approaching TRP.
Significantly, it is noted here that transmission delay on communication links between the respective conference participants can severely disrupt the turn-taking process because the identity of the next participant to start talking is disrupted by the delay.
In particular, delay in conference systems can cause significant problems to talkers compared to normal face-to-face meetings. One example of such a problem is so-called “double-talk,” where two (or more) people start to speak at the same time (i.e. each at the start of a turn constructional unit), each initially being unaware (due to transmission or other delay) that the other has also started to speak. Due to the delay, they only become aware of this problem after they have started talking, after which confusion may arise, sometimes with both talkers stopping, then starting again (sometimes both talking over each other again). For this and other reasons, designers generally try to reduce delay - there are many prior techniques attempting to address this issue (the reduction of delay).
An example of the impact of delay is shown in Figure 2, which shows three possible outcomes (noting that there may be many other possible outcomes) in a three-way conference arising from a situation where person or participant B and person/participant C respond to a question or comment from (or otherwise wish to talk after an audio contribution from) person/participant A. In the absence of significant delay or significant effects thereof, the next person to ‘have the floor’ is usually the first person to start talking. This is in line with normal conversation turn-taking. Unfortunately if there is delay present none of the participants know who started talking first, resulting in confusion.
In Figure 2, the question, comment or other such audio contribution from Person A is represented in each section by a zigzag-patterned block 20A. In the top section of the figure, representing occurrences at Person A's terminal, block 20A appears above the horizontal axis (which represents time), indicating that it is the audio contribution of Person A when talking. In the lower three sections of the figure, which represent occurrences at the terminals of Persons B and C in each of three scenarios or cases, block 20A appears below the axis, indicating that it is the audio contribution of Person A when this is heard by Persons B and C, who are listening. It will be noted that the audio contribution represented by zigzag-patterned block 20A is heard (by Persons B and C) a short time after it is spoken by Person A, due to any delay (transmission delay, processing delay, etc.).
The lower three sections of the figure represent possible subsequent occurrences at the terminals of Persons B and C in the following three cases, each of which may occur if Persons B and C both wish to start talking (or take the floor) after they have heard (or think they have heard) the end of Person A's audio contribution. In these sections, audio contributions from Person B are represented in each instance by diagonally-patterned blocks 20B, and audio contributions from Person C are represented in each instance by brick-patterned blocks 20C.
Case 1: Both B and C start to talk, then both stop talking when they realise the other is already talking (which they will each realise shortly after the other person actually started talking, due to any delay), leading to a period of confusion.
Case 2: B and C talk. B continues but C stops on realising that B is already talking. (NB This is the least disruptive of the three cases.)
Case 3: Both B and C talk. Both continue (possibly still not realising the other is already talking, or possibly because they each feel that the other is interrupting them), resulting in a period of double-talk (or possibly multi-talk if A also starts talking again at about the same time, or if there are other parties involved), and reduced intelligibility.
Whichever situation occurs, there will generally be an impact on the conversation flow, which may be significant, leading to confusion, frustration, or wasted time, or possibly more serious, if one party thinks another is interrupting them, ignoring them or otherwise being rude.
Even using the best delay reduction techniques, there is often considerable delay in multiparty conferencing systems, or at least sufficient delay that problems such as the above can occur. This may simply be because there is a finite time required for data indicative of respective audio contributions to be encoded ready for transmission to a remote terminal, transmitted via a communications network (and possibly a conference bridge) to that remote terminal, then decoded and played to a listener at that remote terminal. Delay may also be due to intermediate processing (e.g. at a conference bridge (discussed later), or due to the provision of effects such as spatialized audio) or may be due to the geographical placement of a conference bridge, which may be sufficiently remotely located as to cause delay to be incurred even if the participants interacting via it are geographically close to each other.
The end-to-end delay in a telecommunications system, often referred to as latency, or mouth-to-ear delay, can never be zero. In a modern digital system the latency is made up of a combination of delays (e.g. propagation delay, buffering, processing delay, etc.) and can typically be anywhere between 30ms and 1s. Unfortunately, excessive latency can have a devastating impact on conversation quality. This is discussed in ITU-T recommendation G.114, which states the following: “Although a few applications may be slightly affected by end-to-end (i.e., mouth-to-ear in the case of speech) delays of less than 150ms, if delays can be kept below this figure, most applications, both speech and non-speech, will experience essentially transparent interactivity”, and “...delays above 400ms are unacceptable for general network planning purposes”. Details of the impact of delays on teleconference systems are described in ITU-T P.1305.
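To make the latency budget concrete, a short illustrative sketch follows. The component values and the classify_latency helper are assumptions made purely for illustration; only the 150ms and 400ms planning figures come from the G.114 text quoted above.

```python
# Illustrative mouth-to-ear latency budget; the component values are examples, not from the patent.
components_ms = {
    "encoding": 20,           # codec frame plus look-ahead
    "packetisation": 20,      # assembling packets for transmission
    "propagation": 40,        # network transit time
    "jitter_buffer": 60,      # de-jitter buffering at the receiver
    "bridge_processing": 30,  # mixing / spatialisation at a conference bridge
    "decoding_playout": 10,
}

def classify_latency(total_ms: float) -> str:
    """Rough interpretation of the ITU-T G.114 planning figures quoted above."""
    if total_ms < 150:
        return "essentially transparent interactivity for most applications"
    if total_ms <= 400:
        return "noticeable impact; acceptable only with care in network planning"
    return "unacceptable for general network planning purposes"

total = sum(components_ms.values())
print(f"mouth-to-ear latency ~ {total} ms -> {classify_latency(total)}")
```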
Referring briefly to prior patent documents, there are many which relate to techniques aiming to keep the latency as low as possible, examples of which are as follows:
United States patent application US2015/0172202 (Zealey et al), entitled Reduced System Latency for Dominant Speaker, relates to digital media systems, and in particular to digital media systems having data processing that aims to mitigate signal latency.
US2002/0118640 A1 (Oberman et al), entitled Dynamic Selection of Lowest Latency Path in a Network Switch, relates to systems and methods for routing data packets through network switches, with a view to routing packets efficiently whether a destination output port is available or is unavailable due to other packet traffic.
US2014/0022956 (Ramalho et al), entitled System and Method for Improving Audio Quality During Web Conferences Over Low-Speed Network Connections, relates to software web-conferencing.
These techniques focus on optimising the mouth-to-ear transmission path to keep the latency low.
As discussed above, delay in two-party or larger multi-party speech communication can be a problem, often causing confusion in talkers whereby they “talk over” each other. This delay is often caused by intermediary processing systems (such as a conference bridge), but may simply be caused by transmission delay. While reducing or minimising delay is generally beneficial, it may not be possible to do so enough to prevent it from causing problems in relation to audio communication sessions.
Summary of the Invention
Preferred embodiments relate to techniques for reducing the impact of delay/latency in voice calls or other two-party or multi-party audio-conferences. Such delay/latency may be due to intermediary processing (e.g. processing and re-distribution at a conference bridge, or for effects such as spatialized audio), or may be due to geographical issues, for example.
In particular, preferred embodiments concern the problem of “double-talk”, where two (or more) participants start talking at about the same time, each initially unaware (due to delay) that another has also just started to talk. They may then become aware that another has also just started talking, leading to confusion with (for example) both/each stopping, re-starting (sometimes talking over each other again), etc.
As discussed above, techniques exist which attempt to reduce delay. Instead of, or as well as, this, preferred embodiments act to reduce the effect of such delay by causing participants to hear pre-emptive audio (and possibly visual) cues in advance of the start of any reproduction of other participants' current audio contributions. The pre-emptive cues are triggered or sent via a fast/direct signalling channel which should be faster than that used for the actual audio signals (e.g. avoiding audio processing in a conference bridge or elsewhere). By triggering and causing other participants to hear such pre-emptive sounds (generally pre-stored at those participants' audio terminals), indicative of a first talker having started to talk before those other participants started talking, and before the actual audio data arrives from the first talker, other participants may be prevented from thinking that they are the first talker and therefore from starting or continuing to talk, thereby reducing double-talk.
According to a first aspect of the invention, there is provided a method of managing a streamed audio communication session between a plurality of user devices, the user devices being configured to send streamed data indicative of received audio contributions from respective participants in a multiple-participant audio communication session via a communications network to one or more other user devices for conversion to audio representations of said received audio contributions for one or more respective other participants; the method comprising:
monitoring audio contributions from respective participants, and in response to detection therefrom that a first participant is beginning to make an audio contribution at a first one of said user devices, providing a signal for at least one of said other user devices indicating that the first participant is beginning to make an audio contribution;
in response to receipt at said at least one other user device of a signal indicating that the first participant is beginning to make an audio contribution, triggering a predetermined audible indication for a respective participant at said at least one other user device indicating that said first participant is beginning to make an audio contribution.
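Purely as an illustration of how these steps fit together (the class and method names below, such as UserDevice and on_speech_onset, are hypothetical and not part of the claimed method), a minimal sketch might look as follows:

```python
from dataclasses import dataclass, field

@dataclass
class UserDevice:
    name: str
    stored_indications: dict = field(default_factory=dict)  # other participant -> pre-stored sound
    session: "Session" = None

    def on_speech_onset(self):
        """Local monitoring has detected that this participant is beginning a contribution."""
        self.session.signal_contribution_start(self)

    def on_contribution_start_signal(self, first_participant: str):
        """Signal received ahead of the streamed audio: play the pre-stored audible indication."""
        sound = self.stored_indications.get(first_participant, "<generic start-of-speech sound>")
        print(f"{self.name}: playing pre-emptive indication for {first_participant}: {sound}")

@dataclass
class Session:
    devices: list

    def signal_contribution_start(self, first_device: UserDevice):
        # Sent over the fast signalling path, bypassing audio buffering/processing.
        for device in self.devices:
            if device is not first_device:
                device.on_contribution_start_signal(first_device.name)

# Usage sketch
a, b = UserDevice("A"), UserDevice("B", stored_indications={"A": "'Err...' (recorded from A)"})
session = Session([a, b])
a.session = b.session = session
a.on_speech_onset()   # B hears a pre-stored "Err..." before A's streamed audio arrives
```

In this sketch the signal travels over whatever fast path is available, while the streamed audio follows its usual, slower route; the pre-stored indication fills the gap between the two.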
The audio communication session would generally involve data indicative of received audio contributions being streamed digitally between user devices and reproduced as audio contributions at user devices for the respective participants, in order to allow them to hear the audio contributions made by other participants (and possibly by themselves as well).
According to preferred embodiments, the monitoring of audio contributions from a respective participant may be performed at the user device of said participant. In alternative embodiments, however, audio contributions from respective participants may be forwarded to a session control device such that the monitoring of audio contributions from the respective participants may be performed at the session control device.
According to preferred embodiments, the predetermined audible indication for a respective participant at said at least one other user device indicating that said first participant is beginning to make an audio contribution may be an audio representation of data previously stored at said at least one other user device. In such embodiments, the data previously stored at said at least one other user device may be a representation of a sound indicative of a participant beginning to make an audio contribution, and/or may be a representation of a sound indicative of said first participant beginning to make an audio contribution, and/or may be a representation of a sound previously received from said first participant at said first user device. Such data may be data determined in dependence on analysis of previously-received audio contributions at said first user device.
According to preferred embodiments, the audio communication session may be managed by a session control device via which the user devices are configured to send said streamed data indicative of received audio contributions for forwarding to one or more other user devices. Such a session control device may comprise a communication bridge, conference controller or other such control device capable of performing audio and other such analysis of said streamed data, allowing comparison, arbitration and other more advanced functionality to be performed in respect thereof.
In such embodiments with a session control device, the session control device may be configured to identify, in response to respective detections that respective participants are beginning to make respective audio contributions, which of said respective participants was the earlier or earliest to begin to make an audio contribution, and thereby perform arbitration, determining which participant started talking first, or otherwise determining which participant's audio contributions are to be given priority.
Such a session control device may be configured to provide signals, for at least one user device other than that of the participant who was the earlier or earliest to begin to make an audio contribution, indicating that the participant who was the earlier or earliest to begin to make an audio contribution is beginning to make an audio contribution. By virtue of this, pre-emptive sounds can be sent to participants other than the participant who started talking first, or to participants other than the participant to whom priority is to be given.
Such a session control device may be configured temporarily to suppress signals, for the user device of the participant who was the earlier or earliest to begin to make an audio contribution, indicating that any other participant is beginning to make an audio contribution, and/or temporarily to suppress audio representations of audio contributions from any other participant from being provided for the participant who was the earlier or earliest to begin to make an audio contribution. By suppressing such pre-emptive noises (and possibly actual speech from participants other than the first) from being relayed to the participant who started talking first, it can be ensured that such noises are not perceived as interruptions, and do not disturb the participant who started talking first.
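A minimal sketch of this arbitration and suppression behaviour is given below, assuming that each onset notification carries a capture timestamp; the SessionController class, its method names and the half-second arbitration window are illustrative assumptions rather than details taken from the patent.

```python
import time

class SessionController:
    """Decides which participant 'has the floor' when several onsets arrive close together."""

    def __init__(self, arbitration_window_s: float = 0.5):
        self.window = arbitration_window_s
        self.onsets = []  # list of (timestamp, participant)

    def report_onset(self, participant: str, timestamp: float) -> None:
        self.onsets.append((timestamp, participant))

    def arbitrate(self):
        """Return (first_talker, signals) once the arbitration window has passed."""
        if not self.onsets:
            return None, {}
        _, first_talker = min(self.onsets)
        signals = {}
        for _, participant in self.onsets:
            if participant == first_talker:
                # Suppress indications (and, temporarily, audio) of the other would-be talkers
                # so the first talker is not discouraged from continuing.
                signals[participant] = "suppress_other_onsets"
            else:
                signals[participant] = f"play_preemptive_sound:{first_talker}"
        self.onsets.clear()
        return first_talker, signals

controller = SessionController()
now = time.time()
controller.report_onset("B", now + 0.12)
controller.report_onset("A", now + 0.05)   # A actually started first
print(controller.arbitrate())
# -> ('A', {'B': 'play_preemptive_sound:A', 'A': 'suppress_other_onsets'})
```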
According to preferred embodiments, the audio communication session may be managed by a session control device to which respective user devices are configured to send messages indicative of detections at said user devices that respective participants are beginning to make audio contributions, and from which messages are provided for other user devices indicative of respective participants having begun to make audio contributions. Such a session control device may be configured to determine which of a plurality of participants identified as beginning to make audio contributions is to be prioritised, and to provide messages for one or more other participants in dependence on such determinations. Such a session control device may be co-located or otherwise associated with a session control device such as that discussed above, via which the user devices are configured to send said streamed data indicative of received audio contributions for forwarding to one or more other user devices.
According to a second aspect of the invention, there is provided apparatus for managing a streamed audio communication session between a plurality of user devices, the user devices being configured to send streamed data indicative of received audio contributions from respective participants in a multiple-participant audio communication session via a communications network to one or more other user devices for conversion to audio representations of said received audio contributions for one or more respective other participants; the apparatus comprising one or more processors operable to:
monitor audio contributions from respective participants, and in response to detection therefrom that a first participant is beginning to make an audio contribution at a first one of said user devices, to provide a signal for at least one of said other user devices indicating that the first participant is beginning to make an audio contribution; and in response to receipt at said at least one other user device of a signal indicating that the first participant is beginning to make an audio contribution, to trigger a predetermined audible indication for a respective participant at said at least one other user device indicating that said first participant is beginning to make an audio contribution.
The various options and preferred embodiments referred to above in relation to the first aspect are also applicable in relation to the second aspect.
Preferred embodiments may incorporate small portions of sound, such as pre-recorded or synthetic speech sounds, or phatic/filler speech sounds, into the audio stream to be played to participants. The incorporation of such small portions of sound is generally triggered via a channel that is faster than that via which the main or live audio data is being conveyed (e.g. a channel faster than the intermediary processing system, conference bridge and/or transmission channel used for the main or live audio data), such that the incorporated sounds pre-empt, and are heard slightly in advance of, the arrival and reproduction for listening participants of the actual utterances of other (talking) participants. By virtue of this, the effective delay may essentially be concealed by such pre-emptive sounds. This is notably different from the prior techniques, which aim simply to reduce delay, and it may be of particular benefit in situations where it is not possible or realistic to remove delay or reduce it further. Listeners may subtly be made aware that another talker has started talking, making them less likely to interrupt that talker by starting or continuing a new turn constructional unit at the same time. Overall, this may improve the quality of two-party communication sessions or multi-party conferences.
Preferred embodiments may arbitrate between the starts of respective participants' attempted audio contributions, determining (at times when there is potential double-talk) which talker was the first talker and immediately triggering a pre-emptive sound to be heard at each other participant's terminal indicative of the first talker starting to talk, while also preventing that first talker from hearing any pre-emptive sounds indicative of any other participants starting to talk (thereby encouraging the first talker to continue talking).
Brief Description of the Drawings
A preferred embodiment of the present invention will now be described with reference to the appended drawings, in which:
Figure 1 illustrates a basic conversational turn-taking procedure;
Figure 2 shows some possible outcomes in a three-way conference in a situation where two participants attempt to respond to an audio contribution from another participant;
Figure 3 shows a conference system including a conference server or bridge;
Figure 4 illustrates the entities and interactions involved in a two-way, two-party audio communication session performed according to an embodiment without a conference server or bridge;
Figure 5 illustrates the entities and interactions involved in a multi-party audio communication session performed according to an embodiment with an audio bridge and a messaging bridge;
Figure 6 illustrates functional modules that may be present within a messaging bridge; and
Figure 7 is a block diagram of a computer system suitable for the operation of embodiments of the present invention.
Description of Preferred Embodiments of the Invention
With reference to the accompanying figures, methods and apparatus according to preferred embodiments will be described.
As mentioned earlier, Figure 1 illustrates a basic conversational turn-taking procedure. At stage s1, a Turn Construction Unit (TCU) of a participant is in progress. This TCU may come to an end by virtue of the current participant/talker stopping talking having explicitly selected the next talker (stage s2), in which case the procedure returns to stage s1 for a TCU of the selected next talker. If the current talker doesn't select the next talker, another participant may self-select (stage s3), with the procedure then returning to stage s1 for a TCU of the self-selected next talker. If no other participant self-selects at stage s3, the current talker may continue with the procedure returning to stage s1 for another TCU from the same talker. If the current talker doesn't continue, the procedure returns from stage s4 to stage s3 until another talker does self-select. Essentially, the next talker is determined by the next participant to start talking.
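The decision steps of Figure 1 can be expressed as a small piece of selection logic; the sketch below is an interpretation of the procedure just described, and the function name and arguments are illustrative rather than part of the patent:

```python
import random

def next_talker(current, selected_next=None, self_selector=None, current_continues=True):
    """One pass through the Figure 1 turn-taking decision (stages s2-s4)."""
    if selected_next is not None:      # s2: current talker explicitly selects the next talker
        return selected_next
    if self_selector is not None:      # s3: another participant self-selects
        return self_selector
    if current_continues:              # s4: nobody self-selects, the current talker continues
        return current
    return None                        # otherwise: silence until somebody does self-select

# Usage sketch: a short simulated conversation between A, B and C
talker = "A"
for _ in range(5):
    print(f"TCU by {talker}")
    talker = next_talker(talker, self_selector=random.choice(["B", "C", None]))
```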
As discussed above, in particular with reference to Figure 2, delay may affect the turn-taking process. The participants themselves may not be aware of such delays (network delays, processing delays or other delays), or be aware that the normal running of the conversation may be being affected by such delays - they may in fact believe that other participants are being impolite (as interruptions in a face-to-face discussion or in a live discussion unaffected by such delays may be considered impolite) or genuinely slow to respond.
It should be noted that the following description of preferred embodiments will relate primarily to issues caused by delay, which can have such impacts on audio communication sessions. Audio communication sessions may also be affected by other issues such as echo, which may be caused by a far end acoustic terminal or any intermediate point in the transmission, and which can have a severe impact on conversation. Techniques exist to control it, and may be used together with the techniques described below.
As indicated earlier, while a two-party communication session may happen simply between the two parties involved, with encoded data indicative of respective audio contributions being routed directly (generally bi-directionally) between the parties via a communications network, multi-party communication sessions generally happen via a conference bridge, with data indicative of respective audio contributions being routed (again, generally bi-directionally) between each party and a conference bridge via a communications network, such that the conference bridge acts as a hub and possibly a control entity for the communication session. A conference bridge may of course be used for a two-party communication session, and a multi-party communication session may take place without a bridge (although this may require complex co-ordination).
Before discussing the specific functionality of preferred embodiments concerned with mitigating problems caused by delay, a brief explanation will be provided of a possible conference system including a conference server or bridge. It will be noted however that embodiments of the invention are applicable both in relation to communication systems and sessions involving a conference bridge (e.g. multi-party communication sessions) as well as to communication systems and sessions that do not involve the use of a conference bridge (e.g. simple two-party communication sessions).
A conference system involving a conference server or bridge is shown in Figure 3. A plurality of user terminals 37a, 37b, 37c are connected to a centralised conference server 30 via bidirectional data links that carry a combination of single-channel (generally upstream) audio data 31, multi-channel (generally downstream) audio data 32, and additional (generally bidirectional) digital control and/or reporting data 33. The data links could consist of a number of tandem links using a range of different transmission technologies and possibly include additional processing such as encryption, secure pipes and variable length data buffering. The actual individual links and routes (i.e. the precise network routers) by which the data proceeds between the conference server 30 and the respective user terminals 37 need not be fixed - they could be changed even during a communication session to links suffering a lower delay, experiencing lower congestion or incurring a lower cost, for example.
Single-line dotted arrows 31 represent single-channel upstream audio data carrying audio contributions of individual conference participants being transmitted/streamed from respective user terminals 37 to (and within) the conference server 30. Double-line dotted arrows 32 represent multi-channel/rendered downstream audio data resulting from the processing and combining at the conference server 30 of audio contributions of multiple conference participants, which is transmitted/streamed from the conference server 30 to the respective user terminals 37. Unbroken arrows 33 represent digital control and/or reporting data travelling between the conference participants' terminals 37 and the conference server 30. It will be understood that the paths for the respective types of data 31, 32, 33 may be via the same or different servers, routers or other network nodes (not shown).
Referring to the conference server 30, which could be used or configured for use in performing a method according to a preferred embodiment, upstream audio inputs from conference clients (received from their respective user terminals 37) may be passed through jitter buffers 310 before being passed to an Analysis Unit 340 and on to a Conference Control Unit 360, and to a Mixer/Processor 350. The jitter buffers 310 may be used to prevent data packets being discarded if they suffer excessive delay as they pass through the data link. The length of each jitter buffer may be determined by a jitter buffer controller 320 using an optimisation process which takes into account the measured jitter on the data packets and the packet loss rate, for example.
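The optimisation process itself is not specified here; the fragment below is merely one plausible heuristic, sizing the buffer from the spread of measured inter-arrival jitter and lengthening it when packet loss is high. The function name, the two-standard-deviation rule and the 2% loss threshold are all assumptions made for illustration.

```python
import statistics

def jitter_buffer_length_ms(inter_arrival_jitter_ms, packet_loss_rate,
                            min_ms=20, max_ms=200):
    """Pick a jitter-buffer length from measured jitter and loss (illustrative heuristic).

    The buffer is sized around the spread of inter-arrival jitter; if loss is already
    high the buffer is lengthened (late packets would otherwise be discarded), at the
    cost of extra delay.
    """
    if not inter_arrival_jitter_ms:
        return min_ms
    spread = (statistics.mean(inter_arrival_jitter_ms)
              + 2 * statistics.pstdev(inter_arrival_jitter_ms))
    if packet_loss_rate > 0.02:      # losing more than 2% of packets: accept more delay
        spread *= 1.5
    return max(min_ms, min(max_ms, round(spread)))

print(jitter_buffer_length_ms([12, 18, 25, 40, 15, 22], packet_loss_rate=0.01))
```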
The mixer/processor 350 receives the upstream audio data from the conference terminals 37 (via jitter buffers 310 where these are present), performs signal processing to combine and render the audio signals and distributes the mixed/rendered signal back to the conference terminals 37. The analysis unit 340 takes the upstream audio data, extracts delay and other performance indicators, and passes them to the conference control unit 360.
The conference control unit 360 may be a processor or processor module configured to implement a set of controls to other system components (e.g. providing instructions to server(s), routers etc. and/or providing instructions to be implemented on the conference server 30 itself or on the individual conference terminals 37 relating to adjustments to audio parameters, for example) based on system-specific rules applied to data such as speech quality, transmission delay data and other timing data, with a view to mitigating adverse effects and to improving user experience and perception. It may comprise processing modules and memories for processing and storing information in respect of paths to respective conference terminals, information relating to respective user terminals, etc., and may send control data to the jitter buffer controller 320 and to external system components such as user terminals 37, server(s) and/or routers on paths thereto and therefrom, etc.
As indicated earlier, delay can be caused by a number of factors, such as the following:
- Fundamental transmission delay due to the distance between talkers inherent in a network and any transmission system.
- The transmission system may itself perform buffering either as part of the processing or to allow for varying inter-arrival delay of packets in a packet based transmission system.
- Audio mixing in a conference bridge where one is used (i.e. one is generally used in the context of a multi-party communication session) can add delay due to processing such as noise reduction, level control, equalisation etc.
- Audio mixing in a conference bridge may also include filtering such as spatial audio processing and artificial room rendering which can add considerable delay.
- Techniques to reduce the impact of packet loss, such as forward error correction and retransmission.
In order to illustrate the specific functionality of preferred embodiments concerned with mitigating problems caused by delay, an embodiment will first be explained in the context of a simple two-party communication session that does not involve the use of a conference server or bridge. In such embodiments that do not involve a conference bridge, additional functionality may need to be performed by the respective user terminals, as will be explained. After the explanation of the two-party embodiment that does not make use of a conference bridge, an explanation will be provided of how an embodiment may be implemented in the context of a multi-party communication session that does make use of a conference server or bridge (and which thereby enables additional functionality to be performed therein and/or in an associated module that will be referred to as a messaging bridge).
With reference to Figure 4, this illustrates the entities and interactions involved in a symmetrical two-way, two-party audio communication session performed according to an embodiment without a conference server or bridge, and relates to a scenario in which two participants or parties A, 4a and B, 4b are involved as speakers and as listeners in the communication session. Participant A, 4a uses user terminal A, 40a, providing audio contributions (i.e. speaking) into a microphone or other audio input interface 41a so these can be encoded then transmitted via a communications network 400 to other parties such as Participant B, 4b, and receiving audio contributions (i.e. listening) via a speaker, headset or other audio output interface 42a which have been decoded following transmission via the communications network 400 from other parties such as Participant B, 4b. Similarly, Participant B, 4b uses user terminal B, 40b, providing audio contributions into audio input interface 41b so these can be encoded then transmitted via communications network 400 to other parties such as Participant A, 4a, and receiving audio contributions via audio output interface 42b which have been decoded following transmission via the communications network 400 from other parties such as Participant A, 4a.
As indicated earlier, while the transmission via the network 400 may be via an entity such as a conference bridge, for simplicity, it will simply be shown here as passing directly through the network 400 between the parties concerned. To illustrate how delay may be introduced, the present embodiment will regard the route through network 400 as simply passing through a generic transmission system 440. As well as transmission delay, this may cause delay due to buffering and/or processing in buffering and/or processing modules 445.
Note that, of the possible causes of delay given above, the most significant delays are generally due to audio processing and buffering of audio data. On account of this, it will generally be possible to transmit control and status data between the user terminals in question significantly more quickly than it is possible to transmit audio data between them. This is significant, because if any remote terminals can be made aware in advance that another person has started talking, the output of respective audio processors 46a, 46b (respectively providing audio outputs for Participants A and B) at each of those remote terminals 40a, 40b can be modified by adding pre-recorded speech elements or other pre-emptive sounds prior to the start of the reproduction by the remote terminal 40 in question of the actual utterance from the talker 4 in question. Such pre-emptive sounds will discourage any such remote participants from starting to talk before the actual, near-live audio data from the talker in question arrives at the remote terminal in question.
A benefit of preferred embodiments therefore is that listeners will hear what will sound like the start of a remote talker's speech (or other such pre-emptive sounds) earlier than they would using existing mechanisms. Consequently, the disturbing effect of talkers “talking over each other” may be reduced to a level comparable with natural conversation (i.e. without delays caused by buffering/processing 445 in the transmission path or by the transmission system 440 in general). Another way of looking at this is that such embodiments allow a smoother transition from a ‘nobody talking’ state to a ‘one person talking’ state, avoiding or mitigating disruptive situations such as those shown in Figure 2.
As will be explained later, preferred embodiments, in particular such as those which make use of a control entity, may also allow for automated arbitration to be performed, with an almost-immediate determination being made at the control entity as to which participant was actually the first to start talking after a ‘nobody talking’ state, then messages being sent from the control entity causing each other party to hear a pre-emptive sound indicative of that first talker starting to talk prior to the start of the reproduction of the actual utterance from the first talker. Further, the control entity may temporarily suppress the reproduction of those other participants' utterances (or may send messages instructing the user terminals to suppress such reproduction temporarily) in particular at the terminal of the first talker, in order not to cause that first talker to stop talking unnecessarily.
Referring again to Figure 4, each user terminal 40 (i.e. 40a and 40b) contains the following components in addition to the audio input 41 and output 42 interfaces discussed above:
- An audio processor 46 (i.e. audio processor 46a in User Terminal A, 40a, and audio processor 46b in User Terminal B, 40b). Each of these is in the received audio path in question, and contains appropriate processing to include in the audio data to be played to the local listener in question pre-emptive sounds indicative of the other (i.e. remote) party starting to make an audio contribution, thereby concealing the presence of the delay between that remote party actually having started to make an audio contribution and the audio data for that audio contribution actually arriving at the listener's terminal. In practice the audio processors 46 are likely to be combined with existing receiver processors and buffers so that no additional delay is caused.
- A decision system 47 (i.e. 47a and 47b) that detects certain phatic or filler speech sounds such as ‘Umm’, ‘Err’ etc.
- At least one memory or store 48 (i.e. 48a and 48b, 48ab and 48ba) for storing audio data such as speech patterns of the respective participants;
- A speech detector 49 (i.e. 49a and 49b) which detects when the local user is talking.
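The patent does not prescribe how the speech detector 49 recognises that the local user has started talking; as a purely illustrative sketch, a simple energy-based onset detector (the frame length and threshold below are assumptions) could look as follows:

```python
import math

def detect_speech_onset(samples, frame_len=160, threshold_db=-30.0):
    """Return the index of the first frame whose energy exceeds the threshold, else None.

    `samples` are floats in [-1.0, 1.0]; with 8 kHz audio a 160-sample frame is 20 ms.
    A real detector (49a/49b) would also guard against clicks and background noise.
    """
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        level_db = 20 * math.log10(rms) if rms > 0 else float("-inf")
        if level_db > threshold_db:
            return i // frame_len
    return None

# Usage sketch: silence followed by a louder 'utterance'
audio = [0.0] * 800 + [0.2] * 800
print(detect_speech_onset(audio))   # -> 5 (first frame of the louder segment)
```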
As well as various internal interactions and communication links within the respective user terminals 40, Figure 4 also shows three types of communication between the user terminals:
(i) in-session audio data communication (with dotted lines 44ab and 44ba representing the generally-subject-to-delay transmission of encoded audio data from the audio input interface 41 of one participant to the audio processor 46 of the other participant, via the network 400) via which encoded audio traffic passes, possibly via an audio bridge, audio processing, buffering etc.;
(ii) pre-session audio data communication of stored audio data (with dotted lines 45ab and 45ba representing the transmission, again via the network 400, of stored audio data of a participant's own speech patterns from the speech pattern store 48 (i.e. 48a or 48b) at that participant's user terminal to the corresponding remote talker speech pattern store 48 (i.e. 48ba or 48ab respectively) at the other participant's user terminal, this being done in advance of such data being needed at the receiving terminal during a communication session); and
(iii) in-session direct message (i.e. non-audio) communication (with unbroken arrows 43ab and 43ba representing the generally almost-instantaneous transmission of message data from the speech detector 49 of one participant to the audio processor 46 of the other participant, again via the network 400) that enables small amounts of control and information data to be exchanged rapidly, without encountering the type of delays encountered by audio data (i.e. due to audio processing, buffering etc.).
In relation to these types of communication, it will be appreciated that (i) and (iii) happen as part of an in-progress, real-time communication session, but need not involve the same path through the network, whereas (ii) would generally not need to be done during a communication session, and would generally be completed prior to the actual communication session. Again, it need not involve the same path through the network as (i) or (iii).
Looking now at the various memories 48 for the storage of audio data such as speech patterns, there may just be a single memory in each terminal 40, or the data may even be stored remote from the terminals, but the memories are represented in Figure 4 as two different stores within each terminal, to illustrate the different purposes for which storage is used, as follows:
- In user terminal A, 40a, there is a local talker speech pattern store 48a for a range of speech patterns such as phatic/filler expressions from the local talker, Participant A, 4a, and there is also a remote talker speech pattern store 48ab for a range of speech patterns such as phatic/filler expressions from the remote talker, Participant B, 4b.
- In user terminal B, 40b, there is a local talker speech pattern store 48b for a range of speech patterns such as phatic/filler expressions from the local talker, who in this case is Participant B, 4b, and there is also a remote talker speech pattern store 48ba for a range of speech patterns such as phatic/filler expressions from the remote talker, who in this case is Participant A, 4a.
Data may be exchanged between the above modules as shown by the broken arrows 45ab and 45ba. Thus, copies of A's speech patterns, locally-stored at A's own terminal 40a in store 48a, may be provided to B via link 45ab and stored in B's store of A's speech patterns 48ba.
Similarly, copies of B's speech patterns, locally-stored at B's own terminal 40b in store 48b, may be provided to A via link 45ba and stored in A's store of B's speech patterns 48ab.
The respective decision systems 47 (i.e. 47a and 47b) may learn about their respective local talker’s speech patterns, including typical sounds made as they begin talking. These are often quite short and may include noises such as breathing sounds or short noises that constitute the start of speech (Ummm..., Errr..., etc.). The decision systems 47 may use any suitable pattern recognition method, for example a neural network could be trained using offline speech patterns tagged by an expert user. The decision systems 47 could be static (e.g. a trained neural net) or could train themselves over the course of use (e.g. reinforced learning). They capture the sounds and place these in the local speech pattern stores 48a, 48b of the local user. The sounds could be stored in any number of ways e.g. as waveforms, parametric speech codes or a combination of both.
Once there is a database of the local user's sounds in a local user store 48a, 48b, or as they are added, the sounds (waveforms or parametric speech codes) may be transmitted to the remote stores 48ab, 48ba at other users' terminals as set out above. Ideally, the stores 48a, 48b of each participant's own sounds are populated before the start of the communication session and are transmitted to the respective remote talker stores 48ab, 48ba at or before call set-up time, but the databases could be built up from the start of the communication session and sent during the communication to update mirrored databases at remote stores 48ab, 48ba.
Each sound in each local talker store 48a, 48b may be indexed (unique for each sound and each talker) and this index may be transmitted with the sounds to remote store 48ab, 48ba such that it too can access the database using the same key used at local store 48a, 48b.
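One way to picture these indexed local and remote stores (48a, 48b, 48ab, 48ba) is sketched below; the (talker, sound_id) key scheme and the method names are assumptions made purely for illustration:

```python
class SpeechPatternStore:
    """Indexed store of short start-of-speech sounds, mirrored between terminals.

    The same (talker, sound_id) key is used in the local talker store (e.g. 48a)
    and in the remote copies at other terminals (e.g. 48ba), so an index sent over
    the fast messaging channel is enough to recall the sound at the far end.
    """

    def __init__(self):
        self._sounds = {}        # (talker, sound_id) -> waveform or parametric speech code
        self._next_id = {}       # talker -> next free sound_id

    def add(self, talker, sound):
        sound_id = self._next_id.get(talker, 0)
        self._next_id[talker] = sound_id + 1
        self._sounds[(talker, sound_id)] = sound
        return (talker, sound_id)            # the index that is mirrored remotely

    def get(self, key):
        return self._sounds[key]

    def export_for_mirroring(self, talker):
        """Entries to transmit (ideally before the session) to the remote stores."""
        return {k: v for k, v in self._sounds.items() if k[0] == talker}

    def import_mirrored(self, entries):
        """Install entries received from a remote talker's local store."""
        self._sounds.update(entries)

# Usage sketch: A's terminal populates 48a and mirrors it into B's 48ba
store_48a, store_48ba = SpeechPatternStore(), SpeechPatternStore()
key = store_48a.add("A", "waveform:'Err...'")
store_48ba.import_mirrored(store_48a.export_for_mirroring("A"))
print(store_48ba.get(key))    # B can recall A's sound from the index alone
```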
During the communication, if the speech detector 49a of user terminal A, 40a, detects a portion of speech that is the start of a turn constructional unit (or turn transition point) of its local user 4a, it determines the sound that most closely matches it in local store 48a and sends the index of this sound, via the direct messaging channel, to the audio processor 46b of the recipient, participant B, 4b. The audio processor 46b uses the transmitted index to recall the sound stored in store 48ba and mixes it into the audio channel played to the recipient. By mixing this sound from store 48ba into the stream ahead of the transmitted audio (which will be received slightly later from participant A, 4a), the recipient 4b can hear the required sound (or a sound similar to it) before the actual audio has been transmitted through the network 400 (i.e. via the transmission system 440 and/or intermediary processing unit 445). The audio processor 46b may need to mix the pre-emptive (delay concealment) sound from store 48ba carefully with the actual sound transmission once this is received from the network 400, for example using a trailing window at the end of the sound from store 48ba. This process could be aided by data from the local speech detector 49b.
By virtue of the above, if A starts to talk (or makes a sound indicating that they are about to talk) slightly before B, a message will be sent from A's speech detector 49a to B's audio processor 46b, triggering B's audio processor to incorporate a pre-emptive sound previously transferred from A's store 48a to B's store 48ba into the stream to be played to B. (At the same time, A's speech detector 49a may also signal to A's own audio processor 46a that any message received shortly afterwards from B's speech detector 49b should be ignored, in order to prevent a corresponding pre-emptive sound previously transferred from B's store 48b to A's store 48ab from being incorporated into the stream to be played to A.)
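The careful mixing mentioned above might, for example, crossfade the tail of the pre-emptive sound into the live stream once it arrives; the sketch below shows one such trailing-window blend, in which the overlap length and the linear ramp are assumptions rather than details taken from the patent:

```python
def splice_with_trailing_window(preemptive, live, overlap=80):
    """Play the pre-emptive sound, then crossfade its tail into the live audio.

    `preemptive` and `live` are lists of float samples; the last `overlap` samples
    of the pre-emptive sound are ramped down while the first `overlap` samples of
    the live stream are ramped up, hiding the join from the listener.
    """
    overlap = min(overlap, len(preemptive), len(live))
    head = preemptive[:len(preemptive) - overlap]
    blended = [
        preemptive[len(preemptive) - overlap + i] * (1 - i / overlap) + live[i] * (i / overlap)
        for i in range(overlap)
    ]
    return head + blended + live[overlap:]

# Usage sketch: 0.25 s of stored "Err..." at 8 kHz, joined onto the late-arriving stream
preemptive = [0.2] * 2000
live = [0.3] * 4000
out = splice_with_trailing_window(preemptive, live)
print(len(out), out[1950], out[2050])   # 5920 samples; the join ramps from 0.2 toward 0.3
```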
Looking now at how this may be implemented in relation to a multi-party scenario, this may involve each party interacting via a user terminal such as the User Terminals 40 shown in Figure 4, with bi-directional messaging links between the respective parties/user terminals. Each remote talker store would preferably have a set of pre-emptive sounds from each other participating talker, an appropriate one of which would be incorporated into the local audio stream on receipt of a message from whichever other participant had started talking.
In relation to such a multi-party scenario, and in particular one based on a system such as that described with reference to Figure 3, in which a communication session is handled via an audio bridge via which audio signals are passed and processed between the respective participants, it may be more effective for the conference system to be adapted to have a bridge for the messaging/data paths as well as the bridge for the audio data, or one bridge performing both functions (i.e. processing/distributing messaging data/signals from each participant as well as audio data/signals from each participant). An example of this is shown in Figure 5. In this scenario the messaging bridge 60 could include components allowing further centralised analysis of speech traffic in order to provide additional data to the User Terminals 50. The interactions that may take place between these components and the user terminals are illustrated in Figure 6, which will be discussed later.
Referring to Figure 5, this illustrates the entities and interactions involved in a multi-party audio communication session performed according to an embodiment with an audio bridge 70 and a messaging bridge 60, and relates to a scenario in which four participants or parties may be involved as speakers and as listeners via respective user terminals 50 (i.e. User Terminals A (50a), B (50b), C (50c) and D (50d)).
Briefly leaving aside the interactions of the user terminals 50 and the audio bridge 70 of Figure 5 with the messaging bridge 60, and the respective functions specifically associated with those interactions, the user terminals 50 and audio bridge 70 may generally perform functions similar to those of the user terminals 37 and conference server 30 in the conference system of Figure 3, so their normal functions will not be described again in detail. They will be summarised, but the following explanation will otherwise concentrate on the specific functionality of these components when performing a method according to a preferred embodiment.
Each user terminal 50 has a user interface 52 for audio input and output from/to a user (e.g. a headset comprising a microphone and a speaker) and a network interface 58, which allows for input and output of encoded audio data, which is exchanged via a network connection between the respective user terminals 50 and the audio bridge 70, and for input and output of messaging data, which is exchanged via a network connection between the respective user terminals 50 and the messaging bridge 60.
Each user terminal 50 also has a processor 54 and a store 56. The processor 54 is shown as a single module in order to avoid over-complicating the diagram, but may perform functions equivalent to at least some of the functions of the audio processors 46, the decision systems 47 and the speech detectors 49 in the user terminals 40 shown in Figure 4, with others (or their equivalents) being performed by components of the messaging bridge 60 or the audio bridge 70. Similarly, the store 56 is shown as a single module in order to simplify the diagram, but may perform functions equivalent to some or all of the functions of the speech pattern stores 48 in the user terminals 40 shown in Figure 4 (although again, some of these (or their equivalents) may instead be performed by components of the messaging bridge 60 or the audio bridge 70). Primarily, however, each store 56 may store audio data such as speech patterns and/or other pre-emptive noises of the respective participants, some being those of the local participant (in order that these can be sent via a network connection, possibly via the audio bridge 70, to other participants) and some being those of other participants (having been received from those other participants via a network connection, possibly via the audio bridge 70, for use by the terminal of the local participant). It will be noted that the audio data for the respective participants may be collected and stored centrally, in a store in the audio bridge 70, for example, and incorporated into the respective rendered audio streams as appropriate and forwarded on to the respective participants.
Each processor 54 receives audio contributions of the local participant and encodes these for transmission via a network connection and via the audio bridge 70 to the other participants. Each processor 54 also receives encoded audio contributions transmitted from other participants via a network connection and via the audio bridge 70 and decodes these for audio reproduction to the local participant. Further, each processor 54 analyses audio contributions of the local participant in order to identify the start of new utterances, in response to which it causes messages to be sent via a network connection to the messaging bridge 60, causing this to forward resulting messages to the other participants. Further, each processor 54 receives messages from the messaging bridge 60 indicative of other participants having started to make new utterances, and incorporates into the audio output for the local participant appropriate pre-emptive noises indicative of other participants having started to make new utterances.
The audio bridge 70 and messaging bridge 60 are shown in Figure 5 as being separate modules. It will be appreciated that they are shown this way primarily in order to make the following explanation of their respective functions clearer. While they may be separate, functionally and in terms of location, they may in fact be co-located, or form parts of one combined bridge module performing the functions of both bridges (i.e. processing/distributing messaging data/signals from each participant as well as audio data/signals from each participant). Such a combined bridge may also perform other conference control functions such as that performed by conference server 30 in the conference system of Figure 3. Additionally, there exists a range of possible hybrid embodiments between those shown in Figures 4 and 5, in which various parts of the processing may be performed in different places or by different entities (e.g. in one or other of the bridges, where these are used, in the user terminals, or elsewhere in the network). Messages may be passed through one or other of the bridges, or may take a route avoiding passing through either of the bridges.
Referring specifically to the audio bridge 70, this is shown as having a Conference Controller 72 (which may perform functions corresponding to those of the Conference Control Unit 360 in the Conference Server 30 of Figure 3), an Analysis Unit 74 (which may perform functions corresponding to those of the Analysis Unit 340 in the Conference Server 30 of Figure 3), and a Mixer/Processor 76 (which may perform functions corresponding to those of the Mixer/Processor 350 in the Conference Server 30 of Figure 3). It also has audio interfaces (not shown in order to avoid making the diagram over-complex) via which it receives individual encoded audio contributions from the respective user terminals and via which it provides rendered downstream audio to the respective user terminals. The received and downstream audio signals are symbolised by the broken arrows between the user terminals 50 and the audio bridge 70.
In addition to the functions referred to above, the Conference Controller 72 of the audio bridge 70 is in communication with the messaging bridge 60, allowing it to receive instructions from the messaging bridge 60.
Referring specifically to the messaging bridge 60, this is shown as having a message aggregator 62 which receives the messaging signals (symbolised by unbroken arrows) from the user terminals 50, a message analyser 64 which analyses those messages in order to determine, for example, which participant started talking first in situations where two or more participants have started talking at times very close to one another, and/or to perform other such arbitration between the participants and/or their respective audio contributions, and a message distributor 66 which sends messaging signals to the user terminals 50 in dependence on the outcome of the analysis by the message analyser 64. While the messaging signals from the messaging bridge 60 could simply be sent directly to the respective user terminals 50, instructing them to include pre-emptive sounds in the respective audio playbacks to participants other than the first talker, in this example they are also provided to the audio bridge 70, allowing it to adjust the rendered audio streams being provided to the respective user terminals 50 as well.
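A minimal sketch of the arbitration and distribution just described is given below, assuming each start-of-utterance message carries a participant identifier and a timestamp; the field names, function names and callbacks are illustrative assumptions rather than part of the described system.

```python
# Minimal sketch, assuming timestamped start-of-utterance messages.

def arbitrate_first_talker(start_messages):
    """Return the participant who started talking earliest."""
    earliest = min(start_messages, key=lambda m: m["timestamp"])
    return earliest["participant"]

def distribute(start_messages, all_participants, send_to_terminal, notify_audio_bridge):
    first = arbitrate_first_talker(start_messages)
    # Tell every terminal except the first talker's to play a pre-emptive sound.
    for participant in all_participants:
        if participant != first:
            send_to_terminal(participant, {"type": "utterance_start",
                                           "participant": first})
    # Also inform the audio bridge so it can adjust the rendered streams,
    # e.g. temporarily suppressing contributions from the other talkers.
    notify_audio_bridge({"priority_participant": first})
```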
The messaging bridge 60 may thus interact with the audio bridge 70, allowing the latter to suppress actual audio contributions (temporarily, at least) from participants other than the first talker, as well as itself determining which messaging signals to send to which user terminals in order to trigger the playing of pre-emptive sounds to participants other than the first talker.
It will be appreciated that the audio bridge 70 and/or messaging bridge 60 may be configured to implement policies other than the priority-to-first-talker policy explained above, either in addition to or instead of that policy. An example of another such policy is a priority-to-chairperson policy, according to which the beginning of an utterance by a designated chairperson may result in pre-emptive sounds indicative of the chairperson being played to each other participant, and/or audio contributions from those other participants temporarily being suppressed, even if the chairperson was not actually the first talker. Other priority policies may also be implemented in addition to or instead of the above.
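The following fragment sketches how such a configurable priority policy might be expressed, again under the assumption of timestamped start-of-utterance messages; the policy names and parameters are hypothetical and not taken from the described embodiments.

```python
# Hedged sketch of a configurable priority policy, under the stated assumptions.

def select_priority_talker(start_messages, policy="first_talker", chairperson=None):
    if policy == "chairperson" and chairperson is not None:
        # If the designated chairperson is among those starting to talk,
        # they are prioritised even if they were not strictly first.
        for message in start_messages:
            if message["participant"] == chairperson:
                return chairperson
    # Default: priority to the earliest talker.
    return min(start_messages, key=lambda m: m["timestamp"])["participant"]
```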
Referring to Figure 6, this illustrates in more detail than Figure 5 the functional modules that may be present within the messaging bridge 60, in particular illustrating that messages from each of the user terminals 50 may be received by the message aggregator 62 in order that these may be combined and forwarded on to each of the user terminals 50, and may also be passed to the message analyser 64 for analysis (e.g. for arbitration as to which participant was the first to start an utterance). The combined messages from the message aggregator 62 and the results of the analysis by the message analyser 64 may both be passed to the message distributor 66 for distribution to the respective user terminals 50.
Other Embodiments and Options.
- While the decision systems 47 in Figure 4 may learn purely from their local talker, they (and/or their counterpart decision-making modules in the processors 54 or associated components of the messaging bridge 60 or audio bridge 70 in Figure 5) may learn from one or more other sources, from interactions with and/or between other talkers, for example.
- The speech stores 48 in Figure 4 (and/or their counterpart stores in Figure 5) may be built up from examples of the respective local and remote talkers' speech, but they could initially contain a number of standard sounds (e.g. breathing noises, throat-clearing sounds, lip smacks, etc.) that could be used before a particular talker's patterns are learned and stored.
- While the decision systems 47, speech pattern stores 48 and speech detectors 49 are shown as being located at the respective user terminals 40 in Figure 4, they may be located at an intermediary system, or at one particular user terminal. The location should be chosen such that the delay in decision-making and exchanging messages is minimised.
- The embodiments described above refer to audio channels of a communication session, but visual cues could also be provided to video-capable user terminals where appropriate. These could involve simple lights or symbols indicating that a talker has started talking, or could involve more complex cues, such as on-screen changes to the visual image of a talker (e.g. changes which start moving the lips in a pattern that matches a chosen pre-emptive sound, possibly drawn from a visual database learned from the talker in question). Similarly, visual cues may also be used to determine when participants begin to make contributions. If participant A joins an audiovisual conference but participant B joins for audio only, the system could detect from a visual cue (e.g. opening the mouth, sitting up, raising a finger) that participant A is about to speak and send a message to participant B indicating that participant A is about to speak.
- Pre-emptive sounds could be used for every utterance after a period of silence, for every instance of potentially-conflicting utterances, or in other types of situation. Alternatively, they could be used only when the amount of delay or the length of the period of silence being broken exceeds a certain threshold, for example (see the sketch following this list).
- Processing such as that which may be implemented on actual speech signals in an audio bridge 70 (e.g. spatialized audio processing) may also be used on sounds such as those stored in the stores 48. Such processing would preferably be performed in advance of a communication session, but it may be done dynamically, during a communication session.
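As a purely illustrative sketch of the threshold idea mentioned in the list above, the following assumes that a measured one-way delay and the length of the silence being broken are available; the threshold values are placeholders, not values taken from the described embodiments.

```python
# Illustrative threshold test, under the stated assumptions.

def should_play_preemptive_sound(one_way_delay_ms, silence_duration_ms,
                                 delay_threshold_ms=150, silence_threshold_ms=2000):
    # Play a pre-emptive sound only when the delay is large enough, or the
    # silence being broken is long enough, for the cue to be worthwhile.
    return (one_way_delay_ms > delay_threshold_ms
            or silence_duration_ms > silence_threshold_ms)
```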
Figure 7 is a block diagram of a computer system suitable for the operation of embodiments of the present invention. A central processor unit (CPU) 702 is communicatively connected to a data store 704 and an input/output (I/O) interface 706 via a data bus 708. The data store
704 can be any read/write storage device or combination of devices such as a random access memory (RAM) or a non-volatile storage device, and can be used for storing executable and/or non-executable data. Examples of non-volatile storage devices include disk or tape storage devices. The I/O interface 706 is an interface to devices for the input or output of data, or for both input and output of data. Examples of I/O devices connectable to I/O interface 706 include a keyboard, a mouse, a display (such as a monitor) and a network connection.
Insofar as embodiments of the invention described are implementable, at least in part, using a software-controlled programmable processing device, such as a microprocessor, digital signal processor or other processing device, data processing apparatus or system, it will be appreciated that a computer program for configuring a programmable device, apparatus or system to implement the foregoing described methods is envisaged as an aspect of the present invention. The computer program may be embodied as source code or undergo compilation for implementation on a processing device, apparatus or system or may be embodied as object code, for example.
Suitably, the computer program is stored on a carrier medium in machine or device readable form, for example in solid-state memory, magnetic memory such as disk or tape, optically or magneto-optically readable memory such as compact disk or digital versatile disk etc., and the processing device utilises the program or a part thereof to configure it for operation. The computer program may be supplied from a remote source embodied in a communications medium such as an electronic signal, radio frequency carrier wave or optical carrier wave. Such carrier media are also envisaged as aspects of the present invention.
It will be understood by those skilled in the art that, although the present invention has been described in relation to the above described example embodiments, the invention is not limited thereto and that there are many possible variations and modifications which fall within the scope of the invention.
The scope of the invention may include other novel features or combinations of features disclosed herein. The applicant hereby gives notice that new claims may be formulated to such features or combinations of features during prosecution of this application or of any such further applications derived therefrom. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the claims.

Claims (15)

1) A method of managing a streamed audio communication session between a plurality of user devices, the user devices being configured to send streamed data indicative of received audio contributions from respective participants in a multiple-participant audio communication session via a communications network to one or more other user devices for conversion to audio representations of said received audio contributions for one or more respective other participants; the method comprising:
monitoring audio contributions from respective participants, and in response to detection therefrom that a first participant is beginning to make an audio contribution at a first one of said user devices, providing a signal for at least one of said other user devices indicating that the first participant is beginning to make an audio contribution;
in response to receipt at said at least one other user device of a signal indicating that the first participant is beginning to make an audio contribution, triggering a predetermined audible indication for a respective participant at said at least one other user device indicating that said first participant is beginning to make an audio contribution.
2) A method according to Claim 1 wherein the monitoring of audio contributions from a respective participant is performed at the user device of said participant.
3) A method according to Claim 1 or 2 wherein the predetermined audible indication for a respective participant at said at least one other user device indicating that said first participant is beginning to make an audio contribution is an audio representation of data previously stored at said at least one other user device.
4) A method according to Claim 3 wherein the data previously stored at said at least one other user device is a representation of a sound indicative of a participant beginning to make an audio contribution.
5) A method according to Claim 3 or 4 wherein the data previously stored at said at least one other user device is a representation of a sound indicative of said first participant beginning to make an audio contribution.
6) A method according to Claim 3, 4 or 5 wherein the data previously stored at said at least one other user device is a representation of a sound previously received from said first participant at said first user device.
7) A method according to Claim 3, 4, 5 or 6 wherein the data previously stored at said at least one other user device is data determined in dependence on analysis of previously-received audio contributions at said first user device.
8) A method according to any of the preceding claims wherein the audio communication session is managed by a session control device via which the user devices are configured to send said streamed data indicative of received audio contributions for forwarding to one or more other user devices.
9) A method according to Claim 8 wherein the session control device is configured to identify, in response to respective detections that respective participants are beginning to make respective audio contributions, which of said respective participants was the earlier or earliest to begin to make an audio contribution.
10) A method according to Claim 9 wherein the session control device is configured to provide signals for at least one user device other than that of the participant who was the earlier or earliest to begin to make an audio contribution that the participant who was the earlier or earliest to begin to make an audio contribution is beginning to make an audio contribution.
11) A method according to Claim 9 or 10 wherein the session control device is configured temporarily to suppress signals for the user device of the participant who was the earlier or earliest to begin to make an audio contribution that any other participant is beginning to make an audio contribution and/or temporarily to suppress audio representations of audio contributions from any other participant from being provided for the participant who was the earlier or earliest to begin to make an audio contribution.
12) A method according to any of the preceding claims wherein the audio communication session is managed by a session control device to which respective user devices are configured to send messages indicative of detections at said user devices that respective participants are beginning to make audio contributions, and from which messages are provided for other user devices indicative of respective participants having begun to make audio contributions.
13) A method according to Claim 12 wherein the session control device to which respective user devices are configured to send messages is configured to determine which of a plurality of participants identified as beginning to make audio contributions is to be prioritised, and is configured to provide messages for one or more other participants in dependence on such determinations.
14) Apparatus for managing a streamed audio communication session between a plurality of user devices, the user devices being configured to send streamed data indicative of received audio contributions from respective participants in a multiple-participant audio communication session via a communications network to one or more other user devices for conversion to audio representations of said received audio contributions for one or more respective other participants; the apparatus comprising one or more processors operable to:
monitor audio contributions from respective participants, and in response to detection therefrom that a first participant is beginning to make an audio contribution at a first one of said user devices, to provide a signal for at least one of said other user devices indicating that the first participant is beginning to make an audio contribution; and in response to receipt at said at least one other user device of a signal indicating that the first participant is beginning to make an audio contribution, to trigger a predetermined audible indication for a respective participant at said at least one other user device indicating that said first participant is beginning to make an audio contribution.
15) A computer program element comprising computer program code to, when loaded into a computer system and executed thereon, cause the computer to perform the steps of a method as claimed in any of claims 1 to 13.
GB1721775.3A 2017-12-22 2017-12-22 Managing streamed audio communication sessions Expired - Fee Related GB2569650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1721775.3A GB2569650B (en) 2017-12-22 2017-12-22 Managing streamed audio communication sessions

Publications (3)

Publication Number Publication Date
GB201721775D0 GB201721775D0 (en) 2018-02-07
GB2569650A true GB2569650A (en) 2019-06-26
GB2569650B GB2569650B (en) 2020-11-25

Family

ID=61131468

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11444795B1 (en) 2021-02-25 2022-09-13 At&T Intellectual Property I, L.P. Intelligent meeting assistant

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6466550B1 (en) * 1998-11-11 2002-10-15 Cisco Technology, Inc. Distributed conferencing system utilizing data networks
GB2463153A (en) * 2008-09-09 2010-03-10 Avaya Inc Notification of dropped audio in a teleconference call
US20100284310A1 (en) * 2009-05-05 2010-11-11 Cisco Technology, Inc. System for providing audio highlighting of conference participant playout
US20110184736A1 (en) * 2010-01-26 2011-07-28 Benjamin Slotznick Automated method of recognizing inputted information items and selecting information items

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20231222