WO2003094383A1 - Wireless conferencing system - Google Patents

Wireless conferencing system

Info

Publication number
WO2003094383A1
WO2003094383A1 PCT/US2003/014120
Authority
WO
WIPO (PCT)
Prior art keywords
frames
series
audio signal
terminal
source
Prior art date
Application number
PCT/US2003/014120
Other languages
French (fr)
Inventor
Doree Duncan Seligmann
Original Assignee
Avaya Technology Corporation
Priority date
Filing date
Publication date
Application filed by Avaya Technology Corporation
Priority to US10/672,319 (US7539486B2)
Publication of WO2003094383A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M 3/562 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities, where the conference facilities are distributed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 76/00 Connection management
    • H04W 76/40 Connection management for selective distribution or broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2207/00 Type of exchange or network, i.e. telephonic medium, in which the telephonic communication takes place
    • H04M 2207/18 Type of exchange or network, i.e. telephonic medium, in which the telephonic communication takes place - wireless networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/02 Details of telephonic subscriber devices including a Bluetooth interface

Abstract

A technique for improving the effectiveness of conference calls is disclosed. The present invention is a conferencing system (100) that is based on wireless technology. Access points (102-i), one or more per conference area, distribute the audio to and from each participant (103-1-N) in the conference area. Wireless terminals (103-1-N) are used by the participants to receive audio from other participants that are not collocated and to transmit audio to other participants that are not collocated.

Description

WIRELESS CONFERENCING SYSTEM
Cross-Reference to Related Applications
[0001] This application claims the benefit of U.S. Provisional Patent Application 60/378,193, filed on 6 May 2002, Attorney Docket 501034-A-00-US, entitled "Bluetooth Conference Room Phone System: Bluetooth Headsets Replace the Meeting Room Speakerphone," which is also incorporated by reference.
Field of the Invention
[0002] The present invention relates to telecommunications in general, and, more particularly, to a wireless conferencing system.
Background of the Invention
[0003] A conference call is a commonplace tool of business typically involving three or more people at two or more physically separated locations. Often, the number of participants in a conference call is small, and all participants can typically follow along with the conversations.
[0004] Conference calls, however, can involve a large number of participants, or the number of different locations involved can be large, or both. Furthermore, individual participants might be calling in from "remote" locations (e.g., home, an airport, a hotel room, etc.). The number of people involved, locations represented, and remote participants can diminish the effectiveness of the conference call. Specifically, participants in a conference call are subject to several limitations, including:
• Difficulty in hearing non-collocated participants,
• Difficulty on the part of remote participants or participants in one conference room to hear participants in another conference room,
• Ambient noises (e.g., fan noise, rustling of papers, etc.) causing further interference,
• The audio from different, non-collocated participants being received at different volumes,
• Having to physically manipulate the speakerphone to adjust it,
• Phone numbers and other relevant information having to be entered by hand,
• Reliance on a paper record for a "complete" list of participants, and
• The participant who is speaking at any given moment not being easily identified by non-collocated participants.
Summary of the Invention
[0005] The present invention is a conferencing system that is based upon wireless technology. Access points, one or more per conference area, distribute the audio to and from each participant in each conference area. Wireless terminals, typically headsets, are used by the participants to receive audio from other participants that are not collocated and to transmit audio to other participants that are not collocated.
[0006] A mixer keeps track of which participants are collocated with each other versus not collocated with each other. By doing so, the mixer transmits the composite audio signal to each specific participant that at a minimum represents the audio from those participants who are not collocated with the specific participant. Furthermore, the mixer can be used to create subconferences, sidebars, and whispering effects. The mixer can also be used to bridge in audio messages to specific participants during the course of a call. By managing the distractions of the overall group of participants, the conferencing system can improve the effectiveness of conference calls.
[0007] The illustrative embodiment of the present invention comprises: receiving a first series of frames that represents a first audio signal from a first source; receiving a second series of frames that represents a second audio signal from a second source; forming a third series of frames that represents a composite signal comprising at least one of the first audio signal and the second audio signal, wherein the composite signal is based on the location of the first source relative to the second source; and transmitting the third series of frames.
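The "series of frames" in paragraph [0007] can be pictured as audio payloads tagged with routing metadata. The following Python sketch is not part of the patent; the class and field names are assumptions chosen only to make the later examples concrete.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One frame in a series: a slice of digitized audio plus the metadata a mixer
    would need to route it. Field names are illustrative assumptions."""
    source_id: str            # identifier of the transmitting terminal (e.g., "103-1-1")
    access_point_id: str      # access point the terminal is associated with (e.g., "102-1")
    payload: bytes            # encoded audio samples for this frame interval
    is_control: bool = False  # True for control frames (terminal identifiers, events)
```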
Brief Description of the Drawings
[0008] Figure 1 depicts conferencing system 100 in accordance with the illustrative embodiment of the present invention.
[0009] Figure 2 depicts a block diagram of the salient components of access point 102-/ in accordance with the illustrative embodiment of the present invention.
[0010] Figure 3 depicts a block diagram of the salient components of terminal 103-i-j in accordance with the illustrative embodiment of the present invention.
[0011] Figure 4 depicts a block diagram of the salient components of controller 105-i in accordance with the illustrative embodiment of the present invention.
[0012] Figure 5 depicts a block diagram of the salient components of mixer 106-i in accordance with the illustrative embodiment of the present invention.
[0013] Figure 6 depicts a block diagram of an exemplary set of terminals with mixer 106-i in accordance with the illustrative embodiment of the present invention.
[0014] Figure 7 depicts a flowchart of the illustrative embodiment of the present invention.
Detailed Description
[0015] Figure 1 depicts a schematic diagram of conferencing system 100, in accordance with the illustrative embodiment of the present invention. Conferencing system 100 comprises telecommunications network 101, access points 102-1 through 102-M, terminals 103-i-1 through 103-i-N, associated with each access point 102-i, remote terminals 104-1 through 104-P, controllers 105-1 through 105-Q, and mixers 106-1 through 106-R, interconnected as shown. M, P, Q, and R are positive integers.
[0016] Telecommunications network 101 is a distribution system comprising switches and a backbone network that allows access points 102-1 through 102-M to communicate with each other. As examples, telecommunications network 101 can comprise the Public Switched Telephone Network (PSTN), or telecommunications network 101 can comprise a local or wide area network (e.g., within a building, encompassing multiple locations of a corporation, etc.). It will be clear to those skilled in the art how to make and use telecommunications network 101.
[0017] Access point 102-i, for i=1 to M, is used to distribute audio, control, and other signals to and from each participant in a conference call in a particular room or area.
[0018] There can be more than one access point 102-i for multiple reasons. First, access point 102-i can communicate in accordance with the Bluetooth protocol or another wireless air interface protocol, wherein a limit might exist on the number of active participants that can be associated with a given access point at any given time. For example, Bluetooth has a limit of eight active terminals. Therefore, more than one access point might be required within a given conference room or area.
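As a back-of-the-envelope illustration of why a single conference area may need several access points, the sketch below computes the minimum number of access points under the eight-active-terminal figure cited above; the function name is an assumption, not part of the patent.

```python
import math

ACTIVE_TERMINALS_PER_ACCESS_POINT = 8  # Bluetooth figure cited in paragraph [0018]

def access_points_needed(terminals_in_area: int) -> int:
    """Minimum number of access points for one conference area, assuming every
    terminal must be active at the same time."""
    return math.ceil(terminals_in_area / ACTIVE_TERMINALS_PER_ACCESS_POINT)

# A room with 11 headsets needs ceil(11 / 8) = 2 access points.
assert access_points_needed(11) == 2
```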
[0019] Second, multiple conference rooms that are far apart from each other might be used during a particular conference call. At least one access point 102-i is needed to support each conference room, so multiple access points are required to support more than one conference room.
[0020] Access point 102-i is described in detail later.
[0021] Each access point 102-i has associated with it one or more terminals. Terminal 103-i-j, for i=1 to M and j=1 to N, is a telecommunications device associated with a particular user who is already participating in a conference call or joining a conference call. The number of terminals associated with each access point 102-i can be different across access points. In some embodiments, some or all of the terminals throughout conferencing system 100 can be headsets. Each participant in the conference room or area uses terminal 103-i-j to hear the non-collocated participants and to talk to the non-collocated participants. The headsets can replace the speaker and microphone associated with a speakerphone, if all participants in the conference room use headsets. Terminal 103-i-j is described in detail later. It will be clear to those skilled in the art, after reading this specification, how to make and use terminal 103-i-j.
[0022] Remote terminal 104-i, for i=1 to P, can be a conventional handset tied into the public switched telephone network or the private branch exchange in a building. The audio signals exchanged between remote terminal 104-i and terminal 103-i-j are converted between different formats within telecommunications network 101, in some embodiments by mixer 106-i. It will be clear to those skilled in the art how to convert audio signals between different formats (e.g., T1 pulse-code modulation versus Bluetooth, etc.).
[0023] Controller 105-i, for i=1 to Q, is used to manage the ongoing conference call. For example, controller 105-i can manage the audio signals, provide notification of the presence of participants, and allow other systems to both control and retrieve information during a meeting. Controller 105-i can be accessed by one or more participants during the conference call, or alternatively by a conference call attendant, through a graphical user interface or a web-based interface. Controller 105-i is described in detail later.
[0024] In some embodiments, mixer 106-i, for i=1 to R, is present to specifically manage the content of each audio signal. Mixer 106-i is described in detail later. It will be clear to those skilled in the art, after reading this specification, how to make and use mixer 106-i.
[0025] Figure 2 depicts a block diagram of the salient components of access point 102-i in accordance with the illustrative embodiment of the present invention. Access point 102-i comprises receiver 201, processor 202, memory 203, transmitter 204, and network interface 205, interconnected as shown.
[0026] Receiver 201 is a circuit that is capable of receiving frames (i.e., packets) from the shared communications channel, in well-known fashion, and of forwarding them to processor 202. The frames include both data frames (e.g., for transmitting audio, etc.) and control frames (e.g., for transmitting identifiers of terminals, etc.). It will be clear to those skilled in the art how to make and use receiver 201.
[0027] Processor 202 is a general-purpose processor that is capable of performing the tasks described below and with respect to Figures 6 and 7. It will be clear to those skilled in the art, after reading this specification, how to make and use processor 202.
[0028] Memory 203 is capable of storing programs and data used by processor 202. It will be clear to those skilled in the art how to make and use memory 203.
[0029] Transmitter 204 is a circuit that is capable of receiving frames from processor 202, in well-known fashion, and of transmitting them on the shared communications channel. It will be clear to those skilled in the art how to make and use transmitter 204.
[0030] Network interface 205 is a circuit that is capable of transmitting frames to telecommunications network 101 received from processor 202. Network interface 205 is also capable of receiving frames from telecommunications network 101 to send to processor 202. It will be clear to those skilled in the art how to make and use network interface 205.
[0031] Access point 102-i can be a terminal with wireless capabilities and conferencing software, a computer with the ability to communicate in accordance with a specific air interface protocol, or a standalone access point connected to a wider network (e.g., telecommunications network 101, etc.). Access point 102-i communicates over a shared communications channel with one or more terminals. Access point 102-i can be associated with an Internet Protocol (IP) terminal, a modified Bluetooth-enabled phone, or a phone modified to operate in accordance with another wireless air interface protocol.
[0032] Figure 3 depicts a block diagram of the salient components of terminal 103-i-j in accordance with the illustrative embodiment of the present invention. Terminal 103-i-j comprises receiver 301, processor 302, memory 303, speaker 304, microphone 305, and transmitter 306, interconnected as shown.
[0033] Receiver 301 is a circuit that is capable of receiving frames from the shared communications channel, in well-known fashion, and of forwarding them to processor 302. The frames include both data frames (e.g., for transmitting audio, etc.) and control frames (e.g., for transmitting distinctive tone information, etc.). It will be clear to those skilled in the art how to make and use receiver 301.
[0034] Processor 302 is a general-purpose processor that is capable of performing the tasks described below and with respect to Figures 6 and 7. It will be clear to those skilled in the art, after reading this specification, how to make and use processor 302.
[0035] Memory 303 is capable of storing programs and data used by processor 302. It will be clear to those skilled in the art how to make and use memory 303.
[0036] Speaker 304 is an acoustic transducer that is capable of accepting electrical signals from processor 302 and converting those signals to audio signals that the user of terminal 103-i-j can hear. It will be clear to those skilled in the art how to make and use speaker 304.
[0037] Microphone 305 is an acoustic transducer that is capable of accepting audio signals from the user of terminal 103-i-j and converting those signals to electrical signals transmitted to processor 302. It will be clear to those skilled in the art how to make and use microphone 305.
[0038] Transmitter 306 is a circuit that is capable of receiving frames (i.e., data and control frames) from processor 302, in well-known fashion, and of transmitting them on the shared communications channel. It will be clear to those skilled in the art how to make and use transmitter 306.
[0039] Terminal 103-i-j has a unique identifier and this identifier can be associated with the user of terminal 103-i-j. Other wireless devices can also be associated with the user of terminal 103-i-j. This association can be used to record the identities of the participants as each wireless device (e.g., terminal 103-i-j, etc.) makes its presence known.
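Paragraph [0039] replaces the paper attendance list with identities learned from terminal identifiers. A minimal registry along those lines might look like the following sketch; the class name, method names, and sample users are hypothetical, not part of the patent.

```python
class PresenceRegistry:
    """Maps terminal identifiers to users as each wireless device announces itself."""

    def __init__(self) -> None:
        self._user_by_terminal: dict[str, str] = {}

    def register(self, terminal_id: str, user: str) -> None:
        """Record that `user` is behind `terminal_id` (e.g., when the device joins)."""
        self._user_by_terminal[terminal_id] = user

    def participants(self) -> list[str]:
        """Everyone whose device has made its presence known."""
        return sorted(set(self._user_by_terminal.values()))

registry = PresenceRegistry()
registry.register("103-1-1", "Alice")
registry.register("103-1-2", "Bob")
registry.register("103-5-1", "Carol")
print(registry.participants())  # ['Alice', 'Bob', 'Carol']
```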
[0040] Figure 4 depicts a block diagram of the salient components of controller 105-i in accordance with the illustrative embodiment of the present invention. Controller 105-i comprises network interface 401, processor 402, memory 403, and operator interface 404, interconnected as shown.
[0041] Network interface 401 is a circuit that is capable of transmitting frames to telecommunications network 101 received from processor 402. Network interface 401 is also capable of receiving frames from telecommunications network 101 to send to processor 402. It will be clear to those skilled in the art how to make and use network interface 401.
[0042] Processor 402 is a general-purpose processor that is capable of performing the tasks described below and with respect to Figures 6 and 7. It will be clear to those skilled in the art, after reading this specification, how to make and use processor 402.
[0043] Memory 403 is capable of storing programs and data used by processor 402. It will be clear to those skilled in the art how to make and use memory 403.
[0044] Operator interface 404 is capable of accepting commands from an attendant or a conference call participant. Operator interface 404 is also capable of displaying information (e.g., status, etc.) and of representing that information to the attendant or participant. It will be clear to those skilled in the art how to make and use operator interface 404.
[0045] The participants or attendant or both can initiate calls on conferencing system 100 by using controller 105-i. As an example, a participant can initiate a call by using a companion device (e.g., a wireless tablet computer that is present in a conferencing area, etc.) and an interface tied to a personnel directory or some other database of people. An application program interface delivered through the Internet and wirelessly over a shared communications channel can serve as the means to initiate calls. Controller 105-i can take the input transmitted from the companion device and map the input into dialing instructions that are understandable by switches present in telecommunications network 101.
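The mapping from a companion-device selection to dialing instructions in paragraph [0045] could be as simple as a directory lookup followed by translation into a dial string. The directory contents and the plain-digit "instruction" format below are assumptions; a real switch interface would define its own command format.

```python
# Hypothetical personnel directory available to the companion device.
DIRECTORY = {
    "Carol (remote office)": "+1-555-0142",
}

def to_dialing_instructions(selection: str) -> str:
    """Translate a directory selection into a dial string; here the 'instruction'
    is simply the digits of the stored number."""
    number = DIRECTORY[selection]
    return "".join(ch for ch in number if ch.isdigit())

print(to_dialing_instructions("Carol (remote office)"))  # '15550142'
```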
[0046] Figure 5 depicts a block diagram of the salient components of mixer 106-i in accordance with the illustrative embodiment of the present invention. Mixer 106-i comprises network interface 501, processor 502, memory 503, and operator interface 504, interconnected as shown.
[0047] Network interface 501 is a circuit that is capable of transmitting frames to telecommunications network 101 received from processor 502. Network interface 501 is also capable of receiving frames from telecommunications network 101 to send to processor 502. It will be clear to those skilled in the art how to make and use network interface 501.
[0048] Processor 502 is a general-purpose processor that is capable of performing the tasks described below and with respect to Figures 6 and 7. It will be clear to those skilled in the art, after reading this specification, how to make and use processor 502.
[0049] Memory 503 is capable of storing programs and data used by processor 502. It will be clear to those skilled in the art how to make and use memory 503.
[0050] Operator interface 504 is capable of accepting commands from an attendant or a conference call participant. Operator interface 504 is also capable of displaying information (e.g., status, etc.) and of representing that information to the attendant or participant. It will be clear to those skilled in the art how to make and use operator interface 504.
[0051] Mixer 106-i can be used to adjust volume levels of the various audio signals. The volume levels can be controlled through an automatic gain control mechanism associated with mixer 106-i, or each participant can adjust the volume of each speaking participant manually to a level that is correct for that listening participant.
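One way to realize the per-participant volume control of paragraph [0051] is a gain table keyed by (listener, speaker), updated either by an automatic gain control loop or by explicit participant commands. The table contents and function names below are assumptions, not the patent's interfaces.

```python
# gains[listener][speaker] = linear gain applied to that speaker's audio in the
# composite sent to that listener.
gains: dict[str, dict[str, float]] = {
    "103-1-1": {"103-5-1": 1.0},
    "103-5-1": {"103-1-1": 0.8, "103-1-2": 1.2},
}

def adjust_volume(listener: str, speaker: str, gain: float) -> None:
    """Command a listening participant (or an AGC loop) would send to the mixer."""
    gains.setdefault(listener, {})[speaker] = gain

def scale(samples: list[int], gain: float) -> list[int]:
    """Apply a linear gain to PCM samples (no clipping protection in this sketch)."""
    return [int(s * gain) for s in samples]

adjust_volume("103-1-1", "103-5-1", 0.5)  # the listener turns the remote speaker down
print(scale([1000, -2000, 3000], gains["103-1-1"]["103-5-1"]))  # [500, -1000, 1500]
```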
[0052] Figure 6 depicts an alternative block diagram of conferencing system 100, in which the interconnections between exemplary terminal 103-1-1, terminal 103-1-2, terminal 103-5-1, and mixer 106-1 are shown so as to highlight the streams of data (e.g., audio, etc.) flowing between each element depicted. Other elements present in conferencing system 100 are omitted from Figure 6 to emphasize the flows between each element depicted. Terminals 103-1-1 and 103-1-2 are associated with conference call participants who are both present in a first conference room. Terminal 103-5-1 is associated with a conference call participant who is present in a second conference room.
[0053] Terminal 103-1-1 transmits across path 601-1 a first series of frames that represents a first audio signal. The first audio signal can represent the voice of the user of terminal 103-1-1. Terminal 103-1-2 transmits across path 601-2 a second series of frames that represents a second audio signal. The second audio signal can represent the voice of the user of terminal 103-1-2. Terminal 103-5-1 transmits across path 601-3 a third series of frames that represents a third audio signal. The third audio signal can represent the voice of the user of terminal 103-5-1.
[0054] Mixer 106-1 receives all three series of frames, as well as possibly other series of frames representing other audio signals. Furthermore, mixer 106-1 can receive control frames, either from the terminals depicted or from other sources (e.g., controller 105-i, etc.) throughout conferencing system 100.
[0055] Mixer 106-1 forms a fourth series of frames that represents a composite comprising at least one of the first audio signal, the second audio signal, and the third audio signal. Mixer 106-1 transmits the fourth series of frames across path 602-1. Terminal 103-1-1 receives the fourth series of frames.
[0056] Mixer 106-1 forms a fifth series of frames that represents a composite comprising at least one of the first audio signal, the second audio signal, and the third audio signal. Mixer 106-1 transmits the fifth series of frames across path 602-2. Terminal 103-1-2 receives the fifth series of frames.
[0057] Mixer 106-1 forms a sixth series of frames that represents a composite signal comprising at least one of the first audio signal, the second audio signal, and the third audio signal. Mixer 106-1 transmits the sixth series of frames across path 602-3. Terminal 103-5-1 receives the sixth series of frames.
[0058] Mixer 106-1 knows that terminal 103-1-1, terminal 103-1-2, and terminal 103-5-1 are participating on the conference call because the three terminals transmit terminal identifiers to mixer 106-1. The identifiers can be sent when a terminal joins a call, sent periodically, sent in a control frame, sent as part of a data frame, or sent in other ways. The identifiers can be sent directly to mixer 106-1 or through another system (e.g., controller 105-i, etc.). It will be clear to those skilled in the art how to determine the participants involved in a conference call.
[0059] Furthermore, mixer 106-1 knows where the three terminals are located relative to each other for determining collocation status. This can be determined by noting the access point through which a given terminal is participating on the conference call, or it can be determined through some other means. The access point identifier, or equivalent, can be provided to mixer 106-1. For remote terminal 104-i, a conventional handset, it can be assumed that the user of remote terminal 104-i has to receive the audio signals from all of the other participants, since the collocation status of remote terminal 104-i might be indeterminate.
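A collocation test based on access-point identifiers, as described in paragraphs [0058] and [0059], can be sketched as follows. The dictionary contents mirror the Figure 6 example, and a remote handset with no access point is treated as collocated with no one; the function and variable names are assumptions.

```python
# Access point through which each terminal joined the call; a conventional remote
# handset has no access point and is modeled as None.
access_point_of = {
    "103-1-1": "102-1",
    "103-1-2": "102-1",
    "103-5-1": "102-5",
    "104-1": None,
}

def collocated(a: str, b: str) -> bool:
    """Terminals count as collocated when they reach the call through the same
    access point; an indeterminate (None) location is never collocated."""
    ap_a, ap_b = access_point_of.get(a), access_point_of.get(b)
    return ap_a is not None and ap_a == ap_b

print(collocated("103-1-1", "103-1-2"))  # True  (same conference room)
print(collocated("103-1-1", "103-5-1"))  # False (different rooms)
print(collocated("104-1", "103-1-1"))    # False (remote handset must hear everyone)
```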
[0060] A participant or attendant associated with a conference call can adjust the volume levels of the individual audio signals that are represented in a series of frames transmitted by mixer 106-1. For example, the user of terminal 103-1-1 can adjust the individual levels of the audio signals represented in the fourth series, which is being received by terminal 103-1-1. The user adjusts the levels by sending a command to mixer 106-1, either directly or through another system (e.g., controller 105-i, etc.).
[0061] Mixer 106-1 can select which input audio signals are included in the composite signal represented by an output series of frames (i.e., frames transmitted by mixer 106-1). For instance, since the users of terminals 103-1-1 and 103-1-2 are known to be in the same conference room and can hear each other without the use of headsets, the series of frames being provided to terminal 103-1-1 does not have to include the audio signal from terminal 103-1-2, or the audio signal from terminal 103-1-1, for that matter. Mixer 106-1 only includes the audio signal from terminal 103-5-1 in what is transmitted to terminals 103-1-1 and 103-1-2, since the user of terminal 103-5-1 is in a separate conference area. In other words, the mixing of the audio signals into each output series of frames is dependent on the location of one source (e.g., terminal 103-1-1, etc.) with respect to another source (e.g., terminal 103-1-2, terminal 103-5-1, etc.).
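Putting the collocation rule to work, the selection step of paragraph [0061] reduces to: for each listener, include only sources that are neither the listener itself nor collocated with the listener. The self-contained sketch below (reusing the access-point idea from the earlier example) reproduces the Figure 6 behaviour; it is an illustration, not the patent's implementation.

```python
def select_sources(listener: str, sources: list[str],
                   access_point_of: dict[str, str | None]) -> list[str]:
    """Inputs to mix into the composite for `listener`: every source that is not the
    listener itself and not collocated with it (same access point). A listener with
    an unknown access point (None) receives all other sources."""
    ap = access_point_of.get(listener)
    return [s for s in sources
            if s != listener and (ap is None or access_point_of.get(s) != ap)]

aps = {"103-1-1": "102-1", "103-1-2": "102-1", "103-5-1": "102-5"}
terminals = ["103-1-1", "103-1-2", "103-5-1"]
print(select_sources("103-1-1", terminals, aps))  # ['103-5-1']
print(select_sources("103-5-1", terminals, aps))  # ['103-1-1', '103-1-2']
```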
[0062] At the start of a conference call, the users of terminals 103-1-1 and 103-1-2 are waiting in a conference room for other participants to join. When the user of terminal 103-5-1 joins the call, terminal 103-5-1 sends a terminal identifier (i.e., via a control frame or in the header of a data frame) to the network. Mixer 106-1 is then directed to mix into the composite audio signals being transmitted to terminals 103-1-1 and 103-1-2 an indication of the participant who joined the call. The indication can be at a whisper level of volume, and it can be in a different tone than the audio of the person speaking (e.g., a female voice if a male is speaking, etc.).
[0063] When a user of a terminal begins to speak, the other users who are not collocated in the same area can receive information on the person speaking. For example, suppose the user of terminal 103-1-1 begins to speak. Either terminal 103-1-1 can transmit the start-of-speech event to mixer 106-1 or mixer 106-1 can detect the event. It will be clear to those skilled in the art how to recognize that there is voice activity at a particular terminal. Mixer 106-1 determines which terminals are not collocated with respect to terminal 103-1-1; in this case, terminal 103-5-1. Mixer 106-1 mixes in an audio signal containing information on the user of terminal 103-1-1 (e.g., the name of the person, etc.). The indication can be at a whisper level of volume, and it can be in a different tone than the audio of the person speaking (e.g., a female voice if a male is speaking, etc.).
[0064] When an outside party (i.e., a person that is not part of the conference call) has to interrupt a participant, the outside party can do so without interrupting the other participants. For example, suppose the user of terminal 103-1-2 has made prior arrangements with a secretary to have the secretary interrupt the user if a particular event occurs (e.g., an important person calls, a piece of news relevant to the conference call is learned, etc.). The secretary can command mixer 106-1 to mix in an audio signal into the frames being received by terminal 103-1-2 to bridge in either a message to call the secretary or the actual information that the secretary wishes to convey to the user.
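The whisper-level announcements of paragraphs [0062] and [0063] and the private interruption of paragraph [0064] are both cases of mixing extra audio into selected outputs only. The sketch below is an assumption about how such targeted bridging could be expressed; the gain values and placeholder samples are illustrative.

```python
def mix_private_audio(outputs: dict[str, list], targets: list[str],
                      audio: list[int], gain: float) -> None:
    """Append `audio` at `gain` to the pending mix of each target terminal only,
    leaving every other participant's composite untouched."""
    for t in targets:
        outputs[t].append((audio, gain))

# Pending per-listener mixes: lists of (samples, gain) pairs awaiting summation.
outputs = {"103-1-1": [], "103-1-2": [], "103-5-1": []}

# Whisper-level identification of the participant who just joined or started
# speaking, sent only to listeners not collocated with that participant.
name_announcement = [0] * 160  # placeholder samples for a synthesized name
mix_private_audio(outputs, targets=["103-1-1", "103-1-2"], audio=name_announcement, gain=0.3)

# A secretary's message bridged to a single participant (terminal 103-1-2).
interruption = [0] * 160       # placeholder samples for the bridged message
mix_private_audio(outputs, targets=["103-1-2"], audio=interruption, gain=1.0)
```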
[0065] Mixer 106-i can create subconferences, sidebar conversations, and whispering effects. Examples of audio messages that can be bridged in include alerts and interruptions. Alternatively, controller 105-i can handle the functionality associated with mixer 106-i or work in concert with mixer 106-i.
[0066] Figure 7 depicts a flowchart of the salient tasks performed by the illustrative embodiment of the present invention. It will be clear to those skilled in the art which tasks depicted in Figure 7 can be performed simultaneously or in a different order than that depicted.
[0067] At task 701, mixer 106-i receives a first series of frames that represents a first audio signal from a first source. The first source can be a wireless headset that communicates in accordance with the Bluetooth protocol or another air interface protocol.
[0068] At task 702, mixer 106-i receives a second series of frames that represents a second audio signal from a second source. The second source can be a wireless headset that communicates in accordance with the Bluetooth protocol or another air interface protocol.
[0069] At task 703, mixer 106-i forms a third series of frames that represents a composite comprising at least one of the first audio signal and the second audio signal. The composite is based on the location of the first source relative to the second source. In some embodiments, the individual levels of the first audio signal and the second audio signal as represented in the third series of frames are adjustable remotely.
[0070] At task 704, mixer 106-i transmits the third series of frames.
[0071] It is to be understood that the above-described embodiments are merely illustrative of the present invention and that many variations of the above-described embodiments can be devised by those skilled in the art without departing from the scope of the invention. It is therefore intended that such variations be included within the scope of the following claims and their equivalents.
[0072] What is claimed is:

Claims

1. A method comprising: receiving a first series of frames that represents a first audio signal from a first source; receiving a second series of frames that represents a second audio signal from a second source; forming a third series of frames that represents a composite comprising at least one of said first audio signal and said second audio signal, wherein said composite is based on the location of said first source relative to said second source; and transmitting said third series of frames.
2. The method of claim 1 further comprising: receiving a fourth series of frames that represents a third audio signal from a third source; and forming a fifth series of frames that represents a composite comprising at least one of said first audio signal, said second audio signal, and said third audio signal, wherein said composite is based on the location of said first source relative to said second source and wherein said fourth series of frames identifies said second source.
3. The method of claim 2 wherein said fourth series of frames is based on the voice activity of the user of said second source.
4. The method of claim 2 wherein said fourth series of frames is based on the user of said second source joining a conference call.
5. The method of claim 1 wherein said first source and said second source are wireless headsets that communicate in accordance with the Bluetooth protocol.
6. The method of claim 1 wherein the individual levels of said first audio signal and said second audio signal as represented in said third series of frames are adjustable remotely.
7. An apparatus comprising: a network interface for:
(1) receiving a first series of frames that represents a first audio signal from a first source;
(2) receiving a second series of frames that represents a second audio signal from a second source; and (3) transmitting a third series of frames; and a processor for forming said third series of frames that represents a composite comprising at least one of said first audio signal and said second audio signal, wherein said composite is based on the location of said first source relative to said second source.
8. The apparatus of claim 7 wherein: said network interface is also for receiving a fourth series of frames that represents a third audio signal; and said processor is also for forming a fifth series of frames that represents a composite comprising at least one of said first audio signal, said second audio signal, and said third audio signal, wherein said composite is based on the location of said first source relative to said second source and wherein said fourth series of frames identifies said second source.
9. The apparatus of claim 8 wherein said fourth series of frames is based on the voice activity of the user of said second source.
10. The apparatus of claim 8 wherein said fourth series of frames is based on the user of said second source joining a conference call.
11. The apparatus of claim 7 wherein said first source and said second source are wireless headsets that communicate in accordance with the Bluetooth protocol.
12. The apparatus of claim 7 wherein the individual levels of said first audio signal and said second audio signal as represented in said third series of frames are adjustable remotely.
13. The apparatus of claim 7 further comprising an access point for interconnecting said first source with said network interface.
14. An apparatus comprising: a mixer for:
(1) receiving a first series of frames that represents a first audio signal from a first terminal;
(2) receiving a second series of frames that represents a second audio signal from a second terminal;
(3) forming a third series of frames that represents a composite comprising at least one of said first audio signal and said second audio signal, wherein said composite is based on the location of said first terminal relative to said second terminal; and
(4) transmitting said third series of frames;
said first terminal for:
(1) transmitting said first series of frames; and
(2) receiving said third series of frames; and
said second terminal for transmitting said second series of frames.
15. The apparatus of claim 14 wherein said mixer is also for:
(5) receiving a fourth series of frames that represents a third audio signal; and
(6) forming a fifth series of frames that represents a composite comprising at least one of said first audio signal, said second audio signal, and said third audio signal, wherein said composite is based on the location of said first terminal relative to said second terminal and wherein said fourth series of frames identifies the user of said second terminal.
16. The apparatus of claim 15 wherein said fourth series of frames is based on the voice activity of the user of said second terminal.
17. The apparatus of claim 15 wherein said fourth series of frames is based on the user of said second terminal joining a conference call.
18. The apparatus of claim 14 wherein said first terminal and said second terminal are wireless headsets that communicate in accordance with the Bluetooth protocol.
19. The apparatus of claim 14 wherein the individual levels of said first audio signal and said second audio signal as represented in said third series of frames are adjustable remotely.
20. The apparatus of claim 14 further comprising a controller for providing information on conference call participants and on the participant who is currently talking.
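Purely as an illustrative aside, the sketch below shows one hypothetical way frames that identify a talker could be generated, as contemplated by claims 3-4 and 16-17 (identification based on voice activity or on the user joining the conference call). It illustrates only the identification aspect of the recited "fourth series of frames"; the JSON layout, the RMS threshold, and every function name are assumptions and do not come from the claims or the specification.

# Hypothetical identification frames; field names and the energy-based
# voice-activity detector are assumptions, not the claimed format.
import json
import time

VOICE_THRESHOLD = 500.0  # assumed RMS level treated as voice activity


def rms(frame):
    # Root-mean-square energy of one frame of PCM samples.
    return (sum(s * s for s in frame) / max(1, len(frame))) ** 0.5


def identification_frame(source_id, reason):
    # One identification frame: names the source and records why it was generated.
    return json.dumps({"source": source_id,
                       "reason": reason,
                       "timestamp": time.time()}).encode()


def identify_on_voice_activity(source_id, audio_frames):
    # Cf. claims 3 and 16: emit an identification frame for each frame that crosses the threshold.
    for frame in audio_frames:
        if rms(frame) > VOICE_THRESHOLD:
            yield identification_frame(source_id, "voice-activity")


def identify_on_join(source_id):
    # Cf. claims 4 and 17: emit a single identification frame when the user joins the call.
    yield identification_frame(source_id, "joined-conference")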
PCT/US2003/014120 2002-05-06 2003-05-06 Wireless conferencing system WO2003094383A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/672,319 US7539486B2 (en) 2002-05-06 2003-09-26 Wireless teleconferencing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US37819302P 2002-05-06 2002-05-06
US60/378,193 2002-05-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/672,319 Continuation-In-Part US7539486B2 (en) 2002-05-06 2003-09-26 Wireless teleconferencing system

Publications (1)

Publication Number Publication Date
WO2003094383A1 WO2003094383A1 (en) 2003-11-13

Family

ID=29401592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/014120 WO2003094383A1 (en) 2002-05-06 2003-05-06 Wireless conferencing system

Country Status (1)

Country Link
WO (1) WO2003094383A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5533112A (en) * 1994-03-31 1996-07-02 Intel Corporation Volume control in digital teleconferencing
US5483528A (en) * 1994-10-11 1996-01-09 Telex Communications, Inc. TDM digital matrix intercom system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7539486B2 (en) 2002-05-06 2009-05-26 Avaya Inc. Wireless teleconferencing system
EP1519553A1 (en) * 2003-09-26 2005-03-30 Avaya Technology Corp. Wireless teleconferencing system
WO2005083936A1 (en) * 2004-02-26 2005-09-09 Sennheiser Electronic Gmbh & Co. Kg Stations that are connected to a conference system via external network units
EP1868362A1 (en) * 2006-06-15 2007-12-19 Avaya Technology Llc Method for coordinating co-resident teleconferencing endpoints to avoid feedback
US7876890B2 (en) 2006-06-15 2011-01-25 Avaya Inc. Method for coordinating co-resident teleconferencing endpoints to avoid feedback

Similar Documents

Publication Publication Date Title
CA2482273C (en) Wireless teleconferencing system
US7742587B2 (en) Telecommunications and conference calling device, system and method
US7023821B2 (en) Voice over IP portable transreceiver
US20210051188A1 (en) Instant communications system having established communication channels between communication devices
EP2098034B1 (en) System and method for configuration of teleconferencing based on proximity information
US20220321631A1 (en) Instant communications system having established communication channels between communication devices
US7973857B2 (en) Teleconference group formation using context information
US20040239754A1 (en) Systems and methods for videoconference and/or data collaboration initiation
US8265240B2 (en) Selectively-expandable speakerphone system and method
EP2991325A1 (en) Remote conference realizing method and apparatus
EP2755368B1 (en) Teleconferencing system comprising Master Communication Device for mixing audio and connecting to neighbouring devices
CN101674382B (en) Notification of dropped audio in a teleconference call
KR20110070507A (en) Method and system of one-to-one and group communication simultaneously in wireless ip network
WO2003094383A1 (en) Wireless conferencing system
KR100724928B1 (en) Device and method of informing communication using push to talk scheme in mobile communication terminal
CN115699719A (en) AC system
JP2001094604A (en) Multimedia information communication system and computer system
KR100630125B1 (en) Method for mediating call of push to talk
KR20030073965A (en) Service method of multimedia chatting to mobile phone
CN103516919A (en) Method, device and terminal for sending audio data
KR20040052172A (en) Management method for dynamic conference call of IP-PABX
JPH066470A (en) Private branch exchange telephone system
JP2017054193A (en) Information processing device, information processing system, program, and recording medium
JPH01300754A (en) Conference telephone terminal equipment

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 10672319

Country of ref document: US

AK Designated states

Kind code of ref document: A1

Designated state(s): CA US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase