WO2001078443A2 - Hands-free earset communication system - Google Patents

Hands-free earset communication system

Info

Publication number
WO2001078443A2
Authority
WO
WIPO (PCT)
Prior art keywords
earset
microprocessor
command
based appliance
voice
Prior art date
Application number
PCT/US2001/011069
Other languages
English (en)
Other versions
WO2001078443A3 (fr)
Inventor
Thomas R. Pirelli
Sanjeev D. Patel
Marc W. Cygnus
Joseph J. Schmid
Michael E. Wagener
Original Assignee
Arialphone, Llc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arialphone, Llc. filed Critical Arialphone, Llc.
Priority to DE10191530T priority Critical patent/DE10191530T1/de
Priority to AU2001253166A priority patent/AU2001253166A1/en
Priority to GB0129150A priority patent/GB2370181A/en
Publication of WO2001078443A2 publication Critical patent/WO2001078443A2/fr
Publication of WO2001078443A3 publication Critical patent/WO2001078443A3/fr

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/6058Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827Portable transceivers
    • H04B1/385Transceivers carried on the body, e.g. in helmets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/04Supports for telephone transmitters or receivers
    • H04M1/05Supports for telephone transmitters or receivers specially adapted for use on head, throat or breast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/253Telephone sets using digital voice transmission
    • H04M1/2535Telephone sets using digital voice transmission adapted for voice communication over an Internet Protocol [IP] network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/26Devices for calling a subscriber
    • H04M1/27Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/271Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725Cordless telephones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827Portable transceivers
    • H04B1/385Transceivers carried on the body, e.g. in helmets
    • H04B2001/3866Transceivers carried on the body, e.g. in helmets carried on the head
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/02Details of telephonic subscriber devices including a Bluetooth interface
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1025Accumulators or arrangements for charging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1058Manufacture or assembly
    • H04R1/1066Constructional aspects of the interconnection between earpiece and earpiece support
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups
    • H04R2201/107Monophonic and stereophonic headphones with microphone for two-way hands free communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones

Definitions

  • the present invention relates to an earset communication system.
  • the earset communication system includes a hands-free earset for use in Voice over Network (VoN) communication, voice dictation, control of a computer, and/or voice control of a number of additional functions (e.g., home entertainment and home automation).
  • Internet telephony may be combined with other modes of communication, such as video conferencing and data or application sharing, giving a user tremendous power to communicate with others, worldwide, at a fraction of the cost of conventional telephone systems.
  • Packet-switched networks are generally more cost efficient than circuit-switched networks because they require no call set-up time (resulting in faster delivery of traffic) and because users can efficiently share the same channel (resulting in lower cost).
  • VoIP is used herein to refer to a specific form of VoN transmission: voice communication over packet-switched networks using the Internet Protocol.
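As a loose illustration of what voice over a packet-switched network means at the transport level, the sketch below (not part of the patent) wraps each audio frame in a minimal sequence-number header so that every frame can travel as an independent IP datagram. The header layout, frame size, and function names are illustrative assumptions, not the patent's protocol.

```python
import struct

FRAME_SAMPLES = 160  # e.g. 20 ms of 8 kHz audio per packet (assumed)

def packetize(frames):
    """Wrap each audio frame in a minimal header (sequence number plus
    sample timestamp, loosely RTP-like) so each frame can be sent as an
    independent UDP/IP datagram over a packet-switched network."""
    packets = []
    for seq, frame in enumerate(frames):
        # "!HI" = network byte order, 16-bit sequence, 32-bit timestamp
        header = struct.pack("!HI", seq, seq * FRAME_SAMPLES)
        packets.append(header + frame)
    return packets
```

Because each packet is self-describing, the receiver can reorder or discard late frames, which is what lets many users efficiently share one channel, as the preceding bullet notes.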
  • VoIP is currently the most common implementation for VoN in the consumer market and is yet another selection available to a user for communicating more efficiently than ever before.
  • increasing the number of communication media also increases the complexity of communicating, because the user must decide which medium will be used and then interface with the appropriate device.
  • although an office user has a wide variety of communication media to select from, the different systems typically each require their own user interface, resulting in increased complexity and ergonomic problems.
  • an office user's work space must also provide a substantial amount of space for myriad devices, including for example a telephone-speakerphone, a computer keyboard, a mouse, a monitor, speakers and a microphone, a camera for voice and video over IP applications, and perhaps a personal digital assistant ("PDA"). Electrical connections are required for each product, also creating messy cable nests. Additional office communication products may also be required or desired, such as cellular telephones, pagers, printers, scanners, dictation machines and personal organizers, further increasing ergonomic problems by increasing the options presented to the user, and further reducing valuable desk space as well.
  • the net effect of having multiple user-machine interfaces may actually result in reduced efficiency and productivity for the office user.
  • a conventional telephone-speakerphone is largely redundant hardware for users with cordless telephones, cellular telephones and/or computer-based telephony devices.
  • the telephone keypad and display are redundant with the computer keyboard and monitor, since these functions may be combined to simplify the user-machine interface and reduce the required desk-top space.
  • Existing devices also require the user to operate and maintain multiple terminal devices, further contributing to ergonomic inefficiency.
  • a typical modern office has poor ergonomics due to the incompatibility between these multiple devices and multiple interfaces, thereby requiring a user to learn to use and maintain each of them effectively.
  • Office communication device manufacturers have attempted to improve the user-machine interface by developing hands-free products and wireless communication systems in order to eliminate handsets and to promote freedom of motion. Although hands-free devices freed the user from having to hold a handset, these devices were limited to merely being extensions of a telephone handset. A user still has to manually control the communication device, whether it is a telephone, answering machine, fax machine or a computer for sending email.
  • Conventional cordless telephones utilize an RF link to provide wireless communication between the handset and the base station. However, conventional cordless telephones are limited to establishing a wireless link between the handset and the base station with manual control interfaces.
  • Voice recognition systems were developed in order to convert speech into text based on the recognition of spoken words. For example, through the use of speech recognition software, a user does not have to use the computer keyboard in order to type text. Speech may be processed through a recognition algorithm, resulting in the recognition of the word and the representation of the word as text on a computer display. These systems, however, have been largely limited to word processing applications.
  • An object of the invention is to improve the efficiency and productivity of a home or office user.
  • Productivity is improved by replacing existing conventional user-machine interfaces with a single convenient user-machine interface that is transparent to the user and responsive predominantly to voice commands.
  • the invention improves productivity by consolidating the functions of numerous communication interfaces into a single, hands-free, wireless communication interface. This also provides the advantage of eliminating training of the user to operate these various devices.
  • the user is provided with a hands-free, wireless earset that operates as a communication interface.
  • the earset may provide control and/or communication functionality in accordance with voice commands issued by the user.
  • VoN communication is enabled without requiring a manual control interface.
  • VoN communication includes VoIP.
  • Another aspect of the invention is to provide an earset communication system that provides control functionality using speech recognition software running on a microprocessor-based appliance.
  • the earset is coupled by air interface to a base station, which is capable of connecting to the microprocessor-based appliance (e.g., a PC, handheld computer, PDA, set top box, cable modem, and the like).
  • the base station connects directly to a PC (personal computer) and uses software running on the PC.
  • the earset has the potential to control network functions, such as Internet connectivity, home entertainment functions (such as home TV, DVD, audio and/or video systems and the like), home automation functions and the like, when connected to the microprocessor-based appliance.
  • the system includes an earset communicator and a base station that preferably allows wireless communication between these elements.
  • the earset communicator allows hands-free and wireless operation of the communication system, thereby completely freeing the user from being confined to the desktop.
  • the base station operates with voice recognition and control programs in a controller, giving the user simple, fast and complete control of every communication capability through the controller, including for example control of telephony and data. Therefore, one embodiment of the invention combines the communication power and flexibility of a controller-communication system with control functionality via simple voice commands.
  • the earset communication system may be used for communication via Internet telephony or VoIP, voice browsing of the Internet, voice dialing and control management, voice dictation, PSTN telephony, and/or home control functions.
  • the system allows the user to access files, review paperwork, work on the computer, and handle other office or home related activities without being tied to the desk because the earset has no tethering wires.
  • the earset is a lightweight battery powered device having a noise canceling microphone or microphone array.
  • This earset has an advantage over existing headsets with long boom microphones because the microphone is located outside the user's field of vision so that the user can work without distraction, converse face-to-face, and even drink a beverage while using the phone or PC without having to move or remove the earset.
  • the earset preferably allows the user to converse on a call, command other electronic devices, use both hands to type or perform other functions, get up and move around the entire home or small office, all without the need to detach wires, remove the earset or carry around a cordless telephone.
  • the system automatically dials stored telephone numbers based on a voice command. Thus, no phone numbers need to be remembered by the user, and no digits are required to be manually dialed.
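A minimal sketch of such voice dialing, assuming a hypothetical stored directory keyed by name and location. All names, numbers, and the command grammar below are invented for illustration; the patent does not specify them.

```python
# Hypothetical directory of stored numbers; entries are illustrative only.
DIRECTORY = {
    ("tom", "office"): "18475550100",
    ("tom", "mobile"): "18475550101",
    ("sanjeev", "office"): "16505550102",
}

def dial_by_voice(recognized_words):
    """Map a recognized utterance such as 'call tom mobile' to a stored
    number. The speech recognizer and the actual dialer are outside
    this sketch; this only shows the name-to-number lookup."""
    words = [w.lower() for w in recognized_words]
    if not words or words[0] != "call":
        return None  # not a dialing command
    name = words[1] if len(words) > 1 else None
    # If no location is spoken, assume a default location (an assumption
    # of this sketch, not a feature stated in the patent).
    location = words[2] if len(words) > 2 else "office"
    return DIRECTORY.get((name, location))
```

With a directory lookup like this, the user never needs to remember or manually dial digits, matching the behavior described above.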
  • the earset communicator interface provides tremendous advantages while in an automobile or involved in other activity that requires the use of both hands.
  • a user may operate a mobile telephone, computer or other peripheral device in a hands-free mode.
  • the earset device communicates with a PDA, which is in turn connected to a network that is capable of supporting voice communication.
  • Another embodiment of the earset communicator system utilizes the base station to communicate with both the earset communicator and the wireless telephone network.
  • voice commands are used for all control functions.
  • Voice recognition software allows the user to interact with, for example, a computer via spoken commands to initiate a VoIP call.
  • the system uses voice recognition for control functions such as placing phone calls and answering phone calls.
  • Voice recognition software may also be utilized in conjunction with commands received through the earset to perform other functions, such as checking schedules and appointments; controlling functions for audio, video, lighting, HVAC (Heating, Ventilation, Air Conditioning), motorized windows and doors, etc.; voice browsing of the Internet; voice dictation; and integration with existing third-party software to create unique vertical applications.
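The routing of recognized commands to these different functions can be pictured as a simple dispatch table. The sketch below is a hypothetical illustration of that idea; the phrases, handler names, and return values are invented and do not come from the patent.

```python
# Placeholder handlers standing in for the telephony, scheduling, and
# home-automation subsystems (hypothetical names).
def check_schedule():
    return "schedule"

def place_call():
    return "call"

def lights_on():
    return "lights"

# Hypothetical command grammar: the leading words of a recognized
# utterance select a handler.
COMMANDS = {
    "check schedule": check_schedule,
    "place call": place_call,
    "lights on": lights_on,
}

def dispatch(utterance):
    """Route a recognized utterance to the matching handler."""
    for phrase, handler in COMMANDS.items():
        if utterance.lower().startswith(phrase):
            return handler()
    return None  # unrecognized; a real system would prompt the user
```

A table-driven dispatcher like this is one plausible way a single voice interface could front many otherwise separate device interfaces, which is the consolidation the patent emphasizes.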
  • the earset preferably rests comfortably on the user's ear and is held in place by an earhook.
  • a transceiver base station communicates with the earset via a wireless link. In accordance with a preferred embodiment, the earset communicator is extremely lightweight (approximately 28 grams, or 1 ounce) so that it may comfortably be supported entirely by the user's ear, without the need for an over-the-head band.
  • Figure 1 illustrates the earset
  • Figure 3 illustrates the inner side of the earset
  • Figures 4A & 4B illustrate two rear-view embodiments of the earset
  • Figure 5A is a functional block diagram of one embodiment of the earset communication system that illustrates audio flow information, and Figure 5B further illustrates a plurality of types of the network interface shown in Figure 5A;
  • Figure 6 is a functional block diagram illustrating a VoN audio interface in the microprocessor-based appliance shown in Figures 5A and 5B;
  • Figure 7A illustrates a block diagram of a preferred embodiment of the earset communication system including hardware and software system components
  • Figure 7B illustrates an alternative embodiment of the earset communication system in which the network interface is incorporated into the base station
  • Figure 8A illustrates the analog-to-digital and digital-to-analog conversions in the base station portion of Figure 7A, and Figure 8B illustrates an alternative embodiment of that portion of the base station eliminating the analog portion of the path;
  • Figure 9 illustrates generalized software flow for handling the issuance of a command by the user
  • Figure 10 illustrates the software flow for making a call
  • Figure 11A illustrates the software flow for making a call where the user provides the called party's name and location
  • Figure 11B illustrates the software flow for making a call where the user provides only the called party's name
  • Figure 11C illustrates the software flow for requesting the voice agent
  • Figure 12 illustrates the software flow for retrieving schedule information
  • Figure 13 illustrates the software flow for control of home entertainment functions.
  • Figure 14 illustrates the generalized software flow diagram for home automation functions.
  • Figure 15 illustrates a system for utilizing the earset and base station in a Voice over Network implementation.
  • the earset communication system includes three main components: a wearable transceiver, hereinafter referred to as the earset or earset communicator 10; a transceiver base station 20 having an interface to a microprocessor-based appliance 30; and the microprocessor-based appliance 30 itself.
  • the microprocessor-based appliance 30 preferably includes at least one network interface, such as a network card or modem for access to the Internet or a corporate network and a PSTN telephony interface, for example a voice modem or similar device for access to the PSTN.
  • the microprocessor-based appliance 30 preferably utilizes voice recognition software and communication software modules to interface with a communication medium.
  • the microprocessor-based appliance 30 may be, for example, a personal computer, a server, a PDA, a set top box, a cable modem, a handheld computer, or a web browsing kiosk. Media devices and other household controllers are often processor controlled, and therefore are capable of being integrated into the earset communication system.
  • the microprocessor-based appliance 30 may utilize any type of computer architecture including conventional microprocessors and neural networked processors.
  • the network interface 300 provided by the microprocessor-based appliance 30 couples the base station 20 to a network capable of supporting voice (e.g., the Internet, corporate intranets, corporate networks, the PSTN and the like). In accordance with a preferred embodiment, the network is a packet-switched network that supports VoIP, Voice over ATM, Voice over Frame Relay, Voice over cable, Voice over DSL, and the like.
  • the network may be a wired network, a wireless network, or a combination ofthe foregoing.
  • the network may be a local area network (LAN), but for communication applications will more typically be a wide area network (WAN), a combination of WANs, the Internet or the PSTN.
  • the microprocessor-based appliance 30 includes a read-only memory (ROM) structure, a random access memory (RAM) structure, associated data and address buses, and a port for coupling the microprocessor-based appliance 30 to the base station 20.
  • the port that couples the microprocessor-based appliance 30 to the base station 20 is a Universal Serial Bus ("USB") port.
  • USB Universal Serial Bus
  • the microprocessor-based appliance 30 is preferably a personal computer.
  • the microprocessor-based appliance 30 may alternatively be a handheld computer, a PDA, a set top box, a server, a cable modem, a web browsing kiosk or the like.
  • the phrase web browsing kiosk refers to an appliance, which includes the microprocessor-based appliance 30 structure recited above, or equivalents thereto, that is specifically adapted for browsing the Internet.
  • The Earset: The earset 10 preferably includes an audio transducer 53, a speaker 52 and a microphone 50, as shown in Figures 7A and 7B.
  • the audio transducer 53 may be used for ringing or other similar paging or notice type functions.
  • the audio transducer 53 is capable of generating a tone that is loud enough to notify the user of an incoming call, page or the like.
  • the speaker 52 may provide the notice-type functions of the audio transducer 53, although this is less preferable because volume limitations on the speaker 52 may prevent the user from hearing the ringing or paging tone when the earset 10 is not present on the user's ear.
  • the user may hear audio from a speakerphone (not shown) instead of an audio transducer 53 or speaker 52.
  • Figures 2 and 3 illustrate an exemplary form of the earset 10.
  • the earset 10 is designed to be worn comfortably on the user's ear.
  • the speaker 52 extends from the earset 10 and is configured to be inserted into the user's ear.
  • the speaker may be surrounded by gel and/or foam to improve the comfort and fit of the earset 10.
  • the earset 10 may be carried by the user.
  • the earset 10 is preferably extremely lightweight (approximately 30 grams, or 1 ounce) so that it may comfortably be supported entirely by the user's ear.
  • the earset 10 is supported upon the user's ear by an earhook 14, as shown in Figures 2 and 3.
  • the earhook 14 not only stabilizes the earset 10 on the user's head when worn on the ear but also orients the microphone 50 for reception of commands spoken by the user.
  • This earhook 14 may be connected to the earset device 10 via a thermoplastic ring which has notched detents for repeatable positioning.
  • the earhook 14 may be made out of a plastic or flexible wire so it can mold to fit each ear comfortably.
  • a lightweight earhook/speaker is plugged into a 2.5 mm jack, which is located between the optional battery charging and parking contacts shown in Figures 4A and 4B.
  • the microphone 50 is mounted in a cavity at an end of the earset 10 that is distal from the earhook 14.
  • the microphone 50 is housed in an adjustable mini-boom.
  • the microphone 50 housing is preferably acoustically insulated to minimize coupling of unwanted mechanical noise.
  • the microphone signal line is preferably electrically shielded to prevent the coupling of unwanted RF energy.
  • the use of the mini-boom, or equivalently the extension of the length of the earset toward the lip plane, is required for the high signal-to-noise ratio demanded by currently available voice recognition software. From the standpoint of the user, and for simplification of the mechanical design of the earset 10, it would be preferable to eliminate the mini-boom and instead simply mount the microphone 50 directly to the earset at a greater distance from the lip plane. It is envisioned that, as speech recognition software improves and the noise background therefore becomes less pertinent, the mini-boom may be eliminated from the earset.
  • the microphone 50 is preferably a miniature, passive noise canceling electret element with a cardioid response pattern.
  • the mini-boom is pivotally attached to the body of the earset 10 to allow the mini-boom to pivot away from the major axis of the earset 10.
  • the mini-boom may pivot up to approximately 20° away from the major axis.
  • In another embodiment, the single microphone 50 and mini-boom are eliminated and replaced by a microphone array with an associated DSP system that is programmed to reduce background noise and echoes. It is also envisioned that speech recognition software will in the future progress to the point where the noise cancellation techniques described above are not required.
  • the obverse of the microphone 50 may be ported to enhance passive noise cancellation. Either active or passive noise cancellation techniques may be used.
  • an array of microphones may be used with an adaptive combiner to select a weighted group of microphone signals to provide the lowest noise, and therefore the highest signal-to-noise ratio.
  • the speaker bud 114 shown in Figure 3 preferably extends from the body of the earset 10 and is covered by an acoustically permeable foam cap, which acts as a cushion to prevent the convex covering of the speaker bud 114 from irritating the ear.
  • the speaker 52 is optimally capable of reproducing sound in the voice audio frequency band.
  • the convex shape allows it to self-seat, centering upon the ear canal (in the Concha), with minimal to no adjustment, when placing the earset 10 upon the ear.
  • the earset 10 may be powered by a lightweight rechargeable battery 54, such as a Lithium-Ion Polymer battery, although other types of rechargeable batteries may alternatively be used. Without limiting the invention, a battery having the following characteristics is acceptable for the present application.
  • the weight of the battery may be approximately 7 grams or less.
  • the dimensions may be approximately a width of 20 mm by length of 50 mm by a depth of 5 mm.
  • the battery may have an approximate capacity of 250 mAH or more, and be capable of powering the earset for more than 2 hours.
  • the approximate battery voltage may be from 3.3 V to 4.1 V, with an approximate nominal voltage of 3.8 V. Future improvements in battery performance, including increased volumetric energy density and increased gravimetric energy density, may also be utilized.
  • the battery 54 may be encased in a plastic pack that is mounted on the side of the earset 10 from the back, as shown in Figures 4A and 4B.
  • the earset 10 includes battery charging/power contacts that are connected to the battery pack internally, i.e. through the earset, and the base station 20 includes mating contacts for charging the battery when the earset 10 is not in use.
  • the battery may be removed from the earset 10 for charging, such as in a charging stand that may be incorporated into the base station 20.
  • the battery 54 is preferably located as close to the ear as possible to keep the center of gravity of the earset 10 nearest the center of the ear, and to be positioned to balance the earset 10.
  • the earset communicator 10 may normally be in a "sleep," or inactive, status in which most of its systems and components are powered down.
  • the earset 10 also includes a set of parking contacts as illustrated in the alternative embodiment of Figures 4A and 4B.
  • an identification code, which is commonly associated with a radio transceiver chipset within the earset 10, is sent by the earset 10 to the base station 20.
  • the base station 20 becomes associated with a particular earset 10.
  • the base station 20 will communicate with the proper earset 10 even in an environment in which numerous earsets 10 are transmitting command signals.
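The pairing behavior described above amounts to the base station filtering incoming traffic by the registered identification code. The sketch below is an invented illustration of that filtering; the 16-bit code width, frame layout, and value are assumptions, not details from the patent.

```python
# Identification code registered during pairing (illustrative value).
PAIRED_ID = 0x3A7F

def accept_frame(frame):
    """Sketch of base-station filtering: only frames whose leading
    two-byte identification code matches the paired earset's code are
    passed on, so several earsets can transmit nearby without
    cross-talk. Returns the frame payload, or None if rejected."""
    if len(frame) < 2:
        return None  # malformed frame
    sender_id = int.from_bytes(frame[:2], "big")
    return frame[2:] if sender_id == PAIRED_ID else None
```

Filtering on a per-frame code like this is one straightforward way a base station could ignore command signals from other nearby earsets.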
  • the earset may be provided with a separate speaker/microphone which can be plugged into an optional 2.5 mm jack at the rear of the earset, as shown in Figure 4A.
  • the audio is diverted from the internal microphone 50 and speaker 52 to a connected wired speaker/microphone or speakerphone.
  • the user may then attach the earset to his or her shirt, or wear it on a lanyard around the neck. Since the wired microphone/speaker typically weighs only 1/8 of an ounce (3.5 grams), this may be a more comfortable arrangement for some users.
  • Figure 7A illustrates a block diagram of a preferred embodiment of the earset communication system.
  • Figure 7B illustrates an alternative embodiment.
  • the system uses an RF link 180 to provide hands-free operation between a self-contained compact earset 10 and a base station 20, which has interfaces to a microprocessor-based appliance 30 and a communication network 300.
  • the earset communicator 10 comprises a radio frequency transceiver system 62, 60 for wireless radio frequency communication between the earset 10 and the base station 20.
  • the radio transceiver 60 is preferably a 900 MHz Digital Spread Spectrum Transceiver Model No. RF105, which is commercially available from Conexant Systems, Incorporated of Newport Beach, CA. This chipset, for example, will automatically select one of 40 available channels.
  • Radio transceiver 60 also preferably includes a Conexant 900 MHz Class AB RF Power Amplifier Model No. RF106, which provides a communicating range of approximately 250 feet (76 meters).
  • the earset Codec 58 is preferably a Hummingbird 100-pin ASIC + CODEC (single chip) Model No. RSST7504 or equivalent.
  • the base station Audio Processor 272 is preferably a 144 pin Hummingbird ASIC Model number RSST7504, and the base station CODEC 224 is preferably a 32 pin Hummingbird CODEC Model number 20415.
  • the RF antenna 56 may reside within the plastic enclosure of the earset 10 provided the antenna 56 meets the minimum fractional wavelength requirements of the transmit frequency. The antenna 56 may be positioned along the outer edge of the plastic earset case.
  • transceivers 62, 60 may each be a 2.4 GHz spread spectrum transceiver system such as is available from Siemens Electronics, or a 900 MHz chipset such as offered by Rockwell/Conexant (as previously discussed), or an Ericsson bluetooth chipset, Model No. PBA 313 1/2, or any other chipset that supports wireless communication.
  • these chipsets are based on full-duplex analog, CDMA or TDMA technology formats. Chipsets from other manufacturers may alternatively be used, provided their air interface specifications provide high quality voice and security.
  • One skilled in the art is capable of identifying commercially available components for the air interface in the system and would also recognize other substitute chipsets.
  • An advantage provided by the 900 MHz and 2.4 GHz chipsets is that they provide the earset 10 with a substantially longer usable range than is available from known headset arrangements.
  • the output of the earset radio receiver 60 is connected through the ASIC 108 to an amplifier in the CODEC 58 where the output portion of the audio circuit will drive the speaker 52.
  • the output level of the signal sent to the earset speaker 52 is controlled digitally by the Hummingbird chip 108.
  • a tone may be emitted from an internal audio transducer 53 to alert the user of a low battery state.
  • an out of range tone may optionally be emitted by the internal audio transducer 53 when the earset 10 is not within the recognizable range ofthe base station 20.
  • the earset 10 preferably emits a specific tone, for example, periodically every 10 seconds.
  • the earset 10 will emit a repeating ringing tone, preferably via the audio transducer 53, to notify the user of an incoming call.
  • the microprocessor-based appliance 30 may send a signal to the base station 20, which in turn relays the signal to the earset 10 to begin the ringing tone.
  • the user preferably may locate the earset 10 by activating a paging signal from the computer 30, or the base station 20 for the optional case in which the base station 20 includes a button for sending the paging signal.
  • the earset 10 may emit a repeating paging tone cadence to allow the user to locate the earset 10.
  • the earset communicator 10 contains controls that allow the user to switch the earset 10 to an "on" or active state when use ofthe earset functions is desired or necessary, such as when answering an incoming telephone call.
  • a single button, i.e. a command button 110, on the earset communicator 10 prompts the microprocessor-based appliance 30 that a voice command is imminent.
  • the user preferably receives, in response to the user depressing the command button 110, a configurable ready prompt through the earset internal audio transducer 53 from the microprocessor-based appliance 30.
  • the ready prompt preferably notifies the user that the system is ready to receive a voice command.
  • the ready prompt is stored on the microprocessor-based appliance 30 for example in a digital sound file format that allows the user to configure or record customized prompts.
  • the earset internal audio transducer 53 may also be used to notify the user of system status such as incoming phone calls, low battery status, paging signals, and "out of range" warnings.
  • the Base Station 20 is the communications gateway between the microprocessor-based appliance 30 and the earset 10 in the earset communication system.
  • the base station 20 contains circuitry necessary to operate the earset 10.
  • the base station 20 footprint is preferably small relative to a desktop.
  • the base station 20 is small enough to be conveniently used while traveling, such as with a laptop computer.
  • An internal RF antenna 22 may be used in order to provide a more aesthetically pleasing appearance; however, an external antenna 22 may alternatively be used.
  • Antenna diversity may be utilized to increase signal to noise ratio and decrease RF interference.
  • the transceiver base station 20 provides a USB interface 21 to the microprocessor-based appliance 30, having an associated memory structure.
  • the microprocessor-based appliance 30 may be a personal computer ("PC"), PDA (personal data assistant), or other microprocessor-based device such as a set top box, cable modem, or other Internet device/appliance, or home control automation system or other Internet services device.
  • Other types of interfaces to the microprocessor-based appliance 30, such as RS-232, PCMCIA, Bluetooth or infrared, may alternatively be used.
  • Figure 8A illustrates a portion of the base station 20 hardware from Figure 7A and illustrates the form of the voice signal between the USB interface 21 and the Hummingbird ASIC 272.
  • the voice signal is digital, such as 16 bit, 8kHz linear PCM data, between the ASIC 272 and the CODEC 224, which then converts voice signals from the ASIC 272 into analog form.
  • the voice signal is then digitized by the CODEC 282 and passed to the USB interface 21.
  • the opposite conversions are made for signals traveling from the USB interface to the Hummingbird ASIC 272.
  • the intermediate conversion to analog form allows the Hummingbird ASIC 272 and the USB interface 21 to operate using independent clocks.
  • the intermediate conversion may be eliminated as shown in Figure 8B.
  • the base station 20 draws power entirely from the USB connection 21 to the computer 30.
  • the base station 20 may be powered from a DC power adapter connected to an AC power source, commonly known to those skilled in the art. This alternative power source may be required where the base station 20 provides battery charging capability as noted above.
  • the base station 20 may be a standalone unit, or may attach directly to the microprocessor-based appliance 30.
  • where the microprocessor-based appliance 30 is a laptop computer, it may be desirable to mount the base station 20 to the laptop for ease of use during transit.
  • the base station 20 may be incorporated into the microprocessor- based appliance 30, either by physically incorporating the base station 20 hardware into the appliance 30 form factor or, where the appliance 30 is already capable of supporting a wireless connection to the earset, by programming the appliance 30 to perform the base station 20 functions.
  • personal computers, PDAs, cellular telephones and the like will include transceivers that support communication in accordance with the Bluetooth protocol. Those skilled in the art would be capable upon reviewing this document of adapting the earset 10 to interface with such appliances 30.
  • the base station 20 provides an interference-resistant, secure RF link for multiple earsets.
  • the system may support up to 8 earsets. If multiple earsets 10 are communicating simultaneously, they act as "Conference Call” units, working in the same manner as multiple wired telephones on a single line.
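The "Conference Call" behavior of multiple simultaneous earsets can be sketched as a simple per-listener mix: each earset hears the sum of all other active earsets, like extensions on a single line. The function and sample values below are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: mixing audio for up to 8 earsets acting as a party line.
# Each listener hears every other active earset, but not their own signal.

MAX_EARSETS = 8

def mix_for_listener(streams: dict, listener: str) -> int:
    """Sum one 16-bit PCM sample from every active earset except the listener's own."""
    assert len(streams) <= MAX_EARSETS
    total = sum(s for name, s in streams.items() if name != listener)
    # clamp the mix to the 16-bit signed sample range
    return max(-32768, min(32767, total))

samples = {"earset_a": 1000, "earset_b": -250, "earset_c": 40000}
print(mix_for_listener(samples, "earset_a"))   # hears b + c, clamped to 16 bits
```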
  • the earset to base station range is preferably in excess of 75 meters in the presence of interference from structures such as walls and ceilings.
  • the signal between earset 10 and base station 20 is preferably capable of passing through a minimum of six standard wood stud and drywall walls, which are typical of residential construction.
  • the earset 10 has the ability to associate itself with a specific base station 20 when in the presence of multiple base stations within the reception area.
  • the earset 10 may include parking contacts that, as is known in the art of cordless telephones, allow the earset 10 and base station 20 to be logically mated.
  • the base station 20 and earset 10 may be set up to use a particular encryption technology.
  • because the earsets 10 may be logically mated with a base station 20, the system allows many earsets 10 to be associated with a single base station 20, or alternatively allows numerous earset 10/base station 20 pairs to be operated within the same area.
  • the microprocessor-based appliance 30 includes a network interface 300 that is accessible to the earset communication system via the software shown.
  • the base station 20 includes a network interface 300, which may be a DAA or "Data Access Arrangement" where the interface is to the PSTN.
  • the network interface 300 may be a connection which couples the appliance 30 (in the case of Figure 7A) or the base station 20 (in the case of Figure 7B) to a communication link such as a data service, Internet service, cable modem type service, or a conventional telephone network interface (also referred to as the "TelCo") 25.
  • the network interface 300 may connect directly to an Internet data service in order to provide VoN functionality in a consumer or home office environment.
  • the network interface 300 may connect to a LAN, WAN or corporate network.
  • FIG. 15 illustrates a block diagram of an embodiment of the present invention for using the earset 10 in conjunction with VoIP software to make an Internet-based 310 VoIP call.
  • the earset 10 may also be used within a corporate telecommunications enterprise 390 to make voice over network calls when integrated with a corporate VoIP (or any VoN) platform such as those offered by 3Com Corporation, Cisco Systems and others.
  • VoIP calls are made between the earset 10 and microprocessor-based appliance 30'.
  • Microprocessor-based appliance 30 is connected to the IP network (Internet) 310.
  • Voice is transmitted over the air interface 180 in a transmission to the base station 20.
  • Voice is transmitted (digital) via USB 21 to the microprocessor-based appliance 30.
  • Voice is transmitted to the IP client (software), as shown in Figure 6, on the microprocessor-based appliance 30.
  • Voice is converted into IP packets and transmitted through the network interface 36, shown in Figure 6, to the microprocessor-based appliance 30' via the Internet 310. Note that microprocessor-based appliance 30 to microprocessor-based appliance 30' VoIP communications do not require a VoIP gateway service provider 320.
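The step "voice is converted into IP packets" can be sketched by framing 8 kHz PCM audio into fixed 20 ms chunks, each carrying a sequence number and timestamp in the spirit of RTP. The field layout and function names here are illustrative assumptions, not the patent's format:

```python
# Hypothetical sketch: packetizing 16-bit, 8 kHz linear PCM voice into
# IP-transportable packets of 20 ms each (160 samples), with a small header
# carrying a sequence number and a sample-offset timestamp.

import struct

SAMPLES_PER_PACKET = 160   # 20 ms of audio at 8 kHz

def packetize(pcm: list, start_seq: int = 0):
    packets = []
    for i in range(0, len(pcm), SAMPLES_PER_PACKET):
        chunk = pcm[i:i + SAMPLES_PER_PACKET]
        seq = start_seq + i // SAMPLES_PER_PACKET
        timestamp = i                                  # offset of the first sample
        header = struct.pack("!HI", seq, timestamp)    # 2-byte seq + 4-byte timestamp
        body = struct.pack(f"!{len(chunk)}h", *chunk)  # big-endian 16-bit samples
        packets.append(header + body)
    return packets

pcm = [0] * 480                # 60 ms of silence
pkts = packetize(pcm)
print(len(pkts))               # 3 packets
print(len(pkts[0]))            # 326: 6-byte header + 320 bytes of PCM
```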
  • VoIP calls are made between the earset 10 and telephone 380 or Corporate desktop equipment 390 via Centrex Service.
  • Microprocessor-based appliance 30 is connected to the IP network (Internet) 310.
  • Voice is transmitted over the air interface 180 in a transmission to the base station 20.
  • Voice is transmitted to the IP client (software), as shown in Figure 6, on the microprocessor-based appliance 30.
  • Voice is converted into IP packets and transmitted through the network interface 36, shown in Figure 6, to an IP Gateway 320 via the Internet 310.
  • the IP Gateway 320, in this scenario typically part of the telephone company central office, converts the IP voice packets to analog and forwards the packets to the Central Office switch 330.
  • Central Office switch 330 transmits analog voice to analog telephone 380 or to Corporate desktop equipment 390 via Centrex.
  • VoIP calls are made between the earset 10 and Corporate desktop equipment 390 via telephone company Central Office switch 330.
  • Microprocessor-based appliance 30 is connected to the IP network (Internet) 310.
  • Voice is transmitted over the air interface 180 in a transmission to the base station 20.
  • Voice is transmitted (digital) via USB 21 to the microprocessor-based appliance 30. Voice is then transmitted to the IP client (software), as shown in Figure 6, on the microprocessor-based appliance 30.
  • Voice is converted into IP packets and transmitted through the network interface 36, shown in Figure 6, to an IP Gateway 320 via the Internet 310.
  • the IP Gateway 320, in this scenario typically part of the telephone company central office, converts the IP voice packets to analog and forwards the packets to the Central Office switch 330.
  • Central Office 330 transmits to corporate PBX 370, or IP PBX 360 (in this case, there is an IP Gateway 350 between the CO (central office) 330 and the IP PBX 360 to convert analog voice into IP Packets).
  • PBX 370 or IP PBX 360 transmits voice to the corporate telecommunications network.
  • the IP packets may be routed directly to an IP PBX 340 and delivered in IP form to the corporate desktop equipment 390.
  • a method is described below with reference to Figure 6 for interaction between the earset 10 and the microprocessor-based appliance 30 to make VoIP calls. It should be recognized that the same method applies to other VoN protocols simply by replacing the IP client with an appropriate client that supports the desired protocol.
  • the earset 10 user is speaking: 1. The user speaks into the microphone 50 on the earset communicator 10. 2. The earset communicator 10 transmits the analog voice to the base station 20 over the air interface 180.
  • the base station transmits the analog voice to the microprocessor-based appliance 30 using a USB connection 21.
  • the USB audio driver 32 passes the voice to the IP Client application 34.
  • the IP client application 34 converts the analog USB voice to IP voice packets.
  • the client application 34 transmits the IP voice packets to the microprocessor-based appliance's 30 network interface 36, such as a card or modem.
  • the PC's network interface 36 transmits the IP voice packets over the Internet 310.
  • the PC's network interface 36 receives IP voice packets and passes them along to the IP client application software.
  • the IP client application converts the IP voice packets to analog voice.
  • the USB audio driver 32 passes the analog voice to the base station 20 via a USB connection 21.
  • the base station 20 passes the analog voice to the earset communicator 10 over the air interface (i.e. using a wireless transmission) 180.
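The talk-path steps above can be made explicit by modeling each hop as a function, so the ordering from spoken voice to IP packet is visible in one expression. All names here are illustrative labels for the numbered components, not code from the patent:

```python
# Hypothetical sketch of the talk path: user speaks -> RF air interface 180 ->
# USB connection 21 -> USB audio driver 32 -> IP client 34 -> network interface.
# Each hop is a pass-through function so the ordering of the steps is explicit.

def air_interface(analog_voice):      # earset 10 -> base station 20, air interface 180
    return analog_voice

def usb_link(voice):                  # base station 20 -> appliance 30, USB connection 21
    return voice

def usb_audio_driver(voice):          # driver 32 hands audio to the IP client
    return voice

def ip_client_encode(voice):          # IP client 34 converts voice into IP packets
    return {"type": "ip_voice_packet", "payload": voice}

def talk_path(spoken):
    """Chain the hops in the order described in the numbered steps."""
    return ip_client_encode(usb_audio_driver(usb_link(air_interface(spoken))))

print(talk_path("hello")["type"])
```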
  • intra-office (branch to branch) and inter-office communications. The cost of intra-office communication can be broken down into: equipment, maintenance, and telephone charges. Equipment and maintenance costs are the primary areas of savings for inter-office communications. VoN technology can significantly reduce these costs in the following manner:
  • VoN equipment is less expensive than traditional telephone equipment. Additionally, with VoN technology voice traffic travels over the same network infrastructure as data traffic, meaning there is no need to purchase and maintain a completely separate network to handle voice.
  • because VoN technology utilizes the existing data network, there is no need to maintain a completely separate voice network. Also, existing IS staff generally has the knowledge to support and maintain the existing data network, so there is no need to hire and train duplicate staff to manage the voice communications component.
  • because VoN communication technology uses the existing data network, there is no need to lease separate lines to handle voice traffic in the case that the branch offices each have connected telephone equipment. In the event each branch office is not connected, and is using service provided by a long distance carrier, the savings can be greater because all long distance charges for intra-office calls can be eliminated.
  • the earset communicator system and software may be integrated with the offerings of VoN providers to add significant functionality including: voice agent capability to create "Intelligent Dial Tone," voice dialing, voice access to all telephony features (park, call, transfer, etc.) and voice mail, and integration with corporate contact management and collaboration systems (Microsoft Outlook, Lotus Notes, etc.).
  • the earset communication system preferably includes a VoN telephony system to provide a highly convenient, highly functional alternative to the soft phone (computer software) or telephone handset hardware.
  • the earset communicator 10 preferably supports functionality with both VoN and traditional voice solutions.
  • the embodiments disclosed do not preclude working with standard telephone services. All the telephone functions described in this section apply to any transport medium; however, the physical transport medium in the case of VoIP is based on the Internet Protocol.

Consumer VoIP
  • Another embodiment ofthe earset communication system provides IP Telephony in the consumer market to provide free or greatly reduced cost of long distance and international telephone calls.
  • because the earset communication system includes a wireless connection to a microprocessor-based appliance 30, through the base station 20, users can make VoIP calls from anywhere in the home, allowing them to use the earset communication system in conjunction with a VoIP provider to make calls like they might otherwise make using a standard telephone handset.
  • Another key advantage that the earset communication system adds to the VoIP platform is voice dialing, making the process of initiating and answering IP telephony calls extremely simple and convenient. Additional functionality accessible via the earset communication system software, such as voice mail, call screening, and unified messaging, round out the VoIP offering and make the complete solution an improvement over the existing analog telephone.
  • VoIP providers in the consumer VoIP market are demonstrating that the potential from this technology is significant. Some of the current VoIP providers are Net2Phone (http://www.net2phone.com), PhoneFree, and DialPad (http://www.dialpad.com).
  • a consumer may utilize a high speed Internet connection like DSL or a cable modem (standard 33k - 56k dialup will also work, although the voice quality may be somewhat less than that of standard telephone service).
  • One of the primary problems with using VoIP is the fact that the user is tied to the computer - a problem that the earset communication system neatly resolves.
  • additional capabilities that are enhanced by the earset communication system include voice chat for instant messaging, and voice-based command and control applications.
  • the instant messaging market consists of a substantial user base.
  • Some of these instant messaging products support voice conversation, while others only offer text- based chats.
  • Today, all of these services require that users be at their computers to engage in a chat. Integrating the earset communication system into these products allows users to initiate, respond to, and engage in a voice-based chat via the instant messaging software from anywhere in the home. Even without such integration, the earset communication system enables the users of instant messaging software that supports voice conversations to do so in a hands free manner while the user is moving freely throughout the home (although users will still have to initiate and answer the chat at the computer).
  • Figure 5A is a functional block diagram of the earset communication system.
  • the microprocessor-based appliance 30 includes an interface 21 for communicating with the earset 10 via the base station 20 and also includes a network interface 300 for coupling the earset 10 via the appliance 30 to a network 80 that supports voice communication.
  • Figure 5B shows that the network interface 300 may include one or more of: a network connection, such as a connection to a LAN, WAN, the Internet and the like, and a connection to the PSTN, such as by a USB PSTN interface 46 or PSTN Telephony Interface 48.
  • the software 31 shown in Figures 5A and 5B is further described in Figures 7A and 7B.
  • the software modules shown in Figures 7A and 7B, other than the earset agent application 320, are well known to those skilled in the art and are widely available.
  • the preferred earset agent application is commercially available as the Arial Voice Agent software, from ArialPhone LLC of Vernon Hills, Illinois.
  • FIG. 7A further describes the preferred embodiment in which the microprocessor-based appliance 30 includes both a network interface or NIC and a PSTN Telephony Interface.
  • the microprocessor-based appliance's 30 PSTN Telephony Interface is replaced by the USB PSTN interface, which is illustrated as residing in the base station 20.
  • the PSTN Telephony Interface may be a voice modem, Dialogic D/41ESC, PhoneRider by MediaPhonics type boards, Internet Phone Jack by QuickNet type boards, or the like.
  • the network interface card 342 provides the interface for VoN communication, as described above with reference to Figure 6. Such cards are readily available from 3Com Corporation of Santa Clara, California, Intel Corp. of Santa Clara, California and others, and provide full-duplex capabilities. This interface is not utilized for PSTN telephony.
  • the PSTN Telephony Interface in the microprocessor-based appliance 30 includes DTMF dialer circuitry that is capable of dialing a phone number transmitted from the microprocessor-based appliance 30 via its internal bus.
  • the PSTN Telephony Interface may include Caller ID detection circuitry that is capable of passing a caller's telephone number and text string to the microprocessor-based appliance 30 via its internal bus.
  • the PSTN Telephony Interface preferably provides to the microprocessor-based appliance 30 audio I/O support of 16-bit, 8-KHz PCM formats: unsigned linear, G.711.
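The G.711 format mentioned above companded 16-bit linear PCM samples into 8-bit codes; the μ-law variant is sketched below. This is the standard G.711 μ-law encoding algorithm in general, not code taken from the patent:

```python
# Standard G.711 mu-law companding: map a 16-bit signed linear PCM sample to
# an 8-bit code. The bias/clip constants and bit layout follow the common
# reference implementation of the G.711 mu-law encoder.

def linear_to_ulaw(sample: int) -> int:
    BIAS = 0x84        # adjusts the sample so the segment search is uniform
    CLIP = 32635       # largest magnitude representable after biasing
    sign = 0x80 if sample < 0 else 0x00
    if sample < 0:
        sample = -sample
    if sample > CLIP:
        sample = CLIP
    sample += BIAS
    # find the segment (exponent): position of the highest set bit above bit 7
    exponent = 7
    mask = 0x4000
    while exponent > 0 and not (sample & mask):
        exponent -= 1
        mask >>= 1
    mantissa = (sample >> (exponent + 3)) & 0x0F
    # mu-law codes are transmitted bit-inverted
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

print(hex(linear_to_ulaw(0)))       # 0xff (silence)
print(hex(linear_to_ulaw(-1)))      # 0x7f
```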
  • a four conductor RJ-11 jack may be used to couple the PSTN Telephony Interface to a telephone line.
  • the PSTN Telephony Interface also has full-duplex audio circuitry that is capable of taking a first audio stream from the telephone line and placing it on the internal bus of the microprocessor-based appliance 30.
  • the earset agent application 320 in conjunction with the well known device and media streaming drivers is capable of taking the first audio stream from the internal bus and transmitting it to the earset 10 via the base station 20.
  • the earset agent application 320 is capable of placing a second audio stream from the earset 10 via the base station 20 onto the internal bus.
  • the PSTN Telephony Interface is capable of taking the second audio stream from the internal bus and placing it on the telephone line.
  • the first and second audio streams are processed simultaneously in the earset communication system.
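The simultaneous processing of the first and second audio streams can be sketched with two independent workers, one per direction, so neither direction blocks the other. The queue-based structure below is an illustrative assumption about how full-duplex processing might be organized, not the patent's implementation:

```python
# Hypothetical sketch of full-duplex audio: one worker moves line audio toward
# the earset while another moves earset audio toward the line, concurrently.

import queue
import threading

line_to_earset = queue.Queue()   # first audio stream: telephone line -> earset
earset_to_line = queue.Queue()   # second audio stream: earset -> telephone line

def pump(src: queue.Queue, dst: list, n: int):
    """Move n audio chunks from a source queue to a destination buffer."""
    for _ in range(n):
        dst.append(src.get())

earset_out, line_out = [], []
t1 = threading.Thread(target=pump, args=(line_to_earset, earset_out, 3))
t2 = threading.Thread(target=pump, args=(earset_to_line, line_out, 3))
t1.start(); t2.start()

for i in range(3):               # both directions are fed at the same time
    line_to_earset.put(f"line{i}")
    earset_to_line.put(f"earset{i}")
t1.join(); t2.join()
print(earset_out, line_out)
```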
  • when the user speaks telephony control commands into the earset 10, they are transmitted to the earset agent application 320 via the base station 20.
  • the earset agent application 320 issues appropriate telephony control commands, such as on-hook, digit dialing, off-hook, flash, conference, mute and the like, to the PSTN Telephony Interface via the internal bus of the microprocessor-based appliance 30.
  • the full-duplex audio processing will allow the earset agent application 320 to record line or earset audio, to communicate voice commands, and to play back PC audio to the line or earset 10.
  • the microprocessor-based appliance 30 is able to send earset control codes to the base station 20 to permit signaling and prompting to the earset 10 to perform a specific function.
  • the base station 20 has
  • the base station 20 also may include Caller ID detection circuitry 23 that is capable of passing a caller's telephone number and text string via the USB connection to the computer 30.
  • the base station 20 preferably provides to the microprocessor-based appliance 30 audio I/O support of 16-bit, 8-KHz PCM formats: unsigned linear, G.711.
  • the base station 20 includes a USB PSTN interface 46.
  • a four conductor RJ-11 jack may be used to couple the base station 20 via the USB PSTN interface 46 when connected to a telephone line.
  • the base station 20 also has full-duplex audio circuitry that is capable of communicating the audio stream provided via the USB connection 21 to the microprocessor-based appliance 30.
  • the microprocessor-based appliance 30 and base station 20 will communicate telephony control commands as well as full-duplex audio processing.
  • the full-duplex audio processing will allow the earset agent application 320 to record line or earset audio, to communicate voice commands, and to play back PC audio to the line or earset 10.
  • the microprocessor-based appliance 30 is able to send earset control codes to the base station 20 to permit signaling and prompting to the earset 10 to perform a specific function.
  • the microprocessor-based appliance 30 may send an audio message to the earset 10, for example to alert the user of a call waiting.
  • the earset agent application 320 may communicate separately and simultaneously with both the local and remote parties when the parties are not communicating with each other. For example, the local party may perform an Internet look-up while the remote party receives a recorded music stream.
  • the earset agent application 320 via the microprocessor-based appliance 30 may communicate with the remote party to prompt the remote party to leave a message.
  • the unique form factor of the earset communication system provides significant support for vertical market solution providers to offer new, highly differentiable services.
  • vertical market services include: Public Safety
  • the application may also allow the service professional to request and retrieve information via the earset communication system.
  • voice commands are used for all functions and control of the system.
  • the base station 20 routes audio picked up from the earset microphone 50 to the microprocessor-based appliance 30, where speech recognition is applied to the input command signal and the command signal is processed.
  • Speech recognition software on the microprocessor-based appliance 30 interprets the voice command as described in greater detail below with reference to the software flow figures.
  • only commands are routed to the microprocessor-based appliance 30 and not audio during a conversation with another party. Once the user has issued the command to make a call, communication audio (i.e., the audio from a VoN conversation) is not picked up by the earset agent application 320.
  • the user at step 120 may depress the command button 110 on the earset 10 and, after receiving a ready prompt at step 130 from the microprocessor-based appliance 30, the user may speak a command at step 140, such as "Call Mr.
  • the connection to the network 80 preferably is muted while the command is issued and being responded to so the remote party does not hear the command. If the command was not recognized at step 170 or at step 220, then the user may again be prompted or asked to start over at step 110 or at step 140.
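The mute-while-commanding behavior described above can be sketched as follows: the network connection is muted while a command is spoken and answered, then restored, and an unrecognized command triggers a re-prompt. The class and method names are hypothetical illustrations:

```python
# Hypothetical sketch: mute the network connection 80 while a voice command is
# issued and responded to, so the remote party does not hear the command; if
# recognition fails, re-prompt the user as in steps 170/220.

class CallSession:
    def __init__(self):
        self.line_muted = False
        self.log = []

    def handle_command(self, spoken: str, recognizer) -> str:
        self.line_muted = True                 # remote party hears nothing
        self.log.append("muted")
        try:
            result = recognizer(spoken)
            if result is None:
                return "Please repeat your command"   # re-prompt on failure
            return f"executing: {result}"
        finally:
            self.line_muted = False            # restore the conversation audio
            self.log.append("unmuted")

def toy_recognizer(utterance):
    """Stand-in recognizer: accepts only commands beginning with 'Call '."""
    return utterance if utterance.startswith("Call ") else None

s = CallSession()
print(s.handle_command("Call Mr. Smith", toy_recognizer))
print(s.line_muted)                            # line restored after the command
```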
  • the system utilizes the Lernout & Hauspie speech recognition engine model # ASR 1600/M, which requires no voice training, no names or numbers to enter (assuming that the user already has names and numbers recorded in a contact management/address book system like Microsoft Outlook, Lotus Notes, Windows address book, etc.), and no learning curve to go through.
  • the voice recognition engine will also preferably support multiple or alternative languages, for example English, Spanish, German, Chinese, French, and Japanese, to name a few.
  • the system may use the names that already exist in the user's contact file, through a dynamic interface to Microsoft Outlook, ACT, Lotus Organizer, and similar products.
  • the software that operates the system may be an application based on the Microsoft Windows 98 or Windows 2000 operating system (or any subsequent release) and will preferably comply with the "Designed for Microsoft Windows" Logo program, to which those interested may refer.
  • the system preferably includes an open hardware platform for multimedia playback and recording as well as button press events.
  • the system preferably includes an open hardware platform for telephony utilizing Microsoft's Telephony API standard. This allows other third party software applications to operate the required system hardware.
  • the system software application uses TAPI 2.0 specification to communicate with the system.
  • the system may also use the TAPI 3.0 specification when available or future versions as they become available.
  • Hardware vendors who implement USB solutions for drivers can use the drivers provided by Microsoft or can create minidrivers to exploit any additional unique hardware features.
  • Features requiring a driver that are beyond the functionality ofthe basic USB audio driver include audio channeling, earset and base station control signaling, telephony control, and the voice command button feature.
  • the base station 20 preferably is a "Plug and Play" device as defined by the Microsoft PC99 (or PC2000) System Design Guide.
  • the VA, also referred to herein as the earset agent application 320, is a speech-based interface agent used to interact with the hardware and other third-party devices and software systems.
  • the voice agent utilizes program logic, a speech recognition engine, pre-recorded voice files, and text to speech synthesis where necessary.
  • the VA may use dedicated hardware or other TAPI compliant telephony devices for its audio I/O and telephony control.
  • third party hardware and software systems like Savoy's CyberHouse, IBM's ViaVoice and various home automation devices can also be controlled through the VA.
  • the voice agent is initiated by pressing the voice command button 110 on the earset 10 or base station 20 (speakerphone), which activates the VA at step 120.
  • This activation plays the ready prompt at step 130 of Figure 10C through the earset speaker 52 and places the VA in a listening state.
  • the period of time for placing the VA in a listening state is a system configurable option: for example, 2 seconds. If no speech is detected, the system will revert to its previous state. Further details on the activation ofthe voice agent and the ready prompt are provided below with reference to the description ofthe various use cases.
  • the ready prompt may consist of a user recorded audio stream (WAV file), a preselected application-offered audio stream, or a simple combination of tones.
  • the ready prompt will be an application configurable variable.
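The configurable listening window described above (for example, 2 seconds, after which the VA reverts to its previous state if no speech is detected) can be sketched as a small state machine. The class, states, and timing interface below are illustrative assumptions:

```python
# Hypothetical sketch: after the ready prompt, the voice agent listens for a
# configurable window and reverts to idle if no speech is detected in time.
# Time is passed in explicitly so the behavior is simulated, not wall-clock.

class VoiceAgent:
    def __init__(self, listen_window=2.0):
        self.listen_window = listen_window     # configurable option, in seconds
        self.state = "idle"

    def activate(self):
        """Command button pressed and ready prompt played: start listening."""
        self.state = "listening"

    def tick(self, elapsed, speech_detected=False):
        """Advance the agent; revert to idle if the window lapses silently."""
        if self.state != "listening":
            return self.state
        if speech_detected:
            self.state = "recognizing"
        elif elapsed >= self.listen_window:
            self.state = "idle"                # no speech: back to previous state
        return self.state

va = VoiceAgent()
va.activate()
print(va.tick(1.0))          # still within the window: listening
print(va.tick(2.5))          # window lapsed with no speech: idle
```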
  • the system is capable of answering the phone and asking the remote party their name and who they are calling.
  • the call may then be announced through wired or wireless speakers located strategically around the house or office that are controlled by the microprocessor-based appliance 30 running the software so the residents know who should answer the phone, and who is calling.
  • This feature can also be used for paging and general announcements.
  • the software can screen out telemarketing calls. Many telemarketers use predictive dialers, which are simply computer programs that dial phone numbers and wait for a human to answer the phone. Telemarketing calls by telemarketers using predictive dialers are screened out automatically because their predictive dialer software makes the determination that a person has not answered the telephone and hangs up. The system may also identify the caller thus eliminating the need for Caller ID.
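The screening behavior described above can be sketched as a small decision function: answer, ask the remote party to identify themselves, and treat silence or an abandoned line (the typical behavior of a predictive dialer that decides no human answered) as a screened call. The callable names are hypothetical stand-ins for the system's audio prompts and announcement speakers.

```python
def screen_call(ask_name, announce):
    """Answer the line and ask the remote party to identify themselves.
    A predictive dialer typically hangs up when no human answers, so a
    None (silent or abandoned) reply screens the call out automatically;
    otherwise the caller's name is announced through the house speakers."""
    name = ask_name("Who may I say is calling?")
    if name is None:          # caller hung up or stayed silent
        return "screened"
    announce(f"Call from {name}")
    return "announced"
```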
  • Individual speakers in each room can be selected by the user or automatically by the software so that people may be paged and people may join a conversation.
  • the system may announce when vehicles have pulled into the driveway, when any door has been opened, when there are visitors at the front door and when mail has arrived.
  • a telephony service provider is a dynamic-link library (DLL) that supports communications over a telephone network through a set of exported service functions.
  • the service provider responds to a telephony request, sent to it by the TAPI, by carrying out the low-level tasks necessary to communicate over the telephone network.
  • the service provider, in conjunction with TAPI, shields applications from the service- and technology-dependent details of the telephone network communication.
  • Each service provider is responsible for responding to telephony requests from TAPI to control lines and telephone devices.
  • a service provider is also responsible for controlling and assessing the information exchanged over a call. To manage this information (called the media stream), the service provider must provide additional capabilities or functions.
  • the System TSP may optionally have configuration options to interface with PBX commands. These configuration options define what the flash, park, transfer, conference, forward, etc. commands equate to in terms of hook flash commands. For example, a conference command may consist of "flash *2".
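The PBX configuration options described above amount to a mapping from high-level telephony verbs to PBX hook-flash sequences. The sketch below represents that mapping as a configuration table; only the "conference" entry ("flash *2") comes from the text above, and the remaining sequences are hypothetical placeholders.

```python
# Hypothetical TSP configuration table: each high-level telephony verb
# maps to the hook-flash sequence the local PBX expects. Only
# "conference" -> "flash *2" is the example given in the text; the
# other entries are illustrative placeholders.
PBX_COMMANDS = {
    "flash":      "flash",
    "conference": "flash *2",
    "transfer":   "flash *3",
    "park":       "flash *4",
}

def to_pbx_sequence(verb):
    """Translate a telephony verb into its configured PBX sequence."""
    try:
        return PBX_COMMANDS[verb]
    except KeyError:
        raise ValueError(f"unsupported PBX verb: {verb}")
```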
  • Figure 9, which is preferably implemented in software, depicts a preferred method for handling a command issued by a user. As shown in Figure 9, and further described below, the method preferably includes the ability to handle recognition errors. It will be recognized upon review of the following that Figure 9 depicts a generalized method for issuing a command. Specific examples of particular commands will be presented separately below.
  • Figures 5A and 5B illustrate the audio signal paths within the earset communication system associated with the general method described in Figure 9.
  • initiation of the processing of a user command begins at step 115, where the initial conditions of the earset communication system are as follows: (1) the microprocessor-based appliance 30 is powered on; (2) the base station 20 is connected to the microprocessor-based appliance 30, such as via a USB port; (3) the base station 20 is powered on; and (4) a voice agent communication software application is running on the microprocessor-based appliance 30.
  • the user presses the command button 110 on the earset 10, shown in Figure 2, which causes the earset 10 to transmit a signal to the microprocessor-based appliance 30, through the base station 20.
  • the signal activates the voice agent.
  • the voice agent is preferably a speech-based interface agent used to interact with the system hardware and other third-party software products, such as Microsoft Outlook, Lotus Notes, Lernout & Hauspie Voice Express, Dragon Dictate (from Dragon Systems), VoIP-capable software (Net2Phone, DialPad, Microsoft NetMeeting), instant messaging products (ICQ, AOL Instant Messenger, Yahoo! Messenger), or any other voice-enabled applications or applications that could benefit from being voice enabled.
  • a suitable, commercially available voice agent is the Arial Voice Agent, offered by ArialPhone LLC of Vernon Hills, Illinois.
  • the microprocessor-based appliance 30 issues a ready prompt at step 130 to the earset 10 and places the voice agent in a listening state for a pre-configured period.
  • the ready prompt in the application may be configurable in one of many user selectable ways.
  • the ready prompt may be an audio stream containing a message pre-recorded by the user, a generic pre-selected audio stream offered by the application software, or a simple earcon signal characterized by short bytes or tones that are associated with a specific event.
  • the user may issue a verbal command at step 140.
  • the system determines whether the user spoke. If the user does speak, then the method proceeds to step 160, where voice recognition processing is performed on the command. If the system detects silence, i.e. the user does not speak, then the method proceeds to step 152, where the user is re-prompted.
  • the number of times that the user may be re-prompted is a configurable option. The preferred number of prompts, for usability purposes, is two total, i.e., the initial command prompt and one re-prompt.
  • the system determines whether the user has been re-prompted the predetermined number of times at step 156. If the user has not yet been prompted the maximum number of times, then the method returns to step 140 so the user may issue a command. If, on the other hand, the user has been prompted the predetermined number of times, then the method proceeds to step 240, where the user is informed of the failure to recognize a command, and the system then returns to step 115.
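The silence-handling loop of steps 140 through 156 can be sketched as follows. The callable names are illustrative, and the two-attempt default follows the preferred configuration stated above (initial prompt plus one re-prompt).

```python
def get_command(listen, reprompt, max_prompts=2):
    """Listen for a command (steps 140/150), re-prompting on silence
    (step 152) up to `max_prompts` total attempts. Returns the spoken
    command for recognition processing (step 160), or None when the
    attempts are exhausted and the user must be informed of the
    failure (step 240)."""
    for attempt in range(max_prompts):
        command = listen()            # None models detected silence
        if command is not None:
            return command
        if attempt + 1 < max_prompts:
            reprompt()
    return None
```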
  • the voice recognition processor associated with the voice agent preferably returns recognition confidence level information, which may be used to determine how accurately a phrase, in this case a command, was recognized.
  • the speech recognition processor preferably assigns a confidence level to the spoken command and then sorts the assigned confidence level into one of three recognition quality categories: high confidence (for example, above 90% confidence), low confidence (for example, between 70%-90% confidence), and unrecognizable (for example, below 70% confidence).
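The three-way sorting of recognition confidence can be expressed as a small function. The 90% and 70% thresholds follow the examples given above and would normally be configurable; the function name is illustrative.

```python
def categorize(confidence, high=0.90, low=0.70):
    """Sort a recognizer confidence score into the three recognition
    quality categories used by the command-handling method. Thresholds
    follow the examples in the text (above 90%, 70%-90%, below 70%)."""
    if confidence > high:
        return "high"          # implicit verification (step 190)
    if confidence >= low:
        return "low"           # explicit verification prompt (step 180)
    return "unrecognizable"    # re-prompt the user (step 170)
```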
  • If the confidence in the speech recognition is high, the method proceeds to step 190, where the PC implicitly verifies the issued command and opens a recognizer.
  • An implicit verification is characterized in that the user is not prompted to verbally confirm the command, because of the high confidence in recognizing the spoken command.
  • At step 195, the method determines whether the user has cancelled the confirmed command. If so, the method returns to step 130, where the earset 10 plays the ready prompt to let the user know they can restate the command. If, on the other hand, the user does not cancel the confirmed command at step 195, then the method proceeds to step 210, where the command is executed.
  • If the confidence in the speech recognition is, for example, between 70% and 90%, then the confidence is categorized as low at step 160, and the method proceeds to step 180, where the earset agent application 320 sends a command verification prompt to the user.
  • command verification may comprise repeating the command and asking the user to verbally confirm its accuracy. Specifically, the user may hear through the speaker on the earset, "Did you say 'call John Doe'?"
  • the method determines whether the user replies affirmatively to the command verification prompt. If so, then the method proceeds to step 210 and the command is executed.
  • If the reply is characterized as unrecognizable, the user is re-prompted, at step 220, for a command.
  • the number of times to re-prompt the user is preferably a configurable option. Silence by the user during the configurable response period may be treated as an unrecognizable response at step 220. If the user has been re-prompted the predetermined number of times without resulting in an affirmative response, then the method proceeds to step 240. If the user has not been re-prompted the predetermined number of times, then the method returns to step 180.
  • At step 170, the user is re-prompted, preferably repeatedly, up to a predetermined number of times. Once the user has been re-prompted the predetermined number of times, as determined at step 172, without the voice agent receiving a recognizable command, the user is informed at step 176 of the failure to recognize the command, and the method returns to step 115.
  • the number of repetitions is preferably a user configurable option. If the user has not been re-prompted the predetermined number of times, then the method proceeds from step 172 to step 140 and the system awaits the user's command.
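Pulling the branches of Figure 9 together, the dispatch on recognition quality might look like the sketch below. The callables stand in for the system's prompts, recognizer, and command executor, and are hypothetical; the thresholds follow the examples given earlier.

```python
def handle_command(recognize, announce, confirm, execute, max_attempts=2):
    """Sketch of Figure 9's branches. High confidence (>90%) announces
    the command implicitly and executes unless the user cancels
    (steps 190-210); low confidence (70%-90%) asks 'Did you say ...?'
    explicitly (step 180); otherwise recognition is retried, up to
    `max_attempts` tries before giving up (steps 240/176)."""
    for _ in range(max_attempts):
        command, confidence = recognize()
        if confidence > 0.90:
            cancelled = announce(command)      # implicit verification
            if not cancelled:
                execute(command)               # step 210
                return command
        elif confidence >= 0.70:
            if confirm(command):               # explicit verification
                execute(command)
                return command
    return None                                # failure to recognize
```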
  • FIG. 10 is a flow chart illustrating the basic course for making a call.
  • the user requests the voice agent. This corresponds to steps 115, 120 and 130 in Figure 9.
  • the user issues a command in a predetermined form to indicate to the communication system the user's desire to place a call.
  • the earset agent application 320 recognizes synonyms for commonly used commands.
  • the "call” command may be recognized whether the user says “call”, “dial” or “get me.”
  • the generalized method of Figure 9 is followed in regard to recognition rates and the process in the event that the command is not recognized.
  • the actual command may request that the system call a person at a particular location.
  • the user may use a command, "Call Steve Smith at Work.”
  • the voice agent will therefore process the command for recognition of 1) the type of command, such as a call; 2) the person to call; and 3) the location.
  • At step 270, the voice agent looks up the called party's number, such as an IP address or telephone number, at the requested location.
  • the user's contacts are stored in memory at the microprocessor-based appliance 30.
  • the microprocessor-based appliance 30 may include a software application for storing and accessing contact information.
  • Suitable applications include Microsoft Outlook, available from Microsoft Corp. of Redmond, Washington, and Lotus Notes, available from Lotus Development Corporation of North Reading, Massachusetts.
  • the voice agent then confirms the command to call the called party at step 280. For example, the voice agent implicitly confirms the user's request by stating to the user, "Calling Steve Smith at Work." If the user does not cancel the confirmed command, the method proceeds to step 290, where a call is placed to the called party at the desired location. If, however, an explicit confirmation is required, for example where the confidence in the speech recognition of the command is Low Confidence or Unrecognizable, then the method preferably proceeds along the paths of steps 180 or 170, respectively, in Figure 9. Reference may be made to the flow chart in Figure 9 for further detail regarding command confirmation. Again, once the command is confirmed, either implicitly or expressly, the method proceeds to step 290 for execution by placing the call.
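A "Call <name> at <location>" command resolves against the contact store before dialing. The sketch below is illustrative only: a small in-memory dictionary stands in for a contact application such as Outlook or Notes, and the parsing assumes the command includes a location.

```python
# Hypothetical in-memory contact store, keyed by (name, location).
# In the described system this data would live in a contact manager
# such as Microsoft Outlook or Lotus Notes.
CONTACTS = {
    ("steve smith", "work"): "847-555-0100",
}

def place_call(utterance, dial):
    """Parse 'Call <name> at <location>', look up the number, and dial
    it (step 290). Returns the dialed number, or None if no matching
    contact entry exists. Assumes the location is present in the
    command (the name-only variant is handled separately)."""
    words = utterance.lower().removeprefix("call ").split(" at ")
    name, location = words[0], words[1]
    number = CONTACTS.get((name, location))
    if number is not None:
        dial(number)
    return number
```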
  • FIG. 11A shows an alternative method for placing a call using the earset communication system.
  • the method shown in Figure 11A generally follows the method of Figure 10, except that the system checks that the requested location for a particular person being called is valid.
  • steps 250 and 260 are the same as in Figure 10, except that the method of Figure 11A requires the user to specify a location for the called party. This method is necessary where a called party has multiple phone numbers, each designated by a unique location such as home or work.
  • steps 280 and 290 are present in both embodiments.
  • the method of Figure 11A proceeds to step 305, where the system determines whether the requested location is valid. Generally, a requested location will be considered valid if the user's contact information includes a number for the called party at the requested location.
  • At step 310, the voice agent determines the called party's number at the requested location. From there, the method proceeds to place the call to the called party at the requested location in accordance with steps 280 and 290, which are described above. If, on the other hand, the requested location is invalid, then the method proceeds to step 325, where the voice agent informs the user that the location is not valid. For example, the voice agent in step 325 may respond with: "That's not a valid location; you can say [location_1]...[location_n]," where [location_1]...[location_n] correspond to the valid locations associated with the called party.
  • the system may prompt the user to enter one. Since each called party may have numerous numbers corresponding to different locations, for example, home, work, mobile and the like, the system will preferably inform the user of each valid location. Next, the user responds with the desired location at step 335. The method then returns to step 305 in order to determine if the location is valid. Once the location information is determined to be valid at step 305, then the method proceeds with steps 310, 280 and 290 as described above.
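The validation loop of steps 305 through 335 can be sketched as below. The contact structure and callable names are hypothetical; the loop re-prompts with the valid choices until the user supplies one of them, as described above.

```python
def resolve_location(called_party, requested, contacts, prompt_user):
    """Steps 305-335 sketch: validate the requested location against
    the called party's entries (contacts maps name -> {location:
    number}). On an invalid location, inform the user of the valid
    choices ("That's not a valid location; you can say ...") and take
    a new response, repeating until a valid location is given; then
    return the number at that location (step 310)."""
    valid = contacts[called_party]
    while requested not in valid:
        requested = prompt_user(sorted(valid))   # steps 325/330/335
    return valid[requested]
```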
  • the flow chart in Figure 11B shows another alternative embodiment of the method for placing a call using the earset communication system.
  • the initial steps are similar to the initial steps in Figure 11A, except that in Figure 11B the user command at step 260 includes only the called party's name.
  • the method proceeds to step 345, where the system determines whether there is more than one number assigned to the called party's name. If more than one number is assigned to the called party's name, then the method proceeds to step 355, where the voice agent prompts the user for more information, such as by requesting "At which location?"
  • At step 365, the user responds to the prompt by speaking the desired location for the called party.
  • the system determines, as described above with reference to Figure 11A, whether the location specified by the user is valid at step 305, and the method progresses as described with reference to Figure 11A.
  • At step 345, if the method determines that there is only one number for the called party, then the method proceeds to step 375, where the voice agent determines the proper number from the user's contact information.
  • the method then proceeds to steps 280 and 290, which are described above, to complete placement of the call to the called party.
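The name-only variant of Figure 11B first checks how many numbers the called party has, prompting "At which location?" only when needed. A sketch, with hypothetical names and the same contact structure as before:

```python
def number_for(name, contacts, ask_location):
    """Figure 11B sketch: if the called party has exactly one number,
    use it directly (step 375); otherwise prompt the user with
    'At which location?' and use the answer (steps 355/365).
    `contacts` maps name -> {location: number}."""
    entries = contacts[name]
    if len(entries) == 1:                      # step 345: one number
        return next(iter(entries.values()))
    location = ask_location(sorted(entries))   # steps 355/365
    return entries[location]
```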
  • FIG. 12 is a flow chart illustrating the basic course for retrieving schedule information. Beginning at step 250, the user requests the voice agent. This corresponds to steps 115, 120 and 130 in Figure 9.
  • the user issues a command in a predetermined form to indicate to the communication system the user's desire to retrieve schedule information.
  • the generalized method of Figure 9 is followed with regard to recognition rates and the process in the event that the command is not recognized.
  • the actual command may request a description of the user's schedule. For example, the user may use the command, "What is my schedule today?"
  • the voice agent will therefore process the command for recognition of the user's schedule request.
  • the voice agent then confirms the command to retrieve schedule information for today at step 282. For example, the voice agent implicitly confirms the user's request by stating to the user, "Retrieving schedule information for today." If the user does not cancel the confirmed command, the method proceeds to step 288, where the schedule information is retrieved. If, however, an explicit confirmation is required, for example where the confidence in the speech recognition of the command is Low Confidence or Unrecognizable, then the method preferably proceeds along the paths of steps 180 or 170, respectively, in Figure 9. Reference may be made to the flow chart in Figure 9 for further detail regarding command confirmation. Again, once the command is confirmed, either implicitly or expressly, the method proceeds to step 288 for retrieving the schedule. Note that the user can interrupt and issue commands such as "next item", "previous item", "next day", "cancel", etc.
  • the method proceeds to step 288, where the voice agent looks up the user's schedule.
  • the user's schedule is stored in memory at the microprocessor-based appliance 30.
  • the microprocessor-based appliance 30 may include a software application for storing schedule information.
  • Software applications suitable for this purpose include, for example, Microsoft Outlook, available from Microsoft Corp. of Redmond, Washington, and Lotus Notes, available from Lotus Development Corporation of North Reading, Massachusetts.
  • the voice agent reads or plays the requested schedule information to the user based on the user's previous command.
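The readout described above, together with the interrupting commands mentioned earlier ("next item", "previous item", "cancel"), can be sketched as an iteration over schedule items. The callable names are illustrative, not part of the disclosed system.

```python
def read_schedule(items, speak, next_command):
    """Speak each schedule item in order, honoring the user's
    interruptions: 'next item' or no interruption advances,
    'previous item' steps back, 'cancel' stops the readout.
    `next_command()` returns the interruption, or None to continue."""
    i = 0
    while 0 <= i < len(items):
        speak(items[i])
        cmd = next_command()
        if cmd == "cancel":
            break
        elif cmd == "previous item":
            i -= 1
        else:                    # None or "next item"
            i += 1
```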
  • the earset communicator system functions with existing home control and home entertainment applications that rely heavily on devices such as remote controls and PC-based software interfaces to control various home functions.
  • Implementing voice-based command and control of home functions using the earset communicator system greatly improves the convenience and simplicity of controlling the home.
  • Existing IR remote control units are limited to line-of-sight operation and require multiple button sequences to be learned and pressed for most operations.
  • the earset communication system works from anywhere in the home and can respond to natural language commands, such as "Put on ESPN".
  • Functions that may be under voice control include: Television, Digital Music, DVD, Gaming, Lighting, HVAC (Heating Ventilation Air Conditioning), Motorized Blinds and the like.
  • FIG. 13 is a flow chart illustrating the steps for the control of home entertainment functions.
  • Figure 14 is a flow chart illustrating the steps for the control of home automation functions.
  • the steps referenced below refer to software flow diagrams of both figures 13 and 14. Beginning at step 250, the user requests the voice agent. This corresponds to steps 115, 120 and 130 in Figure 9.
  • the user issues a command in a predetermined form to indicate to the communication system the user's desire to control a home entertainment device.
  • the command may request that the TV be tuned to a particular channel as shown in Figure 13.
  • the generalized method of Figure 9 is followed with regard to recognition rates and the process in the event that the command is not recognized.
  • the voice agent then implicitly confirms the command to control or adjust the home entertainment device at step 284.
  • the voice agent implicitly confirms the user's request by stating to the user, "Tuning TV to ESPN" for the control of the TV. If the user does not cancel the confirmed command, the method proceeds to step 294, where, upon execution of the command, the home entertainment device is controlled in the manner commanded by the user.
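A natural-language command like "Put on ESPN" must be mapped to a concrete tuner operation. The sketch below uses a small hypothetical channel table and stand-in callables for the confirmation prompt and the device interface; none of these names come from the disclosure.

```python
# Hypothetical channel map; real channel numbers would come from the
# user's service provider configuration.
CHANNELS = {"espn": 32, "cnn": 14}

def tune_tv(utterance, set_channel, confirm):
    """Map a natural-language command such as 'Put on ESPN' to a tuner
    operation, with the implicit confirmation described above
    (step 284) before executing the command (step 294)."""
    for name, number in CHANNELS.items():
        if name in utterance.lower():
            confirm(f"Tuning TV to {name.upper()}")
            set_channel(number)
            return number
    return None   # channel not recognized; fall back to re-prompting
```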
  • the generalized flow chart shown in Figure 14 illustrates the software flow for adjustment or control of a home automation function.
  • the format of the generalized home automation command may be <adjustment or control> of the <home automation function>, where the item in the <field> indicated is a command variable.
  • the user issues such a command at step 266 in Figure 14.
  • the actual home automation function may be, for example, to lower the kitchen blinds.
  • the voice agent software running on the microprocessor-based appliance 30 will therefore process the command for recognition of the command type and for identification of the appliance to be controlled.
  • the voice agent confirms the command.
  • At step 296, upon execution of the command, the appliance is controlled in the manner commanded by the user. If, however, an explicit confirmation is required, for example where the confidence in the speech recognition of the command is Low Confidence or Unrecognizable, then the method preferably proceeds along the paths of steps 180 or 170, respectively, in Figure 9. Reference may be made to the flow chart in Figure 9 for further detail regarding command confirmation. Again, once the command is confirmed, either implicitly or expressly, the method proceeds to step 296 for execution of the command.
  • the earset communication system may be embedded with a home appliance or home entertainment device, provided that the appliance or device includes a read-only memory (ROM) structure, a random access memory (RAM) structure, associated data and address buses, and a port for coupling the appliance or device to the base station 20.
  • the <home automation function> field could, for example, specify an on/off operation, up/down volume, open/close, or another mode change function.
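The generalized command template above, "<adjustment or control> of the <home automation function>", can be parsed with a small sketch like the following. The adjustment vocabulary is illustrative only; the disclosure does not enumerate a fixed word list.

```python
# Illustrative adjustment vocabulary; the real system's grammar would
# be defined by its speech recognition engine configuration.
ADJUSTMENTS = {"lower", "raise", "open", "close", "turn on", "turn off"}

def parse_automation(utterance):
    """Parse the '<adjustment or control> the <home automation
    function>' template, e.g. 'lower the kitchen blinds' ->
    ('lower', 'kitchen blinds'). Returns None when no known
    adjustment verb is found."""
    text = utterance.lower()
    # Try longer verbs first so 'turn off' is not shadowed by a prefix.
    for adj in sorted(ADJUSTMENTS, key=len, reverse=True):
        if text.startswith(adj + " the "):
            return adj, text[len(adj) + len(" the "):]
    return None
```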

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Human Computer Interaction (AREA)
  • Selective Calling Equipment (AREA)
  • Telephonic Communication Services (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A system and method for conducting wireless communications, with overall control provided by voice recognition software running on a controller. The system includes an earset communication module and a base station that provides wireless communications between them. The earset communication module, which rests comfortably on the user's ear, is held in place by an ear hook. The transceiver base station communicates with the earset communication module and connects to a host controller, such as a personal computer or home appliance, and to a network interface, such as an Internet connection or telephone line. Voice commands are suitable for many of the functions used to manage the system. The base station routes the audio signal from the earset microphone to the controller software for voice recognition and command processing. The voice recognition software on the controller interprets the voice command and acts accordingly.
PCT/US2001/011069 2000-04-06 2001-04-05 Systeme de communication mains libres a oreillette WO2001078443A2 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
DE10191530T DE10191530T1 (de) 2000-04-06 2001-04-05 Ohrhörerkommunikationssystem
AU2001253166A AU2001253166A1 (en) 2000-04-06 2001-04-05 Earset communication system
GB0129150A GB2370181A (en) 2000-04-06 2001-04-05 Earset communication system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US54472700A 2000-04-06 2000-04-06
US09/544,727 2000-04-06

Publications (2)

Publication Number Publication Date
WO2001078443A2 true WO2001078443A2 (fr) 2001-10-18
WO2001078443A3 WO2001078443A3 (fr) 2003-10-16

Family

ID=24173333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/011069 WO2001078443A2 (fr) 2000-04-06 2001-04-05 Systeme de communication mains libres a oreillette

Country Status (4)

Country Link
AU (1) AU2001253166A1 (fr)
DE (1) DE10191530T1 (fr)
GB (1) GB2370181A (fr)
WO (1) WO2001078443A2 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004028096A1 (fr) * 2002-09-19 2004-04-01 Nortel Networks Limited Multi-rattachement et multi-hebergement de sous-systemes audio sans fil
WO2004062247A2 (fr) 2002-12-16 2004-07-22 3M Innovative Properties Company Système d'intercommunication sans fil et procédé de communication utilisant ce système
FR2853189A1 (fr) * 2003-03-26 2004-10-01 France Telecom Procede de gestion de communications vocales entre plusieurs terminaux
WO2005096602A1 (fr) * 2004-03-29 2005-10-13 Plantronics, Inc. Conversion de la parole en dtmf
EP1617700A1 (fr) * 2004-07-11 2006-01-18 Global Target Enterprise INC. Ecouteur bluetooth sans fil avec des connecteurs
WO2007005212A2 (fr) * 2005-07-01 2007-01-11 Plantronics, Inc. Systeme d'oreillette radio pour telephonie par ordinateur avec alerte des appels entrants et commande de prise de ligne a l'oreillette
WO2007005199A2 (fr) * 2005-07-01 2007-01-11 Plantronics, Inc. Systemes de casque sans fil et procedes permettant d'activer des programmes d'application sur un hote fonde sur un processeur
WO2008146082A2 (fr) 2006-07-21 2008-12-04 Nxp B.V. Réseau de microphones bluetooth
US7640066B2 (en) 2004-02-05 2009-12-29 York International Corporation Transport of encapsulated serial data via instant messaging communication
WO2010138342A1 (fr) * 2009-05-29 2010-12-02 Vocollect, Inc. Système commandé par la voix muni d'un casque
EP2037579A3 (fr) * 2007-09-13 2011-08-03 Fagor, S. Coop. Fernsteuereinrichtung für Haushaltsgeräte
US8995649B2 (en) 2013-03-13 2015-03-31 Plantronics, Inc. System and method for multiple headset integration
US10424297B1 (en) 2017-02-02 2019-09-24 Mitel Networks, Inc. Voice command processing for conferencing
US10455328B2 (en) 2017-07-14 2019-10-22 Hand Held Products, Inc. Adjustable microphone headset
US10560323B2 (en) 2013-03-15 2020-02-11 Koss Corporation Configuring wireless devices for a wireless infrastructure network
US11310574B2 (en) 2013-12-02 2022-04-19 Koss Corporation Wooden or other dielectric capacitive touch interface and loudspeaker having same

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006018366A1 (de) * 2006-04-20 2007-10-25 Benjamin Teske Tastatur für eine effiziente Dateneingabe und Dateneingabesystem
US10015594B2 (en) 2016-06-23 2018-07-03 Microsoft Technology Licensing, Llc Peripheral device transducer configuration
US20170374187A1 (en) * 2016-06-23 2017-12-28 Microsoft Technology Licensing, Llc User Input Peripheral

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992020167A1 (fr) * 1991-04-30 1992-11-12 Motorola, Inc. Dispositif de telecommunications personnel comprenant une caracteristique de commande a distance
EP0896318A2 (fr) * 1997-08-04 1999-02-10 Compaq Computer Corporation interconnexion entre ordinateur personnel et affichage dans un environnement multimédia
WO1999020032A1 (fr) * 1997-09-18 1999-04-22 Apropos Technology Systeme et procede permettant d'integrer un vocal-sur-reseau a la telephonie traditionnelle
US5982904A (en) * 1998-01-22 1999-11-09 Voice Communication Interface Corp. Wireless headset
US6041130A (en) * 1998-06-23 2000-03-21 Mci Communications Corporation Headset with multiple connections


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004028096A1 (fr) * 2002-09-19 2004-04-01 Nortel Networks Limited Multi-rattachement et multi-hebergement de sous-systemes audio sans fil
AU2003295720B2 (en) * 2002-12-16 2009-01-08 Partech, Inc. Wireless intercom system and method of communicating using wireless intercom system
WO2004062247A2 (fr) 2002-12-16 2004-07-22 3M Innovative Properties Company Système d'intercommunication sans fil et procédé de communication utilisant ce système
US7120388B2 (en) * 2002-12-16 2006-10-10 3M Innovative Properties Company Wireless intercom system and method of communicating using wireless intercom system
WO2004062247A3 (fr) * 2002-12-16 2005-02-03 3M Innovative Properties Co Système d'intercommunication sans fil et procédé de communication utilisant ce système
FR2853189A1 (fr) * 2003-03-26 2004-10-01 France Telecom Procede de gestion de communications vocales entre plusieurs terminaux
US7640066B2 (en) 2004-02-05 2009-12-29 York International Corporation Transport of encapsulated serial data via instant messaging communication
WO2005096602A1 (fr) * 2004-03-29 2005-10-13 Plantronics, Inc. Conversion de la parole en dtmf
EP1617700A1 (fr) * 2004-07-11 2006-01-18 Global Target Enterprise INC. Ecouteur bluetooth sans fil avec des connecteurs
WO2007005199A2 (fr) * 2005-07-01 2007-01-11 Plantronics, Inc. Systemes de casque sans fil et procedes permettant d'activer des programmes d'application sur un hote fonde sur un processeur
WO2007005212A2 (fr) * 2005-07-01 2007-01-11 Plantronics, Inc. Systeme d'oreillette radio pour telephonie par ordinateur avec alerte des appels entrants et commande de prise de ligne a l'oreillette
US8755845B2 (en) 2005-07-01 2014-06-17 Plantronics, Inc. Wireless headset systems and methods for activating application programs on processor-based host
WO2007005212A3 (fr) * 2005-07-01 2007-03-29 Plantronics Systeme d'oreillette radio pour telephonie par ordinateur avec alerte des appels entrants et commande de prise de ligne a l'oreillette
WO2007005199A3 (fr) * 2005-07-01 2007-05-18 Plantronics Systemes de casque sans fil et procedes permettant d'activer des programmes d'application sur un hote fonde sur un processeur
US8295771B2 (en) 2006-07-21 2012-10-23 Nxp, B.V. Bluetooth microphone array
JP2010517328A (ja) * 2006-07-21 2010-05-20 エヌエックスピー ビー ヴィ 無線電話システムおよび該システムにおける音声信号の処理方法
WO2008146082A2 (fr) 2006-07-21 2008-12-04 Nxp B.V. Réseau de microphones bluetooth
WO2008146082A3 (fr) * 2006-07-21 2009-06-11 Nxp Bv Réseau de microphones bluetooth
EP2037579A3 (fr) * 2007-09-13 2011-08-03 Fagor, S. Coop. Fernsteuereinrichtung für Haushaltsgeräte
WO2010138342A1 (fr) * 2009-05-29 2010-12-02 Vocollect, Inc. Système commandé par la voix muni d'un casque
US8995649B2 (en) 2013-03-13 2015-03-31 Plantronics, Inc. System and method for multiple headset integration
US10560323B2 (en) 2013-03-15 2020-02-11 Koss Corporation Configuring wireless devices for a wireless infrastructure network
US10680884B2 (en) 2013-03-15 2020-06-09 Koss Corporation Configuring wireless devices for a wireless infrastructure network
US12010471B2 (en) 2013-12-02 2024-06-11 Koss Corporation Wooden or other dielectric capacitive touch interface and loudspeaker having same
US11310574B2 (en) 2013-12-02 2022-04-19 Koss Corporation Wooden or other dielectric capacitive touch interface and loudspeaker having same
US10424297B1 (en) 2017-02-02 2019-09-24 Mitel Networks, Inc. Voice command processing for conferencing
US10455328B2 (en) 2017-07-14 2019-10-22 Hand Held Products, Inc. Adjustable microphone headset

Also Published As

Publication number Publication date
AU2001253166A1 (en) 2001-10-23
DE10191530T1 (de) 2002-10-24
GB0129150D0 (en) 2002-01-23
WO2001078443A3 (fr) 2003-10-16
GB2370181A (en) 2002-06-19

Similar Documents

Publication Publication Date Title
WO2001078443A2 (fr) Earset hands-free communication system
US6546262B1 (en) Cellular telephone accessory device for a personal computer system
US8917822B2 (en) System for text assisted telephony
US7650168B2 (en) Voice activated dialing for wireless headsets
US9532192B2 (en) Configurable phone with interactive voice response engine
US20050180464A1 (en) Audio communication with a computer
US6931463B2 (en) Portable companion device only functioning when a wireless link established between the companion device and an electronic device and providing processed data to the electronic device
US6493670B1 (en) Method and apparatus for transmitting DTMF signals employing local speech recognition
US20060229108A1 (en) Mobile phone extension and data interface via an audio headset connection
US6138096A (en) Apparatus for speech-based generation, audio translation, and manipulation of text messages over voice lines
EP2294801B1 (fr) Wireless headset with voice announcement means
KR100611626B1 (ko) Integrated communication device using Bluetooth
US20070225049A1 (en) Voice controlled push to talk system
WO2005096602A1 (fr) Speech-to-DTMF conversion
US6256611B1 (en) Controlling a telecommunication service and a terminal
KR20060006019A (ko) Apparatus, system and method for providing silently selectable audible communication
US7164934B2 (en) Mobile telephone having voice recording, playback and automatic voice dial pad
CN117544727A (zh) Built-in telephone system for a home smart speaker and implementation method
JP2002118689A (ja) Automatic voice playback function responding to the called party's answer when placing a call from a mobile phone
JPH09116940A (ja) Computer-telephony integration system
US20030210768A1 (en) Manual and automatic record feature in a telephone
JP2002237877A (ja) Hands-free system, mobile phone, and hands-free device
JP2005222410A (ja) In-vehicle hands-free mail device
JPH11205437A (ja) Silent answering method for telephones
JP2003046647A (ja) Call relay system, call relay method, call relay program, and recording medium storing the program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

ENP Entry into the national phase

Ref document number: 200129150

Country of ref document: GB

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application
RET De translation (de og part 6b)

Ref document number: 10191530

Country of ref document: DE

Date of ref document: 20021024

WWE Wipo information: entry into national phase

Ref document number: 10191530

Country of ref document: DE

32PN Ep: public notification in the ep bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC (EPO FORM 1205A) DATED 15.01.03

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP