US20180295656A1 - Smart Bluetooth Headset For Speech Command - Google Patents

Smart Bluetooth Headset For Speech Command

Info

Publication number
US20180295656A1
Authority
US
United States
Prior art keywords
lossless
wearable device
display information
serving
serving device
Prior art date
Legal status
Abandoned
Application number
US15/912,519
Inventor
Christopher Parkinson
Dashen Fan
Frederick Herrmann
John Gassel
Murshed Khandaker
Current Assignee
Kopin Corp
Original Assignee
Kopin Corp
Priority date
Filing date
Publication date
Application filed by Kopin Corp
Priority to US15/912,519
Assigned to KOPIN CORPORATION. Assignment of assignors interest (see document for details). Assignors: PARKINSON, CHRISTOPHER; FAN, DASHEN; HERRMANN, FREDERICK; KHANDAKER, MURSHED; GASSEL, JOHN
Publication of US20180295656A1
Legal status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W76/00Connection management
    • H04W76/10Connection setup
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/0017Lossless audio signal coding; Perfect reconstruction of coded audio signal by transmission of coding error
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/005Correction of errors induced by the transmission channel, if related to the coding algorithm
    • H04L65/602
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • H04L65/762Media network packet handling at the source 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/6058Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/02Details of telephonic subscriber devices including a Bluetooth interface
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/74Details of telephonic subscriber devices with voice recognition means


Abstract

A method of interfacing with a serving device from a wearable device worn by a user includes establishing a lossless and wireless data link between the serving device and the wearable device. The method further includes sending, by the serving device, display information to the wearable device through the lossless and wireless data link. The method also includes presenting, by the wearable device, the display information to a display on the wearable device. The display information may be rendered at the wearable device. Alternatively, the display information may be rendered at the serving device and provided to the wearable device as an image or partial image.

Description

    RELATED APPLICATION
  • This application is a continuation of U.S. application Ser. No. 14/612,832, filed Feb. 3, 2015, which claims the benefit of U.S. Provisional Application No. 61/935,141, filed on Feb. 3, 2014. The entire teachings of the above applications are incorporated herein by reference.
  • BACKGROUND
  • A Bluetooth headset designed to pair with a cellphone, or other serving device, typically employs a Bluetooth Hands-Free Profile (HFP) or Bluetooth HeadSet Profile (HSP) to control how audio is passed from the cellphone to the headset. The HFP or HSP profile allows incoming audio data on the cellphone to be relayed directly to the headset for immediate playback via a near-ear speaker. Simultaneously, audio collected at the headset from one or more near-mouth microphones is passed immediately to the cellphone, which includes the collected audio in the current audio telephone call.
  • SUMMARY
  • Bluetooth headsets may offer some form of speech recognition to the user. Such speech recognition can be used to control features of the cellphone and to provide the user the ability to place calls just by speaking a command. However, to date, all Bluetooth headsets either run the speech recognition service directly on the Bluetooth headset itself, or use cloud-based recognition systems. A drawback of the former approach is the need for complex, expensive electronics in the headset. A drawback of the latter approach is the requirement of an always-on connection to the cloud.
  • In Bluetooth devices, speech recognition services have utilized the HFP or HSP for audio data transmission. The audio for HFP or HSP is sampled at 8 kHz, which is generally too narrow a band for proper speech recognition. To address this problem, a new Bluetooth HFP standard (v1.6), Wide-Band-Speech (WBS) with a 16 kHz sampling rate, has been used recently, together with compression methods such as modified subband coding (mSBC).
  • Both HFP and HSP, which are designed for voice transmission, are lossy (that is, they sometimes lose voice packets or data). HFP and HSP typically do not re-transmit the lost voice packets at all, or re-transmit them at most once or twice, to limit delay of the wireless phone call and keep the conversation flowing. Losing a packet or two of speech data may be barely noticeable in the decoded speech output, and packet erasure concealment algorithms further reduce the speech degradation caused by missing speech packets. Reducing delay or lag in the cell phone conversation is more important, so a lossy link is more acceptable than a high-latency link for speech channels.
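  • A minimal Python sketch of one such packet erasure concealment strategy (repeat the last good frame and fade it on consecutive losses) is given below; the frame size and fade factor are arbitrary illustrative values, not parameters taken from HFP or HSP.

        import numpy as np

        FRAME_SAMPLES = 128  # 16 ms at an 8 kHz sampling rate, chosen only for illustration

        def conceal(frames):
            """frames: list of np.ndarray (received frame) or None (lost frame)."""
            out, last, misses = [], np.zeros(FRAME_SAMPLES), 0
            for f in frames:
                if f is not None:
                    last, misses = f, 0
                    out.append(f)
                else:
                    misses += 1
                    out.append(last * (0.5 ** misses))  # repeat the last good frame, faded
            return np.concatenate(out)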
  • While lost packets do not have a major impact on a cell phone call, they significantly degrade speech recognition. Bluetooth so far does not have a standard profile that addresses the problem of packet erasure when the link is used for speech recognition purposes; the lossy nature of the voice channel has yet to be addressed in Bluetooth. In addition, HFP and HSP do not cancel enough non-stationary noise and can distort voice transmissions, which can degrade the accuracy of speech recognition.
  • In an embodiment of the present invention, a standard Bluetooth headset is improved to provide better speech recognition and to deliver information to the user. In addition, the present invention substantially improves voice recognition by addressing the data packet loss problem in Bluetooth.
  • In some embodiments, the Bluetooth device may be, rather than a headset, another type of wearable device. Such wearable devices may include a wrist-worn device, or a device worn on the upper arm or another part of the body.
  • In one aspect, the invention may be a method of interfacing with a serving device from a wearable device worn by a user. The method may include establishing a lossless and wireless data link between the serving device and the wearable device, collecting, by the wearable device, audio data from one or more microphones of the wearable device. The method may further include sending, by the wearable device, the collected audio data to the serving device through the lossless and wireless data link.
  • In one embodiment, the wearable device is a headset device. In another embodiment, the wearable device is a wrist watch device.
  • One embodiment further includes providing, by the serving device, speech recognition services associated with the audio data.
  • In an embodiment, the speech recognition services include wide band speech processing and low-distortion speech compression.
  • Another embodiment further includes providing, by the wearable device, speech compression of the collected audio data.
  • In one embodiment, the serving device is one or more of a cellphone, a smartphone, a tablet device, a laptop computer, a notebook computer, a desktop computer, a network server, a wearable mobile communications device, a wearable mobile computer and a cloud-based computing entity.
  • Another embodiment further includes providing, by the wearable device, noise cancellation services associated with the collected audio data. Another embodiment further includes sending, from the wearable device to the serving device, information to establish, at the serving device, one or more components necessary to support the lossless and wireless data link.
  • In one embodiment, the one or more components necessary to support the lossless and wireless data link includes (i) one or more of a custom WIFI connection and a custom Bluetooth profile, (ii) a driver and (iii) compression/decompression code.
  • In another embodiment, the lossless and wireless data link is a Bluetooth link operating with a custom Bluetooth profile.
  • In another aspect, the invention may be a method of establishing a lossless and wireless data link between a serving device and a wearable device. The method may include establishing, by the wearable device, a wireless link of a first protocol between the wearable device and the serving device. The method may further include establishing, by the wearable device and using the wireless link of the first protocol, a lossless wireless link of a second protocol. The method may further include conveying to the serving device, by the wearable device, information to establish, at the serving device, one or more components necessary to support the lossless and wireless data link.
  • In one embodiment, the one or more components necessary to support the lossless and wireless data link includes a custom Bluetooth profile, a driver and compression/decompression code. In another embodiment, the wireless link of a first protocol is a lossy Bluetooth link, and the wireless link of a second protocol is a lossless Bluetooth link. In another embodiment, the lossless Bluetooth link is based on a Bluetooth SPP profile. In another embodiment, the lossless and wireless link of a first protocol is a Bluetooth link operating with a custom Bluetooth profile.
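  • The message-level sketch below (Python; the message fields and component names are invented for illustration, not a defined protocol) shows one way this two-phase setup could proceed: the existing first-protocol link carries a setup request naming the needed components, after which the lossless second-protocol link is opened.

        from dataclasses import dataclass, field

        @dataclass
        class ServingDevice:
            installed: list = field(default_factory=list)

            def handle(self, msg):
                if msg["type"] == "setup-request":
                    # Install whatever the wearable says is needed for the lossless link.
                    self.installed.extend(msg["components"])
                    return {"type": "setup-ack"}
                if msg["type"] == "open-lossless":
                    # Accept the second-protocol link only once the components are in place.
                    return {"type": "link-open", "ok": "custom-profile" in self.installed}

        def bring_up_links(serving):
            # Phase 1: a first-protocol link (e.g., HFP/HSP) is assumed to be up already.
            # Phase 2: convey, over that link, the components the serving device needs.
            serving.handle({"type": "setup-request",
                            "components": ["custom-profile", "driver", "codec"]})
            # Phase 3: open the lossless second-protocol link (e.g., SPP plus retransmission).
            return serving.handle({"type": "open-lossless"})

        print(bring_up_links(ServingDevice()))  # {'type': 'link-open', 'ok': True}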
  • In another aspect, the invention may be a wearable device, including at least one microphone, at least one speaker, a voice compression engine, and a driver configured to transmit voice packets over a lossless, wireless data channel.
  • In one embodiment, the lossless, wireless data channel is based on a Bluetooth SPP profile. In another embodiment, the voice compression engine includes one or more of (i) Sub-Band-Coder, (ii) Speex, and (iii) ETSI Distributed Speech Recognition.
  • One embodiment may further include a noise cancellation engine. In another embodiment, the noise cancellation engine receives an audio signal from two or more sources, and uses linear noise cancellation algorithms to reduce ambient noise.
  • One embodiment may further include a code deployment module configured to convey a custom Bluetooth profile and driver to a serving device, to facilitate implementation of the lossless link at the serving device. In another embodiment, the code deployment module conveys an applet to the serving device to install the custom Bluetooth profile and driver on the serving device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
  • FIG. 1 is a block diagram illustrating an example embodiment of connecting a headset with a cellphone using two audio links.
  • FIG. 2 is a block diagram illustrating an example embodiment of processing and transmitting of audio signal for speech recognition according to the invention.
  • FIG. 3 is a block diagram illustrating another example embodiment of processing and transmitting of audio signal for speech recognition according to the invention.
  • DETAILED DESCRIPTION
  • A description of example embodiments of the invention follows.
  • FIG. 1, described in more detail below, is an example embodiment of the invention. This embodiment concerns two primary components—a headset 102 and a serving device 104, connected by one or more wireless links. The serving device 104 may be any device that could implement a wireless link to a hands-free headset, including but not limited to a cellphone, a smartphone, a tablet device, a laptop computer, a notebook computer, a desktop computer, a network server, a wearable mobile communications device, a wearable mobile computer or a cloud-based entity. The wearable device may include a device worn on the user's wrist, upper arm, leg, waist or neck, or any other body part suitable for supporting a communications and/or computing device. Similarly, the headset 102 component may be, rather than a headset, a device worn on the user's wrist, upper arm, leg, waist or neck, or any other body part suitable for supporting a wireless device (e.g., a Bluetooth or WiFi device).
  • In embodiments of the present invention, the serving device directly hosts a speech recognition service. To facilitate this hosting, the embodiments establish a new, secondary data link from the serving device to the headset. The secondary data link should be lossless, and may be a Bluetooth data link. The secondary Bluetooth data link may be used to send the near-mouth microphone input (or a second copy of the microphone input, if the HFP link is active) to the serving device, which runs a speech recognition service (speech recognition processing software). The secondary Bluetooth data link preserves the original Hands-free Profile link and ensures ongoing compatibility with the cellphone's existing firmware. With this approach, compression schemes can compress the audio data between the serving device and the headset in ways not supported by standard Hands-Free profiles (e.g., by using compression/decompression schemes that require a lossless data path).
  • With this system setup, the user can speak a command to the headset. The command (e.g., the spoken audio) is immediately conveyed to the serving device via the secondary Bluetooth data link, whereupon the audio is passed into a speech recognition system. Depending on the commands spoken, the speech recognizer is then able to take appropriate action, such as initiating a new call to a given phone number.
  • Furthermore, with this system in place, functionality is no longer confined to just establishing telephone calls. Natural sentences can be spoken by the headset wearer to invoke other important functions, such as “send SMS message to John that I shall be late tonight”. This sentence, when processed by speech recognition and natural language processing/natural language understanding engines on the serving device or on a network server reached through the wireless link, can be used to create and send appropriate SMS messages, for example. In the same way, the user can query the state of the phone or perform web-based queries by speaking to the headset and letting the serving device perform speech recognition and execute an action appropriate to the recognized speech.
  • At the same time as the secondary Bluetooth data link is used to collect microphone data and send it to the serving device, the link can also send audio from the serving device back to the headset for playback via the near-ear speaker. In particular, this is used to convey information back to the headset wearer via computer generated spoken phrases, also known as Text-to-Speech (TTS).
  • For example, software running on the serving device can detect an incoming SMS text message. Typically, a serving device alerts the user with a chime and can display the incoming message on the screen. In an embodiment of the present invention, the SMS message can be converted to speech (e.g., by text-to-speech) on the serving device side, and the speech audio of the reading can be sent over the Bluetooth link for playback to the user. The result is a system that reads incoming messages aloud to the user without the user having to operate or look at the serving device.
  • This technique can be combined with the speech recognition service to provide a two-way question and answer system. For example, the user can now speak to the headset to ask a question such as “what time is it?” This audio can be processed by the speech recognition service, an answer calculated, and then spoken aloud to the user.
  • FIG. 1 is a block diagram illustrating an example embodiment of connecting a headset 102 with a serving device 104 using two bi-directional channels: a lossless data link 106 and a lossy data link 108. In this example embodiment, the lossless data link 106 is a Bluetooth link using the Serial Port Profile (SPP), and the lossy data link 108 is a Bluetooth link using the HeadSet Profile (HSP) or the Hands-Free Profile (HFP). In other embodiments, the lossless data link 106 may be another digital data link such as WiFi or other wireless technologies known in the art.
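  • As a rough sketch of how such a second, data-oriented channel might be opened alongside the existing HFP/HSP audio connection, the snippet below uses Python's standard socket module on a Linux/BlueZ host; the Bluetooth address and RFCOMM channel number are placeholders, and the reliability layer built on top of this channel is discussed further below.

        import socket

        SERVING_DEVICE_ADDR = "00:11:22:33:44:55"   # placeholder Bluetooth address
        RFCOMM_CHANNEL = 3                          # placeholder channel (normally discovered via SDP)

        def open_data_link():
            # RFCOMM is the serial transport underlying SPP-style connections.
            s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                              socket.BTPROTO_RFCOMM)
            s.connect((SERVING_DEVICE_ADDR, RFCOMM_CHANNEL))
            return s  # byte stream used here as the command/control data link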
  • As will be described in more detail below, while SPP may provide an underlying basis for a lossless data link, the profile itself does not provide lossless transmission. As of this time, Bluetooth does not provide a standard profile to address the problem of packet loss, in particular when used for speech recognition purposes. A customized profile is required, or at the very least a modified version of the SPP is required.
  • In this example, the lossless data link 106 is established and allowed to remain active as long as both the serving device 104 and the headset 102 are active (i.e., turned on). The lossy data link 108, on the other hand, is active only when the user of the headset 102 is making a voice call.
  • In this example embodiment, one or more microphones 110 on the headset 102 collect audio data. Audio can then, optionally, be passed through a noise cancellation module 112 on the headset 102 to reduce background noise and improve speech recognition. The use of multiple microphones 110 may further improve the overall noise cancellation performance by more effectively canceling both stationary and non-stationary noises.
  • The microphone audio 114 may then be split into two streams, as shown. One of the audio streams is sent to the lossless data link 106 and one to the lossy data link 108.
  • As described earlier, the lossy data link 108 is only established between the headset 102 and the serving device 104 in association with an active telephone call. Thus, this communication link is intermittent. When the lossy data link 108 is established, one of the audio streams is sent to the serving device 104 as part of the normal hands-free system. Audio is sent from the serving device 104 to the headset 102 over the lossless data link 106. Audio may also be sent from the serving device 104 to the headset 102 over the lossy (HFP or HSP) data link 108, in the event that the serving device operation requires a call to occur over the HFP or HSP data link 108. In some embodiments, the audio may be in the form of computer generated spoken phrases (e.g., from a Text-To-Speech service), which are played back on the headset.
  • If a Bluetooth Hands-free call is active, the audio is also played back on the headset 102 and merged with any spoken phrases from the lossless data link 106 (also referred to herein as command/control link). The audio received through the lossless data link 106 may be given priority by temporarily muting the telephone call speech from the lossy data link 108, or the two audio signals may be mixed so the user hears both simultaneously, or the audio from the lossy data link 108 may be temporarily attenuated (i.e., partially muted), to make it easier to hear the audio from the lossless data link 106.
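  • The three playback policies just described can be summarized in a short sketch (plain NumPy on equal-length sample blocks; the gain values are arbitrary examples, not tuned parameters):

        import numpy as np

        def combine(call_audio, control_audio, policy="attenuate"):
            call_audio = np.asarray(call_audio, dtype=float)
            control_audio = np.asarray(control_audio, dtype=float)
            if policy == "mute":        # give the command/control audio full priority
                return control_audio
            if policy == "mix":         # let the user hear both streams at once
                return 0.5 * call_audio + 0.5 * control_audio
            if policy == "attenuate":   # partially mute (duck) the telephone call audio
                return 0.25 * call_audio + control_audio
            raise ValueError(policy)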
  • FIG. 2 and FIG. 3 are block diagrams illustrating example embodiments of processing and transmission of audio speech signal for speech recognition. In this example embodiment, audio information is conveyed between a headset 202 and a serving device 204 across a bidirectional, lossless, wireless data link.
  • In the example embodiment shown in FIG. 2, the audio speech signal is collected from two or more microphones 206, and processed by a noise cancellation module 208. In one embodiment, noise cancellation may be performed using linear algorithms to avoid introducing any non-linear distortion to the speech signal. FIG. 2 illustrates compression of the speech signal with a voice compression module 210. The compressed speech signal is sent to the serving device 204 across a lossless, bidirectional, wireless data link 212, for example a Serial Port Profile (SPP) Bluetooth data link.
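  • Regarding the linear noise cancellation noted above, one common linear approach is an adaptive filter that estimates, from a reference microphone, the noise component present in the primary microphone and subtracts it. The NLMS sketch below is only an illustration of that class of algorithm, with arbitrary filter length and step size; it is not the noise cancellation module 208 itself.

        import numpy as np

        def nlms_cancel(primary, reference, taps=32, mu=0.1, eps=1e-8):
            """Subtract the reference-correlated (noise) component from the primary signal."""
            w = np.zeros(taps)
            out = np.zeros(len(primary))
            for n in range(taps, len(primary)):
                x = np.asarray(reference[n - taps:n], dtype=float)[::-1]  # recent reference samples
                e = primary[n] - w @ x                 # error = noise-reduced speech sample
                w += (mu / (eps + x @ x)) * e * x      # NLMS weight update (purely linear)
                out[n] = e
            return out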
  • The serving device 204 receives the compressed speech signal from the lossless data link 212 and decompresses the compressed speech data using a voice decompression module 214. The resulting voice data, acquired through a lossless data path, can be used by an Automatic Speech Recognition (ASR) engine and/or a Natural Language Processing engine 216.
  • The serving device 204 may have digital speech files (e.g., Text-To-Speech (TTS) or WAVE (.wav format) files) to send to the headset 202. The speech data is first compressed by a voice compression module, and sent to the headset through the lossless data link 212. A voice decompression module 222 decompresses the speech data and provides the data to a TTS or WAVE play module 224, which converts the audio file to an audio signal that drives a speaker 226.
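  • A sketch of one way compressed speech frames could be carried, in either direction, over the byte-stream data link 212 is shown below (length-prefixed frames in Python; zlib is only a stand-in for the voice codecs discussed later):

        import struct, zlib

        def send_frame(sock, pcm_bytes):
            payload = zlib.compress(pcm_bytes)              # stand-in for voice compression
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_frame(sock):
            header = _recv_exact(sock, 4)
            payload = _recv_exact(sock, struct.unpack(">I", header)[0])
            return zlib.decompress(payload)                 # stand-in for voice decompression

        def _recv_exact(sock, n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("data link closed")
                buf += chunk
            return buf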
  • FIG. 3 illustrates an embodiment that provides front-end feature extraction and noise cancellation in the headset 302, with an ASR backend and a natural language processing (NLP) engine in the serving device. As with the embodiment of FIG. 2, audio is collected with two or more microphones 306, and a noise cancellation module 308 reduces ambient noise. Data passes between the headset 302 and the serving device 304 over a lossless data link 312, to an ASR backend module 330 at the serving device 304. The ASR backend module 330 provides the processed speech data to an NLP engine. As with the embodiment shown in FIG. 2, TTS/WAVE files 318 may be transferred from the serving device 304 to the headset 302 through a voice compression module 320, the lossless data link 312, a voice decompression module 322 and a TTS or WAVE player driving a speaker 326. In other embodiments, WAVE files may be stored on the headset, and initiated for playback on the headset by a simple command conveyed by the serving device.
  • The features highlighted by FIG. 2 and FIG. 3 are examples of how the described embodiments may provide useful functionality. These embodiments may be combined with each other, or with other embodiments that provide other features.
  • The following are examples of voice compression techniques that may be employed for speech recognition in the described embodiments (a simple selection sketch follows the list):
      • Sub-Band-Coder (SBC)
      • Bluetooth WBS mSBC
      • Speex (or other Code Excited Linear Prediction (CELP) based compression algorithms)
      • Opus
      • European Telecommunications Standards Institute (ETSI) Distributed Speech Recognition (DSR)
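  • A simple, hypothetical selection routine for such a codec list might look like the following; the codec names are labels only, and no codec is implemented here.

        HEADSET_CODECS = ["ETSI-DSR", "Speex", "mSBC", "SBC"]   # what the headset offers

        def negotiate(serving_preference):
            # Pick the first codec the serving device prefers that the headset also supports.
            for codec in serving_preference:
                if codec in HEADSET_CODECS:
                    return codec
            return None  # e.g., fall back to uncompressed PCM or refuse the audio stream

        print(negotiate(["Opus", "Speex", "SBC"]))  # -> "Speex"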
  • As described above, the Bluetooth Serial Port Profile (SPP) does not by itself provide lossless transmission. The described embodiments, however, when used in conjunction with Bluetooth SPP, do create a lossless data link. The described embodiments add at least a custom Bluetooth profile and driver to implement the operations necessary for a lossless link. Such operations may include retransmission protocols such as Automatic Repeat reQuest (ARQ), Hybrid ARQ (HARQ), and other lost packet recovery techniques known in the art. Some embodiments include custom software at both ends of the Bluetooth link. The software may include custom Bluetooth profile(s), driver(s) and compression/decompression codes.
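  • As an illustration of the simplest such retransmission scheme, the stop-and-wait ARQ sketch below adds sequence numbers, acknowledgements and timeouts on top of an assumed channel object with send() and recv(timeout) methods; real implementations may instead use windowed ARQ or HARQ.

        import struct

        ACK, DATA, TIMEOUT_S = 0, 1, 0.2

        def send_reliable(channel, payload, seq):
            frame = struct.pack(">BB", DATA, seq) + payload
            while True:
                channel.send(frame)                              # (re)transmit the frame
                reply = channel.recv(timeout=TIMEOUT_S)          # None means timeout / loss
                if reply and struct.unpack(">BB", reply[:2]) == (ACK, seq):
                    return                                       # delivered; caller alternates seq

        def receive_reliable(channel, expected_seq):
            while True:
                frame = channel.recv(timeout=None)
                kind, seq = struct.unpack(">BB", frame[:2])
                channel.send(struct.pack(">BB", ACK, seq))       # acknowledge duplicates as well
                if kind == DATA and seq == expected_seq:
                    return frame[2:]                             # new, in-order payload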
  • Some embodiments modify the Bluetooth SPP to provide a lossless data link, while other embodiments provide a completely custom Bluetooth profile to provide a lossless data link suitable for ASR. It should also be noted that while the example embodiments utilize Bluetooth to provide a wireless link, the described embodiments may utilize other wireless protocols and interfaces to provide the described benefits.
  • The described embodiments may also provide techniques for installing the aforementioned custom software and codes at the serving device side. In some embodiments, the serving device may include a pre-installed custom driver. In other embodiments, the Bluetooth hands-free device can download an applet (or other vehicle for conveying the necessary drivers and software) to the serving device through the Bluetooth SPP link described above, once that Bluetooth link is established.
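  • A hypothetical shape for that deployment exchange is sketched below: the headset pushes a small manifest plus the applet payload over the already-open SPP link. The message layout and field names are invented for illustration.

        import json, struct

        def push_components(sock, manifest, applet_bytes):
            header = json.dumps(manifest).encode()
            sock.sendall(struct.pack(">II", len(header), len(applet_bytes)))
            sock.sendall(header + applet_bytes)

        manifest = {"applet": "lossless-link-support",
                    "contains": ["custom-bluetooth-profile", "driver", "codec"],
                    "version": "0.1"}
        # push_components(spp_socket, manifest, open("applet.bin", "rb").read())  # placeholders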
  • The described embodiments can easily be extended to accommodate a display on the Bluetooth headset. In such an extension, the information required for display on the headset can be sent from the cellphone to the headset using the always-on command and control link. Information can be sent and rendered by the headset. Alternatively, information can be rendered by the cellphone and sent as an image or partial image to the headset for display. This latter method allows for the headset firmware to be simple and flexible—all of the hard work is done by the cellphone.
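  • For the render-on-the-cellphone option, the display traffic can be as simple as a changed rectangle plus its pixels. The sketch below assumes an RGB565 framebuffer held as one bytearray per scanline; the message layout is illustrative, and a real link would also compress the pixel data.

        import struct

        def partial_image_message(x, y, width, height, rgb565_bytes):
            # x, y: top-left corner of the updated region on the headset display
            return struct.pack(">HHHH", x, y, width, height) + rgb565_bytes

        def apply_update(framebuffer, msg):
            """framebuffer: dict mapping row index -> bytearray of one full scanline."""
            x, y, w, h = struct.unpack(">HHHH", msg[:8])
            pixels = msg[8:]
            for row in range(h):
                line = framebuffer[y + row]
                line[2 * x:2 * (x + w)] = pixels[2 * row * w:2 * (row + 1) * w]  # RGB565 = 2 bytes/pixel
            return framebuffer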
  • It will be apparent that one or more embodiments, described herein, may be implemented in many different forms of software and hardware. Software code and/or specialized hardware used to implement embodiments described herein is not limiting of the invention. Thus, the operation and behavior of embodiments were described without reference to the specific software code and/or specialized hardware—it being understood that one would be able to design software and/or hardware to implement the embodiments based on the description herein.
  • Further, certain embodiments of the invention may be implemented as logic that performs one or more functions. This logic may be hardware-based, software-based, or a combination of hardware-based and software-based. Some or all of the logic may be stored on one or more tangible computer-readable storage media and may include computer-executable instructions that may be executed by a controller or processor. The computer-executable instructions may include instructions that implement one or more embodiments of the invention. The tangible computer-readable storage media may be volatile or non-volatile and may include, for example, flash memories, dynamic memories, removable disks, and non-removable disks.
  • While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims (15)

What is claimed is:
1. A method of interfacing with a serving device from a wearable device worn by a user, the method comprising:
establishing a lossless and wireless data link between the serving device and the wearable device;
sending, by the serving device, display information to the wearable device through the lossless and wireless data link; and
presenting, by the wearable device, the display information to a display on the wearable device.
2. The method of claim 1, wherein the wearable device is a headset device.
3. The method of claim 1, wherein the wearable device is a wrist watch device.
4. The method of claim 1, further comprising rendering the display information at the serving device, such that the display information is one of an image and a partial image.
5. The method of claim 1, further comprising rendering the display information at the wearable device, such that the display information is un-rendered image data.
6. The method of claim 1, wherein the serving device is one or more of a cellphone, a smartphone, a tablet device, a laptop computer, a notebook computer, a desktop computer, a network server, a wearable mobile communications device, a wearable mobile computer and a cloud-based computing entity.
7. The method of claim 1, further including sending, from the wearable device to the serving device, information to establish, at the serving device, one or more components necessary to support the lossless and wireless data link.
8. The method of claim 7, wherein the one or more components necessary to support the lossless and wireless data link includes (i) one or more of a custom WIFI connection and a custom Bluetooth profile, (ii) a driver and (iii) compression/decompression code.
9. The method of claim 1, wherein the lossless and wireless data link is a Bluetooth link operating with a custom Bluetooth profile.
10. A wearable device, comprising:
a display;
a receiver configured to receive display information over a lossless, wireless data channel; and
a driver configured to present the display information on the display.
11. The wearable device of claim 10, wherein the driver is further configured to render the display information into one of an image and a partial image and to present the one of an image and a partial image on the display.
12. The wearable device of claim 10, wherein the display information received is one of an image and a partial image.
13. The wearable device of claim 10, wherein the lossless, wireless data channel is based on a Bluetooth SPP profile.
14. The wearable device of claim 10, further including a code deployment module configured to convey a custom Bluetooth profile and driver to a serving device, to facilitate implementation of the lossless link at the serving device.
15. The wearable device of claim 14, wherein the code deployment module conveys an applet to the serving device to install the custom Bluetooth profile and driver on the serving device.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/912,519 US20180295656A1 (en) 2014-02-03 2018-03-05 Smart Bluetooth Headset For Speech Command

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201461935141P 2014-02-03 2014-02-03
US14/612,832 US9913302B2 (en) 2014-02-03 2015-02-03 Smart Bluetooth headset for speech command
US15/912,519 US20180295656A1 (en) 2014-02-03 2018-03-05 Smart Bluetooth Headset For Speech Command

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/612,832 Continuation US9913302B2 (en) 2014-02-03 2015-02-03 Smart Bluetooth headset for speech command

Publications (1)

Publication Number Publication Date
US20180295656A1 (en) 2018-10-11

Family

ID=52463243

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/612,832 Active 2035-06-08 US9913302B2 (en) 2014-02-03 2015-02-03 Smart Bluetooth headset for speech command
US15/912,519 Abandoned US20180295656A1 (en) 2014-02-03 2018-03-05 Smart Bluetooth Headset For Speech Command

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/612,832 Active 2035-06-08 US9913302B2 (en) 2014-02-03 2015-02-03 Smart Bluetooth headset for speech command

Country Status (7)

Country Link
US (2) US9913302B2 (en)
EP (1) EP3090531B1 (en)
JP (1) JP6518696B2 (en)
KR (1) KR102287182B1 (en)
CN (1) CN105960794B (en)
TW (1) TWI650034B (en)
WO (1) WO2015117138A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023085642A1 (en) * 2021-11-12 2023-05-19 삼성전자 주식회사 Operation control method and electronic device therefor

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9913302B2 (en) 2014-02-03 2018-03-06 Kopin Corporation Smart Bluetooth headset for speech command
US10397388B2 (en) * 2015-11-02 2019-08-27 Hand Held Products, Inc. Extended features for network communication
KR102459370B1 (en) * 2016-02-18 2022-10-27 삼성전자주식회사 Electronic device and method for controlling thereof
DK3242491T3 (en) * 2016-05-04 2018-11-26 D & L High Tech Company Ltd BLUETOOTH MICROPHONE
US9906893B2 (en) * 2016-06-16 2018-02-27 I/O Interconnect, Ltd. Method for making a host personal computer act as an accessory in bluetooth piconet
US10165612B2 (en) * 2016-06-16 2018-12-25 I/O Interconnected, Ltd. Wireless connecting method, computer, and non-transitory computer-readable storage medium
TW201806371A (en) * 2016-08-03 2018-02-16 絡達科技股份有限公司 Mobile electronic device and operation method therefor
US11099716B2 (en) 2016-12-23 2021-08-24 Realwear, Inc. Context based content navigation for wearable display
US10620910B2 (en) 2016-12-23 2020-04-14 Realwear, Inc. Hands-free navigation of touch-based operating systems
US10437070B2 (en) 2016-12-23 2019-10-08 Realwear, Inc. Interchangeable optics for a head-mounted display
US11507216B2 (en) 2016-12-23 2022-11-22 Realwear, Inc. Customizing user interfaces of binary applications
US10936872B2 (en) 2016-12-23 2021-03-02 Realwear, Inc. Hands-free contextually aware object interaction for wearable display
US10393312B2 (en) 2016-12-23 2019-08-27 Realwear, Inc. Articulating components for a head-mounted display
KR20180082043A (en) 2017-01-09 2018-07-18 삼성전자주식회사 Electronic device and method for connecting communication using voice
CN106847280B (en) * 2017-02-23 2020-09-15 海信集团有限公司 Audio information processing method, intelligent terminal and voice control terminal
JP2018156646A (en) * 2017-03-15 2018-10-04 キャンプ モバイル コーポレーション Method and system for chatting on mobile device using external device
CN108538289B (en) * 2018-03-06 2020-12-22 深圳市沃特沃德股份有限公司 Method, device and terminal equipment for realizing voice remote control based on Bluetooth
CN108648756A (en) * 2018-05-21 2018-10-12 百度在线网络技术(北京)有限公司 Voice interactive method, device and system
US10802791B2 (en) * 2019-03-01 2020-10-13 Bose Corporation Methods and systems for streaming audio and voice data
CN110265043B (en) * 2019-06-03 2021-06-01 同响科技股份有限公司 Adaptive lossy or lossless audio compression and decompression calculation method
CN110248032B (en) * 2019-06-19 2021-08-27 北京智合大方科技有限公司 High-efficiency telephone calling system
KR20220044530A (en) 2019-08-09 2022-04-08 가부시키가이샤 한도오따이 에네루기 켄큐쇼 Sound device and method of operation thereof
US11418875B2 (en) 2019-10-14 2022-08-16 VULAI Inc End-fire array microphone arrangements inside a vehicle
US11627417B2 (en) * 2020-03-26 2023-04-11 Expensify, Inc. Voice interactive system
US11687317B2 (en) * 2020-09-25 2023-06-27 International Business Machines Corporation Wearable computing device audio interface
CN113380249A (en) * 2021-06-11 2021-09-10 Beijing SoundAI Technology Co., Ltd. Voice control method, device, equipment and storage medium
WO2023058795A1 (en) * 2021-10-08 2023-04-13 LG Electronics Inc. Audio processing method and apparatus

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6714233B2 (en) * 2000-06-21 2004-03-30 Seiko Epson Corporation Mobile video telephone system
US20080144645A1 (en) * 2006-10-31 2008-06-19 Motorola, Inc. Methods and devices of a queue controller for dual mode bidirectional audio communication
US20080205664A1 (en) * 2007-02-27 2008-08-28 Samsung Electronics Co., Ltd. Multi-type audio processing system and method
US20080300025A1 (en) * 2007-05-31 2008-12-04 Motorola, Inc. Method and system to configure audio processing paths for voice recognition
US20100273417A1 (en) * 2009-04-23 2010-10-28 Motorola, Inc. Establishing Full-Duplex Audio Over an Asynchronous Bluetooth Link
US7844224B2 (en) * 2005-04-30 2010-11-30 Ivt (Beijing) Software Technology, Inc. Method for supporting simultaneously multiple-path Bluetooth audio applications
US7865211B2 (en) * 2004-03-15 2011-01-04 General Electric Company Method and system for remote image display to facilitate clinical workflow
US20110254829A1 (en) * 2010-04-16 2011-10-20 Sony Ericsson Mobile Communications Ab Wearable electronic device, viewing system and display device as well as method for operating a wearable electronic device and method for operating a viewing system
US8150323B2 (en) * 2006-08-08 2012-04-03 Samsung Electronics Co., Ltd Mobile communication terminal and method for inputting/outputting voice during playback of music data by using bluetooth
US9498128B2 (en) * 2012-11-14 2016-11-22 MAD Apparel, Inc. Wearable architecture and methods for performance monitoring, analysis, and feedback
US9554127B2 (en) * 2012-11-23 2017-01-24 Samsung Electronics Co., Ltd. Display apparatus, method for controlling the display apparatus, glasses and method for controlling the glasses
US9621976B2 (en) * 2012-01-10 2017-04-11 Samsung Electronics Co., Ltd. Glasses apparatus for watching display image
US9791937B2 (en) * 2013-11-29 2017-10-17 Lg Electronics Inc. Wearable device and method for controlling display of the same
US9972324B2 (en) * 2014-01-10 2018-05-15 Verizon Patent And Licensing Inc. Personal assistant application

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2008300A (en) 1931-11-27 1935-07-16 Worthington Pump & Mach Corp Sheave
US6285757B1 (en) * 1997-11-07 2001-09-04 Via, Inc. Interactive devices and methods
JP3927133B2 (en) * 2003-03-05 2007-06-06 Toshiba Corporation Electronic device and communication control method used in the same
US7856240B2 (en) * 2004-06-07 2010-12-21 Clarity Technologies, Inc. Distributed sound enhancement
JP2005354302A (en) * 2004-06-09 2005-12-22 Interenergy Co Ltd Bluetooth communication apparatus and function addition / revision system thereof
JP4745837B2 (en) * 2006-01-25 2011-08-10 KDDI Corporation Acoustic analysis apparatus, computer program, and speech recognition system
CN101399568B (en) * 2007-09-29 2013-07-31 Lenovo (Beijing) Co., Ltd. Device for using mobile terminal as input output device of computer, system and method thereof
US8055307B2 (en) 2008-01-18 2011-11-08 Aliphcom, Inc. Wireless handsfree headset method and system with handsfree applications
US8983552B2 (en) * 2011-09-02 2015-03-17 Gn Netcom A/S Battery powered electronic device comprising a movable part and adapted to be set into shipping mode
US9913302B2 (en) 2014-02-03 2018-03-06 Kopin Corporation Smart Bluetooth headset for speech command

Also Published As

Publication number Publication date
US9913302B2 (en) 2018-03-06
CN105960794B (en) 2019-11-08
CN105960794A (en) 2016-09-21
TW201543942A (en) 2015-11-16
WO2015117138A1 (en) 2015-08-06
JP6518696B2 (en) 2019-05-22
EP3090531B1 (en) 2019-04-10
EP3090531A1 (en) 2016-11-09
US20150223272A1 (en) 2015-08-06
KR102287182B1 (en) 2021-08-05
KR20160115951A (en) 2016-10-06
TWI650034B (en) 2019-02-01
JP2017513411A (en) 2017-05-25

Similar Documents

Publication Publication Date Title
US20180295656A1 (en) Smart Bluetooth Headset For Speech Command
US9940923B2 (en) Voice and text communication system, method and apparatus
JP2017513411A5 (en)
JP2015060423A (en) Voice translation system, method of voice translation and program
AU2014357638B2 (en) Multi-path audio processing
US10817674B2 (en) Multifunction simultaneous interpretation device
CN105551491A (en) Voice recognition method and device
WO2014194273A2 (en) Systems and methods for enhancing targeted audibility
CN105825854A (en) Voice signal processing method, device, and mobile terminal
US11056106B2 (en) Voice interaction system and information processing apparatus
KR101820369B1 (en) Bluetooth Communication Method Of Smartphone To A Headset
JP2019110447A (en) Electronic device, control method of electronic device, and control program of electronic device
US20140372111A1 (en) Voice recognition enhancement
JP2012203172A (en) Voice output device, voice output method, and program
Spittle The Applications and Challenges of Processing Audio over Bluetooth
CN114449408A (en) Communication device and method for adjusting side-tone volume thereof
JP2005244394A (en) Portable telephone with image pick-up function
JP2008072582A (en) Vehicle hands-free system
JP2014179745A (en) Mobile terminal device, and program for controlling the same
JPH118711A (en) Telephone system
TW200640187A (en) Mobile communication apparatus with far-end speech control function and far-end speech control method

Legal Events

Date Code Title Description
AS Assignment

Owner name: KOPIN CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PARKINSON, CHRISTOPHER;FAN, DASHEN;HERRMANN, FREDERICK;AND OTHERS;SIGNING DATES FROM 20150211 TO 20150623;REEL/FRAME:046416/0447

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION