WO2017099938A1 - System for sound capture and generation via nasal vibration - Google Patents

System for sound capture and generation via nasal vibration

Info

Publication number
WO2017099938A1
Authority
WO
WIPO (PCT)
Prior art keywords
vibration
audio
voice
electronic signal
audio data
Prior art date
2015-12-10
Application number
PCT/US2016/061420
Other languages
English (en)
French (fr)
Inventor
Paulo LOPEZ MEYER
Hector A. Cordourier Maruri
Julio C. ZAMORA ESQUIVEL
Alejandro IBARRA VON BORSTEL
Jose R. Camacho Perez
Willem M. Beltman
Original Assignee
Intel Corporation
Priority date
2015-12-10
Filing date
2016-11-10
Publication date
2017-06-15
Priority claimed from US14/965,095 (granted as US9872101B2)
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to DE112016005688.5T (published as DE112016005688T5)
Priority to JP2018523483A (published as JP6891172B2)
Priority to CN201680065774.XA (published as CN108351524A)
Publication of WO2017099938A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R17/00 Piezoelectric transducers; Electrostrictive transducers
    • H04R17/02 Microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H04R1/028 Casings; Cabinets; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07 Applications of wireless loudspeakers or wireless microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/13 Hearing devices using bone conduction transducers

Definitions

  • the present disclosure relates to electronic communication, and more particularly, to a system for capturing a user's voice and generating sound for the user utilizing nasal resonation.
  • global-area networks (GANs)
  • wide-area networks (WANs)
  • local-area networks (LANs)
  • personal-area networks (PANs)
  • Hands-free peripheral equipment may provide interfaces over which a user may interact with a mobile device that remains stowed away, in a charger, etc. This interaction may take place over a wired or wireless communication link. Examples of hands-free peripheral equipment may include, but are not limited to, speakerphones, headsets, microphones, remote controls, etc. While these devices may be helpful, they are not all-purpose fixes.
  • Headsets may facilitate hands-free communication, but may also experience problems in certain noisy situations. Wearing a headset also requires a user to maintain another device that they would not normally wear unless hands-free operation was desired or required, and in some regions wearing a headset (e.g., an earpiece) may have negative stylistic implications.
  • FIG. 1 illustrates an example system for voice capture via nasal vibration sensing in accordance with at least one embodiment of the present disclosure
  • FIG. 2 illustrates an example configuration for a sensor in accordance with at least one embodiment of the present disclosure
  • FIG. 3 illustrates example operations for voice capture via nasal vibration sensing in accordance with at least one embodiment of the present disclosure
  • FIG. 4 illustrates an example configuration for a sensor further operating as a transducer in accordance with at least one embodiment of the present disclosure
  • FIG. 5 illustrates example operations for sound capture and generation via nasal vibration in accordance with at least one embodiment of the present disclosure.
  • the present disclosure pertains to a system for voice capture via nasal vibration sensing.
  • a system worn by a user may be able to sense vibrations through the nose of the user when the user speaks, generate an electronic signal based on the sensed vibration and generate voice data based on the electronic signal.
  • the system may capture a user's voice for use in, for example, dictation, telephonic communications, etc., while also screening out external noise (e.g., based on the natural sound dampening properties of the human skull).
  • An example system may include a wearable frame (e.g., an eyeglass frame) on which is mounted at least one sensor and a device. The at least one sensor may sense vibration in the nose of a user and may generate the electronic signal based on the vibration.
  • the device may receive the electronic signal from the at least one sensor and may generate voice data based on the electronic signal.
  • Other features may include, for example, compensation for situations where vibration cannot be sensed, sound generation based on received audio data for use in, for example, telephonic communications, etc.
  • an example system to capture a voice of a user may comprise at least a frame, at least one sensor mounted to the frame and a device mounted to the frame.
  • the frame may be wearable by a user.
  • the at least one sensor may be to generate an electronic signal based on vibration sensed in a nose of the user when the user talks.
  • the device may be to at least receive the electronic signal from the at least one sensor and process the electronic signal to generate voice data.
  • the frame may be for eyeglasses.
  • the at least one sensor may be incorporated within at least one nosepiece for the eyeglasses. It may also be possible for two sensors to be embedded in two sides of the nosepiece.
  • the two sensors may be coupled in series and the device is to receive a combined electronic signal generated by the two sensors. Alternatively, the device may be to select to process the electronic signal generated from one of the two sensors.
  • the at least one sensor may comprise a piezoelectric diaphragm to generate the electronic signal.
  • the device may comprise at least control circuitry to generate the voice data from the electronic signal.
  • the control circuitry may also be to determine whether the voice data includes a local command, and if determined to include a local command, perform at least one activity based on the local command.
  • the device may also comprise at least communication circuitry to transmit the voice data to an external device and at least user interface circuitry to allow the user to interact with the system.
  • the user interface circuitry is to generate sound based on audio data received from the external device via the communication circuitry.
  • an example method for capturing voice data from a user may comprise activating sensing for nasal vibration in a wearable system, sensing nasal vibration with at least one sensor in the wearable system, generating an electronic signal based on the nasal vibration and generating voice data based on the electronic signal.
  • FIG. 1 illustrates an example system 100 for voice capture via nasal vibration sensing in accordance with at least one embodiment of the present disclosure. While examples of specific implementations (e.g., in eyeglasses) and/or technologies (e.g., piezoelectric sensors, Bluetooth wireless communications, etc.) will be employed herein, these examples are presented merely to provide a readily comprehensible perspective from which the more generalized devices, systems, methods, etc. taught herein may be understood. Other applications, configurations, technologies, etc. may result in implementations that remain consistent with the teachings presented herein.
  • System 100 may comprise a frame 102 on which at least one sensor 104 (e.g., hereafter, "sensor 104") and device 106 may be mounted.
  • “Mounting” may include sensor 104 and device 106 being attached to frame 102 via mechanical attachment (e.g., screw, nail or other fastener), adhesive attachment (e.g., a glue, epoxy, etc.) or being incorporated within the structure of frame 102.
  • Frame 102 is disclosed as a pair of eyeglasses only for the sake of explanation. Eyeglasses make an appropriate foundation on which various features consistent with the present disclosure may be implemented. Moreover, since eyeglasses, sunglasses, safety glasses, etc. are already routinely worn by people, it also means that there is little barrier to adoption of the technology.
  • the teachings disclosed herein may alternatively be embodied in different form factors including, for example, any structure that touches, or is at least in proximity to, the nose and may be able to act as a platform for the variety of systems, devices, components, etc. that are described herein.
  • Sensor 104 may comprise vibration sensing circuitry.
  • the sensing circuitry may comprise, for example, piezoelectric components such as a diaphragm. Piezoelectric diaphragms may convert vibration (e.g., mechanical pressure waves) into electronic signals.
  • The vibration sensing circuitry in sensor 104 may be in contact with, or at least proximate to, the nose of a user wearing frame 102. For example, the bridge of the user's nose is bone, and may resonate when the user speaks. Sensor 104 may be able to detect the vibration caused by the nasal bones resonating with the user's voice, and may convert the sensed vibration into an electronic signal that is then provided to device 106.
  • Device 106 may be configured to perform activities in system 100 such as, for example, generating voice data from the electronic signal generated by sensor 104, transmitting the voice data to external device 116, receiving audio data from external device 116, generating sound based on the received audio data, identifying and processing local commands, etc.
  • Device 106 may comprise, for example, control circuitry 108, communication circuitry 110, user interface circuitry 112 and power circuitry 114.
  • Control circuitry 108 may comprise at least data processing and memory resources.
  • Data processing resources may include, for example, one or more processors situated in separate components, or alternatively one or more processing cores embodied in a component (e.g., in a System-on-a-Chip (SoC) configuration), and any processor-related support circuitry (e.g., bridging interfaces, etc.).
  • Example processors may include, but are not limited to, various x86-based microprocessors available from the Intel Corporation including those in the Pentium®, Xeon®, Itanium®, Celeron®, Atom®, Quark® and Core i-series product families, Advanced RISC (e.g., Reduced Instruction Set Computing) Machine or "ARM" processors, etc.
  • support circuitry may include chipsets (e.g., Northbridge, Southbridge, etc. available from the Intel Corporation) to provide an interface through which the data processing resources may interact with other system components that may be operating at different speeds, on different buses, etc. in device 106. Some or all of the functionality commonly associated with the support circuitry may also be included in the same physical package as the processor (e.g., such as in the Sandy Bridge family of processors available from the Intel Corporation).
  • the data processing resources may be configured to execute various instructions in device 106. Instructions may include program code configured to cause the data processing resources to perform activities related to reading data, writing data, processing data, formulating data, converting data, transforming data, etc. Information (e.g., instructions, data, etc.) may be stored in the memory resources.
  • The memory resources may comprise random access memory (RAM) or read-only memory (ROM) in a fixed or removable format. RAM may include volatile memory configured to hold information during the operation of device 106 such as, for example, static RAM (SRAM) or dynamic RAM (DRAM). ROM may include non-volatile (NV) memory circuitry configured based on BIOS, UEFI, etc., as well as programmable memories such as electronically programmable ROMs (EPROMs), Flash, etc.
  • Other fixed/removable memory may include, but are not limited to, magnetic memories such as, for example, floppy disks, hard drives, etc., electronic memories such as solid state flash memory (e.g., embedded multimedia card (eMMC), etc.), removable memory cards or sticks (e.g., micro storage device (uSD), USB, etc.), optical memories such as compact disc-based ROM (CD-ROM), Digital Video Disks (DVD), Blu-Ray Disks, etc.
  • Communication circuitry 110 may manage communications-related operations for device 106, which may include resources configured to support wired and/or wireless communications.
  • Device 106 may comprise multiple sets of communication circuitry 110 (e.g., including separate physical interface circuitry for wired protocols and/or wireless radios).
  • Wired communications may include serial and parallel wired mediums such as, for example, Ethernet, Universal Serial Bus (USB), Firewire, Thunderbolt, Digital Video Interface (DVI), High-Definition Multimedia Interface (HDMI), etc.
  • Wireless communications may include, for example, close-proximity wireless mediums (e.g., radio frequency (RF) such as based on the RF Identification (RFID) or Near Field Communications (NFC) standards, etc.), short-range wireless mediums (e.g., Bluetooth, WLAN, Wi-Fi, etc.), long-range wireless mediums (e.g., cellular, satellite, etc.), etc.
  • communication circuitry 110 may be configured to prevent wireless communications from interfering with each other. In performing this function, communication circuitry 110 may schedule communication activities based on, for example, the relative priority of messages awaiting transmission.
  • User interface circuitry 112 may include hardware and/or software to allow users to interact with device 106 such as, for example, various input mechanisms (e.g., microphones, switches, buttons, knobs, keyboards, speakers, touch-sensitive surfaces, one or more sensors configured to capture images and/or sense proximity, distance, motion, gestures, orientation, biometric data, etc.) and various output mechanisms (e.g., speakers, displays, lighted/flashing indicators, electromechanical components for vibration, motion, etc.).
  • the hardware in user interface circuitry 112 may be incorporated within device 106 and/or may be coupled to device 106 via a wired or wireless communication medium.
  • Power circuitry 114 may include internal power sources (e.g., battery, fuel cell, etc.) and/or external power sources (e.g., power grid, electromechanical or solar generator, external fuel cell, etc.) and related circuitry configured to supply device 106 with the power needed to operate.
  • External device 116 may include equipment that is at least able to process the voice data generated by device 106.
  • Examples of external device 116 may include, but are not limited to, a mobile communication device such as a cellular handset or a smartphone based on the Android® OS from the Google Corporation, iOS® or Mac OS® from the Apple Corporation, Windows® OS from the Microsoft Corporation, Linux® OS, Tizen® OS and/or other similar operating systems that may be deemed derivatives of Linux® OS from the Linux Foundation, Firefox® OS from the Mozilla Project, Blackberry® OS from the Blackberry Corporation, etc., a mobile or typically stationary computing device, a server, a high-performance computing (HPC) architecture, etc.
  • system 100 may be worn by a user and activated manually by user interaction with user interface circuitry 112, or automatically by the user activating external device 116, activating an application on external device 116, speaking a local command, etc.
  • device 106 may be in a power conservation mode and the speaking of a certain sound, word, phrase, etc. may be recognized by device 106 (e.g., in electronic signal form or after converted to voice data) as a local command to activate system 100 (e.g., transition device 106 from the power conservation mode to an active mode).
  • Other local commands may, for example, deactivate system 100, mute system 100 (e.g., temporarily stop sensing operations or transmission operations), increase or decrease speaker volume, etc.
  • sensor 104 may sense vibration in the nose of the user (e.g., the bony bridge of the user's nose), and may generate an electronic signal based on the vibration.
  • the electronic signal may be received by device 106, which may generate voice data based on the electronic signal.
  • control circuitry 108 may convert the analog electronic signal into digital voice data.
  • control circuitry 108 may store the voice data in memory for later retrieval. If engaged in a telephone call then communication circuitry 110 may transmit the voice data to external device 116 (e.g., a mobile communication device) and may receive audio data from external device 116 pertaining to the other party in the call.
  • external device 116 e.g., a mobile communication device
  • User interface circuitry 112 may then generate sound via, for example, speaker 118 so that the user may interact with the other caller.
  • The sound of the user's own voice may be generated through speaker 118 to provide auditory feedback to the user of system 100.
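  • As a concrete illustration of the flow just described (sense vibration, generate an electronic signal, produce voice data, exchange it with an external device), here is a minimal Python sketch. The `sensor`, `link` and `speaker` objects, the sample rate and the block size are assumptions for illustration, not details taken from the disclosure.

```python
# Illustrative sketch only: `sensor`, `link` and `speaker` are assumed
# interfaces standing in for sensor 104, communication circuitry 110 and
# speaker 118; none of these names come from the disclosure itself.
import numpy as np

SAMPLE_RATE_HZ = 8000   # telephony-grade sampling of the piezo signal
BLOCK = 160             # 20 ms of samples per block at 8 kHz

def digitize(analog_block: np.ndarray) -> bytes:
    """Convert normalized analog sensor samples to 16-bit PCM voice data."""
    return (np.clip(analog_block, -1.0, 1.0) * 32767).astype(np.int16).tobytes()

def capture_loop(sensor, link, speaker, active) -> None:
    """Sense nasal vibration, send voice data out, play incoming audio."""
    while active():
        voice = digitize(sensor.read(BLOCK))   # electronic signal -> voice data
        link.send_voice(voice)                 # e.g., to a paired handset
        audio = link.poll_audio()              # far-end audio data, if any
        if audio is not None:
            speaker.play(audio)                # auditory path via speaker 118
```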
  • FIG. 2 illustrates an example configuration for sensor 104' in accordance with at least one embodiment of the present disclosure.
  • FIG. 2 shows sensor 104' within a nosepiece 200.
  • Nosepiece 200 may comprise, for example, at least sensing circuitry 202 affixed to structural support 204.
  • Sensing circuitry 202 may include, for example, a piezoelectric diaphragm to convert vibration 206 into an electronic signal. Vibration 206 may occur due to cranial bones resonating from a user talking. This effect has dual benefits in that it allows the user's voice to be captured while external noise is screened out by the human skull's natural dampening ability.
  • the use of piezoelectric diaphragms is beneficial in that they are able to accurately generate an electronic signal indicative of voice and do not require external power (e.g., the pressure waves may compress a piezoelectric crystal to generate the electronic signal).
  • While wire 208 is shown in FIG. 2 to convey the electronic signal to device 106, wireless communication may also be used to transmit the electronic signal.
  • a variety of sensor configurations may be implemented consistent with the present disclosure. For example, given that two nosepieces 200 exist in a common pair of glasses, at least one of the two nosepieces 200 may include sensor 104'. In another example implementation, both nosepieces 200 may include sensor 104'. The sensors 104' in each nosepiece 200 may be wired in series to generate stronger electronic signals. In another embodiment, the sensors 104' in each nosepiece 200 may be wired individually, and resources in device 106 (e.g., control circuitry 108) may then select the sensor 104' to employ based on the strength of the electronic signals received from each sensor 104'.
  • In this manner, system 100 may be able to account for the particularities in each user's nasal bones (e.g., breaks, natural deformities such as a deviated septum, etc.) and select the particular sensor 104' that may provide the strongest and cleanest electronic signal to use in generating voice data; a simple selection heuristic is sketched below.
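  • A minimal sketch of such a selection, assuming the device compares the RMS energy of blocks captured from each nosepiece sensor; the criterion and the dictionary interface are illustrative assumptions, not specified by the disclosure.

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square amplitude of a signal block."""
    return float(np.sqrt(np.mean(np.square(x))))

def select_sensor(blocks: dict) -> str:
    """Pick the sensor whose latest block carries the most signal energy."""
    return max(blocks, key=lambda name: rms(blocks[name]))

# Example: the right nosepiece couples better to this user's nasal bridge.
t = np.linspace(0.0, 1.0, 8000)
left = 0.05 * np.random.randn(8000)                    # weak coupling: noise
right = np.sin(2 * np.pi * 150 * t) + 0.05 * np.random.randn(8000)
print(select_sensor({"left": left, "right": right}))   # prints "right"
```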
  • FIG. 3 illustrates example operations for voice capture via nasal vibration sensing in accordance with at least one embodiment of the present disclosure.
  • Operations in FIG. 3 shown with a dotted outline may be optional based on the particulars of an implementation including, for example, the capabilities of the system (e.g., of the sensors, devices, etc. within the system), the configuration of the system, the use for which the system is intended, etc.
  • nasal vibration sensing may be activated. Activation may be manual (e.g. instigated by a user of the system) or automatic (e.g., triggered by external device activity, local commands, etc.).
  • a determination may be made in operation 302 as to whether nasal vibration is sensed by at least one sensor in the system.
  • If vibration is not sensed, at least one corrective action may occur in operation 304. Examples of corrective action may include generating an audible, visible and/or tactile notification to the user, reinitiating the system as illustrated by the arrow leading back to operation 300, the selection of another sensor in the system (e.g., when the system is eyeglasses, of a sensor in the opposite nosepiece), etc.
  • If vibration is sensed, then in operation 306 voice data may be generated based on an electronic signal generated by the at least one sensor. A determination may be made in operation 308 as to whether the electronic signal and/or voice data included a local command.
  • For example, a set of local commands may be configured in the system, and control circuitry in the system may compare the electronic signal and/or voice data to the set of local commands to determine if a match exists (a matching sketch follows below). If in operation 308 it is determined that a local command was received, then in operation 310 at least one activity may be executed based on the sensed local command. Examples of activities that may be performed include, but are not limited to, turning the system on/off, adjusting system volumes, temporarily disabling voice capture and/or voice data transmission, etc.
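  • A minimal sketch of such a comparison, assuming the electronic signal has already been converted to text by some upstream recognizer and that matching is an exact lookup; the command vocabulary and action names are hypothetical.

```python
# Hypothetical command table; the disclosure does not fix a vocabulary,
# and the speech-to-text step is assumed to have happened upstream.
LOCAL_COMMANDS = {
    "wake up": "activate_system",
    "power off": "deactivate_system",
    "mute": "pause_sensing",
    "volume up": "increase_volume",
    "volume down": "decrease_volume",
}

def match_local_command(recognized_text: str):
    """Return the action for a recognized utterance, or None if no match."""
    return LOCAL_COMMANDS.get(recognized_text.strip().lower())

assert match_local_command("Volume Up ") == "increase_volume"
assert match_local_command("hello world") is None
```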
  • A determination in operation 308 that a local command was not received may be followed by transmitting the voice data to the external device (e.g., a mobile communication device like a smartphone) in operation 312.
  • In operation 314, audio data (e.g., voice data corresponding to other participants in a telephone call) may be received from the external device.
  • Sound may be generated based on the received audio data in operation 316, which may be followed by a return to operation 302 to continue nasal vibration sensing.
  • device 106 may also cause sensor 104 to operate "in reverse" to communicate sound to a user by inducing vibration in the nose (e.g., nasal bone) of the user. Vibration induced in the nasal bone is conveyed through the skull to sound sensing organs in the inner ear (e.g., the cochlea), which interpret the induced vibration as sound. This operation may occur even if the user has some defect, injury, etc. that ordinarily would prevent the user from hearing sound (e.g., a ruptured eardrum).
  • device 106 may have audio data, or may receive audio data, that is used to cause sensor 104 to induce audio vibration in the nose of the user.
  • the communication may be one-way (e.g., wherein the user only listens to the incoming sound) or two-way (e.g., wherein the user both listens to incoming sound and sound is also captured from the user).
  • Different modes for facilitating two-way communication will be discussed herein including, but not limited to, one-channel mode and signal modulation mode.
  • a system to capture and generate sound may comprise, for example, at least a frame wearable by a user, sensing circuitry mounted to the frame and a device also mounted to the frame.
  • the sensing circuitry may be to sense voice vibration induced in the user's nose by the user's voice, generate an electronic signal based on the sensed voice vibration and induce audio vibration in the nose based on audio data.
  • the device may be to at least control the operation of the sensing circuitry.
  • the frame may be an eyeglass frame comprising at least one nosepiece to contact the nose, the at least one nosepiece including at least the sensing circuitry.
  • the sensing circuitry may comprise at least one piezoelectric diaphragm to generate the electronic signal and induce the audio vibration.
  • the sensing circuitry may comprise a first sensing circuit to sense the voice vibration and a second sensing circuit to induce the audio vibration.
  • an example device may comprise at least control circuitry to determine whether the system is initiating or engaged in two-way communication. If the control circuitry determines that the system is not initiating or engaged in two-way communication, the control circuitry may then generate voice data based on the electronic signal or cause the sensing circuitry to induce the audio vibration based on the audio data. If it is determined that the system is initiating or engaged in two-way communication, the control circuitry may operate in single channel mode or signal modulation mode. In single channel mode the control circuitry may cause the sensing circuitry to generate an indication at least when audio data is incoming, cause the sensing circuitry to induce the audio vibration based on the audio data and, when no audio data is incoming, generate the voice data based on the electronic signal.
  • In signal modulation mode, the control circuitry may modulate the audio data, cause the sensing circuitry to induce the audio vibration based on the modulated data, receive the electronic signal, filter out the audio vibration from the electronic signal and generate the voice data based on the electronic signal.
  • the device may further comprise at least communication circuitry to at least one of transmit the voice data to an external device or receive the audio data from the external device.
  • an example method for capturing and generating sound may comprise activating a system wearable by a user, determining whether the system is initiating or engaged in two-way communication, and controlling, based on the determination, sensing circuitry in the system to at least one of sense voice vibration induced in the user's nose by the user's voice and generate an electronic signal based on the voice vibration, or induce audio vibration in the nose based on audio data.
  • FIG. 4 illustrates an example configuration for a sensor further operating as a transducer in accordance with at least one embodiment of the present disclosure.
  • In the embodiments discussed above, sensor 104' was relied upon only to capture sound (e.g., voice) generated locally by a user.
  • FIG. 4 shows an embodiment wherein sensor 104' may operate as a transducer (e.g., both to capture local sound and also induce audio vibration in a user's nose) so that the user may hear sound based on audio data.
  • Audio data may include any data stored in, or received by, device 106 for use in causing sensor 104' to generate audio vibration. Audio data is described in the above examples as being a voice of another party in a telephone call in which the user of system 100 is participating.
  • system 100 may support one-way or two-way communications.
  • One-way communications may comprise a user of system 100 either capturing voice data (e.g., recording for dictation, recording reminders, etc.) or listening to sound induced by nasal vibration.
  • voice data e.g., recording for dictation, recording reminders, etc.
  • a user may only listen to sound when, for example, they are listening to music, spoken content such as recorded messages, an audio book, a lecture or other similar presentation.
  • Audio data may be stored in device 106 (e.g., in memory within control circuitry 108) or may be received in device 106 (e.g., via communication circuitry 110).
  • a user may interact with user interface circuitry 112 in device 106 and/or external device 116 (e.g., with an audio application in external device 116) to select audio data and trigger the playback of audio data.
  • control circuitry 108 may process the audio data, if required (e.g., to adjust the volume, frequency, pitch, tone, etc. of the audio data based on, for example, the manner in which sensor 104' induces the audio vibration), and generate a signal (e.g., driving signal) that causes sensor 104' to induce the audio vibration in the nose of the user.
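  • A sketch of what that audio-to-driving-signal step might look like follows; the disclosure only says the audio may be adjusted for the manner in which sensor 104' induces vibration, so the gain and voltage limit below are assumed values.

```python
import numpy as np

def make_driving_signal(pcm16: np.ndarray, gain: float = 0.5,
                        max_volts: float = 3.0) -> np.ndarray:
    """Shape 16-bit PCM audio into a bounded voltage waveform for the piezo.

    `gain` and `max_volts` are assumed values standing in for whatever
    volume/frequency shaping control circuitry 108 actually applies.
    """
    x = pcm16.astype(np.float64) / 32768.0      # normalize to [-1, 1)
    return np.clip(gain * x, -1.0, 1.0) * max_volts
```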
  • Two-way communications may involve both capturing sound locally generated by the user and also inducing audio vibration based on audio data. The most typical example of two-way communication is the user of system 100 talking on the phone.
  • FIG. 4 illustrates that, in addition to capturing vibration 206 generated by the user's voice (hereafter, "voice vibration 206'"), sensor 104' may further be able to induce audio vibration 400 (e.g., based on audio data). Examples of audio vibration are shown at 400A and 400B in FIG. 4.
  • a single sensing circuit 202 may both sense voice vibration 206' and induce audio vibration 400A. This may occur in that the actual sensor (e.g., a piezoelectric diaphragm) may continuously sense voice vibration 206' in a substantially passive manner until provided with a signal from device 106 that causes the piezoelectric diaphragm to actuate and induce audio vibration 400A.
  • Transitioning from passively sensing voice vibration 206' to actively inducing audio vibration 400A may principally be controlled by device 106, which may selectively receive electronic signals generated by passive sensing or generate driving signals to induce audio vibration 400A.
  • sensor 104' may include more than one sensing circuit 202.
  • multiple piezoelectric diaphragms may be incorporated in, or may at least be coupled to, one or both nosepieces 200 (e.g., depending on nosepiece and/or diaphragm size, shape, etc.).
  • the plurality of piezoelectric diaphragms may be coupled in series to generate stronger electronic signals when sensing voice vibration 206' and/or induce stronger audio vibration 400.
  • nosepieces 200 may each include piezoelectric diaphragms to generate audio vibrations 400A and 400B, respectively. Audio vibrations 400A and 400B may be induced to have amplitude or phase differences so that the source for each of the induced audio vibrations may be determined. The source indication may be utilized for, for example, calibration, debugging, etc.
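  • One plausible way to realize such source tagging is a fixed gain and delay on one channel; the following sketch illustrates that assumption (the specific values, and the gain-plus-delay scheme itself, are illustrative, not taken from the disclosure).

```python
import numpy as np

def tag_sources(audio: np.ndarray, gain_b: float = 0.8,
                delay_samples: int = 8):
    """Drive diaphragm A with the audio as-is and diaphragm B with a
    slightly attenuated, slightly delayed copy (assumes len(audio) is
    larger than delay_samples), so that audio vibrations 400A and 400B
    remain distinguishable for calibration or debugging."""
    drive_a = audio
    drive_b = gain_b * np.concatenate(
        (np.zeros(delay_samples), audio[:-delay_samples]))
    return drive_a, drive_b
```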
  • each of the plurality of piezoelectric diaphragms may be dedicated to only sensing voice vibration 206' or inducing audio vibration 400.
  • sensing circuitry 202 may be dedicated to only sensing voice vibration 206', while sensing circuitry 402 in the other nosepiece 200 may be dedicated to only inducing audio vibration 400B.
  • Second coupling 404 may be used to couple sensing circuitry 402 in the other nosepiece 200 to device 106. While illustrated as a wire, coupling 404 may also be a wireless connection (e.g., via Bluetooth, NFC, etc.).
  • At least one challenge posed by two-way operation is managing operation of sensor 104'.
  • Voice vibration 206' caused by the user's voice and audio vibration 400 induced based on incoming audio data could foreseeably occur concurrently, and thus, could interfere with each other.
  • a single piezoelectric diaphragm cannot both sense and generate vibration at the same time.
  • voice vibration 206' would be missed when audio vibration 400A is induced.
  • a sensing piezoelectric diaphragm may sense both voice vibration 206' and audio vibration 400B when they occur concurrently. Voice data generated using this captured mix of voice vibration 206' and audio vibration 400B may be garbled, unintelligible, etc., and thus, unusable.
  • control circuitry 108 in device 106 may facilitate two-way communication by operating in a mode that avoids the above situations. While various operational modes are disclosed herein, these operational modes are offered merely as examples of ways to avoid the above issues, and are not intended to limit the disclosed embodiments to any particular manner of operation.
  • A first example mode of operation is single channel operation. In single channel operation, control circuitry 108 may limit sensor 104' to only sensing voice vibration 206' or generating audio vibration 400 at any one time. An indication such as, for example, a short tone or a short vibration (e.g., from electromechanical circuitry in user interface circuitry 112) may indicate to the user at least when audio data is incoming.
  • the incoming audio indication may help prevent the user from attempting to talk over audio vibration 400, which would result in the voice of the user not being recorded.
  • another indication may inform the user that all of the incoming audio data has been presented (e.g., via audio vibration 400), allowing the user to proceed with voice capture.
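  • A sketch of this half-duplex discipline follows; the `link` and `transducer` objects and the tone indications are assumed interfaces, not APIs from the disclosure.

```python
def single_channel_mode(link, transducer):
    """Half-duplex loop: play incoming audio bracketed by indications,
    otherwise capture the user's voice. `link` and `transducer` are
    assumed stand-ins for communication circuitry 110 and sensor 104'."""
    while link.call_active():
        audio = link.poll_audio()
        if audio is not None:
            transducer.indicate()         # e.g., short tone: audio incoming
            transducer.induce(audio)      # audio vibration 400
            transducer.indicate()         # e.g., short tone: audio complete
        else:
            voice = transducer.sense()    # voice vibration 206'
            link.send_voice(voice)        # forward the generated voice data
```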
  • control circuitry 108 may modulate the audio data, or a signal generated from the audio data for driving sensor 104' to induce audio vibration 400, so that noise including audio vibration 400 may later be filtered out from the capture of voice vibration 206'.
  • Modulation generally comprises the modification of at least one property of a waveform. For example, the frequency of the signal driving sensor 104' may be modified to make it higher or lower than the expected frequency of voice vibration 206'.
  • The modulation may later be employed by control circuitry 108 to filter out the captured noise (e.g., audio vibration 400) from the desired waveform (e.g., voice vibration 206').
  • The filtered waveform may then be converted to voice data for storage, transmission, etc.
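  • One plausible realization, sketched below, shifts the driving signal above the voice band and low-pass filters the capture path. The carrier frequency, band edge and sample rate are assumptions; the disclosure does not fix a particular modulation scheme.

```python
import numpy as np
from scipy.signal import butter, sosfilt

RATE_HZ = 16000
VOICE_BAND_HZ = 3400   # assumed upper edge of voice vibration 206'

def modulate_drive(audio: np.ndarray, carrier_hz: float = 6000.0) -> np.ndarray:
    """Shift the audio driving signal above the voice band before playback."""
    t = np.arange(audio.size) / RATE_HZ
    return audio * np.cos(2.0 * np.pi * carrier_hz * t)

def recover_voice(captured_mix: np.ndarray) -> np.ndarray:
    """Low-pass the captured mix so only the unmodulated voice band remains."""
    sos = butter(6, VOICE_BAND_HZ, btype="low", fs=RATE_HZ, output="sos")
    return sosfilt(sos, captured_mix)
```

With this split, voice vibration 206' below the assumed 3.4 kHz band edge passes the filter untouched, while the frequency-shifted audio energy is rejected.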
  • FIG. 5 illustrates example operations for sound capture and generation via nasal vibration in accordance with at least one embodiment of the present disclosure.
  • the operations illustrated in FIG. 5 relate to different operational modes available in a system. Not all of the operations are required for all of the operational modes. Thus, while the entire operational flow shown in FIG. 5 allows a system to select a particular operational mode from the available operational modes, a subset of these operations may be used in systems that operate utilizing fewer operational modes.
  • In operation 500 the system may be activated. A determination may then be made in operation 502 as to whether the system is initiating or engaged in two-way communication. If in operation 502 it is determined that the system is not initiating or engaged in two-way communication, then in operation 504 a further determination may be made as to whether to only capture the voice of the user.
  • a determination in operation 504 that only user voice will be captured may be followed by operations 300 to 312 in FIG. 3 to capture the user's voice via nasal vibration sensing. Operation 312 may be followed by a return to operation 502.
  • A determination that user voice will not be captured may be followed by operation 506, wherein incoming audio data may be received from an external device.
  • the dotted outline of operation 506 indicates that the operation may only be performed if necessary (e.g., the audio data may already be present in the system if it is not being downloaded, streamed, etc. from an external device).
  • In operation 508, audio vibration may be induced in the nose (e.g., nasal bone) of the user based on the audio data. Operation 508 may be followed by a return to operation 502.
  • If in operation 502 it is determined that the system is initiating or engaged in two-way communication, then in operation 510 a further determination may be made as to whether the system will utilize a single channel operational mode. If in operation 510 it is determined that the system will utilize a single channel operational mode, then in operation 512 a determination may be made as to whether there is incoming audio data (e.g., from another person participating in a phone call). A determination in operation 512 that there is no incoming audio data may be followed by operations 300 to 312 in FIG. 3 to capture the user's voice via nasal vibration sensing. Operation 312 may be followed by a return to operation 502.
  • If in operation 512 it is determined that there is incoming audio data, then in operation 514 the system may indicate the incoming audio data to the user (e.g., with a tone, vibration, etc.). Audio data may then be received in operation 516, and in operation 518 audio vibration may be induced in the nose of the user based on the audio data.
  • In operation 520, the end of the audio data may be indicated to the user (e.g., with a tone, vibration, etc.) so that the user may be informed that he/she may talk (e.g., that voice capture may resume).
  • Operation 520 may be followed by operations 300 to 312 in FIG. 3 to capture the user's voice via nasal vibration sensing. Operation 312 may be followed by a return to operation 502.
  • A determination in operation 510 that the system will not utilize the single channel operational mode may be followed by operation 522, wherein a further determination may be made as to whether the system will utilize signal modulation mode. If it is determined in operation 522 that the system will utilize signal modulation mode, then in operation 524 a further determination may be made as to whether there is any incoming audio data. A determination in operation 524 that there is no incoming audio data may be followed by operations 300 to 312 in FIG. 3 to capture the user's voice via nasal vibration sensing. Operation 312 may be followed by a return to operation 502.
  • If there is incoming audio data, then in operation 526 the incoming audio data may be received and modulated, or alternatively the signal used to drive the sensing circuitry in the system to generate the audio vibration may be modulated.
  • In operation 528, audio vibration may be induced in the nose of the user using the modulated audio data.
  • If the system comprises separate sets of sensing circuitry (e.g., piezoelectric diaphragms) to induce audio vibration and sense voice vibration, respectively, then the user of the system speaking during the generation of the audio vibration may result in both the user's voice and the audio vibration being captured.
  • the electronic signal generated by the sensing circuitry that senses voice vibration may then include combined voice and audio data.
  • The electronic signal, which may include both voice and audio data, may then be received in operation 530.
  • In operation 532, the modulated audio data may be filtered out from the electronic signal (e.g., by control circuitry in the system) to yield an electronic signal that only includes the voice data.
  • Operation 532 may be followed by operations 306 to 312 in FIG. 3 to process the remaining voice data extracted from the mixed voice and audio vibration signal in operation 532.
  • Operation 312 may be followed by a return to operation 502. Returning to operation 522, if it is determined that signal modulation mode will not be utilized, then operation 522 may be followed by a return to operation 500 to await the next activation of the system.
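  • Taken together, the FIG. 5 flow reduces to a mode dispatch; a compact sketch follows, in which the `system` interface and its method names are assumed for illustration.

```python
def run(system) -> None:
    """Top-level dispatch mirroring FIG. 5; the `system` interface and its
    method names are assumptions, operation numbers are in comments."""
    system.activate()                                  # operation 500
    while system.is_active():
        if not system.two_way():                       # operation 502
            if system.capture_only():                  # operation 504
                system.capture_voice()                 # operations 300-312
            else:
                system.induce(system.fetch_audio())    # operations 506-508
        elif system.mode() == "single_channel":        # operation 510
            system.single_channel_step()               # operations 512-520
        elif system.mode() == "signal_modulation":     # operation 522
            system.modulation_step()                   # operations 524-532
        else:
            break                                      # back to operation 500
```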
  • While FIGS. 3 and 5 illustrate operations according to different embodiments, it is to be understood that not all of the operations depicted in FIGS. 3 and 5 are necessary for other embodiments. Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIGS. 3 and 5, and/or other operations described herein, may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.
  • The term "module" may refer to software, firmware and/or circuitry configured to perform any of the aforementioned operations.
  • Circuitry may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry.
  • The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), a system-on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smartphones, etc.
  • any of the operations described herein may be implemented in a system that includes one or more storage mediums (e.g., non-transitory storage mediums) having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods.
  • the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location.
  • The storage medium may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), embedded multimedia cards (eMMCs), secure digital input/output (SDIO) cards, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • An example system to capture and generate sound may comprise at least a frame wearable by a user, sensing circuitry mounted to the frame and a device also mounted to the frame.
  • the sensing circuitry may sense voice vibration induced in the user's nose by the user's voice, generate an electronic signal based on the sensed voice vibration and induce audio vibration in the nose based on audio data.
  • the device may be to at least control the operation of the sensing circuitry.
  • the sensing circuitry may comprise at least one piezoelectric diaphragm to generate the electronic signal and induce the audio vibration.
  • the frame may be for eyeglasses and may comprise at least one nosepiece structure for contacting the nose, the at least one structure including the sensing circuitry.
  • the following examples pertain to further embodiments.
  • The following examples of the present disclosure may comprise subject material such as a device, a method, at least one machine-readable medium for storing instructions that when executed cause a machine to perform acts based on the method, means for performing acts based on the method and/or a system for sound capture and generation via nasal vibration, as provided below.
  • According to example 1, a system to capture and generate sound may comprise a frame wearable by a user, sensing circuitry mounted to the frame, wherein the sensing circuitry is to sense voice vibration induced in the user's nose by the user's voice, generate an electronic signal based on the sensed voice vibration and induce audio vibration in the nose based on audio data, and a device mounted to the frame, wherein the device is to at least control the operation of the sensing circuitry.
  • Example 2 may include the elements of example 1, wherein the frame is an eyeglass frame comprising at least one nosepiece to contact the nose, the at least one nosepiece including at least the sensing circuitry.
  • Example 3 may include the elements of any of examples 1 to 2, wherein the sensing circuitry comprises at least one piezoelectric diaphragm to generate the electronic signal and induce the audio vibration.
  • Example 4 may include the elements of example 3, wherein the at least one piezoelectric diaphragm is coupled to the device via at least one wire.
  • Example 5 may include the elements of any of examples 3 to 4, wherein the at least one piezoelectric diaphragm is coupled to the device via a wireless link.
  • Example 6 may include the elements of any of examples 3 to 5, wherein the sensing circuitry comprises a plurality of piezoelectric diaphragms coupled in series.
  • Example 7 may include the elements of any of examples 3 to 6, wherein the sensing circuitry comprises a first sensing circuit to sense the voice vibration and a second sensing circuit to induce the audio vibration.
  • Example 8 may include the elements of example 7, wherein the first sensing circuit is configured to engage a first side of the nose and the second sensing circuit is configured to engage a second side of the nose.
  • Example 9 may include the elements of any of examples 7 to 8, wherein the second sensing circuit comprises at least one piezoelectric diaphragm configured to operate as a transducer.
  • Example 10 may include the elements of any of examples 1 to 9, wherein the device comprises at least control circuitry to determine whether the system is initiating or engaged in two-way communication.
  • Example 11 may include the elements of example 10, wherein if the control circuitry determines that the system is not initiating or engaged in two-way communication, the control circuitry is to generate voice data based on the electronic signal or cause the sensing circuitry to induce the audio vibration based on the audio data.
  • Example 12 may include the elements of example 11, wherein if the control circuitry determines that the system is initiating or engaged in two-way communication, the control circuitry is to operate in single channel mode.
  • Example 13 may include the elements of example 12, wherein in single channel mode the control circuitry is to cause the sensing circuitry to generate an indication at least when audio data is incoming, cause the sensing circuitry to induce the audio vibration based on the audio data and, when no audio data is incoming, generate the voice data based on the electronic signal.
  • Example 14 may include the elements of example 13, wherein the indication comprises at least one of an audible or tactile notification.
  • Example 15 may include the elements of any of examples 13 to 14, and may further comprise the control circuitry causing the sensing circuitry to generate a second indication when the incoming audio data is complete.
  • Example 16 may include the elements of any of examples 11 to 15, wherein if the control circuitry determines that the system is initiating or engaged in two-way communication, the control circuitry is to operate in single channel mode in which the control circuitry is to cause the sensing circuitry to generate an indication at least when audio data is incoming, cause the sensing circuitry to induce the audio vibration based on the audio data and, when no audio data is incoming, generate the voice data based on the electronic signal.
  • Example 17 may include the elements of any of examples 11 to 16, wherein if the control circuitry determines that the system is initiating or engaged in two-way communication, the control circuitry is to operate in signal modulation mode.
  • Example 18 may include the elements of example 17, wherein in signal modulation mode the control circuitry is to modulate the audio data, cause the sensing circuitry to induce the audio vibration based on the modulated data, receive the electronic signal, filter out the audio vibration from the electronic signal and generate the voice data based on the electronic signal.
  • Example 19 may include the elements of any of examples 11 to 18, wherein if the control circuitry determines that the system is initiating or engaged in two-way communication, the control circuitry is to operate in signal modulation mode in which the control circuitry is to modulate the audio data, cause the sensing circuitry to induce the audio vibration based on the modulated data, receive the electronic signal, filter out the audio vibration from the electronic signal and generate the voice data based on the electronic signal.
  • Example 20 may include the elements of any of examples 1 to 19, wherein the device comprises at least communication circuitry to at least one of transmit the voice data to an external device or receive the audio data from the external device.
  • According to example 21, a method for capturing and generating sound may comprise activating a system wearable by a user, determining whether the system is initiating or engaged in two-way communication, and controlling, based on the determination, sensing circuitry in the system to at least one of sense voice vibration induced in the user's nose by the user's voice and generate an electronic signal based on the voice vibration, or induce audio vibration in the nose based on audio data.
  • Example 22 may include the elements of example 21, and may further comprise generating voice data based on the electronic signal.
  • Example 23 may include the elements of any of examples 21 to 22, wherein if it is determined that the system is initiating or engaged in two-way communication, further comprising operating in single channel mode.
  • Example 24 may include the elements of example 23, wherein operating in single channel mode comprises causing the sensing circuitry to generate an indication at least when audio data is incoming, causing the sensing circuitry to induce the audio vibration based on the audio data; and when no audio data is incoming, generating the voice data based on the electronic signal.
  • Example 25 may include the elements of example 24, wherein the indication comprises at least one of an audible or tactile notification.
  • Example 26 may include the elements of any of examples 24 to 25, and may further comprise causing the sensing circuitry to generate a second indication when the incoming audio data is complete.
  • Example 27 may include the elements of any of examples 21 to 26, wherein if it is determined that the system is initiating or engaged in two-way communication, further comprising operating in signal modulation mode.
  • Example 28 may include the elements of example 27, wherein operating in signal modulation mode comprises modulating the audio data, causing the sensing circuitry to induce the audio vibration based on the modulated audio data, receiving the electronic signal, filtering out the audio vibration from the electronic signal and generating voice data based on the electronic signal.
  • Example 29 may include the elements of any of examples 21 to 28, and may further comprise at least one of transmitting the voice data to an external device or receiving the audio data from the external device.
  • According to example 30 there is provided a system for capturing and generating sound including at least one device, the system being arranged to perform the method of any of the above examples 21 to 29.
  • According to example 31 there is provided a chipset arranged to perform the method of any of the above examples 21 to 29.
  • According to example 32 there is provided at least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the method according to any of the above examples 21 to 29.
  • According to example 33 there is provided at least one device configured for capturing and generating sound, the at least one device being arranged to perform the method of any of the above examples 21 to 29.
  • According to example 34 there is provided a system for capturing and generating sound. The system may comprise means for activating a system wearable by a user, means for determining whether the system is initiating or engaged in two-way communication, and means for controlling, based on the determination, sensing circuitry in the system to at least one of sense voice vibration induced in the user's nose by the user's voice and generate an electronic signal based on the voice vibration or induce audio vibration in the nose based on audio data.
  • Example 35 may include the elements of example 34, and may further comprise means for generating voice data based on the electronic signal.
  • Example 36 may include the elements of any of examples 34 to 35, wherein if it is determined that the system is initiating or engaged in two-way communication, further comprising means for operating in single channel mode.
  • Example 37 may include the elements of example 36, wherein the means for operating in single channel mode comprise means for causing the sensing circuitry to generate an indication at least when audio data is incoming, means for causing the sensing circuitry to induce the audio vibration based on the audio data and means for, when no audio data is incoming, generating the voice data based on the electronic signal.
  • Example 38 may include the elements of example 37, wherein the indication comprises at least one of an audible or tactile notification.
  • Example 39 may include the elements of any of examples 37 to 38, and may further comprise means for causing the sensing circuitry to generate a second indication when the incoming audio data is complete.
  • Example 40 may include the elements of any of examples 34 to 39, wherein if it is determined that the system is initiating or engaged in two-way communication, further comprising means for operating in signal modulation mode.
  • Example 41 may include the elements of example 40, wherein the means for operating in signal modulation mode comprise means for modulating the audio data, means for causing the sensing circuitry to induce the audio vibration based on the modulated audio data, means for receiving the electronic signal, means for filtering out the audio vibration from the electronic signal and means for generating voice data based on the electronic signal.
  • Example 42 may include the elements of any of examples 34 to 41, and may further comprise at least one of means for transmitting the voice data to an external device or means for receiving the audio data from the external device.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Telephone Function (AREA)
  • Piezo-Electric Transducers For Audible Bands (AREA)
  • Details Of Audible-Bandwidth Transducers (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Eyeglasses (AREA)
PCT/US2016/061420 2015-12-10 2016-11-10 System for sound capture and generation via nasal vibration WO2017099938A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
DE112016005688.5T 2015-12-10 2016-11-10 System zur Tonerfassung und -erzeugung über Nasalvibration [System for sound capture and generation via nasal vibration]
JP2018523483A 2015-12-10 2016-11-10 鼻振動を介した音響のキャプチャ及び生成のためのシステム [System for capture and generation of sound via nasal vibration]
CN201680065774.XA 2015-12-10 2016-11-10 用于经由鼻振动进行声音捕捉和生成的系统 [System for sound capture and generation via nasal vibration]

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/965,095 2015-12-10
US14/965,095 US9872101B2 (en) 2015-09-15 2015-12-10 System for sound capture and generation via nasal vibration

Publications (1)

Publication Number Publication Date
WO2017099938A1 (en) 2017-06-15

Family

ID=59014035

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/061420 WO2017099938A1 (en) 2015-12-10 2016-11-10 System for sound capture and generation via nasal vibration

Country Status (4)

Country Link
JP (1) JP6891172B2 (ja)
CN (1) CN108351524A (zh)
DE (1) DE112016005688T5 (de)
WO (1) WO2017099938A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT201800010186A1 2018-11-09 2020-05-09 Luxottica Srl Wing for an eyeglass nose pad.
TWI816609B (zh) * 2021-08-27 2023-09-21 Hua Nan Commercial Bank, Ltd. Eye-wearable information system for monitoring
TWI784692B (zh) * 2021-08-27 2022-11-21 Hua Nan Commercial Bank, Ltd. Eye-wearable information system
TWI817838B (zh) * 2021-08-27 2023-10-01 Hua Nan Commercial Bank, Ltd. Eye-wearable information system with privacy protection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060140422A1 (en) * 2004-12-29 2006-06-29 Zurek Robert A Apparatus and method for receiving inputs from a user
US20100110368A1 (en) * 2008-11-02 2010-05-06 David Chaum System and apparatus for eyeglass appliance platform
KR20120080852A (ko) * 2011-01-10 2012-07-18 Elentec Co., Ltd. Liquid crystal shutter glasses equipped with a bone conduction vibrator
KR20130035290A (ko) * 2011-09-30 2013-04-09 Elentec Co., Ltd. 3D glasses equipped with a bone conduction speaker
US20130242262A1 (en) * 2005-10-07 2013-09-19 Percept Technologies Inc. Enhanced optical and perceptual digital eyewear

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005534269A (ja) * 2002-07-26 2005-11-10 Oakley, Inc. Wireless interactive headset
AU2005276865B2 (en) * 2004-08-27 2009-12-03 Victorion Technology Co., Ltd. The nasal bone conduction wireless communication transmission equipment
CN101753221A (zh) * 2008-11-28 2010-06-23 新兴盛科技股份有限公司 Sphenoid-temporal bone conduction communication and/or hearing-aid device
JP5269618B2 (ja) * 2009-01-05 2013-08-21 Audio-Technica Corporation Headset with built-in bone conduction microphone
EP2458586A1 (en) * 2010-11-24 2012-05-30 Koninklijke Philips Electronics N.V. System and method for producing an audio signal
CN103369418A (zh) * 2012-03-27 2013-10-23 新兴盛科技股份有限公司 Throat-vibration microphone and hands-free communication device including the same
CN103873997B (zh) * 2012-12-11 2017-06-27 Lenovo (Beijing) Co., Ltd. Electronic device and sound collection method
CN204044454U (zh) * 2014-09-18 2014-12-24 Shi Yongtao Multifunctional bone-conduction hearing-aid glasses

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10779075B2 (en) 2010-12-27 2020-09-15 Finewell Co., Ltd. Incoming/outgoing-talk unit and incoming-talk unit
US10778823B2 (en) 2012-01-20 2020-09-15 Finewell Co., Ltd. Mobile telephone and cartilage-conduction vibration source device
US10834506B2 (en) 2012-06-29 2020-11-10 Finewell Co., Ltd. Stereo earphone
US10848607B2 (en) 2014-12-18 2020-11-24 Finewell Co., Ltd. Cycling hearing device and bicycle system
US11601538B2 (en) 2014-12-18 2023-03-07 Finewell Co., Ltd. Headset having right- and left-ear sound output units with through-holes formed therein
US10967521B2 (en) 2015-07-15 2021-04-06 Finewell Co., Ltd. Robot and robot system
US10795321B2 (en) 2015-09-16 2020-10-06 Finewell Co., Ltd. Wrist watch with hearing function
US10778824B2 (en) 2016-01-19 2020-09-15 Finewell Co., Ltd. Pen-type handset
WO2020067263A1 (ja) * 2018-09-28 2020-04-02 Finewell Co., Ltd. Hearing device
CN112740718A (zh) * 2018-09-28 2021-04-30 Finewell Co., Ltd. Hearing device
US11526033B2 (en) 2018-09-28 2022-12-13 Finewell Co., Ltd. Hearing device

Also Published As

Publication number Publication date
DE112016005688T5 (de) 2018-08-30
CN108351524A (zh) 2018-07-31
JP2019506018A (ja) 2019-02-28
JP6891172B2 (ja) 2021-06-18

Similar Documents

Publication Publication Date Title
US9872101B2 (en) System for sound capture and generation via nasal vibration
US9924265B2 (en) System for voice capture via nasal vibration sensing
WO2017099938A1 (en) System for sound capture and generation via nasal vibration
JP7324313B2 (ja) Voice interaction method and apparatus, terminal, and storage medium
US11178707B2 (en) Connection request processing method and apparatus, bluetooth earphone, wearable device, system and storage medium
EP3304951B1 (en) Changing companion communication device behavior based on status of wearable device
EP4167590A1 (en) Earphone noise processing method and device, and earphone
KR102384519B1 (ko) Method for controlling an earpiece and electronic device supporting the same
US10149067B2 (en) Method for controlling function based on battery information and electronic device therefor
US20190213973A1 (en) Low power driving method and electronic device performing thereof
US9743169B2 (en) Sound output method and device utilizing the same
US20150213796A1 (en) Adjusting speech recognition using contextual information
CN114245256A (zh) Mobile communication device, method of operating the same, and mobile system
US20120235896A1 (en) Bluetooth or other wireless interface with power management for head mounted display
JP2019159305A (ja) Method, device, system and storage medium for implementing a far-field voice function
KR20200015267A (ko) Electronic device for determining an electronic device to perform speech recognition, and method of operating the electronic device
US20140092004A1 (en) Audio information and/or control via an intermediary device
WO2012040030A2 (en) Bluetooth or other wireless interface with power management for head mounted display
KR102374620B1 (ko) Electronic device and system for speech recognition
CN111819533A (zh) Method for triggering an electronic device to execute a function, and electronic device
KR20150032011A (ko) Electronic device and control method thereof
CN103853646A (zh) Called-party alert system and method
CN111971977A (zh) Electronic device and method thereof for processing stereo audio signals
US10748535B2 (en) Transcription record comparison
CN109144462B (zh) Sound production control method and apparatus, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16873548; Country of ref document: EP; Kind code of ref document: A1)
WWE WIPO information: entry into national phase (Ref document number: 2018523483; Country of ref document: JP)
WWE WIPO information: entry into national phase (Ref document number: 112016005688; Country of ref document: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 16873548; Country of ref document: EP; Kind code of ref document: A1)