US20180288519A1 - Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound
- Publication number
- US20180288519A1 (application US15/471,977)
- Authority
- US
- United States
- Prior art keywords
- sound
- haptic
- wearable
- head
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS:
- H04R1/10—Earpieces; Attachments therefor; Earphones; Monophonic headphones
- H04R1/1041—Mechanical or electronic switches, or control elements
- H04R1/1058—Manufacture or assembly
- H04R1/1075—Mountings of transducers in earphones or headphones
- H04R1/1083—Reduction of ambient noise
- H04R3/12—Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
- H04R2420/01—Input selection or mixing for amplifiers or loudspeakers
- H04R2420/03—Connection circuits to selectively connect loudspeakers or headphones to amplifiers
- H04R2420/07—Applications of wireless loudspeakers or wireless microphones
- H04R2460/13—Hearing devices using bone conduction transducers
Definitions
- the present application relates to technically inventive, non-routine solutions that are necessarily rooted in computer technology and that produce concrete technical improvements.
- headphones may employ noise canceling/isolating features which cancel/block ambient sound.
- noise reduction carries with it the risk of accident as people use the headphones in a variety of situations, such as close to traffic, in which traffic sound is reduced by the headphones.
- people using noise-canceling headphones are more likely to miss other audible cues such as someone calling their name. The same concern applies when a user is using headphones with volume so loud that the user cannot hear ambient sound.
- present principles detect ambient contexts that require user attention, and notify the user of such using haptic feedback without disrupting use of the headphones.
- a storage that is not a transitory signal includes instructions executable by a processor to sense ambient sound using at least one microphone on a head-wearable speaker assembly.
- the instructions are executable to determine at least one parameter of the ambient sound, and based at least in part on the parameter, activate at least one haptic generator on the head-wearable speaker assembly.
- the parameter may include a type of sound and/or a direction of sound and/or a location of sound origination and/or an amplitude of sound and/or speech in the ambient sound.
- the instructions may be executable to, based at least in part on the parameter, establish a location of haptic feedback on the head-wearable speaker assembly.
- the instructions are executable to, based at least in part on the parameter, establish an intensity of haptic feedback on the head-wearable speaker assembly.
- the instructions can be executable to, based at least in part on a speaker volume of the head-wearable speaker assembly, establish an intensity of haptic feedback on the head-wearable speaker assembly. Different haptic intensities may help users notice ambient situations (the higher the volume the headphones are at, the stronger the vibration feedback needed to get the user's attention).
- in another aspect, a method includes determining a context of ambient sound impinging on a wearable listening device, and based at least in part on the context, activating at least one haptic generator to provide feedback of the ambient sound.
- in another aspect, an apparatus includes at least one head-wearable mount, at least one speaker on the head-wearable mount, and at least one microphone on the head-wearable mount. At least one haptic generator is on the head-wearable mount. The apparatus is adapted to activate the haptic generator responsive to ambient sound sensed by the microphone.
- FIG. 1 is a block diagram of an example system in accordance with present principles
- FIG. 2 is a block diagram of an example network of devices in accordance with present principles
- FIG. 3 is a schematic diagram illustrating an example earbud-type headphone with ambient sound detecting microphones and haptic feedback generators
- FIG. 4 is a flow chart of example logic consistent with present principles
- FIG. 5 is an example data structure correlating ambient sounds to haptic feedback.
- FIG. 6 is a schematic diagram illustrating various types of ambient sound that a user may wish to know about but that would be suppressed by noise-canceling headphones.
- FIG. 7 is a screen shot of an example user interface consistent with present principles.
- a system may include server and client components, connected over a network such that data may be exchanged between the client and server components.
- the client components may include one or more computing devices including televisions (e.g., smart TVs, Internet-enabled TVs), computers such as desktops, laptops and tablet computers, so-called convertible devices (e.g., having a tablet configuration and laptop configuration), and other mobile devices including smart phones.
- These client devices may employ, as non-limiting examples, operating systems from Apple Inc. of Cupertino, Calif., Google Inc. of Mountain View, Calif., or Microsoft Corp. of Redmond, Wash. A Unix® or similar such as Linux® operating system may be used.
- These operating systems can execute one or more browsers such as a browser made by Microsoft or Google or Mozilla or another browser program that can access web pages and applications hosted by Internet servers over a network such as the Internet, a local intranet, or a virtual private network.
- instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware, or combinations thereof and include any type of programmed step undertaken by components of the system; hence, illustrative components, blocks, modules, circuits, and steps are sometimes set forth in terms of their functionality.
- a processor may be any conventional general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers. Moreover, any logical blocks, modules, and circuits described herein can be implemented or performed with a general purpose processor, a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device such as an application specific integrated circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
- a processor can be implemented by a controller or state machine or a combination of computing devices.
- Software modules and/or applications described by way of flow charts and/or user interfaces herein can include various sub-routines, procedures, etc. Without limiting the disclosure, logic stated to be executed by a particular module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
- Logic when implemented in software can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium (e.g., that is not a transitory signal) such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc.
- a processor can access information over its input lines from data storage, such as the computer readable storage medium, and/or the processor can access information wirelessly from an Internet server by activating a wireless transceiver to send and receive data.
- Data typically is converted from analog signals to digital by circuitry between the antenna and the registers of the processor when being received and from digital to analog when being transmitted.
- the processor then processes the data through its shift registers to output calculated data on output lines, for presentation of the calculated data on the device.
- a system having at least one of A, B, and C includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
- circuitry includes all levels of available integration, e.g., from discrete logic circuits to the highest level of circuit integration such as VLSI, and includes programmable logic components programmed to perform the functions of an embodiment as well as general-purpose or special-purpose processors programmed with instructions to perform those functions.
- the system 100 may be a desktop computer system, such as one of the ThinkCentre® or ThinkPad® series of personal computers sold by Lenovo (US) Inc. of Morrisville, N.C., or a workstation computer, such as the ThinkStation®, which are sold by Lenovo (US) Inc. of Morrisville, N.C.; however, as apparent from the description herein, a client device, a server or other machine in accordance with present principles may include other features or only some of the features of the system 100 .
- the system 100 may be, e.g., a game console such as XBOX®, and/or the system 100 may include a mobile communication device such as a mobile telephone, notebook computer, and/or other portable computerized device.
- the system 100 may include a so-called chipset 110 .
- a chipset refers to a group of integrated circuits, or chips, that are designed to work together. Chipsets are usually marketed as a single product (e.g., consider chipsets marketed under the brands INTEL®, AMD®, etc.).
- the chipset 110 has a particular architecture, which may vary to some extent depending on brand or manufacturer.
- the architecture of the chipset 110 includes a core and memory control group 120 and an I/O controller hub 150 that exchange information (e.g., data, signals, commands, etc.) via, for example, a direct management interface or direct media interface (DMI) 142 or a link controller 144 .
- the DMI 142 is a chip-to-chip interface (sometimes referred to as being a link between a “northbridge” and a “southbridge”).
- the core and memory control group 120 include one or more processors 122 (e.g., single core or multi-core, etc.) and a memory controller hub 126 that exchange information via a front side bus (FSB) 124 .
- various components of the core and memory control group 120 may be integrated onto a single processor die, for example, to make a chip that supplants the conventional “northbridge” style architecture.
- the memory controller hub 126 interfaces with memory 140 .
- the memory controller hub 126 may provide support for DDR SDRAM memory (e.g., DDR, DDR2, DDR3, etc.).
- the memory 140 is a type of random-access memory (RAM). It is often referred to as “system memory.”
- the memory controller hub 126 can further include a low-voltage differential signaling interface (LVDS) 132 .
- the LVDS 132 may be a so-called LVDS Display Interface (LDI) for support of a display device 192 (e.g., a CRT, a flat panel, a projector, a touch-enabled display, etc.).
- a block 138 includes some examples of technologies that may be supported via the LVDS interface 132 (e.g., serial digital video, HDMI/DVI, display port).
- the memory controller hub 126 also includes one or more PCI-express interfaces (PCI-E) 134 , for example, for support of discrete graphics 136 .
- the memory controller hub 126 may include a 16-lane (x16) PCI-E port for an external PCI-E-based graphics card (including, e.g., one or more GPUs).
- An example system may include AGP or PCI-E for support of graphics.
- the I/O hub controller 150 can include a variety of interfaces.
- the example of FIG. 1 includes a SATA interface 151, one or more PCI-E interfaces 152 (optionally one or more legacy PCI interfaces), one or more USB interfaces 153, and a LAN interface 154 (more generally a network interface for communication over at least one network such as the Internet, a WAN, a LAN, etc., under direction of the processor(s) 122).
- the I/O hub controller 150 may include integrated gigabit Ethernet controller lines multiplexed with a PCI-E interface port. Other network features may operate independent of a PCI-E interface.
- the interfaces of the I/O hub controller 150 may provide for communication with various devices, networks, etc.
- the SATA interface 151 provides for reading, writing or reading and writing information on one or more drives 180 such as HDDs, SSDs or a combination thereof, but in any case the drives 180 are understood to be, e.g., tangible computer readable storage mediums that are not transitory signals.
- the I/O hub controller 150 may also include an advanced host controller interface (AHCI) to support one or more drives 180 .
- the PCI-E interface 152 allows for wireless connections 182 to devices, networks, etc.
- the USB interface 153 provides for input devices 184 such as keyboards (KB), mice and various other devices (e.g., cameras, phones, storage, media players, etc.).
- the LPC interface 170 provides for use of one or more ASICs 171 , a trusted platform module (TPM) 172 , a super I/O 173 , a firmware hub 174 , BIOS support 175 as well as various types of memory 176 such as ROM 177 , Flash 178 , and non-volatile RAM (NVRAM) 179 .
- this module may be in the form of a chip that can be used to authenticate software and hardware devices.
- a TPM may be capable of performing platform authentication and may be used to verify that a system seeking access is the expected system.
- the system 100, upon power on, may be configured to execute boot code 190 for the BIOS 168, as stored within the SPI Flash 166, and thereafter processes data under the control of one or more operating systems and application software (e.g., stored in system memory 140).
- An operating system may be stored in any of a variety of locations and accessed, for example, according to instructions of the BIOS 168 .
- the system 100 may also include one or more sensors 191 from which input may be received for the system 100 .
- the sensor 191 may be an audio receiver/microphone that provides input from the microphone to the processor 122 based on audio that is detected, such as via a user providing audible input to the microphone, so that the user may be identified based on voice identification.
- the sensor 191 may be a camera that gathers one or more images and provides input related thereto to the processor 122 so that the user may be identified based on facial recognition or other biometric recognition.
- the camera may be a thermal imaging camera, a digital camera such as a webcam, a three-dimensional (3D) camera, and/or a camera otherwise integrated into the system 100 and controllable by the processor 122 to gather pictures/images and/or video.
- the sensor 191 may also be, for instance, another kind of biometric sensor for use for such purposes, such as a fingerprint reader, a pulse monitor, a heat sensor, etc.
- the sensor 191 may even be a motion sensor such as a gyroscope that senses and/or measures the orientation of the system 100 and provides input related thereto to the processor 122 , and/or an accelerometer that senses acceleration and/or movement of the system 100 and provides input related thereto to the processor 122 .
- unique and/or particular motion or motion patterns may be identified to identify a user as being associated with the motions/patterns in accordance with present principles.
- the system 100 may include a location sensor such as but not limited to a global positioning satellite (GPS) transceiver 193 that is configured to receive geographic position information from at least one satellite and provide the information to the processor 122 .
- another suitable position receiver other than a GPS receiver may be used in accordance with present principles to determine the location of the system 100 .
- the GPS transceiver 193 may even establish a sensor for use in accordance with present principles to identify a particular user based on the user being associated with a particular location (e.g., a particular building, a particular location within a room of a personal residence, etc.)
- an example client device or other machine/computer may include fewer or more features than shown on the system 100 of FIG. 1 .
- the system 100 is configured to undertake present principles.
- example devices are shown communicating over a network 200 such as the Internet in accordance with present principles. It is to be understood that each of the devices described in reference to FIG. 2 may include at least some of the features, components, and/or elements of the system 100 described above.
- FIG. 2 shows a notebook computer and/or convertible computer 202 , a desktop computer 204 , a wearable device 206 such as an earbud-type or other headphone, a smart television (TV) 208 , a smart phone 210 , a tablet computer 212 , a server 214 such as an Internet server that may provide cloud storage accessible to the devices shown in FIG. 2 , and a game console 218 .
- the devices shown in FIG. 2 are configured to communicate with each other over the network 200 to undertake present principles.
- FIG. 3 shows a head-wearable speaker assembly 300 embodied by earbuds having left and right head-wearable mounts 302 , 304 each holding one or more speakers 305 .
- the electronics in the mounts 302 , 304 are connected via a flaccid cord 306 .
- other types of head-wearable speaker assemblies are contemplated, such as headphones connected by a non-flaccid head band in which the left and right speaker mounts include cushions that surround the entire ear.
- both mounts 302, 304 include one or more haptic generators 308.
- each speaker mount includes four haptic generators, one near the top of the mount (relative to when the mount is worn), one near the bottom, and two near each side intermediate the top and bottom of the mount.
- both mounts may support one or more microphones 310 , which can include ultrasonic microphones.
- both mounts include two microphones that are laterally spaced from each other as shown. It is to be understood that in some embodiments one mount may have two microphones laterally spaced from each other and the other mount may have two microphones vertically spaced from each other for purposes of three-dimensional triangulation.
- one or both mounts may support one or more network interfaces 312 such as but not limited to Bluetooth transceivers, Wi-Fi transceivers, and wireless telephony transceivers.
- a processor and a storage medium with instructions executable by the processor may be incorporated into the head-wearable speaker assembly 300 .
- signals from the microphone and activation signals to the haptic generators may be exchanged through the interface 312 with a nearby mobile device processor or even a cloud processor to execute logic herein.
- FIG. 4 illustrates example logic that may be executed by a processor in the assembly 300 or other processor in wired or wireless communication therewith.
- to detect alert-type sounds and/or machine noises, specific sound frequencies that represent the sound can be used as features.
- to implement the detection and recognition at low power, a specialized/dedicated chip can be used, such as an NPU (neural processing unit) or a GPU.
- ambient sound is sensed by the microphones 310 .
- by “ambient” sound or noise is meant sound or noise outside the assembly 300 that is not generated by the speakers 305.
- the context of the ambient sound is identified.
- the context may include the direction of the ambient sound relative to the assembly 300 .
- the direction is determined by triangulation using differences in times of arrival of the same sound at the different microphones 310 , with the differences being converted to distances and the distances used to triangulate the direction of sound.
- the triangulation can also indicate the location of the source of the ambient sound as being the convergence of the triangulated lines of bearing derived from the different times of arrival of the sound at the various microphones 310 .
- the logic may employ several cues, including time differences between sound arrivals at microphones and ambient level differences (or intensity differences) between multiple microphones, which may be implemented as arrays. Other cues may include spectral information, timing analysis, correlation analysis, and pattern matching. Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the elevation or vertical angle, and the distance (for static sounds) or velocity (for moving sounds). The localization can be implemented in various ways by using different techniques.
- the context of the sound can also include amplitude, which may be determined at block 404 .
- the amplitude may be used to infer distance of the source of the sound using, e.g., a lookup table correlating amplitudes with distance, with received sound intensity falling off as the square of the distance, in non-limiting examples.
- the context of the ambient sound can also include a type of sound, which may be determined at block 406 .
- the type of sound may be determined using pattern recognition. It may first be determined using voice recognition whether the sound is a spoken word or phrase and if so, the spoken word or phrase is identified. For example, to detect human voice and speech, noise reduction may first be applied to sound detected by one or more microphones 310 . This may include spectral subtraction. Then, one or more features or quantities of the detected sound may be calculated from a section of the input signal and a classification rule is applied to classify the section as speech or non-speech.
- a digital fingerprint of the sound may be used as entering argument to a library of fingerprints and a match returned, with the library correlating the matching fingerprint with a sound type, e.g., horn honking, tires screeching, engine running, etc.
- the sound may include a Doppler shift, with an up-shift indicating that the source of sound is approaching and a down-shift indicating that the source of sound is receding.
- determining a type of sound can include according different importances to types of ambient sound depending on context. For example, ambient sound classified as noise from an approaching vehicle can be accorded a high importance (and thus a first type of haptic feedback as described below) responsive to identifying, using, for example, location information from a GPS sensor such as that shown in FIG. 1 and embedded in the headphones, that the user is walking across the street. The same type of sound may be accorded a lower importance (and hence a second haptic feedback) when, for example, GPS location information indicates the user is walking on a sidewalk.
- Types of sound of interest include a human voice (audible cues such as someone calling), alert-type noises (such as sirens, honks, etc.), and machine noises (such as vehicle engine sounds, braking noises, etc.).
- the logic may proceed to block 408 to correlate the sound to haptic feedback.
- in a simple implementation, once any ambient sound is sensed with an amplitude above a threshold, a haptic generator may be activated. More complex implementations are envisioned. For example, a data structure correlating different ambient sound contexts to different haptic feedback types may be accessed.
- FIG. 5 illustrates an example structure in which ambient sounds in a left column are correlated with haptic feedback types in a right column.
- when the logic of blocks 402-406 identifies that a loud (from amplitude) vehicle (from digital fingerprint) is approaching (from Doppler shift or triangulation), some but not all of the haptic generators 308 are activated at, for instance, a relatively high amplitude of haptic generation in a pulsed fashion for a short period.
- the haptic generators 308 closest to the direction of the approaching vehicle as identified from triangulation described above may be activated to give an indication of the direction of the vehicle, and other haptic generators can remain inactive.
- other ambient context-haptic feedback correlations shown in FIG. 5 include a loud vehicle honk causing all haptic generators to be activated at maximum energy level (maximum haptic generation), continuously.
- when a spoken name is identified as that of the user, a haptic generator in the speaker mount 302, 304 that is closest to the source of the spoken name may be activated to generate, e.g., a soft, short buzz.
- haptic activation type may include one or more of amplitude of haptic signal, type of haptic signal, number of haptic generators activated, and location on the assembly 300 of the haptic generators that are activated.
- in haptic feedback, directional information of the ambient sound can be presented by operating different motors embedded at different positions on the earphone units.
- Distance and importance of the sound can be represented by varying the intensity of the vibration along with the number of motors operated. As an example, the closer and the more important the sound is, the stronger the haptic vibration generated.
- Different types of haptic feedback can be generated using one or a combination of 1) different frequencies of vibration, 2) different intensities of vibration (generated by different torque), and 3) different numbers of motors generating the haptic feedback.
- an example haptic type identified at block 408 may be regarded as a baseline, particularly in terms of the amplitude of the demanded haptic feedback, and this baseline may be increased or decreased in step with higher and lower speaker 305 volumes.
- Haptic feedback with directional information can be implemented by using multiple vibration motors built into different spots on the earbud or headphone units.
- FIG. 6 illustrates various types of ambient context that may be sensed using the microphones 310 of FIG. 3 and the disclosure herein, since the noise-suppression features of the wearable device 300 may otherwise block the user's ability to hear them.
- Events that a user may want to notice include dangerous situations, such as a car 600 approaching and/or honking 602, or a person 604 calling for the user's attention using terms 606 like “hey”, “excuse me”, or calling the user by name, etc.
- the sensitivity of event detection and haptic feedback can be adaptive to the sound volume level of the speakers of the wearable device. For example, the higher the volume the user is listening at, the stronger the haptic feedback that may be generated.
- FIG. 7 illustrates an example user interface (UI) 700 that may be provided, e.g., by a downloaded application on a mobile phone to allow a user of the wearable device to change the sensitivity and feedback strength of the haptic signaling.
- a selector 702 may be provided to turn off haptic signaling described herein, while another selector 704 may be provided to enable the above-disclosed haptic signaling.
- the user may also select from a list 706 whether he or she wants haptic signaling always on for all ambient noise, or only for certain types of ambient noise such as someone calling the user's name, or dangerous situations.
- the user may also select from a list 708 whether to employ normal, gentle, or high baseline haptic feedback intensity (a configuration sketch follows at the end of this list).
- present principles apply in instances where such an application is downloaded from a server to a device over a network such as the Internet. Furthermore, present principles apply in instances where such an application is included on a computer readable storage medium that is being vended and/or provided, where the computer readable storage medium is not a transitory signal and/or a signal per se.
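- As a non-authoritative sketch, the settings summarized in the items above could be captured in a configuration object like the following; the Python names and defaults are assumptions for illustration, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class HapticSettings:
    """Sketch of the state behind the FIG. 7 UI; names are hypothetical."""
    enabled: bool = True                # selectors 702 (off) / 704 (on)
    trigger: str = "all_ambient_noise"  # list 706: or "name_called", "dangerous_situations"
    baseline: str = "normal"            # list 708: or "gentle", "high"
```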
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Manufacturing & Machinery (AREA)
- User Interface Of Digital Computer (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Haptic feedback is generated on a headphone to indicate contexts of ambient sound. In this way, noise-canceling headphones can alert the wearer to audible cues of potentially dangerous situations that otherwise would be suppressed by the noise cancelation feature of the headphones.
Description
- The present application relates to technically inventive, non-routine solutions that are necessarily rooted in computer technology and that produce concrete technical improvements.
- The use of headphones for listening to music, hands-free phone calls, interacting with virtual assistants, etc. is widespread. Comfortable wireless headphones and smart-wearable technology accelerate the use of hearable devices for a wide variety of purposes.
- As recognized herein, to improve listening fidelity, headphones may employ noise canceling/isolating features which cancel/block ambient sound. As also recognized herein, such noise reduction carries with it the risk of accident as people use the headphones in a variety of situations, such as close to traffic, in which traffic sound is reduced by the headphones. Moreover, people using noise-canceling headphones are more likely to miss other audible cues such as someone calling their name. The same concern applies when a user is using headphones with volume so loud that the user cannot hear ambient sound.
- With the above problems in mind, present principles detect ambient contexts that require user attention, and notify the user of such using haptic feedback without disrupting use of the headphones.
- Accordingly, in one aspect a storage that is not a transitory signal includes instructions executable by a processor to sense ambient sound using at least one microphone on a head-wearable speaker assembly. The instructions are executable to determine at least one parameter of the ambient sound, and based at least in part on the parameter, activate at least one haptic generator on the head-wearable speaker assembly.
- The parameter may include a type of sound and/or a direction of sound and/or a location of sound origination and/or an amplitude of sound and/or speech in the ambient sound.
- In example embodiments, the instructions may be executable to, based at least in part on the parameter, establish a location of haptic feedback on the head-wearable speaker assembly. In some examples, the instructions are executable to, based at least in part on the parameter, establish an intensity of haptic feedback on the head-wearable speaker assembly. In non-limiting example implementations, the instructions can be executable to, based at least in part on a speaker volume of the head-wearable speaker assembly, establish an intensity of haptic feedback on the head-wearable speaker assembly. Different haptic intensities may help users notice ambient situations (the higher the volume the headphones are at, the stronger the vibration feedback needed to get the user's attention), as in the sketch below.
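- A sketch of that volume-dependent intensity, assuming a simple linear ramp (the disclosure states only that higher speaker volume should yield stronger feedback, not a specific scaling law):

```python
def haptic_intensity(baseline, speaker_volume, max_volume=1.0):
    """Scale a baseline haptic amplitude with the current speaker volume
    so the vibration can still be noticed over loud playback. The linear
    ramp is an illustrative assumption."""
    return baseline * (1.0 + speaker_volume / max_volume)
```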
- In another aspect, a method includes determining a context of ambient sound impinging on a wearable listening device, and based at least in part on the context, activating at least one haptic generator to provide feedback of the ambient sound.
- In another aspect, an apparatus includes at least one head-wearable mount, at least one speaker on the head-wearable mount, and at least one microphone on the head-wearable mount. At least one haptic generator is on the head-wearable mount. The apparatus is adapted to activate the haptic generator responsive to ambient sound sensed by the microphone.
- The details of present principles, both as to their structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
- FIG. 1 is a block diagram of an example system in accordance with present principles;
- FIG. 2 is a block diagram of an example network of devices in accordance with present principles;
- FIG. 3 is a schematic diagram illustrating an example earbud-type headphone with ambient sound detecting microphones and haptic feedback generators;
- FIG. 4 is a flow chart of example logic consistent with present principles;
- FIG. 5 is an example data structure correlating ambient sounds to haptic feedback;
- FIG. 6 is a schematic diagram illustrating various types of ambient sound that a user may wish to know about but that would be suppressed by noise-canceling headphones; and
- FIG. 7 is a screen shot of an example user interface consistent with present principles.
- With respect to any computer systems discussed herein, a system may include server and client components, connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including televisions (e.g., smart TVs, Internet-enabled TVs), computers such as desktops, laptops and tablet computers, so-called convertible devices (e.g., having a tablet configuration and laptop configuration), and other mobile devices including smart phones. These client devices may employ, as non-limiting examples, operating systems from Apple Inc. of Cupertino, Calif., Google Inc. of Mountain View, Calif., or Microsoft Corp. of Redmond, Wash. A Unix® or similar such as Linux® operating system may be used. These operating systems can execute one or more browsers such as a browser made by Microsoft or Google or Mozilla or another browser program that can access web pages and applications hosted by Internet servers over a network such as the Internet, a local intranet, or a virtual private network.
- As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware, or combinations thereof and include any type of programmed step undertaken by components of the system; hence, illustrative components, blocks, modules, circuits, and steps are sometimes set forth in terms of their functionality.
- A processor may be any conventional general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers. Moreover, any logical blocks, modules, and circuits described herein can be implemented or performed with a general purpose processor, a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device such as an application specific integrated circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be implemented by a controller or state machine or a combination of computing devices.
- Software modules and/or applications described by way of flow charts and/or user interfaces herein can include various sub-routines, procedures, etc. Without limiting the disclosure, logic stated to be executed by a particular module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
- Logic, when implemented in software, can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium (e.g., that is not a transitory signal) such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc.
- In an example, a processor can access information over its input lines from data storage, such as the computer readable storage medium, and/or the processor can access information wirelessly from an Internet server by activating a wireless transceiver to send and receive data. Data typically is converted from analog signals to digital by circuitry between the antenna and the registers of the processor when being received and from digital to analog when being transmitted. The processor then processes the data through its shift registers to output calculated data on output lines, for presentation of the calculated data on the device.
- Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
- “A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
- The term “circuit” or “circuitry” may be used in the summary, description, and/or claims. As is well known in the art, the term “circuitry” includes all levels of available integration, e.g., from discrete logic circuits to the highest level of circuit integration such as VLSI, and includes programmable logic components programmed to perform the functions of an embodiment as well as general-purpose or special-purpose processors programmed with instructions to perform those functions.
- Now specifically in reference to FIG. 1, an example block diagram of an information handling system and/or computer system 100 is shown that is understood to have a housing for the components described below. Note that in some embodiments the system 100 may be a desktop computer system, such as one of the ThinkCentre® or ThinkPad® series of personal computers sold by Lenovo (US) Inc. of Morrisville, N.C., or a workstation computer, such as the ThinkStation®, which are sold by Lenovo (US) Inc. of Morrisville, N.C.; however, as apparent from the description herein, a client device, a server or other machine in accordance with present principles may include other features or only some of the features of the system 100. Also, the system 100 may be, e.g., a game console such as XBOX®, and/or the system 100 may include a mobile communication device such as a mobile telephone, notebook computer, and/or other portable computerized device.
- As shown in FIG. 1, the system 100 may include a so-called chipset 110. A chipset refers to a group of integrated circuits, or chips, that are designed to work together. Chipsets are usually marketed as a single product (e.g., consider chipsets marketed under the brands INTEL®, AMD®, etc.).
- In the example of FIG. 1, the chipset 110 has a particular architecture, which may vary to some extent depending on brand or manufacturer. The architecture of the chipset 110 includes a core and memory control group 120 and an I/O controller hub 150 that exchange information (e.g., data, signals, commands, etc.) via, for example, a direct management interface or direct media interface (DMI) 142 or a link controller 144. In the example of FIG. 1, the DMI 142 is a chip-to-chip interface (sometimes referred to as being a link between a “northbridge” and a “southbridge”).
- The core and memory control group 120 include one or more processors 122 (e.g., single core or multi-core, etc.) and a memory controller hub 126 that exchange information via a front side bus (FSB) 124. As described herein, various components of the core and memory control group 120 may be integrated onto a single processor die, for example, to make a chip that supplants the conventional “northbridge” style architecture.
- The memory controller hub 126 interfaces with memory 140. For example, the memory controller hub 126 may provide support for DDR SDRAM memory (e.g., DDR, DDR2, DDR3, etc.). In general, the memory 140 is a type of random-access memory (RAM). It is often referred to as “system memory.”
- The memory controller hub 126 can further include a low-voltage differential signaling interface (LVDS) 132. The LVDS 132 may be a so-called LVDS Display Interface (LDI) for support of a display device 192 (e.g., a CRT, a flat panel, a projector, a touch-enabled display, etc.). A block 138 includes some examples of technologies that may be supported via the LVDS interface 132 (e.g., serial digital video, HDMI/DVI, display port). The memory controller hub 126 also includes one or more PCI-express interfaces (PCI-E) 134, for example, for support of discrete graphics 136. Discrete graphics using a PCI-E interface has become an alternative approach to an accelerated graphics port (AGP). For example, the memory controller hub 126 may include a 16-lane (x16) PCI-E port for an external PCI-E-based graphics card (including, e.g., one or more GPUs). An example system may include AGP or PCI-E for support of graphics.
- In examples in which it is used, the I/O hub controller 150 can include a variety of interfaces. The example of FIG. 1 includes a SATA interface 151, one or more PCI-E interfaces 152 (optionally one or more legacy PCI interfaces), one or more USB interfaces 153, a LAN interface 154 (more generally a network interface for communication over at least one network such as the Internet, a WAN, a LAN, etc. under direction of the processor(s) 122), a general purpose I/O interface (GPIO) 155, a low-pin count (LPC) interface 170, a power management interface 161, a clock generator interface 162, an audio interface 163 (e.g., for speakers 194 to output audio), a total cost of operation (TCO) interface 164, a system management bus interface (e.g., a multi-master serial computer bus interface) 165, and a serial peripheral flash memory/controller interface (SPI Flash) 166, which, in the example of FIG. 1, includes BIOS 168 and boot code 190. With respect to network connections, the I/O hub controller 150 may include integrated gigabit Ethernet controller lines multiplexed with a PCI-E interface port. Other network features may operate independent of a PCI-E interface.
- The interfaces of the I/O hub controller 150 may provide for communication with various devices, networks, etc. For example, where used, the SATA interface 151 provides for reading, writing or reading and writing information on one or more drives 180 such as HDDs, SSDs or a combination thereof, but in any case the drives 180 are understood to be, e.g., tangible computer readable storage mediums that are not transitory signals. The I/O hub controller 150 may also include an advanced host controller interface (AHCI) to support one or more drives 180. The PCI-E interface 152 allows for wireless connections 182 to devices, networks, etc. The USB interface 153 provides for input devices 184 such as keyboards (KB), mice and various other devices (e.g., cameras, phones, storage, media players, etc.).
- In the example of FIG. 1, the LPC interface 170 provides for use of one or more ASICs 171, a trusted platform module (TPM) 172, a super I/O 173, a firmware hub 174, BIOS support 175 as well as various types of memory 176 such as ROM 177, Flash 178, and non-volatile RAM (NVRAM) 179. With respect to the TPM 172, this module may be in the form of a chip that can be used to authenticate software and hardware devices. For example, a TPM may be capable of performing platform authentication and may be used to verify that a system seeking access is the expected system.
- The system 100, upon power on, may be configured to execute boot code 190 for the BIOS 168, as stored within the SPI Flash 166, and thereafter processes data under the control of one or more operating systems and application software (e.g., stored in system memory 140). An operating system may be stored in any of a variety of locations and accessed, for example, according to instructions of the BIOS 168.
- The system 100 may also include one or more sensors 191 from which input may be received for the system 100. For example, the sensor 191 may be an audio receiver/microphone that provides input from the microphone to the processor 122 based on audio that is detected, such as via a user providing audible input to the microphone, so that the user may be identified based on voice identification. As another example, the sensor 191 may be a camera that gathers one or more images and provides input related thereto to the processor 122 so that the user may be identified based on facial recognition or other biometric recognition. The camera may be a thermal imaging camera, a digital camera such as a webcam, a three-dimensional (3D) camera, and/or a camera otherwise integrated into the system 100 and controllable by the processor 122 to gather pictures/images and/or video. The sensor 191 may also be, for instance, another kind of biometric sensor for use for such purposes, such as a fingerprint reader, a pulse monitor, a heat sensor, etc.
- The sensor 191 may even be a motion sensor such as a gyroscope that senses and/or measures the orientation of the system 100 and provides input related thereto to the processor 122, and/or an accelerometer that senses acceleration and/or movement of the system 100 and provides input related thereto to the processor 122. Thus, unique and/or particular motion or motion patterns may be identified to identify a user as being associated with the motions/patterns in accordance with present principles.
- Additionally, the system 100 may include a location sensor such as but not limited to a global positioning satellite (GPS) transceiver 193 that is configured to receive geographic position information from at least one satellite and provide the information to the processor 122. However, it is to be understood that another suitable position receiver other than a GPS receiver may be used in accordance with present principles to determine the location of the system 100. In some embodiments, the GPS transceiver 193 may even establish a sensor for use in accordance with present principles to identify a particular user based on the user being associated with a particular location (e.g., a particular building, a particular location within a room of a personal residence, etc.).
- It is to be understood that an example client device or other machine/computer may include fewer or more features than shown on the system 100 of FIG. 1. In any case, it is to be understood at least based on the foregoing that the system 100 is configured to undertake present principles.
- Turning now to FIG. 2, example devices are shown communicating over a network 200 such as the Internet in accordance with present principles. It is to be understood that each of the devices described in reference to FIG. 2 may include at least some of the features, components, and/or elements of the system 100 described above.
- FIG. 2 shows a notebook computer and/or convertible computer 202, a desktop computer 204, a wearable device 206 such as an earbud-type or other headphone, a smart television (TV) 208, a smart phone 210, a tablet computer 212, a server 214 such as an Internet server that may provide cloud storage accessible to the devices shown in FIG. 2, and a game console 218. It is to be understood that the devices shown in FIG. 2 are configured to communicate with each other over the network 200 to undertake present principles.
- FIG. 3 shows a head-wearable speaker assembly 300 embodied by earbuds having left and right head-wearable mounts 302, 304, each holding one or more speakers 305. Typically, the electronics in the mounts 302, 304 are connected via a flaccid cord 306. It is to be understood that other types of head-wearable speaker assemblies are contemplated, such as headphones connected by a non-flaccid head band in which the left and right speaker mounts include cushions that surround the entire ear.
- At least one and if desired both mounts 302, 304 include one or more haptic generators 308. In the example shown, each speaker mount includes four haptic generators, one near the top of the mount (relative to when the mount is worn), one near the bottom, and two near each side intermediate the top and bottom of the mount.
- Furthermore, at least one and if desired both mounts may support one or more microphones 310, which can include ultrasonic microphones. In the example shown, both mounts include two microphones that are laterally spaced from each other as shown. It is to be understood that in some embodiments one mount may have two microphones laterally spaced from each other and the other mount may have two microphones vertically spaced from each other for purposes of three-dimensional triangulation. Also, one or both mounts may support one or more network interfaces 312 such as but not limited to Bluetooth transceivers, Wi-Fi transceivers, and wireless telephony transceivers. A processor and a storage medium with instructions executable by the processor may be incorporated into the head-wearable speaker assembly 300. In addition or alternatively, signals from the microphone and activation signals to the haptic generators may be exchanged through the interface 312 with a nearby mobile device processor or even a cloud processor to execute logic herein.
- FIG. 4 illustrates example logic that may be executed by a processor in the assembly 300 or other processor in wired or wireless communication therewith. As described more fully below, to detect alert-type sounds and/or machine noises, specific sound frequencies that represent the sound can be used as features. To implement the detection and recognition in a low-power manner, a specialized/dedicated chip can be used, such as an NPU (neural processing unit) or a GPU.
- Commencing at block 400, ambient sound is sensed by the microphones 310. By “ambient” sound or noise is meant sound or noise outside the assembly 300 that is not generated by the speakers 305.
- Moving to block 402, the context of the ambient sound is identified. The context may include the direction of the ambient sound relative to the assembly 300. In one example embodiment, the direction is determined by triangulation using differences in times of arrival of the same sound at the different microphones 310, with the differences being converted to distances and the distances used to triangulate the direction of sound. The triangulation can also indicate the location of the source of the ambient sound as being the convergence of the triangulated lines of bearing derived from the different times of arrival of the sound at the various microphones 310.
- Thus if desired, the context of the sound can also include amplitude, which may be determined at
block 404. The amplitude may be used to infer distance of the source of the sound using, e.g., a lookup table correlating amplitudes with distance, with distance having a squared relationship with amplitude, in non-limiting examples. - The context of the ambient sound can also include a type of sound, which may be determined at
block 406. In one example, the type of sound may be determined using pattern recognition. It may first be determined using voice recognition whether the sound is a spoken word or phrase and if so, the spoken word or phrase is identified. For example, to detect human voice and speech, noise reduction may first be applied to sound detected by one ormore microphones 310. This may include spectral subtraction. Then, one or more features or quantities of the detected sound may be calculated from a section of the input signal and a classification rule is applied to classify the section as speech or non-speech. - For non-spoken sound, a digital fingerprint of the sound may be used as entering argument to a library of fingerprints and a match returned, with the library correlating the matching fingerprint with a sound type, e.g., horn honking, tires screeching, engine running, etc. Note that the sound may include a Doppler shift, with an up-shift indicating that the source of sound is approaching and a down-shift indicating that the source of sound is receding.
- Additional details regarding determining a type of sound can include according different importances to types of ambient sound depending on context. For example, ambient sound classified as noise from an approaching vehicle can be accorded a high importance (and thus a first type of haptic feedback as described below) responsive to identifying, using, for example, location information from a GPS sensor such as that shown in FIG. 1 and embedded in the headphones, that the user is walking across the street. The same type of sound may be accorded a lower importance (and hence a second type of haptic feedback) when, for example, GPS location information indicates the user is walking on a sidewalk.
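A minimal sketch of this context-dependent weighting, assuming hypothetical sound-type labels and GPS-derived location contexts, could look like this:

```python
from enum import Enum, auto

class LocationContext(Enum):
    CROSSING_STREET = auto()
    ON_SIDEWALK = auto()

# Hypothetical importance table: the same sound type maps to different
# haptic treatment depending on where location data says the user is.
IMPORTANCE = {
    ("approaching_vehicle", LocationContext.CROSSING_STREET): "high",
    ("approaching_vehicle", LocationContext.ON_SIDEWALK): "low",
    ("siren", LocationContext.CROSSING_STREET): "high",
    ("siren", LocationContext.ON_SIDEWALK): "high",
}

def importance_for(sound_type: str, context: LocationContext) -> str:
    # Default to low importance for unrecognized combinations.
    return IMPORTANCE.get((sound_type, context), "low")
```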
- Once the context of the ambient sound has been identified, the logic may proceed to block 408 to correlate the sound to haptic feedback. In a simple implementation, once any ambient sound is sensed with an amplitude above a threshold, a haptic generator may be activated. More complex implementations are envisioned. For example, a data structure correlating different ambient sound contexts to different haptic feedback types may be accessed.
FIG. 5 illustrates an example structure in which ambient sounds in a left column are correlated with haptic feedback types in a right column.
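One plausible shape for such a structure, with invented field names, context labels, and values, is sketched below:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class HapticFeedback:
    generators: str    # which generators to drive, e.g. "nearest", "all", "one"
    amplitude: float   # 0.0-1.0 fraction of maximum haptic energy
    pattern: str       # "pulsed" or "continuous"
    duration_s: float

# Hypothetical analog of the FIG. 5 structure: ambient sound context
# (left column) correlated with a haptic feedback type (right column).
CONTEXT_TO_HAPTIC = {
    "loud_vehicle_approaching": HapticFeedback("nearest", 0.9, "pulsed", 1.0),
    "loud_vehicle_honk":        HapticFeedback("all", 1.0, "continuous", 2.0),
    "user_name_spoken":         HapticFeedback("one", 0.4, "pulsed", 0.5),
}

def feedback_for(context: str) -> Optional[HapticFeedback]:
    return CONTEXT_TO_HAPTIC.get(context)
```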
- In the non-limiting example shown, when the logic of blocks 402-406 identifies that a loud (from amplitude) vehicle (from digital fingerprint) is approaching (from Doppler shift or triangulation), some but not all of the haptic generators 308 are activated, for instance at a relatively high amplitude of haptic generation in a pulsed fashion for a short period. The haptic generators 308 closest to the direction of the approaching vehicle, as identified from the triangulation described above, may be activated to give an indication of the direction of the vehicle, and the other haptic generators can remain inactive. A sketch of this selection appears below.
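Selecting the generators nearest the triangulated bearing could be implemented along these lines, assuming a hypothetical four-generator layout (the positions and names are invented):

```python
# Hypothetical bearings of the haptic generators relative to straight
# ahead, in degrees (the actual positions are a design choice).
GENERATOR_BEARINGS = {"front_left": -45.0, "rear_left": -135.0,
                      "front_right": 45.0, "rear_right": 135.0}

def generators_toward(sound_bearing_deg: float, count: int = 2) -> list:
    """Pick the generators closest in angle to the sound's direction so
    the vibration itself indicates where the vehicle is; the rest stay idle."""
    def angular_gap(a: float, b: float) -> float:
        return abs((a - b + 180.0) % 360.0 - 180.0)
    ranked = sorted(GENERATOR_BEARINGS,
                    key=lambda name: angular_gap(GENERATOR_BEARINGS[name],
                                                 sound_bearing_deg))
    return ranked[:count]

print(generators_toward(-60.0))  # ['front_left', 'rear_left']
```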
- Other non-limiting examples of ambient context-haptic feedback shown in FIG. 5 include a loud vehicle honk causing all haptic generators to be activated at maximum energy level (maximum haptic generation), continuously. Or, when a spoken name is identified to be that of the user of the assembly 300, a haptic generator in the speaker mount assembly 300 may be the only one of the haptic generators that is activated. - Still in reference to haptic feedback, directional information of the ambient sound can be presented by operating different motors embedded at different positions on the earphone units. Distance and importance of the sound can be represented using different intensities of vibration along with different numbers of motors operated; as an example, the closer and the more important the sound is, the stronger the haptic vibration that is generated. Different types of haptic feedback can be generated using one or a combination of variations in 1) the frequency of the vibration, 2) the intensity of the vibration (generated by different torques), and 3) the number of motors generating the haptic feedback.
- In an illustrated example, let different types of haptic feedback be denoted as follows:
- ‘_’ be a weak & long vibration
- ‘=’ be a strong & long vibration
- ‘.’ be a weak & short vibration
- ‘*’ be a strong & short vibration, and
- ‘ ’ be a pause
- Then, different types of sound can be represented with combinations of the haptic patterns. For example, responsive to identifying that someone is calling the user, ‘.’ (a weak & short vibration) may be generated. Responsive to identifying that someone is calling the user urgently, ‘*’ (a strong & short vibration) can be generated. On the other hand, responsive to identifying that a car is approaching, multiple weak and short vibrations separated by short rest periods may be used (‘. . .’).
- Continuing this illustration, responsive to identifying that a car is approaching very closely, multiple strong and short vibrations separated by short rest periods may be used (‘* * *’). Responsive to identifying that there is an alarm sound that requires user attention, multiple strong and long vibrations may be used: ‘=*=’. A minimal rendering of these symbolic patterns as motor commands is sketched below.
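In this sketch, only the symbols come from the illustration above; the numeric durations and intensities are assumptions:

```python
# Map each symbol from the illustration to an (intensity, duration) pulse.
SYMBOL_TO_PULSE = {
    "_": (0.3, 0.6),   # weak & long
    "=": (1.0, 0.6),   # strong & long
    ".": (0.3, 0.15),  # weak & short
    "*": (1.0, 0.15),  # strong & short
}
PAUSE_SECONDS = 0.15

def render_pattern(pattern: str) -> list:
    """Translate a symbolic pattern such as '* * *' or '=*=' into a list
    of (intensity, duration) pulses, with zero-intensity entries for pauses."""
    pulses = []
    for symbol in pattern:
        pulses.append((0.0, PAUSE_SECONDS) if symbol == " "
                      else SYMBOL_TO_PULSE[symbol])
    return pulses

print(render_pattern("* * *"))  # car approaching very closely
```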
- As mentioned above, based on the direction of the sound, one or more motors at different positions vibrate.
- Returning to
FIG. 4, from block 408 the logic proceeds to block 410 to activate one or more haptic generators 308 according to the identification of haptic type at block 408. Note that an example haptic type identified at block 408 may be regarded as a baseline, particularly in terms of the amplitude of the demanded haptic feedback, and that this baseline may be increased or decreased in step with higher and lower speaker 305 volumes. - Haptic feedback with directional information can be implemented by using multiple vibration motors built into different spots on the earbuds or headphones unit.
-
FIG. 6 illustrates various types of ambient context that may be sensed using the microphones 310 of FIG. 3 and the disclosure herein, as noise suppression features of the wearable device 300 may block the user's hearing capability. Events that a user may want to notice include dangerous situations, such as a car 600 approaching and/or honking 602, or a person 604 calling for the user's attention using terms 606 like “hey”, “excuse me”, or calling the user by name, etc. The sensitivity of event detection and haptic feedback can be adaptive to the sound volume level of the speakers of the wearable device. For example, the higher the volume the user is listening at, the stronger the haptic feedback that may be generated.
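A minimal sketch of that volume-adaptive scaling, with assumed gain limits, might be:

```python
def scale_haptic_intensity(baseline: float, speaker_volume: float,
                           min_gain: float = 0.5, max_gain: float = 2.0) -> float:
    """Scale a baseline haptic intensity with the current speaker volume:
    louder playback masks more ambient sound, so the haptic cue is made
    stronger. The gain limits are assumptions, not specified values."""
    volume = max(0.0, min(speaker_volume, 1.0))       # clamp to 0..1
    gain = min_gain + (max_gain - min_gain) * volume  # linear ramp
    return min(1.0, baseline * gain)                  # cap at full intensity
```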
- FIG. 7 illustrates an example user interface (UI) 700 that may be provided, e.g., by a downloaded application on a mobile phone to allow a user of the wearable device to change the sensitivity and feedback strength of the haptic signaling. A selector 702 may be provided to turn off the haptic signaling described herein, while another selector 704 may be provided to enable the above-disclosed haptic signaling. The user may also select from a list 706 whether he or she always wants haptic signaling on for all ambient noise, or only for certain types of ambient noise such as someone calling the user's name, or dangerous situations. The user may also select from a list 708 whether to employ normal baseline haptic feedback intensity, gentle baseline haptic feedback intensity, or high baseline haptic feedback intensity.
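The selections in such a UI could plausibly be carried in a settings object like the following sketch, whose field names and values are assumptions rather than the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class HapticSettings:
    enabled: bool = True       # selectors 702/704: haptic signaling off/on
    noise_types: str = "all"   # list 706: "all", "name_call", or "danger"
    baseline: str = "normal"   # list 708: "gentle", "normal", or "high"

BASELINE_INTENSITY = {"gentle": 0.3, "normal": 0.6, "high": 1.0}

def should_signal(settings: HapticSettings, sound_type: str) -> bool:
    """Gate haptic signaling on the user's UI choices."""
    if not settings.enabled:
        return False
    return settings.noise_types in ("all", sound_type)

def baseline_intensity(settings: HapticSettings) -> float:
    return BASELINE_INTENSITY[settings.baseline]
```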
- Before concluding, it is to be understood that although a software application for undertaking present principles may be vended with a device such as the system 100, present principles apply in instances where such an application is downloaded from a server to a device over a network such as the Internet. Furthermore, present principles apply in instances where such an application is included on a computer-readable storage medium that is being vended and/or provided, where the computer-readable storage medium is not a transitory signal and/or a signal per se. - It is to be understood that while present principles have been described with reference to some example embodiments, these are not intended to be limiting, and various alternative arrangements may be used to implement the subject matter claimed herein. Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged, or excluded from other embodiments.
Claims (20)
1. A device, comprising:
at least one computer memory that is not a transitory signal and that comprises instructions executable by at least one processor to:
sense ambient sound using at least one microphone on a head-wearable speaker assembly;
determine at least one parameter of the ambient sound; and
based at least in part on the at least one parameter, activate at least one haptic generator on the head-wearable speaker assembly, wherein the at least one parameter comprises a first parameter and the at least one haptic generator is activated to generate a first tactile signal responsive to the first parameter and a first signal indicating the device is in a first location, the at least one haptic generator being activated to generate a second tactile signal different from the first tactile signal responsive to the first parameter and a second signal indicating the device is in a second location different from the first location.
2. The device of claim 1, wherein the at least one parameter comprises a type of sound.
3. The device of claim 1, wherein the at least one parameter comprises a direction of sound.
4. The device of claim 1, wherein the at least one parameter comprises a location of sound origination.
5. The device of claim 1, wherein the at least one parameter comprises an amplitude of sound.
6. The device of claim 1, wherein the at least one parameter comprises speech in the ambient sound.
7. The device of claim 1, wherein the instructions are executable to, based at least in part on the at least one parameter, establish a location of haptic feedback on the head-wearable speaker assembly.
8. The device of claim 1, wherein the instructions are executable to, based at least in part on the at least one parameter, establish an intensity of haptic feedback on the head-wearable speaker assembly.
9. The device of claim 1, wherein the instructions are executable to, based at least in part on a speaker volume of the head-wearable speaker assembly, establish an intensity of haptic feedback on the head-wearable speaker assembly.
10. The device of claim 1, comprising at least one processor.
11. A method comprising:
determining a context of ambient sound impinging on a wearable listening device;
based at least in part on the context, activating at least one haptic generator to provide feedback of the ambient sound; and
presenting on at least one display at least one user interface (UI) comprising:
a first selector element selectable to turn off haptic signaling by the at least one haptic generator; and
at least one of a second selector element and a third selector element, the second selector element operable to establish an ambient noise type for which haptic signaling is enabled, the third selector element operable to establish at least one intensity of haptic signaling.
12. The method of claim 11, wherein the wearable listening device comprises noise-canceling earbuds.
13. The method of claim 11, wherein the context is based at least in part on a type of sound.
14. The method of claim 11, wherein the context is based at least in part on a direction of sound.
15. The method of claim 11, wherein the context is based at least in part on an amplitude of sound.
16. The method of claim 11, comprising, based at least in part on the ambient sound, establishing a location of haptic feedback on the wearable listening device.
17. The method of claim 11, comprising, based at least in part on the ambient sound, establishing an intensity of haptic feedback on the wearable listening device.
18. The method of claim 11, comprising, based at least in part on the ambient sound, establishing an intensity of haptic feedback on headphones.
19. An apparatus, comprising:
at least one head-wearable mount;
at least one speaker on the head-wearable mount;
at least one microphone on the head-wearable mount; and
at least first and second haptic generators on the head-wearable mount, wherein the apparatus is adapted to activate only the first haptic generator responsive to a first type of sound signal from the at least one microphone and to activate both the first and second haptic generators responsive to a second type of sound signal from the at least one microphone.
20. The apparatus of claim 19, wherein the apparatus is adapted to activate the haptic generator responsive to ambient sound sensed by the microphone.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/471,977 US10110986B1 (en) | 2017-03-28 | 2017-03-28 | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound |
US16/014,244 US10382866B2 (en) | 2017-03-28 | 2018-06-21 | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/471,977 US10110986B1 (en) | 2017-03-28 | 2017-03-28 | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/014,244 Division US10382866B2 (en) | 2017-03-28 | 2018-06-21 | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180288519A1 (en) | 2018-10-04
US10110986B1 US10110986B1 (en) | 2018-10-23 |
Family ID: 63671203
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/471,977 Active US10110986B1 (en) | 2017-03-28 | 2017-03-28 | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound |
US16/014,244 Active US10382866B2 (en) | 2017-03-28 | 2018-06-21 | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/014,244 Active US10382866B2 (en) | 2017-03-28 | 2018-06-21 | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound |
Country Status (1)
Country | Link |
---|---|
US (2) | US10110986B1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111817643A (en) * | 2020-06-24 | 2020-10-23 | 包头长安永磁电机有限公司 | Motor noise reduction system and method based on microphone array noise monitoring |
US10966007B1 (en) * | 2018-09-25 | 2021-03-30 | Apple Inc. | Haptic output system |
CN113711621A (en) * | 2019-02-18 | 2021-11-26 | 伯斯有限公司 | Intelligent safety masking and warning system |
US20220276708A1 (en) * | 2017-01-23 | 2022-09-01 | Naqi Logix Inc. | Apparatus, methods, and systems for using imagined direction to define actions, functions, or execution |
WO2023027998A1 (en) * | 2021-08-23 | 2023-03-02 | Peter Stevens | Haptic and visual communication system for the hearing impaired |
EP4159169A1 (en) * | 2021-09-30 | 2023-04-05 | 3M Innovative Properties Company | Hearing protection device with haptic feedback and method of operating a hearing protection device |
US11743570B1 (en) * | 2022-05-18 | 2023-08-29 | Motorola Solutions, Inc. | Camera parameter adjustment based on frequency shift |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10585480B1 (en) | 2016-05-10 | 2020-03-10 | Apple Inc. | Electronic device with an input device having a haptic engine |
US10649529B1 (en) | 2016-06-28 | 2020-05-12 | Apple Inc. | Modification of user-perceived feedback of an input device using acoustic or haptic output |
EP3445063B1 (en) * | 2017-08-18 | 2020-04-22 | Honeywell International Inc. | System and method for hearing protection device to communicate alerts from personal protection equipment to user |
US10768747B2 (en) | 2017-08-31 | 2020-09-08 | Apple Inc. | Haptic realignment cues for touch-input displays |
US11054932B2 (en) | 2017-09-06 | 2021-07-06 | Apple Inc. | Electronic device having a touch sensor, force sensor, and haptic actuator in an integrated module |
US10768738B1 (en) | 2017-09-27 | 2020-09-08 | Apple Inc. | Electronic device having a haptic actuator with magnetic augmentation |
US10942571B2 (en) | 2018-06-29 | 2021-03-09 | Apple Inc. | Laptop computing device with discrete haptic regions |
US10936071B2 (en) | 2018-08-30 | 2021-03-02 | Apple Inc. | Wearable electronic device with haptic rotatable input |
US11024135B1 (en) | 2020-06-17 | 2021-06-01 | Apple Inc. | Portable electronic device having a haptic button assembly |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150005039A1 (en) * | 2013-06-29 | 2015-01-01 | Min Liu | System and method for adaptive haptic effects |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2501767A (en) * | 2012-05-04 | 2013-11-06 | Sony Comp Entertainment Europe | Noise cancelling headset |
US9049527B2 (en) * | 2012-08-28 | 2015-06-02 | Cochlear Limited | Removable attachment of a passive transcutaneous bone conduction device with limited skin deformation |
KR102192361B1 (en) * | 2013-07-01 | 2020-12-17 | 삼성전자주식회사 | Method and apparatus for user interface by sensing head movement |
US9398367B1 (en) * | 2014-07-25 | 2016-07-19 | Amazon Technologies, Inc. | Suspending noise cancellation using keyword spotting |
US10231056B2 (en) * | 2014-12-27 | 2019-03-12 | Intel Corporation | Binaural recording for processing audio signals to enable alerts |
US10438609B2 (en) * | 2016-01-14 | 2019-10-08 | George Brandon Foshee | System and device for audio translation to tactile response |
- 2017-03-28: US 15/471,977, patent US10110986B1 (en), Active
- 2018-06-21: US 16/014,244, patent US10382866B2 (en), Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150005039A1 (en) * | 2013-06-29 | 2015-01-01 | Min Liu | System and method for adaptive haptic effects |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11775068B2 (en) * | 2017-01-23 | 2023-10-03 | Naqi Logix Inc. | Apparatus, methods, and systems for using imagined direction to define actions, functions, or execution |
US20220276708A1 (en) * | 2017-01-23 | 2022-09-01 | Naqi Logix Inc. | Apparatus, methods, and systems for using imagined direction to define actions, functions, or execution |
US20230418379A1 (en) * | 2017-01-23 | 2023-12-28 | Naqi Logix Inc. | Apparatus, methods, and systems for using imagined direction to define actions, functions, or execution |
US10966007B1 (en) * | 2018-09-25 | 2021-03-30 | Apple Inc. | Haptic output system |
US20210176548A1 (en) * | 2018-09-25 | 2021-06-10 | Apple Inc. | Haptic Output System |
US20240064447A1 (en) * | 2018-09-25 | 2024-02-22 | Apple Inc. | Haptic Output System |
US11805345B2 (en) * | 2018-09-25 | 2023-10-31 | Apple Inc. | Haptic output system |
CN113711621A (en) * | 2019-02-18 | 2021-11-26 | 伯斯有限公司 | Intelligent safety masking and warning system |
CN111817643A (en) * | 2020-06-24 | 2020-10-23 | 包头长安永磁电机有限公司 | Motor noise reduction system and method based on microphone array noise monitoring |
WO2023027998A1 (en) * | 2021-08-23 | 2023-03-02 | Peter Stevens | Haptic and visual communication system for the hearing impaired |
WO2023052896A1 (en) * | 2021-09-30 | 2023-04-06 | 3M Innovative Properties Company | Hearing protection device with haptic feedback and method of operating a hearing protection device |
EP4159169A1 (en) * | 2021-09-30 | 2023-04-05 | 3M Innovative Properties Company | Hearing protection device with haptic feedback and method of operating a hearing protection device |
US11743570B1 (en) * | 2022-05-18 | 2023-08-29 | Motorola Solutions, Inc. | Camera parameter adjustment based on frequency shift |
Also Published As
Publication number | Publication date |
---|---|
US20180302707A1 (en) | 2018-10-18 |
US10110986B1 (en) | 2018-10-23 |
US10382866B2 (en) | 2019-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10382866B2 (en) | Haptic feedback for head-wearable speaker mount such as headphones or earbuds to indicate ambient sound | |
US10339913B2 (en) | Context-based cancellation and amplification of acoustical signals in acoustical environments | |
US10353495B2 (en) | Personalized operation of a mobile device using sensor signatures | |
US9706304B1 (en) | Systems and methods to control audio output for a particular ear of a user | |
US10103699B2 (en) | Automatically adjusting a volume of a speaker of a device based on an amplitude of voice input to the device | |
US11482237B2 (en) | Method and terminal for reconstructing speech signal, and computer storage medium | |
US20190043521A1 (en) | Automatic Gain Adjustment for Improved Wake Word Recognition in Audio Systems | |
US9766852B2 (en) | Non-audio notification of audible events | |
CN108335703B (en) | Method and apparatus for determining accent position of audio data | |
US9807499B2 (en) | Systems and methods to identify device with which to participate in communication of audio data | |
US9811311B2 (en) | Using ultrasound to improve IMU-based gesture detection | |
WO2018045743A1 (en) | Method and apparatus for adjusting display direction of screen, storage medium, and device | |
US10468022B2 (en) | Multi mode voice assistant for the hearing disabled | |
US20150205577A1 (en) | Detecting noise or object interruption in audio video viewing and altering presentation based thereon | |
US9772815B1 (en) | Personalized operation of a mobile device using acoustic and non-acoustic information | |
US20210255820A1 (en) | Presentation of audio content at volume level determined based on audio content and device environment | |
US20210158809A1 (en) | Execution of function based on user being within threshold distance to apparatus | |
US20190018493A1 (en) | Actuating vibration element on device based on sensor input | |
US10645517B1 (en) | Techniques to optimize microphone and speaker array based on presence and location | |
US11258417B2 (en) | Techniques for using computer vision to alter operation of speaker(s) and/or microphone(s) of device | |
US11217220B1 (en) | Controlling devices to mask sound in areas proximate to the devices | |
US11269667B2 (en) | Techniques to switch between different types of virtual assistance based on threshold being met | |
US10122854B2 (en) | Interactive voice response (IVR) using voice input for tactile input based on context | |
US20160150065A1 (en) | Transmission of data pertaining to use of speaker phone function and people present during telephonic communication | |
US20210256954A1 (en) | Cancellation of sound at first device based on noise cancellation signals received from second device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MOTOROLA MOBILITY LLC, ILLINOIS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MIN, JUN-KI;REEL/FRAME:041770/0645 Effective date: 20170328 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |