CN117768829A - State detection for wearable audio devices - Google Patents


Info

Publication number: CN117768829A
Authority: CN (China)
Prior art keywords: audio signal, audio device, wearable, wearable audio, audio
Legal status: Pending
Application number: CN202311238760.1A
Other languages: Chinese (zh)
Inventors: T-D·W·索克斯, C·达拉布恩蒂特, M·E·约翰逊, V·赵
Current Assignee: Apple Inc
Original Assignee: Apple Inc
Priority date: 2022-09-23
Filing date: 2023-09-22
Publication date: 2024-03-26
Priority claimed from US18/239,718 (external priority: US20240107246A1)
Application filed by Apple Inc


Landscapes

  • Circuit For Audible Band Transducer (AREA)

Abstract

The present disclosure relates to state detection for wearable audio devices. Aspects of the subject technology provide improved techniques for determining the state of an audio device, including low power techniques for determining whether an earbud is currently being worn in a user's ear. In aspects, a potential state change in an audio device may be detected, and in response, a measurement of the device's state may be initiated, such as by transmitting an audio signal and then determining the state of the audio device based on a sensed version of the transmitted audio signal.

Description

State detection for wearable audio devices
Cross Reference to Related Applications
The present application claims the benefit of priority from U.S. Provisional Patent Application Ser. No. 63/409,653, entitled "State Detection For Wearable Audio Devices," filed September 23, 2022, the disclosure of which is hereby incorporated by reference herein in its entirety.
Technical Field
The present description relates generally to personal audio devices.
Background
A media device may send an audio signal to one or more audio accessories to play the audio. For example, the media device may select between one or more in-ear headphones worn by the user during playback, or the media device may send the audio to another speaker. The selection between the wearable audio accessory and the other speaker may be based on the state of the audio accessory, such as whether an earbud is properly positioned within the user's ear.
Drawings
Certain features of the subject technology are set forth in the appended claims. However, for purposes of illustration, several implementations of the subject technology are set forth in the following drawings.
Fig. 1 illustrates an exemplary audio device.
Fig. 2 illustrates an exemplary audio system for detecting a device state.
Fig. 3 illustrates an exemplary method for detecting a device state.
Fig. 4 illustrates a media device and an associated audio accessory.
Fig. 5 illustrates an isometric view of a housing for an in-ear earphone.
Fig. 6 schematically illustrates the anatomy of a typical human ear.
Fig. 7 schematically illustrates an in-ear earphone positioned in the human ear shown in Fig. 6.
Fig. 8 illustrates a block diagram showing aspects of an audio appliance.
Fig. 9 illustrates a block diagram showing aspects of a computing environment.
Detailed Description
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The accompanying drawings are incorporated in and constitute a part of this specification. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details described herein and may be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Improved techniques for determining the state of an audio device are presented, including low power techniques for determining whether an earbud is currently being worn in a user's ear. In aspects, a potential state change in an audio device may be detected, and in response, the state of the device may be determined. For example, in response to detecting a potential state change, a measurement of the device state may be initiated, such as by transmitting an audio signal and then determining the state of the audio device based on analysis of a sensed version of the transmitted audio signal. In some cases, various low power techniques may be used to detect potential state changes, while a high confidence measurement of the current device state may consume relatively more power than detection of a potential state change. By initiating the high power measurement only in response to a low power detection of a potential change, the frequency of the high power process can be reduced, resulting in lower average power requirements over time.
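For illustration only, a minimal sketch of this two-stage flow follows; the sketch is not part of the patent disclosure, and every name in it is hypothetical.

```python
# Hypothetical sketch (not the patent's implementation) of the two-stage
# flow described above: a low-power check for a potential state change
# gates a high-power, high-confidence state measurement.

def detect_state(low_power_check, high_power_measure, current_state):
    """low_power_check() -> bool: cheap test for a potential state change.
    high_power_measure() -> str: costly measurement, e.g., transmit an
    ultrasonic chirp and classify the sensed response as a device state."""
    if not low_power_check():
        # No potential change: skip the expensive probe entirely, which is
        # what lowers the average power draw over time.
        return current_state
    return high_power_measure()

# The high-power path runs only when the low-power path fires.
state = detect_state(lambda: False, lambda: "in_ear", "out_of_ear")
assert state == "out_of_ear"  # probe skipped; previous state retained
```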
In aspects, potential state changes may be detected from various received signals, including by transmitting and analyzing a substantially monotone ultrasonic signal, detecting sufficient motion from a motion sensor, and/or analyzing ambient noise via one or more microphones. In other aspects, the high confidence device state measurement may include analysis of a transmitted ultrasonic signal with a variable pitch, such as a chirp.
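As a concrete illustration of the two probe types named above, consider the following sketch; the sample rate, frequencies, and durations are assumptions chosen for illustration, not values given by the patent.

```python
import numpy as np

FS = 96_000  # assumed sample rate (Hz); must exceed twice the highest probe frequency

def monotone_probe(freq_hz=30_000.0, duration_s=0.05):
    """Substantially monotone (single-frequency) ultrasonic tone, as might
    be used for low-power detection of a potential state change."""
    t = np.arange(int(FS * duration_s)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def chirp_probe(f0_hz=25_000.0, f1_hz=40_000.0, duration_s=0.05):
    """Linear chirp whose instantaneous frequency sweeps from f0 to f1, as
    might be used for the high-confidence state measurement."""
    t = np.arange(int(FS * duration_s)) / FS
    # Phase of a linear sweep: 2*pi*(f0*t + ((f1 - f0)/(2*T))*t^2)
    phase = 2 * np.pi * (f0_hz * t + (f1_hz - f0_hz) / (2 * duration_s) * t**2)
    return np.sin(phase)
```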
Fig. 1 shows an example 100 of an audio device. Example 100 includes an audio device 102 in the environment of an ear 104. The device 102 may include an emitter 120 and one or more sensors. In example 100, device 102 includes a first sensor 122, a second sensor 124, and a third sensor 126. In aspects, the device 102 may include a variety of different types of sensors. In example 100, the first sensor 122 may be a microphone positioned on the housing of the device 102 at a location within the user's ear canal when in a worn state (such as an "error microphone" typically used for noise cancellation), and the second sensor 124 may be positioned outside the ear when the device 102 is in a worn state (such as a microphone used to record the wearer's voice). The third sensor 126 may be a motion sensor, such as an accelerometer or another type of inertial measurement unit (IMU). In an aspect, the audio device 102 may be an audio accessory of a separate audio device (not depicted), such as a paired mobile phone that serves as a source or sink for a digital audio stream (e.g., music).
Aspects of the technology disclosed herein may include detecting a potential state change of the device 102 relative to the ear 104 and then determining the resulting static state. Detecting a potential change in state of the device 102 may include detecting insertion 150 of the device 102 into the ear 104, removal 152 of the device 102 from the ear 104, or movement into or out of the ear canal of the ear 104. Determining the static state may include determining whether the audio device is worn, as well as determining a quality of alignment or a quality of acoustic coupling between the audio device 102 and the ear 104.
In aspects, there may be many possible audio device states. For example, possible states of the audio device may include in-ear, out-of-ear, in a device case, powered off, in a pocket, and so on. The techniques described herein for determining an audio device state may include selecting between only two such states, such as between in-ear and out-of-ear, or may include selecting among more than two such states.
Fig. 2 illustrates an exemplary audio system 200 for detecting a device state. The audio system 200 may be an example of the audio device 102 of fig. 1. The audio system 200 comprises a transmitter 220, a first sensor 222, and a control unit 210. Optional aspects include a second sensor 224 and a third sensor 226, a network 240 (e.g., a wired or wireless network connection to an audio source such as a paired media device), and/or ambient sound 230.
In operation, the audio system 200 may determine the state of the system by first detecting a potential state change based on data from one or more of the sensors 222, 224, 226. In response to detecting the potential state change, the control unit 210 may begin determining the state of the audio device. For example, the state may be determined by having the transmitter 220 generate a state determination signal, and a sensed version of the state determination signal may be received at the first sensor 222 (such as an internal microphone of an earbud). The state determination signal may include an ultrasonic chirp or another inaudible sound with a pitch or a change in pitch. The state of the device may then be determined by classifying the received version of the state determination signal, which may include, for example, classification as in-ear (worn state) or free field (unworn state). When the state classification is uncertain, a previously known state of the audio system 200 may be assumed or retained. In one aspect, classification of the received version of the state determination signal, or other analysis of the state determination signal, may be performed using a neural network or another type of signal analysis technique in order to determine the state of the audio device.
In one aspect, detection of the potential state change may include first transmitting a state change signal from the transmitter 220 (in addition to the later transmission of the state determination signal), and then receiving the state change signal at one or more of the sensors of the audio system 200. In an alternative aspect (not depicted), the state change signal may be received at two separate sensors, such as the first sensor 222 and the second sensor 224, and the potential state change may be determined based on a comparison of the two received versions of the state change signal. In one aspect, the transmitted state change signal may comprise an inaudible tone and may be transmitted substantially continuously. In another aspect, the transmitted state change signal may include human-audible sound, such as music received from a music source via the network 240.
In other aspects for detecting a potential state change, the potential state change may be determined from sensed audio that is not first transmitted by a transmitter of the system 200. For example, the state change may be determined by comparing ambient sounds 230 received at two different sensors 222 and 224, such as where sensor 222 is a microphone within the user's ear canal and sensor 224 is a microphone outside the user's ear canal. Ambient sounds sensed by the two different microphones may be compared to each other and/or to previous recordings of ambient sounds in order to determine a potential state change of the audio device. In one aspect, a current difference between the envelopes of the ambient sound at the two microphones may be compared to the corresponding difference for previous ambient sound, and when the change in the difference is above a threshold, it may be determined that a potential state change has occurred between sensing the previous ambient sound and sensing the current ambient sound.
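A minimal sketch of that envelope comparison follows; the window size and threshold are illustrative assumptions, and the two recordings are assumed to be time-aligned and of equal length.

```python
import numpy as np

def envelope(x, win=256):
    """Crude amplitude envelope: RMS over non-overlapping windows."""
    n = (len(x) // win) * win
    frames = np.asarray(x[:n]).reshape(-1, win)
    return np.sqrt((frames ** 2).mean(axis=1))

def potential_state_change(inner_mic, outer_mic, prev_gap, threshold=0.2):
    """Compare the inner/outer envelope gap to its previously recorded value.

    A worn earbud attenuates ambient sound at the internal microphone, so the
    gap between the two envelopes shifts when the device is inserted into or
    removed from the ear. Returns (changed, new_gap)."""
    gap = float(np.mean(envelope(inner_mic) - envelope(outer_mic)))
    return abs(gap - prev_gap) > threshold, gap
```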
In another aspect, potential state changes may be detected from various other types of sensors, such as a motion sensor (e.g., an accelerometer), a light sensor, or a sensor indicating wireless network signal strength (e.g., Wi-Fi or Bluetooth). In one aspect, a potential state change is determined when a measurement from one of the sensors is above a threshold. For example, when movement of the device, sensed as a motion signal from the motion sensor, is above a threshold, a potential state change is detected and the state determination process may begin. When the movement is below the threshold, no potential state change is detected and the state determination process may not begin.
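For example, the motion gate might reduce to a peak-deviation test like the following sketch; the 1 g resting baseline and the threshold value are illustrative assumptions.

```python
import math

def motion_triggers_measurement(accel_samples, threshold_g=0.3):
    """accel_samples: iterable of (x, y, z) accelerometer readings in g.
    Returns True when the peak deviation from 1 g (gravity at rest)
    exceeds the threshold, i.e., a potential state change is detected."""
    deviations = (abs(math.sqrt(x * x + y * y + z * z) - 1.0)
                  for x, y, z in accel_samples)
    return max(deviations) > threshold_g

# A device at rest reads about 1 g; a handled device deviates from it.
assert not motion_triggers_measurement([(0.0, 0.0, 1.0), (0.01, 0.0, 0.99)])
assert motion_triggers_measurement([(0.0, 0.0, 1.0), (0.5, 0.3, 1.2)])
```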
Device 102 is depicted in fig. 1 as a wireless earbud, an audio device that may be inserted into a portion of a user's ear canal. However, the technology disclosed herein is not limited to earbuds. For example, the audio system 200 may be implemented as a wired audio device (e.g., with physical wires connecting the audio system 200 to an audio source), or as a headset in which a single headset structure includes an earphone for each ear, and the earphones may be shaped for in-ear, on-ear, or over-ear positioning, including earphones that are not inserted into the ear canal.
Fig. 3 illustrates an exemplary method 300 for detecting a device state. In method 300, a state change signal is received as a first signal at a first sensor (block 306). When a potential state change is detected, a state determination signal is transmitted as a second signal (block 312). A third signal is received (block 314), the third signal including a received version of at least a portion of the transmitted state determination signal. A state of the device is determined based on the third signal (block 316). As explained above with respect to fig. 2, in aspects, the state change signal may comprise ambient sound, a substantially monotone ultrasonic signal emitted by the device, and/or a signal from a non-audio sensor (such as a sensor for motion or light), while the state determination signal may comprise an ultrasonic chirp of varying pitch over time. In some aspects, transmitting and/or analyzing the state change signal may require fewer resources (such as power or computational complexity) than transmitting and/or analyzing the state determination signal.
In an optional aspect of the method 300, a fourth signal may be transmitted (block 304), wherein receiving the first signal (block 306) includes receiving a version of at least a portion of the transmitted fourth signal. In one aspect, the fourth signal may be an inaudible signal. Inaudible signals may include frequencies in the ultrasonic range, or may be another type of signal not normally audible to humans, such as certain tone sequences or signals masked by other environmental sounds. The transmitted fourth signal may also include a maximum length sequence (MLS) signal. In other aspects, the fourth signal may instead include human-audible sound, such as music.
In another aspect, the fourth signal may be transmitted periodically (e.g., once per second) or substantially continuously. In some cases, it may be emitted only in certain device states, such as when the device is in a worn state, and not emitted when in an unworn state. In other aspects, the fourth signal may be transmitted (block 304) in response to detecting movement of the device (block 301). For example, the fourth signal may be transmitted (block 304) when movement above a threshold is detected from an accelerometer or inertial measurement unit (IMU). In another optional aspect, the transmitted fourth signal may include a digital signal received from a remote audio source (block 302), such as from a paired mobile phone or other audio device.
In another optional aspect, a fifth signal may be received at the second sensor (block 308), and the fifth signal may be used in combination with the first signal from the first sensor (block 306) to detect a state change (block 310). For example, the first signal and the fifth signal may be compared with each other or with a previously received signal. In one aspect, the state change signal may be measured at an internal microphone (positioned within the ear canal when in a worn state, sometimes referred to as an "error microphone") and an external microphone (positioned outside the ear when in a worn state), and two different received versions of the state change signal (i.e., the received first signal in block 306 and the received fifth signal in block 308) may be compared to determine a potential state change (block 310). In this aspect, the state change signal may be a received version of the transmitted fourth signal (block 304) or may be ambient sound that is not transmitted by the audio device. As further explained above with respect to fig. 2, a potential state change may be detected based on a comparison of ambient sounds measured at both the internal microphone and the external microphone (block 310).
In an aspect, a state change may be detected based on an analysis of the received first signal (block 310). For example, analysis of the envelope of a portion of the first signal may indicate that the wearable audio device is moving toward or away from an ear of a user of the wearable audio device. In one aspect, the portion of the first audio signal indicative of movement may comprise a version of the transmitted fourth signal that has been reflected off a portion of the ear of the user of the wearable audio device.
In one aspect, the first signal and the second signal may have different time characteristics. For example, the first signal may include audio tones that are substantially continuous over time at one or more frequencies, while the second signal may include a discontinuous set of tones, where at least two tones are generated at different frequencies at different times.
In one aspect, when the confidence of an estimate is low, determining the state of the device (block 316) may depend on a previously known state of the device. The current state of the device may be estimated along with a corresponding confidence in the state estimate (optional block 318). For example, the estimate and corresponding confidence may be generated by analysis of the third signal by a neural network. The current state of the device may then be determined based on the estimated current state, the corresponding confidence, and a previously determined state of the device. For example, when the confidence in the estimated current state is low (e.g., below a threshold), the current device state may be determined to be the previously determined device state; when the confidence is high (e.g., above the threshold), the current device state is determined to be the estimated state.
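A sketch of that confidence-gated update follows; the state names and the 0.8 threshold are illustrative assumptions, not values from the patent.

```python
from enum import Enum

class DeviceState(Enum):
    IN_EAR = "in_ear"
    OUT_OF_EAR = "out_of_ear"

def resolve_state(estimate: DeviceState, confidence: float,
                  previous: DeviceState, threshold: float = 0.8) -> DeviceState:
    """Adopt the classifier's estimate only when its confidence clears the
    threshold; otherwise retain the previously known device state."""
    return estimate if confidence >= threshold else previous

# A low-confidence "out of ear" estimate does not override a known in-ear
# state, which avoids, e.g., spuriously stopping audio playback.
assert resolve_state(DeviceState.OUT_OF_EAR, 0.4, DeviceState.IN_EAR) is DeviceState.IN_EAR
assert resolve_state(DeviceState.OUT_OF_EAR, 0.95, DeviceState.IN_EAR) is DeviceState.OUT_OF_EAR
```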
In an optional aspect, an action may be taken based on the state of the device (block 322). For example, a paired device may be notified of a device state change; the device may start audio streaming if the state transitions to the worn state or, conversely, stop audio streaming if the state transitions to the unworn state. In another example, the device may enter a low power mode after transitioning to the unworn state.
Fig. 4 illustrates a portable media device 10 suitable for use with a variety of accessory devices. In one aspect, the method 300 of fig. 3 may include determining a state of the accessory device 20a and/or 20b (block 316), and the source of the received digital signal (block 302) may be the portable media device 10. The portable media device 10 may include, for example, a touch-sensitive display configured to provide a touch-sensitive user interface for controlling the portable media device 10 and, in some aspects, any accessory to which the portable media device 10 is electrically or wirelessly coupled. The media device 10 may also include mechanical buttons, tactile/haptic buttons, or variations thereof, or any other suitable means for navigating on the device. The portable media device 10 may also include a communication connection, for example, one or more hardwired input/output (I/O) ports, which may include digital I/O ports and/or analog I/O ports, or a wireless communication connection.
The accessory device may take the form of an audio device comprising two separate earbuds 20a and 20b. Each of the earbuds 20a and 20b may include a wireless receiver, transmitter, or transceiver capable of establishing a wireless link 16 with the portable media device 10 and/or with each other. Alternatively, and not shown in fig. 4, the accessory device may take the form of a wired or corded audio device that includes separate earbuds. Such wired earbuds may be electrically coupled to each other and/or to a connector plug by a plurality of wires. The connector plug is matingly engageable with one or more of the I/O ports to establish a communication link between the media device and the accessory via the wires. In some wired aspects, power and/or selected communications may be carried over one or more wires, while other selected communications may be conducted wirelessly.
In one aspect, the housing of the earbud 20 as depicted in fig. 5 may enable a determination of a current device state (block 316 of fig. 3). In other aspects, fig. 6 may illustrate features of a human ear, such as ear 104 (fig. 1), and fig. 7 may illustrate an earbud inserted into a human ear, such as device 102 after insertion 150 into ear 104 (fig. 1).
Fig. 5 shows an isometric view of a housing for an in-ear earphone. Fig. 5 shows an earbud housing 20 configured to operatively engage the common anatomy of a human ear when worn by a user, and fig. 6 schematically illustrates such an ear 30. Fig. 7 shows the earbud housing 20 positioned within the user's ear 30 during use. As depicted in figs. 5, 6, and 7, the earbud housing 20 defines a major interior side surface 24 that faces the surface of the user's concha cavity 33 when the housing 20 is properly seated in the user's ear 30.
For example, when properly positioned in the user's ear 30, the earphone housing 20 may rest in the user's concha cavity 33 between the user's tragus 36 and antitragus 37, as shown in fig. 7. When the earphone is properly positioned, the outer surface of the housing (e.g., the major interior side surface 24) may be complementarily contoured relative to, for example, the user's concha cavity 33 (or another anatomical structure) to provide a contact area 43 (fig. 7) between the contoured outer surface and the user's skin. The contact area 43 may span a substantial portion of the contoured outer surface 24. Those of ordinary skill in the art will understand and appreciate that while the complementarily contoured outer surface 24 is described with respect to the concha cavity 33, other outer regions of the earphone housing 20 may be complementarily contoured with respect to another region of the human ear 30. For example, the housing 20 defines a major bottom surface 21 that generally rests against an area of the user's ear between the antitragus 37 and the concha cavity 33 to define a contact area 42. Other contact areas are also possible.
The housing 20 also defines a major side surface 28 from which a post 22 extends. The post 22 may include a microphone transducer and/or one or more other components, such as a battery. Alternatively, in the case of a wired earbud, one or more wires may extend from the post 22. As shown in fig. 7, when the earbud is properly worn, the post 22 extends generally parallel to a plane defined by the user's earlobe 39 at a location laterally outward from a gap 38 between the user's tragus 36 and antitragus 37.
In addition, the earbud defines an acoustic port 23. The port 23 provides an acoustic path from the interior region of the housing 20 to the exterior 25 of the housing. As shown in fig. 7, when the earbud is properly worn as described above, the port 23 is aligned with and opens into the user's ear canal 31. A mesh, screen, membrane, or other protective barrier (not shown) may extend across the port 23 to inhibit or prevent debris from entering the interior of the housing.
In some earbuds, the housing 20 defines a projection or other protrusion from which the port 23 opens. The projection or other protrusion may extend into the ear canal 31 and may contact the wall of the ear canal over the contact area 41. Alternatively, the projection or other protrusion may provide a structure to which a resiliently flexible cover (not shown), such as, for example, a silicone cover, may be attached to provide an intermediate structure that forms a sealing engagement between the wall of the user's ear canal 31 and the housing 20 over the contact area 41. The sealing engagement may enhance perceived sound quality, for example, by passively attenuating external noise and suppressing acoustic power loss from the earbud.
Although not specifically shown, the housing 20 may also include compliant members to accommodate person-to-person variations in the contours of the tragus 36, antitragus 37, and concha cavity 33. For example, the compliant members can matingly engage regions of the housing 20 corresponding to the major surface 24. Such compliant members (not shown) may accommodate an amount of compression that allows the housing 20 to be fixedly seated within the user's ear 30, such as within the concha cavity 33.
The housing 20 may be formed of any material or combination of materials suitable for earphones. For example, some housings are formed from acrylonitrile butadiene styrene (ABS). Other representative materials include polycarbonates, acrylics, methacrylates, epoxies, and the like. The compliant member may be formed of a polymer such as silicone, latex, or the like.
Fig. 7 also depicts a plurality of contact areas between the earphone housing 20 and the tissue of the user's ear 30. Each area 41, 42, 43 defines a region on a surface of the housing 20, or on a surface of an intermediate compliant member, that abuts the user's tissue.
A proximity sensor, or a portion thereof, may be positioned within the housing 20 opposite a selected contact area 41, 42, 43 with respect to the housing wall. For example, a proximity sensor, or a transmitter and/or receiver thereof, may be positioned within the housing 20 opposite the contact areas 41, 42, 43 (or other intended contact areas) to define corresponding sensitive areas of the earphone housing. Each respective sensor may evaluate whether, or to what extent, the corresponding contact area 41, 42, 43, and thus the housing 20, is aligned in the user's ear.
In addition, the physical characteristics of the local environment may affect the degree to which a transmitted signal is reflected and/or attenuated as it passes through the environment. For example, ultrasonic energy may dissipate more rapidly through air or fabric (or other materials having a high attenuation coefficient in the frequency range of interest) than through water or human tissue. Consequently, a reflection of an emitted ultrasonic signal that passes through air or fabric may be more attenuated when received by the receiver than a reflection of an ultrasonic signal that passes through water or human tissue. Likewise, a reflection of a transmitted ultrasonic signal that passes through a dry interface between a given sensor and given tissue may be more attenuated when received by the receiver than a reflection of an ultrasonic signal that passes through an acoustically coupled interface between the sensor and the tissue. If the transducer is positioned to emit a signal into, for example, a user's tissue or another substance, the tissue or other substance may reflect the signal, and the reflected signal may be received by the sensor or a component thereof. Thus, a reflection received by the sensor may indicate when the user's tissue (e.g., the user's ear) is positioned in proximity to the sensor. Some of the disclosed proximity sensors can detect characteristics of the local environment through a solid (e.g., non-perforated) housing wall, providing an uninterrupted exterior surface and an aesthetically pleasing housing appearance. However, some housing walls may have a plurality of visually indistinguishable perforations (sometimes referred to as "microperforations").
Some earphones define a single sensitive area corresponding to one selected contact area. When the sensitive area is adjacent to or immersed in air or fabric, for example, the emitted ultrasonic signal may dissipate and no reflection may be received. The underlying proximity sensor may thus determine that the earbud is not being worn and may transmit a corresponding signal to the media device 10. However, when the sensitive area is adjacent to or in contact with, for example, a table or shelf, the underlying proximity sensor may receive a reflection of the emitted ultrasonic signal and determine (in this example, erroneously) that the earbud is being worn.
To avoid false indications that the earbud is being worn, some earphones incorporate multiple proximity sensors or transducers to define a corresponding plurality of sensitive areas on the earbud housing 20. The sensitive areas may be spaced apart from one another, for example, so that no two sensitive areas can contact a flat surface (e.g., a shelf or table) when the earbud housing 20 is resting on that surface. For example, if transducers are arranged to make the contact areas 41 and 43 sensitive, those two contact areas will not simultaneously contact a flat surface on which the earbud housing 20 rests. Thus, when the earbud housing 20 rests on a flat surface, these two areas will not indicate that the earbud is being worn. The underlying sensors may be configured to determine that the earbud housing is being worn only when two or more sensitive areas receive reflected ultrasonic signals; otherwise, the sensors may indicate that the earbud is not being worn.
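That spaced-apart, two-or-more rule reduces to a simple count over the sensitive areas, as in this sketch (the area names are hypothetical labels, not reference numerals from the patent figures).

```python
def is_worn(area_reflections, min_areas=2):
    """area_reflections maps each sensitive-area name to True when that area
    received a reflected ultrasonic signal. Because the areas are spaced so
    that a flat surface can contact at most one of them, requiring two or
    more reflections rejects the table/shelf false positive."""
    return sum(1 for received in area_reflections.values() if received) >= min_areas

# Resting on a table: at most one area reflects, so the earbud is not worn.
assert not is_worn({"area_a": True, "area_b": False})
# Seated in an ear: multiple areas reflect, so the earbud is worn.
assert is_worn({"area_a": True, "area_b": True})
```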
Fig. 8 illustrates a block diagram showing aspects of an audio appliance. In one aspect, the audio appliance 180 of fig. 8 may be an example of the device 102 (fig. 1) or the audio system 200 (fig. 2). Fig. 8 also illustrates an example of a suitable architecture for an audio appliance (e.g., media device 10 in fig. 4). The audio appliance 180 includes an audio acquisition module 181 and various aspects of a computing environment (described in more detail in connection with fig. 9) that can enable the appliance to communicate with audio accessories in a defined manner. For example, the illustrated appliance 180 includes a processing unit 184 and a memory 185 containing instructions executable by the processing unit to cause the audio appliance to, for example, receive output from an ultrasonic proximity sensor and/or respond to an indication of the environment in which the audio device 102 (fig. 1) or the audio accessories 20a, 20b (fig. 4) are located.
Still referring to fig. 8, the audio appliance typically includes a microphone transducer to convert an incident acoustic signal into a corresponding electrical output. As used herein, the terms "microphone" and "microphone transducer" are used interchangeably and refer to an acousto-electric transducer or sensor that converts an incident acoustic signal or sound into a corresponding electrical signal representative of the incident acoustic signal. Typically, the electrical signal output by the microphone is an analog signal.
Although a single microphone is depicted in fig. 8, the present disclosure contemplates the use of multiple microphones. For example, multiple microphones may be used to obtain multiple different acoustic signals emanating from a given acoustic scene, and the multiple versions may be processed independently and/or combined with one or more other versions before being further processed by the audio appliance 180.
As shown in fig. 8, the audio acquisition module 181 may include a microphone transducer 182 and a signal conditioner 183 to filter or otherwise condition the acquired representation of ambient sound. Some audio appliances have an analog microphone transducer and a preamplifier to condition the signal from the microphone.
As shown in fig. 8, an audio appliance 180 or other electronic device may include, in its most basic form, a processing unit 184, a memory 185, and a speaker or other electroacoustic transducer 187, together with associated circuitry (e.g., a signal bus, omitted from fig. 8 for clarity).
The audio appliance 180 schematically shown in fig. 8 also includes a communication connection 186 for establishing communication with another computing environment, the audio device 102 (fig. 1), or an audio accessory such as the accessories 20a, 20b (fig. 4). The memory 185 may store instructions that, when executed by the processing unit 184, cause circuitry in the audio appliance 180 to drive the electroacoustic transducer 187 to emit sound within a selected frequency bandwidth, or to send audio signals to the audio accessories 20a, 20b for playback via the communication connection 186. In addition, the memory 185 may store other instructions that, when executed by the processing unit, cause the audio appliance 180 to perform any of a variety of tasks similar to those of the general computing environment described more fully below in connection with fig. 9.
Fig. 9 illustrates a generalized example of a suitable computing environment 190 in which the methods, embodiments, techniques, and technologies described herein may be implemented, involving, for example, evaluating the local environment of a computing environment or an accessory thereof. The computing environment 190 is not intended to suggest any limitation as to the scope of use or functionality of the technology disclosed herein, as each technology may be implemented in different general-purpose or special-purpose computing environments, including within audio appliances. For example, each of the disclosed technologies may be implemented with other computer system configurations, including wearable devices and/or handsets (e.g., mobile communication devices such as, for example, those available from Apple Inc. of Cupertino, CA, including HOMEPOD™ devices), multiprocessor systems, microprocessor-based or programmable consumer electronics, embedded platforms, network computers, minicomputers, mainframe computers, smartphones, tablet computers, data centers, audio appliances, and the like. Each of the disclosed technologies may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communication connection or network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
The computing environment 190 includes at least one central processing unit 191 and memory 192. In fig. 9, this most basic configuration 193 is included within the dashed line. The central processing unit 191 executes computer-executable instructions and may be a real or a virtual processor. Although the processing unit 191 is represented by a single functional block, in a multi-processing system, or with a multi-core central processing unit, multiple processing units can execute computer-executable instructions (e.g., threads) concurrently to increase processing speed.
The processing unit or processor may comprise an Application Specific Integrated Circuit (ASIC), a general purpose microprocessor, a Field Programmable Gate Array (FPGA), a digital signal controller, or a set of hardware logic structures (e.g., filters, arithmetic logic units, and dedicated state machines) arranged to process instructions.
The memory 192 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 192 stores instructions for the software 198a that, when executed by a processor, may, for example, implement one or more of the techniques described herein. The disclosed techniques may be embodied in software, firmware, or hardware (e.g., ASIC).
The computing environment may have additional features. For example, computing environment 190 includes storage 194, one or more input devices 195, one or more output devices 196, and one or more communication links 197. An interconnection mechanism (not shown) such as a bus, controller, or network may interconnect the components of the computing environment 190. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 190 and coordinates activities of the components of the computing environment 190.
Storage 194 may be removable or non-removable and may include alternative forms of machine-readable media. Generally, machine-readable media include magnetic disks, magnetic tapes or cassettes, non-volatile solid state memories, CD-ROMs, CD-RWs, DVDs, optical data storage devices, and carrier waves, or any other machine-readable medium that can be used to store information and that can be accessed within the computing environment 190. The storage 194 may store instructions of the software 198b that, when executed by a processor, may, for example, implement the techniques described herein.
Storage 194 may also be distributed, for example, over a network, in order to store and execute software instructions in a distributed fashion. In other aspects, for example, where storage 194, or a portion thereof, is embodied as an arrangement of hardwired logic structures, some (or all) of these operations may be performed by specific hardware components that contain hardwired logic structures. The storage 194 may be further distributed between a machine-readable medium and an arrangement of selected hardwired logic structures. The processing operations disclosed herein may be performed by any combination of programmed data processing components and hardwired circuitry or logic components.
The input device 195 is any one or more of the following: a touch input device such as a keyboard, keypad, mouse, pen, touch screen, touch pad, or trackball; a voice input device such as one or more microphone transducers, voice recognition technology, and a processor, as well as combinations thereof; a scanning device; or another device that provides input to the computing environment 190. For audio, the input device 195 may include a microphone or other transducer (e.g., a sound card or similar device that accepts audio input in analog or digital form), or a computer readable medium reader that provides audio samples and/or machine readable transcripts thereof to the computing environment 190.
The one or more output devices 196 may be any one or more of a display, a printer, a speaker transducer, a DVD writer, a signal transmitter, or another device that provides output from the computing environment 190, such as the audio accessories 20a, 20b (fig. 4). An output device may include or be implemented through a communication connection 197.
One or more communication connections 197 enable communication with another computing entity or accessory via or through a communication medium (e.g., a connection network). The communication connection may include a transmitter and a receiver adapted to communicate over a Local Area Network (LAN), a Wide Area Network (WAN) connection, or both. LAN and WAN connections may be facilitated by a wired or wireless connection. If the LAN or WAN connection is wireless, the communication connection may include one or more antennas or antenna arrays. The communication medium conveys information such as computer-executable instructions, compressed graphics information, processed signal information (including processed audio signals), or other data in a modulated data signal. Examples of communication media for so-called wired connections include fiber optic cables and copper wire. A communication medium for wireless communication may include electromagnetic radiation within one or more selected frequency bands.
A machine-readable medium is any available medium that can be accessed within computing environment 190. By way of example, and not limitation, within the computing environment 190, machine-readable media include memory 192, storage 194, communication media (not shown), and any combination of the preceding. The tangible machine-readable (or computer-readable) medium does not include a transitory signal.
As noted above, some of the disclosed principles may be embodied in the storage 194. Such storage may include a tangible, non-transitory machine-readable medium (such as a microelectronic memory) having instructions stored thereon or therein. The instructions may program one or more data processing components (generally referred to herein as "processors") to perform one or more of the processing operations described herein, including estimating, calculating, computing, measuring, adjusting, sensing, filtering, correlating, and deciding, as well as, for example, adding, subtracting, inverting, and comparing. In some aspects, some or all of these operations (of a machine process) may be performed by specific electronic hardware components that contain hardwired logic components (e.g., dedicated digital filter blocks). Alternatively, those operations may be performed by any combination of programmed data processing components and fixed or hardwired circuitry components.
The above examples generally relate to ultrasonic proximity sensors and related systems and methods. The previous description is provided to enable any person skilled in the art to make or use the disclosed principles. Embodiments other than those detailed above are contemplated based on the principles disclosed herein, together with any attendant variations in the configurations of the respective devices described herein, without departing from the spirit or scope of this disclosure. Various modifications to the examples described herein will be readily apparent to those skilled in the art.
For example, an earbud may also be equipped with various other sensors, which may operate independently or in conjunction with the proximity sensors described above. For example, in some aspects, these other sensors may take the form of orientation sensors that help the earbud determine in which ear it is positioned, so that operation of the earbud can be adjusted based on that determination. In some aspects, the orientation sensor may be a conventional inertial sensor, while in other aspects a reading from another sensor, such as a proximity sensor or a temperature sensor, may be used for the orientation determination.
Earbuds with the above-described sensors may also include additional sensors, such as a microphone or a microphone array. In some aspects, at least two microphones from the microphone array may be arranged along a line directed toward, or at least near, the user's mouth. Using information received from the one or more orientation sensors, a controller within the earbud may determine which microphones in the microphone array should be activated to achieve that configuration. By activating only those microphones arranged along vectors pointing at or near the mouth, ambient audio signals that do not emanate from near the mouth can be ignored by applying spatial filtering.
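As an illustrative sketch of that selection logic (the geometry, names, and alignment threshold are assumptions; the patent does not specify an algorithm):

```python
import numpy as np

def select_mic_pair(mic_positions, mouth_dir, cos_threshold=0.9):
    """mic_positions: dict of name -> 3-D position; mouth_dir: vector toward
    the user's mouth. Returns the pair of microphones whose axis best aligns
    with the mouth direction, or None if no pair clears the threshold."""
    mouth = np.asarray(mouth_dir, dtype=float)
    mouth /= np.linalg.norm(mouth)
    best, best_cos = None, cos_threshold
    names = list(mic_positions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            axis = np.asarray(mic_positions[b], float) - np.asarray(mic_positions[a], float)
            cos = abs(float(np.dot(axis / np.linalg.norm(axis), mouth)))
            if cos >= best_cos:  # keep the best-aligned pair so far
                best, best_cos = (a, b), cos
    return best

# Only the pair whose axis points at (or near) the mouth is activated.
mics = {"m0": (0, 0, 0), "m1": (0, 0, -1), "m2": (1, 0, 0)}
print(select_mic_pair(mics, mouth_dir=(0, 0, -1)))  # -> ('m0', 'm1')
```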
Direction and other related references (e.g., upward, downward, top, bottom, left, right, rearward, forward, etc.) may be used to aid in the discussion of the figures and principles herein, and are not intended to be limiting. For example, certain terms such as "upward," "downward," "upper," "lower," "horizontal," "vertical," "left," "right," and the like may be used. These terms are used, where applicable, to provide some definite description in handling relative relationships, particularly with respect to the aspects shown. However, such terms are not intended to imply absolute relationships, positions, and/or orientations. For example, with respect to an object, an "upper" surface may be changed to a "lower" surface simply by flipping the object. Nevertheless, it is the same surface and the object remains unchanged. As used herein, "and/or" means "and" or "and" or ". Furthermore, all patent and non-patent documents cited herein are hereby incorporated by reference in their entirety for all purposes.
Also, those of ordinary skill in the art will appreciate that the exemplary aspects disclosed herein can be adapted to various configurations and/or uses without departing from the disclosed principles. A wide variety of ultrasonic proximity sensors and associated methods and systems may be provided using the principles disclosed herein. For example, the principles described above in connection with any particular example can be combined with the principles described in connection with another example described herein. Accordingly, all structural and functional equivalents to the features and method steps of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the principles described and the features claimed herein. Accordingly, neither the claims nor this detailed description shall be construed in a limiting sense, and following a review of this disclosure, those of ordinary skill in the art will appreciate the wide variety of ultrasonic proximity sensors and associated methods and systems that can be devised under the disclosed and claimed concepts.
Furthermore, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the claims. No claim feature is to be construed under the provisions of 35 U.S.C. § 112(f) unless the feature is expressly recited using the phrase "means for" or "step for".
The appended claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the claim language, wherein reference to a feature in the singular (such as by use of the article "a" or "an") is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". Furthermore, in view of the many possible embodiments to which the disclosed principles may be applied, the applicant reserves the right to claim any and all combinations of the features and technologies described herein as understood by a person of ordinary skill in the art, including, for example, all that comes within the scope and spirit of the following claims.

Claims (20)

1. A method, comprising:
receiving a first audio signal via a first sensor of a wearable audio device;
detecting a potential state change of the wearable audio device based on the first audio signal;
transmitting, by the wearable audio device, a second audio signal in response to detecting the potential state change;
receiving, via the first sensor of the wearable audio device, a third audio signal comprising a sensed version of the transmitted second audio signal; and
determining, by the wearable audio device, a current state of the wearable audio device based on the third audio signal.
2. The method of claim 1, further comprising:
transmitting, by the wearable audio device, a fourth audio signal,
wherein the fourth audio signal is inaudible and the first audio signal comprises a sensed version of the transmitted fourth audio signal.
3. The method of claim 2, further comprising:
detecting, by a motion sensor of the wearable audio device, that a motion of the wearable audio device is above a threshold;
wherein the fourth audio signal is transmitted in response to the detecting that the motion is above the threshold.
4. The method of claim 1, wherein the first audio signal comprises ambient noise surrounding the wearable audio device, and the method further comprises:
receiving a fifth audio signal comprising the ambient noise at a second sensor of the wearable audio device;
wherein the detecting the potential state change is based on a comparison between the first audio signal and the fifth audio signal.
5. The method of claim 1, wherein the first audio signal comprises music, and the method further comprises:
receiving, at the wearable audio device, a digital version of the music from a music source; and
transmitting, by the wearable audio device, a fourth audio signal based on the digital version of the music.
6. The method of claim 1, wherein the detecting the potential state change is further based on one or more of: ambient noise sensed at the wearable audio device, a received signal strength indicator (RSSI) of a Bluetooth signal, and an output of an optical sensor of the wearable audio device.
7. The method of claim 1, wherein the first audio signal has a first temporal characteristic that is different from a second temporal characteristic of the second audio signal.
8. The method of claim 7, wherein the first temporal characteristic comprises a substantially continuous tone at one or more frequencies that are constant over time during the substantially continuous tone, and wherein the second temporal characteristic comprises a set of discontinuous tones comprising at least two tones generated at different frequencies at different times.
9. The method of claim 1, wherein detecting the potential state change comprises detecting movement of the wearable audio device toward or away from an ear of a user of the wearable audio device based on an envelope of a portion of the first audio signal.
10. The method of claim 9, further comprising:
the wearable audio device transmits a fourth audio signal,
wherein the portion of the first audio signal comprises a version of the fourth audio signal reflected from a portion of the ear of the user of the wearable audio device.
11. The method of claim 9, wherein detecting the movement of the wearable audio device toward or away from an ear of a user of the wearable audio device comprises detecting the movement of the wearable audio device into the ear of the user, and wherein determining the current state of the wearable audio device based on the third audio signal comprises determining that the wearable audio device is in a worn state based on a reflected portion of the third audio signal corresponding to a version of the second audio signal reflected from one or more features of the ear of the user.
12. The method of claim 9, wherein detecting the movement of the wearable audio device toward or away from an ear of a user of the wearable audio device comprises detecting a removal of the wearable audio device from the ear of the user, and wherein determining the current state of the wearable audio device based on the third audio signal comprises determining that the wearable audio device is in an unworn state based on an unreflected portion of the second audio signal corresponding to a version of the second audio signal that is not reflected from one or more features of the ear of the user.
13. The method of claim 1, wherein determining the current state of the wearable audio device based on the third audio signal comprises:
determining, based on the third audio signal, that a new state of the wearable audio device cannot be determined from the third audio signal; and
maintaining the current state of the wearable audio device as a previous state of the wearable audio device.
14. A system, comprising:
a processor; and
a memory storing instructions that, when executed by the processor, cause the system to:
receiving a first audio signal via a first sensor of a wearable audio device;
detecting a potential state change of the wearable audio device based on the first audio signal;
transmitting, by the wearable audio device and in response to detecting the potential state change, a second audio signal;
receiving, via the first sensor of the wearable audio device, a third audio signal comprising a sensed version of the transmitted second audio signal; and
determining, by the wearable audio device, a current state of the wearable audio device based on the third audio signal.
15. The system of claim 14, wherein the instructions further cause the system to:
transmitting, by the wearable audio device, a fourth audio signal;
wherein the fourth audio signal is inaudible and the first audio signal comprises a sensed version of the transmitted fourth audio signal.
16. The system of claim 15, wherein the instructions further cause the system to:
detecting, by a motion sensor of the wearable audio device, that a motion of the wearable audio device is above a threshold;
wherein the fourth audio signal is transmitted in response to the detecting that the motion is above the threshold.
17. The system of claim 14, wherein the first audio signal comprises ambient noise surrounding the wearable audio device, and the instructions further cause the system to:
receiving a fifth audio signal comprising the ambient noise at a second sensor of the wearable audio device;
wherein the detecting the potential state change is based on a comparison between the first audio signal and the fifth audio signal.
18. A non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to:
receiving a first audio signal via a first sensor of a wearable audio device;
detecting a potential state change of the wearable audio device based on the first audio signal;
transmitting, by the wearable audio device, a second audio signal in response to detecting the potential state change;
receiving, via the first sensor of the wearable audio device, a third audio signal comprising a sensed version of the transmitted second audio signal; and
determining, by the wearable audio device, a current state of the wearable audio device based on the third audio signal.
19. The medium of claim 18, wherein the instructions further cause the processor to:
transmitting, by the wearable audio device, a fourth audio signal;
wherein the fourth audio signal is inaudible and the first audio signal comprises a sensed version of the transmitted fourth audio signal.
20. The medium of claim 19, wherein the instructions further cause the processor to:
detecting, by a motion sensor of the wearable audio device, that a motion of the wearable audio device is above a threshold;
wherein the fourth audio signal is transmitted in response to the detecting that the motion is above the threshold.
CN202311238760.1A 2022-09-23 2023-09-22 State detection for wearable audio devices Pending CN117768829A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/409,653 2022-09-23
US18/239,718 2023-08-29
US18/239,718 US20240107246A1 (en) 2022-09-23 2023-08-29 State detection for wearable audio devices

Publications (1)

Publication Number Publication Date
CN117768829A 2024-03-26

Family

ID=90324209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311238760.1A Pending CN117768829A (en) 2022-09-23 2023-09-22 State detection for wearable audio devices

Country Status (1)

Country Link
CN (1) CN117768829A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination