EP3445066A1 - Cartilage conduction audio system for eyewear devices


Info

Publication number
EP3445066A1
EP3445066A1
Authority
EP
European Patent Office
Prior art keywords
transducer
user
pressure wave
ear
vibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP18189104.5A
Other languages
German (de)
French (fr)
Other versions
EP3445066B1 (en)
Inventor
Antonio John Miller
Ravish MEHRA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meta Platforms Technologies LLC
Original Assignee
Facebook Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/680,836 (granted as US10231046B1)
Application filed by Facebook Technologies LLC filed Critical Facebook Technologies LLC
Publication of EP3445066A1
Application granted
Publication of EP3445066B1
Legal status: Active
Anticipated expiration

Classifications

    • H04R 1/26: Spatial arrangements of separate transducers responsive to two or more frequency ranges
    • H04R 1/028: Casings, cabinets, supports or mountings therefor, associated with devices performing functions other than acoustics (e.g., electric candles)
    • H04R 1/1091: Earpieces, earphones, monophonic headphones; details not provided for in groups H04R 1/1008 - H04R 1/1083
    • H04R 2460/13: Hearing devices using bone conduction transducers
    • H04R 29/001: Monitoring or testing arrangements for loudspeakers
    • H04R 7/12: Non-planar diaphragms or cones
    • H04S 7/301: Automatic calibration of a stereophonic sound system, e.g., with test microphone

Definitions

  • a non-transitory computer-readable storage medium may store executable computer program instructions comprising instructions for performing a method according to the invention or any of the above mentioned embodiments.
  • one or more computer-readable non-transitory storage media may embody software that is operable when executed to perform in a system according to the invention or any of the above mentioned embodiments.
  • a computer-implemented method may use a system according to the invention or any of the above mentioned embodiments.
  • a computer program product, preferably comprising a computer-readable non-transitory storage medium, may be used in a system according to the invention or any of the above mentioned embodiments.
  • a cartilage conduction audio system provides sound to an ear of a user while keeping the ear canal of the user unobstructed.
  • the audio system includes a transducer coupled to a back of the ear of the user.
  • the transducer generates sound by vibrating the back of the ear (e.g., the auricle, also referred to as the pinna) of the user, which vibrates the cartilage in the ear of the user to generate acoustic waves corresponding to received audio content.
  • advantages of an audio system that uses cartilage conduction over one that only uses bone conduction include, e.g., reducing crosstalk between the ears, reducing the size and power consumption of the audio system, and improving ergonomics.
  • An audio system that uses cartilage conduction uses less coupling force (e.g., less static constant force on the skin) to produce a similar hearing sensation in comparison to an audio system that uses bone conduction, resulting in improved comfort, which is particularly desirable for a wearable device that is worn all day.
  • FIG. 1 is an example illustrating an eyewear device 100 including a cartilage conduction audio system (audio system), in accordance with an embodiment.
  • the eyewear device 100 presents media to a user.
  • the eyewear device 100 may be a head mounted display (HMD). Examples of media presented by the eyewear device 100 include one or more images, video, audio, or some combination thereof.
  • the eyewear device 100 may include, among other components, a frame 105, a lens 110, a transducer assembly 120, an acoustic sensor 125, and a controller 130.
  • the eyewear device 100 may also optionally include a sensor device 115.
  • the eyewear device 100 may correct or enhance the vision of a user, protect the eye of a user, or provide images to a user.
  • the eyewear device 100 may be eyeglasses which correct for defects in a user's eyesight.
  • the eyewear device 100 may be sunglasses which protect a user's eye from the sun.
  • the eyewear device 100 may be safety glasses which protect a user's eye from impact.
  • the eyewear device 100 may be a night vision device or infrared goggles to enhance a user's vision at night.
  • the eyewear device 100 may be a head mounted display that produces VR, AR, or MR content for the user.
  • the eyewear device 100 may not include a lens 110 and may be a frame 105 with an audio system that provides audio (e.g., music, radio, podcasts) to a user.
  • the frame 105 includes a front part that holds the lens 110 and end pieces to attach to the user.
  • the front part of the frame 105 bridges the top of a nose of the user.
  • the end pieces (e.g., temples) are portions of the frame 105 that attach to the user.
  • the length of the end piece may be adjustable (e.g., adjustable temple length) to fit different users.
  • the end piece may also include a portion that curls behind the ear of the user (e.g., temple tip, ear piece).
  • the lens 110 provides or transmits light to a user wearing the eyewear device 100.
  • the lens 110 may be a prescription lens (e.g., single vision, bifocal, trifocal, or progressive) to help correct for defects in a user's eyesight.
  • the prescription lens transmits ambient light to the user wearing the eyewear device 100.
  • the transmitted ambient light may be altered by the prescription lens to correct for defects in the user's eyesight.
  • the lens 110 may be a polarized lens or a tinted lens to protect the user's eyes from the sun.
  • the lens 110 may be one or more waveguides as part of a waveguide display in which image light is coupled through an end or edge of the waveguide to the eye of the user.
  • the lens 110 may include an electronic display for providing image light and may also include an optics block for magnifying image light from the electronic display. Additional detail regarding the lens 110 can be found in the detailed description of FIG. 5 .
  • the lens 110 is held by a front part of the frame 105 of the eyewear device 100.
  • the sensor device 115 estimates a current position of the eyewear device 100 relative to an initial position of the eyewear device 100.
  • the sensor device 115 may be located on a portion of the frame 105 of the eyewear device 100.
  • the sensor device 115 includes a position sensor and an inertial measurement unit (IMU). Additional details about the sensor device 115 can be found in the detailed description of FIG. 5.
  • the audio system of the eyewear device 100 includes the transducer assembly 120, the acoustic sensor 125, and the controller 130.
  • the audio system provides audio content to a user by vibrating the auricle of the ear of the user to produce an acoustic pressure wave.
  • the audio system also uses feedback to create a similar audio experience across different users. Additional detail regarding the audio system can be found in the detailed description of FIG. 3 .
  • the transducer assembly 120 produces sound by vibrating the cartilage in the ear of the user.
  • the transducer assembly 120 is coupled to an end piece of the frame 105 and is configured to be coupled to the back of an auricle of the ear of the user.
  • the auricle is a portion of the outer ear that projects out of a head of the user.
  • the transducer assembly 120 receives vibration instructions from the controller 130. Vibration instructions may include a content signal, a control signal, and a gain signal.
  • the content signal may be based on audio content for presentation to the user.
  • the control signal may be used to enable or disable the transducer assembly 120 or one or more transducers of the transducer assembly.
  • the gain signal may be used to amplify the content signal.
  • the transducer assembly 120 may include one or more transducers to cover different parts of a frequency range.
  • a piezoelectric transducer may be used to cover a first part of a frequency range and a moving coil transducer may be used to cover a second part of a frequency range. Additional detail regarding the transducer assembly 120 can be found in the detailed description of FIG. 3 .
  • the acoustic sensor 125 detects an acoustic pressure wave at an entrance of an ear of a user.
  • the acoustic sensor 125 is coupled to an end piece of the frame 105.
  • the acoustic sensor 125 as shown in FIG. 1 is a microphone which may be positioned at the entrance of the user's ear. In this embodiment, the microphone may directly measure the acoustic pressure wave at the entrance of the ear of the user.
  • the acoustic sensor 125 is a vibration sensor that is configured to be coupled to the back of the pinna of the user. The vibration sensor may indirectly measure the acoustic pressure wave at the entrance of the ear.
  • the vibration sensor may measure a vibration that is a reflection of the acoustic pressure wave at the entrance of the ear, and/or measure a vibration created by the transducer assembly on the auricle of the ear of the user, which may be used to estimate the acoustic pressure wave at the entrance of the ear.
  • a mapping between acoustic pressure generated at the entrance to the ear canal and a vibration level generated on the pinna is an experimentally determined quantity that is measured on a representative sample of users and stored.
  • the vibration sensor can be an accelerometer or a piezoelectric sensor.
  • An accelerometer may be a piezoelectric accelerometer or a capacitive accelerometer.
  • the capacitive accelerometer senses change in capacitance between structures which can be moved by an accelerative force.
  • the acoustic sensor 125 is removed from the eyewear device 100 after calibration. Additional detail regarding the acoustic sensor 125 can be found in the detailed description of FIG. 3 .
  • the controller 130 provides vibration instructions to the transducer assembly 120, receives information from the acoustic sensor 125 regarding the produced sound, and updates the vibration instructions based on the received information. Vibration instructions instruct the transducer assembly 120 how to produce vibrations.
  • vibration instructions may include a content signal (e.g., electrical signal applied to the transducer assembly 120 to produce a vibration), a control signal to enable or disable the transducer assembly 120, and a gain signal to scale the content signal (e.g., increase or decrease the vibrations produced by the transducer assembly 120).
  • the vibration instructions may be generated by the controller 130.
  • the controller 130 may receive audio content (e.g., music, calibration signal) from a console for presentation to a user and generate vibration instructions based on the received audio content.
  • the controller 130 receives information from the acoustic sensor 125 that describes the produced sound at an ear of the user.
  • the acoustic sensor 125 is a vibration sensor that measures a vibration of a pinna of a user and the controller 130 applies a previously stored frequency dependent linear mapping of pressure to vibration to determine the acoustic pressure wave at the entrance of the ear based on the received detected vibration.
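  • As an illustration of such a frequency dependent linear mapping, the sketch below estimates the pressure wave at the ear-canal entrance from a vibration-sensor signal. The sample rate, mapping frequencies, and gain values are assumptions for illustration; a real mapping would be measured on a representative sample of users and stored.

```python
# Hedged sketch: apply a stored frequency-dependent linear mapping from
# pinna vibration to ear-canal pressure. The table below is made up; a
# real mapping is experimentally determined and stored per the patent.
import numpy as np

FS = 48_000  # assumed sample rate in Hz

# Stored mapping: pressure per unit vibration at a few measured frequencies.
map_freqs = np.array([100.0, 1_000.0, 5_000.0, 10_000.0, 20_000.0])  # Hz
map_gains = np.array([2.0, 1.5, 1.0, 0.7, 0.4])  # illustrative Pa per (m/s^2)

def vibration_to_pressure(vibration: np.ndarray) -> np.ndarray:
    """Estimate the pressure wave at the ear-canal entrance from a
    vibration-sensor signal by applying the mapping per frequency bin."""
    spectrum = np.fft.rfft(vibration)
    freqs = np.fft.rfftfreq(len(vibration), d=1.0 / FS)
    gains = np.interp(freqs, map_freqs, map_gains)  # frequency-dependent gain
    return np.fft.irfft(spectrum * gains, n=len(vibration))

vib = np.random.randn(FS)               # stand-in vibration-sensor signal
pressure_estimate = vibration_to_pressure(vib)
```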
  • the controller 130 uses the received information as feedback to compare the produced sound to a target sound (e.g., audio content) and adjusts the vibration instructions to make the produced sound closer to the target sound.
  • the controller 130 is embedded into the frame 105 of the eyewear device 100. In other embodiments, the controller 130 may be located in a different location. For example, the controller 130 may be part of the transducer assembly or located external to the eyewear device 100. Additional detail regarding the controller 130 can be found in the detailed description of FIG. 3 .
  • FIG. 2A is an example illustrating a portion of the eyewear device 200 including a transducer assembly 220 and an acoustic sensor 225 that is a microphone positioned on an ear of the user, in accordance with an embodiment.
  • the eyewear device 200, transducer assembly 220, and acoustic sensor 225 are embodiments of the eyewear device 100, transducer assembly 120, and the acoustic sensor 125.
  • the transducer assembly 220 is coupled to a back of an ear of a user.
  • the transducer assembly vibrates the back of the ear of a user to generate a pressure wave based on vibration instructions.
  • the acoustic sensor 225 is a microphone positioned at an entrance of the ear of the user to detect the pressure wave produced by the transducer assembly 220.
  • the audio system compares the detected pressure wave (e.g. produced sound) with a target pressure wave (e.g. audio content) and adjusts vibration instructions to make a detected pressure wave more similar to a target pressure wave.
  • FIG. 2B is an example illustrating a portion of the eyewear device 250 including a transducer assembly 260 and acoustic sensor 275 that is a piezoelectric transducer, in accordance with an embodiment.
  • the eyewear device 250, transducer assembly 260, and acoustic sensor 275 are embodiments of the eyewear device 100, transducer assembly 120, and the acoustic sensor 125.
  • the transducer assembly 260 is a transducer located around the end piece of the frame (e.g., bottom of a behind-the-ear ear cup) that is to be coupled to the back of the ear of a user.
  • the transducer assembly 260 is shown to be a circular voice coil (e.g., moving coil) transducer.
  • the acoustic sensor 275 is a piezoelectric transducer that is to be coupled to the back of the ear of a user.
  • the piezoelectric transducer may be a stacked piezoelectric transducer and may be a few millimeters in size (e.g., 9 mm).
  • FIG. 3 is a block diagram of an audio system 300, in accordance with an embodiment.
  • the audio system in FIG. 1 is an embodiment of the audio system 300.
  • the audio system 300 includes a transducer assembly 310, an acoustic sensor 320, and a controller 330.
  • the transducer assembly 310 vibrates a cartilage of a user's ear in accordance with the vibration instructions (e.g., received from the controller 330).
  • the transducer assembly 310 is coupled to a first portion of a back of an auricle of an ear of a user.
  • the transducer assembly 310 includes at least one transducer to vibrate the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions.
  • the transducer may be a single piezoelectric transducer.
  • a piezoelectric transducer can generate frequencies up to 20 kHz using a range of voltages around +/-100V.
  • the range of voltages may include lower voltages as well (e.g., +/- 10V).
  • the piezoelectric transducer may be a stacked piezoelectric actuator.
  • the stacked piezoelectric actuator includes multiple piezoelectric elements that are stacked (e.g. mechanically connected in series).
  • the stacked piezoelectric actuator may use a lower range of voltages because the displacement of a stacked piezoelectric actuator is approximately the displacement of a single piezoelectric element multiplied by the number of elements in the stack.
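  • The sketch below illustrates this relationship. It assumes a 33-mode stack in which each layer sees the full drive voltage, so the free stroke is roughly the per-layer displacement multiplied by the number of layers; the d33 constant is a typical figure for PZT, not a value from the patent.

```python
# Illustrative estimate of stacked piezoelectric actuator displacement.
# Assumes total stroke ~ n_layers * d33 * V (each layer sees the full
# drive voltage). The d33 value is representative of PZT, not from the patent.

D33_PZT = 500e-12  # m/V, representative piezoelectric charge constant for PZT

def stack_displacement(n_layers: int, voltage: float, d33: float = D33_PZT) -> float:
    """Approximate free stroke of a stacked piezo actuator in meters."""
    return n_layers * d33 * voltage

# A 100-layer stack driven at 10 V gives roughly the same stroke as a
# single element driven at 1000 V, which is why stacking lowers the
# required voltage range.
print(stack_displacement(n_layers=100, voltage=10.0))   # ~5.0e-07 m
print(stack_displacement(n_layers=1, voltage=1000.0))   # ~5.0e-07 m
```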
  • a piezoelectric transducer is made of a piezoelectric material that can generate a strain (e.g., deformation in the material) in the presence of an electric field.
  • the piezoelectric material may be a polymer (e.g., polyvinyl chloride (PVC), polyvinylidene fluoride (PVDF)), a polymer-based composite, a ceramic, or a crystal (e.g., quartz (silicon dioxide, SiO2), lead zirconate titanate (PZT)).
  • the piezoelectric transducer may be coupled to a material (e.g., silicone) that attaches well to the back of an ear of a user.
  • the transducer assembly 310 maintains good surface contact with the back of the user's ear and maintains a steady amount of application force (e.g., 1 Newton) to the user's ear.
  • the transducer assembly 310 is configured to generate vibrations over a range of frequencies and includes a first transducer and a second transducer.
  • the first transducer is configured to provide a first portion of the frequency range (e.g., higher range up to 20 kHz).
  • the first transducer may be, e.g., a piezoelectric transducer.
  • the second transducer is configured to provide a second portion of the frequency range (e.g., lower range around 20 Hz).
  • the second transducer may be a piezoelectric transducer or may be a different type of transducer such as a moving coil transducer.
  • a typical moving coil transducer includes a coil of wire and a permanent magnet to produce a permanent magnetic field.
  • the second transducer may be made of a more rigid material than the first transducer.
  • the second transducer may be coupled to a second portion different than the first portion of the back of the ear of the user. Alternatively, the second transducer may be in contact with the skull of the user.
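  • A minimal sketch of how audio might be split between such a pair of transducers is shown below, assuming a fourth-order Butterworth crossover at 1 kHz; the crossover frequency and filter order are illustrative choices, not specified by the patent.

```python
# Sketch of a two-way crossover: low band to a moving coil transducer,
# high band to a piezoelectric transducer. Crossover point is an assumption.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000           # sample rate in Hz
CROSSOVER_HZ = 1_000  # assumed split point between the two transducers

low_sos = butter(4, CROSSOVER_HZ, btype="lowpass", fs=FS, output="sos")
high_sos = butter(4, CROSSOVER_HZ, btype="highpass", fs=FS, output="sos")

def split_bands(audio: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (low_band, high_band) drive signals for the two transducers."""
    return sosfilt(low_sos, audio), sosfilt(high_sos, audio)

audio = np.random.randn(FS)            # one second of test signal
to_moving_coil, to_piezo = split_bands(audio)
```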
  • the acoustic sensor 320 provides information regarding the produced sound to the controller 330.
  • the acoustic sensor 320 detects an acoustic pressure wave at an entrance of an ear of a user.
  • the acoustic sensor 320 is a microphone positioned at an entrance of an ear of a user.
  • a microphone is a transducer that converts pressure into an electrical signal.
  • the frequency response of the microphone may be relatively flat in some portions of a frequency range and may be linear in other portions of a frequency range.
  • the microphone may be configured to receive a gain signal to scale a detected signal from the microphone based on the vibration instructions provided to the transducer assembly 310. For example, the gain may be adjusted based on the vibration instructions to avoid clipping of the detected signal or for improving a signal to noise ratio in the detected signal.
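  • A minimal sketch of this gain staging is shown below: the gain is chosen so the peak of the detected block sits just below full scale, avoiding clipping while keeping the signal large relative to the noise floor. The headroom target is an illustrative assumption.

```python
# Sketch of gain staging for the sensing microphone. The 0.9 full-scale
# headroom target is illustrative, not a value from the patent.
import numpy as np

def adjust_gain(detected: np.ndarray, target_peak: float = 0.9) -> float:
    """Return a gain placing the block's peak at target_peak of full scale."""
    peak = np.max(np.abs(detected))
    return target_peak / peak if peak > 0 else 1.0

block = 0.05 * np.random.randn(1024)   # a quiet detected block
gain = adjust_gain(block)
scaled = gain * block                  # peak now near 0.9 of full scale
```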
  • the acoustic sensor 320 may be a vibration sensor.
  • the vibration sensor is coupled to a portion of the ear.
  • the vibration sensor and the transducer assembly 310 couple to different portions of the ear.
  • the vibration sensor is similar to the transducers used in the transducer assembly except the signal is flowing in reverse. Instead of an electrical signal producing a mechanical vibration in a transducer, a mechanical vibration is generating an electrical signal in the vibration sensor.
  • a vibration sensor may be made of piezoelectric material that can generate an electrical signal when the piezoelectric material is deformed.
  • the vibration sensor maintains good surface contact with the back of the user's ear and maintains a steady amount of application force (e.g., 1 Newton) to the user's ear.
  • the vibration sensor may be an accelerometer.
  • the vibration sensor may be integrated in an inertial measurement unit (IMU) integrated circuit (IC). The IMU is further described in relation to FIG. 5.
  • the controller 330 controls components of the audio system 300.
  • the controller 330 generates vibration instructions to instruct the transducer assembly 310 how to produce vibrations.
  • vibration instructions may include a content signal (e.g., electrical signal applied to the transducer assembly 310 to produce a vibration), a control signal to enable or disable the transducer assembly 310, and a gain signal to scale the content signal (e.g., increase or decrease the vibrations produced by the transducer assembly 310).
  • the controller 330 generates the content signal of the vibration instructions based on audio content and a frequency response model.
  • a frequency response model describes the response of a system to inputs at certain frequencies and may indicate how an output is shifted in amplitude and phase based on the input.
  • the controller 330 may generate a content signal (e.g., input signal) of the vibration instructions with the audio content (e.g., target output) and the frequency response model (e.g., relationship of the input to the output).
  • the controller 330 may generate the content signal of the vibration instructions by applying an inverse of the frequency response to the audio content.
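  • The sketch below illustrates one way the content signal could be generated by applying a regularized inverse of the frequency response model to the audio content; the function names, the rfft-domain representation, and the regularization constant are assumptions for illustration.

```python
# Sketch: generate the content signal by applying the inverse of the
# frequency response model to the target audio content. The Tikhonov
# term eps keeps the inverse well behaved where the model response is
# small; all names and values here are illustrative.
import numpy as np

def generate_content_signal(audio: np.ndarray,
                            model_response: np.ndarray,
                            eps: float = 1e-3) -> np.ndarray:
    """Pre-equalize audio so that, after the auricle's frequency response,
    the produced pressure wave approximates the target audio content.

    model_response: complex frequency response model sampled on the
    rfft bins of `audio` (amplitude and phase per frequency).
    """
    spectrum = np.fft.rfft(audio)
    # Regularized inverse: conj(H) / (|H|^2 + eps) instead of 1 / H.
    inverse = np.conj(model_response) / (np.abs(model_response) ** 2 + eps)
    return np.fft.irfft(spectrum * inverse, n=len(audio))

n = 4_096
audio = np.random.randn(n)                       # target audio content
model = np.ones(n // 2 + 1, dtype=complex)       # flat placeholder model
content_signal = generate_content_signal(audio, model)
```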
  • the controller 330 receives feedback from an acoustic sensor 320.
  • the acoustic sensor 320 provides information about the sound signal (e.g., acoustic pressure wave) produced by the transducer assembly 310.
  • the controller 330 may compare the detected acoustic pressure wave with a target acoustic pressure wave based on audio content provided to the user.
  • the controller 330 can then compute an inverse function to apply to the detected acoustic wave such that the detected acoustic pressure wave appears the same as the target acoustic pressure wave.
  • the controller 330 can adjust the frequency response model of the audio system using the computed inverse function specific to each user.
  • the adjustment of the frequency response model may be performed while the user is listening to audio content.
  • the controller 330 can then generate updated vibration instructions using the adjusted frequency response model.
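  • A hedged sketch of this per-user adjustment is shown below: the detected and target pressure waves are compared in the frequency domain, an inverse (correction) function is computed, and it is folded into the frequency response model. Variable names and the regularization term are illustrative.

```python
# Sketch of the per-user adjustment: compute an inverse function that
# maps the detected response onto the target and fold it into the model.
import numpy as np

def adjust_model(model_response: np.ndarray,
                 detected: np.ndarray,
                 target: np.ndarray,
                 eps: float = 1e-6) -> np.ndarray:
    """Return an updated frequency response model for this user's ear."""
    D = np.fft.rfft(detected)
    T = np.fft.rfft(target)
    # Regularized inverse function applied to the detected response.
    correction = T * np.conj(D) / (np.abs(D) ** 2 + eps)
    return model_response * correction

n = 4_096
model = np.ones(n // 2 + 1, dtype=complex)
detected = np.random.randn(n)   # pressure wave measured at the ear
target = np.random.randn(n)     # pressure wave implied by the audio content
updated_model = adjust_model(model, detected, target)
```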
  • the controller 330 enables a similar audio experience to be produced across different users of the sound system. In a cartilage conduction audio system, the speaker of the audio system corresponds to a user's auricle.
  • the frequency response model will vary from user to user.
  • the audio system can maintain the same type of produced sound (e.g., neutral listening) regardless of the user.
  • Neutral listening means having a similar listening experience across different users. In other words, the listening experience is impartial or neutral with respect to the user (e.g., does not change from user to user).
  • the audio system uses a flat spectrum broadband signal to generate the adjusted frequency response model.
  • the controller 330 provides vibration instructions to the transducer assembly 310 based on a flat spectrum broadband signal.
  • the acoustic sensor 320 detects an acoustic pressure wave at an entrance of an ear of the user.
  • the controller 330 compares the detected acoustic pressure wave with the target acoustic pressure wave based on the flat spectrum broadband signal and adjusts the frequency response model of the audio system accordingly.
  • the flat spectrum broadband signal may be used while performing calibration of the audio system for a particular user.
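  • The calibration sketch below illustrates this idea: drive the transducer with a flat spectrum broadband signal (white noise), record the pressure at the ear entrance, and estimate the frequency response from cross- and auto-spectra. The Welch-based H1 estimator is one standard choice and is an assumption here, not the patent's stated method.

```python
# Calibration sketch with a flat spectrum broadband excitation. The
# "measured" signal is a stand-in; in the real system it comes from the
# acoustic sensor at the entrance of the ear canal.
import numpy as np
from scipy.signal import csd, welch

FS = 48_000
excitation = np.random.randn(4 * FS)        # flat spectrum broadband signal

# Stand-in for the measured pressure wave (a toy 3-tap response).
measured = np.convolve(excitation, [0.6, 0.3, 0.1], mode="same")

f, P_xy = csd(excitation, measured, fs=FS, nperseg=2_048)
_, P_xx = welch(excitation, fs=FS, nperseg=2_048)
H_est = P_xy / P_xx   # H1 estimate of the auricle-plus-coupling response
```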
  • the audio system may perform an initial calibration for a user instead of continuously monitoring the audio system.
  • the acoustic sensor may be temporarily coupled to the eyewear device for calibration for the user. Responsive to completing the calibration, the acoustic sensor may be uncoupled from the eyewear device. Advantages of removing the acoustic sensor from the eyewear device include making the device easier to wear and reducing its volume and weight.
  • FIG. 4 is a flowchart illustrating a process of operating an audio system that uses cartilage conduction, in accordance with an embodiment.
  • the process 400 of FIG. 4 may be performed by an audio system that uses cartilage conduction (e.g., the audio system 300).
  • Other entities (e.g., an eyewear device and/or console) may perform some or all of the steps of the process 400 in other embodiments. Likewise, embodiments may include different and/or additional steps, or may perform the steps in different orders.
  • the audio system generates 410 vibration instructions using a frequency response model and audio content.
  • the audio system may receive audio content from a console.
  • the audio content may include content such as music, radio signal, or calibration signal.
  • the frequency response model describes a relationship between an input (e.g., audio content, vibration instructions) and output (e.g., produced audio, sound pressure wave, vibrations) of the auricle of an ear of a user which is used as a speaker in the audio system.
  • a controller (e.g., the controller 330) may generate the vibration instructions. For example, the controller may start with the audio content and use the frequency response model (e.g., apply the inverse frequency response) to estimate the vibration instructions needed to produce the audio content.
  • the audio system provides 420 the vibration instructions to a transducer assembly (e.g., the transducer assembly 310).
  • the transducer assembly is coupled to the back of an auricle of an ear of a user and vibrates the auricle based on the vibration instructions.
  • the vibration of the auricle produces an acoustic pressure wave that provides sound based on the audio content to the user.
  • the audio system detects 430 the acoustic pressure wave at an entrance of an ear of the user (e.g., using the acoustic sensor 320).
  • the audio system adjusts 440 the frequency response model based in part on the detected acoustic pressure wave.
  • the controller may compare the detected acoustic pressure wave with a target acoustic pressure wave based on audio content provided to the user.
  • the controller can compute an inverse function to apply to the detected acoustic wave such that the detected acoustic pressure wave appears the same as the target acoustic pressure wave.
  • the audio system updates 450 vibration instructions using the adjusted frequency response model.
  • the updated vibration instructions may be generated by the controller which uses audio content and the adjusted frequency response model. For example, the controller may start with audio content and use the adjusted frequency response model to estimate updated vibration instructions to produce audio content closer to a target acoustic pressure wave.
  • the audio system provides 460 updated vibration instructions to the transducer assembly.
  • the transducer assembly vibrates the auricle to produce an updated acoustic pressure wave that provides sound to the user based on the updated vibration instructions.
  • the updated acoustic pressure wave may appear closer to a target acoustic pressure wave.
  • the audio system may dynamically adjust the frequency response model while the user is listening to audio content or may just adjust the frequency response model during a calibration of the audio system per user.
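  • Tying the steps together, the sketch below runs process 400 end to end, reusing the generate_content_signal and adjust_model helpers sketched earlier. The Transducer and Sensor classes are hypothetical stand-ins for hardware I/O, not part of the patent.

```python
# End-to-end sketch of process 400 (steps 410-460). Transducer and
# Sensor are hypothetical placeholders for the transducer assembly 310
# and the acoustic sensor 320.
import numpy as np

class Transducer:
    """Hypothetical stand-in: would drive the transducer assembly."""
    def vibrate(self, content: np.ndarray) -> None:
        self.last = content

class Sensor:
    """Hypothetical stand-in: would read the acoustic sensor."""
    def read(self, n: int) -> np.ndarray:
        return np.random.randn(n)  # placeholder detected pressure wave

def run_audio_loop(audio_blocks, model, transducer, sensor, eps=1e-6):
    for target in audio_blocks:
        content = generate_content_signal(target, model, eps)  # 410: generate
        transducer.vibrate(content)                            # 420: provide
        detected = sensor.read(len(target))                    # 430: detect
        model = adjust_model(model, detected, target, eps)     # 440: adjust
        # 450/460: the next pass generates and provides updated
        # instructions using the adjusted frequency response model.
    return model
```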
  • FIG. 5 is a system environment 500 of the eyewear device including a cartilage conduction audio system, in accordance with an embodiment.
  • the system 500 may operate in a VR, AR, or MR environment, or some combination thereof.
  • the system 500 shown by FIG. 5 comprises an eyewear device 505 and an input/output (I/O) interface 515 that is coupled to a console 510.
  • the eyewear device 505 may be an embodiment of the eyewear device 100. While FIG. 5 shows an example system 500 including one eyewear device 505 and one I/O interface 515, in other embodiments any number of these components may be included in the system 500.
  • the eyewear device 505 may be a head-mounted display that presents content to a user comprising augmented views of a physical, real-world environment with computer-generated elements (e.g., two dimensional (2D) or three dimensional (3D) images, 2D or 3D video, sound, etc.).
  • the presented content includes audio that is presented via an audio block 520 that receives audio information from the eyewear device 505, the console 510, or both, and presents audio data based on the audio information.
  • the eyewear device 505 may comprise one or more rigid bodies, which may be rigidly or non-rigidly coupled together. A rigid coupling between rigid bodies causes the coupled rigid bodies to act as a single rigid entity.
  • the eyewear device 505 presents virtual content to the user that is based in part on a real environment surrounding the user.
  • virtual content may be presented to a user of the eyewear device.
  • the user may physically be in a room, and virtual walls and a virtual floor of the room may be rendered as part of the virtual content.
  • the eyewear device 505 includes an audio block 520.
  • the audio block 520 is one embodiment of the audio system 300.
  • the audio block 520 is a cartilage conduction audio system which provides audio information to a user by vibrating the cartilage in a user's ear to produce sound.
  • the audio block 520 monitors the produced sound so that it can adjust a frequency response model for each ear of the user and can maintain the same type of produced sound across different individuals.
  • the eyewear device 505 may include an electronic display 525, an optics block 530, one or more position sensors 535, and an inertial measurement unit (IMU) 540.
  • the electronic display 525 and the optics block 530 are an embodiment of the lens 110.
  • the position sensors 535 and the IMU 540 are an embodiment of the sensor device 115.
  • Some embodiments of the eyewear device 505 have different components than those described in conjunction with FIG. 5 . Additionally, the functionality provided by various components described in conjunction with FIG. 5 may be differently distributed among the components of the eyewear device 505 in other embodiments, or be captured in separate assemblies remote from the eyewear device 505.
  • the electronic display 525 displays 2D or 3D images to the user in accordance with data received from the console 510.
  • the electronic display 525 comprises a single electronic display or multiple electronic displays (e.g., a display for each eye of a user).
  • Examples of the electronic display 525 include: a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), some other display, or some combination thereof.
  • the optics block 530 magnifies image light received from the electronic display 525, corrects optical errors associated with the image light, and presents the corrected image light to a user of the eyewear device 505.
  • the optics block 530 includes one or more optical elements.
  • Example optical elements included in the optics block 530 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflecting surface, or any other suitable optical element that affects image light.
  • the optics block 530 may include combinations of different optical elements.
  • one or more of the optical elements in the optics block 530 may have one or more coatings, such as partially reflective or anti-reflective coatings.
  • Magnification and focusing of the image light by the optics block 530 allows the electronic display 525 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase the field of view of the content presented by the electronic display 525. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., approximately 110 degrees diagonal), and in some cases all, of the user's field of view. Additionally in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.
  • the optics block 530 may be designed to correct one or more types of optical error.
  • optical error include barrel or pincushion distortion, longitudinal chromatic aberrations, or transverse chromatic aberrations.
  • Other types of optical errors may further include spherical aberrations, chromatic aberrations, or errors due to the lens field curvature, astigmatisms, or any other type of optical error.
  • content provided to the electronic display 525 for display is pre-distorted, and the optics block 530 corrects the distortion when it receives image light from the electronic display 525 generated based on the content.
  • the IMU 540 is an electronic device that generates data indicating a position of the eyewear device 505 based on measurement signals received from one or more of the position sensors 535.
  • a position sensor 535 generates one or more measurement signals in response to motion of the eyewear device 505.
  • Examples of position sensors 535 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 540, or some combination thereof.
  • the position sensors 535 may be located external to the IMU 540, internal to the IMU 540, or some combination thereof.
  • Based on the one or more measurement signals from one or more position sensors 535, the IMU 540 generates data indicating an estimated current position of the eyewear device 505 relative to an initial position of the eyewear device 505.
  • the position sensors 535 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll).
  • the IMU 540 rapidly samples the measurement signals and calculates the estimated current position of the eyewear device 505 from the sampled data.
  • the IMU 540 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated current position of a reference point on the eyewear device 505.
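  • A simplified sketch of this double integration is shown below. A real IMU would also remove gravity, apply gyroscope-derived orientation, and correct drift (e.g., with parameters from the console); this version only illustrates the two integrations.

```python
# Minimal sketch of the IMU position estimate: integrate accelerometer
# samples once for a velocity vector and again for the position of the
# reference point relative to its initial position. Illustrative only.
import numpy as np

def estimate_position(accel: np.ndarray, dt: float) -> np.ndarray:
    """accel: (n, 3) acceleration samples in the reference frame (m/s^2).
    Returns the (n, 3) estimated positions of the reference point (m)."""
    velocity = np.cumsum(accel * dt, axis=0)      # first integration
    position = np.cumsum(velocity * dt, axis=0)   # second integration
    return position

accel = np.zeros((1_000, 3))
accel[:, 0] = 0.5                        # constant 0.5 m/s^2 along x
pos = estimate_position(accel, dt=1e-3)  # ~0.25 m along x after one second
```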
  • the IMU 540 provides the sampled measurement signals to the console 510, which interprets the data to reduce error.
  • the reference point is a point that may be used to describe the position of the eyewear device 505.
  • the reference point may generally be defined as a point in space or a position related to the eyewear device's 505 orientation and position.
  • the IMU 540 receives one or more parameters from the console 510. As further discussed below, the one or more parameters are used to maintain tracking of the eyewear device 505. Based on a received parameter, the IMU 540 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain parameters cause the IMU 540 to update an initial position of the reference point so it corresponds to a next position of the reference point. Updating the initial position of the reference point to the next calibrated position of the reference point helps reduce accumulated error associated with the current position estimated by the IMU 540. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to "drift" away from the actual position of the reference point over time. In some embodiments of the eyewear device 505, the IMU 540 may be a dedicated hardware component. In other embodiments, the IMU 540 may be a software component implemented in one or more processors.
  • the I/O interface 515 is a device that allows a user to send action requests and receive responses from the console 510.
  • An action request is a request to perform a particular action.
  • an action request may be an instruction to start or end capture of image or video data, or an instruction to perform a particular action within an application.
  • the I/O interface 515 may include one or more input devices.
  • Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to the console 510.
  • An action request received by the I/O interface 515 is communicated to the console 510, which performs an action corresponding to the action request.
  • the I/O interface 515 includes an IMU 540, as further described above, that captures calibration data indicating an estimated position of the I/O interface 515 relative to an initial position of the I/O interface 515.
  • the I/O interface 515 may provide haptic feedback to the user in accordance with instructions received from the console 510. For example, haptic feedback is provided when an action request is received, or the console 510 communicates instructions to the I/O interface 515 causing the I/O interface 515 to generate haptic feedback when the console 510 performs an action.
  • the console 510 provides content to the eyewear device 505 for processing in accordance with information received from one or more of: the eyewear device 505 and the I/O interface 515.
  • the console 510 includes an application store 550, a tracking module 555 and an engine 545.
  • Some embodiments of the console 510 have different modules or components than those described in conjunction with FIG. 5 .
  • the functions further described below may be distributed among components of the console 510 in a different manner than described in conjunction with FIG. 5 .
  • the application store 550 stores one or more applications for execution by the console 510.
  • An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of the eyewear device 505 or the I/O interface 515. Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications.
  • the tracking module 555 calibrates the system environment 500 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the eyewear device 505 or of the I/O interface 515. Calibration performed by the tracking module 555 also accounts for information received from the IMU 540 in the eyewear device 505 and/or an IMU 540 included in the I/O interface 515. Additionally, if tracking of the eyewear device 505 is lost, the tracking module 555 may re-calibrate some or all of the system environment 500.
  • the tracking module 555 tracks movements of the eyewear device 505 or of the I/O interface 515 using information from the one or more position sensors 535, the IMU 540 or some combination thereof. For example, the tracking module 555 determines a position of a reference point of the eyewear device 505 in a mapping of a local area based on information from the eyewear device 505. The tracking module 555 may also determine positions of the reference point of the eyewear device 505 or a reference point of the I/O interface 515 using data indicating a position of the eyewear device 505 from the IMU 540 or using data indicating a position of the I/O interface 515 from an IMU 540 included in the I/O interface 515, respectively.
  • the tracking module 555 may use portions of data indicating a position of the eyewear device 505 from the IMU 540 to predict a future location of the eyewear device 505.
  • the tracking module 555 provides the estimated or predicted future position of the eyewear device 505 or the I/O interface 515 to the engine 545.
  • the engine 545 also executes applications within the system environment 500 and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof, of the eyewear device 505 from the tracking module 555. Based on the received information, the engine 545 determines content to provide to the eyewear device 505 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the engine 545 generates content for the eyewear device 505 that mirrors the user's movement in a virtual environment or in an environment augmenting the local area with additional content. Additionally, the engine 545 performs an action within an application executing on the console 510 in response to an action request received from the I/O interface 515 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the eyewear device 505 or haptic feedback via the I/O interface 515.
  • a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
  • Embodiments of the disclosure may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
  • any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • Embodiments of the disclosure may also relate to a product that is produced by a computing process described herein.
  • a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.

Abstract

An audio system includes a transducer assembly, an acoustic sensor, and a controller. The transducer assembly is coupled to a back of an auricle of an ear of a user. The transducer assembly vibrates the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions. The acoustic sensor detects the acoustic pressure wave at an entrance of the ear of the user. The controller dynamically adjusts a frequency response model based in part on the detected acoustic pressure wave, updates the vibration instructions using the adjusted frequency response model, and provides the updated vibration instructions to the transducer assembly.

Description

    BACKGROUND
  • This disclosure relates generally to an audio system in an eyewear device, and specifically relates to a cartilage conduction audio system for use in eyewear devices.
  • Head-mounted displays in virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) systems often include features such as speakers or personal audio devices to provide sound to users. These speakers or personal audio devices are typically formed over the ear and cover the ear (e.g., headphones), or placed in the ear (e.g., in-ear headphones or earbuds). However, a user wearing a head-mounted display in a VR, AR, or MR system can benefit from keeping the ear canal open and not covered by an audio device. For example, the user can have a more immersive and safer experience and receive spatial cues from ambient sound when the ear is unobstructed. It is desirable for an audio system of the eyewear device to be lightweight, ergonomic, low in power consumption, and not to produce crosstalk between the ears. Such features are challenging to incorporate in a full frequency (20 Hz to 20,000 Hz) audio reproduction system on an eyewear device while leaving the ear canal open to the acoustic scene around the user.
  • SUMMARY
  • An audio system includes a transducer assembly, an acoustic sensor, and a controller. The transducer assembly is located behind the ear so that an ear canal of the user is clear. The transducer assembly is coupled to a back of an auricle of the user to vibrate the auricle over a frequency range, creating an acoustic pressure wave in accordance with vibration instructions. The auricle of the ear of the user is used as a speaker, keeping the ear canal open such that the ear is open to the acoustic scene around the user. The acoustic sensor detects the acoustic pressure wave at an entrance of the ear of the user. The controller adjusts a frequency response model based in part on the detected acoustic pressure wave, updates the vibration instructions using the adjusted frequency response model, and provides the updated vibration instructions to the transducer assembly. Accordingly, an audio response is individualized for each user based on the detected signal to equalize the audio response per individual. The audio system can be integrated into an eyewear device (e.g., glasses-type headset, near eye display, prescription glasses) and be located behind the ear of the user.
  • The transducer assembly may include one or more transducers to generate vibrations over a range of frequencies. For example, the transducer assembly includes a piezoelectric transducer to generate vibrations over a first portion of a frequency range and a moving coil transducer to generate vibrations over a second portion of the frequency range.
  • The acoustic sensor may be a microphone positioned at the entrance of the ear canal to sense the acoustic pressure wave. Alternatively, the acoustic sensor may be a vibration sensor coupled to the auricle of the ear of the user to sense a vibration of the auricle corresponding to the acoustic pressure wave at the entrance of the ear of the user. The vibration sensor may be a piezoelectric sensor or an accelerometer.
  • Embodiments according to the invention are in particular disclosed in the attached claims directed to an audio system, an eyewear device, and a storage medium, wherein any feature mentioned in one claim category, e.g. audio system, can be claimed in another claim category, e.g. eyewear device, storage medium, system, computer program product, and method as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof is disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.
  • In an embodiment according to the invention, an audio system may comprise:
    • a transducer assembly configured to be coupled to a first portion of a back of an auricle of an ear of a user, the transducer assembly including at least one transducer that is configured to vibrate the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions;
    • an acoustic sensor configured to detect the acoustic pressure wave at an entrance of the ear of the user; and
    • a controller configured to:
      • dynamically adjust a frequency response model based in part on the detected acoustic pressure wave;
      • update the vibration instructions using the adjusted frequency response model; and
      • provide the updated vibration instructions to the transducer assembly.
  • The at least one transducer may be a piezoelectric transducer.
  • The transducer assembly may be configured to generate vibrations over a range of frequencies, and the transducer assembly may include a first transducer and a second transducer, the first transducer may be configured to provide a first portion of the frequency range, and the second transducer may be configured to provide a second portion of the frequency range.
  • The second transducer may be a moving coil transducer.
  • The acoustic sensor may be a microphone configured to sense the acoustic pressure wave at the entrance of the ear canal.
  • The acoustic sensor may be a vibration sensor coupled to a third portion of the auricle, and may be configured to sense a vibration of the auricle corresponding to the acoustic pressure wave at the entrance of the ear of the user.
  • The controller may adjust the frequency response model based in part on the detected acoustic pressure wave by computing an inverse function and applying the inverse function to the detected acoustic pressure wave.
  • The audio system may be part of an eyewear device.
  • The audio system may use a flat spectrum broadband signal to generate the adjusted frequency response model.
  • In an embodiment according to the invention, an eyewear device may comprise:
    • a transducer assembly configured to be coupled to a first portion of a back of an auricle of an ear of a user, the transducer assembly including at least one transducer that is configured to vibrate the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions;
    • a controller configured to:
      • generate the vibration instructions using a frequency response model and audio content; and
      • provide the vibration instructions to the transducer assembly.
  • In an embodiment according to the invention, an eyewear device may further comprise:
    • an acoustic sensor configured to detect the acoustic pressure wave at an entrance of the ear of the user,
    • wherein the controller is further configured to:
      • dynamically adjust the frequency response model based in part on the detected acoustic pressure wave;
      • update the vibration instructions using the adjusted frequency response model; and
      • provide the updated vibration instructions to the transducer assembly.
  • The at least one transducer may be a piezoelectric transducer.
  • The transducer assembly may be configured to generate vibrations over a range of frequencies, and the transducer assembly may include a first transducer and a second transducer, the first transducer may be configured to provide a first portion of the frequency range, and the second transducer may be configured to provide a second portion of the frequency range.
  • The first transducer may be a piezoelectric transducer and the second transducer may be a moving coil transducer.
  • The acoustic sensor may be a microphone configured to sense the acoustic pressure wave at the entrance of the ear canal.
  • The acoustic sensor may be a vibration sensor coupled to a third portion of the auricle, and may be configured to sense a vibration of the auricle corresponding to the acoustic pressure wave at the entrance of the ear of the user.
  • The controller may adjust the frequency response model based in part on the detected acoustic pressure wave by computing an inverse function and applying the inverse function to the detected acoustic pressure wave.
  • A flat spectrum broadband signal may be used to generate the adjusted frequency response model.
  • In an embodiment according to the invention, an eyewear device may further comprise:
    • an acoustic sensor configured to detect the acoustic pressure wave at an entrance of the ear of the user, wherein the acoustic sensor is temporarily coupled to the eyewear device for calibration of the user and, responsive to completing calibration of the user, the acoustic sensor may be uncoupled from the eyewear device,
    • wherein the controller is further configured to:
      • adjust the frequency response model based in part on the detected acoustic pressure wave;
      • update the vibration instructions using the adjusted frequency response model; and
      • provide the updated vibration instructions to the transducer assembly.
  • In an embodiment according to the invention, a non-transitory computer-readable storage medium may store executable computer program instructions, the computer program instructions may comprise instructions for:
    • generating vibration instructions using a frequency response model and audio content;
    • providing the vibration instructions to a transducer assembly configured to be coupled to a first portion of a back of an auricle of an ear of a user;
    • detecting an acoustic pressure wave at an entrance of the ear of the user;
    • adjusting the frequency response model based in part on the detected acoustic pressure wave;
    • updating the vibration instructions using the adjusted frequency response model; and
    • providing the updated vibration instructions to the transducer assembly.
  • In a further embodiment of the invention, one or more computer-readable non-transitory storage media embody software that is operable when executed to perform in a system according to the invention or any of the above mentioned embodiments.
  • In a further embodiment of the invention, a computer-implemented method uses a system according to the invention or any of the above mentioned embodiments.
  • In a further embodiment of the invention, a computer program product, preferably comprising a computer-readable non-transitory storage medium, is used in a system according to the invention or any of the above mentioned embodiments.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Figure (FIG.) 1
    is an example illustrating an eyewear device including a cartilage conduction audio system (audio system), in accordance with an embodiment.
    FIG. 2A
    is an example illustrating a portion of an eyewear device including a transducer assembly and an acoustic sensor that is a microphone on an ear of a user, in accordance with an embodiment.
    FIG. 2B
    is an example illustrating a portion of an eyewear device including a transducer assembly and an acoustic sensor that is a piezoelectric transducer, in accordance with an embodiment.
    FIG. 3
    is a block diagram of an audio system, in accordance with an embodiment.
    FIG. 4
    is a flowchart illustrating a process of operating a cartilage conduction audio system, in accordance with an embodiment.
    FIG. 5
    is a system environment of an eyewear device including a cartilage conduction audio system, in accordance with an embodiment.
  • The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
  • DETAILED DESCRIPTION
  • Disclosed is a cartilage conduction audio system (audio system) that uses cartilage conduction to provide sound to an ear of a user while keeping the ear canal of the user unobstructed. The audio system includes a transducer coupled to a back of the ear of the user. The transducer generates sound by vibrating the back of the ear (i.e., the auricle, also referred to as the pinna) of the user, which vibrates the cartilage in the ear of the user to generate acoustic waves corresponding to received audio content. Advantages of an audio system that uses cartilage conduction over one that only uses bone conduction (e.g., vibration of bones of the skull) include, e.g., reduced crosstalk between the ears, reduced size and power consumption of the audio system, and improved ergonomics. An audio system that uses cartilage conduction requires less coupling force (e.g., less static constant force on the skin) to produce a similar hearing sensation than an audio system that uses bone conduction, resulting in improved comfort, which is particularly desirable for a wearable device that is worn all day.
  • System Architecture
  • FIG. 1 is an example illustrating an eyewear device 100 including a cartilage conduction audio system (audio system), in accordance with an embodiment. The eyewear device 100 presents media to a user. In one embodiment, the eyewear device 100 may be a head mounted display (HMD). Examples of media presented by the eyewear device 100 include one or more images, video, audio, or some combination thereof. The eyewear device 100 may include, among other components, a frame 105, a lens 110, a transducer assembly 120, an acoustic sensor 125, and a controller 130. In some embodiments, the eyewear device 100 may also optionally include a sensor device 115.
  • The eyewear device 100 may correct or enhance the vision of a user, protect the eye of a user, or provide images to a user. The eyewear device 100 may be eyeglasses which correct for defects in a user's eyesight. The eyewear device 100 may be sunglasses which protect a user's eye from the sun. The eyewear device 100 may be safety glasses which protect a user's eye from impact. The eyewear device 100 may be a night vision device or infrared goggles to enhance a user's vision at night. The eyewear device 100 may be a head mounted display that produces VR, AR, or MR content for the user. Alternatively, the eyewear device 100 may not include a lens 110 and may be a frame 105 with an audio system that provides audio (e.g., music, radio, podcasts) to a user.
  • The frame 105 includes a front part that holds the lens 110 and end pieces to attach to the user. The front part of the frame 105 bridges the top of a nose of the user. The end pieces (e.g., temples) are the portions of the frame 105 that extend along the temples of the user. The length of an end piece may be adjustable (e.g., adjustable temple length) to fit different users. An end piece may also include a portion that curls behind the ear of the user (e.g., temple tip, ear piece).
  • The lens 110 provides or transmits light to a user wearing the eyewear device 100. The lens 110 may be a prescription lens (e.g., single vision, bifocal, trifocal, or progressive) to help correct for defects in a user's eyesight. The prescription lens transmits ambient light to the user wearing the eyewear device 100. The transmitted ambient light may be altered by the prescription lens to correct for defects in the user's eyesight. The lens 110 may be a polarized lens or a tinted lens to protect the user's eyes from the sun. The lens 110 may be one or more waveguides as part of a waveguide display in which image light is coupled through an end or edge of the waveguide to the eye of the user. The lens 110 may include an electronic display for providing image light and may also include an optics block for magnifying image light from the electronic display. Additional detail regarding the lens 110 can be found in the detailed description of FIG. 5. The lens 110 is held by a front part of the frame 105 of the eyewear device 100.
  • The sensor device 115 estimates a current position of the eyewear device 100 relative to an initial position of the eyewear device 100. The sensor device 115 may be located on a portion of the frame 105 of the eyewear device 100. The sensor device 115 includes a position sensor and an inertial measurement unit. Additional details about the sensor device 115 can be found in the detailed description of FIG. 5.
  • The audio system of the eyewear device 100 includes the transducer assembly 120, the acoustic sensor 125, and the controller 130. The audio system provides audio content to a user by vibrating the auricle of the ear of the user to produce an acoustic pressure wave. The audio system also uses feedback to create a similar audio experience across different users. Additional detail regarding the audio system can be found in the detailed description of FIG. 3.
  • The transducer assembly 120 produces sound by vibrating the cartilage in the ear of the user. The transducer assembly 120 is coupled to an end piece of the frame 105 and is configured to be coupled to the back of an auricle of the ear of the user. The auricle is a portion of the outer ear that projects out of the head of the user. The transducer assembly 120 receives vibration instructions from the controller 130. Vibration instructions may include a content signal, a control signal, and a gain signal. The content signal may be based on audio content for presentation to the user. The control signal may be used to enable or disable the transducer assembly 120 or one or more transducers of the transducer assembly. The gain signal may be used to amplify the content signal. The transducer assembly 120 may include one or more transducers to cover different parts of a frequency range. For example, a piezoelectric transducer may be used to cover a first part of a frequency range and a moving coil transducer may be used to cover a second part of the frequency range. Additional detail regarding the transducer assembly 120 can be found in the detailed description of FIG. 3.
  • The acoustic sensor 125 detects an acoustic pressure wave at an entrance of an ear of a user. The acoustic sensor 125 is coupled to an end piece of the frame 105. The acoustic sensor 125 as shown in FIG. 1 is a microphone which may be positioned at the entrance of the user's ear. In this embodiment, the microphone may directly measure the acoustic pressure wave at the entrance of the ear of the user. Alternatively, the acoustic sensor 125 may be a vibration sensor that is configured to be coupled to the back of the pinna of the user. The vibration sensor may indirectly measure the acoustic pressure wave at the entrance of the ear. For example, the vibration sensor may measure a vibration that is a reflection of the acoustic pressure wave at the entrance of the ear and/or measure a vibration created by the transducer assembly on the auricle of the ear of the user, which may be used to estimate the acoustic pressure wave at the entrance of the ear. In one embodiment, a mapping between the acoustic pressure generated at the entrance to the ear canal and the vibration level generated on the pinna is an experimentally determined quantity that is measured on a representative sample of users and stored. This stored mapping between acoustic pressure and vibration level (e.g., a frequency-dependent linear mapping) of the pinna is applied to a measured vibration signal from the vibration sensor, which serves as a proxy for the acoustic pressure at the entrance of the ear canal. The vibration sensor can be an accelerometer or a piezoelectric sensor. An accelerometer may be a piezoelectric accelerometer or a capacitive accelerometer. The capacitive accelerometer senses changes in capacitance between structures that can be moved by an accelerative force. In some embodiments, the acoustic sensor 125 is removed from the eyewear device 100 after calibration. Additional detail regarding the acoustic sensor 125 can be found in the detailed description of FIG. 3.
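  • As a concrete illustration of the stored mapping described above, the following Python sketch applies a frequency-dependent linear gain, measured in advance on a representative sample of users, to a vibration spectrum to estimate the pressure at the ear-canal entrance. The function name, FFT size, and placeholder values are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch: apply a stored frequency-dependent linear mapping to a
# measured vibration signal to estimate the pressure spectrum at the
# ear-canal entrance. Names and values are assumptions for illustration.
import numpy as np

FFT_SIZE = 1024  # assumed analysis size

def estimate_pressure_spectrum(vibration_signal, mapping_gain):
    """Estimate the pressure spectrum from a measured vibration signal.

    mapping_gain: per-bin linear gain (pressure per unit vibration) with
    FFT_SIZE // 2 + 1 entries, determined experimentally and stored.
    """
    spectrum = np.fft.rfft(vibration_signal, n=FFT_SIZE)
    return spectrum * mapping_gain  # frequency-dependent linear mapping

# Illustrative use with placeholder data.
vibration = np.random.randn(4800)                  # stand-in sensor samples
stored_mapping = 0.5 * np.ones(FFT_SIZE // 2 + 1)  # stand-in measured mapping
pressure_spectrum = estimate_pressure_spectrum(vibration, stored_mapping)
```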
  • The controller 130 provides vibration instructions to the transducer assembly 120, receives information from the acoustic sensor 125 regarding the produced sound, and updates the vibration instructions based on the received information. Vibration instructions instruct the transducer assembly 120 how to produce vibrations. For example, vibration instructions may include a content signal (e.g., an electrical signal applied to the transducer assembly 120 to produce a vibration), a control signal to enable or disable the transducer assembly 120, and a gain signal to scale the content signal (e.g., increase or decrease the vibrations produced by the transducer assembly 120). The vibration instructions may be generated by the controller 130. The controller 130 may receive audio content (e.g., music, a calibration signal) from a console for presentation to a user and generate vibration instructions based on the received audio content. The controller 130 receives information from the acoustic sensor 125 that describes the produced sound at an ear of the user. In one embodiment, the acoustic sensor 125 is a vibration sensor that measures a vibration of a pinna of a user, and the controller 130 applies a previously stored frequency-dependent linear mapping of pressure to vibration to determine the acoustic pressure wave at the entrance of the ear from the detected vibration. The controller 130 uses the received information as feedback to compare the produced sound to a target sound (e.g., audio content) and adjusts the vibration instructions to make the produced sound closer to the target sound. The controller 130 is embedded into the frame 105 of the eyewear device 100. In other embodiments, the controller 130 may be located in a different location. For example, the controller 130 may be part of the transducer assembly or located external to the eyewear device 100. Additional detail regarding the controller 130 can be found in the detailed description of FIG. 3.
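  • The vibration instructions described above bundle a content signal, a control signal, and a gain signal. A minimal sketch of one possible representation follows; the field names are assumptions for illustration, not the patent's actual data format.

```python
# Minimal sketch of a vibration-instruction bundle (content signal, control
# signal, gain signal); field names are assumed.
from dataclasses import dataclass
import numpy as np

@dataclass
class VibrationInstructions:
    content: np.ndarray   # drive signal derived from the audio content
    enabled: bool = True  # control signal: enable/disable the transducer(s)
    gain: float = 1.0     # scales the content signal up or down

    def drive_signal(self) -> np.ndarray:
        """Signal actually applied to the transducer assembly."""
        if not self.enabled:
            return np.zeros_like(self.content)
        return self.gain * self.content
```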
  • FIG. 2A is an example illustrating a portion of the eyewear device 200 including a transducer assembly 220 and an acoustic sensor 225 that is a microphone on an ear of a user, in accordance with an embodiment. The eyewear device 200, the transducer assembly 220, and the acoustic sensor 225 are embodiments of the eyewear device 100, the transducer assembly 120, and the acoustic sensor 125, respectively. The transducer assembly 220 is coupled to a back of an ear of a user. The transducer assembly 220 vibrates the back of the ear of the user to generate a pressure wave based on vibration instructions. The acoustic sensor 225 is a microphone positioned at an entrance of the ear of the user to detect the pressure wave produced by the transducer assembly 220. The audio system compares the detected pressure wave (e.g., produced sound) with a target pressure wave (e.g., audio content) and adjusts the vibration instructions to make the detected pressure wave more similar to the target pressure wave.
  • FIG. 2B is an example illustrating a portion of the eyewear device 250 including a transducer assembly 260 and an acoustic sensor 275 that is a piezoelectric transducer, in accordance with an embodiment. The eyewear device 250, the transducer assembly 260, and the acoustic sensor 275 are embodiments of the eyewear device 100, the transducer assembly 120, and the acoustic sensor 125, respectively. The transducer assembly 260 is a transducer located around the end piece of the frame (e.g., the bottom of a behind-the-ear ear cup) that is to be coupled to the back of the ear of a user. In this embodiment, the transducer assembly 260 is shown as a circular voice coil (e.g., moving coil) transducer. The acoustic sensor 275 is a piezoelectric transducer that is to be coupled to the back of the ear of the user. The piezoelectric transducer may be a stacked piezoelectric transducer and may measure a few millimeters across (e.g., 9 mm).
  • FIG. 3 is a block diagram of an audio system 300, in accordance with an embodiment. The audio system in FIG. 1 is an embodiment of the audio system 300. The audio system 300 includes a transducer assembly 310, an acoustic sensor 320, and a controller 330.
  • The transducer assembly 310 vibrates a cartilage of a user's ear in accordance with the vibration instructions (e.g., received from the controller 330). The transducer assembly 310 is coupled to a first portion of a back of an auricle of an ear of a user. The transducer assembly 310 includes at least one transducer to vibrate the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions. The transducer may be a single piezoelectric transducer. A piezoelectric transducer can generate frequencies up to 20 kHz using a range of voltages around +/-100 V. The range of voltages may include lower voltages as well (e.g., +/-10 V). The piezoelectric transducer may be a stacked piezoelectric actuator. The stacked piezoelectric actuator includes multiple piezoelectric elements that are stacked (e.g., mechanically connected in series). A stacked piezoelectric actuator may operate at a lower range of voltages because its displacement is approximately the displacement of a single piezoelectric element multiplied by the number of elements in the stack. A piezoelectric transducer is made of a piezoelectric material that can generate a strain (e.g., deformation in the material) in the presence of an electric field. The piezoelectric material may be a polymer (e.g., polyvinyl chloride (PVC), polyvinylidene fluoride (PVDF)), a polymer-based composite, a ceramic, or a crystal (e.g., quartz (silicon dioxide or SiO2), lead zirconate titanate (PZT)). Applying an electric field or a voltage across a polarized polymer changes its polarization, and the polymer may compress or expand depending on the polarity and magnitude of the applied electric field. The piezoelectric transducer may be coupled to a material (e.g., silicone) that attaches well to the back of an ear of a user. In one embodiment, the transducer assembly 310 maintains good surface contact with the back of the user's ear and maintains a steady amount of application force (e.g., 1 newton) to the user's ear.
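  • The voltage advantage of a stacked actuator can be checked with back-of-the-envelope arithmetic, as in the short sketch below; the d33 coefficient and element count are typical illustrative values, not figures from the disclosure.

```python
# Back-of-the-envelope check (assumed, typical values): stack displacement
# is roughly the per-element displacement times the number of elements,
# which is why a stack needs less drive voltage for the same motion.
d33 = 500e-12    # m/V, piezoelectric coefficient of one PZT element (assumed)
n_elements = 20  # elements mechanically connected in series (assumed)
voltage = 10.0   # drive voltage in volts

single_displacement = d33 * voltage                    # one element: 5 nm
stack_displacement = n_elements * single_displacement  # whole stack: 100 nm
print(f"{stack_displacement * 1e9:.0f} nm at {voltage:.0f} V")
```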
  • In some embodiments, the transducer assembly 310 is configured to generate vibrations over a range of frequencies and includes a first transducer and a second transducer. The first transducer is configured to provide a first portion of the frequency range (e.g., a higher range, up to 20 kHz). The first transducer may be, e.g., a piezoelectric transducer. The second transducer is configured to provide a second portion of the frequency range (e.g., a lower range, around 20 Hz). The second transducer may be a piezoelectric transducer or a different type of transducer such as a moving coil transducer. A typical moving coil transducer includes a coil of wire and a permanent magnet that produces a permanent magnetic field. Applying a current to the wire while it is placed in the permanent magnetic field produces a force on the coil, based on the amplitude and polarity of the current, that can move the coil toward or away from the permanent magnet. The second transducer may be made of a more rigid material than the first transducer. The second transducer may be coupled to a second portion of the back of the ear of the user, different from the first portion. Alternatively, the second transducer may be in contact with the skull of the user.
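  • One way to realize the two-transducer split described above is a simple crossover filter. The sketch below, which assumes SciPy and an illustrative 2 kHz split point, routes the low band to the moving coil transducer and the high band to the piezoelectric transducer.

```python
# Hedged sketch of a two-way crossover for a two-transducer assembly; the
# split point, filter order, and sample rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000           # sample rate (assumed)
CROSSOVER_HZ = 2_000  # split point between the two transducers (assumed)

sos_low = butter(4, CROSSOVER_HZ, btype="lowpass", fs=FS, output="sos")
sos_high = butter(4, CROSSOVER_HZ, btype="highpass", fs=FS, output="sos")

def split_bands(content_signal):
    """Route lows to the moving coil transducer and highs to the piezo."""
    low = sosfilt(sos_low, content_signal)    # second portion: lower range
    high = sosfilt(sos_high, content_signal)  # first portion: up to 20 kHz
    return low, high

low_band, high_band = split_bands(np.random.randn(FS))
```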
  • The acoustic sensor 320 provides information regarding the produced sound to the controller 330. The acoustic sensor 320 detects an acoustic pressure wave at an entrance of an ear of a user. In one embodiment, the acoustic sensor 320 is a microphone positioned at an entrance of an ear of a user. A microphone is a transducer that converts pressure into an electrical signal. The frequency response of the microphone may be relatively flat in some portions of a frequency range and linear in other portions. The microphone may be configured to receive a gain signal to scale the detected signal based on the vibration instructions provided to the transducer assembly 310. For example, the gain may be adjusted based on the vibration instructions to avoid clipping of the detected signal or to improve the signal-to-noise ratio of the detected signal.
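  • The gain adjustment described above can be sketched as choosing a microphone gain from the expected drive level so that loud playback does not clip the detected signal. The headroom figure and the helper's interface below are assumptions for illustration.

```python
# Hedged sketch: derive a microphone gain from the drive signal's expected
# peak so the detected signal stays below full scale with some headroom.
import numpy as np

def microphone_gain(drive_signal, full_scale=1.0, headroom_db=6.0):
    """Choose a gain keeping the expected detected peak below full scale."""
    expected_peak = float(np.max(np.abs(drive_signal))) + 1e-12
    target_peak = full_scale * 10.0 ** (-headroom_db / 20.0)
    return min(1.0, target_peak / expected_peak)
```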
  • In some embodiments, the acoustic sensor 320 may be a vibration sensor. The vibration sensor is coupled to a portion of the ear. In some embodiments, the vibration sensor and the transducer assembly 310 couple to different portions of the ear. The vibration sensor is similar to the transducers used in the transducer assembly, except the signal flows in reverse: instead of an electrical signal producing a mechanical vibration in a transducer, a mechanical vibration generates an electrical signal in the vibration sensor. A vibration sensor may be made of a piezoelectric material that generates an electrical signal when the material is deformed. The piezoelectric material may be a polymer (e.g., PVC, PVDF), a polymer-based composite, a ceramic, or a crystal (e.g., SiO2, PZT). Applying a pressure on the piezoelectric material changes its polarization and produces an electrical signal. The piezoelectric sensor may be coupled to a material (e.g., silicone) that attaches well to the back of an ear of a user. A vibration sensor can also be an accelerometer, which may be piezoelectric or capacitive. A capacitive accelerometer measures changes in capacitance between structures that can be moved by an accelerative force. In one embodiment, the vibration sensor maintains good surface contact with the back of the user's ear and maintains a steady amount of application force (e.g., 1 newton) to the user's ear. The vibration sensor may be integrated in an inertial measurement unit (IMU) integrated circuit (IC). The IMU is further described in relation to FIG. 5.
  • The controller 330 controls components of the audio system 300. The controller 330 generates vibration instructions that instruct the transducer assembly 310 how to produce vibrations. For example, vibration instructions may include a content signal (e.g., an electrical signal applied to the transducer assembly 310 to produce a vibration), a control signal to enable or disable the transducer assembly 310, and a gain signal to scale the content signal (e.g., increase or decrease the vibrations produced by the transducer assembly 310). The controller 330 generates the content signal of the vibration instructions based on audio content and a frequency response model. A frequency response model describes the response of a system to inputs at certain frequencies and may indicate how an output is shifted in amplitude and phase based on the input. Thus, the controller 330 may generate a content signal (e.g., input signal) of the vibration instructions from the audio content (e.g., target output) and the frequency response model (e.g., the relationship of the input to the output). In one embodiment, the controller 330 may generate the content signal of the vibration instructions by applying an inverse of the frequency response to the audio content. The controller 330 receives feedback from the acoustic sensor 320, which provides information about the sound signal (e.g., acoustic pressure wave) produced by the transducer assembly 310. The controller 330 may compare the detected acoustic pressure wave with a target acoustic pressure wave based on the audio content provided to the user. The controller 330 can then compute an inverse function to apply to the detected acoustic pressure wave such that the detected acoustic pressure wave appears the same as the target acoustic pressure wave. Thus, the controller 330 can adjust the frequency response model of the audio system using the computed inverse function specific to each user. The adjustment of the frequency response model may be performed while the user is listening to audio content. The controller 330 can then generate updated vibration instructions using the adjusted frequency response model. The controller 330 thereby enables a similar audio experience to be produced across different users of the audio system. In a cartilage conduction audio system, the speaker of the audio system corresponds to the user's auricle. As each user's auricle is different in shape and size, the frequency response model varies from user to user. By adjusting the frequency response model for each user based on audio feedback, the audio system can maintain the same type of produced sound (e.g., neutral listening) regardless of the user. Neutral listening means the listening experience is similar across different users, i.e., impartial or neutral to the user.
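  • The inverse-function computation described above can be sketched in the frequency domain: compare the detected spectrum to the target spectrum and form a regularized per-bin correction. The regularization constant below is a numerical-safety assumption, not part of the disclosure.

```python
# Hedged sketch of the inverse-function idea: derive a per-bin correction H
# such that H applied to the detected signal approximates the target. Both
# signals are assumed to be equal-length arrays; eps avoids division by
# near-zero bins.
import numpy as np

def frequency_response_correction(target, detected, eps=1e-6):
    """Return per-bin correction H such that H * detected ~= target."""
    T = np.fft.rfft(target)
    D = np.fft.rfft(detected)
    return T * np.conj(D) / (np.abs(D) ** 2 + eps)  # regularized inverse
```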
  • In one embodiment, the audio system uses a flat spectrum broadband signal to generate the adjusted frequency response model. For example, the controller 330 provides vibration instructions to the transducer assembly 310 based on a flat spectrum broadband signal. The acoustic sensor 320 detects an acoustic pressure wave at an entrance of an ear of the user. The controller 330 compares the detected acoustic pressure wave with the target acoustic pressure wave based on the flat spectrum broadband signal and adjusts the frequency response model of the audio system accordingly. In this embodiment, the flat spectrum broadband signal may be used while performing calibration of the audio system for a particular user. Thus, the audio system may perform an initial calibration for a user instead of continuously monitoring the audio system. In this embodiment, the acoustic sensor may be temporarily coupled to the eyewear device for calibration of the user. Responsive to completing calibration of the user, the acoustic sensor may be uncoupled from the eyewear device. Advantages of removing the acoustic sensor from the eyewear device include improved wearability and reduced volume and weight of the eyewear device.
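  • A one-shot, per-user calibration along the lines described above might look like the following sketch. It assumes the flat spectrum broadband signal is white noise, the system is approximately linear, and a hypothetical play_and_record callable abstracts driving the transducer assembly and reading the acoustic sensor.

```python
# Hedged sketch of a flat-broadband calibration pass; play_and_record is a
# hypothetical interface, not an API from the disclosure.
import numpy as np

def calibrate(play_and_record, duration_s=2.0, fs=48_000, eps=1e-6):
    """Estimate a per-user inverse filter from a flat broadband probe.

    play_and_record: callable that drives the transducer assembly with a
    signal and returns the acoustic sensor's recording (assumed interface).
    """
    probe = np.random.randn(int(duration_s * fs))  # flat-spectrum broadband
    recorded = play_and_record(probe)              # detected pressure wave
    P = np.fft.rfft(probe)
    R = np.fft.rfft(recorded, n=probe.size)        # crop/pad to match bins
    return P * np.conj(R) / (np.abs(R) ** 2 + eps)  # per-user inverse filter
```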
  • FIG. 4 is a flowchart illustrating a process of operating an audio system that uses cartilage conduction, in accordance with an embodiment. The process 400 of FIG. 4 may be performed by an audio system that uses cartilage conduction (e.g., the audio system 300). Other entities (e.g., an eyewear device and/or console) may perform some or all of the steps of the process in other embodiments. Likewise, embodiments may include different and/or additional steps, or perform the steps in different orders.
  • The audio system generates 410 vibration instructions using a frequency response model and audio content. The audio system may receive the audio content from a console. The audio content may include, e.g., music, a radio signal, or a calibration signal. The frequency response model describes a relationship between an input (e.g., audio content, vibration instructions) and an output (e.g., produced audio, sound pressure wave, vibrations) of the auricle of an ear of a user, which is used as a speaker in the audio system. A controller (e.g., the controller 330) may generate the vibration instructions using the frequency response model and the audio content. For example, the controller may start with the audio content and use the frequency response model (e.g., apply the inverse frequency response) to estimate the vibration instructions that produce the audio content, as sketched below.
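  • The following sketch illustrates step 410 under the assumptions of the earlier sketches: the audio content is filtered with an rfft-domain inverse filter to obtain the content signal of the vibration instructions. Single-FFT processing of a whole clip is a simplification; a real system would likely block-process.

```python
# Hedged sketch of generating the content signal by applying an inverse
# frequency-response filter to the audio content (rfft-domain format is an
# assumption carried over from the earlier sketches).
import numpy as np

def generate_content_signal(audio, inverse_bins):
    """inverse_bins: rfft-domain filter with audio.size // 2 + 1 entries."""
    spectrum = np.fft.rfft(audio) * inverse_bins
    return np.fft.irfft(spectrum, n=audio.size)
```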
  • The audio system provides 420 the vibration instructions to a transducer assembly (e.g., the transducer assembly 310). The transducer assembly is coupled to the back of an auricle of an ear of a user and vibrates the auricle based on the vibration instructions. The vibration of the auricle produces an acoustic pressure wave that provides sound based on the audio content to the user.
  • The audio system detects 430 an acoustic pressure wave at an entrance of an ear of the user. The acoustic pressure wave is generated by the transducer assembly. In one embodiment, an acoustic sensor (e.g., the acoustic sensor 320) may be a microphone positioned at the entrance of the ear of the user to detect the acoustic pressure wave at the entrance of the ear of the user.
  • The audio system adjusts 440 the frequency response model based in part on the detected acoustic pressure wave. The controller may compare the detected acoustic pressure wave with a target acoustic pressure wave based on the audio content provided to the user. The controller can compute an inverse function to apply to the detected acoustic pressure wave such that the detected acoustic pressure wave appears the same as the target acoustic pressure wave.
  • The audio system updates 450 the vibration instructions using the adjusted frequency response model. The updated vibration instructions may be generated by the controller using the audio content and the adjusted frequency response model. For example, the controller may start with the audio content and use the adjusted frequency response model to estimate updated vibration instructions that produce audio closer to the target acoustic pressure wave.
  • The audio system provides 460 the updated vibration instructions to the transducer assembly. The transducer assembly vibrates the auricle, producing an updated acoustic pressure wave that provides sound to the user based on the updated vibration instructions. The updated acoustic pressure wave may be closer to the target acoustic pressure wave.
  • The audio system may dynamically adjust the frequency response model while the user is listening to audio content, or may adjust the frequency response model only during a per-user calibration of the audio system.
  • FIG. 5 is a system environment 500 of the eyewear device including a cartilage conduction audio system, in accordance with an embodiment. The system 500 may operate in a VR, AR, or MR environment, or some combination thereof. The system 500 shown by FIG. 5 comprises an eyewear device 505 and an input/output (I/O) interface 515 that is coupled to a console 510. The eyewear device 505 may be an embodiment of the eyewear device 100. While FIG. 5 shows an example system 500 including one eyewear device 505 and one I/O interface 515, in other embodiments any number of these components may be included in the system 500. For example, there may be multiple eyewear devices 505, each having an associated I/O interface 515, with each eyewear device 505 and I/O interface 515 communicating with the console 510. In alternative configurations, different and/or additional components may be included in the system 500. Additionally, functionality described in conjunction with one or more of the components shown in FIG. 5 may be distributed among the components in a different manner than described in conjunction with FIG. 5 in some embodiments. For example, some or all of the functionality of the console 510 may be provided by the eyewear device 505.
  • The eyewear device 505 may be a head-mounted display that presents content to a user comprising augmented views of a physical, real-world environment with computer-generated elements (e.g., two dimensional (2D) or three dimensional (3D) images, 2D or 3D video, sound, etc.). In some embodiments, the presented content includes audio that is presented via an audio block 520 that receives audio information from the eyewear device 505, the console 510, or both, and presents audio data based on the audio information. The eyewear device 505 may comprise one or more rigid bodies, which may be rigidly or non-rigidly coupled together. A rigid coupling between rigid bodies causes the coupled rigid bodies to act as a single rigid entity. In contrast, a non-rigid coupling between rigid bodies allows the rigid bodies to move relative to each other. In some embodiments, the eyewear device 505 presents virtual content to the user that is based in part on a real environment surrounding the user. For example, virtual content may be presented to a user of the eyewear device. The user may physically be in a room, and virtual walls and a virtual floor of the room are rendered as part of the virtual content.
  • The eyewear device 505 includes an audio block 520. The audio block 520 is one embodiment of the audio system 300. The audio block 520 is a cartilage conduction audio system which provides audio information to a user by vibrating the cartilage in a user's ear to produce sound. The audio block 520 monitors the produced sound so that it can adjust the frequency response model for each ear of the user and maintain the same type of produced sound across different individuals.
  • The eyewear device 505 may include an electronic display 525, an optics block 530, one or more position sensors 535, and an inertial measurement unit (IMU) 540. The electronic display 525 and the optics block 530 together are one embodiment of the lens 110. The position sensors 535 and the IMU 540 together are one embodiment of the sensor device 115. Some embodiments of the eyewear device 505 have different components than those described in conjunction with FIG. 5. Additionally, the functionality provided by various components described in conjunction with FIG. 5 may be differently distributed among the components of the eyewear device 505 in other embodiments, or be captured in separate assemblies remote from the eyewear device 505.
  • The electronic display 525 displays 2D or 3D images to the user in accordance with data received from the console 510. In various embodiments, the electronic display 525 comprises a single electronic display or multiple electronic displays (e.g., a display for each eye of a user). Examples of the electronic display 525 include: a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), some other display, or some combination thereof.
  • The optics block 530 magnifies image light received from the electronic display 525, corrects optical errors associated with the image light, and presents the corrected image light to a user of the eyewear device 505. In various embodiments, the optics block 530 includes one or more optical elements. Example optical elements included in the optics block 530 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflecting surface, or any other suitable optical element that affects image light. Moreover, the optics block 530 may include combinations of different optical elements. In some embodiments, one or more of the optical elements in the optics block 530 may have one or more coatings, such as partially reflective or anti-reflective coatings.
  • Magnification and focusing of the image light by the optics block 530 allows the electronic display 525 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase the field of view of the content presented by the electronic display 525. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., approximately 110 degrees diagonal), and in some cases all, of the user's field of view. Additionally in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.
  • In some embodiments, the optics block 530 may be designed to correct one or more types of optical error. Examples of optical error include barrel or pincushion distortion, longitudinal chromatic aberrations, or transverse chromatic aberrations. Other types of optical errors may further include spherical aberrations, chromatic aberrations, or errors due to the lens field curvature, astigmatisms, or any other type of optical error. In some embodiments, content provided to the electronic display 525 for display is pre-distorted, and the optics block 530 corrects the distortion when it receives image light from the electronic display 525 generated based on the content.
  • The IMU 540 is an electronic device that generates data indicating a position of the eyewear device 505 based on measurement signals received from one or more of the position sensors 535. A position sensor 535 generates one or more measurement signals in response to motion of the eyewear device 505. Examples of position sensors 535 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 540, or some combination thereof. The position sensors 535 may be located external to the IMU 540, internal to the IMU 540, or some combination thereof.
  • Based on the one or more measurement signals from one or more position sensors 535, the IMU 540 generates data indicating an estimated current position of the eyewear device 505 relative to an initial position of the eyewear device 505. For example, the position sensors 535 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll). In some embodiments, the IMU 540 rapidly samples the measurement signals and calculates the estimated current position of the eyewear device 505 from the sampled data. For example, the IMU 540 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated current position of a reference point on the eyewear device 505. Alternatively, the IMU 540 provides the sampled measurement signals to the console 510, which interprets the data to reduce error. The reference point is a point that may be used to describe the position of the eyewear device 505. The reference point may generally be defined as a point in space or a position related to the eyewear device's 505 orientation and position.
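  • The double integration described above can be sketched in a few lines, as below; a constant sample interval and gravity-compensated, world-frame accelerations are simplifying assumptions, and a real IMU also needs the drift correction discussed next.

```python
# Hedged sketch of IMU dead reckoning: integrate acceleration to velocity,
# then velocity to position, relative to the initial reference point.
import numpy as np

def integrate_position(accel_samples, dt):
    """accel_samples: (N, 3) world-frame accelerations in m/s^2."""
    velocity = np.cumsum(accel_samples, axis=0) * dt  # a -> v
    position = np.cumsum(velocity, axis=0) * dt       # v -> p
    return position  # positions relative to the initial reference point
```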
  • The IMU 540 receives one or more parameters from the console 510. As further discussed below, the one or more parameters are used to maintain tracking of the eyewear device 505. Based on a received parameter, the IMU 540 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain parameters cause the IMU 540 to update an initial position of the reference point so it corresponds to a next position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point helps reduce accumulated error associated with the current position estimated by the IMU 540. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to "drift" away from the actual position of the reference point over time. In some embodiments of the eyewear device 505, the IMU 540 may be a dedicated hardware component. In other embodiments, the IMU 540 may be a software component implemented in one or more processors.
  • The I/O interface 515 is a device that allows a user to send action requests and receive responses from the console 510. An action request is a request to perform a particular action. For example, an action request may be an instruction to start or end capture of image or video data, or an instruction to perform a particular action within an application. The I/O interface 515 may include one or more input devices. Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to the console 510. An action request received by the I/O interface 515 is communicated to the console 510, which performs an action corresponding to the action request. In some embodiments, the I/O interface 515 includes an IMU 540, as further described above, that captures calibration data indicating an estimated position of the I/O interface 515 relative to an initial position of the I/O interface 515. In some embodiments, the I/O interface 515 may provide haptic feedback to the user in accordance with instructions received from the console 510. For example, haptic feedback is provided when an action request is received, or the console 510 communicates instructions to the I/O interface 515 causing the I/O interface 515 to generate haptic feedback when the console 510 performs an action.
  • The console 510 provides content to the eyewear device 505 for processing in accordance with information received from one or more of: the eyewear device 505 and the I/O interface 515. In the example shown in FIG. 5, the console 510 includes an application store 550, a tracking module 555 and an engine 545. Some embodiments of the console 510 have different modules or components than those described in conjunction with FIG. 5. Similarly, the functions further described below may be distributed among components of the console 510 in a different manner than described in conjunction with FIG. 5.
  • The application store 550 stores one or more applications for execution by the console 510. An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of the eyewear device 505 or the I/O interface 515. Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications.
  • The tracking module 555 calibrates the system environment 500 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the eyewear device 505 or of the I/O interface 515. Calibration performed by the tracking module 555 also accounts for information received from the IMU 540 in the eyewear device 505 and/or an IMU 540 included in the I/O interface 515. Additionally, if tracking of the eyewear device 505 is lost, the tracking module 555 may re-calibrate some or all of the system environment 500.
  • The tracking module 555 tracks movements of the eyewear device 505 or of the I/O interface 515 using information from the one or more position sensors 535, the IMU 540, or some combination thereof. For example, the tracking module 555 determines a position of a reference point of the eyewear device 505 in a mapping of a local area based on information from the eyewear device 505. The tracking module 555 may also determine positions of the reference point of the eyewear device 505 or a reference point of the I/O interface 515 using data indicating a position of the eyewear device 505 from the IMU 540 or using data indicating a position of the I/O interface 515 from an IMU 540 included in the I/O interface 515, respectively. Additionally, in some embodiments, the tracking module 555 may use portions of data indicating a position of the eyewear device 505 from the IMU 540 to predict a future location of the eyewear device 505. The tracking module 555 provides the estimated or predicted future position of the eyewear device 505 or the I/O interface 515 to the engine 545.
  • The engine 545 executes applications within the system environment 500 and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof, of the eyewear device 505 from the tracking module 555. Based on the received information, the engine 545 determines content to provide to the eyewear device 505 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the engine 545 generates content for the eyewear device 505 that mirrors the user's movement in a virtual environment or in an environment augmenting the local area with additional content. Additionally, the engine 545 performs an action within an application executing on the console 510 in response to an action request received from the I/O interface 515 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the eyewear device 505 or haptic feedback via the I/O interface 515.
  • Additional Configuration Information
  • The foregoing description of the embodiments of the disclosure has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
  • Some portions of this description describe the embodiments of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
  • Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
  • Embodiments of the disclosure may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • Embodiments of the disclosure may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
  • Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.

Claims (15)

  1. An audio system comprising:
    a transducer assembly configured to be coupled to a first portion of a back of an auricle of an ear of a user, the transducer assembly including at least one transducer that is configured to vibrate the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions;
    an acoustic sensor configured to detect the acoustic pressure wave at an entrance of the ear of the user; and
    a controller configured to:
    dynamically adjust a frequency response model based in part on the detected acoustic pressure wave;
    update the vibration instructions using the adjusted frequency response model; and
    provide the updated vibration instructions to the transducer assembly.
  2. The audio system of claim 1, wherein the at least one transducer is a piezoelectric transducer.
  3. The audio system of claim 1 or 2, wherein the transducer assembly is configured to generate vibrations over a range of frequencies, and the transducer assembly includes a first transducer and a second transducer, the first transducer is configured to provide a first portion of the frequency range, and the second transducer is configured to provide a second portion of the frequency range; optionally, wherein the second transducer is a moving coil transducer.
  4. The audio system of any of claims 1 to 3, wherein the acoustic sensor is a microphone configured to sense the acoustic pressure wave at the entrance of the ear canal; and/or
    wherein the acoustic sensor is a vibration sensor coupled to a third portion of the auricle, and is configured to sense a vibration of the auricle corresponding to the acoustic pressure wave at the entrance of the ear of the user.
  5. The audio system of any of claims 1 to 4, wherein the controller adjusts the frequency response model based in part on the detected acoustic pressure wave by computing an inverse function and applying the inverse function to the detected acoustic pressure wave.
  6. The audio system of any of claims 1 to 5, wherein the audio system is part of an eyewear device.
  7. The audio system of any of claims 1 to 6, wherein the audio system uses a flat spectrum broadband signal to generate the adjusted frequency response model.
  8. An eyewear device comprising:
    a transducer assembly configured to be coupled to a first portion of a back of an auricle of an ear of a user, the transducer assembly including at least one transducer that is configured to vibrate the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions;
    a controller configured to:
    generate the vibration instructions using a frequency response model and audio content; and
    provide the vibration instructions to the transducer assembly.
  9. The eyewear device of claim 8, further comprising:
    an acoustic sensor configured to detect the acoustic pressure wave at an entrance of the ear of the user,
    wherein the controller is further configured to:
    dynamically adjust the frequency response model based in part on the detected acoustic pressure wave;
    update the vibration instructions using the adjusted frequency response model; and
    provide the updated vibration instructions to the transducer assembly.
  10. The eyewear device of claim 8 or 9, wherein the at least one transducer is a piezoelectric transducer.
  11. The eyewear device of any of claims 8 to 10, wherein the transducer assembly is configured to generate vibrations over a range of frequencies, and the transducer assembly includes a first transducer and a second transducer, the first transducer is configured to provide a first portion of the frequency range, and the second transducer is configured to provide a second portion of the frequency range;
    optionally, wherein the first transducer is a piezoelectric transducer and the second transducer is a moving coil transducer.
  12. The eyewear device of any of claims 8 to 11, wherein the acoustic sensor is a microphone configured to sense the acoustic pressure wave at the entrance of the ear canal; and/or
    wherein the acoustic sensor is a vibration sensor coupled to a third portion of the auricle, and is configured to sense a vibration of the auricle corresponding to the acoustic pressure wave at the entrance of the ear of the user.
  13. The eyewear device of any of claims 8 to 12, wherein the controller adjusts the frequency response model based in part on the detected acoustic pressure wave by computing an inverse function and applying the inverse function to the detected acoustic pressure wave; and/or
    wherein a flat spectrum broadband signal is used to generate the adjusted frequency response model.
  14. The eyewear device of any of claims 8 to 13, further comprising:
    an acoustic sensor configured to detect the acoustic pressure wave at an entrance of the ear of the user, wherein the acoustic sensor is temporarily coupled to the eyewear device for calibration of the user and, responsive to completing calibration of the user, the acoustic sensor may be uncoupled from the eyewear device,
    wherein the controller is further configured to:
    adjust the frequency response model based in part on the detected acoustic pressure wave;
    update the vibration instructions using the adjusted frequency response model; and
    provide the updated vibration instructions to the transducer assembly.
  15. A non-transitory computer-readable storage medium storing executable computer program instructions, the computer program instructions comprising instructions for:
    generating vibration instructions using a frequency response model and audio content;
    providing the vibration instructions to a transducer assembly configured to be coupled to a first portion of a back of an auricle of an ear of a user;
    detecting an acoustic pressure wave at an entrance of the ear of the user;
    adjusting the frequency response model based in part on the detected acoustic pressure wave;
    updating the vibration instructions using the adjusted frequency response model; and
    providing the updated vibration instructions to the transducer assembly.
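  Read as a procedure, claim 15 chains the pieces sketched above into a single pass: generate, provide, detect, adjust, update, and provide again. The wrapper below is purely illustrative and stubs out the hardware as two callables, `drive_transducer` and `read_sensor`, neither of which comes from the patent.

```python
import numpy as np

def calibrated_playback(audio, controller, drive_transducer, read_sensor):
    """One illustrative pass of the claim-15 steps, hardware stubbed out."""
    drive = controller.vibration_instructions(audio)     # generate instructions
    drive_transducer(drive)                              # provide to transducer assembly
    sensed = read_sensor()                               # detect pressure wave at ear entrance
    controller.adjust(drive, sensed)                     # adjust frequency response model
    updated = controller.vibration_instructions(audio)   # update instructions
    drive_transducer(updated)                            # provide updated instructions
    return updated

# Stubbed usage with the controller sketched after claim 9:
# ctrl = CartilageAudioController(block_size=48000)
# calibrated_playback(audio, ctrl, drive_transducer=lambda x: None,
#                     read_sensor=lambda: np.zeros(48000))
```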
EP18189104.5A 2017-08-18 2018-08-15 Cartilage conduction audio system for eyewear devices Active EP3445066B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/680,836 US10231046B1 (en) 2017-08-18 2017-08-18 Cartilage conduction audio system for eyewear devices
PCT/US2018/046046 WO2019036279A1 (en) 2017-08-18 2018-08-09 Cartilage conduction audio system for eyewear devices

Publications (2)

Publication Number Publication Date
EP3445066A1 true EP3445066A1 (en) 2019-02-20
EP3445066B1 EP3445066B1 (en) 2021-06-16

Family

ID=63293967

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18189104.5A Active EP3445066B1 (en) 2017-08-18 2018-08-15 Cartilage conduction audio system for eyewear devices

Country Status (1)

Country Link
EP (1) EP3445066B1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130216052A1 (en) * 2012-02-21 2013-08-22 Imation Corp. Headphone Response Optimization
US9288591B1 (en) * 2012-03-14 2016-03-15 Google Inc. Bone-conduction anvil and diaphragm
EP3125573A1 (en) * 2014-12-24 2017-02-01 Temco Japan Co., Ltd. Bone conduction headphone
EP3160163A1 (en) * 2015-10-21 2017-04-26 Oticon Medical A/S Measurement apparatus for a bone conduction hearing device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11317188B2 (en) 2018-05-01 2022-04-26 Facebook Technologies, Llc Hybrid audio system for eyewear devices
US10757501B2 (en) 2018-05-01 2020-08-25 Facebook Technologies, Llc Hybrid audio system for eyewear devices
WO2019212713A1 (en) * 2018-05-01 2019-11-07 Facebook Technologies, Llc Hybrid audio system for eyewear devices
US11743628B2 (en) 2018-05-01 2023-08-29 Meta Platforms Technologies, Llc Hybrid audio system for eyewear devices
US10658995B1 (en) 2019-01-15 2020-05-19 Facebook Technologies, Llc Calibration of bone conduction transducer assembly
WO2021061291A1 (en) * 2019-09-24 2021-04-01 Facebook Technologies, Llc Methods and system for controlling tactile content
CN114270876A (en) * 2019-09-24 2022-04-01 脸谱科技有限责任公司 Method and system for controlling haptic content
US11561757B2 (en) 2019-09-24 2023-01-24 Meta Platforms Technologies, Llc Methods and system for adjusting level of tactile content when presenting audio content
US11681492B2 (en) 2019-09-24 2023-06-20 Meta Platforms Technologies, Llc Methods and system for controlling tactile content
CN113473347B (en) * 2021-06-30 2022-12-02 歌尔科技有限公司 Method and device for testing bone conduction sensor on product
CN113473347A (en) * 2021-06-30 2021-10-01 歌尔科技有限公司 Method and device for testing bone conduction sensor on product
US11678103B2 (en) 2021-09-14 2023-06-13 Meta Platforms Technologies, Llc Audio system with tissue transducer driven by air conduction transducer
EP4243441A4 (en) * 2022-01-14 2023-10-25 Shenzhen Shokz Co., Ltd. Wearable device

Also Published As

Publication number Publication date
EP3445066B1 (en) 2021-06-16

Similar Documents

Publication Publication Date Title
US11743628B2 (en) Hybrid audio system for eyewear devices
US10812890B2 (en) Cartilage conduction audio system for eyewear devices
EP3445066B1 (en) Cartilage conduction audio system for eyewear devices
US11234070B2 (en) Manufacturing a cartilage conduction audio device
CN112313969A (en) Customizing a head-related transfer function based on a monitored response to audio content
WO2020219460A1 (en) Remote inference of sound frequencies for determination of head-related transfer functions for a user of a headset
US10658995B1 (en) Calibration of bone conduction transducer assembly
US10791389B1 (en) Ear-plug assembly for acoustic conduction systems
US11422392B2 (en) Ultraminiature dynamic speaker for a fully in-ear monitor
US10616692B1 (en) Optical microphone for eyewear devices

Legal Events

Code  Title and description
PUAI  Public reference made under Article 153(3) EPC to a published international application that has entered the European phase (ORIGINAL CODE: 0009012)
STAA  Status of the EP application: THE APPLICATION HAS BEEN PUBLISHED
AK    Designated contracting states (kind code of ref document: A1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
AX    Request for extension of the European patent; extension state: BA ME
STAA  Status: REQUEST FOR EXAMINATION WAS MADE
17P   Request for examination filed; effective date: 20190815
RBV   Designated contracting states (corrected): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
STAA  Status: EXAMINATION IS IN PROGRESS
17Q   First examination report despatched; effective date: 20191113
STAA  Status: EXAMINATION IS IN PROGRESS
GRAP  Despatch of communication of intention to grant a patent (ORIGINAL CODE: EPIDOSNIGR1)
STAA  Status: GRANT OF PATENT IS INTENDED
RIC1  Information provided on IPC code assigned before grant: H04R 1/26 (20060101, AFI20210118BHEP); H04R 1/02, H04R 1/10, H04R 7/12, H04R 29/00, H04S 7/00 (each 20060101, ALN20210118BHEP)
INTG  Intention to grant announced; effective date: 20210203
GRAS  Grant fee paid (ORIGINAL CODE: EPIDOSNIGR3)
GRAA  (Expected) grant (ORIGINAL CODE: 0009210)
STAA  Status: THE PATENT HAS BEEN GRANTED
AK    Designated contracting states (kind code of ref document: B1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
REG   National code GB: FG4D
REG   National code CH: EP
REG   National code DE: R096; ref document number 602018018569
REG   National code AT: REF; ref document number 1403329; kind code T; effective date 20210715
REG   National code IE: FG4D
REG   National code LT: MG9D
PG25  Lapsed in a contracting state (announced via postgrant information from national office to EPO) because of failure to submit a translation of the description or to pay the fee within the prescribed time limit ("translation/fee" below): HR, LT, FI (20210616); BG (20210916)
REG   National code AT: MK05; ref document number 1403329; kind code T; effective date 20210616
REG   National code NL: MP; effective date 20210616
PG25  Lapsed, translation/fee: LV, RS, SE (20210616); NO (20210916); GR (20210917)
PG25  Lapsed, translation/fee: ES, EE, SK, SM, CZ, AT, NL, RO (20210616); PT (20211018)
PG25  Lapsed, translation/fee: PL (20210616)
REG   National code DE: R097; ref document number 602018018569
REG   National code CH: PL
PG25  Lapsed, translation/fee: MC (20210616)
PLBE  No opposition filed within time limit (ORIGINAL CODE: 0009261)
REG   National code BE: MM; effective date 20210831
STAA  Status: NO OPPOSITION FILED WITHIN TIME LIMIT
PG25  Lapsed because of non-payment of due fees: LI, CH (20210831); translation/fee: DK (20210616)
26N   No opposition filed; effective date: 20220317
PG25  Lapsed because of non-payment of due fees: LU (20210815); translation/fee: AL (20210616)
PG25  Lapsed, translation/fee: IT (20210616); non-payment of due fees: IE (20210815), FR (20210816), BE (20210831)
PGFP  Annual fee paid to national office (announced via postgrant information from national office to EPO), DE: payment date 20220829; year of fee payment 5
PG25  Lapsed, translation/fee: CY (20210616)
P01   Opt-out of the competence of the Unified Patent Court (UPC) registered; effective date: 20230525
PG25  Lapsed, translation/fee, invalid ab initio: HU (20180815)
PGFP  Annual fee paid to national office, GB: payment date 20230830; year of fee payment 6