WO2018004530A1 - User input through transducer - Google Patents
- Publication number
- WO2018004530A1 (PCT/US2016/039764)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- transducer
- detecting
- user input
- action
- audio
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R29/00—Monitoring arrangements; Testing arrangements
- H04R29/001—Monitoring arrangements; Testing arrangements for loudspeakers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1688—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being integrated loudspeakers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/043—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using propagating acoustic waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2400/00—Loudspeakers
- H04R2400/01—Transducers used as a loudspeaker to generate sound as well as a microphone to detect sound
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/01—Aspects of volume control, not necessarily automatic, in sound systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/11—Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
Definitions
- the instant disclosure relates to electronic devices. More specifically, portions of this disclosure relate to receiving user input through a speaker of the electronic device.
- Mobile devices provide interactive experiences for users by receiving user input commands and responding to those commands. Receiving user input on a mobile device can be challenging, as the small size of the mobile device can restrict options for interacting with the user. A user's approval, and subsequent purchase decisions, rest largely on whether their interaction with the mobile device is pleasant, intuitive, and simple. Conventionally, nearly all interaction with a user on a mobile device occurs through a touch screen display integrated with the mobile device. Although touch screens may be useful for presenting complex information and a large number of options in a programmable manner, the information display and user interface may be difficult to navigate to reach a particular command.
- a user often needs to remove the mobile device from his pocket, power on the display, enter a password to operate the device, swipe down from the top of the screen to access a settings display, and then tap a mute button.
- Some solutions to this problem may include building dedicated hardware buttons into the mobile device, such as a mute switch.
- mobile devices continue to shrink in dimensions, including thickness, and physical switches can be difficult to fit into a small mobile device, to integrate with a casing that provides water resistance for the device, and/or to reconcile with the desired aesthetics of the device.
- a transducer such as a speaker
- a logic device such as a processor
- a speaker monitoring circuit may provide voltage and/or current signals to the processor for monitoring the conditions of the speaker.
- a user may provide input by performing an action that deliberately alters the monitored characteristic of the speaker.
- a speaker's impedance and/or resonance frequency may be modified by placing an object in the radiation field of the speaker.
- a speaker's resonance frequency may be modified by covering the housing of the speaker.
- the changes in the speaker characteristic may be determined by monitoring voltage and/or current signals from the speaker.
- the device may determine the user has input a command to the device.
- Some commands that may be issued to the mobile device through the speaker may include play, pause, fast forward, rewind, mute, unmute, or increase or decrease a volume of the audio.
- a second user input may be provided through the speaker.
- the device may wait for another change of a characteristic of the speaker or a return to a prior status of the previously-monitored characteristic.
- the second user input may be, for example, a follow-up action that reverses the first user input, such as a mute command followed by an unmute command.
- the second user input may be based on the same monitoring as monitoring for the first user input, or the second user input may be detected by monitoring for different conditions than those that indicate the first user input.
- the processor may detect a first user input of covering of the speaker and respond by muting the speaker, then the processor may wait to detect taps on the speaker and respond by unmuting the speaker.
- both the first user input and the second user input may modify the same characteristic of the speaker, but in other embodiments different characteristics may be modified.
- the covering of the speaker as part of the first user input may modify the resonance frequency of the speaker
- the tapping of the speaker as part of the second user input may modify the voltage and/or current at the speaker.
- a method may include detecting a user input through a transducer (such as a partial or complete coverage of a transducer) while the transducer is outputting audio by monitoring a characteristic (such as a voltage, current, or resonance frequency) of the transducer, and then performing a first action that modifies the audio being output through the transducer based, at least in part, on detecting the user input received through the transducer.
- the action performed may include an action that modifies the audio such as by muting the audio, pausing playback, fast forwarding, rewinding, and/or changing a volume of the audio.
- the method may include detecting coverage by monitoring at least one of a voltage across the transducer and a current through the transducer, monitoring the resonance frequency of the transducer for an increase in resonance frequency by at least a threshold amount, and/or monitoring the resonance frequency of the transducer for an increase in resonance frequency to above 1 kilohertz.
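- The coverage-detection rules listed above (a rise in resonance frequency by at least a threshold amount, or a rise to above 1 kilohertz) can be sketched as a simple predicate. This is an illustrative sketch only, not the claimed implementation; the function name and the 300 Hz rise threshold are assumptions, while the 1 kilohertz absolute threshold comes from the description.

```python
def coverage_detected(f_res_hz, f_baseline_hz,
                      rise_threshold_hz=300.0, absolute_threshold_hz=1000.0):
    """Return True when the monitored resonance frequency indicates coverage.

    Coverage is flagged either when the resonance frequency rises by at
    least `rise_threshold_hz` above its uncovered baseline, or when it
    rises above `absolute_threshold_hz` (1 kilohertz in the description).
    """
    rose_by_threshold = (f_res_hz - f_baseline_hz) >= rise_threshold_hz
    rose_above_1khz = f_res_hz > absolute_threshold_hz
    return rose_by_threshold or rose_above_1khz
```

Either condition alone suffices, so the two tests can also be enabled independently depending on how well the uncovered baseline is known.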
- any of these methods or variations thereof may be performed by an apparatus having a controller coupled to a transducer and configured with hard-wired circuitry, firmware, and/or software to perform the steps of the method.
- the method may also include detecting a command and performing a second action.
- the method may include detecting a second user input through the transducer to perform a second action after performing the first action, and then performing a second action that modifies the audio being output through the transducer based, at least in part, on detecting the second user input.
- the detection of the second user input may include monitoring a voltage and/or a current through the transducer, detecting a double-tap on the transducer, outputting inaudible signals through the transducer after performing the first action, and/or detecting an uncovering of the transducer.
- the apparatus having a controller described above may also be configured to perform any of these additional functions.
- an apparatus may include a transducer; at least one analog-to-digital converter (ADC) coupled to the transducer; and a processor coupled to the transducer and coupled to the at least one analog-to-digital converter (ADC).
- the processor may be configured, such as through firmware or software code, to perform certain steps in interacting with a user and controlling the transducer or other aspects of a device containing the processor, such as controlling audio software on a mobile device.
- the processor may be configured to perform steps including receiving data regarding the transducer from the at least one analog-to-digital converter (ADC); detecting user input (such as a partial or complete coverage) of a transducer of a device while the transducer is outputting audio by calculating a characteristic (such as voltage, current, impedance, or resonance frequency) of the transducer based, at least in part, on the received data from the at least one analog-to-digital converter (ADC); and performing a first action that modifies the audio being output through the transducer based, at least in part, on detecting the user input received through the transducer.
- the apparatus may also include an amplifier coupled to the processor and to the transducer, in which the amplifier may be controlled by the processor to mute and unmute audio output or perform other actions that modify the audio output to the transducer.
- the apparatus may also include a switch coupled to the processor and to the transducer, in which the switch may be toggled to mute and unmute audio output or perform other actions that modify the audio output to the transducer.
- the processor may be further configured to detect another command and perform a second action.
- the processor may be configured to perform steps including receiving second data regarding the transducer from the at least one analog-to-digital converter (ADC); detecting a second user input through the transducer to perform a second action based, at least in part, on the second data after the step of performing the action that modifies the audio; and/or performing the second action that modifies the audio being output through the transducer based, at least in part, on detecting the second user input.
- the user indication may include a double-tap on the transducer or in the vicinity of the transducer that can be detected by identifying a particular signature in the second data received from the at least one ADC, such as spikes in a voltage and/or current signal from the transducer.
- the second user input may alternatively or additionally include uncovering the transducer.
- the processor may be configured to output inaudible signals through the transducer after performing the first action that modifies the audio.
- the inaudible signals may be used by the processor to monitor current and/or voltage at the transducer and identify changes of the impedance of the transducer that indicate an uncovering of the transducer.
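- The uncovering check described above — driving an inaudible pilot tone through the muted transducer and watching the impedance it presents — could be sketched as follows. The single-bin correlation, the pilot frequency, and the function names are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def impedance_at_pilot(voltage, current, fs_hz, pilot_hz):
    """Estimate the transducer impedance magnitude at an inaudible pilot tone.

    A single-bin DFT correlation extracts the voltage and current
    components at the pilot frequency; their magnitude ratio is |Z|.
    """
    n = np.arange(len(voltage))
    ref = np.exp(-2j * np.pi * pilot_hz * n / fs_hz)  # complex reference tone
    v_bin = np.dot(voltage, ref)
    i_bin = np.dot(current, ref)
    return abs(v_bin) / max(abs(i_bin), 1e-12)
```

Comparing successive |Z| estimates against the value recorded while the transducer was covered would then indicate an uncovering event.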
- FIGURE 1 is a flow chart illustrating an example method for interacting with a user by detecting user input through a transducer and performing an appropriate action according to one embodiment of the disclosure.
- FIGURE 2 is a block diagram illustrating an example apparatus for interacting with a user by detecting user input through a transducer and performing an appropriate action according to one embodiment of the disclosure.
- FIGURE 3 is a block diagram illustrating an example apparatus using a processor that is configured to detect user input through a transducer and perform an appropriate action according to one embodiment of the disclosure.
- FIGURE 4A is an illustration showing a user covering a transducer of a mobile phone to mute the sound according to one embodiment of the disclosure.
- FIGURE 4B is a graph illustrating an example resonance frequency for an uncovered transducer according to one embodiment of the disclosure.
- FIGURE 4C is a graph illustrating an example resonance frequency for a covered transducer according to one embodiment of the disclosure.
- FIGURE 5 is an illustration showing a user uncovering a transducer of a mobile phone to unmute the sound according to one embodiment of the disclosure.
- FIGURE 6A is an illustration showing a user tapping the transducer of the mobile phone to unmute the sound according to one embodiment.
- FIGURE 6B is a graph illustrating example voltage and current signals for a transducer being tapped on by a user according to one embodiment of the disclosure.
- FIGURE 7 is a flow chart illustrating an example method of performing actions to modify audio output to a transducer by detecting covering and uncovering of the transducer according to one embodiment of the disclosure.
- FIGURE 8 is a flow chart illustrating an example method of detecting user input through a transducer by monitoring voltage and/or current levels at the transducer according to one embodiment of the disclosure.
- the transducer may be used as an input device for receiving user input and performing actions on a device that includes or is coupled to the transducer.
- the user input may be received by monitoring for changes in a characteristic of the transducer, such as voltage, current, impedance, or resonance frequency, and correlating the changes in the characteristic with a particular user activity intended to provide input to the device.
- FIGURE 1 is a flow chart illustrating an example method for interacting with a user by detecting user input through a transducer and performing an appropriate action according to one embodiment of the disclosure.
- a method 100 may begin at block 102 with detecting a change in characteristics of a transducer corresponding to a command from a user to perform an action, such as to modify playback of audio, including music, sounds, or voices, through the transducer.
- the change in characteristic may be detected through a monitoring circuit coupled to the transducer and configured to monitor the transducer, such as by monitoring a voltage across the transducer and/or a current through the transducer as audio is played through the transducer.
- an action may be performed, such as to modify the audio being output through the transducer.
- FIGURE 2 is a block diagram illustrating an example apparatus for interacting with a user by detecting user input through a transducer and performing an appropriate action according to one embodiment of the disclosure.
- a controller 200 may perform audio processing for reproducing sounds at a transducer 220.
- the controller 200 may receive audio signals in analog or digital format at audio input node 202.
- the controller may include an audio processing module 212 for processing the received audio signals to generate an output signal, at audio output node 204, to drive transducer 220.
- the processing module 212 may perform processing including converting from digital to analog and/or amplifying signals to drive transducer 220 at a desired volume.
- the controller 200 may be integrated with a mobile device, such as a mobile phone, tablet, entertainment device, wireless headphones, and/or a wireless speaker.
- the controller 200 may alternatively be integrated as part of a processor or other integrated circuit in an electronic device.
- the controller 200 may receive and process feedback from the transducer 220 for determining when user input is received through the transducer 220.
- the controller 200 may include a feedback processing module 214 that processes input received at a first feedback input node 206 and a second feedback input node 208.
- the feedback from the transducer 220 at input nodes 206 and 208 may be received as a signal proportional to a voltage across the transducer 220 and a signal proportional to a current through the transducer 220.
- the voltage and/or current signals may be used by feedback processing module 214 to monitor changes in one or more characteristics of the transducer 220 and to detect changes in the characteristics that are the result of specific user interaction with the transducer 220.
- the feedback processing module 214 may monitor a resonance frequency of the transducer 220. The resonance frequency may change when the user places a hand or other object that partially or completely covers the transducer 220.
- the user's hand introduces an impedance in the radiation field of the transducer 220 that alters its resonance frequency. Covering the transducer 220 may change other characteristics, such as impedance, voltage, or current, that may also or alternatively be detected by the feedback processing module 214. Further, other user actions may be detectable by monitoring the transducer 220, such as detecting when a user taps the transducer and the number of taps and force of each of the taps.
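- One way the feedback processing module 214 could derive a resonance estimate from the voltage and current feedback is to locate the peak of the impedance magnitude spectrum, since a moving-coil speaker's impedance peaks at mechanical resonance. The FFT-based approach and the current-energy validity mask below are assumptions for illustration, not the patented method.

```python
import numpy as np

def estimate_resonance_hz(voltage, current, fs_hz):
    """Estimate resonance as the frequency where |V(f)/I(f)| peaks.

    Only bins carrying meaningful current energy are considered, so
    near-empty bins cannot produce spurious impedance peaks. Assumes
    the drive signal excites a band around the resonance.
    """
    v_spec = np.fft.rfft(voltage)
    i_spec = np.fft.rfft(current)
    valid = np.abs(i_spec) > 0.01 * np.abs(i_spec).max()
    z_mag = np.where(valid,
                     np.abs(v_spec) / np.maximum(np.abs(i_spec), 1e-12),
                     0.0)
    freqs = np.fft.rfftfreq(len(voltage), d=1.0 / fs_hz)
    return freqs[int(np.argmax(z_mag))]
```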
- FIGURE 3 is a block diagram illustrating an example apparatus using a processor that is configured to detect user input through a transducer and perform an appropriate action according to one embodiment of the disclosure.
- the controller 200 may include circuitry that performs the functions of the audio processing module 212 and the feedback processing module 214.
- the processor 302, the coder/decoder (CODEC) 304, and the amplifier 306 may perform functions related to audio processing.
- the processor 302, the analog-to-digital converter (ADC) 308A, and the analog-to-digital converter (ADC) 308B may perform functions related to feedback processing.
- the processor 302 may be, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or other logic circuitry.
- the processor 302 may receive an audio signal from an application processor 310 that may be co-located in the electronic device with the controller 200 or integrated with the controller 200.
- the received audio signal may be processed by the processor 302 to prepare the signal for output to the transducer 220, such as application of equalizers, application of adaptive noise cancellation (ANC) signals, application of speaker protection algorithms, or other processing.
- a processed audio signal is then passed to the CODEC 304 and the amplifier 306 for output to the transducer 220.
- a switch 306A may be located in circuitry before the audio signal reaches the amplifier 306.
- the switch 306A may be toggled to an open state to mute output of audio to transducer 220.
- the amplifier 306 may be toggled on and off to mute output of audio to transducer 220 without the switch 306A.
- the transducer 220 reproduces the sounds within the processed audio signals by generating pressure waves that are interpreted by users as audible sounds.
- the characteristics of the transducer 220 may change over time, and those changes may be monitored through one or more analog-to-digital converters (ADCs) 308A and 308B.
- ADCs 308A-B may be coupled to the transducer 220 to receive analog signals related to the transducer 220, convert those analog signals to digital values, and provide those digital values to the processor 302.
- the ADC 308A is configured to measure a voltage across the transducer 220 and to provide the voltage as a digital value to the processor 302
- the ADC 308B is configured to measure a current through the transducer 220 using a resistor and to provide the current as a digital value to the processor 302.
- different characteristics of the transducer 220 may be monitored and digital values generated therefrom and supplied to the processor 302.
- FIGURE 4A is an illustration showing a user covering a transducer of a mobile phone to mute the sound according to one embodiment of the disclosure.
- the resonance frequency of the transducer 220 may change.
- This change in resonance frequency may be correlated with the user's activity shown in FIGURE 4A and be sufficiently distinct from normal changes of the resonance frequency during operation such that the processor 302 may detect the signature of the changing resonance frequency using the digital values from ADC 308A and/or 308B.
- the signature may be detected by the processor 302, for example, when the resonance frequency increases more than a threshold amount and/or when the resonance frequency increases to a value over 1 kilohertz.
- One such change in resonance frequency is shown in the graphs of FIGURES 4B-C.
- FIGURE 4B is a graph illustrating an example resonance frequency for an uncovered transducer according to one embodiment of the disclosure.
- a resonance frequency 402 of the transducer when uncovered may be below 1000 hertz, such as approximately 850 hertz.
- FIGURE 4C is a graph illustrating an example resonance frequency for a covered transducer according to one embodiment of the disclosure.
- a resonance frequency 404 of the transducer when covered may be above 1000 hertz, such as approximately 1650 hertz.
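- A classifier for the covered/uncovered state suggested by the example frequencies of FIGURES 4B-4C (roughly 850 hertz uncovered, roughly 1650 hertz covered) could be sketched with hysteresis, so that noise near a single boundary does not cause the state to chatter. The two threshold values and the class design are hypothetical.

```python
class CoverageClassifier:
    """Classify covered vs uncovered from resonance frequency, with hysteresis.

    The widely spaced cover/uncover thresholds sit between the example
    uncovered (~850 Hz) and covered (~1650 Hz) resonance frequencies,
    so small fluctuations cannot toggle the state repeatedly.
    """
    def __init__(self, cover_above_hz=1300.0, uncover_below_hz=1100.0):
        self.cover_above_hz = cover_above_hz
        self.uncover_below_hz = uncover_below_hz
        self.covered = False

    def update(self, resonance_hz):
        if not self.covered and resonance_hz > self.cover_above_hz:
            self.covered = True
        elif self.covered and resonance_hz < self.uncover_below_hz:
            self.covered = False
        return self.covered
```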
- FIGURE 5 is an illustration showing a user uncovering the transducer of the mobile phone to unmute the sound according to one embodiment of the disclosure. The uncovering may be detected by a change of the resonance frequency back from the frequency 404 of FIGURE 4C to the frequency 402 of FIGURE 4B.
- the action performed when the transducer is covered is to mute the audio output.
- an inaudible signal may be applied to the transducer 220 during the time the audio is muted.
- an ultrasonic signal may be applied to the transducer to facilitate measurement of voltage and current by the ADCs 308A-B.
- FIGURE 6A is an illustration showing a user tapping the transducer of the mobile phone to unmute the sound according to one embodiment.
- a tap on the transducer 220 may produce a spike in the voltage or current of the transducer 220, which may be detected by the processor 302 from the ADCs 308A-B. Examples of the signatures for taps on the transducer are shown in FIGURE 6B.
- FIGURE 6B is a graph illustrating example voltage and current signals for a transducer being tapped by a user according to one embodiment of the disclosure.
- a line 602 illustrates a sample voltage measurement
- a line 604 illustrates a sample current measurement. Peaks for each of the lines 602 and 604 are shown at times 612, 614, 616, and 618 and correspond to a user tapping on the transducer as shown in FIGURE 6A.
- a reduction in false positives for detection of tapping may be obtained by detecting a signature of two or more taps on the transducer.
- the signature identified by the processor 302 may be the pattern 622 of peaks 612 and 614 or the pattern 624 of peaks 616 and 618.
- the signature may be detected and a corresponding command, such as unmute, decoded by the processor 302.
- the processor 302 may then perform the action, such as turning amplifier 306 back on. In some embodiments, the processor 302 may perform the action by relaying the decoded command to the application processor 310, where the application processor 310 executes the decoded command. In some embodiments, the number of taps and the strength of each tap may be detected by the processor 302 and correspond to user input for different commands, similar to Morse code. For example, a hard tap followed by a soft tap may indicate a fast-forward command, and a soft tap followed by a hard tap may indicate a rewind command.
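- The tap-signature detection of FIGURES 6A-6B, and the Morse-like hard/soft decoding described above, can be sketched as below. The spike threshold, merge window, hardness ratio, and command mapping are illustrative assumptions layered on the description's example of hard/soft tap pairs.

```python
def detect_taps(signal, fs_hz, spike_threshold, min_gap_s=0.05):
    """Detect tap-induced spikes in a voltage or current feedback signal.

    Returns (sample_index, peak_height) pairs for excursions exceeding
    `spike_threshold`, merging samples closer than `min_gap_s` into one tap.
    """
    min_gap = int(min_gap_s * fs_hz)
    taps = []
    i = 0
    while i < len(signal):
        if abs(signal[i]) > spike_threshold:
            # Take the largest excursion within the merge window as the tap.
            window = signal[i:i + min_gap]
            taps.append((i, max(abs(x) for x in window)))
            i += min_gap
        else:
            i += 1
    return taps

def decode_two_tap_command(taps):
    """Map a hard/soft two-tap pattern to a command, Morse-code style.

    Hypothetical mapping following the description: hard-then-soft means
    fast forward, soft-then-hard means rewind, two similar taps unmute.
    """
    if len(taps) != 2:
        return None
    (_, p1), (_, p2) = taps
    if p1 > 1.5 * p2:
        return "fast_forward"
    if p2 > 1.5 * p1:
        return "rewind"
    return "unmute"
```

Requiring two taps, as the description notes, reduces false positives relative to acting on any single spike.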
- the methods and apparatuses described above for detecting user input through the transducer and taking action based on the detected user input may be adapted to detect many ways for a user to interact with the device.
- Some detailed examples of an electronic device interacting with a user through input to the transducer are described below. However, detailed examples are only some applications of the general methods and apparatuses described above.
- a user may use a first input, such as covering the transducer, to issue a first command, such as muting audio playback, and the user may use a second input, such as uncovering the transducer, to issue a second command, such as unmuting audio playback.
- FIGURE 7 is a flow chart illustrating an example method of performing actions to modify audio output to a transducer by detecting covering and uncovering of the transducer according to one embodiment of the disclosure.
- a method 700 may begin at block 702 with playing back audio through a transducer of a device, such as by playing music or a telephone call to a micro speaker of a mobile device.
- Detection at block 704 may include, for example, detecting a change in resonance frequency of the transducer.
- an action may be performed that modifies the audio output to the transducer at block 706, such as by muting the audio playback.
- the method 700 may proceed to wait to detect a second command and perform an appropriate action in response to the received second command.
- the second command may be paired with the first command.
- the first command is a mute command
- the second command may be an unmute command.
- the first command is a pause command
- the second command may be a play command. That is, the same user input activity (e.g., changing of the resonance frequency) may designate different commands based on a past command.
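- The pairing described above — the same gesture designating a different command based on the past command — amounts to a small state machine. The table and class below are a hypothetical sketch of that idea, not the claimed logic.

```python
# Hypothetical pairing table: the same physical gesture (e.g. a
# resonance-frequency change) issues a different command depending
# on the previously issued command.
PAIRED_COMMANDS = {
    None: "mute",       # first covering gesture mutes
    "mute": "unmute",   # the next gesture after a mute unmutes
    "unmute": "mute",
    "pause": "play",
    "play": "pause",
}

class GestureCommandDecoder:
    def __init__(self):
        self.last_command = None

    def on_gesture(self):
        """Decode the next command based on the previously issued one."""
        command = PAIRED_COMMANDS[self.last_command]
        self.last_command = command
        return command
```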
- the device may be expecting that the next command will be unmute.
- the uncovering of the transducer may be detected.
- the method 700 of FIGURE 7 may be performed by the processor 302 of FIGURE 3, the application processor 310 of FIGURE 3, or other logic circuitry coupled to the transducer 220 of FIGURE 3.
- FIGURE 8 is a flow chart illustrating an example method of detecting user input through a transducer by monitoring voltage and/or current levels at the transducer according to one embodiment of the disclosure.
- a method 800 begins at block 802 with receiving an input voltage and/or an input current from a transducer.
- a resonance frequency is determined for the transducer.
- the action may be, for example, muting the audio playback through the transducer when the resonance frequency is changed by a user covering the transducer.
- although a resonance frequency is described at blocks 804 and 806, any characteristic of the transducer may be determined and examined to determine when user input is received, and that characteristic may be determined from the voltage and/or current received at block 802.
- the method 800 may proceed to wait to detect a second command and perform an appropriate action in response to the received second command.
- the device is configured to receive additional user input through the transducer. For example, an ultrasonic signal may be output to the transducer to facilitate further voltage and/or current measurements from the transducer.
- at block 812, an input voltage and/or input current are received from the transducer. It may be determined at block 814 whether a user input is detected in the received voltage and/or current of block 812. If no user input is detected, the method 800 may return to block 812 to monitor the transducer.
- the method 800 continues to block 816 to perform a second action in response to the received second command detected at block 814.
- the second command may be paired with the first command.
- the first command is a mute command
- the second command may be an unmute command.
- the first command is a pause command
- the second command may be a play command.
- the uncovering of the transducer may be detected.
- another action may be performed that modifies the audio output at block 816, such as by unmuting the audio.
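- The overall flow of method 800 — monitor until a covering is detected, perform the first action, keep the transducer measurable with an inaudible signal, then monitor for the paired second command — can be sketched as a control loop. All five callables are hypothetical hooks onto a platform's audio and ADC drivers, not a real API.

```python
def run_input_monitor(read_feedback, estimate_resonance, mute, unmute,
                      emit_ultrasonic_pilot, covered_threshold_hz=1000.0):
    """Sketch of method 800: mute on covering, then wait for uncovering."""
    # Blocks 802-806: monitor until covering raises resonance past threshold.
    while True:
        v, i = read_feedback()
        if estimate_resonance(v, i) > covered_threshold_hz:
            break
    mute()                   # first action modifies the audio output
    emit_ultrasonic_pilot()  # keep the muted transducer measurable
    # Blocks 812-816: monitor until uncovering restores the resonance.
    while True:
        v, i = read_feedback()
        if estimate_resonance(v, i) <= covered_threshold_hz:
            break
    unmute()                 # second action in response to the second command
```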
- the method 800 of FIGURE 8 may be performed by the processor 302 of FIGURE 3, the application processor 310 of FIGURE 3, or other logic circuitry coupled to the transducer 220 of FIGURE 3.
- Embodiments of the invention described above allow a user to interact with an electronic device through devices that have conventionally been limited to providing outputs. These embodiments and other embodiments of the invention may provide for thinner and lighter devices by reducing or eliminating a need for additional physical switches or other components. Further, these and other embodiments may reduce power consumption on the device by reducing the amount of time a user is interacting with a power-consuming touchscreen display.
- Monitoring a speaker for detecting changes in characteristics that indicate user input is described above, however other components could also be monitored.
- the circuitry for monitoring the speaker may be shared with other functionality on the device to reduce any additional cost or size in the electronic device. For example, the circuitry for monitoring the speaker may be used for speaker protection in addition to detecting user input through the speaker.
- FIGURE 1, FIGURE 7, and FIGURE 8 are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of aspects of the disclosed methods. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated methods. Additionally, the format and symbols employed are provided to explain the logical steps of the methods and are understood not to limit their scope.
- although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
- The functions described above may be stored as one or more instructions or code on a computer-readable medium. Examples include non-transitory computer-readable media encoded with a data structure and computer-readable media encoded with a computer program.
- Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer.
- Such computer-readable media can comprise random access memory (RAM), read-only memory (ROM), electrically-erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- Disk and disc, as used herein, include compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), floppy disks, and Blu-ray discs. Generally, disks reproduce data magnetically, while discs reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media.
- In addition to storage on computer-readable media, instructions and/or data may be provided as signals on transmission media included in a communication apparatus.
- For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.
- Processors implementing the functions described above may include digital signal processors (DSPs), graphics processing units (GPUs), and central processing units (CPUs).
- Where highs and lows (ones and zeros) are given as example bit values throughout the description, the function of ones and zeros may be reversed without change in operation of the processor described in embodiments above.
- Processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
- Circuit For Audible Band Transducer (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1721782.9A GB2556497B (en) | 2016-06-28 | 2016-06-28 | User input through transducer |
PCT/US2016/039764 WO2018004530A1 (en) | 2016-06-28 | 2016-06-28 | User input through transducer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2016/039764 WO2018004530A1 (en) | 2016-06-28 | 2016-06-28 | User input through transducer |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018004530A1 true WO2018004530A1 (en) | 2018-01-04 |
Family
ID=56550327
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2016/039764 WO2018004530A1 (en) | 2016-06-28 | 2016-06-28 | User input through transducer |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2556497B (en) |
WO (1) | WO2018004530A1 (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6545612B1 (en) * | 1999-06-21 | 2003-04-08 | Telefonaktiebolaget Lm Ericsson (Publ) | Apparatus and method of detecting proximity inductively |
US20100098261A1 (en) * | 2008-10-17 | 2010-04-22 | Sony Ericsson Mobile Communications Ab | Arrangement and method for determining operational mode of a communication device |
EP2271134A1 (en) * | 2009-07-02 | 2011-01-05 | Nxp B.V. | Proximity sensor comprising an acoustic transducer for receiving sound signals in the human audible range and for emitting and receiving ultrasonic signals. |
US20120020488A1 (en) * | 2010-06-16 | 2012-01-26 | Nxp B.V. | Control of a loudspeaker output |
US20130051567A1 (en) * | 2011-08-31 | 2013-02-28 | Kirk P Gipson | Tap detection of sound output device |
US20130133431A1 (en) * | 2011-07-11 | 2013-05-30 | Ntt Docomo, Inc. | Input device |
US20140270208A1 (en) * | 2013-03-15 | 2014-09-18 | Cirrus Logic, Inc. | Monitoring of speaker impedance to detect pressure applied between mobile device and ear |
EP2945398A1 (en) * | 2014-05-15 | 2015-11-18 | Nxp B.V. | Motion sensor |
2016
- 2016-06-28 WO PCT/US2016/039764 patent/WO2018004530A1/en active Application Filing
- 2016-06-28 GB GB1721782.9A patent/GB2556497B/en active Active
Also Published As
Publication number | Publication date |
---|---|
GB2556497B (en) | 2021-10-20 |
GB2556497A (en) | 2018-05-30 |
GB201721782D0 (en) | 2018-02-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 201721782 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20160628 |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16742473 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16742473 Country of ref document: EP Kind code of ref document: A1 |