CN112567764A - Orientation-based device interface - Google Patents

Orientation-based device interface

Info

Publication number
CN112567764A
Authority
CN
China
Prior art keywords
orientation
speakers
audio
audio device
implementations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980053721.XA
Other languages
Chinese (zh)
Inventor
贾斯汀·沃里奇
罗兰多·埃斯帕扎·帕拉西奥斯
尼古拉斯·马塔雷斯
迈克尔·B·蒙特韦利什基
拉斯穆斯·芒克·拉森
本杰明·路易斯·沙亚
车-宇·郭
迈克尔·斯梅德加德
理查德·F·莱恩
加布里尔·费希尔·斯洛特尼克
克里斯滕·曼格姆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/058,820 (US10734963B2)
Priority claimed from US16/138,707 (US10897680B2)
Application filed by Google LLC
Priority to CN202311490245.2A (CN117676427A)
Publication of CN112567764A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R29/00 Monitoring arrangements; Testing arrangements
    • H04R29/001 Monitoring arrangements; Testing arrangements for loudspeakers
    • H04R29/002 Loudspeaker arrays
    • H04R29/004 Monitoring arrangements; Testing arrangements for microphones

Abstract

Various embodiments described herein include methods, devices, and systems for automatic audio equalization. In one aspect, a method is performed at an audio device that has one or more processors, memory, and a plurality of device interface elements, the audio device including one or more speakers and a plurality of microphones. The method includes: (1) detecting a change in orientation of the audio device from a first orientation to a second orientation; and (2) in response to detecting the change in orientation, configuring operation of two or more of the plurality of device interface elements.

Description

Orientation-based device interface
Technical Field
The present disclosure relates generally to audio devices, including but not limited to audio devices having orientation-based device interfaces.
Background
Traditionally, electronic devices have been designed and manufactured for a single orientation, such as a single mounting surface. In recent years, some devices have been designed to operate in multiple orientations, such as vertically and horizontally. However, a device interface that behaves identically in every orientation can be cumbersome and unintuitive for the user to manipulate. Accordingly, it is desirable for electronic devices to have an orientation-based device interface.
Disclosure of Invention
Technical problem
There is a need for methods, devices, and systems for implementing an orientation-based device interface. Various embodiments of the systems, methods, and devices within the scope of the appended claims each have multiple aspects, no single one of which is solely responsible for the attributes described herein. Without limiting the scope of the appended claims, after considering this disclosure, and particularly after considering the section entitled "Detailed Description," it will be appreciated how the aspects of various embodiments are used to automatically adapt the operation of a device interface to changes in orientation.
To enhance user experience and convenience, the audio devices described herein may operate in multiple orientations. For example, an audio device having two speakers is configured to operate in a stereo mode when oriented horizontally and in a mono mode when oriented vertically. The audio device optionally includes a removable mount (e.g., a silicone foot) adapted to attach to two or more sides of the audio device (e.g., by magnets). The audio device optionally includes a set of light emitting diodes (LEDs), where different subsets of the LEDs are used based on orientation (e.g., such that the lit LEDs maintain a horizontal appearance in both orientations). The audio device optionally includes a slider bar configured to interpret the directionality of a user's swipe based on the device orientation (e.g., to control volume). For example, swiping from the first end of the bar to the second end corresponds to a volume increase in the horizontal orientation, but corresponds to a volume decrease in the vertical orientation. The audio device also optionally adjusts the operation of its microphones based on orientation. For example, the microphones furthest from the base are used for hotword detection, e.g., because those microphones are better positioned to obtain a clear audio signal.
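By way of non-limiting illustration, the orientation-driven reconfiguration summarized above might be organized as in the following sketch. All identifiers (the Orientation enum, the AudioDevice class, the microphone labels, and so on) are hypothetical and are not taken from the patent:

```python
from enum import Enum

class Orientation(Enum):
    HORIZONTAL = "horizontal"
    VERTICAL = "vertical"

class AudioDevice:
    """Hypothetical device that reconfigures its interface on rotation."""

    def __init__(self) -> None:
        self.orientation = Orientation.HORIZONTAL
        self.speaker_mode = "stereo"            # stereo when horizontal
        self.active_led_row = "horizontal_row"  # LED row level with the ground
        self.slider_sign = +1                   # swipe-to-volume direction
        self.hotword_mics = ["top_1", "top_2", "top_3"]

    def on_orientation_change(self, new: Orientation) -> None:
        """Configure two or more interface elements after a rotation."""
        if new == self.orientation:
            return
        self.orientation = new
        if new == Orientation.VERTICAL:
            self.speaker_mode = "mono"              # e.g., upper speakers only
            self.active_led_row = "vertical_row"    # keep the lit LEDs level
            self.slider_sign = -1                   # reverse swipe directionality
            self.hotword_mics = ["side_4", "side_5", "side_6"]  # now on top
        else:
            self.speaker_mode = "stereo"
            self.active_led_row = "horizontal_row"
            self.slider_sign = +1
            self.hotword_mics = ["top_1", "top_2", "top_3"]

device = AudioDevice()
device.on_orientation_change(Orientation.VERTICAL)
print(device.speaker_mode, device.active_led_row)  # -> mono vertical_row
```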
Technical scheme
(A1) In one aspect, some embodiments include a method for adapting device operation to orientation, performed at an audio device having one or more processors, memory, and a plurality of device interface elements, the audio device including one or more speakers and a plurality of microphones. The method includes: (1) detecting a change in orientation of the audio device from a first orientation to a second orientation; and (2) in response to detecting the change in orientation, configuring operation of two or more of the plurality of device interface elements. In some implementations, detecting the change in orientation includes detecting the change using an accelerometer of the audio device. As used herein, an audio device is an electronic device having one or more speakers and/or one or more microphones.
(A2) In some embodiments of A1: (1) the method further comprises, prior to detecting the change in orientation, operating the audio device in the first orientation; and (2) configuring the two or more device interface elements comprises reconfiguring their operation based on the change in orientation.
(A3) In some embodiments of A1 or A2: the first orientation corresponds to the audio device resting on a first side of the audio device; and the second orientation corresponds to the audio device resting on a second side of the audio device that is different from the first side (e.g., the change in orientation corresponds to rotating the device from a vertical orientation to a horizontal orientation).
(A4) In some embodiments of A1-A3, configuring operation of two or more of the plurality of device interface elements comprises assigning a first microphone of the plurality of microphones to a task based on the change in orientation. In some implementations, a first subset of the microphones is used in the first orientation and a second subset is used in the second orientation (e.g., the microphones on the "top" of the device in each orientation are used for hotword detection).
(A5) In some embodiments of A4, the method further comprises: in response to detecting the change in orientation, de-assigning a second microphone of the plurality of microphones from the task.
(A6) In some embodiments of A4 or A5, the task comprises one or more of: hotword detection, speech recognition, and audio equalization.
(A7) In some embodiments of A1-A6: the plurality of device interface elements comprises a volume control element; and configuring operation of two or more of the plurality of device interface elements comprises configuring operation of the volume control element.
(A8) In some embodiments of A7: when in the first orientation, movement along the volume control element toward a first end of the volume control element corresponds to increasing the volume of the one or more speakers; and configuring the volume control element comprises reconfiguring the volume control element such that movement along the volume control element toward the first end corresponds to decreasing the volume of the one or more speakers. In some implementations, the volume control includes a capacitive touch element (e.g., a capacitive touch bar).
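A minimal sketch of this directionality reversal follows, assuming a normalized volume scale and a fixed step size (both illustrative, not specified by the patent):

```python
def volume_delta(toward_first_end: bool, orientation: str,
                 step: float = 0.05) -> float:
    """Map a swipe on the capacitive volume bar to a signed volume change.

    In the first (horizontal) orientation a swipe toward the first end
    raises the volume; after reconfiguring for the second (vertical)
    orientation the same physical swipe lowers it.
    """
    sign = 1.0 if toward_first_end else -1.0
    if orientation == "vertical":  # second orientation: direction is reversed
        sign = -sign
    return sign * step

print(volume_delta(True, "horizontal"))  # +0.05 (volume up)
print(volume_delta(True, "vertical"))    # -0.05 (volume down)
```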
(A9) In some embodiments of A1-A8: the one or more speakers comprise a plurality of speakers; and configuring two or more of the plurality of device interface elements comprises configuring operation of the plurality of speakers (e.g., adjusting treble and/or bass settings of the speakers).
(A10) In some embodiments of A9: when in the first orientation, the plurality of speakers is configured to operate in a stereo mode; and configuring the plurality of speakers comprises reconfiguring the plurality of speakers to operate in a mono mode. In some implementations, the audio output is transitioned over time once the change in orientation is determined. In some implementations, the audio output is faded briefly to silence before the subsequent output is reconfigured. In some implementations, different audio filters (e.g., biquad or ladder filters) are used to reconfigure the subsequent output.
(A11) In some embodiments of A10, reconfiguring the plurality of speakers to operate in a mono mode includes utilizing only a subset of the plurality of speakers for subsequent audio output. For example, in the vertical orientation, only the upper speakers (e.g., the upper woofer and the upper tweeter) are used. In some implementations, the subsequent audio output includes TTS output or music. In some implementations, the gain of the subset of speakers is increased (e.g., by +6 dB) to compensate for using only the subset.
(A12) In some embodiments of A9-A11, reconfiguring the plurality of speakers comprises utilizing only a subset of the plurality of speakers for subsequent audio output having an audio frequency above a threshold frequency. In some embodiments, the threshold frequency is 160 Hz. In some embodiments, all woofers are used for bass frequencies, while fewer than all woofers are used for higher frequencies. In some implementations, the subset is selected based on the user's location, distance from the resting surface, and/or the capabilities of the individual speakers.
(A13) In some embodiments of A9-A12, reconfiguring the plurality of speakers comprises: (1) utilizing only a subset of the plurality of speakers for subsequent audio output when the volume setting of the audio device is below a volume threshold; and (2) utilizing the subset of the plurality of speakers and one or more additional speakers for subsequent audio output when the volume setting of the audio device is above the volume threshold. In some implementations, an input/output matrix is used to transition the audio output over time during the changeover.
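The frequency and volume thresholds of (A12) and (A13) could be combined into a single routing rule, as in this sketch (the speaker labels and the 0-to-1 volume scale are illustrative assumptions; the 160 Hz figure is from (A12)):

```python
def active_speakers(band_hz: float, volume: float,
                    freq_threshold_hz: float = 160.0,
                    volume_threshold: float = 0.7) -> list:
    """Choose which speakers reproduce a given frequency band."""
    if band_hz < freq_threshold_hz:
        # Bass: all woofers participate (A12).
        return ["upper_woofer", "lower_woofer"]
    # Higher bands: a subset only, joined by more speakers at high volume (A13).
    speakers = ["upper_woofer", "upper_tweeter"]
    if volume > volume_threshold:
        speakers.append("lower_tweeter")
    return speakers

print(active_speakers(100.0, 0.5))   # ['upper_woofer', 'lower_woofer']
print(active_speakers(1000.0, 0.9))  # ['upper_woofer', 'upper_tweeter', 'lower_tweeter']
```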
(A14) In some embodiments of A9-A13, the method further comprises audio-pairing the audio device with an additional audio device; and configuring the plurality of speakers comprises utilizing a first subset of the plurality of speakers when in the first orientation and a second subset when in the second orientation (e.g., when in the horizontal orientation, the subset of speakers furthest from the additional audio device is utilized (to enhance the surround sound output of the devices), and when in the vertical orientation a different subset is utilized (e.g., the topmost speakers)). In some implementations, the audio device is audio-paired with multiple additional audio devices, and each device operates in a mono mode such that the audio devices as a whole achieve a surround sound effect. In some implementations, all speakers are used in one orientation (e.g., all speakers are used in the vertical orientation). In some implementations, the timing of audio output at each device is adjusted based on the relative positions of the devices (e.g., to enhance synchronization of the output).
(A15) In some embodiments of A1-A14: the plurality of device interface elements comprises a plurality of lighting elements; and configuring operation of two or more of the plurality of device interface elements comprises adjusting operation of the plurality of lighting elements. In some embodiments, the plurality of lighting elements comprises a plurality of light emitting diodes (LEDs). In some embodiments, adjusting operation of the lighting elements comprises disabling a first subset of the lighting elements and enabling a second subset. In some embodiments, the plurality of lighting elements includes a first row of lighting elements along a first axis and a second row of lighting elements along a second axis different from the first axis. In some embodiments, adjusting the lighting elements includes conveying device state information with the first row of lighting elements when in the first orientation and with the second row when in the second orientation. In some embodiments, adjusting operation of the lighting elements comprises utilizing only the subset of lighting elements that is substantially level with the ground.
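A sketch of the LED-row selection in (A15) follows; the row names and the set_led driver call are hypothetical stand-ins for a real LED driver:

```python
def set_led(index: int, on: bool) -> None:
    # Stand-in for a real LED-driver call.
    print(f"LED {index}: {'on' if on else 'off'}")

class LedDisplay:
    """Drives whichever LED row is level with the ground."""

    def __init__(self, horizontal_row, vertical_row):
        self.rows = {"horizontal": horizontal_row, "vertical": vertical_row}

    def show_state(self, pattern, orientation: str) -> None:
        other = "vertical" if orientation == "horizontal" else "horizontal"
        for led in self.rows[other]:          # disable the off-axis row
            set_led(led, on=False)
        for led, on in zip(self.rows[orientation], pattern):
            set_led(led, on=on)               # convey state on the level row

display = LedDisplay(horizontal_row=[0, 1, 2, 3], vertical_row=[4, 5, 6, 7])
display.show_state([True, True, False, False], "vertical")
```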
(A16) In some embodiments of A1-A15: the audio device further comprises a detachable base; and the detachable base is configured to couple to two or more sides of the audio device to facilitate positioning the audio device in multiple orientations. In some implementations, the detachable base is configured to magnetically couple to magnets within a housing of the audio device. In some embodiments, the detachable base is made of silicone. In some embodiments, the base is configured to couple only at locations corresponding to valid orientations of the device.
(A17) In some embodiments of A1-A16: the audio device further comprises a power port; and the audio device is configured such that the power port is proximate to a resting surface of the audio device in both the first orientation and the second orientation (e.g., the power port is at a corner portion of the audio device between the two sides used for resting the audio device in the two orientations).
(A18) In some embodiments of A1-A17: the audio device further comprises one or more antennas; and the audio device is configured such that the antennas are at least a threshold distance away from the resting surface of the audio device in both the first orientation and the second orientation (e.g., the antennas are disposed opposite the two sides used for resting the audio device in the two orientations).
(A19) In some embodiments of A1-A18, the method further comprises: detecting a change in orientation of the audio device from the first orientation to a third orientation; and, in response to detecting the change to the third orientation, presenting an error status to the user, e.g., outputting a message "device upside down" via the one or more speakers, displaying an error status via one or more LEDs of the device, and/or sending an error alert to a client device of the user.
In another aspect, some embodiments include an audio device comprising one or more processors and memory coupled to the one or more processors, the memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing any of the methods described herein (e.g., A1-A19 above).
In yet another aspect, some embodiments include a non-transitory computer-readable storage medium storing one or more programs for execution by one or more processors of an audio device, the one or more programs including instructions for performing any of the methods described herein (e.g., A1-A19 above).
Advantageous effects
Accordingly, devices, storage media, and computing systems are provided with methods for automatically adjusting the operation of a device interface according to changes in orientation, thereby increasing the effectiveness, efficiency, and user satisfaction of such systems. Such methods may supplement or replace conventional methods for audio equalization.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following description of the embodiments taken in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout.
Fig. 1A and 1B illustrate representative electronic devices according to some embodiments.
FIG. 2 is a block diagram illustrating a representative operating environment including a plurality of electronic devices and a server system, according to some embodiments.
FIG. 3 is a block diagram illustrating a representative electronic device, according to some embodiments.
FIG. 4 is a block diagram illustrating a representative server system according to some embodiments.
Fig. 5A-5B are perspective views illustrating representative electronic devices in different orientations, according to some embodiments.
Fig. 6A-6B are internal views illustrating representative electronic devices in different orientations, according to some embodiments.
Fig. 7A-7B illustrate a representative electronic device having a sliding control element (e.g., a volume control), according to some embodiments.
Fig. 8A-8E are exploded views illustrating representative electronic devices according to some embodiments.
Fig. 9A-9D are perspective views illustrating representative electronic devices in different orientations, according to some embodiments.
Fig. 10A-10B are perspective views illustrating representative electronic devices in different orientations, according to some embodiments.
FIG. 11 is a flow diagram illustrating a representative method for orientation-based operation of an audio device, in accordance with some embodiments.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments described. It will be apparent, however, to one skilled in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
This disclosure describes electronic devices that change operation based on orientation, such as audio devices having multiple speakers. For example, the audio device switches between a stereo output mode and a mono output mode based on the orientation. A representative electronic device (e.g., device 100) includes a plurality of device interface elements, such as a volume control (e.g., volume control 702), an LED (e.g., LED assembly 602), and a microphone (e.g., microphone 106). According to some embodiments, an electronic device determines its orientation and adjusts the operation of the following components based on the determined orientation: volume controls (reversing directionality), LEDs (activating different subsets of LEDs), and/or microphones (assigning different tasks to subsets of microphones).
FIG. 1A illustrates an electronic device 100 according to some embodiments. Electronic device 100 includes one or more woofers 102 (e.g., 102-1 and 102-2), one or more tweeters 104, and a plurality of microphones 106. In some embodiments, the device includes different types of speakers, such as low-frequency woofers 102 and high-frequency tweeters 104. In some implementations, the speakers 102 are used for frequencies below a frequency threshold, while the speakers 104 are used for frequencies above the frequency threshold. In some embodiments, the frequency threshold is around 1900 Hz (e.g., 1850 Hz, 1900 Hz, or 1950 Hz). In some implementations, the electronic device 100 includes three or more speakers 102. In some implementations, the speakers 102 are arranged in different geometries (e.g., in a triangular configuration). In some implementations, the electronic device 100 does not include any tweeters 104. In some implementations, the electronic device 100 includes fewer than six microphones 106. In some implementations, the electronic device 100 includes more than six microphones 106. In some implementations, the microphones 106 include two or more different types of microphones.
In FIG. 1A, the microphones 106 are arranged in groups of three, with one microphone (e.g., microphone 106-3) located on the front of the electronic device 100 and the other two microphones (e.g., microphones 106-1 and 106-2) located on the sides or top of the device. In some implementations, the microphone 106 is disposed at a location within the electronic device 100 other than the location shown in fig. 1A. In some implementations, the microphones 106 are grouped differently on the electronic device 100. For example, the microphones 106 are arranged in groups of four, with one microphone on the front of the device 100 and one microphone on the back of the device 100. In some implementations, the microphone 106 is oriented and/or positioned relative to the speaker 102. For example, one microphone (e.g., 106-3) faces in the same direction as the speaker 102, while the other microphones (e.g., 106-1 and 106-2) are perpendicular (or substantially perpendicular) to the direction of the speaker 102. As another example, one microphone (e.g., 106-3) is placed closer to the speaker 102 than the other microphones (e.g., 106-1 and 106-2). Thus, in some implementations, the microphones 106 are positioned such that a phase difference exists in the received audio and can be analyzed to determine room characteristics. In some implementations, the speakers (e.g., speakers 102 and/or 104) are aligned in the same plane (e.g., the two outward faces form the front face of the device). In some embodiments, the speakers face in different directions (e.g., speaker 102-1 is tilted to the left and speaker 102-2 is tilted to the right).
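By way of illustration only (this is generic signal processing, not the patented equalization method), the arrival-time difference between two microphones can be estimated from the peak of their cross-correlation; numpy is assumed to be available:

```python
import numpy as np

def inter_mic_delay(sig_a: np.ndarray, sig_b: np.ndarray,
                    sample_rate: int) -> float:
    """Return the delay of sig_b relative to sig_a, in seconds."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # peak offset in samples
    return -lag / sample_rate                 # positive: b arrives after a

# Example: mic B hears the same noise burst 0.5 ms after mic A.
fs = 48_000
rng = np.random.default_rng(0)
a = rng.standard_normal(960)                   # 20 ms of noise
d = int(0.0005 * fs)                           # 24-sample delay
b = np.concatenate([np.zeros(d), a])[:len(a)]  # delayed copy of a
print(inter_mic_delay(a, b, fs))               # ~0.0005
```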
FIG. 1B illustrates an electronic device 120 according to some embodiments. In some implementations, the electronic device 120 includes a microphone 122, an array of illuminators 124 (e.g., LEDs), and one or more speakers located behind a mesh 126. Further, the rear side of the electronic device 120 optionally includes a power connector configured to couple to a power source (not shown). In some implementations, the electronic device 120 includes more or fewer microphones 122 than shown in fig. 1B. In some implementations, the microphone 122 is disposed at a location within the electronic device 120 other than the location shown in fig. 1B.
In some implementations, electronic device 100 and/or electronic device 120 are voice-activated. In some implementations, electronic device 100 and/or electronic device 120 present a clean appearance with no visible buttons, and interaction with the device is based on voice and touch gestures. Alternatively, in some implementations, electronic device 100 and/or electronic device 120 include a limited number of physical buttons (not shown), and interaction with the device is further based on button presses in addition to voice and/or touch gestures.
FIG. 2 is a block diagram illustrating an operating environment 200 including a plurality of electronic devices 100, 120, and 202 and server systems 206, 220, according to some embodiments. The operating environment includes one or more electronic devices 100, 120, and 202 located at one or more locations within a defined space, for example, in a space of a single room or structure, or within a defined area of an open space.
Examples of electronic device 202 include electronic device 100, electronic device 120, a handheld computer, a wearable computing device, a Personal Digital Assistant (PDA), a tablet, a laptop, a desktop computer, a cellular telephone, a smartphone, a voice-activated device, an Enhanced General Packet Radio Service (EGPRS) mobile phone, a media player, or a combination of any two or more of these or other data processing devices.
According to some embodiments, electronic devices 100, 120, and 202 are communicatively coupled to server system 206 and intelligent assistant system 220 via communication network 210. In some implementations, at least some of the electronic devices (e.g., devices 100, 120, and 202-1) can be communicatively coupled to a local network 204, which local network 204 can be communicatively coupled to one or more communication networks 210. In some implementations, the local network 204 is a local area network implemented at a network interface (e.g., a router). In some implementations, electronic devices 100, 120, and 202 communicatively coupled to local network 204 also communicate with each other through local network 204. In some implementations, the electronic devices 100, 120, and 202 are communicatively coupled to each other (e.g., not via the local network 204 or the communication network 210).
Optionally, one or more electronic devices are communicatively coupled to the communication network 210 and not on the local network 204 (e.g., electronic device 202-N). For example, these electronic devices are not on a Wi-Fi network corresponding to the local network 204, but are connected to the communication network 210 through a cellular connection. In some implementations, communication between the electronic devices 100, 120, and 202 located on the local network 204 and the electronic devices 100, 120, and 202 not on the local network 204 is performed by the voice assistance server 224. In some implementations, the electronic device 202 is registered in the device registry 222 and is therefore known to the voice assistance server 224.
In some implementations, the server system 206 includes a front-end server 212 that facilitates communication between the server system 206 and the electronic devices 100, 120, and 202 via the communication network 210. For example, the front-end server 212 receives audio content (e.g., music and/or speech) from the electronic device 202. In some implementations, the front-end server 212 is configured to send information to the electronic device 202. In some embodiments, the front-end server 212 is configured to send equalization information (e.g., frequency corrections). For example, the front-end server 212 sends equalization information to the electronic device in response to received audio content. In some implementations, the front-end server 212 is configured to send data and/or hyperlinks to the electronic devices 100, 120, and/or 202. For example, the front-end server 212 is configured to send updates (e.g., database updates) to the electronic devices.
In some implementations, the server system 206 includes an equalization module 214 that determines information about the audio signals, such as frequencies, phase differences, transfer functions, feature vectors, frequency responses, and so forth, from the audio signals collected from the electronic device 202. In some implementations, equalization module 214 obtains the frequency correction data from correction database 216 to send to the electronic device (e.g., via front end server 212). In some embodiments, the frequency correction data is based on information about the audio signal. In some implementations, the equalization module 214 applies machine learning (e.g., in conjunction with the machine learning database 218) to the audio signal to generate the frequency correction.
In some embodiments, server system 206 includes a correction database 216 that stores frequency correction information. For example, correction database 216 includes pairs of audio feature vectors and corresponding frequency corrections.
In some implementations, the server system 206 includes a machine learning database 218 that stores machine learning information. In some embodiments, the machine learning database 218 is a distributed database. In some embodiments, the machine learning database 218 includes a deep neural network database. In some embodiments, the machine learning database 218 includes a supervised training and/or reinforcement training database.
Fig. 3 is a block diagram illustrating an electronic device 300 according to some embodiments. In some implementations, the electronic device 300 is or includes any of the electronic devices 100, 120, 202 of fig. 2. Electronic device 300 includes one or more processors 302, one or more network interfaces 304, memory 306, and one or more communication buses 308 for interconnecting these components (sometimes referred to as a chipset).
In some implementations, the electronic device 300 includes one or more input devices 312 that facilitate audio input and/or user input, such as a microphone 314, buttons 316, and a touch sensor array 318. In some implementations, the microphone 314 includes the microphone 106, the microphone 122, and/or other microphones.
In some implementations, the electronic device 300 includes one or more output devices 322 that facilitate audio output and/or visual output, including one or more speakers 324, LEDs 326 (and/or other types of illuminators), and a display 328. In some embodiments, LEDs 326 include illuminator 124 and/or other LEDs. In some implementations, speakers 324 include woofer 102, tweeter 104, speakers of device 120, and/or other speakers.
In some implementations, the electronic device 300 includes a radio 320 and one or more sensors 330. The radio 320 enables connection to one or more communication networks and allows the electronic device 300 to communicate with other devices. In some embodiments, the radio 320 is capable of data communication using any of a variety of custom or standard wireless protocols (e.g., IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth Smart, ISA100.5A, WirelessHART, MiWi, etc.), custom or standard wired protocols (e.g., Ethernet, HomePlug, etc.), and/or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
In some implementations, the sensors 330 include one or more motion sensors (e.g., accelerometers), light sensors, positioning sensors (e.g., GPS), and/or audio sensors. In some implementations, the positioning sensors include one or more position sensors (e.g., Passive Infrared (PIR) sensors) and/or one or more orientation sensors (e.g., gyroscopes).
Memory 306 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid-state storage devices. The memory 306 optionally includes one or more storage devices remote from the one or more processors 302. The memory 306, or alternatively non-volatile memory within the memory 306, includes non-transitory computer-readable storage media. In some implementations, memory 306 or a non-transitory computer readable storage medium of memory 306 stores the following programs, modules and data structures, or a subset or superset thereof:
operating logic 332, including procedures for handling various basic system services and for performing hardware related tasks;
a user interface module 334 for providing and displaying a user interface in which settings, captured data including hotwords, and/or other data for one or more devices (e.g., electronic device 300 and/or other devices) may be configured and/or viewed;
a radio communication module 336 for connecting to and communicating with other network devices (e.g., local network 204 (such as a router providing internet connectivity), networked storage devices, network routing devices, server system 206, smart home server system 220, etc.) coupled to one or more communication networks 210 via one or more communication interfaces 304 (wired or wireless);
an audio output module 338 for determining and/or rendering audio signals (e.g., in conjunction with the speaker 324), such as adjusting operational settings of the speaker;
a microphone module 340 for acquiring and/or analyzing audio signals (e.g., in conjunction with microphone 314);
a positioning module 344 for obtaining and/or analyzing positioning information (e.g., location and/or orientation information), for example in conjunction with the sensors 330;
equalization module 346 for equalizing audio output of electronic device 300, including but not limited to:
an audio analysis sub-module 3461 for analyzing the audio signal collected from the input device (e.g., microphone), e.g., determining audio properties (e.g., frequency, phase shift, and/or phase difference) and/or generating a Fast Fourier Transform (FFT) of the audio frequencies;
a correction sub-module 3462 for obtaining frequency corrections from correction database 352 and/or applying frequency corrections to electronic device 300;
a transfer function sub-module 3463 for determining a feature vector, an acoustic transfer function (relating audio output to audio input), and/or a frequency response of the electronic device 300 using the analyzed audio signal; and
a weighting sub-module 3464 for assigning different weights to the respective audio signals and/or audio properties (e.g., phase difference and/or signal-to-noise ratio);
a training module 348 for generating and/or training audio models associated with the electronic device 300 and optionally fingerprint audio events;
a device database 350 for storing information associated with the electronic device 300, including but not limited to:
sensor information 3501 associated with sensor 330;
device settings 3502 for the electronic device 300, such as default options and preferred user settings; and
communication protocol information 3503 specifying a communication protocol to be used by the electronic device 300;
a correction database 352 for storing frequency correction information; and
a machine learning database 354 for storing machine learning information.
In some embodiments, correction database 352 includes the following data sets, or a subset or superset thereof:
location data corresponding to different locations and/or orientations of the associated audio device (e.g., locations of microphones and/or speakers);
vector data comprising phase shifts, phase differences and/or eigenvectors corresponding to different positions and/or orientations of the associated audio device;
weight information including weights assigned to different signal-to-noise ratios, microphones, microphone pairs and/or microphone locations;
training audio, including training data used to construct correction database 352 (e.g., white noise, pink noise, etc.); and
correction data for storing information for correcting the audio frequency response of an audio device, including but not limited to:
a frequency response comprising frequency responses and/or feature vectors corresponding to different positions and/or orientations of the audio device;
frequency corrections corresponding to the respective frequency responses.
According to some embodiments, machine learning database 354 includes the following data sets, or a subset or superset thereof:
neural network data, including information corresponding to the operation of one or more neural networks, including but not limited to:
positioning information, including information corresponding to different positions and/or orientations of the audio device (e.g., feature vectors); and
correction data corresponding to the positioning information.
Each of the above-identified modules is optionally stored in one or more of the storage devices described herein and corresponds to a set of instructions for performing the functions described above. The above-identified modules or programs need not be implemented as separate software programs, procedures, modules, or data structures, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some implementations, the memory 306 stores a subset of the modules and data structures identified above. Further, memory 306 optionally stores additional modules and data structures not described above (e.g., modules for hotword detection and/or speech recognition in a speech-enabled smart speaker). In some implementations, a subset of the programs, modules, and/or data stored in memory 306 are stored on and/or executed by server system 206 and/or voice assistance server 224.
FIG. 4 is a block diagram illustrating a server system 206 according to some embodiments. According to some embodiments, server system 206 includes one or more processors 402, one or more network interfaces 404, memory 410, and one or more communication buses 408 for interconnecting these components (sometimes referred to as a chipset).
The server system 206 optionally includes one or more input devices 406 that facilitate user input, such as a keyboard, mouse, voice command input unit or microphone, touch screen display, touch sensitive input panel, gesture capture camera, or other input buttons or controls. In some implementations, the server system 206 optionally uses a microphone and speech recognition or a camera and gesture recognition to supplement or replace the keyboard. The server system 206 optionally includes one or more output devices 408 that enable presentation of a user interface and display content, such as one or more speakers and/or one or more visual displays.
Memory 410 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; optionally, non-volatile memory is included, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. The memory 410 may optionally include one or more storage devices remote from the one or more processors 402. Memory 410, or alternatively non-volatile memory within memory 410, includes non-transitory computer-readable storage media. In some implementations, memory 410 or a non-transitory computer readable storage medium of memory 410 stores the following programs, modules and data structures, or a subset or superset thereof:
an operating system 416, including processes for handling various basic system services and performing hardware-related tasks;
a front end 212 to communicatively couple the server system 206 to other devices (e.g., electronic devices 100, 120, and 202) via a network interface 404 (wired or wireless) and one or more networks, such as the internet, other wide area networks, local area networks, metropolitan area networks, etc.;
a user interface module 420 for enabling presentation of information (e.g., a graphical user interface for presenting applications, widgets, websites and their webpages, games, audio and/or video content, text, etc.) on the server system or the electronic device;
a device registration module 422 for registering a device (e.g., electronic device 300) for use with server system 206;
equalization module 424 for equalizing audio output of an electronic device (e.g., electronic device 300), including but not limited to:
an audio analysis submodule 4241 for analyzing audio signals collected from an electronic device (e.g., electronic device 300), e.g., determining audio properties (e.g., frequency, phase shift, and/or phase difference) and/or generating a Fast Fourier Transform (FFT) of the audio frequencies;
a correction submodule 4242 for obtaining frequency corrections from correction database 216 and/or applying frequency corrections to electronic device 300;
a transfer function submodule 4243 for determining a feature vector, an acoustic transfer function (relating audio output to audio input) and/or a frequency response of the electronic device 300 using the analyzed audio signal; and
a weighting submodule 4244 for assigning different weights to respective audio signals and/or audio properties (e.g. phase difference and/or signal to noise ratio);
a training module 426 for generating and/or training an audio model associated with the electronic device 300 and optionally fingerprint audio events;
server system data 428, which stores data associated with server system 206, including but not limited to:
client device settings 4281, including device settings for one or more electronic devices (e.g., electronic device 300), such as general device settings (e.g., service layer, device model, storage capacity, processing capabilities, communication capabilities, etc.), and information for automatic media display control;
audio device settings 4282, including audio settings for audio devices (e.g., electronic device 300) associated with server system 206, such as general and default settings (e.g., volume settings for speakers and/or microphones, etc.); and
voice assistance data 4283 for the voice-activated device and/or a user account of the voice assistance server 224, such as account access information and information of one or more electronic devices 300 (e.g., service layers, device models, storage capacity, processing power, communication capabilities, etc.);
a correction database 216 for storing frequency correction information, for example, the correction database 352; and
a machine learning database 218 for storing machine learning information, such as the machine learning database 354 described above.
In some implementations, the server system 206 includes a notification module (not shown) for generating alerts and/or notifications for a user of the electronic device. For example, in some implementations, the correction database is stored locally on the user's electronic device, and server system 206 may generate a notification to alert the user to download the latest version or update to the correction database.
Each of the above-identified elements may be stored in one or more storage devices described herein and correspond to a set of instructions for performing the functions described above. The above-identified modules or programs need not be implemented as separate software programs, procedures, modules, or data structures, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 410 optionally stores a subset of the modules and data structures identified above. In addition, memory 410 may optionally store additional modules and data structures not described above.
Fig. 5A-5B are perspective views illustrating an electronic device 500 in different orientations, according to some embodiments. Fig. 5A shows a device 500 (e.g., electronic device 100) having a horizontal display of LEDs 502 (e.g., a first subset of LEDs 326) in a horizontal orientation. Fig. 5B shows device 500 with a horizontal display of LEDs 504 (e.g., a second subset of LEDs 326) in a vertical orientation. According to some embodiments, the LEDs 502 are arranged perpendicular to the LEDs 504.
Fig. 6A-6B are internal views illustrating the electronic device 500 in different orientations, according to some embodiments. In particular, fig. 6A shows the device 500 in a horizontal orientation, and fig. 6B shows the device 500 in a vertical orientation. Fig. 6A-6B also show that device 500 includes the speakers 102, the speakers 104, a speaker baffle 604, and an LED assembly 602 (e.g., including an LED board and the LEDs 502 and 504). According to some embodiments, the LED assembly 602 is positioned to minimize occlusion of the speakers 102 (e.g., to minimize degradation of the audio output by the speakers).
Fig. 7A-7B illustrate an electronic device 500 having a sliding control element (e.g., a volume control), according to some embodiments. Fig. 7A illustrates an electronic device 500 having a volume control 702 in a horizontal orientation. According to some implementations, the volume control 702 is configured such that a sliding input (e.g., a slide from left to right) toward the second end 706 of the volume control 702 corresponds to a user request to increase the volume. Fig. 7B shows the electronic device 500 with the volume control 702 in a vertical orientation. According to some implementations, the volume control 702 is configured such that a sliding input (e.g., sliding up) toward the first end 704 of the volume control 702 corresponds to a user request to increase the volume.
Fig. 8A-8E are exploded views illustrating representative electronic devices according to some embodiments. As shown in fig. 8A-8E, the device 500 includes a housing 804 and a grill 822, the housing 804 and grill 822 configured to couple together and enclose a speaker baffle 604, speakers 102 and 104, a stiffener 814, a power supply 812, a capacitive touchpad 808, a motherboard 830, an antenna 810, a magnet 832, and a microphone 802. In some implementations, a system-on-chip, controller, and/or processor (e.g., processor 302) is mounted on the motherboard 830. In some implementations, the motherboard 830 includes control circuitry for the power supply 812, the antenna 810, the microphone 802, the speakers 102, and/or the speakers 104. In some implementations, the motherboard 830 includes an accelerometer for determining the orientation of the device 500.
According to some embodiments, the device 500 further includes a base 806, e.g., configured to magnetically couple to one or more magnets in the housing 804. In some embodiments, the base 806 comprises a silicone pad. In some implementations, the housing 804 includes a subset of magnets 832 on both sides of the housing 804 for coupling the base 806 in a horizontal orientation and a vertical orientation. In some implementations, the magnet 832 is disposed on a side opposite the microphone 802 (e.g., such that the microphone aperture 822 is not blocked by a resting surface of the device 500). In some embodiments, the magnet 832 includes a single magnet on each of two or more sides. In some embodiments, the magnet 832 is embedded in the housing 804. In some embodiments, a portion of the housing 804 is adapted to magnetically couple to the base 806 (e.g., composed of a magnetic material).
In some implementations, the microphone 802 is the microphone 106. In some embodiments, the housing 804 includes a microphone aperture 822 and a power port 820. In some embodiments, device 500 includes a plurality of stiffeners, such as stiffener 814, configured to provide structural support and prevent vibration of the speaker. In some implementations, antennas 810 include one or more antennas mounted on a circuit board and/or one or more antennas mounted on housing 804. In some implementations, the antenna 810 is positioned to maximize the distance between the metal components of the device (e.g., the speaker 102) and the antenna to minimize signal interference.
Fig. 9A-9D are perspective views illustrating the electronic device 100 in different orientations, according to some embodiments. Fig. 9A shows the electronic device 100 in a horizontal orientation. According to some embodiments, as shown in FIG. 9A, in the horizontal orientation, the left speakers (e.g., speakers 102-1 and 104-1) are assigned to the stereo left audio output (sometimes also called the left channel) and the right speakers (e.g., speakers 102-2 and 104-2) are assigned to the stereo right audio output (sometimes also called the right channel). According to some embodiments, as shown in FIG. 9A, in the horizontal orientation, the right-side microphones (e.g., one or more of microphones 106-4, 106-5, and 106-6) are assigned to automatic equalization, while the top microphones (e.g., one or more of microphones 106-1, 106-2, and 106-3) are assigned to hotword detection.
Fig. 9B shows the electronic device 100 in a vertical orientation. According to some embodiments, as shown in fig. 9B, in the vertical orientation, the upper speakers (e.g., speakers 102-2 and 104-2) are assigned to mono audio output, while the lower speakers (e.g., speakers 102-1 and 104-1) are optionally disabled, enabled only for bass frequencies, or enabled only at volume levels above a volume threshold. According to some embodiments, as shown in FIG. 9B, in the vertical orientation, the left microphones (e.g., one or more of microphones 106-1, 106-2, and 106-3) are assigned to automatic equalization, while the top microphones (e.g., one or more of microphones 106-4, 106-5, and 106-6) are assigned to hotword detection. In some embodiments, the lower tweeter 104-1 is disabled in the vertical orientation. In some embodiments, the lower tweeter 104-1 is disabled in the vertical orientation when the volume level is below the volume threshold. In some embodiments, the lower woofer 102-1 is disabled in the vertical orientation. In some implementations, when in the vertical orientation, the lower woofer 102-1 is disabled for non-bass frequencies (e.g., frequencies above 160 hertz (Hz)), such that it outputs only audio frequencies below 160 Hz. In some implementations, the lower woofer 102-1 is disabled (or disabled for non-bass frequencies) in the vertical orientation when the volume level is below the volume threshold.
Fig. 9C and 9D illustrate the electronic device 100 in orientations that place one or more microphones (and optionally one or more antennas 810) close to the resting surface. Such proximity to the resting surface may cause interference at the microphones and antennas. According to some embodiments, the electronic device 100 is configured to alert the user that it is in a non-optimal position. In some implementations, the device alerts the user that it is in a non-optimal position in response to the user activating the device, in response to a wake-up signal, and/or in response to detecting a change in orientation.
Fig. 10A-10B are perspective views illustrating electronic devices 100 in different orientations, according to some embodiments. FIG. 10A shows devices 100-1 and 100-2 in a horizontal orientation. According to some embodiments, the devices 100 are audio-paired and configured to operate in a surround sound mode. As shown in fig. 10A, according to some embodiments, device 100-1 is configured to output audio on its left speakers (e.g., speakers 102-1 and 104-1) while its right speakers (e.g., speakers 102-2 and 104-2) are disabled or output only bass frequencies. As shown in fig. 10A, according to some embodiments, device 100-2 is configured to output audio on its right speakers (e.g., speakers 102-2 and 104-2) while its left speakers (e.g., speakers 102-1 and 104-1) are disabled or output only bass frequencies (e.g., as described above with reference to fig. 9B). In this way, the surround sound effect can be enhanced. In some implementations, each device 100 outputs audio from each of its speakers. In some implementations, device 100-1 is configured such that the right tweeter 104-2 is disabled and the right woofer 102-2 is enabled. In some implementations, device 100-2 is configured such that the left tweeter 104-1 is disabled and the left woofer 102-1 is enabled. In some implementations, the devices 100 determine their relative positioning and operate the appropriate speakers in accordance with the determination.
FIG. 10B shows devices 100-1 and 100-2 in a vertical orientation. According to some embodiments, the devices 100 are audio-paired and configured to operate in a surround sound mode. As shown in fig. 10B, according to some embodiments, each device 100 is configured to output audio on its upper speakers (e.g., speakers 102-2 and 104-2) while its lower speakers (e.g., speakers 102-1 and 104-1) are disabled or output only bass frequencies (e.g., as described above with reference to fig. 9B). In some implementations, each device 100 outputs audio from each of its speakers. In some implementations, each device 100 is configured such that the lower tweeter 104-1 is disabled and the lower woofer 102-1 is enabled. In some implementations, the devices 100 determine their relative positioning and operate the appropriate speakers in accordance with the determination. In some implementations, device 100-1 is configured to output audio corresponding to stereo left, while device 100-2 is configured to output audio corresponding to stereo right.
FIG. 11 is a flow diagram illustrating a method 1100 for orientation-based operation of an audio device, in accordance with some embodiments. In some implementations, the method 1100 is performed by an audio device, such as the audio device 100, the audio device 500, or another electronic device 300. In some implementations, the method 1100 is performed by components of the electronic device 300, such as the positioning module 344 and the audio output module 338, in conjunction with the input devices 312 and the output devices 322. In some embodiments, the operations of the method 1100 described herein are interchangeable, and the respective operations of the method 1100 are performed by any of the aforementioned devices. In some implementations, the method 1100 is governed by instructions stored in a non-transitory computer-readable storage medium (e.g., within the memory 306) and executed by one or more processors or controllers of a device, such as the processor 302 of the electronic device 300. For convenience, the method 1100 is described below as being performed by an audio device (e.g., the electronic device 500) that includes one or more microphones and a plurality of speakers.
In some implementations, the audio device operates in a first orientation (e.g., a horizontal orientation) (1102). In some implementations, the first orientation corresponds to the audio device being positioned on the first side (e.g., as shown in fig. 5A). In some implementations, the operation at the first orientation includes outputting audio content while in the first orientation. In some implementations, the operation in the first orientation includes receiving user input via one or more device interface elements while in the first orientation.
The audio device detects a change in orientation of the audio device from a first orientation to a second orientation (1104). In some implementations, the audio device includes an accelerometer and the accelerometer is utilized to detect the change in orientation. In some implementations, the audio device determines its orientation in response to activation by the user (e.g., powering on or waking up). In some implementations, the audio device periodically checks its orientation and detects changes in orientation by comparing its current orientation to its previous orientation.
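A non-limiting sketch of such accelerometer-based polling follows; the axis convention, the 0.8 g threshold, and all names are illustrative assumptions rather than details from the patent:

```python
G = 9.81  # gravitational acceleration, m/s^2

def classify_orientation(ax: float, ay: float, az: float) -> str:
    """Classify a resting pose from one accelerometer sample.

    Assumes gravity points along -y when the device rests horizontally
    and along -x when it rests vertically; anything else is treated as
    an invalid pose (e.g., upside down), which may trigger an error status.
    """
    if ay < -0.8 * G:
        return "horizontal"
    if ax < -0.8 * G:
        return "vertical"
    return "invalid"

class OrientationMonitor:
    """Periodically compares the current pose against the previous one."""

    def __init__(self, read_accel):
        self.read_accel = read_accel  # callable returning (ax, ay, az)
        self.current = classify_orientation(*read_accel())

    def poll(self):
        new = classify_orientation(*self.read_accel())
        if new == self.current:
            return None
        previous, self.current = self.current, new
        return (previous, new)  # caller reconfigures interface elements

monitor = OrientationMonitor(lambda: (0.0, -9.81, 0.0))  # horizontal reading
print(monitor.poll())  # None: no change detected
```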
In some implementations, the second orientation corresponds to the audio device being positioned on a second side (e.g., resting on a second side) that is different from the first side (e.g., a vertical orientation as shown in fig. 5B). For example, the change in orientation corresponds to the user rotating the device from a horizontal position to a vertical position.
In response to detecting the change in orientation, the audio device configures operation of two or more of the plurality of device interface elements (1108). In some embodiments, the plurality of device interface elements includes one or more of: one or more microphones (e.g., microphones 106, 314, or 802), one or more speakers (e.g., speakers 102 and/or 104), one or more lighting elements (e.g., LEDs 326, 502, and/or 504), one or more slider controls (e.g., volume control 702), and the like. In some embodiments, configuring two or more of the plurality of device interface elements comprises reconfiguring one or more of the device interface elements. In some embodiments, in addition to configuring operation of the device interface elements, the device performs automatic equalization based on detecting the change in orientation. For example, the device detects a change in orientation, adjusts speaker settings via an audio equalization operation, and updates the operation of the device interface elements.
In some implementations, the audio device assigns a first microphone (e.g., microphone 106-3) to a task based on the change in orientation (1110). In some implementations, the task includes one or more of the following (1112): hotword detection, speech recognition, and audio equalization. In some implementations, when the audio device is in the second orientation, the audio device identifies the first microphone as being on a top surface of the audio device and assigns the first microphone to the task (e.g., hotword detection) based on the identification. In some implementations, the audio device identifies the microphone with the least interference and assigns the task to that microphone. In some implementations, the audio device assigns multiple microphones (e.g., microphones 106-1, 106-2, and 106-3) to a task (e.g., multiple microphones assigned for automatic equalization). In some implementations, a first subset of the microphones is assigned to a first task (e.g., hotword detection) and a second subset of the microphones is assigned to a second task (e.g., audio equalization).
In some implementations, configuring the two or more device interface elements includes de-assigning a second microphone from the task (1114). For example, in the first orientation a second microphone (e.g., microphone 106-1) is assigned to the task; upon the change to the second orientation, the second microphone is de-assigned and the first microphone (e.g., microphone 106-3) is assigned to the task in its place.
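One way the assignment and de-assignment of microphones might look in software is sketched below; the task names, microphone indices, and the assign/deassign interface are assumptions chosen to mirror the examples above.

    # Hypothetical mapping from orientation to microphone roles; indices echo
    # the figures (e.g., microphones 106-1 through 106-3) but are assumed.
    MIC_TASKS = {
        "horizontal": {"hotword": [0], "equalization": [1, 2]},
        "vertical":   {"hotword": [2], "equalization": [0, 1]},  # mic 2 now on top
    }

    def assign_microphones(orientation, mics):
        """Give each task the microphone(s) best placed for the orientation."""
        for task, indices in MIC_TASKS[orientation].items():
            for i, mic in enumerate(mics):
                if i in indices:
                    mic.assign(task)    # e.g., route this mic into hotword detection
                else:
                    mic.deassign(task)  # de-assign mics no longer suited to the task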
In some implementations, configuring the two or more device interface elements includes configuring operation of a volume control element (e.g., the volume control 702) (1116). In some embodiments, when in the first orientation, movement along the volume control element toward a first end of the volume control element corresponds to increasing the volume of the one or more speakers. In some embodiments, configuring the volume control element comprises reconfiguring the volume control element such that movement along the volume control element toward the first end corresponds to decreasing the volume of the one or more speakers. In some implementations, the volume control is a capacitive touch element (e.g., a capacitive touch bar). In some implementations, the device includes one or more sliding elements, such as volume controls, brightness controls, and/or bass boost controls.
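A minimal sketch of the direction reversal for a capacitive volume slider follows; the position range and function name are assumptions.

    def slider_to_volume(position, orientation, max_volume=100):
        """Map a slider position in [0.0, 1.0] to a volume level.

        After rotation, the physical first end of the slider swaps top and
        bottom, so the mapping is reversed to keep movement toward the top
        meaning 'louder' from the user's point of view.
        """
        if orientation == "vertical":
            position = 1.0 - position  # reverse direction after rotation
        return round(position * max_volume)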
In some embodiments, configuring the two or more device interface elements includes configuring a speaker (e.g., the speaker 102 and/or the speaker 104) (1118). For example, the speaker is configured to adjust the treble, bass, and/or amplification of the audio it outputs. As an example, when in the first orientation the plurality of speakers is configured to operate in a stereo mode, and configuring the plurality of speakers comprises reconfiguring the plurality of speakers to operate in a mono mode. In some implementations, the audio output is transitioned over time when the change in orientation is determined. In some implementations, the audio output is faded briefly to silence before the reconfigured subsequent output begins. In some implementations, a different audio filter (e.g., a biquad or ladder filter) is used for the reconfigured subsequent output. In some implementations, the treble and bass settings of the speaker are controlled by software executing on the device (e.g., the audio output module 338 executing on the processor 302).
In some embodiments, reconfiguring the plurality of speakers to operate in a mono mode comprises utilizing only a subset of the plurality of speakers for subsequent audio output (e.g., to minimize destructive interference between speaker outputs). For example, in the vertical orientation only the upper speakers (e.g., the upper woofer and the upper tweeter) are used, as shown in fig. 9B. In some implementations, the subsequent audio output includes TTS output or music. In some implementations, the gain of the subset of speakers is increased (e.g., by 4, 5, or 6 dB) to compensate for using only the subset. In some embodiments, one or more tweeters are disabled and the remaining tweeters operate at a higher gain to compensate, while the woofers continue to operate in the same manner as before the reconfiguration.
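The stereo-to-mono reconfiguration described above might be sketched as follows; the speaker attributes, fade callbacks, and the exact compensation gain are assumptions drawn from the examples in the text.

    def reconfigure_to_mono(speakers, fade_out, fade_in, compensation_db=6.0):
        """Fade to silence, keep only the upper speakers, compensate gain."""
        fade_out()  # briefly fade the output to silence before reconfiguring
        active = [s for s in speakers if s.position == "upper"]
        for s in speakers:
            s.enabled = s in active       # disable the lower speakers
        for s in active:
            s.gain_db += compensation_db  # e.g., +4 to +6 dB for the smaller subset
        fade_in()   # resume subsequent output through the new configuration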
In some implementations, reconfiguring the plurality of speakers includes utilizing only a subset of the plurality of speakers for subsequent audio output having audio frequencies above a threshold frequency. In some embodiments, the threshold frequency is 140 Hz, 160 Hz, or 200 Hz. In some implementations, all woofers (e.g., the speakers 102) are used for bass frequencies, while fewer than all of the woofers are used for higher frequencies. In some implementations, the subset is selected based on the user's location, the distance from the resting surface, and/or the capabilities of the individual speakers. For example, if the user is located to the left of the device, the leftmost speaker is used, and if the user is located to the right of the device, the rightmost speaker is used.
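A sketch of this frequency-based routing policy follows; the threshold value and the band-routing interface are assumptions.

    THRESHOLD_HZ = 160.0  # e.g., 140, 160, or 200 Hz per the description above

    def speakers_for_band(band_center_hz, all_woofers, upper_subset):
        """Choose which speakers reproduce a given band of the output."""
        if band_center_hz < THRESHOLD_HZ:
            return all_woofers   # every woofer shares the bass band
        return upper_subset      # only the subset handles higher frequencies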
In some embodiments, reconfiguring the plurality of speakers comprises: (1) utilizing only a subset of the plurality of speakers for subsequent audio output when the volume setting of the audio device is below a volume threshold; and (2) utilizing the subset of the plurality of speakers and one or more additional speakers for subsequent audio output when the volume setting of the audio device is above the volume threshold. In some implementations, the volume threshold corresponds to a maximum volume setting for the subset of speakers. In some embodiments, the volume threshold is 6 dB, 3 dB, or 1 dB below the maximum volume of the speakers. In some implementations, an input/output matrix is used to transition the audio output over time between speaker configurations.
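The volume-threshold behavior might be sketched as below; the decibel values and parameter names are assumptions based on the examples above.

    def speakers_for_output(volume_db, subset, additional, subset_max_db=0.0, margin_db=3.0):
        """Use only the subset at low volume; add speakers near its limit.

        The threshold sits margin_db (e.g., 1, 3, or 6 dB) below the
        subset's maximum volume, mirroring the description above.
        """
        threshold_db = subset_max_db - margin_db
        if volume_db < threshold_db:
            return subset
        return subset + additional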
In some implementations, the audio device is audio paired with an additional audio device. In some implementations, configuring the plurality of speakers includes utilizing a first subset of the plurality of speakers when in the first orientation and a second subset when in the second orientation. For example, when in a horizontal orientation the device utilizes the subset of speakers farthest from the additional audio device (to enhance the surround-sound output of the pair), and when in a vertical orientation it utilizes a different subset (e.g., the uppermost speakers, to minimize interference with the resting surface). In some implementations, the audio device is audio paired with multiple additional audio devices, and each device operates in a mono mode such that the audio devices as a whole achieve a surround-sound effect. In some implementations, all speakers are used in a particular orientation (e.g., all speakers are used in the vertical orientation). In some implementations, the timing of audio output at each device is adjusted based on the relative positions of the devices (e.g., to enhance synchronization of the outputs).
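One possible subset-selection policy for a paired device is sketched below; the attribute names and the side convention are hypothetical.

    def select_paired_subset(orientation, speakers, partner_side):
        """Pick the speaker subset for a device paired with another device."""
        if orientation == "horizontal":
            # favor speakers farthest from the paired device for wider imaging
            return [s for s in speakers if s.side != partner_side]
        # vertical: favor the uppermost speakers, away from the resting surface
        return [s for s in speakers if s.position == "upper"]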
In some implementations, configuring the operation of two or more device interface elements includes adjusting the operation of multiple lighting elements (e.g., LEDs 502 and 504) (1120). In some embodiments, the operation of the lighting elements is controlled by lighting control circuitry (e.g., mounted on a lighting control board, such as LED assembly 602).
In some embodiments, the plurality of lighting elements comprises a plurality of light emitting diodes (LEDs). In some embodiments, adjusting the operation of the lighting elements comprises disabling a first subset of the lighting elements and enabling a second subset. In some embodiments, the plurality of lighting elements includes a first row of lighting elements (e.g., the LEDs 502) along a first axis and a second row of lighting elements (e.g., the LEDs 504) along a second axis different from the first axis. In some embodiments, adjusting the operation of the lighting elements comprises conveying device state information with the first row of lighting elements when in the first orientation and with the second row of lighting elements when in the second orientation. In some embodiments, adjusting the operation of the lighting elements comprises utilizing only a subset of the lighting elements that is substantially level with the ground.
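A sketch of selecting the LED row that conveys device state follows; the row representation and LED interface are assumptions.

    def configure_leds(orientation, row_along_x, row_along_y):
        """Enable the LED row level with the ground; disable the other row."""
        if orientation == "horizontal":
            active, inactive = row_along_x, row_along_y
        else:
            active, inactive = row_along_y, row_along_x
        for led in inactive:
            led.off()       # the off-axis row stays dark in this pose
        return active       # status animations are rendered on this row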
In some embodiments, the audio device further comprises a detachable base (e.g., the base 806) configured to couple to two or more sides of the audio device to facilitate positioning the audio device in a plurality of orientations. In some implementations, the detachable base is configured to magnetically couple to magnets within a housing (e.g., the housing 804) of the audio device. In some embodiments, the detachable base is composed of silicone. In some embodiments, the base is configured to couple only at locations corresponding to valid orientations of the device.
In some implementations, the audio device includes a power port, and the audio device is configured such that the power port is proximate to a resting surface of the audio device in both the first orientation and the second orientation. For example, the power port is located at a corner portion of the audio device, between the two sides used for resting the audio device, so that it is near the resting surface in both orientations (e.g., as shown in fig. 8B).
In some implementations, the audio device includes one or more antennas (e.g., the antenna 810), and the audio device is configured such that the antennas are at least a threshold distance away from the resting surface in both the first orientation and the second orientation. For example, the antennas are positioned opposite the two sides used for resting the audio device, so that they remain away from the resting surface in both orientations (e.g., as shown in fig. 8A).
In some implementations, the audio device detects a change in orientation of the audio device from the first orientation to a third orientation and presents an error status to the user in response to detecting the change to the third orientation. For example, an audio message "device upside down" is output via the one or more speakers, an error status is displayed via one or more LEDs of the device, and/or an error alert is sent to the user's client device.
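A sketch of presenting the error status across multiple interface elements follows; every method name on the device object is hypothetical.

    def present_orientation_error(device):
        """Signal an unsupported pose on each available interface element."""
        device.speak("device upside down")         # TTS over the speakers
        device.leds.show_pattern("error")          # e.g., a blinking LED pattern
        device.notify_client("Device orientation not supported")  # alert the app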
In some implementations, the audio device detects a change in orientation of the audio device from the second orientation to the first orientation. For example, the audio device detects a change from a vertical orientation back to a horizontal orientation and reconfigures the device interface elements accordingly.
Although some of the various figures show logical steps in a particular order, steps that are not order dependent may be reordered and other steps may be combined or broken down. Although some reordering or other groupings are specifically mentioned, other groupings will be apparent to one of ordinary skill in the art, and thus the ordering and grouping presented herein is not an exhaustive list of alternatives. Further, it should be recognized that these steps could be implemented in hardware, firmware, software, or any combination thereof.
For situations in which the system discussed above collects information about users, the users may be provided with an opportunity to opt in or out of programs or features that may collect personal information (e.g., information about a user's preferences or smart device usage). In addition, in some embodiments, certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be anonymized so that no personally identifiable information can be determined for or associated with the user, and user preferences or user interactions may be generalized (e.g., based on user demographics) rather than associated with a particular user.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first electronic device could be termed a second electronic device, and similarly, a second electronic device could be termed a first electronic device, without departing from the scope of the various described embodiments. The first electronic device and the second electronic device are both electronic devices, but they are not the same electronic device.
The terminology used in the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "if" is optionally to be construed as referring to "when … …" or "at … …" or "in response to a determination of … …" or "in response to detection of … …" or "a determination according to … …", depending on the context. Similarly, the phrase "if determined … …" or "if [ the condition or event ] is detected" is optionally to be construed as meaning "upon determination … …" or "in response to determination … …" or "upon detection of [ the condition or event ] or" in response to detection of [ the condition or event ] "or" in accordance with a determination that [ the condition or event ] is detected ", depending on the context.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen in order to best explain the principles underlying the claims and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments with various modifications as are suited to the particular uses contemplated.

Claims (20)

1. A method, comprising:
on an audio device having one or more processors, memory, and a plurality of device interface elements, the audio device comprising one or more speakers and a plurality of microphones:
detecting a change in orientation of the audio device from a first orientation to a second orientation; and
configuring operation of two or more of the plurality of device interface elements in response to detecting the change in orientation.
2. The method of claim 1, further comprising, prior to detecting the change in orientation, operating the audio device in the first orientation; and
wherein configuring the operation of the two or more device interface elements comprises reconfiguring the operation based on the change in orientation.
3. The method of any of the preceding claims, wherein configuring operation of two or more of the plurality of device interface elements comprises assigning a first microphone of the plurality of microphones to a task based on the change in orientation.
4. The method of claim 3, further comprising, in response to detecting the change in orientation, de-assigning a second microphone of the plurality of microphones from the task.
5. The method of any preceding claim, wherein the one or more speakers comprise a plurality of speakers; and
wherein configuring operation of two or more of the plurality of device interface elements comprises configuring operation of the plurality of speakers.
6. The method of claim 5, wherein, when in the first orientation, the plurality of speakers are configured to operate in a stereo mode; and
wherein configuring operation of the plurality of speakers comprises reconfiguring the plurality of speakers to operate in a mono mode.
7. The method of claim 6, wherein reconfiguring the plurality of speakers to operate in the mono mode comprises utilizing only a subset of the plurality of speakers for subsequent audio output.
8. The method of claim 5, wherein reconfiguring the plurality of speakers comprises utilizing only the subset of the plurality of speakers for subsequent audio output having audio frequencies above a threshold frequency.
9. The method of claim 5, wherein reconfiguring the plurality of speakers comprises:
utilizing only the subset of the plurality of speakers for subsequent audio output when a volume setting of the audio device is below a volume threshold; and
utilizing the subset of the plurality of speakers and one or more additional speakers for subsequent audio output when the volume setting of the audio device is above the volume threshold.
10. The method of claim 5, further comprising audio pairing the audio device with an additional audio device; and
wherein configuring operation of the plurality of speakers comprises utilizing a first subset of the plurality of speakers when in the first orientation and utilizing a second subset of the plurality of speakers when in the second orientation.
11. The method of any of the preceding claims, further comprising:
detecting a change in orientation of the audio device from a first orientation to a third orientation; and
presenting an error state to a user in response to detecting the change in orientation to the third orientation.
12. The method of any of the preceding claims, wherein the first orientation corresponds to the audio device being positioned on a first side of the audio device; and
wherein the second orientation corresponds to the audio device being positioned on a second side of the audio device different from the first side.
13. The method of any preceding claim, wherein the plurality of device interface elements comprise volume control elements; and
wherein configuring operation of two or more of the plurality of device interface elements comprises configuring operation of the volume control element.
14. The method of claim 13, wherein, when in the first orientation, movement along the volume control element toward a first end of the volume control element corresponds to increasing the volume of the one or more speakers; and
wherein configuring operation of the volume control element comprises configuring the volume control element such that movement along the volume control element toward the first end of the volume control element corresponds to decreasing the volume of the one or more speakers.
15. The method of any preceding claim, wherein the plurality of device interface elements comprises a plurality of lighting elements; and
wherein configuring operation of two or more of the plurality of device interface elements comprises adjusting operation of the plurality of lighting elements.
16. The method of any of the preceding claims, wherein the audio device further comprises a detachable base; and
wherein the detachable base is configured to couple to two or more sides of the audio device to facilitate positioning the audio device in a plurality of orientations.
17. The method of any of the preceding claims, wherein the audio device further comprises a power port; and
wherein the audio device is configured such that the power port is proximate to a resting surface for the audio device in both the first orientation and the second orientation.
18. The method of any of the preceding claims, wherein the audio device further comprises one or more antennas; and
wherein the audio device is configured such that the one or more antennas remain at least a threshold distance from a resting surface for the audio device in both the first orientation and the second orientation.
19. An audio device, comprising:
one or more processors; and
memory coupled to the one or more processors, the memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for performing the method of any of claims 1-18.
20. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing system, cause the system to perform the method of any of claims 1-18.
CN201980053721.XA 2018-08-08 2019-08-08 Orientation-based device interface Pending CN112567764A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311490245.2A CN117676427A (en) 2018-08-08 2019-08-08 Orientation-based device interface

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US16/058,820 US10734963B2 (en) 2017-10-04 2018-08-08 Methods and systems for automatically equalizing audio output based on room characteristics
US16/058,820 2018-08-08
US16/138,707 2018-09-21
US16/138,707 US10897680B2 (en) 2017-10-04 2018-09-21 Orientation-based device interface
PCT/US2019/045703 WO2020033685A1 (en) 2018-08-08 2019-08-08 Orientation-based device interface

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311490245.2A Division CN117676427A (en) 2018-08-08 2019-08-08 Orientation-based device interface

Publications (1)

Publication Number Publication Date
CN112567764A 2021-03-26

Family

ID=69457769

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201980053721.XA Pending CN112567764A (en) 2018-08-08 2019-08-08 Orientation-based device interface
CN202311490245.2A Pending CN117676427A (en) 2018-08-08 2019-08-08 Orientation-based device interface

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311490245.2A Pending CN117676427A (en) 2018-08-08 2019-08-08 Orientation-based device interface

Country Status (2)

Country Link
CN (2) CN112567764A (en)
WO (1) WO2020033685A1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103155526A (en) * 2010-09-23 2013-06-12 捷讯研究有限公司 System and method for rotating a user interface for a mobile device
CN103517178A (en) * 2012-06-26 2014-01-15 联想(北京)有限公司 Method, device and electronic apparatus for audio frequency regulation
CN104898970A (en) * 2015-04-30 2015-09-09 努比亚技术有限公司 Volume control method and apparatus
CN105224280A (en) * 2015-09-25 2016-01-06 联想(北京)有限公司 Control method, device and electronic equipment
CN105260071A (en) * 2015-10-20 2016-01-20 广东欧珀移动通信有限公司 Terminal control method and terminal equipment
CN105867766A (en) * 2016-03-28 2016-08-17 乐视控股(北京)有限公司 Sound volume adjustment method and terminal
US20170123755A1 (en) * 2015-10-28 2017-05-04 Smule, Inc. Wireless handheld audio capture device and multi-vocalist method for audiovisual media application
CN106708403A (en) * 2016-11-30 2017-05-24 努比亚技术有限公司 The method and device of synchronizing playing notification tone while inputting slide operation
US20170188167A1 (en) * 2015-12-23 2017-06-29 Lenovo (Singapore) Pte. Ltd. Notifying a user to improve voice quality
WO2018026799A1 (en) * 2016-08-01 2018-02-08 D&M Holdings, Inc. Soundbar having single interchangeable mounting surface and multi-directional audio output
US20180063626A1 (en) * 2012-08-02 2018-03-01 Ronald Pong Headphones With Interactive Display

Also Published As

Publication number Publication date
CN117676427A (en) 2024-03-08
WO2020033685A1 (en) 2020-02-13

Similar Documents

Publication Publication Date Title
US10897680B2 (en) Orientation-based device interface
US11888456B2 (en) Methods and systems for automatically equalizing audio output based on room position
US11663305B2 (en) Controlling input/output devices
US10904612B2 (en) Method for outputting audio and electronic device for the same
CN110543289B (en) Method for controlling volume and electronic equipment
EP3504867B1 (en) Electronic device including antenna
CN104378485A (en) Volume adjustment method and volume adjustment device
US9538277B2 (en) Method and apparatus for controlling a sound input path
CN106489130A (en) For making audio balance so that the system and method play on an electronic device
US11240057B2 (en) Alternative output response based on context
US10045127B2 (en) Electronic device with micro speaker
US20140233772A1 (en) Techniques for front and rear speaker audio control in a device
US20170289954A1 (en) Intelligent notification delivery
WO2018058978A1 (en) Reminding method and device, electronic equipment and computer storage medium
CN114500442B (en) Message management method and electronic equipment
US20160127924A1 (en) Apparatus and method for determining network status
US9922635B2 (en) Minimizing nuisance audio in an interior space
CN106331235A (en) Mobile terminal main board, mobile terminal main board setting method, and mobile terminal
US20150264721A1 (en) Automated program selection for listening devices
KR20190106297A (en) Electronic device and method for connection with external device
CN109756825A (en) Classify the position of intelligent personal assistants
CN112567764A (en) Orientation-based device interface
WO2023000795A1 (en) Audio playing method, failure detection method for screen sound-production device, and electronic apparatus
US20220311636A1 (en) Electronic device and method for determining device for performing task by electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210326