CN117676427A - Orientation-based device interface - Google Patents

Orientation-based device interface

Info

Publication number
CN117676427A
CN117676427A (application CN202311490245.2A)
Authority
CN
China
Prior art keywords
orientation
audio
implementations
speakers
audio device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311490245.2A
Other languages
Chinese (zh)
Inventor
贾斯汀·沃里奇
罗兰多·埃斯帕扎·帕拉西奥斯
尼古拉斯·马塔雷斯
迈克尔·B·蒙特韦利什基
拉斯穆斯·芒克·拉森
本杰明·路易斯·沙亚
车-宇·郭
迈克尔·斯梅德加德
理查德·F·莱恩
加布里尔·费希尔·斯洛特尼克
克里斯滕·曼格姆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/058,820 (US10734963B2)
Priority claimed from US16/138,707 (US10897680B2)
Application filed by Google LLC
Publication of CN117676427A

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 Details of transducers, loudspeakers or microphones
    • H04R 1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R 1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R 1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 29/00 Monitoring arrangements; Testing arrangements
    • H04R 29/001 Monitoring arrangements; Testing arrangements for loudspeakers
    • H04R 29/002 Loudspeaker arrays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 29/00 Monitoring arrangements; Testing arrangements
    • H04R 29/004 Monitoring arrangements; Testing arrangements for microphones

Landscapes

  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The application discloses an orientation-based device interface. Various embodiments described herein include methods, devices, and systems for automatic audio equalization. In one aspect, a method is performed at an audio device that includes one or more speakers, a plurality of microphones, one or more processors, memory, and a plurality of device interface elements. The method includes: (1) detecting a change in orientation of the audio device from a first orientation to a second orientation; and (2) in response to detecting the change in orientation, configuring operation of two or more of the plurality of device interface elements.

Description

Orientation-based device interface
Statement of division
This application is a divisional of Chinese patent application 201980053721.X, filed on August 8, 2019.
Technical Field
This disclosure generally relates to audio devices, including but not limited to an orientation-based device interface on an audio device.
Background
Traditionally, electronic devices have been designed and manufactured with a single orientation, such as a single mounting surface. In recent years, some devices have been designed to operate in multiple orientations, such as vertically and horizontally. However, manipulating the device interface in various orientations by the user may be cumbersome and not intuitive. Thus, it is desirable for electronic devices to have orientation-based device interfaces.
Disclosure of Invention
Technical problem
Methods, devices, and systems for implementing an orientation-based device interface are needed. Various embodiments of systems, methods, and devices within the scope of the appended claims each have several aspects, no single one of which is solely responsible for the attributes described herein. Without limiting the scope of the appended claims, after considering this disclosure, and particularly after considering the section entitled "Detailed Description," one will understand how the aspects of various embodiments are used to automatically adjust the operation of a device interface according to changes in orientation.
To maximize user experience and convenience, the audio devices described herein can operate in multiple orientations. For example, an audio device having two speakers is configured to operate in a stereo mode when oriented horizontally and in a mono mode when oriented vertically. The audio device optionally includes a removable base (e.g., a silicone foot) adapted to attach (e.g., via magnets) to more than one side of the audio device. The audio device optionally includes a set of Light Emitting Diodes (LEDs), where different subsets of the LEDs are used based on orientation (e.g., such that the LEDs maintain a horizontal appearance in both orientations). The audio device optionally includes a slider bar configured to interpret the direction of a user's swipe (e.g., to control the volume) based on the device orientation. For example, sliding from a first end of the bar to a second end corresponds to a volume increase in the horizontal orientation; in the vertical orientation, however, sliding from the first end to the second end corresponds to a volume decrease. The audio device also optionally adjusts the operation of its microphones based on orientation. For example, the microphones furthest from the base are used for hotword detection, e.g., because those microphones are better positioned to obtain a clear audio signal.
Technical solution
(A1) In one aspect, some implementations include a method of adapting to device orientation performed at an audio device having one or more processors, memory, and a plurality of device interface elements, the audio device including one or more speakers and a plurality of microphones. The method includes: (1) detecting a change in orientation of the audio device from a first orientation to a second orientation; and (2) in response to detecting the change in orientation, configuring operation of two or more of the plurality of device interface elements. In some implementations, detecting the change in orientation includes detecting the change using an accelerometer of the audio device. As used herein, an audio device is an electronic device having one or more speakers and/or one or more microphones.
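By way of illustration, the following Python sketch shows one way the top-level flow of A1 could be structured: an accelerometer reading is classified into an orientation, and a change triggers reconfiguration of the interface elements. All names (AudioDevice, configure_for, the axis convention) are illustrative assumptions, not details from the claims.

```python
from enum import Enum

class Orientation(Enum):
    HORIZONTAL = "horizontal"
    VERTICAL = "vertical"

class AudioDevice:
    def __init__(self, interface_elements):
        # interface_elements: objects exposing configure_for(orientation),
        # e.g., speakers, microphones, LEDs, and the volume control bar
        self.interface_elements = interface_elements
        self.orientation = Orientation.HORIZONTAL

    def on_accelerometer_reading(self, gravity_xyz):
        # (1) detect a change in orientation from the gravity vector
        # (hypothetical axis convention: x is the device's long axis)
        x, y, _ = gravity_xyz
        new = Orientation.VERTICAL if abs(x) > abs(y) else Orientation.HORIZONTAL
        if new is not self.orientation:
            self.orientation = new
            # (2) reconfigure two or more of the device interface elements
            for element in self.interface_elements:
                element.configure_for(new)
```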
(A2) In some embodiments of A1: (1) the method further includes, prior to detecting the change in orientation, operating the audio device in the first orientation; and (2) configuring the two or more device interface elements includes reconfiguring their operation based on the change in orientation.
(A3) In some implementations of A1 or A2, the first orientation corresponds to the audio device being positioned (resting) on a first side of the audio device, and the second orientation corresponds to the audio device being positioned on a second side of the audio device that is different from the first side (e.g., the change in orientation corresponds to rotating the device from a vertical orientation to a horizontal orientation).
(A4) In some implementations of A1-A3, configuring two or more of the plurality of device interface elements includes assigning a first microphone of the plurality of microphones to a task based on the change in orientation. In some implementations, a first subset of the microphones is used in the first orientation and a second subset is used in the second orientation (e.g., the microphones on the "top" of the device in each orientation are used for hotword detection).
(A5) In some embodiments of A4, the method further comprises: in response to detecting the change in orientation, de-assigning a second microphone of the plurality of microphones from the task.
(A6) In some embodiments of A4 or A5, the task includes one or more of: hotword detection, speech recognition, and audio equalization.
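As a concrete (hypothetical) reading of A4-A6, the sketch below reassigns microphone subsets to tasks when the orientation changes, using the groupings of Figs. 9A-9B; the index lists and task names are assumptions for illustration.

```python
def assign_microphone_tasks(orientation):
    """Map tasks to microphone ids (106-1 .. 106-6) for a given orientation."""
    if orientation == "horizontal":
        # top microphones handle hotword detection; side microphones equalize
        return {"hotword_detection": [1, 2, 3], "audio_equalization": [4, 5, 6]}
    # vertical: the microphones now furthest from the base take over hotword duty
    return {"hotword_detection": [4, 5, 6], "audio_equalization": [1, 2, 3]}

before = assign_microphone_tasks("horizontal")
after = assign_microphone_tasks("vertical")
# ids in before["hotword_detection"] but absent from after["hotword_detection"]
# are de-assigned from the task, as in A5
deassigned = set(before["hotword_detection"]) - set(after["hotword_detection"])
```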
(A7) In some implementations of A1-A6, the plurality of device interface elements includes a volume control element, and configuring operation of two or more of the plurality of device interface elements includes configuring operation of the volume control element.
(A8) In some embodiments of A7, when in the first orientation, movement along the volume control element toward a first end of the volume control element corresponds to increasing the volume of the one or more speakers, and configuring operation of the volume control element includes reconfiguring the volume control element such that movement toward the first end corresponds to decreasing the volume of the one or more speakers. In some implementations, the volume control includes a capacitive touch element (e.g., a capacitive touch bar).
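A minimal sketch of the A8 behavior, assuming a signed step size and a boolean swipe direction; both are illustrative parameters rather than details from the claims:

```python
def volume_delta(swipe_toward_first_end: bool, orientation: str, step: int = 5) -> int:
    """Map a swipe along the capacitive volume bar to a signed volume change."""
    if orientation == "horizontal":
        # toward the first end means louder in the horizontal orientation
        return step if swipe_toward_first_end else -step
    # after rotating to vertical, the same physical swipe direction is inverted
    # so that "up" still means louder from the user's point of view
    return -step if swipe_toward_first_end else step
```

In use, the device would reevaluate this mapping whenever the orientation changes, so that the bar's physical ends swap roles (cf. Figs. 7A-7B).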
(A9) In some embodiments of A1-A8, the one or more speakers comprise a plurality of speakers, and configuring two or more of the plurality of device interface elements includes configuring the plurality of speakers (e.g., adjusting the treble and/or bass settings of the speakers).
(A10) In some implementations of A9, the plurality of speakers is configured to operate in a stereo mode when in the first orientation, and configuring the plurality of speakers includes reconfiguring the plurality of speakers to operate in a mono mode. In some implementations, the audio output is transitioned over time upon determining the change in orientation. In some embodiments, the audio output is briefly faded to silence before the subsequent output is reconfigured. In some implementations, different audio filters (e.g., biquad or ladder filters) are used to reconfigure the subsequent output.
(A11) In some implementations of A10, reconfiguring the plurality of speakers to operate in the mono mode includes utilizing only a subset of the plurality of speakers for subsequent audio output. For example, in a vertical orientation, only the upper speakers (e.g., the upper woofer and upper tweeter) are used. In some implementations, the subsequent audio output includes a TTS output or music. In some implementations, the gain of the subset of speakers is increased (e.g., by +6 dB) to compensate for using only the subset.
(A12) In some implementations of A9-A11, reconfiguring the plurality of speakers includes utilizing only a subset of the plurality of speakers for subsequent audio output having audio frequencies above a threshold frequency. In some embodiments, the threshold frequency is 160 Hz. In some embodiments, all woofers are used for bass frequencies, while fewer than all woofers are used for higher frequencies. In some implementations, the subset is selected based on the user's location, the distance from the resting surface, and/or the capabilities of the individual speakers.
(A13) In some embodiments of A9-A12, reconfiguring the plurality of speakers includes: (1) when the volume setting of the audio device is below a volume threshold, utilizing only a subset of the plurality of speakers for subsequent audio output; and (2) when the volume setting is above the volume threshold, utilizing the subset of the plurality of speakers and one or more additional speakers for subsequent audio output. In some implementations, an input/output matrix is used to transition the audio output over time during the changeover.
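The following sketch combines the selection rules of A11-A13 into one function. The 160 Hz bass threshold is from A12 and the +6 dB make-up gain from A11; the speaker names and the numeric volume threshold are assumptions.

```python
BASS_THRESHOLD_HZ = 160.0   # from A12
VOLUME_THRESHOLD = 50       # hypothetical value on a 0-100 volume scale

ALL_SPEAKERS = ["upper_woofer", "upper_tweeter", "lower_woofer", "lower_tweeter"]

def active_speakers(orientation, band_hz, volume):
    """Return (speaker subset, linear gain) for one band of subsequent output."""
    if orientation == "horizontal":
        return ALL_SPEAKERS, 1.0                      # stereo mode, unity gain
    if band_hz < BASS_THRESHOLD_HZ:
        return ["upper_woofer", "lower_woofer"], 1.0  # all woofers carry bass (A12)
    if volume > VOLUME_THRESHOLD:
        return ALL_SPEAKERS, 1.0                      # loud playback re-enables the rest (A13)
    # quiet, non-bass output in the vertical orientation: top subset only (A11),
    # with the +6 dB compensation expressed as a linear amplitude factor
    return ["upper_woofer", "upper_tweeter"], 10 ** (6 / 20)
```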
(A14) In some implementations of A9-A13, the method further includes audio-pairing the audio device with an additional audio device, and configuring the plurality of speakers includes utilizing a first subset of the plurality of speakers when in the first orientation and a second subset when in the second orientation (e.g., the subset of speakers furthest from the additional audio device is utilized in the horizontal orientation, to enhance the devices' surround-sound output, while a different subset (e.g., the topmost speakers) is utilized in the vertical orientation). In some implementations, the audio device is audio-paired with a plurality of additional audio devices, with each device operating in a mono mode such that the devices as a whole achieve a surround-sound effect. In some implementations, all speakers are used in one orientation (e.g., all speakers are used in the vertical orientation). In some implementations, the timing of the audio output at each device is adjusted based on the relative locations of the devices (e.g., to enhance synchronization of the outputs).
(A15) In some embodiments of A1-A14, wherein the plurality of device interface elements comprises a plurality of lighting elements; and wherein configuring the operation of two or more of the plurality of device interface elements comprises adjusting the operation of the plurality of lighting elements. In some embodiments, the plurality of lighting elements comprises a plurality of Light Emitting Diodes (LEDs). In some implementations, adjusting the operation of the lighting elements includes disabling the first subset of the lighting elements and enabling the second subset. In some embodiments, the plurality of lighting elements includes a first row of lighting elements along a first axis and a second row of lighting elements along a second axis different from the first axis. In some implementations, adjusting the lighting elements includes transmitting device state information with a first row of lighting elements when in a first orientation and transmitting device state information with a second row of lighting elements when in a second orientation. In some embodiments, adjusting the operation of the lighting elements includes utilizing only a subset of the lighting elements that are substantially level with the ground.
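As an illustration of A15, the sketch below picks which of two perpendicular LED rows conveys device state, mirroring LEDs 502 and 504 in Figs. 5A-5B; the row names are assumptions.

```python
def configure_leds(orientation):
    """Enable the LED row that is level with the ground; disable the other."""
    rows = {"horizontal": "row_502", "vertical": "row_504"}  # cf. Figs. 5A-5B
    enabled = rows[orientation]
    disabled = rows["vertical" if orientation == "horizontal" else "horizontal"]
    # device state (listening, responding, volume, ...) is shown on the
    # enabled row so the indicator keeps a horizontal appearance
    return {"enable": enabled, "disable": disabled}
```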
(A16) In some embodiments of A1-A15, the audio device further comprises a removable base, and the removable base is configured to couple to two or more sides of the audio device to facilitate positioning the audio device in multiple orientations. In some implementations, the removable base is configured to magnetically couple to respective magnets within a housing of the audio device. In some embodiments, the removable base is composed of silicone. In some implementations, the base is configured to couple only at locations corresponding to valid orientations of the device.
(A17) In some implementations of A1-A16, the audio device further includes a power port, and the audio device is configured such that the power port is proximate to the resting surface of the audio device in both the first orientation and the second orientation (e.g., the power port is in a corner portion of the audio device, between the two sides used for resting the audio device in the two orientations).
(A18) In some embodiments of A1-A17, the audio device further comprises one or more antennas, and the audio device is configured such that the antennas are maintained at least a threshold distance from the resting surface in both the first orientation and the second orientation (e.g., the antennas are disposed opposite the two sides used for resting the audio device in the two orientations).
(A19) In some embodiments of A1-A18, the method further includes: detecting a change in orientation of the audio device from the first orientation to a third orientation; and, in response to detecting the change to the third orientation, presenting an error state to the user, for example, by outputting a message via the one or more speakers indicating that the device is upside down, displaying an error status via one or more LEDs of the device, and/or sending an error alert to the user's client device.
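A sketch of the A19 error handling, under the assumption that the "third orientation" is any resting side that is not a supported orientation (e.g., the device flipped onto its microphones, as in Figs. 9C-9D); the callback names are illustrative.

```python
VALID_ORIENTATIONS = {"horizontal", "vertical"}

def handle_orientation(orientation, speak, set_error_led, notify_client):
    """Present an error state through the channels named in A19."""
    if orientation in VALID_ORIENTATIONS:
        return
    speak("The device is upside down.")         # message via the speakers
    set_error_led(True)                         # error status via the LEDs
    notify_client("Device orientation error")   # alert to the user's client device
```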
In another aspect, some embodiments include an audio device including one or more processors and memory coupled to the one or more processors, the memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described herein (e.g., A1-A19 above).
In yet another aspect, some embodiments include a non-transitory computer-readable storage medium storing one or more programs for execution by one or more processors of an audio device, the one or more programs including instructions for performing any of the methods described herein (e.g., A1-A19 above).
Advantageous effects
Accordingly, devices, storage media, and computing systems are provided with methods for automatically adjusting the operation of a device interface according to changes in orientation, thereby increasing the effectiveness, efficiency, and user satisfaction of such systems. Such methods may supplement or replace conventional methods for audio equalization.
Drawings
For a better understanding of the various described embodiments, reference should be made to the description of the embodiments below in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the figures.
Fig. 1A and 1B illustrate a representative electronic device according to some embodiments.
FIG. 2 is a block diagram illustrating a representative operating environment including a plurality of electronic devices and a server system, according to some embodiments.
Fig. 3 is a block diagram illustrating a representative electronic device, according to some embodiments.
Fig. 4 is a block diagram illustrating a representative server system, according to some embodiments.
Fig. 5A-5B are perspective views illustrating representative electronic devices in different orientations, according to some embodiments.
Fig. 6A-6B are interior views illustrating representative electronic devices in different orientations, according to some embodiments.
Fig. 7A-7B illustrate a representative electronic device having a sliding control element (e.g., volume control) according to some embodiments.
Fig. 8A-8E are exploded views illustrating a representative electronic device, according to some embodiments.
Fig. 9A-9D are perspective views illustrating representative electronic devices in different orientations, according to some embodiments.
Fig. 10A-10B are perspective views illustrating representative electronic devices in different orientations, according to some embodiments.
FIG. 11 is a flow chart illustrating a representative method for orientation-based operation of an audio device, in accordance with some embodiments.
Detailed Description
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments described. It will be apparent, however, to one skilled in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
The present disclosure describes electronic devices that change operation based on orientation, such as audio devices having multiple speakers. For example, the audio device switches between a stereo output mode and a mono output mode based on the orientation. A representative electronic device (e.g., device 100) includes a plurality of device interface elements, such as a volume control (e.g., volume control 702), an LED (e.g., LED assembly 602), and a microphone (e.g., microphone 106). According to some implementations, an electronic device determines its orientation and adjusts the operation of the following components based on the determined orientation: volume control (reversing directionality), LED (activating different subsets of LEDs), and/or microphone (assigning different tasks to subsets of microphones).
Fig. 1A illustrates an electronic device 100 according to some embodiments. The electronic device 100 includes one or more woofers 102 (e.g., 102-1 and 102-2), one or more tweeters 104, and a plurality of microphones 106. In some implementations, the device's speakers include different types, such as the low-frequency woofers 102 and the high-frequency tweeters 104. In some implementations, the speakers 102 are used for frequencies below a frequency threshold, while the speakers 104 are used for frequencies above that threshold. In some embodiments, the frequency threshold is around 1900 Hz (e.g., 1850 Hz, 1900 Hz, or 1950 Hz). In some implementations, the electronic device 100 includes three or more speakers 102. In some implementations, the speakers 102 are arranged in different geometries (e.g., in a triangular configuration). In some implementations, the electronic device 100 does not include any tweeters 104. In some implementations, the electronic device 100 includes fewer than six microphones 106. In some implementations, the electronic device 100 includes more than six microphones 106. In some implementations, the microphones 106 include two or more different types of microphones.
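For illustration, a crossover around the frequency threshold described above could be realized as follows; this is a minimal SciPy sketch assuming fourth-order Butterworth filters and a 48 kHz sample rate, neither of which is specified in the patent.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48_000
CROSSOVER_HZ = 1900  # approximate woofer/tweeter threshold from the text

low_sos = butter(4, CROSSOVER_HZ, btype="lowpass", fs=SAMPLE_RATE, output="sos")
high_sos = butter(4, CROSSOVER_HZ, btype="highpass", fs=SAMPLE_RATE, output="sos")

def split_bands(samples: np.ndarray):
    """Route low frequencies to the woofers 102, highs to the tweeters 104."""
    return sosfilt(low_sos, samples), sosfilt(high_sos, samples)

woofer_feed, tweeter_feed = split_bands(np.random.randn(SAMPLE_RATE))  # 1 s of noise
```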
In fig. 1A, the microphones 106 are arranged in groups of three, with one microphone (e.g., microphone 106-3) located on the front of the electronic device 100 and the other two (e.g., microphones 106-1 and 106-2) located on the side or top of the device. In some implementations, the microphones 106 are disposed at locations within the electronic device 100 other than those shown in fig. 1A. In some implementations, the microphones 106 are grouped differently on the electronic device 100; for example, the microphones 106 are arranged in groups of four, with one microphone on the front of the device 100 and one on the back. In some implementations, the microphones 106 are oriented and/or positioned relative to the speakers 102. For example, one microphone (e.g., 106-3) faces the same direction as the speakers 102, while the other microphones (e.g., 106-1 and 106-2) are perpendicular (or substantially perpendicular) to the orientation of the speakers 102. As another example, one microphone (e.g., 106-3) is placed closer to the speakers 102 than the other microphones (e.g., 106-1 and 106-2). Thus, in some implementations, the microphones 106 are positioned such that phase differences exist in the received audio and can be analyzed to determine room characteristics. In some implementations, the speakers (e.g., speakers 102 and/or 104) are aligned on the same plane (e.g., both facing outward from the front face of the device). In some implementations, the speakers face in different directions (e.g., speaker 102-1 is tilted left and speaker 102-2 is tilted right).
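The phase-difference analysis mentioned here could, for example, be computed from the cross-spectrum of a microphone pair. The sketch below is an assumption about one possible implementation, not the patent's method.

```python
import numpy as np

def phase_difference(mic_a: np.ndarray, mic_b: np.ndarray, fs: float):
    """Return (frequencies, phase difference in radians) for two mic captures."""
    spec_a = np.fft.rfft(mic_a * np.hanning(len(mic_a)))
    spec_b = np.fft.rfft(mic_b * np.hanning(len(mic_b)))
    cross = spec_a * np.conj(spec_b)        # cross-spectrum of the two channels
    freqs = np.fft.rfftfreq(len(mic_a), d=1 / fs)
    return freqs, np.angle(cross)           # phase lag of mic_a relative to mic_b
```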
Fig. 1B illustrates an electronic device 120 according to some embodiments. In some implementations, the electronic device 120 includes microphones 122, an array of illuminators 124 (e.g., LEDs), and one or more speakers located behind a grill 126. Further, the rear side of the electronic device 120 optionally includes a power connector configured to couple to a power source (not shown). In some implementations, the electronic device 120 includes more or fewer microphones 122 than shown in fig. 1B. In some implementations, the microphones 122 are disposed at locations within the electronic device 120 other than those shown in fig. 1B.
In some implementations, the electronic device 100 and/or the electronic device 120 are voice-activated. In some implementations, the electronic device 100 and/or the electronic device 120 present a compact appearance with no visible buttons, and interactions with the device are based on voice and touch gestures. Alternatively, in some implementations, the electronic device 100 and/or the electronic device 120 include a limited number of physical buttons (not shown), and interactions with the device are further based on pressing the buttons, in addition to voice and/or touch gestures.
Fig. 2 is a block diagram illustrating an operating environment 200 including a plurality of electronic devices 100, 120, and 202 and server systems 206, 220, according to some embodiments. The operating environment includes one or more electronic devices 100, 120, and 202 that are located at one or more locations within a defined space, for example, within a single room or space of a structure, or within a defined area of an open space.
Examples of electronic device 202 include electronic device 100, electronic device 120, a handheld computer, a wearable computing device, a Personal Digital Assistant (PDA), a tablet, a laptop, a desktop computer, a cellular telephone, a smart phone, a voice-activated device, an Enhanced General Packet Radio Service (EGPRS) mobile phone, a media player, or a combination of any two or more of these or other data processing devices.
According to some embodiments, electronic devices 100, 120, and 202 are communicatively coupled to server system 206 and intelligent assistant system 220 through communication network 210. In some implementations, at least some of the electronic devices (e.g., devices 100, 120, and 202-1) can be communicatively coupled to a local network 204, which local network 204 can be communicatively coupled to one or more communication networks 210. In some implementations, the local network 204 is a local area network implemented at a network interface (e.g., a router). In some implementations, the electronic devices 100, 120, and 202 communicatively coupled to the local network 204 also communicate with each other through the local network 204. In some implementations, the electronic devices 100, 120, and 202 are communicatively coupled to each other (e.g., without going through the local network 204 or the communication network 210).
Optionally, one or more electronic devices are communicatively coupled to the communication network 210 and not on the local network 204 (e.g., electronic devices 202-N). For example, these electronic devices are not on a Wi-Fi network corresponding to the local network 204, but are connected to the communication network 210 through a cellular connection. In some implementations, communication between electronic devices 100, 120, and 202 located on local network 204 and electronic devices 100, 120, and 202 not located on local network 204 is performed by voice assistance server 224. In some implementations, the electronic device 202 is registered in the device registry 222 and is therefore known to the voice assistance server 224.
In some implementations, the server system 206 includes a front-end server 212, the front-end server 212 facilitating communication between the server system 206 and the electronic devices 100, 120, and 202 via the communication network 210. For example, the front-end server 212 receives audio content (e.g., the audio content is music and/or speech) from the electronic device 202. In some implementations, the front-end server 212 is configured to send information to the electronic device 202. In some implementations, the front-end server 212 is configured to transmit equalization information (e.g., frequency correction). For example, the front-end server 212 transmits equalization information to the electronic device in response to the received audio content. In some implementations, the front-end server 212 is configured to send data and/or hyperlinks to the electronic devices 100, 120, and/or 202. For example, the front-end server 212 is configured to send updates (e.g., database updates) to the electronic device.
In some implementations, the server system 206 includes an equalization module 214, which determines information about the audio signals collected from the electronic devices 202, such as frequency, phase difference, transfer function, feature vector, frequency response, and the like. In some implementations, the equalization module 214 obtains frequency correction data from the correction database 216 to send to the electronic device (e.g., via the front-end server 212). In some embodiments, the frequency correction data is based on the information about the audio signal. In some implementations, the equalization module 214 applies machine learning (e.g., in conjunction with the machine learning database 218) to the audio signal to generate the frequency correction.
In some implementations, the server system 206 includes a correction database 216 that stores frequency correction information. For example, correction database 216 includes pairs of audio feature vectors and corresponding frequency corrections.
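One plausible (hypothetical) realization of such a pairing is a nearest-neighbor lookup over stored feature vectors; the data layout and Euclidean metric below are assumptions.

```python
import numpy as np

correction_database = [
    # (feature vector summarizing the captured audio, per-band gain correction in dB)
    (np.array([0.9, 0.1, 0.3]), np.array([+3.0, 0.0, -2.0])),
    (np.array([0.2, 0.8, 0.5]), np.array([0.0, +1.5, -1.0])),
]

def lookup_correction(feature_vector: np.ndarray) -> np.ndarray:
    """Return the frequency correction whose stored feature vector is closest."""
    distances = [np.linalg.norm(feature_vector - fv) for fv, _ in correction_database]
    return correction_database[int(np.argmin(distances))][1]
```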
In some implementations, the server system 206 includes a machine learning database 218 that stores machine learning information. In some implementations, the machine learning database 218 is a distributed database. In some implementations, the machine learning database 218 includes a deep neural network database. In some implementations, the machine learning database 218 includes supervised training and/or reinforcement training databases.
Fig. 3 is a block diagram illustrating an electronic device 300 according to some embodiments. In some implementations, the electronic device 300 is any one of the electronic devices 100, 120, 202 of fig. 2 or includes any one of the electronic devices 100, 120, 202 of fig. 2. The electronic device 300 includes one or more processors 302, one or more network interfaces 304, memory 306, and one or more communication buses 308 for interconnecting these components (sometimes called a chipset).
In some implementations, the electronic device 300 includes one or more input devices 312 that facilitate audio input and/or user input, such as a microphone 314, buttons 316, and a touch sensor array 318. In some implementations, the microphone 314 includes the microphone 106, the microphone 122, and/or other microphones.
In some implementations, the electronic device 300 includes one or more output devices 322 that facilitate audio output and/or visual output, including one or more speakers 324, LEDs 326 (and/or other types of illuminators), and a display 328. In some implementations, the LEDs 326 include the illuminator 124 and/or other LEDs. In some implementations, speakers 324 include woofer 102, tweeter 104, speakers of device 120, and/or other speakers.
In some implementations, the electronic device 300 includes a radio 320 and one or more sensors 330. The radio 320 enables connection to one or more communication networks and allows the electronic device 300 to communicate with other devices. In some embodiments, the radio 320 is capable of data communication using any of a variety of custom or standard wireless protocols (e.g., IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth Smart, ISA100.11a, WirelessHART, MiWi, etc.), custom or standard wired protocols (e.g., Ethernet, HomePlug, etc.), and/or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
In some implementations, the sensors 330 include one or more motion sensors (e.g., accelerometers), light sensors, positioning sensors (e.g., GPS), and/or audio sensors. In some implementations, the positioning sensor includes one or more position sensors (e.g., passive Infrared (PIR) sensors) and/or one or more orientation sensors (e.g., gyroscopes).
Memory 306 includes high-speed random access memory such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally nonvolatile memory such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other nonvolatile solid state storage devices. Memory 306 optionally includes one or more storage devices remote from the one or more processors 302. The memory 306 or alternatively the non-volatile memory within the memory 306 includes a non-transitory computer-readable storage medium. In some implementations, the memory 306 or a non-transitory computer readable storage medium of the memory 306 stores the following programs, modules, and data structures, or a subset or superset thereof:
Operational logic 332, including processes for handling various basic system services and for performing hardware-related tasks;
a user interface module 334 for providing and displaying a user interface in which settings, captured data including hotwords, and/or other data for one or more devices (e.g., electronic device 300 and/or other devices) may be configured and/or viewed;
a radio communication module 336 for connecting to and communicating with other network devices (e.g., the local network 204, such as a router providing internet connectivity, networked storage devices, network routing devices, the server system 206, the intelligent assistant system 220, etc.) coupled to one or more communication networks 210 via one or more communication interfaces 304 (wired or wireless);
an audio output module 338 for determining and/or presenting audio signals (e.g., in conjunction with speaker 324), such as adjusting operational settings of the speaker;
a microphone module 340 for acquiring and/or analyzing audio signals (e.g., in conjunction with microphone 314);
a positioning module 344 for obtaining and/or analyzing positioning information (e.g., orientation and/or location information), such as in conjunction with the sensors 330;
Equalization module 346 for equalizing audio output of electronic device 300, including but not limited to:
an audio analysis submodule 3461 for analyzing audio signals collected from an input device (e.g., microphone), e.g., determining audio properties (e.g., frequency, phase shift, and/or phase difference) and/or generating a Fast Fourier Transform (FFT) of audio frequencies;
a correction sub-module 3462 for obtaining frequency corrections from the correction database 352 and/or applying frequency corrections to the electronic device 300;
a transfer function sub-module 3463 for determining a feature vector, an acoustic transfer function (relating audio output to audio input) and/or a frequency response of the electronic device 300 using the analyzed audio signal; and
a weighting submodule 3464 for assigning different weights to the individual audio signals and/or audio properties (e.g., phase differences and/or signal-to-noise ratios);
a training module 348 for generating and/or training an audio model and optionally a fingerprint audio event associated with the electronic device 300;
a device database 350 for storing information associated with electronic device 300, including, but not limited to:
sensor information 3501 associated with the sensor 330;
device settings 3502 for electronic device 300 such as default options and preferred user settings; and
Communication protocol information 3503 specifying a communication protocol to be used by the electronic device 300;
a correction database 352 for storing frequency correction information; and
a machine learning database 354 for storing machine learning information.
In some implementations, correction database 352 includes the following data sets, or a subset or superset thereof:
location data (e.g., positioning of microphones and/or speakers) corresponding to different locations and/or orientations of associated audio devices;
vector data, including phase shifts, phase differences, and/or feature vectors corresponding to different locations and/or orientations of the associated audio device;
weight information including weights assigned to different signal-to-noise ratios, microphones, microphone pairs, and/or microphone locations;
training audio, including training data (e.g., white noise, pink noise, etc.) used to construct correction database 352; and
correction data for storing information for correcting an audio frequency response of an audio device, including but not limited to:
a frequency response comprising frequency responses and/or feature vectors corresponding to different locations and/or orientations of the audio device;
frequency correction corresponding to each frequency response.
According to some embodiments, the machine learning database 354 includes the following data sets, or a subset or superset thereof:
neural network data, including information corresponding to the operation of one or more neural networks, including, but not limited to:
positioning information, including information (e.g., feature vectors) corresponding to different locations and/or orientations of the audio device; and
correction data corresponding to the positioning information.
Each of the above identified modules is optionally stored in one or more storage devices described herein and corresponds to a set of instructions for performing the functions described above. The above identified modules or programs need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some implementations, the memory 306 stores a subset of the modules and data structures identified above. Further, memory 306 optionally stores additional modules and data structures not described above (e.g., modules for hotword detection and/or speech recognition in a speech-enabled intelligent speaker). In some implementations, a subset of the programs, modules, and/or data stored in the memory 306 is stored on and/or executed by the server system 206 and/or the voice assistance server 224.
Fig. 4 is a block diagram illustrating a server system 206 according to some embodiments. According to some embodiments, the server system 206 includes one or more processors 402, one or more network interfaces 404, memory 410, and one or more communication buses 408 for interconnecting these components (sometimes called a chipset).
The server system 206 optionally includes one or more input devices 406 that facilitate user input, such as a keyboard, mouse, voice command input unit or microphone, touch screen display, touch sensitive tablet, gesture capture camera, or other input buttons or controls. In some implementations, the server system 206 optionally uses microphone and voice recognition or camera and gesture recognition to supplement or replace the keyboard. The server system 206 optionally includes one or more output devices 408 that enable presentation of user interfaces and display content, such as one or more speakers and/or one or more visual displays.
Memory 410 includes high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; optionally, nonvolatile memory such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other nonvolatile solid state storage devices. Memory 410 optionally includes one or more storage devices remote from the one or more processors 402. Memory 410, or alternatively non-volatile memory within memory 410, includes a non-transitory computer-readable storage medium. In some implementations, the memory 410 or a non-transitory computer readable storage medium of the memory 410 stores the following programs, modules, and data structures, or a subset or superset thereof:
An operating system 416, including processes for handling various basic system services and performing hardware-related tasks;
a front end 212 for communicatively coupling the server system 206 to other devices (e.g., electronic devices 100, 120, and 202) via a network interface 404 (wired or wireless) and one or more networks (such as the internet, other wide area networks, local area networks, metropolitan area networks, etc.);
a user interface module 420 for enabling presentation of information (e.g., a graphical user interface for presenting applications, widgets, websites and their web pages, games, audio and/or video content, text, etc.) on a server system or electronic device;
a device registration module 422 for registering devices (e.g., electronic device 300) for use with server system 206;
equalization module 424 for equalizing audio output of electronic devices (e.g., electronic device 300), including, but not limited to:
an audio analysis submodule 4241 for analyzing audio signals collected from an electronic device (e.g., electronic device 300), e.g., determining audio properties (e.g., frequency, phase shift, and/or phase difference) and/or generating a Fast Fourier Transform (FFT) of audio frequencies;
a correction sub-module 4242 for obtaining frequency corrections from correction database 216 and/or applying frequency corrections to electronic device 300;
A transfer function sub-module 4243 for determining a feature vector, an acoustic transfer function (relating audio output to audio input) and/or a frequency response of the electronic device 300 using the analyzed audio signal; and
a weighting submodule 4244 for assigning different weights to the individual audio signals and/or audio attributes (e.g., phase differences and/or signal-to-noise ratios);
a training module 426 for generating and/or training an audio model and optionally a fingerprint audio event associated with the electronic device 300;
server system data 428 storing data associated with the server system 206, including, but not limited to:
client device settings 4281, including device settings for one or more electronic devices (e.g., electronic device 300), such as general device settings (e.g., service tier, device model, storage capacity, processing capability, communication capability, etc.), and information for automatic media display control;
audio device settings 4282, including audio settings for audio devices associated with server system 206 (e.g., electronic device 300), such as general and default settings (e.g., volume settings for speakers and/or microphones, etc.); and
voice assistance data 4283 for voice-activated devices and/or user accounts of the voice assistance server 224, such as account access information and device information (e.g., service tier, device model, storage capacity, processing capability, communication capability, etc.) for one or more electronic devices 300;
A correction database 216 storing frequency correction information, such as correction database 352 described above; and
a machine learning database 218 storing machine learning information, such as the machine learning database 354 described above.
In some implementations, the server system 206 includes a notification module (not shown) for generating alerts and/or notifications for a user of the electronic device. For example, in some implementations, the correction database is stored locally on the user's electronic device, and the server system 206 may generate a notification to alert the user to download the latest version or update to the correction database.
Each of the above-identified elements may be stored in one or more storage devices described herein and correspond to a set of instructions for performing the functions described above. The above identified modules or programs need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some implementations, the memory 410 optionally stores a subset of the modules and data structures identified above. Further, memory 410 may optionally store additional modules and data structures not described above.
Fig. 5A-5B are perspective views illustrating an electronic device 500 in different orientations, according to some embodiments. Fig. 5A illustrates a device 500 (e.g., electronic device 100) having a horizontal display of LEDs 502 (e.g., a first subset of LEDs 326) in a horizontal orientation. Fig. 5B illustrates a device 500 having a horizontal display of LEDs 504 (e.g., a second subset of LEDs 326) in a vertical orientation. According to some embodiments, the LEDs 502 are arranged perpendicular to the LEDs 504.
Fig. 6A-6B are interior views illustrating electronic device 500 in different orientations according to some embodiments. In particular, fig. 6A shows the device 500 in a horizontal orientation, and fig. 6B shows the device 500 in a vertical orientation. Fig. 6A-6B also illustrate an apparatus 500 that includes a speaker 102, a speaker 104, a speaker baffle 604, and an LED assembly 602 (e.g., including an LED board and LEDs 502 and 504). According to some embodiments, the LED component 602 is positioned to minimize occlusion of the speaker 102 (e.g., to minimize degradation of audio output by the speaker).
Fig. 7A-7B illustrate an electronic device 500 having a sliding control element (e.g., volume control) according to some embodiments. Fig. 7A shows an electronic device 500 with a volume control 702 in a horizontal orientation. According to some implementations, volume control 702 is configured such that a sliding input (e.g., a sliding from left to right) toward second end 706 of volume control 702 corresponds to a user request to increase volume. Fig. 7B shows the electronic device 500 with the volume control 702 in a vertical orientation. According to some implementations, volume control 702 is configured such that a sliding input (e.g., sliding up) toward first end 704 of volume control 702 corresponds to a user request to increase volume.
Fig. 8A-8E are exploded views illustrating a representative electronic device, according to some embodiments. As shown in fig. 8A-8E, the device 500 includes a housing 804 and a grill 822, the housing 804 and grill 822 configured to couple together and enclose the speaker baffle 604, speakers 102 and 104, stiffener 814, power source 812, capacitive touch pad 808, motherboard 830, antenna 810, magnets 832, and microphones 802. In some implementations, a system on a chip, a controller, and/or a processor (e.g., processor 302) is mounted on the motherboard 830. In some implementations, the motherboard 830 includes control circuitry for the power source 812, the antenna 810, the microphones 802, the speakers 102, and/or the speakers 104. In some implementations, the motherboard 830 includes an accelerometer for determining the orientation of the device 500.
According to some embodiments, the device 500 further includes a base 806, e.g., configured to magnetically couple to one or more magnets in the housing 804. In some embodiments, the base 806 includes a silicone pad. In some embodiments, the housing 804 includes subsets of the magnets 832 on two sides of the housing 804 for coupling the base 806 in both the horizontal orientation and the vertical orientation. In some implementations, the magnets 832 are disposed on the sides opposite the microphones 802 (e.g., such that the microphone apertures 822 are not blocked by the resting surface of the device 500). In some embodiments, the magnets 832 include a single magnet on each of two or more sides. In some embodiments, the magnets 832 are embedded in the housing 804. In some embodiments, a portion of the housing 804 is adapted to magnetically couple to the base 806 (e.g., is composed of a magnetic material).
In some implementations, the microphone 802 is the microphone 106. In some implementations, the housing 804 includes a microphone aperture 822 and a power port 820. In some implementations, the device 500 includes a plurality of stiffeners, such as stiffener 814, configured to provide structural support and prevent vibration of the speaker. In some embodiments, antenna 810 includes one or more antennas mounted on a circuit board and/or one or more antennas mounted on housing 804. In some implementations, the antenna 810 is positioned to maximize the distance between the metal components of the device (e.g., speaker 102) and the antenna to minimize signal interference.
Fig. 9A-9D are perspective views illustrating the electronic device 100 in different orientations, according to some embodiments. Fig. 9A shows the electronic device 100 in a horizontal orientation. According to some embodiments, as shown in fig. 9A, in the horizontal orientation, the left speakers (e.g., speakers 102-1 and 104-1) are assigned to the stereo left audio output (sometimes also called the left channel), and the right speakers (e.g., speakers 102-2 and 104-2) are assigned to the stereo right audio output (sometimes also called the right channel). According to some embodiments, as shown in fig. 9A, in the horizontal orientation, the right microphones (e.g., one or more of microphones 106-4, 106-5, and 106-6) are assigned to auto-equalization, while the top microphones (e.g., one or more of microphones 106-1, 106-2, and 106-3) are assigned to hotword detection.
Fig. 9B shows the electronic device 100 in a vertical orientation. According to some embodiments, as shown in fig. 9B, in the vertical orientation, the upper speakers (e.g., speakers 102-2 and 104-2) are assigned to the mono audio output, while the lower speakers (e.g., speakers 102-1 and 104-1) are selectively disabled, enabled only for bass frequencies, or enabled only at volume levels above a volume threshold. According to some embodiments, as shown in fig. 9B, in the vertical orientation, the left microphones (e.g., one or more of microphones 106-1, 106-2, and 106-3) are assigned to auto-equalization, while the top microphones (e.g., one or more of microphones 106-4, 106-5, and 106-6) are assigned to hotword detection. In some implementations, the lower tweeter 104-1 is disabled in the vertical orientation. In some implementations, the lower tweeter 104-1 is disabled in the vertical orientation when the volume level is below the volume threshold. In some embodiments, the lower woofer 102-1 is disabled in the vertical orientation. In some implementations, the lower woofer 102-1 is disabled for non-bass frequencies (e.g., frequencies above 160 hertz (Hz)) when in the vertical orientation, i.e., it outputs only audio frequencies below 160 Hz. In some implementations, the lower woofer 102-1 is disabled (or disabled for non-bass frequencies) in the vertical orientation when the volume level is below the volume threshold.
Fig. 9C and 9D illustrate the electronic device 100 in orientations that result in one or more microphones (and optionally one or more antennas 810) being proximate to the resting surface. Close proximity to the resting surface may cause interference with the microphones and antennas. According to some implementations, the electronic device 100 is configured to alert the user to such non-optimal placement. In some implementations, the device alerts the user to the non-optimal placement in response to the user activating the device, in response to a wake-up signal, and/or in response to detecting the change in orientation.
Fig. 10A-10B are perspective views illustrating electronic devices 100 in different orientations, according to some embodiments. Fig. 10A shows devices 100-1 and 100-2 in a horizontal orientation. According to some implementations, the devices 100 are coupled and configured to operate in a surround sound mode. As shown in fig. 10A, according to some embodiments, device 100-1 is configured to output audio on its left speakers (e.g., speakers 102-1 and 104-1), while its right speakers (e.g., speakers 102-2 and 104-2) are disabled or output only bass frequencies. As shown in fig. 10A, according to some embodiments, device 100-2 is configured to output audio on its right speakers (e.g., speakers 102-2 and 104-2), while its left speakers (e.g., speakers 102-1 and 104-1) are disabled or output only bass frequencies (e.g., as described above with reference to fig. 9B). In this way, the surround sound effect can be enhanced. In some implementations, each device 100 outputs audio from each of its speakers. In some implementations, device 100-1 is configured such that the right tweeter 104-2 is disabled and the right woofer 102-2 is enabled. In some implementations, device 100-2 is configured such that the left tweeter 104-1 is disabled and the left woofer 102-1 is enabled. In some implementations, the devices 100 determine their relative positioning and operate the appropriate speakers based on the determination.
FIG. 10B shows devices 100-1 and 100-2 in a vertical orientation. According to some implementations, the devices 100 are coupled and configured to operate in a surround sound mode. As shown in fig. 10B, according to some embodiments, each device 100 is configured to output audio on its upper speakers (e.g., speakers 102-2 and 104-2), while its lower speakers (e.g., speakers 102-1 and 104-1) are disabled or output only bass frequencies (e.g., as described above with reference to fig. 9B). In some implementations, each device 100 outputs audio from each of its speakers. In some implementations, each device 100 is configured such that the upper tweeter is disabled and the upper woofer is enabled. In some implementations, the devices 100 determine their relative positioning and operate the appropriate speakers based on the determination. In some implementations, device 100-1 is configured to output audio corresponding to the left stereo channel, while device 100-2 is configured to output audio corresponding to the right stereo channel.
Fig. 11 is a flow chart illustrating a method 1100 for orientation-based operation of an audio device, according to some embodiments. In some implementations, the method 1100 is performed by an audio device, such as the audio device 100, the audio device 500, or another electronic device 300. In some implementations, the method 1100 is performed by components of the electronic device 300, such as the positioning module 344 and the audio output module 338, in conjunction with the input devices 312 and the output devices 322. In some embodiments, the operations of the method 1100 described herein are interchangeable, and respective operations of the method 1100 are performed by any of the aforementioned devices. In some implementations, the method 1100 is governed by instructions stored in a non-transitory computer-readable storage medium (e.g., within the memory 306) and executed by one or more processors or controllers of a device, such as the processor 302 of the electronic device 300. For convenience, the method 1100 is described below as being performed by an audio device (e.g., the electronic device 500) that includes one or more speakers and a plurality of microphones.
In some implementations, the audio device operates in a first orientation (e.g., a horizontal orientation) (1102). In some implementations, the first orientation corresponds to the audio device being positioned on the first side (e.g., as shown in fig. 5A). In some implementations, the operation in the first orientation includes outputting the audio content while in the first orientation. In some implementations, the operation in the first orientation includes receiving user input via one or more device interface elements while in the first orientation.
The audio device detects a change in the orientation of the audio device from a first orientation to a second orientation (1104). In some implementations, the audio device includes an accelerometer and the accelerometer is utilized to detect a change in orientation. In some implementations, the audio device determines its orientation in response to activation (e.g., powering on or waking up) by the user. In some implementations, the audio device periodically checks its orientation by comparing its current orientation to its previous orientation and detects a change in orientation.
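By way of example, a resting orientation could be classified from the accelerometer's gravity vector as sketched below; the axis convention and tolerance are assumptions for illustration.

```python
def classify_orientation(gravity_xyz, tolerance=0.7):
    """Classify the resting orientation from an (x, y, z) reading in g units."""
    x, y, _ = gravity_xyz
    if y < -tolerance:
        return "horizontal"   # gravity along -y: resting on the long side
    if x < -tolerance:
        return "vertical"     # gravity along -x: resting on the short side
    return "invalid"          # some other side, e.g., Figs. 9C-9D

# periodic check: compare the current classification with the previous one
previous = "horizontal"
current = classify_orientation((-0.98, 0.05, 0.10))  # device now on its short side
orientation_changed = current != previous            # True: horizontal -> vertical
```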
In some implementations, the second orientation corresponds to the audio device being positioned on a second side (e.g., resting on the second side) that is different from the first side (e.g., a vertical orientation as shown in fig. 5B). For example, a change in orientation corresponds to a user rotating the device from a horizontal position to a vertical position.
In response to detecting the change in orientation, the audio device configures operation of two or more of the plurality of device interface elements (1108). In some implementations, the plurality of device interface elements includes one or more of: one or more microphones (e.g., microphones 106, 314, or 802), one or more speakers (e.g., speakers 102 and/or 104), one or more lighting elements (e.g., LEDs 326, 502, and/or 504), one or more slider controls (e.g., volume control 702), and the like. In some implementations, configuring two or more of the plurality of device interface elements includes reconfiguring one or more of the device interface elements. In some implementations, in addition to configuring operation of the device interface elements, the device performs automatic equalization based on detecting the change in orientation. For example, the device detects a change in orientation and adjusts its speaker settings based on both an audio equalization operation and the reconfiguration of the device interface elements.
In some implementations, the audio device assigns a first microphone (e.g., microphone 106-3) to a task based on the change in orientation (1110). In some implementations, the task includes one or more of the following (1112): hotword detection, speech recognition, and audio equalization. In some implementations, when the audio device is in the second orientation, the audio device identifies the first microphone as being on a top surface of the audio device and assigns the first microphone to the task (e.g., hotword detection) based on the identification. In some implementations, the audio device assigns the task to the microphone with the least interference. In some implementations, the audio device assigns multiple microphones (e.g., microphones 106-1, 106-2, and 106-3) to a task (e.g., multiple microphones assigned to automatic equalization). In some implementations, a first subset of the microphones is assigned to a first task (e.g., hotword detection) and a second subset of the microphones is assigned to a second task (e.g., audio equalization).
In some implementations, configuring the two or more device interface elements includes de-assigning a second microphone from the task (1114). For example, in the first orientation, both a first microphone (e.g., microphone 106-3) and a second microphone (e.g., microphone 106-1) are assigned to hotword detection; upon the change to the second orientation, the second microphone faces the resting surface and is accordingly de-assigned from the task.
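A minimal sketch of this assignment and de-assignment, assuming a hypothetical per-microphone record of which face of the housing each microphone sits on in each orientation (the record layout and function name are assumptions, not from the described implementation):

def reassign_microphones(mics, orientation):
    """Assign the top-surface microphone to hotword detection, spread
    the rest across audio equalization, and de-assign microphones that
    are blocked by the resting surface."""
    tasks = {"hotword": [], "equalization": []}
    for mic in mics:
        side = mic["side_when"][orientation]  # "top", "bottom", "front", ...
        if side == "bottom":
            continue                          # de-assigned: faces the surface
        if side == "top" and not tasks["hotword"]:
            tasks["hotword"].append(mic["id"])  # least interference for hotwords
        else:
            tasks["equalization"].append(mic["id"])
    return tasks

mics = [
    {"id": "106-1", "side_when": {"horizontal": "top", "vertical": "bottom"}},
    {"id": "106-2", "side_when": {"horizontal": "front", "vertical": "front"}},
    {"id": "106-3", "side_when": {"horizontal": "side", "vertical": "top"}},
]
print(reassign_microphones(mics, "vertical"))
# -> {'hotword': ['106-3'], 'equalization': ['106-2']}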
In some implementations, configuring the two or more device interface elements includes configuring operation of a volume control element (e.g., volume control 702) (1116). In some embodiments, when the device is in the first orientation, movement along the volume control element toward a first end of the volume control element corresponds to increasing the volume of the one or more speakers. In some implementations, configuring the volume control element includes reconfiguring the volume control element such that movement along the volume control element toward the first end of the volume control element corresponds to decreasing the volume of the one or more speakers. In some implementations, the volume control is a capacitive touch element (e.g., a capacitive touch bar). In some implementations, the device includes one or more sliding elements, such as a volume control, a brightness control, and/or a bass-boost control.
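The remapping of the capacitive volume bar can be expressed as a small pure function. In this sketch, touch positions are assumed to be normalized to [0, 1] from a fixed physical end of the bar; the function name and orientation labels are illustrative.

def slider_to_volume_delta(touch_pos, prev_pos, orientation):
    """Convert a swipe along the volume bar into a signed volume change.

    When the device is flipped into the second orientation, the bar's
    physical direction is inverted relative to the user, so the sign of
    the mapping flips to keep the perceived "louder" direction stable.
    """
    delta = touch_pos - prev_pos
    if orientation == "vertical":  # second orientation: inverted mapping
        delta = -delta
    return delta                   # positive -> louder, negative -> quieter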
In some embodiments, configuring the two or more device interface elements includes configuring operation of one or more speakers (e.g., speaker 102 and/or speaker 104) (1118). For example, the speakers are configured to adjust the treble, bass, and/or amplification of the audio they output. As an example, when the device is in the first orientation, the plurality of speakers is configured to operate in a stereo mode, and configuring the plurality of speakers comprises reconfiguring the plurality of speakers to operate in a mono mode. In some implementations, the audio output is transitioned over time when a change in orientation is determined. In some embodiments, the audio output is briefly faded to silence before the subsequent output is reconfigured. In some implementations, a different audio filter (e.g., a biquad or ladder filter) is used for the reconfigured subsequent output. In some implementations, the treble and bass settings of the speakers are controlled by software executing on the device (e.g., the audio output module 338 executing on the processor 302).
In some implementations, reconfiguring the plurality of speakers to operate in the mono mode includes utilizing only a subset of the plurality of speakers for subsequent audio output (e.g., to minimize destructive interference between speaker outputs). For example, in the vertical orientation, only the upper speakers (e.g., the upper woofer and upper tweeter) are used, as shown in fig. 9B. In some implementations, the subsequent audio output includes a text-to-speech (TTS) output or music. In some implementations, the gain of the subset of speakers is increased (e.g., by 4, 5, or 6 dB) to compensate for using only the subset. In some embodiments, one or more tweeters are disabled and the remaining tweeters operate with higher gain to compensate, while the woofers continue to operate in the same manner as before the reconfiguration.
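A sketch of such a transition in Python/NumPy is given below; the fade length (480 samples, about 10 ms at 48 kHz) and the +5 dB compensation are example values within the ranges mentioned above, and the function names are assumptions.

import numpy as np

def fade_out(block, fade_samples=480):
    """Briefly fade the tail of the outgoing output to silence."""
    block = block.copy()
    block[-fade_samples:] *= np.linspace(1.0, 0.0, fade_samples)
    return block

def mono_downmix(left, right, gain_db=5.0):
    """Downmix subsequent stereo content for a single-speaker subset,
    boosted by ~5 dB to compensate for the disabled speakers."""
    gain = 10.0 ** (gain_db / 20.0)       # dB -> linear amplitude
    return np.clip(0.5 * (left + right) * gain, -1.0, 1.0)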
In some implementations, reconfiguring the plurality of speakers includes utilizing only a subset of the plurality of speakers for subsequent audio outputs having audio frequencies above a threshold frequency. In some embodiments, the threshold frequency is 140 Hz, 160 Hz, or 200 Hz. In some implementations, all woofers (e.g., speakers 102) are used for bass frequencies, while fewer than all woofers are used for higher frequencies. In some implementations, the subset is selected based on the user's location, the distance from the resting surface, and/or the capabilities of the individual speakers. For example, if the user is located on the left side of the device, the leftmost speakers are used, while if the user is located on the right side of the device, the rightmost speakers are used.
In some implementations, reconfiguring the plurality of speakers includes: (1) when the volume setting of the audio device is below a volume threshold, utilizing only a subset of the plurality of speakers for subsequent audio output; and (2) when the volume setting of the audio device is above the volume threshold, utilizing the subset of the plurality of speakers and one or more additional speakers for subsequent audio output. In some implementations, the volume threshold corresponds to a maximum volume setting of the subset of speakers. In some embodiments, the volume threshold is 6 dB, 3 dB, or 1 dB below the maximum volume of the speakers. In some implementations, an input/output mixing matrix is used to transition the audio output over time during the reconfiguration.
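The two thresholds just described (a crossover frequency and a volume threshold) can be combined into a single routing decision, sketched below with example values from the ranges above; the speaker-group labels and function name are illustrative, and the volume is assumed to be expressed in dB relative to the subset's maximum (0 dB).

def route_band(volume_db, band_center_hz,
               subset_max_volume_db=-3.0, crossover_hz=160.0):
    """Decide which speaker group carries a band of the next output."""
    if band_center_hz < crossover_hz:
        return ["all_woofers"]        # bass is shared by all woofers
    if volume_db <= subset_max_volume_db:
        return ["preferred_subset"]   # quiet enough for the subset alone
    return ["preferred_subset", "additional_speakers"]  # enlist extra drivers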
In some implementations, the audio device is audio-paired with an additional audio device. In some implementations, configuring the plurality of speakers includes utilizing a first subset of the plurality of speakers when in the first orientation and utilizing a second subset of the plurality of speakers when in the second orientation (e.g., utilizing the subset of speakers farthest from the additional audio device when in a horizontal orientation, to enhance the surround sound output, and utilizing a different subset when in a vertical orientation, e.g., the uppermost speakers, to minimize interference with the resting surface). In some implementations, the audio device is audio-paired with a plurality of additional audio devices, and each device operates in a mono mode such that the audio devices as a group achieve a surround sound effect. In some implementations, all speakers are used in a given orientation (e.g., all speakers are used in the vertical orientation). In some implementations, the timing of the audio output at each device is adjusted based on the relative locations of the devices (e.g., to enhance synchronization of the outputs).
In some implementations, configuring the two or more device interface elements includes adjusting operation of a plurality of lighting elements (e.g., LEDs 502 and 504) (1120). In some embodiments, operation of the lighting elements is controlled by a lighting control circuit (e.g., mounted on a lighting control board, such as the LED assembly 602).
In some embodiments, the plurality of lighting elements comprises a plurality of light-emitting diodes (LEDs). In some implementations, adjusting operation of the lighting elements includes disabling a first subset of the lighting elements and enabling a second subset. In some implementations, the plurality of lighting elements includes a first row of lighting elements (e.g., LEDs 502) along a first axis and a second row of lighting elements (e.g., LEDs 504) along a second axis different from the first axis. In some embodiments, adjusting the lighting elements comprises conveying device status information with the first row of lighting elements when in the first orientation and with the second row of lighting elements when in the second orientation. In some embodiments, adjusting operation of the lighting elements includes utilizing only a subset of the lighting elements that is substantially level with the ground.
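For example, choosing which LED row conveys status might reduce to selecting the row whose axis remains level after the rotation; a sketch, with illustrative row names and axis labels:

def select_status_led_rows(orientation, rows):
    """Return the LED rows that read level with the ground.

    `rows` maps a row name to the housing axis it runs along, e.g.
    {"leds_502": "x", "leds_504": "y"}; names and axes are assumptions.
    """
    level_axis = "x" if orientation == "horizontal" else "y"
    return [name for name, axis in rows.items() if axis == level_axis]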
In some implementations, the audio device further includes a removable base (e.g., base 806), and the removable base is configured to couple to two or more sides of the audio device to facilitate positioning the audio device in a plurality of orientations. In some implementations, the removable base is configured to magnetically couple to respective magnets within a housing (e.g., housing 804) of the audio device. In some embodiments, the removable base is composed of silicone. In some implementations, the base is configured to couple only at locations corresponding to valid orientations of the device.
In some implementations, the audio device includes a power port, and the audio device is configured such that the power port is proximate to the resting surface of the audio device in both the first orientation and the second orientation. For example, the power port is located at a corner portion of the audio device, between the two sides used for resting the audio device, in both orientations, e.g., as shown in fig. 8B.
In some implementations, the audio device includes one or more antennas (e.g., antenna 810), and the audio device is configured such that the antennas are maintained at least a threshold distance from the resting surface of the audio device in both the first orientation and the second orientation. For example, the antennas are arranged opposite the two sides used for resting the audio device in both orientations, as shown in fig. 8A.
In some implementations, the audio device detects a change in the orientation of the audio device from the first orientation to a third orientation, and, in response to detecting the change to the third orientation, presents an error state to the user. For example, an audio message "device inverted" is output via the one or more speakers, an error status is displayed via one or more LEDs of the device, and/or an error alert is sent to the user's client device.
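A sketch of this error handling, where speak, set_led_pattern, and notify_client stand in for whatever output paths the device exposes (all three callback names are assumptions):

def handle_orientation_change(new_orientation, speak, set_led_pattern,
                              notify_client):
    """Surface an error state when the device lands in an unsupported pose."""
    if new_orientation == "invalid":          # e.g., device placed upside down
        speak("Device inverted")              # audio message via the speakers
        set_led_pattern("error")              # error pattern on the LEDs
        notify_client("Device is inverted; please reposition it.")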
In some implementations, the audio device detects a change in the orientation of the audio device from the second orientation to the first orientation. For example, the audio device detects a change from a vertical orientation back to a horizontal orientation and reconfigures the device interface elements accordingly.
Although some of the various figures show a number of logical steps in a particular order, order-independent steps may be reordered and other steps may be combined or broken down. Although some reordering or other groupings are specifically mentioned, other groupings will be apparent to those of ordinary skill in the art and, therefore, the ordering and groupings presented herein are not an exhaustive list of alternatives. Furthermore, it should be appreciated that these steps may be implemented in hardware, firmware, software, or any combination thereof.
For cases in which the systems discussed above collect information about users, the users may be provided with an opportunity to opt in or out of programs or features that may collect personal information (e.g., information about a user's preferences or smart device usage). In addition, in some embodiments, certain data may be anonymized in one or more ways before being stored or used, so that personally identifiable information is removed. For example, a user's identity may be anonymized so that personally identifiable information cannot be determined for, or associated with, the user, and user preferences or user interactions may be generalized (e.g., based on user demographics) rather than associated with a particular user.
It will be further understood that, although the terms first, second, etc. may be used herein in some instances to describe various elements, these elements should not be limited by these terms. These terms are used only to distinguish one element from another. For example, a first electronic device could be termed a second electronic device, and similarly, a second electronic device could be termed a first electronic device, without departing from the scope of the various described embodiments. The first electronic device and the second electronic device are both electronic devices, but they are not the same electronic device.
The terminology used in the various described embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "if" is optionally interpreted as referring to "when … …" or "at … …" or "in response to determination … …" or "in response to detection … …" or "in accordance with determination … …" depending on the context. Similarly, the phrase "if determination … …" or "if [ the condition or event ] is detected" is optionally interpreted depending on the context to mean "at determination … …" or "in response to determination … …" or "at detection of [ the condition or event ]" or "in response to detection of [ the condition or event ]" or "in accordance with detection of [ the condition or event ]".
The foregoing description has, for purposes of explanation, been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen in order to best explain the principles underlying the claims and their practical application, thereby enabling others skilled in the art to best utilize the embodiments with such modifications as are suited to the particular uses contemplated.

Claims (15)

1. A method performed at an audio device having one or more processors, memory, a volume control element located on a predefined side of the audio device, and at least three speakers, the method comprising:
detecting a change in the orientation of the audio device from a first orientation to a second orientation, wherein:
the first orientation corresponds to the audio device being positioned on a first side of the audio device;
the second orientation corresponds to the audio device being positioned on a second side of the audio device different from the first side; and
when in the first orientation, a first touch interaction associated with a first end of the volume control element corresponds to increasing a volume of the at least three speakers; and
configuring, based on the change in orientation, the volume control element such that a second touch interaction associated with the first end of the volume control element corresponds to decreasing the volume of the at least three speakers.
2. The method of claim 1, further comprising:
operating the audio device in the first orientation prior to detecting the change in orientation.
3. The method of claim 1, further comprising:
assigning a first microphone of a plurality of microphones of the audio device to a task based on the change in orientation.
4. The method of claim 3, further comprising:
de-assigning a second microphone of the plurality of microphones from the task based on the change in orientation.
5. The method of claim 1, wherein the first touch interaction is a movement along the volume control element toward the first end of the volume control element.
6. The method of claim 1, wherein the second touch interaction is a movement along the volume control element toward the first end of the volume control element.
7. The method of claim 1, further comprising:
configuring the at least three speakers to operate in a mono mode.
8. The method of claim 7, wherein configuring the at least three speakers to operate in mono mode comprises utilizing only a subset of the at least three speakers for subsequent audio output.
9. An audio device, comprising:
a volume control element positioned on a predefined side of the audio device;
at least three speakers;
at least one processor; and
a memory coupled to the at least one processor, the memory storing one or more programs configured to be executed by the at least one processor, the one or more programs comprising instructions for:
detecting a change in the orientation of the audio device from a first orientation to a second orientation, wherein:
the first orientation corresponds to the audio device being positioned on a first side of the audio device;
the second orientation corresponds to the audio device being positioned on a second side of the audio device different from the first side; and
when in the first orientation, a first touch interaction associated with a first end of the volume control element corresponds to increasing a volume of the at least three speakers; and
configuring, based on the change in orientation, the volume control element such that a second touch interaction associated with the first end of the volume control element corresponds to decreasing the volume of the at least three speakers.
10. The audio device of claim 9, wherein the one or more programs further comprise instructions for:
operating the audio device in the first orientation prior to detecting the change in orientation.
11. The audio device of claim 9, further comprising:
a plurality of microphones;
wherein the one or more programs further comprise instructions for:
assigning a first microphone of the plurality of microphones to a task based on the change in orientation.
12. The audio device of claim 11, wherein the one or more programs further comprise instructions for:
de-assigning a second microphone of the plurality of microphones from the task based on the change in orientation.
13. The audio device of claim 9, wherein the first touch interaction is a movement along the volume control element toward the first end of the volume control element.
14. The audio device of claim 9, wherein the second touch interaction is a movement along the volume control element toward the first end of the volume control element.
15. The audio device of claim 9, wherein the one or more programs further comprise instructions for:
configuring the at least three speakers to operate in a mono mode.