CN111866674B - Speaker assembly control method, device and storage medium - Google Patents


Info

Publication number
CN111866674B
CN111866674B (application CN201910341203.XA)
Authority
CN
China
Prior art keywords
terminal
sensor
user
head
acquiring
Prior art date
Legal status
Active
Application number
CN201910341203.XA
Other languages
Chinese (zh)
Other versions
CN111866674A (en)
Inventor
项吉
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910341203.XA
Publication of CN111866674A
Application granted
Publication of CN111866674B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R9/00 Transducers of moving-coil, moving-strip, or moving-wire type
    • H04R9/06 Loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R9/00 Transducers of moving-coil, moving-strip, or moving-wire type
    • H04R9/02 Details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2400/00 Loudspeakers
    • H04R2400/11 Aspects regarding the frame of loudspeaker transducers

Abstract

The disclosure provides a loudspeaker assembly control method, belonging to the technical field of audio processing. The method comprises the following steps: acquiring sensor data collected by a sensor in a terminal; acquiring a relative position relationship between the head of a user of the terminal and the terminal according to the sensor data; acquiring a target parameter from at least two loudspeaker control parameters according to the relative position relationship; and controlling at least two loudspeaker assemblies in the terminal according to the target parameter. The terminal can thus control the at least two loudspeaker assemblies, according to the position of the user's head relative to the terminal, to achieve a better sound field effect at that position, so that the user experiences a better sound effect at different positions relative to the terminal, improving the sound production effect of the loudspeaker assemblies.

Description

Speaker assembly control method, device and storage medium
Technical Field
The present disclosure relates to the field of audio processing technologies, and in particular, to a method and an apparatus for controlling a speaker assembly, and a storage medium.
Background
With the continued popularization of terminals such as smart phones, improving the user experience of the terminal is a problem that manufacturers constantly try to solve.
In the related art, given the continuous development of applications such as games and audio/video, and the increasing richness of the game, video, and audio content a terminal can provide, many terminal manufacturers choose to equip the terminal with higher-performance speaker assemblies so as to provide a better audio service experience for its users.
Disclosure of Invention
The present disclosure provides a speaker assembly control method, apparatus, and storage medium. The technical solution is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a speaker assembly control method, the method including:
acquiring sensor data acquired by a sensor in a terminal, wherein the sensor comprises at least one of a distance sensor and a motion sensor;
acquiring a relative position relation between the head of a user of the terminal and the terminal according to the sensor data;
acquiring target parameters from at least two loudspeaker control parameters according to the relative position relation;
and controlling at least two loudspeaker assemblies in the terminal according to the target parameters.
Optionally, the sensor comprises a distance sensor and a motion sensor;
the acquiring of the relative positional relationship between the head of the user of the terminal and the terminal according to the sensor data includes:
acquiring the distance between the terminal and the head of the user according to the distance sensor data acquired by the distance sensor;
acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor;
and acquiring the relative position relation between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion track of the terminal.
Optionally, the motion sensor includes a gravity sensor and a gyroscope sensor;
the acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor comprises:
acquiring the movement acceleration of the terminal according to the gravity sensor data acquired by the gravity sensor;
acquiring angular acceleration of the terminal according to gyroscope data acquired by the gyroscope sensor;
and acquiring the motion trail of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
Optionally, the obtaining the relative position relationship between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion trajectory of the terminal includes:
determining a spatial region where the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal; the spatial region is a region divided in a space with the terminal as a reference;
and acquiring the relative position relation between the head of the user and the terminal according to the space region where the head of the user is located.
Optionally, the obtaining a target parameter from at least two speaker control parameters according to the relative position relationship includes:
inquiring the loudspeaker control parameter corresponding to the relative position relation from the at least two loudspeaker control parameters;
and acquiring the inquired loudspeaker control parameters as the target parameters.
Optionally, the speaker control parameter includes at least one of the following parameters:
a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
Optionally, the distance sensor comprises at least one of the following sensors:
proximity sensors, ultrasonic ranging sensors, and time-of-flight (TOF) sensors.
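The four steps of the first aspect amount to a lookup-driven control loop: derive a relative position relation from sensor data, query the matching control parameter, and apply it. The following Python sketch is purely illustrative — the region names, distance thresholds, and parameter table are invented for the example and do not appear in the patent.

```python
# Illustrative thresholds (cm) separating "near", "mid", and "far/side" regions.
NEAR_CM, FAR_CM = 20, 50

# One entry per relative-position relation; each maps to a hypothetical set of
# speaker control parameters (phase calibration, loudness gain, ...).
SPEAKER_PARAMS = {
    "near_front":  {"phase_cal_deg": 0,  "loudness_gain_db": -3},
    "mid_front":   {"phase_cal_deg": 15, "loudness_gain_db": 0},
    "far_or_side": {"phase_cal_deg": 30, "loudness_gain_db": 3},
}

def classify_relative_position(distance_cm, facing_user):
    """Map distance-sensor and motion-derived data to a position relation."""
    if not facing_user or distance_cm >= FAR_CM:
        return "far_or_side"
    return "near_front" if distance_cm < NEAR_CM else "mid_front"

def select_target_params(distance_cm, facing_user):
    """Steps of the first aspect: derive the relation, then query the table."""
    relation = classify_relative_position(distance_cm, facing_user)
    return SPEAKER_PARAMS[relation]
```

In use, the selected dictionary would be handed to the speaker assemblies' control circuits; the table itself would be populated during per-position acoustic tuning.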
According to a second aspect of the embodiments of the present disclosure, there is provided a speaker assembly control apparatus, the apparatus including:
the sensor data acquisition module is used for acquiring sensor data acquired by a sensor in the terminal, wherein the sensor comprises at least one of a distance sensor and a motion sensor;
the position relation acquisition module is used for acquiring the relative position relation between the head of the user of the terminal and the terminal according to the sensor data;
the parameter acquisition module is used for acquiring target parameters from at least two loudspeaker control parameters according to the relative position relation;
and the control module is used for controlling at least two loudspeaker assemblies in the terminal according to the target parameters.
Optionally, the sensor comprises a distance sensor and a motion sensor; the position relation obtaining module includes:
the distance acquisition submodule is used for acquiring the distance between the terminal and the head of the user according to the distance sensor data acquired by the distance sensor;
the motion track acquisition submodule is used for acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor;
and the position relation acquisition submodule is used for acquiring the relative position relation between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion track of the terminal.
Optionally, the motion sensor includes a gravity sensor and a gyroscope sensor; the motion trail obtaining submodule comprises:
the mobile acceleration acquisition submodule is used for acquiring the mobile acceleration of the terminal according to the gravity sensor data acquired by the gravity sensor;
the angular acceleration acquisition submodule is used for acquiring the angular acceleration of the terminal according to the gyroscope data acquired by the gyroscope sensor;
and the track acquisition submodule is used for acquiring the motion track of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
Optionally, the position relationship obtaining sub-module includes:
the area determining submodule is used for determining a space area where the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal; the spatial region is a region divided in a space with the terminal as a reference;
and the relation acquisition submodule is used for acquiring the relative position relation between the head of the user and the terminal according to the space region where the head of the user is located.
Optionally, the parameter obtaining module includes:
the area query submodule is used for querying the loudspeaker control parameters corresponding to the relative position relation from the at least two loudspeaker control parameters;
and the parameter acquisition submodule is used for acquiring the inquired loudspeaker control parameters as the target parameters.
Optionally, the speaker control parameter includes at least one of the following parameters:
a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
Optionally, the distance sensor comprises at least one of the following sensors:
proximity sensors, ultrasonic ranging sensors, and time-of-flight (TOF) sensors.
According to a third aspect of the embodiments of the present disclosure, there is provided a speaker assembly control apparatus, the apparatus including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring sensor data acquired by a sensor in a terminal, wherein the sensor comprises at least one of a distance sensor and a motion sensor;
acquiring a relative position relation between the head of a user of the terminal and the terminal according to the sensor data;
acquiring target parameters from at least two loudspeaker control parameters according to the relative position relation;
and controlling at least two loudspeaker assemblies in the terminal according to the target parameters.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium containing executable instructions, which are invoked by a processor in a terminal to implement the speaker assembly control method according to the first aspect or any one of the alternatives of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the relative position relation between the head of the user and the terminal is determined through sensor data collected by a sensor in the terminal, a proper target parameter is selected from at least two speaker control parameters according to the relative position relation, at least two speaker assemblies are controlled through the selected target parameter, namely, the terminal can control the at least two speaker assemblies to achieve a better sound field effect at the position according to the position of the head of the user relative to the terminal, so that the head of the user can experience better sound effect at different positions relative to the terminal, and the sound production effect of the speaker assemblies is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of a hardware structure of a terminal according to an exemplary embodiment;
FIG. 2 is a schematic diagram of a stereo field according to the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of the sound signal conditioning involved in the embodiment of FIG. 1;
FIG. 4 is a schematic diagram of the sound sensitivity and phase difference involved in the embodiment of FIG. 1;
fig. 5 is a method flow diagram of a speaker assembly control method provided in accordance with an exemplary embodiment;
FIG. 6 is a method flow diagram of a speaker assembly control method provided in accordance with an exemplary embodiment;
fig. 7 is a schematic structural diagram of a terminal according to the embodiment shown in fig. 6;
FIG. 8 is a schematic diagram of the relative position of the terminal and the head of the user according to the embodiment shown in FIG. 6;
fig. 9 is a method flow diagram of a speaker assembly control method provided in accordance with an exemplary embodiment;
FIG. 10 is a schematic view of the embodiment of FIG. 9 showing the relative position of the terminal with respect to the head of the user;
fig. 11 is a block diagram illustrating a speaker assembly control apparatus according to an exemplary embodiment;
FIG. 12 is a block diagram illustrating a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more, and "a plurality" means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The solution shown in the embodiments of the present disclosure can be applied to a terminal having at least two speaker assemblies and a sensor. For example, the terminal may be a mobile terminal such as a smartphone, a tablet computer, an e-book reader, a mobile game device, and the like.
Please refer to fig. 1, which is a schematic diagram of a hardware structure of a terminal according to an exemplary embodiment. As shown in fig. 1, at least two speaker assemblies 101 (shown as two speaker assemblies in fig. 1) are included in the terminal 100.
Wherein the at least two speaker assemblies 101 may be disposed at different positions in the terminal 100. For example, taking fig. 1 as an example, the two speaker assemblies 101 may be respectively disposed at the top and bottom of the terminal 100.
Alternatively, each speaker assembly 101 may consist of a speaker unit (speaker box) and a control circuit, where speaker units may be classified by sound production principle into moving-coil (electrodynamic), capacitive (electrostatic), piezoelectric (crystal or ceramic), electromagnetic (compression spring type), electric-ion, pneumatic, and other types.
The control circuit may include a Power Amplifier (PA) unit, a filter unit, and the like.
The power amplifier unit may include one or more power amplifiers. Optionally, the power amplifier may be a Smart power amplifier (Smart PA).
The Filter unit may comprise one or more filters (filters), for example, the Filter unit may be a Filter combination circuit composed of a plurality of filters.
In the embodiment of the present disclosure, at least two speaker assemblies 101 in the terminal may create a specific sound effect in the space around the terminal through the control of the control circuit.
For example, according to the binaural effect, the farther apart the at least two speaker-box sound sources of the terminal are, the wider the sound field produced by each of them, and the more clearly the user hears stereo sound. Please refer to fig. 2 and fig. 3, where fig. 2 shows a stereo field schematic diagram and fig. 3 shows a sound signal adjustment schematic diagram according to an embodiment of the disclosure. As shown in fig. 2 and fig. 3, two audio signals are processed by the filter combination circuits after the Smart PA and then emitted through the speaker units (speaker boxes). Because a filter combination circuit can change the phase and other characteristics of a signal, such as its sensitivity (also called loudness or volume), two differently configured filter combination circuits can shape the two audio signals into sound signals with a large difference in phase and in sensitivity over part of the frequency range; for example, in fig. 3, after the two audio signals pass through different filter combination circuits, an obvious phase difference between them can be seen. Referring to fig. 4, which shows a schematic diagram of sound sensitivity and phase difference according to an embodiment of the present disclosure, the two sound signals debugged by the filter combination circuits show a large sensitivity difference and phase difference over frequencies from 400 Hz to 7 kHz. With large phase and sensitivity differences, the user subjectively perceives a clear sense of stereo and sound field; that is, the user more readily perceives the sounds emitted by two or more speaker boxes as distinct.
Based on the above principle, various sound field effects can be created through combinations of analog or digital filters; for example, in fig. 2, the width of the sound field played by the terminal is expanded, improving the listening experience so that the user enjoys a better sound effect when playing games or watching movies.
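The phase manipulation described above can be illustrated numerically with a first-order all-pass filter, a standard filter whose magnitude response is flat while its phase varies with its coefficient. The coefficients, sample rate, and test frequency below are arbitrary values chosen for the illustration, not values from the patent.

```python
import numpy as np

def allpass_response(a, f_hz, fs=48_000):
    """Frequency response at f_hz of the first-order all-pass
    H(z) = (a + z^-1) / (1 + a * z^-1), for a real coefficient a."""
    z = np.exp(2j * np.pi * f_hz / fs)
    return (a + 1 / z) / (1 + a / z)

# Feed the same 1 kHz tone through two differently configured filters:
f = 1_000.0
h1 = allpass_response(0.2, f)
h2 = allpass_response(0.7, f)

# Both filters pass the tone at unit gain, but with different phase shifts,
# mimicking how two filter combination circuits create a phase difference
# between the two speaker channels.
phase_diff_rad = float(np.angle(h1) - np.angle(h2))
```

In a real terminal this shaping would be done per-band by the filter combination circuits after the Smart PA; the single-frequency evaluation here only demonstrates the phase-without-loudness-change property.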
In one possible implementation, based on the schemes shown in fig. 2 and fig. 3, a developer may determine the position of the user's head during typical use of the terminal (for example, a position at a specified distance directly in front of the terminal's screen) according to the most common daily usage posture, debug, for that position, a control parameter that yields a good sound field effect for the at least two speaker assemblies 101, and have the at least two speaker assemblies subsequently produce sound according to this pre-debugged control parameter.
However, the user may adopt different postures while using the terminal, such as sitting, lying, leaning, or standing, and body shape and the way the terminal is held also vary, so the user's head may not always be at the position with the best sound field effect. For example, at some positions the user perceives the sound emitted by the terminal as coming from two or more speaker units (that is, the user can hear stereo sound), while at other positions the user cannot hear stereo sound.
To solve the above problem, the present disclosure further provides a scheme for controlling at least two speaker assemblies in a terminal. In this scheme, multiple speaker control parameters can be set for the at least two speaker assemblies, where each speaker control parameter enables the at least two speaker assemblies to achieve a good sound field effect (such as a relatively obvious stereo effect) at a specific position relative to the terminal. When controlling the at least two speaker assemblies to produce sound, the terminal can select a suitable speaker control parameter according to the current position of the user's head relative to the terminal, so that the user's head experiences a good sound effect at different positions relative to the terminal.
In one possible implementation, the terminal may determine the relative position between the user's head and the terminal through built-in sensors. Please refer to fig. 5, which is a flowchart illustrating a method for controlling a speaker assembly according to an exemplary embodiment. Wherein the method may be performed by a terminal comprising at least two speaker assemblies and a sensor. As shown in fig. 5, the speaker assembly control method may include the steps of:
in step 501, sensor data collected by a sensor in a terminal is acquired.
In one possible example, the sensor may include at least one of a distance sensor and a motion sensor.
In step 502, the relative position relationship between the head of the user of the terminal and the terminal is obtained according to the sensor data.
In step 503, a target parameter is obtained from the at least two speaker control parameters according to the relative positional relationship.
In step 504, at least two speaker assemblies in the terminal are controlled according to the target parameter.
Optionally, the sensor comprises the distance sensor and the motion sensor;
the acquiring the relative position relationship between the head of the user of the terminal and the terminal according to the sensor data includes:
acquiring the distance between the terminal and the head of the user according to the distance sensor data acquired by the distance sensor;
acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor;
and acquiring the relative position relation between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion track of the terminal.
Optionally, the motion sensor comprises a gravity sensor and a gyroscope sensor;
the obtaining of the motion trajectory of the terminal according to the motion sensor data collected by the motion sensor includes:
acquiring the movement acceleration of the terminal according to the gravity sensor data acquired by the gravity sensor;
acquiring the angular acceleration of the terminal according to the gyroscope data acquired by the gyroscope sensor;
and acquiring the motion trail of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
Optionally, the obtaining of the relative position relationship between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion trajectory of the terminal includes:
determining a spatial region where the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal; the spatial region is a region divided in a space with the terminal as a reference;
and acquiring the relative position relation between the head of the user and the terminal according to the space region where the head of the user is located.
Optionally, the obtaining a target parameter from at least two speaker control parameters according to the relative position relationship includes:
inquiring the loudspeaker control parameter corresponding to the relative position relation from the at least two loudspeaker control parameters;
and acquiring the inquired loudspeaker control parameters as the target parameters.
Optionally, the speaker control parameter includes at least one of the following parameters:
a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
Optionally, the distance sensor comprises at least one of the following sensors:
proximity sensors, ultrasonic ranging sensors, and time-of-flight (TOF) sensors.
To sum up, in the scheme shown in this embodiment of the present disclosure, the relative positional relationship between the user's head and the terminal is determined from sensor data collected by a sensor in the terminal, a suitable target parameter is selected from at least two speaker control parameters according to this relationship, and the at least two speaker assemblies are controlled with the selected target parameter. That is, the terminal can control the at least two speaker assemblies, according to the position of the user's head relative to the terminal, to achieve a better sound field effect at that position, so that the user experiences a better sound effect at different positions relative to the terminal, improving the sound production effect of the speaker assemblies.
Wherein the scheme shown in fig. 5 above may be executed by a processor unit in the terminal. For example, please refer to fig. 6, which is a flowchart illustrating a method for controlling a speaker assembly according to an exemplary embodiment. Wherein the method may be performed by a terminal comprising at least two speaker assemblies and a sensor. As shown in fig. 6, the speaker assembly control method may include the steps of:
in step 601, sensor data collected by a sensor in a terminal is acquired.
In the embodiment of the present disclosure, a processor unit for controlling at least two speaker assemblies may be provided in the terminal, the processor unit may be connected to the control circuit of each speaker assembly, and the processor unit may further acquire sensor data acquired by a sensor in the terminal.
In a possible implementation manner, the processor unit may be electrically connected to a sensor in the terminal to obtain sensor data acquired by the sensor in the terminal in real time.
For example, please refer to fig. 7, which shows a schematic structural diagram of a terminal according to an embodiment of the present disclosure. As shown in fig. 7, the terminal 70 includes a processor unit 71, at least two speaker assemblies 72 (two speaker assemblies are shown in fig. 7), and a sensor 73.
The speaker assembly 72 includes a smart power amplifier 72a, a filter unit 72b, and a speaker unit 72c. The speaker units 72c of the two speaker assemblies are disposed at different positions in the terminal 70, while the smart power amplifiers 72a and filter units 72b of the two speaker assemblies may be disposed at different positions in the terminal 70 or adjacently; for example, the smart power amplifiers 72a and filter units 72b of the two speaker assemblies may be disposed on the same circuit board of the terminal 70.
The processor unit 71 is electrically connected to at least two speaker assemblies 72 and the sensor 73, respectively.
The processor unit 71 may include a dedicated processor for controlling audio output, for example, a Digital Signal Processor (DSP) for controlling audio output. In another possible implementation, the processor unit 71 may also include a general-purpose processor.
In another possible implementation manner, the processor unit 71 may also indirectly acquire sensor data acquired by a sensor in the terminal. For example, the processor unit 71 may be a digital signal processor DSP for controlling audio output, the DSP being electrically connected to at least two speaker assemblies 72 and an application processor in the terminal 70, respectively, and the application processor being electrically connected to the sensor 73. The sensors 73 send the collected sensor data to the application processor, which forwards the sensor data to the DSP.
In one possible example, the sensor may include at least one of a distance sensor and a motion sensor.
Distance sensors and motion sensors are commonly used sensors in terminals such as smart phones. In embodiments of the present disclosure, the terminal may reuse existing distance sensors and/or motion sensors to determine the relative positional relationship between the user's head and the terminal.
Taking the example that the sensor includes the distance sensor and the motion sensor, the step of the terminal in the embodiment of the present disclosure acquiring the relative positional relationship between the head of the user of the terminal and the terminal according to the sensor data may be as shown in the subsequent step.
In step 602, the distance between the terminal and the head of the user is obtained according to the distance sensor data collected by the distance sensor.
Wherein the distance sensor may be arranged at the front side of the terminal, e.g. near the top of the front side of the terminal.
In this embodiment of the disclosure, after the processor unit in the terminal acquires the distance sensor data collected by the distance sensor, it may determine, from that data, the distance from an obstacle in front of the terminal to the terminal's front face, and may also determine whether that obstacle is the user's head.
For example, when the distance sensor is a proximity sensor (P-sensor) on the front face of the terminal, the proximity sensor may periodically emit an infrared beam toward the front of the terminal and receive the light returned by an obstacle. The distance between the obstacle and the terminal is then determined from the difference between the emission time and the return time of the infrared beam, calculated from the propagation speed of light and the time difference.
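The time-of-flight calculation just described reduces to distance = speed × time ÷ 2, since the beam travels to the obstacle and back. A minimal sketch (the 2 ns round trip is an arbitrary example value, not from the patent):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def round_trip_distance(dt_seconds):
    """Distance to the obstacle from an optical round-trip time.
    The light covers the terminal-to-obstacle path twice, hence the /2."""
    return SPEED_OF_LIGHT_M_S * dt_seconds / 2.0

# A round trip of about 2 ns corresponds to roughly 0.3 m.
d_m = round_trip_distance(2e-9)
```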
In step 603, the motion trajectory of the terminal is obtained according to the motion sensor data collected by the motion sensor.
In the embodiment of the present disclosure, the motion sensor includes a gravity sensor (G-sensor) and a gyro sensor, and accordingly, the motion trajectory of the terminal may be divided into a movement trajectory (i.e., a movement trajectory of a central point of the terminal) and a turning trajectory (i.e., a change in orientation of the terminal), which may be obtained by the gravity sensor and the gyro sensor, respectively. For example, the scheme for the processor unit to obtain the motion trajectory of the terminal may be as follows:
the processor unit acquires the movement acceleration of the terminal according to the gravity sensor data acquired by the gravity sensor, acquires the angular acceleration of the terminal according to the gyroscope data acquired by the gyroscope sensor, and acquires the motion track of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
The gravity sensor can acquire the movement acceleration of the terminal in each direction at each time point, and the gyroscope sensor can acquire the angular acceleration of the terminal about each rotation axis at each time point. Combining the movement acceleration and angular acceleration at the current time point with the orientation, angular velocity, center position, and movement velocity of the terminal at the previous time point, the processor unit can calculate the orientation, angular velocity, center position, and movement velocity of the terminal at the current time point; combining the calculation results of successive time points yields the motion trajectory of the terminal.
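The integration scheme described above can be sketched as a simple dead-reckoning step in Python (a simplification that tracks three position axes but only a single rotation axis, and ignores sensor noise and gravity compensation; all names are illustrative):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TerminalState:
    position: List[float]  # center position [x, y, z] in cm
    velocity: List[float]  # movement velocity per axis in cm/s
    yaw: float             # orientation about one rotation axis, in rad
    yaw_rate: float        # angular velocity, in rad/s

def step_state(state: TerminalState, accel: List[float],
               ang_accel: float, dt: float) -> TerminalState:
    """One dead-reckoning step: integrate linear acceleration into velocity
    and position, and angular acceleration into yaw rate and yaw."""
    new_velocity = [v + a * dt for v, a in zip(state.velocity, accel)]
    new_position = [p + v * dt for p, v in zip(state.position, new_velocity)]
    new_yaw_rate = state.yaw_rate + ang_accel * dt
    new_yaw = state.yaw + new_yaw_rate * dt
    return TerminalState(new_position, new_velocity, new_yaw, new_yaw_rate)
```

Calling `step_state` once per sample and recording each returned state produces the motion trajectory referred to in the text.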
In step 604, the relative position relationship between the head of the user and the terminal is obtained according to the distance between the terminal and the head of the user and the motion track of the terminal.
When a user moves or rotates the terminal during use, the head usually stays still while the position and orientation of the terminal are adjusted; that is, the motion trajectory of the terminal can be regarded as the trajectory of the terminal relative to the user's head. Therefore, in the embodiment of the present disclosure, the processor unit of the terminal may determine the relative positional relationship between the head of the user and the terminal at each time point by combining the distance between the terminal and the head of the user with the motion trajectory of the terminal.
For example, referring to fig. 8, which shows a schematic diagram of a relative position relationship between a terminal and a head of a user according to an embodiment of the present disclosure, as shown in fig. 8, 5 relative position relationships may be preset in the terminal, that is, the terminal is located at a central position, a left position, a right position, a forward position, and a backward position.
For example, suppose the terminal presets the center point of the center position at 20 cm directly in front of the terminal screen, with the front, back, left, and right boundaries of the center position no more than 5 cm from that center point. When the distance obtained by the processor unit through the distance sensor places the user's head more than 5 cm in front of the center point, the head is determined to have moved forward relative to the center position (i.e., it is in the forward position); when the distance places the head more than 5 cm behind the center point, the head is determined to have moved backward (i.e., it is in the backward position). Accordingly, when the processor unit determines through the motion trajectory that the head is more than 5 cm to the left of the center point, the head can be determined to have moved left relative to the center position (i.e., it is in the left position); when the trajectory places the head more than 5 cm to the right of the center point, the head can be determined to have moved right (i.e., it is in the right position). When the distance acquired from the distance sensor data, or the motion trajectory, places the head within 5 cm of the center point, the head can be considered to be in the center position.
In one possible implementation, since the processor unit obtains the front-back offset distance and the left-right offset distance of the user's head relative to the center position at the same time, when determining the relative positional relationship between the head and the terminal, the processor unit uses whichever of the two offset distances has the larger absolute value. For example, if the processor unit determines that the user's head is offset 6 cm forward and 8 cm to the left of the center position, it determines that the relative positional relationship is the left position; conversely, if the head is offset 8 cm forward and 6 cm to the left, it determines that the relative positional relationship is the forward position.
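The dominant-axis rule can be stated in a few lines of Python (a sketch of the rule only; the 5 cm threshold and sign conventions follow the example above, and the position names are illustrative):

```python
def classify_position(forward_offset_cm: float, left_offset_cm: float,
                      threshold_cm: float = 5.0) -> str:
    """Map the head's offsets from the preset center point to one of the
    five relative positions; when outside the center box, the axis with
    the larger absolute offset decides."""
    if abs(forward_offset_cm) <= threshold_cm and abs(left_offset_cm) <= threshold_cm:
        return "center"
    if abs(left_offset_cm) > abs(forward_offset_cm):
        return "left" if left_offset_cm > 0 else "right"
    return "forward" if forward_offset_cm > 0 else "backward"
```

With the two worked examples from the text, offsets of (6 cm forward, 8 cm left) yield the left position and (8 cm forward, 6 cm left) yield the forward position.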
In step 605, a target parameter is obtained from the at least two speaker control parameters according to the relative positional relationship.
Optionally, the processor unit may query the speaker control parameter corresponding to the relative positional relationship from the at least two speaker control parameters, and acquire the queried speaker control parameter as the target parameter.
In this embodiment of the present disclosure, a processor unit of the terminal may preset a corresponding relationship between at least two speaker control parameters and at least two relative position relationships, and after the relative position relationships are obtained, may query corresponding speaker control parameters through the relative position relationships, and obtain the queried speaker control parameters as target parameters.
In step 606, at least two speaker assemblies in the terminal are controlled according to the target parameters.
In this disclosure, the speaker control parameters may include control parameters corresponding to at least two speaker components, respectively, and the processor unit of the terminal may control the at least two speaker components according to the control parameters corresponding to the at least two speaker components, respectively, in the target parameters.
Taking a smart phone as an example of the terminal, based on the above scheme of the embodiment of the present disclosure, the control of at least two speaker assemblies in the smart phone can be realized in the following manner:
first, because the sound field effect depends strongly on the position of the human ear relative to the two sound sources, developers match the sound field in advance through different digital filters, and tune control parameters for the sound field expansion effect at the following five positions: the center position, the forward position, the backward position, the left position, and the right position.
Secondly, a sensor monitoring (sensor fusion) function is added, and the distance between the mobile phone and the user, that is, the distance to the position of the user's head, is detected through a proximity sensor. The proximity sensor ranges by using the difference between the directly emitted light and the light reflected back from the obstacle. In the embodiment of the disclosure, the P-Sensor can detect the distance between the head and the handset, while a gyroscope and a gravity sensor can be used to determine the approximate position of the head to the left and right.
Thirdly, data from a gyroscope (angular velocity) and a gravity sensor (linear acceleration) are added. When a user playing games or watching movies on the mobile phone moves to the left or right, or indeed to any position, the combination of the gyroscope (for example, a six-axis gyroscope) and the G-Sensor can accurately determine the position of the head relative to the phone and the current movement trajectory; combined with the distance detected by the P-Sensor, the P-Sensor, the gyroscope, and the G-Sensor together form a sensor fusion system.
Fourth, when the sensor fusion system detects a different positional relationship of the user's head relative to the terminal, assume the whole model is divided into the five positions shown in fig. 8 above (more scenes and positions can be defined): the center position, the forward position, the backward position, the left position, and the right position. The sound effect parameters (i.e., the speaker control parameters) corresponding to these five positions are written into the DSP (i.e., the processor unit) of the mobile phone. When the user uses the phone, the DSP determines the current position from the sensor fusion system and calls up the corresponding parameters, so that an excellent sound field and stereo effect can be provided regardless of the position in which the user holds the phone; and because the existing sensors in the phone are reused, no hardware cost is added.
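The four steps above can be condensed into one iteration of a control loop; a minimal Python sketch under the same five-position model (the 20 cm center distance and 5 cm threshold follow the example of fig. 8; the offset sign conventions and parameter table are simplifying assumptions):

```python
def control_loop_step(distance_cm: float, left_offset_cm: float,
                      params_by_position: dict,
                      center_cm: float = 20.0, threshold_cm: float = 5.0):
    """One iteration of a hypothetical control loop: classify the head
    position from fused sensor readings, then pick the parameter set
    that would be written to the DSP."""
    forward_offset_cm = center_cm - distance_cm  # closer than 20 cm -> moved forward
    if abs(forward_offset_cm) <= threshold_cm and abs(left_offset_cm) <= threshold_cm:
        position = "center"
    elif abs(left_offset_cm) > abs(forward_offset_cm):
        position = "left" if left_offset_cm > 0 else "right"
    else:
        position = "forward" if forward_offset_cm > 0 else "backward"
    return position, params_by_position[position]
```

In a real terminal this step would run continuously on fresh sensor readings, with the returned parameter set handed to the DSP.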
To sum up, according to the scheme shown in the embodiment of the present disclosure, the relative position relationship between the head of the user and the terminal is determined according to the sensor data collected by the sensor in the terminal, and a suitable target parameter is selected from at least two speaker control parameters according to the relative position relationship, and at least two speaker assemblies are controlled through the selected target parameter, that is, the terminal can control the at least two speaker assemblies to achieve a better sound field effect at the position according to the position of the head of the user relative to the terminal, so that the head of the user can both experience a better sound effect at different positions relative to the terminal, and the sound production effect of the speaker assemblies is improved.
Please refer to fig. 9, which is a flowchart illustrating a method for controlling a speaker assembly according to an exemplary embodiment. Wherein the method may be performed by a terminal comprising at least two speaker assemblies and a sensor. As shown in fig. 9, the speaker assembly control method may include the steps of:
in step 901, sensor data collected by a sensor in a terminal is acquired.
In step 902, the distance between the terminal and the head of the user is obtained according to the distance sensor data collected by the distance sensor.
Optionally, the distance sensor comprises at least one of the following sensors:
proximity sensors, ultrasonic ranging sensors, and time-of-flight ranging TOF sensors.
In step 903, the motion trajectory of the terminal is obtained according to the motion sensor data collected by the motion sensor.
The execution process of steps 901 to 903 may refer to the execution process of steps 701 to 703 in the embodiment shown in fig. 7, and details of the embodiment of the present disclosure are not repeated.
In step 904, determining a spatial region where the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal; the spatial region is a region divided in a space with the terminal as a reference.
In the embodiment of the present disclosure, in addition to determining the relative positional relationship between the head of the user and the terminal in the manner shown in fig. 8 described above, the terminal may be provided with more relative positional relationships. For example, the processor unit may pre-divide the space around the terminal, taking the terminal as a reference, into a plurality of regions; at each time point, the processor unit determines, according to the distance between the terminal and the head of the user and the motion trajectory of the terminal, in which of the pre-divided regions the head of the user is located at the current time point.
In step 905, the relative position relationship between the head of the user and the terminal is obtained according to the spatial region where the head of the user is located.
After determining to which of the plurality of pre-divided regions the spatial region of the head of the user belongs, the processor unit may determine the relative positional relationship corresponding to the region.
For example, the processor unit may set a relative positional relationship between the head of the user and the terminal in each of the plurality of spatial regions divided in advance, and after the terminal determines the spatial region where the head of the user is located at a certain time point, the relative positional relationship between the head of the user and the terminal may be determined according to the spatial region.
For example, please refer to fig. 10, which shows a schematic diagram of the relative positional relationship between a terminal and the head of a user according to an embodiment of the present disclosure. As shown in fig. 10, the spatial region directly in front of the terminal may be divided into 9 regions, region 1 to region 9, each corresponding to one relative positional relationship: region 1 corresponds to the front-left position, region 2 to the forward position, region 3 to the front-right position, region 4 to the left position, region 5 to the center position, region 6 to the right position, region 7 to the rear-left position, region 8 to the backward position, and region 9 to the rear-right position. When the processor unit determines that the head of the user is in region 1 relative to the terminal, the relative positional relationship may be determined to be the front-left position; when it determines that the head of the user is in region 2, the relative positional relationship may be determined to be the forward position; and so on.
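The 3×3 grid of fig. 10 can be expressed as a lookup over two bucketed offsets; a minimal Python sketch (the 5 cm cell size, sign conventions, and label strings are assumptions for illustration):

```python
# Region labels laid out as in fig. 10: regions 1-3 on the top row,
# 4-6 in the middle, 7-9 on the bottom.
GRID_LABELS = [
    ["front-left", "forward",  "front-right"],
    ["left",       "center",   "right"],
    ["rear-left",  "backward", "rear-right"],
]

def region_label(left_right_cm: float, front_back_cm: float,
                 cell_cm: float = 5.0) -> str:
    """Map head offsets from the center point onto the 3x3 grid.

    left_right_cm > 0 means the head is shifted to the terminal's left;
    front_back_cm > 0 means the head is shifted toward the terminal.
    """
    col = 0 if left_right_cm > cell_cm else (2 if left_right_cm < -cell_cm else 1)
    row = 0 if front_back_cm > cell_cm else (2 if front_back_cm < -cell_cm else 1)
    return GRID_LABELS[row][col]
```

A head 6 cm left and 6 cm forward of the center point thus lands in region 1 (the front-left position), matching the example in the text.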
In step 906, a target parameter is obtained from the at least two speaker control parameters according to the relative positional relationship.
In step 907, at least two speaker assemblies in the terminal are controlled according to the target parameters.
The execution process of steps 906 to 907 may refer to the execution process of steps 705 to 706 in the embodiment shown in fig. 7, and details of the embodiment of the present disclosure are not repeated.
Optionally, the speaker control parameter includes at least one of the following parameters: a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
To sum up, according to the scheme shown in the embodiment of the present disclosure, the relative position relationship between the head of the user and the terminal is determined according to the sensor data collected by the sensor in the terminal, and a suitable target parameter is selected from at least two speaker control parameters according to the relative position relationship, and at least two speaker assemblies are controlled through the selected target parameter, that is, the terminal can control the at least two speaker assemblies to achieve a better sound field effect at the position according to the position of the head of the user relative to the terminal, so that the head of the user can both experience a better sound effect at different positions relative to the terminal, and the sound production effect of the speaker assemblies is improved.
Fig. 11 is a block diagram of a speaker assembly control apparatus according to an exemplary embodiment, as shown in fig. 11, the speaker assembly control apparatus may be implemented as all or part of a terminal in hardware or a combination of hardware and software to perform the steps shown in any one of the embodiments of fig. 5, 6 or 9; the speaker assembly control apparatus may include:
a sensor data acquiring module 1101, configured to acquire sensor data acquired by a sensor in a terminal, where the sensor includes at least one of a distance sensor and a motion sensor;
a position relation obtaining module 1102, configured to obtain a relative position relation between a head of a user of the terminal and the terminal according to the sensor data;
a parameter obtaining module 1103, configured to obtain a target parameter from at least two speaker control parameters according to the relative position relationship;
a control module 1104, configured to control at least two speaker assemblies in the terminal according to the target parameter.
Optionally, the sensor comprises the distance sensor and the motion sensor; the position relation obtaining module includes:
the distance acquisition submodule is used for acquiring the distance between the terminal and the head of the user according to the distance sensor data acquired by the distance sensor;
the motion track acquisition submodule is used for acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor;
and the position relation acquisition submodule is used for acquiring the relative position relation between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion track of the terminal.
Optionally, the motion sensor includes a gravity sensor and a gyroscope sensor; the motion trail obtaining submodule comprises:
the mobile acceleration acquisition submodule is used for acquiring the mobile acceleration of the terminal according to the gravity sensor data acquired by the gravity sensor;
the angular acceleration acquisition submodule is used for acquiring the angular acceleration of the terminal according to the gyroscope data acquired by the gyroscope sensor;
and the track acquisition submodule is used for acquiring the motion track of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
Optionally, the position relationship obtaining sub-module includes:
the area determining submodule is used for determining a space area where the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal; the spatial region is a region divided in a space with the terminal as a reference;
and the relation acquisition submodule is used for acquiring the relative position relation between the head of the user and the terminal according to the space region where the head of the user is located.
Optionally, the parameter obtaining module includes:
the area query submodule is used for querying the loudspeaker control parameters corresponding to the relative position relation from the at least two loudspeaker control parameters;
and the parameter acquisition submodule is used for acquiring the inquired loudspeaker control parameters as the target parameters.
Optionally, the speaker control parameter includes at least one of the following parameters:
a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
Optionally, the distance sensor comprises at least one of the following sensors:
proximity sensors, ultrasonic ranging sensors, and time-of-flight ranging TOF sensors.
To sum up, according to the scheme shown in the embodiment of the present disclosure, the relative position relationship between the head of the user and the terminal is determined according to the sensor data collected by the sensor in the terminal, and a suitable target parameter is selected from at least two speaker control parameters according to the relative position relationship, and at least two speaker assemblies are controlled through the selected target parameter, that is, the terminal can control the at least two speaker assemblies to achieve a better sound field effect at the position according to the position of the head of the user relative to the terminal, so that the head of the user can both experience a better sound effect at different positions relative to the terminal, and the sound production effect of the speaker assemblies is improved.
It should be noted that, when the apparatus provided in the foregoing embodiment implements its functions, only the division into the above functional modules is illustrated; in practical applications, the above functions may be distributed among different functional modules according to actual needs, that is, the internal structure of the device may be divided into different functional modules, so as to complete all or part of the functions described above.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An exemplary embodiment of the present disclosure provides a speaker assembly control apparatus, which can implement all or part of the steps in any one of the embodiments shown in fig. 5, fig. 6, or fig. 9 of the present disclosure, and the speaker assembly control apparatus further includes: a processor, a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring sensor data acquired by a sensor in a terminal, wherein the sensor comprises at least one of a distance sensor and a motion sensor;
acquiring a relative position relation between the head of a user of the terminal and the terminal according to the sensor data;
acquiring target parameters from at least two loudspeaker control parameters according to the relative position relation;
and controlling at least two loudspeaker assemblies in the terminal according to the target parameters.
Optionally, the sensor comprises the distance sensor and the motion sensor;
the acquiring of the relative positional relationship between the head of the user of the terminal and the terminal according to the sensor data includes:
acquiring the distance between the terminal and the head of the user according to the distance sensor data acquired by the distance sensor;
acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor;
and acquiring the relative position relation between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion track of the terminal.
Optionally, the motion sensor includes a gravity sensor and a gyroscope sensor;
the acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor comprises:
acquiring the movement acceleration of the terminal according to the gravity sensor data acquired by the gravity sensor;
acquiring angular acceleration of the terminal according to gyroscope data acquired by the gyroscope sensor;
and acquiring the motion trail of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
Optionally, the obtaining the relative position relationship between the head of the user and the terminal according to the distance between the terminal and the head of the user and the motion trajectory of the terminal includes:
determining a spatial region where the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal; the spatial region is a region divided in a space with the terminal as a reference;
and acquiring the relative position relation between the head of the user and the terminal according to the space region where the head of the user is located.
Optionally, the obtaining a target parameter from at least two speaker control parameters according to the relative position relationship includes:
inquiring the loudspeaker control parameter corresponding to the relative position relation from the at least two loudspeaker control parameters;
and acquiring the inquired loudspeaker control parameters as the target parameters.
Optionally, the speaker control parameter includes at least one of the following parameters:
a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
Optionally, the distance sensor comprises at least one of the following sensors:
proximity sensors, ultrasonic ranging sensors, and time-of-flight ranging TOF sensors.
FIG. 12 is a block diagram illustrating a computer device according to an example embodiment. The computer device may be implemented as a terminal having at least two speaker assemblies and a sensor in the above aspects of the disclosure. The computer device 1200 includes a processing unit 1201, a system memory 1204 including a Random Access Memory (RAM) 1202 and a Read Only Memory (ROM) 1203, and a system bus 1205 connecting the system memory 1204 and the processing unit 1201. The computer device 1200 also includes a basic input/output system (I/O system) 1206 for facilitating information transfer between various devices within the computer, and a mass storage device 1207 for storing an operating system 1213, application programs 1214, and other program modules 1215.
The basic input/output system 1206 includes a speaker assembly 1208 and sensors 1209. Where the speaker assembly 1208 includes at least two speakers, the sensors 1209 may include distance sensors, gyroscopes, gravity sensors, and so on.
The mass storage device 1207 is connected to the processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205. The mass storage device 1207 and its associated computer-readable media provide non-volatile storage for the computer device 1200. That is, the mass storage device 1207 may include a computer-readable medium (not shown) such as a hard disk.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1204 and mass storage device 1207 described above may be collectively referred to as memory.
The computer device 1200 may also operate by connecting to a remote computer over a network, such as the Internet, according to various embodiments of the present disclosure. That is, the computer device 1200 may connect to the network 1212 through a network interface unit 1211 coupled to the system bus 1205, or may use the network interface unit 1211 to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 1201 implements all or part of the steps of the method shown in fig. 5, 6 or 9 by executing the one or more programs.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in embodiments of the disclosure may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The embodiment of the disclosure also provides a computer storage medium for storing computer software instructions for the terminal, which contains a program designed for executing the speaker control method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A speaker assembly control method, the method comprising:
acquiring sensor data acquired by a sensor in a terminal, wherein the sensor comprises a distance sensor and a motion sensor;
acquiring the distance between the terminal and the head of the user according to the distance sensor data acquired by the distance sensor;
acquiring the motion track of the terminal according to the motion sensor data acquired by the motion sensor;
determining a spatial region where the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal; the spatial region is a region divided in a space with the terminal as a reference;
acquiring the relative position relation between the head of the user and the terminal according to the space area where the head of the user is located;
inquiring the loudspeaker control parameters corresponding to the relative position relation from the corresponding relation between at least two preset loudspeaker control parameters and at least two relative position relations, and acquiring the inquired loudspeaker control parameters as target parameters;
and controlling at least two loudspeaker assemblies in the terminal according to the target parameters.
2. The method of claim 1, wherein the motion sensor comprises a gravity sensor and a gyroscope sensor; and
the acquiring the motion track of the terminal according to the motion sensor data collected by the motion sensor comprises:
acquiring a movement acceleration of the terminal according to gravity sensor data collected by the gravity sensor;
acquiring an angular acceleration of the terminal according to gyroscope data collected by the gyroscope sensor; and
acquiring the motion track of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
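Claim 2 recovers the terminal's track by integrating inertial readings. The one-axis sketch below covers only the linear part (acceleration twice integrated into position); orientation from the gyroscope would be handled analogously. The sample period, the zero initial conditions, and the absence of drift correction are simplifying assumptions, not claim limitations:

```python
def integrate_track(accels, dt=0.01):
    """Twice-integrate linear acceleration samples (m/s^2), taken every
    dt seconds, into a one-axis position track (m), starting from rest."""
    velocity, position = 0.0, 0.0
    track = [position]
    for a in accels:
        velocity += a * dt          # first integration: acceleration -> velocity
        position += velocity * dt   # second integration: velocity -> position
        track.append(position)
    return track

# 1 m/s^2 held for 1 s (100 samples at 10 ms) moves the terminal ~0.5 m.
track = integrate_track([1.0] * 100)
```

Because double integration accumulates sensor noise quadratically, a production implementation would fuse the accelerometer and gyroscope data (e.g. with a complementary or Kalman filter) rather than integrate naively.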
3. The method of claim 1 or 2, wherein the speaker control parameters comprise at least one of the following parameters:
a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
4. The method of any one of claims 1 to 3, wherein the distance sensor comprises at least one of the following sensors:
a proximity sensor, an ultrasonic ranging sensor, and a time-of-flight (TOF) ranging sensor.
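For the TOF sensors of claim 4, the terminal-to-head distance follows directly from the round-trip time of an emitted pulse. A sketch, assuming an optical TOF sensor (so the pulse travels at the speed of light):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, for an optical time-of-flight pulse

def tof_distance(round_trip_s):
    # The pulse travels to the target and back, so halve the path length.
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A 2 ns round trip corresponds to roughly 0.3 m, a plausible
# hand-to-head range for a handheld terminal.
distance_m = tof_distance(2e-9)
```

An ultrasonic ranging sensor would use the same half-round-trip formula with the speed of sound (~343 m/s in air) in place of the speed of light.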
5. A speaker assembly control apparatus, the apparatus comprising:
a sensor data acquisition module configured to acquire sensor data collected by sensors in a terminal, wherein the sensors comprise a distance sensor and a motion sensor;
a distance acquisition submodule configured to acquire a distance between the terminal and a head of a user according to distance sensor data collected by the distance sensor;
a motion track acquisition submodule configured to acquire a motion track of the terminal according to motion sensor data collected by the motion sensor;
a region determination submodule configured to determine a spatial region in which the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal, the spatial region being one of a plurality of regions into which space is divided with the terminal as a reference;
a relation acquisition submodule configured to acquire a relative positional relationship between the head of the user and the terminal according to the spatial region in which the head of the user is located;
a parameter acquisition module configured to query, from preset correspondences between at least two speaker control parameters and at least two relative positional relationships, the speaker control parameter corresponding to the relative positional relationship, and to take the queried speaker control parameter as a target parameter; and
a control module configured to control at least two speaker assemblies in the terminal according to the target parameter.
6. The apparatus of claim 5, wherein the motion sensor comprises a gravity sensor and a gyroscope sensor, and the motion track acquisition submodule comprises:
a movement acceleration acquisition submodule configured to acquire a movement acceleration of the terminal according to gravity sensor data collected by the gravity sensor;
an angular acceleration acquisition submodule configured to acquire an angular acceleration of the terminal according to gyroscope data collected by the gyroscope sensor; and
a track acquisition submodule configured to acquire the motion track of the terminal according to the movement acceleration of the terminal and the angular acceleration of the terminal.
7. The apparatus of claim 5 or 6, wherein the speaker control parameters comprise at least one of:
a phase calibration parameter, a loudness gain parameter, a filter parameter, an equalizer parameter, and a comfort noise parameter.
8. The apparatus of claim 5 or 6, wherein the distance sensor comprises at least one of the following sensors:
a proximity sensor, an ultrasonic ranging sensor, and a time-of-flight (TOF) ranging sensor.
9. A speaker assembly control apparatus, the apparatus comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquire sensor data collected by sensors in a terminal, wherein the sensors comprise a distance sensor and a motion sensor;
acquire a distance between the terminal and a head of a user according to distance sensor data collected by the distance sensor;
acquire a motion track of the terminal according to motion sensor data collected by the motion sensor;
determine a spatial region in which the head of the user is located according to the distance between the terminal and the head of the user and the motion track of the terminal, the spatial region being one of a plurality of regions into which space is divided with the terminal as a reference;
acquire a relative positional relationship between the head of the user and the terminal according to the spatial region in which the head of the user is located;
query, from preset correspondences between at least two speaker control parameters and at least two relative positional relationships, the speaker control parameter corresponding to the relative positional relationship, and take the queried speaker control parameter as a target parameter; and
control at least two speaker assemblies in the terminal according to the target parameter.
10. A computer-readable storage medium storing executable instructions that, when invoked by a processor in a terminal, implement the speaker assembly control method of any one of claims 1 to 4.
CN201910341203.XA 2019-04-25 2019-04-25 Speaker assembly control method, device and storage medium Active CN111866674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910341203.XA CN111866674B (en) 2019-04-25 2019-04-25 Speaker assembly control method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910341203.XA CN111866674B (en) 2019-04-25 2019-04-25 Speaker assembly control method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111866674A CN111866674A (en) 2020-10-30
CN111866674B true CN111866674B (en) 2022-02-22

Family

ID=72951604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910341203.XA Active CN111866674B (en) 2019-04-25 2019-04-25 Speaker assembly control method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111866674B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113504890A (en) * 2021-07-14 2021-10-15 炬佑智能科技(苏州)有限公司 ToF camera-based speaker assembly control method, apparatus, device, and medium
CN114189788B (en) * 2021-11-24 2024-01-23 深圳市豪恩声学股份有限公司 Tuning method, tuning device, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102064781A (en) * 2010-10-29 2011-05-18 华为终端有限公司 Method and device for adjusting audio frequency of terminal and terminal
CN106713793A (en) * 2015-11-18 2017-05-24 天津三星电子有限公司 Sound playing control method and device thereof
CN106792341A (en) * 2016-11-23 2017-05-31 广东小天才科技有限公司 A kind of audio-frequency inputting method, device and terminal device
CN107071648A (en) * 2017-06-19 2017-08-18 深圳市泰衡诺科技有限公司上海分公司 Sound plays regulating system, device and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007092420A2 (en) * 2006-02-07 2007-08-16 Anthony Bongiovi Collapsible speaker and headliner


Also Published As

Publication number Publication date
CN111866674A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
US10484813B2 (en) Systems and methods for delivery of personalized audio
US20220116723A1 (en) Filter selection for delivering spatial audio
CN110771182B (en) Audio processor, system, method and computer program for audio rendering
KR102008771B1 (en) Determination and use of auditory-space-optimized transfer functions
WO2018149275A1 (en) Method and apparatus for adjusting audio output by speaker
US20160212535A1 (en) System and method for controlling output of multiple audio output devices
US20220394409A1 (en) Listening optimization for cross-talk cancelled audio
WO2014179633A1 (en) Sound field adaptation based upon user tracking
EP2731360B1 (en) Automatic audio enhancement system
CN111866674B (en) Speaker assembly control method, device and storage medium
CN112423175B (en) Earphone noise reduction method and device, storage medium and electronic equipment
CN108712704B (en) Sound box, audio data playing method and device, storage medium and electronic device
CN112911065B (en) Audio playing method and device for terminal, electronic equipment and storage medium
US6859417B1 (en) Range finding audio system
US11451923B2 (en) Location based audio signal message processing
CN106792365B (en) Audio playing method and device
EP3248398A1 (en) System and method for changing a channel configuration of a set of audio output devices
KR102571518B1 (en) Electronic device including a plurality of speaker
CN114257924A (en) Method for distributing sound channels and related equipment
CN116684777A (en) Audio processing and model training method, device, equipment and storage medium
CN116916240A (en) Audio output method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant