CN118233822A - Sound field calibration method and electronic equipment - Google Patents

Sound field calibration method and electronic equipment

Info

Publication number
CN118233822A
Authority
CN
China
Prior art keywords
sound
electronic device
user
information
space
Prior art date
Legal status
Pending
Application number
CN202211652603.0A
Other languages
Chinese (zh)
Inventor
蔡双林
程力
梁志涛
郑磊
谢殿晗
徐昊玮
董伟
朱焱
惠少博
孙渊
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202211652603.0A
Publication of CN118233822A
Legal status: Pending

Landscapes

  • Telephone Function (AREA)

Abstract

The application provides a sound field calibration method and an electronic device, and relates to the field of terminal technologies. According to the application, acoustic parameters of a space are obtained by probing the space, which improves the accuracy of determining information about events occurring in the space. The method includes the following steps: the first electronic device sends first information for detection into the space; acoustic parameters of the space are then acquired based on the first information for detection; and second information associated with an event occurring within the space is determined based on the acoustic parameters.

Description

Sound field calibration method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a sound field calibration method and an electronic device.
Background
Home theaters are generally formed by networking a smart screen with a plurality of smart sound box devices, providing a good audio and video experience for users through the cooperation of the multiple devices.
To ensure the audio effect of a home theater, sound field calibration of the home theater is required. During sound field calibration, the user must hold a pickup device (such as a mobile phone, a microphone, or professional audio collection equipment) to collect audio. The smart screen then completes the sound field calibration according to the audio data sent by the pickup device.
Because sound field calibration depends on the accuracy and completeness of the audio collected by the user's handheld pickup device, the operation is difficult for the user. Moreover, the sound field can only be calibrated to the position where the user holds the pickup device; if the user's position changes, the user must hold the pickup device and collect audio again so that the sound field can be recalibrated.
Disclosure of Invention
To solve the above technical problems, the application provides a sound field calibration method and an electronic device. In the technical solution provided by the application, acoustic parameters of the space are acquired by probing the space, which improves the accuracy of determining information about events occurring in the space.
In order to achieve the technical purpose, the application provides the following technical scheme:
In a first aspect, a sound field calibration method is provided, applied to a first electronic device. The method includes the following steps: first information for detection is transmitted into a space; acoustic parameters of the space are acquired based on the first information for detection; and second information associated with an event occurring within the space is determined from the acoustic parameters.
The first electronic device is, for example, a smart screen or a sound box. The first information is a wireless signal, which is one or more of the following: an ultrasonic signal, an ultra-wideband (UWB) signal, a Bluetooth signal, a Wi-Fi signal, or a millimeter wave signal.
In this way, the acoustic parameters of the space are determined by probing it, so that when second information associated with an event in the space is subsequently determined, the influence of the spatial environment can be taken into account and the accuracy of the second information improved.
According to the first aspect, the second information comprises placement advice information, or sound field calibration information.
According to the first aspect, or any implementation manner of the first aspect, the method further includes: sending the placement suggestion information to a second electronic device, where the placement suggestion information is used to adjust the placement positions of a plurality of electronic devices in the space, the plurality of electronic devices including the first electronic device.
In some examples, the second electronic device (e.g., the user's mobile phone) displays a map including the positional relationships of the plurality of electronic devices according to the placement suggestion information. The user can then adjust the placement positions of the electronic devices (such as the sound boxes and smart screen that form a home audio-video environment in the space) according to the map, improving the accuracy of subsequent sound field calibration.
According to the first aspect, or any implementation manner of the first aspect, the acoustic parameter is an acoustic parameter acquired in the space during a first time period, and the event is an event occurring after the end of the first time period or before its start.
In some examples, after the first electronic device determines the placement suggestion information associated with the event occurring in the space, the user adjusts the placement positions of the electronic devices according to the placement suggestion information. In that case, the event occurring in the space is an event after the first time period.
In other examples, after the first electronic device determines the placement suggestion information associated with the event occurring in the space, the user does not adjust the placement positions of the electronic devices according to the placement suggestion information. In that case, the event occurring in the space is an event before the first time period.
According to the first aspect, or any implementation manner of the first aspect, the event includes a new electronic device entering the space, or the position of an electronic device in the space being moved.
According to the first aspect, or any implementation manner of the first aspect, the method further includes: sending third information for positioning to at least one third electronic device, and determining first position information of each of the at least one third electronic device relative to the first electronic device according to the third information for positioning received by the at least one third electronic device. Determining second information associated with an event occurring within the space based on the acoustic parameters includes: performing sound field calibration according to the first position information and the acoustic parameters, and determining sound field calibration information associated with the event occurring in the space.
In the prior art, the user must hold a pickup device to collect audio and manually input the distance between the first electronic device and the second electronic device before sound field calibration can be performed. In contrast, with the sound field calibration method provided by the embodiment of the application, the first electronic device or the second electronic device completes sound field calibration automatically by transmitting ultrasonic signals, effectively reducing the difficulty of operation for the user. In addition, combining the acoustic parameters of the space during calibration effectively improves the accuracy of the sound field calibration.
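To make the positioning step above concrete, the following is a minimal sketch of ultrasonic time-of-flight ranging between two clock-synchronized devices. It illustrates the general technique, not the patent's actual algorithm; the function names, the 343 m/s speed of sound, and the Python implementation are all assumptions introduced here.

```python
# A minimal sketch (not the patent's algorithm) of time-of-flight ranging
# between two clock-synchronized devices using an ultrasonic burst.
# All names and the 343 m/s speed of sound are illustrative assumptions.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def distance_from_time_of_flight(t_sent_s: float, t_received_s: float) -> float:
    """Estimate the distance between a sender and a receiver.

    t_sent_s:     timestamp at which the sender emitted the ultrasonic burst
    t_received_s: timestamp at which the receiver detected the burst
    Both timestamps must come from synchronized clocks.
    """
    time_of_flight = t_received_s - t_sent_s
    if time_of_flight <= 0:
        raise ValueError("receive time must follow send time")
    return time_of_flight * SPEED_OF_SOUND_M_S

# Example: a burst sent at t=0.000 s and detected at t=0.010 s
# corresponds to roughly 3.43 m between the devices.
print(distance_from_time_of_flight(0.000, 0.010))
```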
According to the first aspect, or any implementation manner of the first aspect, the method further includes: second position information of the first user in the space relative to the first electronic device is acquired, and the second position information is used for determining the second information.
In this way, the user's position is determined and used to calibrate the sound field to the area where the user is located, so that the user enjoys a better audio experience once the sound field calibration is complete.
According to the first aspect, or any implementation manner of the first aspect, the clocks of the first electronic device and the at least one third electronic device are synchronized, and determining second information associated with an event occurring in the space according to the acoustic parameters includes: adjusting the sound emission times of the first electronic device and the at least one third electronic device according to the first position information, the second position information, and the acoustic parameters, so that sound emitted by the first electronic device and the at least one third electronic device reaches the area indicated by the second position information at the same or similar times.
In this way, by locating the positional relationships between each electronic device in the home theater and the user, the sound emission time of each device's speaker is adjusted so that the sound reaches the user's ears at the same or similar times, improving the user's listening experience.
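As a concrete illustration of this delay alignment, the sketch below computes per-speaker playback delays from the speaker-to-listener distances so that all wavefronts arrive together. It is a simplified example under stated assumptions (synchronized clocks, a 343 m/s speed of sound, straight-line propagation), not the patent's implementation.

```python
# A minimal sketch of choosing per-speaker playback delays so that sound
# from every speaker reaches the listening area at the same time.
# Assumes a shared synchronized clock and straight-line propagation;
# function and variable names are illustrative, not from the patent.

SPEED_OF_SOUND_M_S = 343.0

def playback_delays(distances_m: list[float]) -> list[float]:
    """Return the extra delay (in seconds) each speaker should apply.

    distances_m[i] is the distance from speaker i to the listening
    position. The farthest speaker starts immediately; nearer speakers
    wait so that all wavefronts arrive together.
    """
    travel_times = [d / SPEED_OF_SOUND_M_S for d in distances_m]
    latest_arrival = max(travel_times)
    return [latest_arrival - t for t in travel_times]

# Example: speakers at 2.0 m, 3.5 m, and 3.0 m from the listener.
# The speaker at 3.5 m plays immediately; the others are delayed.
print(playback_delays([2.0, 3.5, 3.0]))  # [~0.0044, 0.0, ~0.0015]
```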
According to the first aspect, or any implementation manner of the first aspect, determining second information associated with an event occurring in the space according to the acoustic parameters includes: determining one or more sound tracks corresponding to one or more sound objects included in first audio to be played, and rearranging the one or more sound tracks during sound field calibration according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the one or more sound tracks in the area indicated by the second position information.
Illustratively, the video content shows multiple hummingbirds in flight. The sounds of the hummingbirds flapping their wings correspond to multiple sound objects, whose sound tracks are arranged and rendered. Based on the determined positional relationships among the smart screen, the sound boxes, and the user, together with the acoustic parameters, the sound tracks are calibrated to the user's listening position, so that through the coordinated playback of the at least one second electronic device and the first electronic device, the user experiences hummingbirds flying around during video playback.
In the prior art, all the sound corresponding to video content in a home theater is emitted from in front of the user, and the user cannot obtain an immersive experience. With the sound field calibration method provided by the embodiment of the application, the sound tracks of sound objects can be rearranged and rendered and the sound field calibrated to the user's position, giving the user an immersive listening experience.
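As one hedged illustration of rearranging a sound object's track across speakers, the sketch below uses simple inverse-distance amplitude panning: each speaker's gain grows as the object's virtual position approaches it, and the gains are normalized to preserve energy. The patent does not specify its rendering method; this stand-in, including all names and the 2-D geometry, is an assumption.

```python
# A minimal sketch of distance-based amplitude panning for one sound
# object: the object's track is distributed across the speakers with
# gains that fall off with distance to the object's virtual position,
# then normalized to preserve overall energy. An illustrative stand-in
# for the patent's (unspecified) rearrangement and rendering.

import math

def object_gains(speaker_positions: list[tuple[float, float]],
                 object_position: tuple[float, float]) -> list[float]:
    """Per-speaker gains for rendering a sound object at a 2-D position."""
    ox, oy = object_position
    # Inverse-distance weights; the epsilon avoids division by zero when
    # the virtual object coincides with a speaker.
    weights = [1.0 / (math.hypot(sx - ox, sy - oy) + 1e-6)
               for sx, sy in speaker_positions]
    # Normalize so the summed squared gains (total energy) equal 1.
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]

# Example: four speakers at the room corners, object near the front left.
speakers = [(0.0, 0.0), (4.0, 0.0), (0.0, 5.0), (4.0, 5.0)]
print(object_gains(speakers, (0.5, 1.0)))  # the front-left speaker dominates
```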
According to the first aspect, or any implementation manner of the first aspect, after determining the one or more sound tracks corresponding to the one or more sound objects included in the first audio to be played, the method further includes: determining, in response to a user operation, a target sound object selected by the user from the one or more sound objects. Rearranging the one or more sound tracks during sound field calibration according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the one or more sound tracks in the area indicated by the second position information, includes: rearranging the target sound track corresponding to the target sound object during sound field calibration according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the target sound track in the area indicated by the second position information.
In the prior art, the user cannot select a character's point of view during video playback in a home theater. The sound field calibration method provided by the embodiment of the application calibrates the sound field to the user's position and provides the user with an immersive listening experience from the character perspective the user selects, improving the user experience.
According to the first aspect, or any implementation manner of the first aspect, the first information includes at least one wireless signal transmitted in at least one direction in the space. Acquiring the acoustic parameters of the space according to the first information for detection includes: determining the acoustic parameters of the space according to the sending time of the at least one wireless signal and the receiving time of the corresponding received reflected signal, where the reflected signal is produced when the wireless signal strikes an object or person in the space and is reflected.
According to the first aspect, or any implementation manner of the first aspect, the acoustic parameters include at least one of a reflection coefficient, an absorption coefficient, and a transmission coefficient of objects in the space for sound.
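For illustration, the sketch below estimates an amplitude reflection coefficient from a probe signal and its echo, assuming the echo is attenuated only by spherical (~1/d) spreading plus one reflection; for an idealized surface, the reflected, absorbed, and transmitted fractions of the incident energy sum to one. The model, constants, and names are assumptions for demonstration, not values or methods from the patent.

```python
# A minimal sketch of estimating a reflection coefficient from a probe
# signal and its echo. It assumes the echo amplitude is reduced only by
# spherical spreading (~1/d over the round trip) plus one reflection;
# a real estimator would also model air absorption and the transducer
# response. All constants and names are illustrative assumptions.

SPEED_OF_SOUND_M_S = 343.0

def reflection_coefficient(sent_amplitude: float,
                           echo_amplitude: float,
                           round_trip_s: float) -> float:
    """Estimate the surface's amplitude reflection coefficient in [0, 1]."""
    # One-way distance to the reflecting surface.
    distance_m = 0.5 * round_trip_s * SPEED_OF_SOUND_M_S
    # Undo the 1/d spreading loss over the 2*d round trip, then compare
    # with the emitted amplitude; clamp to the physically valid range.
    corrected = echo_amplitude * (2.0 * distance_m)
    return max(0.0, min(1.0, corrected / sent_amplitude))

# Example: echo arrives 20 ms after emission (surface ~3.4 m away).
r = reflection_coefficient(sent_amplitude=1.0,
                           echo_amplitude=0.1,
                           round_trip_s=0.020)
print(r)  # ~0.69; absorption and transmission account for the remainder
```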
In a second aspect, an electronic device is provided. The electronic device includes a processor and a memory coupled to the processor, the memory storing computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to perform the following: transmitting first information for detection into a space; acquiring acoustic parameters of the space based on the first information for detection; and determining, from the acoustic parameters, second information associated with an event occurring within the space.
According to a second aspect, the second information comprises placement advice information, or sound field calibration information.
According to the second aspect, or any implementation manner of the second aspect, the computer instructions, when read from the memory by the processor, cause the electronic device to perform: sending the placement suggestion information to a second electronic device, where the placement suggestion information is used to adjust the placement positions of a plurality of electronic devices in the space, the plurality of electronic devices including the first electronic device.
According to a second aspect, or any implementation of the second aspect above, the acoustic parameter is an acoustic parameter at a first time in space, and the event is an event occurring after or before the first time.
According to a second aspect, or any implementation manner of the second aspect, the event includes that a new electronic device enters the space, or that a position of the electronic device in the space is moved.
According to the second aspect, or any implementation manner of the second aspect, the computer instructions, when read from the memory by the processor, cause the electronic device to perform: sending third information for positioning to at least one third electronic device, and determining first position information of each of the at least one third electronic device relative to the first electronic device according to the third information for positioning received by the at least one third electronic device. Determining second information associated with an event occurring within the space based on the acoustic parameters includes: performing sound field calibration according to the first position information and the acoustic parameters, and determining sound field calibration information associated with the event occurring in the space.
According to a second aspect, or any implementation manner of the second aspect, the computer instructions, when read from the memory by the processor, cause the electronic device to perform: second position information of the first user in the space relative to the first electronic device is acquired, and the second position information is used for determining the second information.
According to the second aspect, or any implementation manner of the second aspect, the clocks of the first electronic device and the at least one third electronic device are synchronized, and determining second information associated with an event occurring in the space according to the acoustic parameters includes: adjusting the sound emission times of the first electronic device and the at least one third electronic device according to the first position information, the second position information, and the acoustic parameters, so that sound emitted by the first electronic device and the at least one third electronic device reaches the area indicated by the second position information at the same or similar times.
According to the second aspect, or any implementation manner of the second aspect, determining second information associated with an event occurring in the space according to the acoustic parameters includes: determining one or more sound tracks corresponding to one or more sound objects included in first audio to be played, and rearranging the one or more sound tracks during sound field calibration according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the one or more sound tracks in the area indicated by the second position information.
According to the second aspect, or any implementation manner of the second aspect, the computer instructions, when read from the memory by the processor, cause the electronic device to perform: determining, in response to a user operation, a target sound object selected by the user from the one or more sound objects. Rearranging the one or more sound tracks during sound field calibration according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the one or more sound tracks in the area indicated by the second position information, includes: rearranging the target sound track corresponding to the target sound object during sound field calibration according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the target sound track in the area indicated by the second position information.
According to the second aspect, or any implementation manner of the second aspect, the first information includes at least one wireless signal transmitted in at least one direction in the space. Acquiring the acoustic parameters of the space according to the first information for detection includes: determining the acoustic parameters of the space according to the sending time of the at least one wireless signal and the receiving time of the corresponding received reflected signal.
According to a second aspect, or any implementation of the second aspect above, the acoustic parameters include a reflection coefficient, an absorption coefficient and a transmission coefficient of an object in space for sound.
For the technical effects corresponding to the second aspect and any implementation manner of the second aspect, refer to the technical effects corresponding to the first aspect and any implementation manner of the first aspect; details are not described herein again.
In a third aspect, an electronic device is provided that has the functionality to implement the sound field calibration method according to the first aspect and any one of its possible implementations. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
For the technical effects corresponding to the third aspect and any implementation manner of the third aspect, refer to the technical effects corresponding to the first aspect and any implementation manner of the first aspect; details are not described herein again.
In a fourth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program (which may also be referred to as instructions or code) that, when executed by an electronic device, causes the electronic device to perform the method of the first aspect or any implementation of the first aspect.
For the technical effects corresponding to the fourth aspect and any implementation manner of the fourth aspect, refer to the technical effects corresponding to the first aspect and any implementation manner of the first aspect; details are not described herein again.
In a fifth aspect, a computer program product is provided which, when run on an electronic device, causes the electronic device to perform the method of the first aspect or any implementation of the first aspect.
For the technical effects corresponding to the fifth aspect and any implementation manner of the fifth aspect, refer to the technical effects corresponding to the first aspect and any implementation manner of the first aspect; details are not described herein again.
In a sixth aspect, circuitry is provided, the circuitry comprising processing circuitry configured to perform the method of the first aspect or any implementation of the first aspect.
For the technical effects corresponding to the sixth aspect and any implementation manner of the sixth aspect, refer to the technical effects corresponding to the first aspect and any implementation manner of the first aspect; details are not described herein again.
In a seventh aspect, a system on a chip is provided, including at least one processor and at least one interface circuit. The at least one interface circuit is configured to perform a transceiving function and send instructions to the at least one processor; when the at least one processor executes the instructions, it performs the method of the first aspect or any implementation of the first aspect.
For the technical effects corresponding to the seventh aspect and any implementation manner of the seventh aspect, refer to the technical effects corresponding to the first aspect and any implementation manner of the first aspect; details are not described herein again.
Drawings
Fig. 1 is a schematic diagram of a communication system to which a sound field calibration method according to an embodiment of the present application is applied;
Fig. 2A is a schematic diagram of the hardware structure of a first electronic device according to an embodiment of the present application;
Fig. 2B is a schematic diagram of the speaker positions of a first electronic device according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the "emperor position" in a sound field according to an embodiment of the present application;
Fig. 4A is a schematic diagram of a rectangular standard sound field according to an embodiment of the present application;
Fig. 4B is a schematic diagram of a circular standard sound field according to an embodiment of the present application;
Fig. 5 is a first schematic interface diagram according to an embodiment of the present application;
Fig. 6A is a first schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 6B is a schematic diagram of an angle confirmation scenario in a home theater sound field calibration process according to an embodiment of the present application;
Fig. 6C is a schematic diagram of a distance confirmation scenario in a home theater sound field calibration process according to an embodiment of the present application;
Fig. 7A is a second schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 7B is a schematic diagram of confirming the positional relationship between sound boxes in a home theater sound field calibration process according to an embodiment of the present application;
Fig. 7C is a schematic diagram of sound production and sound reception by a sound box in a home theater sound field calibration process according to an embodiment of the present application;
Fig. 8 is a third schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 9 is a fourth schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 10 is a fifth schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 11 is a sixth schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 12 is a seventh schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 13 is a second schematic interface diagram according to an embodiment of the present application;
Fig. 14 is a schematic diagram of a sound object according to an embodiment of the present application;
Fig. 15 is an eighth schematic diagram of a home theater sound field calibration scenario according to an embodiment of the present application;
Fig. 16 is a third schematic interface diagram according to an embodiment of the present application;
Fig. 17 is a fourth schematic interface diagram according to an embodiment of the present application;
Fig. 18 is a first schematic diagram of sound propagation according to an embodiment of the present application;
Fig. 19 is a second schematic diagram of sound propagation according to an embodiment of the present application;
Fig. 20 is a flow chart of a sound field calibration method according to an embodiment of the present application;
Fig. 21 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the description of the embodiments, the terminology used is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include plural forms such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more than two.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Fig. 1 is a schematic diagram of a communication system to which a sound field calibration method according to an embodiment of the present application is applied. As shown in fig. 1, the first electronic device 100 establishes a communication connection with the second electronic device 200.
Illustratively, the first electronic device 100 establishes a wireless communication connection with the second electronic device 200. The first electronic device 100 may send the sound to be played on the first electronic device 100 to the second electronic device 200 for playing through a wireless communication connection with the second electronic device 200. The sound to be played may be an audio file. The first electronic device 100 and the second electronic device 200 cooperate to play audio files to provide the user with audio-visual effects of the home theater.
By way of example, the first electronic device 100 may include, but is not limited to, a large-screen display device (e.g., a smart screen or other large-screen device), a notebook computer, a smartphone, a tablet computer, a projection device, a laptop computer, a personal digital assistant (PDA), an artificial intelligence (AI) device, a wearable device (e.g., a smart watch), and the like. The first electronic device 100 may run any operating system, or may not be equipped with an operating system at all. In some embodiments, the first electronic device 100 may be a fixed device or a portable device. The present application does not limit the specific type of the first electronic device 100 or the operating system installed.
Illustratively, the second electronic device 200 may include, but is not limited to, an electronic device with a sound playing function, such as a speaker or a wireless sound box. The second electronic device 200 may run any operating system, or may not be equipped with an operating system at all. The present application does not limit the specific type of the second electronic device 200, whether an operating system is installed, or the type of any installed operating system.
The first electronic device 100 may establish a wireless communication connection with the second electronic device 200 through a wireless communication technology, including but not limited to at least one of the following: Bluetooth (BT) (e.g., conventional Bluetooth or Bluetooth low energy (BLE)), wireless local area network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), near field communication (NFC), Zigbee, frequency modulation (FM), and the like.
In some embodiments, both the first electronic device 100 and the second electronic device 200 support a proximity discovery function. Illustratively, after the first electronic device 100 approaches the second electronic device 200, the two devices can discover each other and then establish a wireless communication connection such as a Bluetooth connection or a Wi-Fi peer-to-peer (P2P) connection.
In some embodiments, the first electronic device 100 and the second electronic device 200 establish a wireless communication connection through a local area network. For example, the first electronic device 100 and the second electronic device 200 are both connected to the same router.
In some embodiments, the number of second electronic devices 200 is one or more, and the one or more second electronic devices 200 and the first electronic device 100 constitute a home theater. The sound field calibration of the home theater is completed by the sound field calibration method described in each of the following embodiments.
In other embodiments, the communication system may not include the first electronic device 100, and the sound field calibration of the communication system formed by the plurality of second electronic devices 200 may be completed by the sound field calibration method described in the following embodiments.
Fig. 2A shows a schematic structural diagram of the first electronic device 100.
The first electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a power management module 140, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, keys 190, an indicator 191, a camera 192, and a display 193, etc.
It should be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the first electronic device 100. In other embodiments of the application, the first electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from this memory, avoiding repeated accesses and reducing the latency of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to a touch sensor, charger, flash, camera 192, etc., respectively, through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor through an I2C interface, so that the processor 110 communicates with the touch sensor through the I2C bus interface to implement the touch function of the first electronic device 100.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display 193 and the camera 192. MIPI interfaces include the camera serial interface (CSI), the display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 192 communicate through a CSI interface to implement the photographing function of the first electronic device 100. The processor 110 and the display 193 communicate via a DSI interface to implement the display functionality of the first electronic device 100.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to transfer data between the first electronic device 100 and a peripheral device. It may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiment of the present application is only illustrative, and is not limited to the structure of the first electronic device 100. In other embodiments of the present application, the first electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The power management module 140 is configured to supply power to modules such as the processor 110 included in the first electronic device 100. In some embodiments, the power management module 140 may be configured to receive power supply inputs to support operation of the first electronic device 100.
The wireless communication function of the first electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the first electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide solutions for wireless communication applied on the first electronic device 100, including 2G/3G/4G/5G and the like. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the processed signals to the modem processor for demodulation. The mobile communication module 150 can also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device or displays images or video through a display screen 193. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied on the first electronic device 100, including wireless local area network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the antenna 1 of the first electronic device 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the first electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques can include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The first electronic device 100 implements display functions by a GPU, a display screen 193, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 193 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 193 is used to display images, videos, and the like. The display 193 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the first electronic device 100 may include 1 or N displays 193, N being a positive integer greater than 1.
The camera 192 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the first electronic device 100 may include 1 or N cameras 192, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the first electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the first electronic device 100 (e.g., audio data, phonebook, etc.), and so on.
In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs various functional applications of the first electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110. The first electronic device 100 may implement audio functions through an audio module 170, an application processor, and the like. Such as music playing, recording, etc. The audio module 170 may include, for example, a speaker, a receiver, a microphone, etc.
A speaker, also called a "horn", is used to convert an audio electrical signal into a sound signal. The first electronic device 100 may transmit an ultrasonic signal through a speaker, play audio, and so on. The speaker may be a built-in component of the first electronic device 100, or may be an external accessory of the first electronic device 100.
In some embodiments, the first electronic device 100 may include one or more speakers, where each speaker or multiple speakers cooperate to enable calibration of a sound field, etc.
Illustratively, FIG. 2B shows a layout of a plurality of speakers on the first electronic device 100. As shown in FIG. 2B, when the first electronic device 100 is placed in the position shown, its front surface is the plane on which the display 193 is located, speaker 21 is located at the top left of the first electronic device 100 (the top is typically the side on which the display is located), and speaker 22 is located at the top right. Further, speaker 21 and speaker 22 may be symmetric about the central axis of the display 193 of the first electronic device 100.
It should be noted that, in the following embodiments, "upper", "lower", "left" and "right" refer to the orientations shown in fig. 2B, and will not be described in detail later.
In some embodiments, as shown in FIG. 2B, the first electronic device 100 transmits ultrasonic signals through speaker 21 and speaker 22, respectively, which the corresponding second electronic device 200 may receive. The second electronic device 200 may perform a positioning calculation based on the received ultrasonic signals to determine the distance and angle between the first electronic device 100 and the second electronic device 200. Alternatively, the second electronic device 200 may send the times at which it received the ultrasonic signals to the first electronic device 100, and the first electronic device 100 performs the positioning calculation to determine the distance and angle between the two devices.
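The following sketch shows one way such a positioning calculation could work: the receiving device measures a time-of-flight distance to each of the two speakers and intersects the two circles to recover its distance and angle relative to the screen center. The coordinate convention, speaker baseline, and all names are assumptions for illustration, not the patent's specified method.

```python
# A minimal sketch of locating a device relative to the smart screen from
# two ultrasonic time-of-flight distances, one to each of the screen's
# speakers. Speaker 21 is placed at (-b/2, 0) and speaker 22 at (+b/2, 0),
# with b the known speaker separation; clocks are assumed synchronized.

import math

def locate(d1_m: float, d2_m: float, baseline_m: float) -> tuple[float, float]:
    """Return (distance_m, angle_rad) of the device from the screen center.

    d1_m / d2_m: measured distances to the left / right speaker.
    angle_rad:   0 means straight ahead; positive is toward the right speaker.
    """
    b = baseline_m
    # Intersection of the two circles centered on the speakers.
    x = (d1_m ** 2 - d2_m ** 2) / (2.0 * b)
    y_sq = d1_m ** 2 - (x + b / 2.0) ** 2
    if y_sq < 0:
        raise ValueError("inconsistent distances for this baseline")
    y = math.sqrt(y_sq)  # the device is assumed to be in front of the screen
    return math.hypot(x, y), math.atan2(x, y)

# Example: speakers 1.0 m apart; the sound box is slightly right of center.
print(locate(d1_m=3.1, d2_m=2.9, baseline_m=1.0))  # (~2.96 m, ~0.20 rad)
```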
In some embodiments, the first electronic device 100 may also include a greater number of speakers. The number of speakers is not particularly limited in the embodiment of the present application.
A microphone, also called a "mic", is used to convert sound signals into analog audio electrical signals. The first electronic device 100 may collect surrounding sound signals through the microphone. The microphone may be a built-in component of the first electronic device 100, or may be an external accessory of the first electronic device 100.
In some embodiments, the first electronic device 100 may include one or more microphones, where each microphone or microphones cooperate to perform the functions of capturing sound signals in various directions and converting the captured sound signals into analog audio electrical signals, and may also perform the functions of identifying the source of sound, reducing noise, or directing sound recordings.
Illustratively, the first electronic device 100 captures user sound signals through a microphone to locate the user's position.
The sensor module 180 may include a pressure sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like.
The pressure sensor is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor may be provided on the display 193. There are many kinds of pressure sensors, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material; when a force is applied to the sensor, the capacitance between the electrodes changes, and the first electronic device 100 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display 193, the first electronic device 100 detects the intensity of the touch operation via the pressure sensor, and may also calculate the touch position from the pressure sensor's detection signal.
Touch sensors, also known as "touch devices". The touch sensor may be disposed on the display screen 193, and the touch sensor and the display screen 193 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 193. In other embodiments, the touch sensor may also be disposed on a surface of the first electronic device 100 at a different location than the display 193.
In some embodiments, the first electronic device 100 displays the sound field calibration interface through the display 193, and automatically completes sound field calibration after detecting an operation of indicating sound field calibration on the sound field calibration interface by the user through the touch sensor.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The first electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the first electronic device 100.
The indicator 191 may be an indicator light, may be used to indicate a power state, may be used to indicate a message, a notification, or the like.
In some embodiments, the second electronic device 200 may have the same or similar hardware structure as the first electronic device 100. The second electronic device 200 may also include more or fewer components than the structure shown in fig. 2A, or may combine certain components, or split certain components, or a different arrangement of components. The structure of the second electronic device 200 is not particularly limited in the embodiment of the present application.
For convenience of explanation, the following embodiments take the first electronic device 100 as a smart screen and the second electronic device 200 as sound boxes (e.g., sound box A, sound box B, sound box C, and sound box D) as an example to describe in detail the sound field calibration method provided by the embodiments of the present application.
In some embodiments, in a home theater, "emperor position" sound field calibration is performed for the user's location to provide a better listening experience. In some examples, the "emperor position" is a position that is acoustically balanced in terms of sound loudness, stereo surround effect, and so on; the sound playing parameters of the sound boxes can be adjusted through sound field calibration to meet the requirements of the "emperor position".
For example, as shown in FIG. 3, during "emperor position" adjustment, the optimal viewing or listening position may be determined first. For example, the "emperor position" is set directly in front of the smart screen, 3 meters away from it, and so on. The sound field is then calibrated with the set "emperor position" as the central position, and the center of the sound field is calibrated to the "emperor position", so that the optimal viewing and listening positions converge on the "emperor position".
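As a hedged illustration of balancing loudness toward the "emperor position", the sketch below scales each speaker's gain in proportion to its distance from that position, compensating the roughly 1/d amplitude loss of a point source; combined with the delay alignment sketched earlier, this forms a basic calibration toward the listening position. The 1/d model, the reference distance, and all names are assumptions, not the patent's method.

```python
# A minimal sketch of loudness balancing toward the "emperor position":
# each speaker's gain is scaled in proportion to its distance from that
# position, compensating the ~1/d amplitude loss of a point source so
# all speakers sound equally loud there. Illustrative assumptions only.

def loudness_gains(distances_m: list[float],
                   reference_m: float = 1.0) -> list[float]:
    """Linear gain per speaker so each is equally loud at the listener."""
    # Under a 1/d model, a speaker twice as far away needs twice the amplitude.
    return [d / reference_m for d in distances_m]

# Example: the "emperor position" is 3 m in front of the screen speakers,
# with the rear sound boxes only 1.5 m away. After normalizing to the
# loudest channel (to avoid clipping), the rear boxes are attenuated so
# that all channels arrive at equal loudness.
gains = loudness_gains([3.0, 3.0, 1.5, 1.5])
peak = max(gains)
print([g / peak for g in gains])  # [1.0, 1.0, 0.5, 0.5]
```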
In some embodiments, standard sound fields generally include a rectangular standard sound field and a circular standard sound field. For example, as shown in FIG. 4A, in the rectangular standard sound field, the positions of the plurality of sound generating devices (such as a smart screen and sound boxes) can be connected by lines to form a rectangle. As shown in FIG. 4B, in the circular standard sound field, the positions of the sound generating devices can be connected to form a circle.
In the sound field calibration process, a user can arrange the smart screen and the sound boxes included in the home theater according to the positions corresponding to the rectangular or circular standard sound field, so as to ensure the sound field calibration effect.
It should be appreciated that the number of sound emitting devices included in the sound field may be greater or less than the number of sound emitting devices included in the standard sound field shown in fig. 4A or 4B.
In some embodiments, the smart screen may initiate sound field calibration of the home theater in response to user operation. Optionally, before the sound field calibration starts, the intelligent screen may display a prompt message according to the number of sound boxes, the home environment, etc. so as to prompt the user of the placement position of the sound boxes. Therefore, the placement positions of the sound box and the intelligent screen correspond to the placement requirements of the sound equipment in the standard sound field, and the subsequent sound field calibration effect is guaranteed. Optionally, the user operation includes, for example, an operation on a sound field calibration interface, a voice command, and the like.
Illustratively, as shown in interface 501 in (a) of fig. 5, during the display of the setting menu 51, the smart screen detects the user's click on the sound setting control 511 displayed in the setting menu 51, and may then display the sound field calibration interface 502 shown in (b) of fig. 5. On the sound field calibration interface 502, the smart screen displays a prompt box 52 for prompting the user to confirm whether sound field calibration is to be started. If the user is detected to click the start control 521, it may be determined that the current user needs to perform sound field calibration on the home theater.
Optionally, after determining that the user instructs to perform sound field calibration, the smart screen may automatically generate corresponding sound box placement suggestions according to the number of sound boxes, home environments, etc. that have already established communication connection, so that placement positions of the sound boxes and the smart screen correspond to the placement requirements of the sound emitting devices in the standard sound field, thereby ensuring a subsequent sound field calibration effect.
Illustratively, as shown in interface 503 in (c) of fig. 5, the smart screen displays prompt information 53 for prompting the user about the placement positions of the sound boxes. Thereafter, when the smart screen detects that the user clicks the confirm control 531, it can determine that the placement of the sound boxes is completed. For example, as shown in fig. 6A, the user places sound box A, sound box B, sound box C, and sound box D according to the prompt 53 shown in the interface 503.
Optionally, during the sound field calibration process, if the smart screen determines that the current sound box placement deviates significantly, sound field calibration will be affected. The smart screen can then display prompt information to prompt the user to adjust the placement of the sound boxes, ensuring the sound field calibration effect.
For example, as shown in interface 503 in (c) of fig. 5, the sound box placement suggestion displayed by the smart screen is that the four sound boxes are placed separately around the room. For example, sound box A, sound box B, sound box C, and sound box D may be positioned as shown in the scene of fig. 6A. However, during sound field calibration, the smart screen determines that sound box A and sound box C are placed together on the left side of the smart screen, and sound box B and sound box D are placed together on the right side, i.e., no sound box is placed behind the sofa. The smart screen determines that the current sound box placement would lead to a poor sound field calibration effect, and may then display prompt information to prompt the user to reposition the sound boxes.
In some embodiments, the smart screen may automatically perform sound field calibration based on the positional relationship between the smart screen and the plurality of sound boxes. Optionally, the smart screen may transmit ultrasonic signals in a time-sharing manner through its left and right speakers as shown in fig. 2B. Correspondingly, a microphone array is disposed in each sound box, and the sound box can determine the distance and angle relationship between itself and the smart screen through a positioning algorithm according to the time difference between the two ultrasonic signals received by the microphone array.
The sound box can then send the determined distance and angle relationship to the smart screen. The smart screen can determine the geometric relationship of distance and angle between each sound box and itself according to the acquired distance and angle relationships sent by the sound boxes, and then calibrate the "emperor position" to the default position in front of the smart screen.
Optionally, it is assumed by default that the user watches video directly in front of the smart screen, so the default position is directly in front of the smart screen. The default position can further be determined according to the display screen size of the smart screen and is taken as the optimal viewing and listening position.
For example, the display screen size of the smart screen is 75 inches, and the default position is, for example, a position 3 meters to 4 meters in front of the smart screen. For another example, the display screen size of the smart screen is 100 inches, and the default position is, for example, a position 5 meters to 6 meters in front of the smart screen.
For example, as shown in the scenario of fig. 6A, the smart screen faces the sofa, four speakers are placed in the direction facing the smart screen, and the smart screen is in communication with the four speakers. And, the definition of the front and back directions has been preset in the smart screen. Taking the example of the smart screen configured with two left and right speakers as shown in fig. 2B, the smart screen can distinguish the left and right positions of the two speakers.
In the sound field calibration process, the intelligent screen sends a start positioning instruction to the sound box for indicating the sound box to start positioning. Then, the left and right speakers of the smart screen transmit ultrasonic signals in a time-sharing manner, and according to the left and right positions of the speakers, the smart screen can determine whether the sound box is positioned on the left side of the smart screen or on the right side of the smart screen.
Correspondingly, the sound box can receive ultrasonic signals sent by the intelligent screen, and the position relation between the sound box and the intelligent screen is determined through a positioning algorithm.
As shown in fig. 6B, take as an example the process in which sound box A determines the angular relationship between itself and the smart screen. The distance between the two speakers Spk_R and Spk_L of the smart screen is D.
Optionally, the speaker distance information may be carried in the start positioning instruction sent by the smart screen to the sound box, the distance being D; or the distance D may be obtained by sound box A in other ways.
The two speakers of the smart screen transmit ultrasonic signals in a time-sharing manner, with a time interval T. Optionally, sound box A may acquire the time interval T. The times at which sound box A receives the two ultrasonic signals are t_R1 and t_L1, respectively; for example, as shown in fig. 6B, the time at which sound box A receives the ultrasonic signal sent by the speaker Spk_R is t_R1, and the time at which it receives the ultrasonic signal sent by the speaker Spk_L is t_L1.
Then, based on the propagation speed Vs of the ultrasonic signal, the angle θ between sound box A and the smart screen can be determined according to the formula (t_L1 − t_R1 − T) · Vs = D · sin θ. Sound box A can then send the determined angle information to the smart screen.
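As a minimal sketch of this angle calculation (the function and parameter names are illustrative and not part of the application), the formula above can be evaluated directly, clamping the ratio before the arcsine to tolerate measurement noise:

```python
import math

def angle_to_screen(t_L1: float, t_R1: float, T: float, D: float,
                    Vs: float = 343.0) -> float:
    """Angle theta between sound box A and the smart screen, from the
    arrival times of the two time-shared ultrasonic signals.
    Vs is an assumed propagation speed in m/s."""
    # Path-length difference implied by (t_L1 - t_R1 - T) * Vs = D * sin(theta)
    path_diff = (t_L1 - t_R1 - T) * Vs
    # Clamp against measurement noise before taking the arcsine
    return math.asin(max(-1.0, min(1.0, path_diff / D)))
```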
As shown in fig. 6C, in the position information determining process, one of the speakers of the smart screen transmits an ultrasonic signal to sound box A, and after receiving the ultrasonic signal, sound box A replies with an ultrasonic signal to the smart screen. The time at which sound box A receives the ultrasonic signal is T1, the time at which sound box A replies with the ultrasonic signal is T2, the time at which the smart screen sends the ultrasonic signal is T3, and the time at which the smart screen receives the replied ultrasonic signal is T4. Then, the distance between sound box A and the smart screen can be determined, by the smart screen or by sound box A, from the four pieces of time information T1, T2, T3, and T4. Optionally, if sound box A determines the distance, it may then send the determined distance information to the smart screen.
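The application does not spell out the computation from T1 to T4; a minimal two-way ranging sketch (function and parameter names are illustrative) is shown below. Because each bracketed round-trip term is measured on a single device's clock, the smart screen and sound box A do not need synchronized clocks:

```python
def two_way_distance(T1: float, T2: float, T3: float, T4: float,
                     Vs: float = 343.0) -> float:
    """Distance between the smart screen and sound box A.
    T3/T4: send/receive times on the smart screen's clock.
    T1/T2: receive/reply times on sound box A's clock."""
    time_of_flight = ((T4 - T3) - (T2 - T1)) / 2.0  # one-way flight time
    return time_of_flight * Vs
```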
Thus, based on the process of determining the angle information and the distance information of the sound box A, other sound boxes can also determine the angle information and the distance information between each sound box and the intelligent screen.
Optionally, the sound box can determine the angle information and the distance information between the sound box and the intelligent screen through one or more interactions with the intelligent screen.
Optionally, during the process of receiving the ultrasonic signals, the sound box can collect multiple copies of each ultrasonic signal through its configured microphone array. The sound box can average the angles and distances corresponding to the individual microphones through a preset algorithm, thereby reducing the influence of signal jitter.
Subsequently, the intelligent screen calibrates the "emperor position" to a default position in front of the intelligent screen (such as a position 3 meters in front of the intelligent screen where the sofa is located) by default according to angle information and distance information fed back by the sound box. Thus, when a user uses the home theater on the sofa, a better audio-visual effect can be obtained.
In contrast, in the prior art, the user needs to hold a pickup device to collect audio and manually enter the distance information between the smart screen and the sound boxes before sound field calibration can be achieved. According to the sound field calibration method provided in the embodiments of the application, the smart screen can automatically complete sound field calibration by sending ultrasonic signals, effectively reducing the operation difficulty for the user.
In other embodiments, the determination of the position information between the smart screen and the sound boxes may also be accomplished by the smart screen. The position information includes angle information and distance information.
For example, as shown in the scenario of fig. 6A, taking sound box A as an example, the two speakers configured on the smart screen respectively transmit ultrasonic signals, and after receiving the ultrasonic signals, sound box A transmits the received ultrasonic signals back to the smart screen through its communication connection with the smart screen (such as a Wi-Fi connection). Correspondingly, after receiving the ultrasonic signals returned by sound box A, the smart screen can determine the angle information between sound box A and itself through a preset algorithm.
The smart screen sends a sound pickup instruction to sound box A through the communication connection between them (such as a Wi-Fi connection or a Bluetooth connection) to instruct sound box A to start picking up sound, and the smart screen also starts picking up sound itself. Then, the smart screen transmits an ultrasonic signal through either of its two speakers, and sound box A, after receiving the ultrasonic signal, transmits an ultrasonic signal back to the smart screen. During this process, the smart screen and sound box A each record the ultrasonic signals sent by themselves and by the peer device, and sound box A returns its recorded ultrasonic signals to the smart screen through the communication connection. Then, the smart screen determines the azimuth of sound box A, as well as the distance between sound box A and the smart screen, according to the ultrasonic signals recorded by itself and the acquired ultrasonic signals recorded by sound box A.
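The "preset algorithm" is not detailed in the application; one plausible building block (an illustrative sketch, not the patented method) estimates the arrival-time offset between the smart screen's own recording and the recording returned by sound box A via cross-correlation, after which the angle and two-way ranging relations above can be applied:

```python
import numpy as np

def arrival_offset(sig_ref: np.ndarray, sig: np.ndarray, fs: float) -> float:
    """Arrival-time offset (seconds) of `sig` relative to `sig_ref`,
    estimated from the peak of their cross-correlation."""
    corr = np.correlate(sig, sig_ref, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_ref) - 1)  # lag in samples
    return lag / fs
```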
Thus, based on the process of determining the angle information and the distance information of the sound box A, other sound boxes can also determine the angle information and the distance information between each sound box and the intelligent screen.
Optionally, the sound box can determine the angle information and the distance information between the sound box and the intelligent screen through one or more interactions with the intelligent screen.
Optionally, during the process of receiving the ultrasonic signals, the sound box and the smart screen can collect multiple copies of each ultrasonic signal through their configured microphone arrays. The smart screen can average the angles and distances corresponding to the individual microphones through a preset algorithm, thereby reducing the influence of signal jitter.
Subsequently, the smart screen may calibrate the sound field to a default position based on the determined angle information and distance information.
In other embodiments, sound field calibration may also be accomplished by the sound boxes independently, without a smart screen. For example, a user configures a plurality of sound boxes in a bedroom; the sound boxes form a communication system after establishing connections, and no smart screen is included in the communication system. The sound boxes can then automatically complete sound field calibration of the communication system through the ultrasonic signal positioning method.
Illustratively, as shown in FIG. 7A, sound box A, sound box B, sound box C, and sound box D form a communication system. In response to a user operation, one of the sound boxes can be determined to be the master sound box, and the other sound boxes are determined to be slave sound boxes. Hereinafter, the sound field calibration process will be described with sound box A as the master sound box.
In the sound field calibration process, the master sound box needs to determine the distance between every two sound boxes. If a communication system includes N sound boxes, N(N−1)/2 distances are determined. As shown in the scenario of fig. 7A, there are 4 sound boxes (i.e., N=4). Then, sound box A needs to determine 6 distances, including the distance L_AB between sound box A and sound box B, the distance L_AD between sound box A and sound box D, the distance L_AC between sound box A and sound box C, the distance L_BC between sound box B and sound box C, the distance L_BD between sound box B and sound box D, and the distance L_CD between sound box C and sound box D.
In addition, the master speaker also needs to determine the direction sequence of each slave speaker under the view angle of the master speaker.
And then, the master sound box can determine the position relation between each slave sound box and the master sound box under the view angle of the master sound box according to the distance between the sound boxes and the direction sequence of each slave sound box, and further perform sound field calibration based on the position relation.
Illustratively, as shown in fig. 7B, sound box A is the master sound box, and each sound box in the communication system has established a communication connection (such as a Wi-Fi connection). Then, based on the communication connection with each sound box, sound box A sends indication information to each sound box, instructing it to begin picking up sound and to send an ultrasonic signal after waiting for a period of time (i.e., notifying the sounding sequence). Sound box A can customize the waiting time of each sound box before it sends its ultrasonic signal, with a different waiting time for each sound box, so that the sound boxes can record one another's ultrasonic signals in order. That is, the sounding sequence of the sound boxes is determined by sound box A; for example, as shown in fig. 7B, the sounding sequence is sound box A, sound box B, sound box C, and sound box D in turn.
Then, sound box A starts sending its ultrasonic signal, and sound box B, sound box C, and sound box D each start sending ultrasonic signals after waiting the period indicated by the received indication information. As shown in fig. 7C, while the ultrasonic signals are being sent, each sound box records the ultrasonic signals sent by the other sound boxes and can determine the arrival time of each received ultrasonic signal; that is, as shown in fig. 7B, each sound box calculates an intermediate result. Then, sound box B, sound box C, and sound box D can transmit the recorded time information (i.e., the determined intermediate results) back to sound box A, and sound box A can determine the distance between every two sound boxes in the communication system according to the time information recorded by itself and the acquired time information recorded by the other sound boxes.
In addition, sound box A may determine the azimuth angle of each slave sound box relative to itself through the time differences with which different microphones in its microphone array receive the same ultrasonic signal sent from that slave sound box. Thus, in the process of determining the distances between the sound boxes, sound box A can determine the direction sequence of the sound boxes under its own viewing angle according to the received ultrasonic signals.
Then, as shown in fig. 7B, sound box A may determine the positional relationship between the sound boxes according to the distances between them and the direction sequence of each sound box. Sound box A can then calibrate the sound field based on this positional relationship.
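The application leaves open how the layout is recovered from the pairwise distances; one standard possibility (a sketch, not the patented method) is classical multidimensional scaling, which recovers the sound box coordinates from the distance matrix up to rotation and reflection, after which the direction sequence observed by sound box A can resolve the reflection ambiguity:

```python
import numpy as np

def layout_from_distances(D: np.ndarray) -> np.ndarray:
    """2-D coordinates (up to rotation/reflection) of N sound boxes from
    their N x N pairwise distance matrix, via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:2]         # two largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Illustrative distances L_AB ... L_CD for sound boxes A, B, C, D
# (here a 3 m x 4 m rectangle)
D = np.array([[0.0, 3.0, 4.0, 5.0],
              [3.0, 0.0, 5.0, 4.0],
              [4.0, 5.0, 0.0, 3.0],
              [5.0, 4.0, 3.0, 0.0]])
print(layout_from_distances(D))
```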
Optionally, the user may enter his or her orientation information during the master sound box confirmation process. In the scenario shown in fig. 7A, the sound box located at the user's left front is taken as the master sound box by default.
For example, one or more sound boxes voice-announce "Please press the play key on the sound box device at your left front" to prompt the user to press the button on the left-front sound box. Then, when the sound box operated by the user, such as sound box A, is detected, that sound box can be determined to be the master sound box. Subsequently, sound box A may use itself as the origin of coordinates to locate the orientations of the other sound boxes, such as at the user's right front, left rear, and right rear.
For another example, the plurality of sound boxes alternately broadcast voice prompts to prompt a user to determine the direction of the sound box, and when the user is detected to determine that the sound box is positioned at the left front, the corresponding sound box is determined to be the main sound box.
For example, a plurality of sound boxes alternately voice-announce "which direction you are in," you can say me, front left, front right, rear left, rear right. When the user replies with voice to the left front, the sound box recognizes that the intention set by the voice reply of the user is the left front direction through the voice recognition capability, and then the sound box can be calibrated to be the main sound box at the left front position, and other sound boxes are positioned by taking the sound box as the origin of coordinates.
Optionally, after the master sound box is determined, the other sound boxes that have not yet performed voice broadcast may skip the voice broadcast and be located directly with the determined master sound box as the origin of coordinates. Alternatively, each sound box broadcasts a voice prompt in turn, and the azimuth of each sound box is determined from the user's reply through its voice recognition capability.
For another example, the master sound box orientation is determined by means of gesture recognition. A plurality of sound boxes take turns broadcasting voice prompts to prompt the user to indicate each sound box's direction, and when it is detected that the user indicates that a sound box is located at the left front, the corresponding sound box is determined to be the master sound box.
For example, a plurality of sound boxes alternately voice-announce "please confirm by gesture which direction i are in. If in the front left, you can swing your arm from left to right; if in the front right, you can swing your arm from right to left; if at the left rear, you can swing your arm from top to bottom; if at the right rear, you can swing your arm from bottom to top. The sound box can determine the gesture of the user through the gesture recognition capability. When the sound box detects the gesture of the user 'waving the arm from left to right', the sound box can be identified that the gesture of the user is set to be the left front direction, and then the sound box can be marked as the main sound box at the left front position, and other sound boxes are positioned by taking the sound box as the origin of coordinates.
Optionally, after the master sound box is determined, the other sound boxes that have not yet performed voice broadcast may skip the voice broadcast and be located directly with the determined master sound box as the origin of coordinates. Alternatively, each sound box broadcasts a voice prompt in turn, and the direction of each sound box is determined from the user's gesture through its gesture recognition capability.
It should be understood that the foregoing voice broadcast content and gestures are all exemplary descriptions, and embodiments of the present application are not specifically limited to the voice broadcast content and gestures.
For another example, the user may enter azimuth information into the master sound box (e.g., sound box A), such as the direction in which sound box A is located; or the user can enter the positional relationship of each sound box into sound box A, such as information indicating that sound box B is located on the right side of sound box A.
In this way, the electronic devices in the communication system automatically determine the positional relationship between the devices, realizing automatic sound field calibration, simplifying user operation, reducing the operation difficulty for the user, and improving the user experience.
In some embodiments, a microphone array configured for an electronic device (e.g., including a smart screen, a sound box) may include a plurality of microphones arranged vertically, based on which spatial height information of a location of the electronic device may be located. In the sound field calibration process, the electronic equipment performs sound field calibration through the determined position relationship among the electronic equipment and the space height information, so that the accuracy of sound field calibration can be improved.
For example, in a scenario where the communication system includes a smart screen and sound boxes, the smart screen may determine a spatial horizontal line. Optionally, the spatial horizontal line is, for example, one or more of the lower edge position of the smart screen, the upper edge position of the smart screen, the speaker position of the smart screen, the floor position, the ceiling position, and the like. Optionally, the smart screen is configured with one or more of sky-sound speakers sounding toward the ceiling, speakers disposed in different directions over 360°, speakers disposed on the back side of the smart screen, and the like, and the position of the floor, the position of the ceiling, and the like can be determined through the ultrasonic signals transmitted by these speakers.
The smart screen may send an indication to the sound boxes to indicate the spatial horizontal line. Subsequently, during sound field calibration, a sound box receives an ultrasonic signal sent by one speaker of the smart screen, and based on the time differences with which the microphones in its vertically arranged microphone array receive the ultrasonic signal and the spacing of the microphones, the height of the sound box relative to the spatial horizontal line can be determined and the determined spatial height information fed back to the smart screen. Then, the smart screen can combine the determined positional relationship between itself and each sound box, the acquired spatial height information sent by each sound box, and its own spatial height information to perform sound field calibration, achieving more accurate sound field calibration.
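A minimal far-field sketch of this height estimate (the names are illustrative, and the geometry is simplified to two microphones relative to whatever the devices actually implement): the inter-microphone delay gives the elevation angle of the incoming signal, which together with the known distance to the transmitting speaker yields a height relative to the spatial horizontal line:

```python
import math

def height_relative_to_horizon(delta_t: float, d_mic: float,
                               distance: float, Vs: float = 343.0) -> float:
    """Height of a sound box relative to the spatial horizontal line.
    delta_t:  arrival-time difference (s) between the top and bottom
              microphones of the vertical array.
    d_mic:    vertical spacing (m) between those microphones.
    distance: known distance (m) to the transmitting speaker, assumed
              to lie on the spatial horizontal line."""
    # Far-field assumption: Vs * delta_t = d_mic * sin(phi)
    phi = math.asin(max(-1.0, min(1.0, Vs * delta_t / d_mic)))
    # Positive phi (signal arriving from above) means the box sits below the line
    return -distance * math.sin(phi)
```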
For another example, in a scenario in which the communication system includes only sound boxes, the master sound box may determine the spatial horizontal line. Optionally, the spatial horizontal line is, for example, one or more of the lower edge position of the master sound box, its upper edge position, its speaker position, the floor position, the ceiling position, and the like. The master sound box may send an indication to the slave sound boxes to indicate the spatial horizontal line. Subsequently, during sound field calibration, the height of each sound box relative to the spatial horizontal line can be determined based on the time differences with which the microphones in its vertically arranged microphone array receive the ultrasonic signals and the spacing of the microphones.
Then, the main sound box can combine the determined distance, direction sequence and space height information of each sound box to perform sound field calibration so as to realize more accurate sound field calibration.
In the above embodiments, the process of automatically determining the positional relationship between the devices by the electronic device using the ultrasonic positioning technique is described as an example. It should be appreciated that the electronic devices in the communication system may also implement the automatic determination of the positional relationship between the devices in a variety of other ways.
For example, by configuring one or more of an Ultra Wideband (UWB) sensor, a millimeter wave sensor, multiple antennas for Wi-Fi protocol positioning, multiple antennas for bluetooth protocol positioning, etc. on an electronic device, the position positioning relationship between the devices is automatically determined.
In the following, ultrasonic positioning technology is taken as an example to illustrate the positioning process of a plurality of electronic devices. For processes that implement automatic positioning of the electronic devices in other manners, reference may be made to the process implemented through ultrasonic positioning technology, and the details are not repeated.
In addition, for the sound field calibration algorithm used by the smart screen in the sound field calibration process, reference may be made to existing sound field calibration algorithms, which are not specifically limited or described in the embodiments of the present application.
The following describes the sound field calibration process in detail, taking the communication system including the smart screen and the sound box as an example. It should be understood that, in the case that the communication system does not include the smart screen, the sound field calibration process may refer to the sound field calibration process including the smart screen in the communication system, which is not repeated in the embodiments of the present application.
In some embodiments, during the sound field calibration process, the intelligent screen can also calibrate the sound field to the position of the user by locating the position of the user, thereby providing better listening experience for the user.
In some examples, sound field calibration may be performed after the sound boxes and the smart screen have been woken up. Optionally, the user can wake up the sound boxes and the smart screen through a preset voice command, such as the wake-up word "Xiaoyi, Xiaoyi", or a voice command for performing sound field calibration, such as "sound field calibration", may be preset. Then, when the smart screen and the sound boxes in the communication system detect the preset voice command, they can determine that the user instructs sound field calibration, and can record the time at which the voice command was detected for subsequently locating the user's position.
For example, as shown in fig. 8, the smart screen and the four sound boxes each detect the voice command issued by the user; for example, sound box A detects the voice command at time t1, the smart screen at time t2, sound box B at time t3, sound box D at time t4, and sound box C at time t5. Each sound box can then send the time at which it received the voice command to the smart screen.
The smart screen and the plurality of sound boxes in the communication system complete clock synchronization through a high-precision clock synchronization algorithm, with the time error within a preset range (for example, on the order of 1 μs). The smart screen can then determine the time differences according to the acquired times, fed back by the sound boxes, at which each received the voice command. From the propagation speed of sound, the smart screen can determine the differences in distance between the devices and the user, and further determine the distances between the user and the smart screen and between the user and each sound box, for example d1 to d5 respectively.
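Since the moment the user spoke is unknown, the synchronized detection times only directly constrain distance differences. One common way to recover the user position (a sketch with assumed 2-D device coordinates and illustrative times, not the application's algorithm) is to solve jointly for the position and the utterance time by least squares:

```python
import numpy as np
from scipy.optimize import least_squares

Vs = 343.0
# Assumed device positions (m): sound boxes A-D and the smart screen,
# taken from the earlier ultrasonic calibration
positions = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0],
                      [4.0, 3.0], [2.0, -0.2]])
# Clock-synchronized times (s) at which each device detected the voice command
t_detect = np.array([0.01335, 0.01038, 0.01422, 0.01165, 0.00944])

def residuals(params):
    x, y, t0 = params  # user position and unknown utterance time
    dist = np.linalg.norm(positions - np.array([x, y]), axis=1)
    return t0 + dist / Vs - t_detect

sol = least_squares(residuals, x0=[2.0, 1.5, 0.0])
print("estimated user position:", sol.x[:2])
```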
The smart screen may then determine the positional relationship among the user, the smart screen, and the speaker in combination with the positional relationship (e.g., including distance and angle relationships) between the smart screen and the speaker determined by the ultrasonic positioning technique. Then, the intelligent screen can adjust the sound effect of the sound box based on the position relation among the user, the intelligent screen and the sound box, and calibrate the 'emperor' sound field to the position of the user.
In this way, the sound field is calibrated to the user's position through the times at which each sound box detected the user's voice command. In contrast, in the prior art, when the user's position changes, the user has to hold the pickup device again to collect audio before the sound field can be recalibrated. According to the sound field calibration method provided in the embodiments of the application, when the user's position changes, the sound field can be automatically calibrated to the user's position in response to the user's voice command, meeting the user's needs while reducing the operation difficulty.
In other examples, the user issues a preset voice command, such as "Xiaoyi, Xiaoyi". The smart screen and the sound boxes in the communication system can determine the user's position in response to detecting the preset voice command.
For example, as shown in fig. 9, the smart screen and the four sound boxes each detect the voice command issued by the user, and the smart screen and the sound boxes can then determine the user's direction relative to themselves through a sound source localization algorithm. After that, the smart screen and the sound boxes can send ultrasonic signals in the user's direction and use the ultrasonic signals to detect the distance to the user.
For example, as shown in fig. 9, using the ultrasonic signals, sound box A determines that the distance between itself and the user is D1, sound box B determines D3, sound box D determines D4, and sound box C determines D5. Each sound box may then send the determined distance between itself and the user to the smart screen.
Accordingly, the smart screen may obtain the distance between each speaker device and the user, and may determine the positional relationship among the user, the smart screen, and the speaker by combining the positional relationship (including the distance and the angular relationship) between the smart screen and the speaker determined by the above-mentioned ultrasonic positioning technique.
Then, the intelligent screen can adjust the sound effect of the sound box based on the position relation among the user, the intelligent screen and the sound box, and calibrate the 'emperor' sound field to the position of the user.
In this way, the user's direction is determined through the user's voice command detected by each electronic device, and the distance to the user is then measured, so that the sound field is calibrated to the user's position. In contrast, in the prior art, when the user's position changes, the user has to hold the pickup device again to collect audio before the sound field can be recalibrated. According to the sound field calibration method provided in the embodiments of the application, when the user's position changes, the sound field can be automatically calibrated to the user's position in response to the user's voice command, meeting the user's needs while reducing the operation difficulty.
In other examples, smart home systems include sensors (e.g., millimeter wave sensors) that may be used to locate the user's position. These sensors may then send the user location to the smart screen after determining the user location. Then, the intelligent screen can calibrate the sound field according to the position of the user and the determined position relation (such as distance and angle relation) between the intelligent screen and the sound box, so as to calibrate the emperor position of the sound field to the position of the user.
In other examples, the user may carry an electronic device such as a mobile phone configured with a UWB sensor, a millimeter wave sensor, or the like, with which the positional relationship between the electronic device and the smart screen and each sound box can be determined; this serves as the positional relationship between the user and the smart screen and each sound box. The smart screen can then acquire this positional relationship and perform sound field calibration in combination with the determined positional relationship (such as the distance and angle relationship) between the smart screen and the sound boxes, thereby calibrating the "emperor position" of the sound field to the user's position.
In other examples, the user may wear a wearable device, such as a smart watch, smart glasses, or smart earphones. Through the Bluetooth, Wi-Fi, UWB, millimeter wave, and other sensors configured on the wearable device, the positional relationship between the wearable device and the smart screen and each sound box can be determined; this serves as the positional relationship between the user and the smart screen and each sound box. The smart screen can then acquire this positional relationship and perform sound field calibration in combination with the determined positional relationship (such as the distance and angle relationship) between the smart screen and the sound boxes, thereby calibrating the "emperor position" of the sound field to the user's position.
Thus, by any one or more of the above example modes, the adaptive sound field calibration of the user position change can be realized, more flexible sound field calibration is provided for the user, and the use experience of the user is improved.
In some embodiments, a plurality of users may use the home theater at the same time, so the listening experience of the plurality of users can be taken into account in the sound field calibration process, realizing sound field calibration with multiple center positions.
Illustratively, as shown in the scenario of fig. 10, both users C1 and C2 use a home theater. Then, the smart screen may determine the user positions of the user C1 and the user C2 and may determine the positional relationship among the smart screen, the sound box, the user C1 and the user C2 by way of one or several of the above-described exemplary embodiments. After that, the intelligent screen can adjust the playback parameters of a plurality of sound boxes and the intelligent screen according to the determined position relation, so that the user C1 and the user C2 can obtain better listening experience. The playback parameters include, for example, various parameters such as phase, frequency response, loudness, reverberation, etc.
For example, as shown in the scenario of fig. 10, the smart screen may adjust the playback parameters of the plurality of sound boxes and of itself, so that sound box A and sound box C, which are close to user C1, provide a better listening experience for user C1, and sound box B and sound box D, which are close to user C2, provide a better listening experience for user C2. Alternatively, if the smart screen's relative distance and orientation to the two users are similar, the smart screen may provide a similar listening experience for both users.
For example, the smart screen adjusts the loudness of the sound boxes so that sound box A and sound box C provide user C1 with audio at a preset loudness that meets user C1's loudness requirement. User C1 is thus not subjected to excessively loud audio merely to accommodate the loudness requirement of user C2.
Further, when different sound waveforms meet at the same moment, for example when the peaks of two sound waveforms meet, a sound reinforcement effect is produced; when the peak of one sound waveform meets the valley of another, a sound cancellation effect is produced. Therefore, when adjusting the playback parameters of different devices, the smart screen can reduce the mutual influence of the playback effects of different sound boxes by adjusting the phase and other parameters.
For example, as shown in the scenario of fig. 10, while sound box A and sound box C provide a better listening experience for user C1, the influence of sound box A and sound box C on sound box B and sound box D providing a better listening experience for user C2 can be reduced.
In this way, through joint adjustment of the playback parameters of the plurality of sound boxes, the accuracy of the sound emission directions is improved in a multi-user scenario and the influence of room reverberation on multi-user listening is reduced, thereby providing a better listening experience for multiple users and improving their experience.
In some embodiments, by jointly adjusting the playback parameters of different sound boxes, the mutual superposition and mutual cancellation of their sound waveforms are controlled, so that a sound target area and a mute area can be delineated in the space. The sound waveforms of the sound boxes superpose in the sound target area and cancel in the mute area, so that the sound plays in the sound target area while the mute area remains silent or much quieter.
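The superposition behind the target and mute areas can be illustrated with two 1 kHz tones (a toy demonstration, not the multi-band control the devices would actually perform):

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.01, 1 / fs)
f = 1_000.0
wave_a = np.sin(2 * np.pi * f * t)                 # sound box A
wave_b_target = np.sin(2 * np.pi * f * t)          # in phase: peaks meet peaks
wave_b_mute = np.sin(2 * np.pi * f * t + np.pi)    # antiphase: peaks meet valleys

print("target-area peak level:", np.max(np.abs(wave_a + wave_b_target)))  # ~2.0
print("mute-area peak level:  ", np.max(np.abs(wave_a + wave_b_mute)))    # ~0.0
```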
Illustratively, as shown in fig. 11, the smart screen adjusts the playback parameters of itself and the plurality of sound boxes according to the determined positional relationship among the one or more users, the smart screen, and the sound boxes, so that the living room is a sound target area and the area outside the living room is a mute area. For example, through joint adjustment of the playback parameters by the devices, the master bedroom is made a mute area. Thus, the user can use the home theater in the living room without affecting another user resting in the master bedroom.
Therefore, the influence of the home theater on users in other areas is reduced through the division of the mute areas while the listening experience of the users is ensured.
In some embodiments, the smart screen may determine the positional relationship among the smart screen, the sound boxes, and the user through ultrasonic positioning technology, sound source localization technology, or based on high-precision clock synchronization technology, or through the methods described in one or more of the above examples. Thereafter, the sounding times of the smart screen and the sound boxes can be adjusted according to this positional relationship, so that the sounds emitted by the speakers of the plurality of devices reach the user's ears at the same time phase, bringing the user a better listening experience.
Illustratively, as shown in fig. 12, the smart screen determines that the distance between the sound box a and the user is D1, the distance between the smart screen and the user is D2, the distance between the sound box B and the user is D3, the distance between the sound box D and the user is D4, and the distance between the sound box C and the user is D5.
Also, the smart screen can determine the propagation time of sound from each device to the user's ear based on the determined distances and the propagation speed of sound: after the speaker of sound box A sounds, the sound reaches the user's ear after time t11; after the speaker of the smart screen sounds, after time t22; after the speaker of sound box B sounds, after time t33; after the speaker of sound box D sounds, after time t44; and after the speaker of sound box C sounds, after time t55.
Thereafter, using the sound propagation times of the devices, the smart screen can adjust the sounding times of the speakers of sound box A, the smart screen, sound box B, sound box D, and sound box C to t1 to t5 respectively, so that the time phases at which the sounds reach the user's ears are identical or nearly identical, ensuring the user's listening experience.
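A minimal sketch of this alignment (the distances are illustrative, in the spirit of fig. 12): each device delays its playback start by however much its sound would otherwise arrive early relative to the device farthest from the user:

```python
Vs = 343.0
# Illustrative distances (m) from each device to the user
distances = {"sound box A": 2.0, "smart screen": 3.0,
             "sound box B": 2.5, "sound box D": 1.5, "sound box C": 3.5}

travel = {name: d / Vs for name, d in distances.items()}    # t11 ... t55 analogues
slowest = max(travel.values())
delays = {name: slowest - t for name, t in travel.items()}  # per-device start delay

for name, delay in delays.items():
    print(f"{name}: delay playback by {delay * 1000:.2f} ms")
```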
Optionally, during sound transmission, objects in the home environment (e.g., furniture, electronic devices, etc.) may reflect the sound. Then, when adjusting the sounding times of itself and the sound boxes, the smart screen can also adjust them so that the direct sound and the reflected sound of a sounding device reach the user's ears at the same time, providing a better listening experience. It should be noted that for the influence of the home environment on sound transmission, reference may be made to the related embodiments below, which are not described here.
Therefore, through positioning the position relation between each electronic device and the user in the home theater, the sounding time of the loudspeaker of each electronic device is adjusted, so that the sound reaches the user's ear in the same time phase or similar time phases, and the listening experience of the user is improved.
In some embodiments, home theater scenes support 3D audio playback, which may enable the re-arrangement of the trajectories of sound objects in video content, resulting in a better sound experience for the user.
Different sounds in the video content may correspond to different sound objects, and the propagation trajectory of a sound in space may correspond to the sound trajectory of its sound object. Each sound object corresponds to its own sound content and sound trajectory, and the sound trajectory is generated and changes over time.
Optionally, the smart screen may determine the positional relationship among the smart screen, the sound boxes, and the user through ultrasonic positioning technology, sound source localization technology, or based on high-precision clock synchronization technology, or through the methods described in one or more of the above examples. Then, the smart screen can match the sound trajectories of the sound objects in the video content to the spatial scene of the actual home theater, and calibrate the sound field to the user's position, so that the home theater's sound presentation of the video content matches the sound trajectories, providing the user with an immersive viewing experience.
Illustratively, the video content contains multiple flying bees. The sounds of the bees flapping their wings correspond to multiple sound objects, and the smart screen arranges and renders the sound trajectories of these sound objects. Also, based on the determined positional relationship among the smart screen, the sound boxes, and the user, the smart screen calibrates the sound trajectories to the user's listening position, and through the coordinated playback of the multiple sound boxes and the smart screen, the user experiences the bees flying around during video playback.
In contrast, in the prior art, all sounds corresponding to the video content in a home theater are emitted from in front of the user, so the user cannot obtain an immersive experience. According to the sound field calibration method provided in the embodiments of the application, the sound trajectories of sound objects can be rearranged and rendered and the sound field calibrated to the user's position, so that the user obtains an immersive listening experience.
Optionally, in response to the user selecting a target character, the smart screen may take the viewing angle of the target character selected by the user as the user's character viewing angle, and rearrange and render the sound object trajectories in the video content according to the determined character viewing angle.
And the intelligent screen calibrates the sound field of the role view angle to the listening position of the user, so that the user participates in the video content with the determined role view angle, and the listening experience and the viewing experience of the determined role view angle are obtained.
For example, as shown in fig. 13, in response to the user's operation of selecting character A during the display of video content, the smart screen may take the viewing angle of character A as the user's character viewing angle. The smart screen decodes the audio in the video content to extract the individual sound objects, the different sound objects having waveforms as shown in (1), (2), (3), and (4) of fig. 14. Before sound output, the smart screen combines the positional relationships among the smart screen, the sound boxes, and the user, arranges and renders the sound trajectory of each sound object according to the character viewing angle of character A, and arranges the sound trajectories to the user's position.
For example, in the home theater scene shown in fig. 15, the smart screen determines that the current line of dialogue "Wait" corresponds to the sound of character C, who, in the scene shown in fig. 13, is located behind the character A selected by the user. Then, after completing the sound trajectory arrangement and rendering, the smart screen can cooperate with sound box C and sound box D located behind the sofa to play this line, so that a user sitting on the sofa obtains an immersive listening experience.
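One toy way to steer such a sound object toward the sound boxes behind the listener (simple cosine amplitude panning; the directions and the panning law are illustrative, not the application's renderer) is:

```python
import numpy as np

def pan_gains(object_dir: np.ndarray, speaker_dirs: np.ndarray) -> np.ndarray:
    """Power-normalized per-sound-box gains: weight each box by how closely
    its direction from the listener matches the sound object's direction."""
    sims = np.maximum(0.0, speaker_dirs @ object_dir)  # clipped cosine similarity
    if not sims.any():
        sims = np.ones_like(sims)                      # fall back to an even spread
    return sims / np.linalg.norm(sims)

# Approximate unit directions from the listener to sound boxes A, B, C, D
# (illustrative); the line "Wait" comes from behind the listener (-y).
speaker_dirs = np.array([[-0.7, 0.7], [0.7, 0.7], [-0.7, -0.7], [0.7, -0.7]])
print(pan_gains(np.array([0.0, -1.0]), speaker_dirs))  # rear boxes C, D get the sound
```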
In some embodiments, in a multi-user scenario, the smart screen may integrate the positional relationships of multiple users, providing an immersive listening experience for each of the different users.
For example, different users have selected the same view of the character. The intelligent screen can arrange and render the sound track of the sound object corresponding to the sound in the video content through the above example method, so that a plurality of users acquire the immersive listening experience with the same character view angle.
For another example, different users have selected different character viewing angles. The smart screen may group the sound boxes in the home theater, with different groups of sound boxes serving different users; for example, user A corresponds to a first group of sound boxes and user B corresponds to a second group of sound boxes. The smart screen can arrange and render the sound trajectories of sound objects according to the character viewing angle selected by user A in the video content, and provide user A with the listening experience of that character viewing angle through the first group of sound boxes. Likewise, the smart screen can arrange and render the sound trajectories of sound objects according to the character viewing angle selected by user B, and provide user B with the listening experience of that character viewing angle through the second group of sound boxes. An immersive listening experience is thus provided for multiple users in a multi-user scenario.
In contrast, in the prior art, the user cannot select a character viewing angle while the home theater plays video. According to the sound field calibration method provided in the embodiments of the application, the sound field is calibrated to the user's position and the user is provided with an immersive listening experience from the character viewing angle of his or her choice, improving the user experience.
In some embodiments, an application with a sound field calibration function may be installed in an electronic device (such as a mobile phone, a smart screen, etc.), and a user may directly adjust "emperor positions" of sound field calibration in a home theater through the application, so as to calibrate the sound field to "emperor positions" selected by the user, thereby meeting the personalized use requirements of the user.
The smart screen may generate map information after determining a positional relationship between the smart screen and each of the sound boxes, and transmit the map information to the mobile phone.
As shown in fig. 16, the mobile phone can display an interface 1601 according to the received map information. The interface 1601 displays a home theater map, in which the relative positional relationship of the smart screen and the sound boxes is schematically shown, and an "emperor position" icon 161. In response to the user moving the "emperor position" icon 161, the mobile phone may determine the "emperor position" indicated by the user.
The mobile phone may then send to the smart screen information on the position at which the user finally placed the "emperor position" icon 161. After receiving the information, the smart screen can calibrate the sound field to the corresponding position, so that the "emperor position" of the sound field calibration meets the user's needs.
Optionally, the mobile phone displays the "emperor position" icon 161 at a random position in the home theater map, or at a default display position. The default display position is a fixed display position, a recommended "emperor position", or the "emperor position" determined in the previous sound field calibration.
Optionally, after determining the positional relationship between itself and each sound box, the smart screen may perform sound field calibration, determine the optimal listening position, and instruct the mobile phone to display the "emperor position" icon 161 at that position. Also, when the user holds the mobile phone or wears a wearable device, the user's position can be determined through the positioning function of the mobile phone or wearable device, for example by a UWB sensor, a millimeter wave sensor, or other high-precision indoor positioning technology. The mobile phone may then guide the user to the determined optimal listening position.
Optionally, while displaying the interface 1601, the mobile phone may display a prompt message for prompting the user to confirm whether to calibrate the sound field to the "emperor position" indicated by the current "emperor position" icon 161. After detecting the user's confirmation operation, the mobile phone may send the information of the position corresponding to the "emperor position" icon 161 to the smart screen.
Optionally, the smart screen may also perform sound field calibration for a plurality of "emperor positions" in response to user operations.
For example, as shown in fig. 17, an application having a sound field calibration function is installed in the mobile phone. After determining the positional relationship between itself and each sound box, the smart screen sends the generated map information to the mobile phone. The mobile phone can first display one or a preset number of "emperor position" icons according to the map information. Then, in response to the user's operation of creating new "emperor position" icons, a corresponding number of additional "emperor position" icons can be created.
As shown in interface 1701, the mobile phone displays "emperor position" icon 171 and "emperor position" icon 172. Then, in response to the user operating the two "emperor position" icons, the mobile phone may determine the two "emperor positions" indicated by the user and send the information of these two "emperor positions" to the smart screen. The smart screen can complete sound field calibration for the two "emperor positions" according to the received information.
It should be understood that the smart screen may also directly display a corresponding map after determining the positional relationship between the smart screen and each of the speakers. Then, in response to the user moving the "emperor position" icon on the intelligent screen, the intelligent screen can also complete the calibration of the sound field according to the "emperor position" indicated by the user.
Therefore, the position relationship between the intelligent screen and the sound box is automatically determined in various modes, and the sound field calibration of the home theater is flexibly performed in response to the operation of a user. Thereby meeting the personalized requirements of the user, reducing the operation difficulty of the user and improving the use experience of the user.
In some embodiments, different object materials in the space differ in their ability to reflect sound and the like. Therefore, in the sound field calibration process, the smart screen needs to take the influence of the spatial environment into account, so as to improve the accuracy of sound field calibration.
For example, when the incident sound shown in (1) of fig. 18 contacts the surface of an object, the reflected sound shown in (2) of fig. 18, the transmitted sound shown in (3) of fig. 18, and the absorbed sound shown in (4) of fig. 18 are generated. Moreover, because object surfaces differ in flatness, the same incident sound may correspond to multiple reflected sounds in different directions. Sound field calibration then needs to calibrate the incident sound, the reflected sounds, and so on in multiple directions to the same "emperor position". Therefore, the smart screen needs to establish in advance an acoustic parameter model corresponding to the current spatial environment, for convenient use in the subsequent sound field calibration process, avoiding the influence of the spatial environment on sound field calibration.
Optionally, after the acoustic parameter model is established, the intelligent screen may use it as an input to the sound field calibration processes of the above embodiments, improving the accuracy of those processes.
In some embodiments, as shown in fig. 19, any electronic device in the home theater may serve as the transmitting end device. After the speaker of the transmitting end device transmits an ultrasonic signal, the other devices may serve as receiving end devices and receive the direct sound corresponding to the ultrasonic signal. After the ultrasonic signal is reflected by different objects in the room (such as walls, roofs, floors, windows, other devices, and the like), the receiving end devices can also receive the reverberant sound corresponding to the ultrasonic signal.
The receiving end device can perform acoustic calculation based on the received ultrasonic signals (that is, the direct sound and the reverberant sound) and determine the corresponding acoustic parameters. The acoustic parameters include, for example, the decoration materials of the home environment and their reflection, absorption, and transmission coefficients for sound.
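For illustration only (not part of the patent text), the following minimal sketch shows how an aggregate reflection figure could be derived from the split between direct and reverberant energy in a received recording; the function names, the fixed direct-sound window, and the simple energy-ratio model are all assumptions:

```python
import numpy as np

def estimate_coefficients(recording: np.ndarray, fs: int,
                          direct_window_ms: float = 5.0) -> dict:
    """Estimate an aggregate reflection coefficient from a received
    ultrasonic recording. Assumption: the first few milliseconds after
    the strongest peak are the direct sound; the remainder is
    reverberant sound returned by the room's surfaces."""
    energy = recording.astype(float) ** 2
    onset = int(np.argmax(energy))                # direct-path arrival
    win = int(fs * direct_window_ms / 1000)
    direct = energy[onset:onset + win].sum()
    reverb = energy[onset + win:].sum()
    reflection = reverb / (direct + reverb)       # energy returned by surfaces
    return {"reflection": reflection,
            "absorbed_or_transmitted": 1.0 - reflection}
```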
Then, the transmitting end device can transmit ultrasonic signals at different angles, or different electronic devices can take turns serving as the transmitting end device, so that environment detection of the current space is completed and the intelligent screen can collect the acoustic parameters reported by each sound box to establish an acoustic parameter model.
In the subsequent sound field calibration process, the intelligent screen can adjust the playback frequency, response parameters, phase parameters, loudness parameters, and the like of its own speakers and those of each sound box based on the acoustic parameter model, thereby completing the sound field calibration.
In some examples, the acoustic parameters determined by the electronic device may also be used alone in the sound field calibration process.
In some examples, a sound box transmits ultrasonic signals through its speakers toward different angles and orientations and carries the angle and orientation information in the ultrasonic signals. After receiving the ultrasonic signals, the microphones in other sound boxes can perform acoustic analysis combined with the carried angle and orientation information and determine the corresponding acoustic parameters (such as reflection, absorption, and transmission coefficients). After the ultrasonic environment detection of the whole house is completed, each sound box feeds back the acoustic parameters corresponding to the different angles and orientations to the intelligent screen, and the intelligent screen models the acoustic parameters accordingly.
In other examples, multiple speakers may be provided in the smart screen, and the smart screen may transmit ultrasonic signals through speakers facing different orientations. For example, the smart screen may be provided with up-firing "sky sound" speakers directed at the ceiling, speakers arranged around 360°, speakers on the back of the smart screen, and the like. After receiving an ultrasonic signal, the microphone in a sound box performs acoustic analysis to determine the corresponding acoustic parameters (such as reflection, absorption, and transmission coefficients). In this way, full and complete detection of the home environment is achieved. The intelligent screen can then perform acoustic parameter modeling by combining the acoustic parameters fed back by the different sound boxes with the sounding orientations of the speakers that transmitted the ultrasonic signals.
In the above two example scenarios, the transmitting end device may carry the transmission angle and azimuth in the transmitted ultrasonic signal, and the receiving end device calculates and feeds back the acoustic parameters corresponding to that angle and azimuth. Alternatively, the transmitting end device transmits ultrasonic signals toward different angles and azimuths and, after receiving the acoustic parameters fed back by the receiving end device, matches each parameter set with its angle and azimuth. That is, the matching of acoustic parameters with angle and azimuth can be completed at either the receiving end or the transmitting end. The matching relationships and the corresponding acoustic parameters are ultimately obtained by the electronic device (such as the smart screen) that establishes the acoustic parameter model.
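A minimal sketch of the matching step, assuming the modeling device keys each reported parameter set by the emission direction (all type and field names below are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Direction:
    angle_deg: float    # elevation of the emitted ultrasonic beam
    azimuth_deg: float  # horizontal bearing of the emitted beam

@dataclass
class AcousticParams:
    reflection: float
    absorption: float
    transmission: float

# The modeling device (e.g. the smart screen) stores one parameter set per
# emission direction, whether the sender or the receiver did the matching.
model: dict[Direction, AcousticParams] = {}

def record(direction: Direction, params: AcousticParams) -> None:
    model[direction] = params

record(Direction(angle_deg=30.0, azimuth_deg=90.0),
       AcousticParams(reflection=0.6, absorption=0.3, transmission=0.1))
```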
In other examples, communication connections are established between the smart screen and the plurality of sound boxes in the home theater, forming a networking relationship, and the acoustic parameter model can be established through cooperation between the smart screen and the plurality of sound boxes.
For example, after sound box A in the communication system transmits an ultrasonic signal, sound box B in the communication system may receive it. Through the various implementations exemplified above, sound box B may determine the positional relationship between sound box A and itself. Sound box B can then determine, through acoustic analysis, the acoustic parameters corresponding to the reflecting objects and the different angles of the ultrasonic signal in the home environment.
Then, electronic devices other than sound box A in the communication system can be selected in turn to transmit ultrasonic signals while other devices receive them, so that the acoustic parameters corresponding to multiple reflection paths and reflecting objects can be calculated. The home environment is thereby detected more fully, and the accuracy of the acoustic parameter model is improved.
Alternatively, the smart screen may be used as a scheduling device in a communication system for determining a transmitting end device transmitting an ultrasonic signal.
In other examples, the speakers in a home theater are typically deployed on the ceiling and/or the floor. During acoustic parameter modeling, an ultrasonic signal may then be transmitted by a sound box deployed on the ceiling and received by a sound box deployed on the floor. The sound box receiving the ultrasonic signal can then calculate, through acoustic analysis, the acoustic parameters corresponding to the objects between the ceiling and the floor, and feed them back to the intelligent screen, which completes the acoustic parameter model.
Alternatively, during acoustic parameter model creation, the ultrasonic signal may be transmitted by a sound box deployed on the floor and received by a sound box deployed on the ceiling. The sound box receiving the ultrasonic signal can then calculate, through acoustic analysis, the acoustic parameters corresponding to the objects between the floor and the ceiling, and feed them back to the intelligent screen, which completes the acoustic parameter model.
Alternatively, the sound box deployed on the floor may, for example, be placed on a floor stand or a television cabinet.
Therefore, through acoustic parameter modeling, the accuracy of automatic sound field calibration is improved.
In some embodiments, a sound box or the smart screen may transmit ultrasonic signals through its speakers toward different orientations in the space, such as at least the six orientations up, down, front, back, left, and right. The sound box or smart screen can then analyze the size of the current home environment space from the received ultrasonic reflections. For example, the sound box or smart screen determines the size of the space from the time difference between transmitting an ultrasonic signal and receiving its reflection. The intelligent screen can thereby obtain the size of the home environment space.
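The underlying time-of-flight calculation is simple; a minimal sketch, assuming a fixed speed of sound and made-up echo times for the six orientations:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature; an assumed constant

def wall_distance(round_trip_s: float) -> float:
    """Distance to a reflecting surface: half the round-trip path."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# Example echo delays (seconds) for the six orientations; values invented.
echo = {"left": 0.012, "right": 0.020, "front": 0.017,
        "back": 0.009, "up": 0.014, "down": 0.004}
d = {k: wall_distance(t) for k, t in echo.items()}
width, depth, height = (d["left"] + d["right"],
                        d["front"] + d["back"],
                        d["up"] + d["down"])
```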
The smart screen may then determine the positional relationship between itself and each sound box by the methods exemplified in the above embodiments. Combining this positional relationship with the size of the home environment space, the smart screen can determine the absolute geometric positions of itself and each sound box within the space. Subsequently, in the sound field calibration process, the smart screen can calibrate the sound field by adjusting the playback parameters of the sound box devices based on the absolute geometric positions together with the determined acoustic parameter model.
Alternatively, the intelligent screen can determine the positional relationships among itself, each sound box, and the user by the methods exemplified in the above embodiments. Combining these positional relationships with the size of the home environment space, the intelligent screen can determine the absolute geometric positions of itself, each sound box, and the user within the space. Subsequently, in the sound field calibration process, the intelligent screen can adjust the playback parameters of the sound box devices based on the absolute geometric positions together with the determined acoustic parameter model, calibrating the sound field to the user's position.
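A minimal sketch of the coordinate conversion, assuming device positions were first measured relative to the smart screen and the screen's own position in the room is known from the wall-distance measurements above (names and example values are illustrative):

```python
import numpy as np

def to_room_coordinates(relative: dict, screen_abs: np.ndarray) -> dict:
    """Shift positions measured relative to the smart screen into the
    room's coordinate frame by adding the screen's absolute position."""
    return {name: screen_abs + rel for name, rel in relative.items()}

screen_abs = np.array([2.0, 0.1, 1.0])   # x, y, z of the screen in the room
devices = to_room_coordinates(
    {"sound box A": np.array([-1.5, 0.3, -0.2]),
     "sound box B": np.array([+1.5, 0.3, -0.2]),
     "user":        np.array([0.0, 2.5, 0.0])},
    screen_abs)
```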
Therefore, through the determination of the size of the household environment space and the combination of acoustic parameter modeling, the accuracy of automatic sound field calibration is improved.
It should be noted that, in the home environment detection process, the electronic devices in the communication system may also complete the detection without using ultrasonic signals. For example, an electronic device may complete the detection of the home environment with audible sound (such as by playing a piece of music); the specific detection process may refer to the above ultrasonic detection process and is not repeated in the embodiments of the present application.
In addition, in the above-described embodiments, the sound field calibration process is described with ultrasonic signals. It should be appreciated that the user's location in the home environment may also be located via the ultrasonic signal, thereby enabling control of the brightness, switching, etc. of the lights at different locations based on the user's location.
In some aspects, various embodiments of the application may be combined and the combined solutions implemented. Optionally, some operations in the flows of the method embodiments are optionally combined, and/or the order of some operations is optionally changed. The order of execution of the steps in each flow is merely exemplary; other orders of execution are possible, and the stated order is not the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, the process details of one embodiment apply in a similar manner to other embodiments, and different embodiments may be used in combination.
Moreover, some steps in the method embodiments may be equivalently replaced with other possible steps, some steps may be optional and omitted in some usage scenarios, and other possible steps may be added to the method embodiments.
Moreover, the method embodiments may be implemented alone or in combination.
Fig. 20 is a schematic flow chart of a sound field calibration method according to an embodiment of the present application. As shown in fig. 20, the method includes the following steps.
S2001, the first electronic device transmits first information for detection into the space.
The first electronic device is, for example, a smart screen or a sound box. The first information is a wireless signal, and the wireless signal is one or more of the following: an ultrasonic signal, a UWB signal, a Bluetooth signal, a Wi-Fi signal, or a millimeter wave signal.
In some embodiments, different object materials in the space differ in their ability to reflect sound and the like. In the sound field calibration process, the first electronic device therefore needs to account for the influence of the spatial environment in order to improve the accuracy of sound field calibration. Thus, the first electronic device may send the first information for detection into the space to detect the spatial environment.
S2002, the first electronic device acquires acoustic parameters in the space according to the first information for detection.
The acoustic parameters include the reflection coefficient, absorption coefficient, and transmission coefficient of objects in the space for sound.
In some embodiments, the first electronic device determines the acoustic parameter of the space according to a transmission time of the at least one wireless signal and a reception time of a reflected signal corresponding to the at least one wireless signal.
S2003, the first electronic device determines second information associated with an event occurring in the space according to the acoustic parameter.
Wherein the second information includes placement advice information, or sound field calibration information.
In some embodiments, the first electronic device sends placement suggestion information to the second electronic device, the placement suggestion information being used to adjust placement locations of a plurality of electronic devices within the space, the plurality of electronic devices including the first electronic device.
In some embodiments, the acoustic parameter is an acoustic parameter at a first time within the space, and the event is an event that occurs after or before the first time.
For example, an event may include a new electronic device entering the space or a change in the location of an electronic device in the space. The first electronic device then generates placement suggestion information according to the acoustic parameters acquired at the first time.
In some examples, after the first electronic device determines the placement suggestion information associated with the event occurring in the space, the user adjusts the placement position of the electronic device according to the placement suggestion information. Then the event occurring in the space is an event after the first time.
In other examples, after the first electronic device determines the placement suggestion information associated with the event occurring in the space, the user does not adjust the placement position of the electronic device according to the placement suggestion information. Then the event occurring in the space is the event before the first time.
In some embodiments, the first electronic device transmits third information for positioning to at least one third electronic device. Then, first position information of the at least one third electronic device relative to the first electronic device is determined according to the third information for positioning received by the at least one third electronic device. The first electronic device then performs sound field calibration according to the first position information and the acoustic parameters, and determines sound field calibration information associated with an event occurring in the space.
In the prior art, by contrast, the user must hold a pickup device to collect audio and manually enter the distance between the first electronic device and the second electronic device before sound field calibration can be performed. With the sound field calibration method provided by the embodiments of the application, the first electronic device or the second electronic device completes the sound field calibration automatically by transmitting ultrasonic signals, effectively reducing the operation difficulty for the user. In addition, combining the acoustic parameters of the space in the calibration process effectively improves the accuracy of sound field calibration.
In some embodiments, the first electronic device obtains second location information of the first user in the space relative to the first electronic device, the second location information being used to determine the second information.
In this way, the user's position is determined and used to calibrate the sound field to the area where the user is located, so that the user can enjoy a better audio experience after the sound field calibration is completed.
In some examples, the clocks of the first electronic device and the at least one third electronic device are synchronized. The first electronic device adjusts the sounding times of itself and the at least one third electronic device according to the first position information, the second position information, and the acoustic parameters, so that their sounds arrive within the area indicated by the second position information at the same or similar times.
Illustratively, as shown in fig. 12, the smart screen determines that the distance between sound box A and the user is D1, the distance between the smart screen and the user is D2, the distance between sound box B and the user is D3, the distance between sound box D and the user is D4, and the distance between sound box C and the user is D5.
Further, the intelligent screen can determine the propagation time of sound from each device to the user's ear based on the determined acoustic parameters, the distances, and the propagation speed of sound: after the speaker of sound box A sounds, the sound reaches the user's ear in time t11; after the speaker of the intelligent screen sounds, in time t22; after the speaker of sound box B sounds, in time t33; after the speaker of sound box D sounds, in time t44; and after the speaker of sound box C sounds, in time t55.
The intelligent screen can then use each device's propagation time to adjust the sounding times of the speakers of sound box A, the intelligent screen, sound box B, sound box D, and sound box C to t1 through t5 respectively, so that the sounds reach the user's ears in the same or nearly the same time phase, ensuring the user's listening experience.
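A minimal sketch of the alignment idea, assuming each device's propagation time to the user's ear has already been determined as above (the delay-to-the-slowest-path rule and all values are illustrative):

```python
def playback_delays(arrival_s: dict) -> dict:
    """Delay every device by its slack relative to the slowest path, so
    that all sounds reach the user's ear at (nearly) the same time."""
    slowest = max(arrival_s.values())
    return {device: slowest - t for device, t in arrival_s.items()}

arrivals = {"sound box A": 0.009, "smart screen": 0.006,
            "sound box B": 0.010, "sound box D": 0.012,
            "sound box C": 0.011}                 # invented example values
delays = playback_delays(arrivals)                # start offsets t1..t5
```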
Therefore, through positioning the position relation between each electronic device and the user in the home theater, the sounding time of the loudspeaker of each electronic device is adjusted, so that the sound reaches the user's ear in the same time phase or similar time phases, and the listening experience of the user is improved.
In some embodiments, the first electronic device determines one or more sound tracks corresponding to one or more sound objects included in first audio to be played, and rearranges the one or more sound tracks in the sound field calibration process according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the one or more sound tracks in the area indicated by the second position information.
Illustratively, the video content contains multiple hummingbirds in flight. The sounds of the hummingbirds flapping their wings then correspond to multiple sound objects, whose sound tracks are arranged and rendered. Based on the determined positional relationships among the intelligent screen, the sound boxes, and the user, together with the acoustic parameters, the sound tracks are calibrated to the user's listening position, and through the coordinated playback of the at least one second electronic device and the first electronic device, the user experiences the hummingbirds flying around during video playback.
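The patent does not prescribe a specific rendering algorithm; as a minimal sketch, a per-object track could be distributed across the speakers with distance-based gains so that the object appears to come from its intended position (function and variable names are assumptions):

```python
import numpy as np

def render_object(track: np.ndarray, obj_pos: np.ndarray,
                  speakers: dict) -> dict:
    """Split one sound object's track across speakers, weighting each
    speaker by inverse distance to the object's intended position.
    Simple distance-based panning, assumed purely for illustration."""
    w = {name: 1.0 / max(float(np.linalg.norm(obj_pos - pos)), 0.1)
         for name, pos in speakers.items()}
    total = sum(w.values())
    return {name: (g / total) * track for name, g in w.items()}

# One hummingbird object placed above and behind the listening position.
track = np.zeros(48000)                    # placeholder 1-second track
feeds = render_object(track, np.array([0.5, 3.0, 2.0]),
                      {"sound box A": np.array([-1.5, 0.4, 0.8]),
                       "sound box B": np.array([+1.5, 0.4, 0.8]),
                       "smart screen": np.array([0.0, 0.0, 1.0])})
```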
In the prior art, by contrast, all the sounds corresponding to video content in a home theater are emitted from in front of the user, and the user cannot obtain an immersive experience. With the sound field calibration method provided by the embodiments of the application, the sound tracks of sound objects can be rearranged and rendered and the sound field calibrated to the user's position, giving the user an immersive listening experience.
In some examples, the first electronic device determines, in response to a user operation, a target sound object selected by the user from the one or more sound objects. The first electronic device then rearranges the target sound track corresponding to the target sound object in the sound field calibration process according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the target sound track within the area indicated by the second position information.
For example, as shown in fig. 13, in response to the user's operation of selecting character A while the video content is displayed, the smart screen may take character A's viewing angle as the user's character viewing angle. As shown in fig. 14, the smart screen decodes the sound in the video content to extract each sound object, each with its own waveform. Before sound output, the smart screen combines the positional relationships among the smart screen, the sound boxes, and the user with the acoustic parameters, and arranges and renders the sound track of each sound object according to character A's viewing angle, so that the sound tracks are arranged to the user's position.
In the prior art, by contrast, the user cannot select a character viewing angle while a home theater plays video. With the sound field calibration method provided by the embodiments of the application, the sound field is calibrated to the user's position and an immersive listening experience from the character viewing angle selected by the user is provided, improving the user experience.
In addition, the first electronic device may further perform the steps and functions performed by the smart screen in the above embodiments, so as to implement the sound field calibration method provided in the above embodiments.
The sound field calibration method provided by the embodiment of the application is described in detail above with reference to fig. 3 to 20. The electronic device provided in the embodiment of the application is described in detail below with reference to fig. 21.
In one possible design, fig. 21 is a schematic structural diagram of a first electronic device according to an embodiment of the present application. As shown in fig. 21, the first electronic device 2100 may include: a transceiver unit 2101 and a processing unit 2102. The first electronic device 2100 may be used to implement the functionality of the first electronic device 100 referred to in the method embodiments described above.
Alternatively, the transceiver unit 2101 is used to support the first electronic device 2100 to execute S2001 in fig. 20.
Optionally, the processing unit 2102 is configured to support the first electronic device 2100 to execute S2002 and S2003 in fig. 20.
The transceiver unit may include a receiving unit and a transmitting unit, may be implemented by a transceiver or transceiver-related circuit components, and may be a transceiver or a transceiver module. The operations and/or functions of the units in the first electronic device 2100 implement the corresponding flows of the sound field calibration method described in the above method embodiments; for the relevant content of each step, refer to the functional descriptions of the corresponding functional units, which are not repeated here for brevity.
Alternatively, the first electronic device 2100 illustrated in fig. 21 may further include a storage unit (not illustrated in fig. 21) in which programs or instructions are stored. When the transceiving unit 2101 and the processing unit 2102 execute the program or instructions, the first electronic device 2100 illustrated in fig. 21 is enabled to execute the sound field calibration method described in the above-described method embodiment.
The technical effects of the first electronic device 2100 shown in fig. 21 may refer to the technical effects of the sound field calibration method described in the above method embodiment, and will not be described herein.
In addition to the form of the first electronic device 2100, the technical solution provided by the present application may also be a functional unit or a chip in the first electronic device, or an apparatus used in cooperation with the first electronic device.
The embodiment of the application also provides a chip system, which comprises: a processor coupled to a memory for storing programs or instructions which, when executed by the processor, cause the system-on-a-chip to implement the method of any of the method embodiments described above.
Alternatively, the processor in the system-on-chip may be one or more. The processor may be implemented in hardware or in software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general purpose processor, implemented by reading software code stored in a memory.
Alternatively, there may be one or more memories in the chip system. The memory may be integrated with the processor or provided separately from the processor; embodiments of the present application are not limited in this respect. The memory may be a non-transitory memory, such as a ROM, which may be integrated on the same chip as the processor or provided separately on different chips; the type of memory and the manner in which the memory and the processor are provided are not particularly limited in the embodiments of the present application.
Illustratively, the chip system may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
It should be understood that the steps in the above-described method embodiments may be accomplished by integrated logic circuitry in hardware in a processor or instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
The embodiment of the present application also provides a computer-readable storage medium having a computer program stored therein, which when run on a computer causes the computer to perform the above-described related steps to implement the sound field calibration method in the above-described embodiment.
The embodiment of the present application also provides a computer program product, which when run on a computer causes the computer to perform the above-mentioned related steps to implement the sound field calibration method in the above-mentioned embodiment.
In addition, the embodiment of the application also provides a device. The apparatus may be a component or module in particular, and may comprise one or more processors and memory coupled. Wherein the memory is for storing a computer program. The computer program, when executed by one or more processors, causes an apparatus to perform the sound field calibration method in the method embodiments described above.
The apparatus, computer-readable storage medium, computer program product, and chip provided by the embodiments of the application are all used to perform the corresponding methods provided above. For the beneficial effects they achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
The steps of a method or algorithm described in connection with the present disclosure may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC).
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the foregoing division into functional modules is merely illustrative, for convenience and brevity of description. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. For the specific working processes of the systems, devices, and units described above, refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed method may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division of the modules or units is only one logic function division, and other division modes can be adopted when the modules or units are actually implemented; for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, module or unit indirect coupling or communication connection, which may be electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Computer-readable storage media include, but are not limited to: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A sound field calibration method, applied to a first electronic device, the method comprising:
transmitting first information for detection into a space;
acquiring acoustic parameters in the space according to the first information for detection;
determining, according to the acoustic parameters, second information associated with an event occurring within the space.
2. The method of claim 1, wherein the second information comprises placement suggestion information or sound field calibration information.
3. The method according to claim 2, wherein the method further comprises:
sending the placement suggestion information to a second electronic device, wherein the placement suggestion information is used for adjusting the placement position of at least one electronic device in the space.
4. A method according to any one of claims 1-3, wherein the acoustic parameter is an acoustic parameter acquired in the space during a first period of time, and the event is an event occurring after the end of the first period of time.
5. The method of any of claims 1-4, wherein the event comprises a new electronic device entering the space or a movement of a location of the electronic device in the space.
6. The method according to claim 2, wherein the method further comprises:
transmitting third information for positioning to at least one third electronic device;
determining first position information of the at least one third electronic device relative to the first electronic device according to the third information for positioning received by the at least one third electronic device;
wherein the determining, according to the acoustic parameters, second information associated with an event occurring in the space comprises:
performing sound field calibration according to the first position information and the acoustic parameters, and determining sound field calibration information associated with the event occurring in the space.
7. The method of claim 6, wherein the method further comprises:
acquiring second position information of a first user in the space relative to the first electronic device, wherein the second position information is used for determining the second information.
8. The method of claim 7, wherein the clocks of the first electronic device and the at least one third electronic device are synchronized, wherein the determining second information associated with an event occurring within the space based on the acoustic parameters comprises:
adjusting the sounding times of the first electronic device and the at least one third electronic device according to the first position information, the second position information, and the acoustic parameters, so that the sounds of the first electronic device and the at least one third electronic device arrive within the area indicated by the second position information at the same or similar times.
9. The method according to claim 7 or 8, wherein said determining second information associated with events occurring within said space based on said acoustic parameters comprises:
Determining one or more sound tracks corresponding to one or more sound objects included in the first audio to be played;
rearranging the one or more sound tracks in a sound field calibration process according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the one or more sound tracks in the area indicated by the second position information.
10. The method of claim 9, wherein after the determining one or more sound tracks corresponding to one or more sound objects included in the first audio to be played, the method further comprises:
In response to a user operation, determining a target sound object of the one or more sound objects selected by the user;
wherein the rearranging the one or more sound tracks in a sound field calibration process according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the one or more sound tracks in the area indicated by the second position information, comprises:
rearranging a target sound track corresponding to the target sound object in a sound field calibration process according to the first position information, the second position information, and the acoustic parameters, so that the calibrated sound field matches the target sound track in the area indicated by the second position information.
11. The method according to any of claims 1-10, wherein the first information comprises at least one wireless signal transmitted in at least one direction in the space; the acquiring the acoustic parameters in the space according to the first information for detection includes:
determining the acoustic parameters of the space according to the transmission time of the at least one wireless signal and the reception time of the reflected signal corresponding to the at least one wireless signal.
12. The method according to any one of claims 1 to 11, wherein,
the acoustic parameters include at least one of the reflection coefficient, absorption coefficient, and transmission coefficient of an object in the space for sound.
13. An electronic device, comprising: a processor and a memory coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to perform the method of any of claims 1-12.
14. A computer readable storage medium, characterized in that the computer readable storage medium comprises a computer program which, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 1-12.
15. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1-12.