WO2024131484A1 - Sound Field Calibration Method and Electronic Device - Google Patents

Sound Field Calibration Method and Electronic Device

Info

Publication number
WO2024131484A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
user
sound
speaker
position information
Prior art date
Application number
PCT/CN2023/134737
Other languages
English (en)
French (fr)
Inventor
蔡双林
程力
梁志涛
郑磊
谢殿晗
徐昊玮
董伟
朱焱
惠少博
孙渊
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024131484A1

Definitions

  • the present application relates to the field of terminal technology, and in particular to a sound field calibration method and electronic equipment.
  • a home theater is generally composed of a smart screen and multiple smart speaker devices, which provide users with a good audio-visual experience through the collaborative operation of multiple devices.
  • the sound field of the home theater needs to be calibrated.
  • the user needs to hold a pickup device (such as a mobile phone, microphone, professional audio acquisition equipment, etc.) to collect audio.
  • the smart screen can complete the sound field calibration based on the audio data sent by the pickup device.
  • since the sound field calibration depends on the accuracy and completeness of the audio collected by the user's handheld sound pickup device, the operation is difficult for the user.
  • the sound field can only be calibrated to the position where the user holds the sound pickup device. If the user's position changes, the user needs to hold the sound pickup device again to collect audio to re-calibrate the sound field.
  • the present application provides a sound field calibration method and electronic device.
  • the technical solution provided by the present application realizes automatic sound field calibration based on positioning technology, reducing the difficulty of user operation. Moreover, as the user's position changes, the sound field can be automatically calibrated to the area indicated by the user's position.
  • a sound field calibration method is provided, which is applied to a system including a first electronic device and at least one second electronic device.
  • the method includes: at least one second electronic device receives first information for positioning from the first electronic device respectively. According to the first information for positioning received by at least one second electronic device, first position information of at least one second electronic device relative to the first electronic device is determined. Second position information of the first user relative to the first electronic device is obtained. According to the first position information and the second position information, the sound field is calibrated to the area indicated by the second position information.
  • compared with solutions in which the user needs to hold a sound pickup device to collect audio and manually enter the distance information between the first electronic device and the second electronic device to achieve sound field calibration, in the present application the first electronic device or the second electronic device automatically obtains the position information through the first information used for positioning and completes the sound field calibration, which effectively reduces the difficulty of user operation.
  • obtaining second position information of a first user relative to a first electronic device includes: the first electronic device and at least one second electronic device respectively receive a sound emitted by the first user, and the second position information is determined according to the times at which the first electronic device and the at least one second electronic device receive the sound of the first user.
  • the sound emitted by the first user is, for example, a voice command emitted by the first user.
  • the sound field is calibrated to the user's position through the time information of the user's voice detected by each electronic device.
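  • Purely as an illustration (not part of the disclosure): a minimal Python sketch of this kind of voice-based localization, assuming synchronized device clocks, device positions already expressed in a shared 2-D frame from the first position information, and a simple grid search as the solver; the function below is hypothetical.

        import numpy as np

        def locate_user_tdoa(device_xy, t_arrival, c=343.0, half=5.0, n=201):
            # device_xy: (N, 2) positions of the smart screen and speakers (m);
            # t_arrival: (N,) times at which each device detected the user's voice.
            device_xy = np.asarray(device_xy, dtype=float)
            dt = np.asarray(t_arrival, dtype=float) - t_arrival[0]  # measured TDOAs
            g = np.linspace(-half, half, n)
            X, Y = np.meshgrid(g, g)                                # candidate grid
            # Distance from every grid point to every device: shape (n, n, N).
            d = np.hypot(X[..., None] - device_xy[:, 0], Y[..., None] - device_xy[:, 1])
            # Keep the grid point whose predicted TDOAs best match the measured ones.
            err = ((d - d[..., :1]) / c - dt) ** 2
            i = np.unravel_index(err.sum(axis=-1).argmin(), X.shape)
            return float(X[i]), float(Y[i])

        # Smart screen at the origin, four speakers around the room:
        devices = [(0.0, 0.0), (-2.0, 1.0), (2.0, 1.0), (-2.0, 3.0), (2.0, 3.0)]
        times = [0.00583, 0.00652, 0.00652, 0.00652, 0.00652]
        print(locate_user_tdoa(devices, times))  # ~(0.0, 2.0)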
  • compared with using a handheld sound pickup device to collect audio and re-calibrate the sound field, the sound field calibration method provided in the embodiments of the present application can, in response to the sound emitted by the user, automatically calibrate the sound field to the user's position when the user's position changes, thereby meeting the user's usage needs while reducing the difficulty of user operation.
  • obtaining the second position information of the first user relative to the first electronic device includes: receiving the second position information determined in response to a user operation during a process in which a third electronic device displays a first interface based on the first position information; the first interface is used to display the positional relationship between the first electronic device and at least one second electronic device, and the user operation is used to move the position of an identifier corresponding to the first user displayed on the first interface.
  • the third electronic device is, for example, an electronic device installed with an application having a sound field calibration function.
  • in this way, the position to which the sound field is calibrated can be moved so that the sound field calibration result meets the user's personalized needs.
  • in the process in which the third electronic device displays the first interface based on the first position information, before the second position information is determined in response to the user operation, the method further includes: sending the first position information to the third electronic device.
  • obtaining second position information of the first user relative to the first electronic device includes: sending second information for positioning to the first user. Determining the second position information according to a sending time of the second information and a receiving time of reflected information corresponding to the second information.
  • the first electronic device and at least one second electronic device send an ultrasonic signal to the first user, so that each electronic device can determine the second position information between itself and the first user based on the time of sending the ultrasonic signal and the time of receiving the corresponding reflected signal.
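  • For illustration only, the send/reflect timing arithmetic this implies, assuming sound travels at roughly 343 m/s in room-temperature air:

        SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

        def echo_distance(t_sent, t_echo_received, c=SPEED_OF_SOUND):
            # The round-trip time of flight halves into a one-way distance.
            return c * (t_echo_received - t_sent) / 2.0

        # An echo detected 11.7 ms after emission puts the user about 2 m away:
        print(echo_distance(0.0, 0.0117))  # ~2.01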
  • obtaining the second position information of the first user relative to the first electronic device includes: determining the second position information of the first user relative to the first electronic device based on the device position of a fourth electronic device carried by the first user.
  • a user may carry an electronic device such as a mobile phone, and by configuring a UWB sensor, a millimeter wave sensor, etc. on the electronic device, the positional relationship between the electronic device and the smart screen and various speakers can be determined. This positional relationship is the positional relationship between the user and the smart screen and various speakers.
  • alternatively, the user may carry a wearable device, such as a smart watch, smart glasses, smart headphones, etc. Through sensors configured on the wearable device, such as Bluetooth, Wi-Fi, UWB sensors, millimeter wave sensors, etc., the positional relationship between the wearable device and the smart screen and each speaker can be determined, and this positional relationship is the positional relationship between the user and the smart screen and each speaker.
  • the clocks of the first electronic device and the at least one second electronic device are synchronized, and the sound field is calibrated to the area indicated by the second position information based on the first position information and the second position information, including: adjusting the sound emission time of the first electronic device and the at least one second electronic device based on the first position information and the second position information, so that the time when the sound of the first electronic device and the at least one second electronic device arrives at the area indicated by the second position information is the same or similar.
  • the same or similar time is a point in time measured against the synchronized device clocks. For example, by adjusting the sound emission time of each electronic device, the sound of every device reaches the user's ear at the same or a similar time point (such as 14:00).
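  • A minimal sketch of how such emission times could be staggered (illustrative only; it assumes the synchronized clocks stated above and known device-to-listener distances):

        def emission_delays(distances_m, c=343.0):
            # Delay each device relative to the farthest one so that every
            # wavefront reaches the listening area at the same instant.
            flight = [d / c for d in distances_m]
            latest = max(flight)
            return [latest - t for t in flight]  # farthest device gets zero delay

        # Smart screen 3.0 m from the listener, speakers at 2.0 m and 4.5 m:
        print(emission_delays([3.0, 2.0, 4.5]))  # ~[0.0044, 0.0073, 0.0] seconds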
  • the method further includes: obtaining third position information of the second user relative to the first electronic device.
  • the first sound field is calibrated to the first area indicated by the second position information
  • the second sound field is calibrated to the second area indicated by the third position information
  • the first sound field or the second sound field is a sound field formed by part or all of the electronic devices in the first electronic device and at least one second electronic device.
  • the first user is user C1, and the second user is user C2.
  • User C1 and user C2 use a home theater.
  • the smart screen (i.e., the first electronic device) can determine the user positions of user C1 and user C2, as well as the positional relationship between the smart screen, the speakers (i.e., the second electronic devices), user C1, and user C2.
  • the smart screen can adjust the playback parameters of multiple speakers and the smart screen according to the determined positional relationship, so that both user C1 and user C2 can obtain a better listening experience.
  • the smart screen can adjust the playback parameters of the multiple speakers and the smart screen so that speakers A and C near user C1 provide a better listening experience for user C1, and speakers B and D near user C2 provide a better listening experience for user C2.
  • the smart screen can provide similar listening experiences for the two users.
  • in this way, the accuracy of the sound emission direction is improved in a multi-user scenario, and the impact of room reverberation on the listening of multiple users is reduced, thereby providing a better listening experience for multiple users and improving their user experience.
  • the first area range and the second area range are target sound areas, and the area outside the first area range and the second area range is a silent area.
  • calibrating the sound field to the area indicated by the second position information includes: determining one or more sound tracks corresponding to one or more sound objects included in the first audio to be played. Based on the first position information and the second position information, re-arranging the one or more sound tracks during the sound field calibration process so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information.
  • the sound of multiple hummingbirds flapping their wings corresponds to multiple sound objects, and the sound tracks of multiple sound objects are arranged and rendered.
  • the sound track is calibrated to the user's listening position, and through the playback cooperation of at least one second electronic device and the first electronic device, the user can get the listening experience of hummingbirds flying around during the video playback.
  • the sound field calibration method provided in the embodiment of the present application can re-arrange and render the sound track of the sound object, and calibrate the sound field to the user's position, so that the user can obtain an immersive listening experience.
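  • The disclosure does not fix a particular renderer; purely as an illustration, constant-power amplitude panning is one classic way to place a sound object's track between a pair of speakers as heard from the calibrated listening position (the function and the 60-degree spread below are hypothetical):

        import math

        def stereo_pan_gains(theta_deg, spread_deg=60.0):
            # Constant-power pan across a speaker pair that spans `spread_deg`
            # as seen from the calibrated listening position.
            x = max(-1.0, min(1.0, theta_deg / (spread_deg / 2.0)))  # -1 .. +1
            angle = (x + 1.0) * math.pi / 4.0                        # 0 .. pi/2
            return math.cos(angle), math.sin(angle)                  # (left, right)

        # A hummingbird object sweeping from left to right:
        for az in (-30, 0, 30):
            left, right = stereo_pan_gains(az)
            print(f"azimuth {az:+d}  L {left:.2f}  R {right:.2f}")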
  • the method further includes: determining a target sound object among the one or more sound objects selected by the user in response to a user operation. Rearrange one or more sound tracks during the sound field calibration process according to the first position information and the second position information so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information, including: re-arranging the target sound track corresponding to the target sound object during the sound field calibration process according to the first position information and the second position information so that the calibrated sound field matches the target sound track within the area indicated by the second position information.
  • the sound field calibration method calibrates the sound field to the user's location and provides the user with an immersive listening experience of the character perspective selected by the user, thereby improving the user's experience.
  • determining the first position information of the at least one second electronic device relative to the first electronic device includes: a target second electronic device among the at least one second electronic device receives, at a first time, a first ultrasonic signal sent by the first electronic device at a second time, and in response feeds back a second ultrasonic signal to the first electronic device at a third time, where the first information includes the first ultrasonic signal.
  • the first electronic device receives the second ultrasonic signal at the fourth time. Based on the first time, the second time, the third time, and the fourth time, the distance of the target second electronic device relative to the first electronic device is determined, and the first position information includes the distance.
  • one of the speakers of the first electronic device sends an ultrasonic signal to the target second electronic device (speaker A), and after receiving the ultrasonic signal, speaker A replies with an ultrasonic signal to the smart screen.
  • the time when speaker A receives the ultrasonic signal is T1
  • the time when it replies with the ultrasonic signal is T2
  • the time when the smart screen sends the ultrasonic signal is T3
  • the time when it receives the reply ultrasonic signal is T4.
  • the smart screen or speaker A can determine the distance between speaker A and the smart screen based on the four times T1, T2, T3, and T4.
  • if speaker A determines the distance, the determined distance information can be sent to the smart screen.
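  • For illustration, the exchange above is the classic two-way ranging computation; a sketch of the arithmetic with hypothetical values:

        def two_way_distance(t1, t2, t3, t4, c=343.0):
            # T3: screen sends, T1: speaker A receives, T2: speaker A replies,
            # T4: screen receives the reply. The turnaround (T2 - T1) is measured
            # on speaker A's own clock, so the two clocks need not be synchronized.
            time_of_flight = ((t4 - t3) - (t2 - t1)) / 2.0
            return c * time_of_flight

        # A 45 ms round trip with a 28 ms turnaround leaves an 8.5 ms one-way
        # flight, i.e. roughly 2.9 m between the smart screen and speaker A:
        print(two_way_distance(t1=0.105, t2=0.133, t3=0.100, t4=0.145))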
  • the first electronic device may send the first information for positioning in a directional manner, such as sending the first information to a target second electronic device in at least one second electronic device.
  • alternatively, the first electronic device may send the first information for positioning in a non-directional manner, in which case the at least one second electronic device can also receive the first information.
  • At least one second electronic device receives first information for positioning from the first electronic device, including: at least one second electronic device receives a first ultrasonic signal sent by the first electronic device through a first speaker at a second time, and a third ultrasonic signal sent by the second speaker at a fifth time, and the first information for positioning includes the first ultrasonic signal and the third ultrasonic signal.
  • determining, according to the first information for positioning, the first position information of the at least one second electronic device relative to the first electronic device includes: any second electronic device among the at least one second electronic device determines its angle relative to the first electronic device based on the distance between the first speaker and the second speaker, the time difference between the second time and the fifth time, the time of receiving the first ultrasonic signal, the time of receiving the third ultrasonic signal, and the propagation speed of the first ultrasonic signal or the third ultrasonic signal, where the first position information includes the angle.
  • in this way, the first position information between each of the at least one second electronic device and the first electronic device can be determined, where the first position information includes the distance and angle of the second electronic device relative to the first electronic device.
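  • Illustrative only: in the far field, the path-length difference from the two speakers to the receiving device is approximately baseline x cos(angle), so the angle could be recovered roughly as follows (names and values hypothetical):

        import math

        def angle_from_baseline(t_rx_first, t_rx_third, t_tx_first, t_tx_third,
                                baseline_m, c=343.0):
            # Arrival-time difference of the two signals, corrected for the
            # known stagger between the two emission times (second vs. fifth time).
            dt = (t_rx_first - t_rx_third) - (t_tx_first - t_tx_third)
            path_diff = c * dt  # extra distance travelled from the first speaker
            x = max(-1.0, min(1.0, path_diff / baseline_m))
            # Far-field approximation: path difference ~ baseline * cos(angle).
            return math.degrees(math.acos(x))

        # Speakers 0.5 m apart on the smart screen, the second signal sent 50 ms
        # after the first; these arrivals put the device roughly 61 degrees off
        # the speaker baseline:
        print(angle_from_baseline(0.0100, 0.0593, 0.0, 0.05, baseline_m=0.5))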
  • calibrating the sound field to the area indicated by the second position information includes: based on the first position information and the second position information, adjusting the playback parameters of the first electronic device and the at least one second electronic device and calibrating the sound field to the area indicated by the second position information, the playback parameters including one or more of the following: a frequency response parameter, a phase parameter, a loudness parameter.
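  • As one illustrative slice of such a playback-parameter adjustment, the loudness parameter could compensate the roughly 6 dB per distance-doubling free-field decay so that every device is equally loud at the calibrated area (a sketch, not the disclosed algorithm):

        import math

        def loudness_trims_db(distances_m):
            # Boost farther devices so that all of them sound equally loud
            # at the area indicated by the second position information.
            d_ref = min(distances_m)
            return [20.0 * math.log10(d / d_ref) for d in distances_m]

        # Speakers at 2.0 m, 3.0 m and 4.5 m from the calibrated area:
        print(loudness_trims_db([2.0, 3.0, 4.5]))  # ~[0.0, +3.5, +7.0] dB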
  • the first information is a wireless signal
  • the wireless signal is one or more of the following: an ultrasonic signal, an ultra-wideband UWB signal, a Bluetooth signal, a wireless fidelity Wi-Fi signal, and a millimeter wave signal.
  • the first electronic device or the second electronic device is a smart screen or a speaker.
  • an electronic device in a second aspect, includes: a processor and a memory, the memory and the processor are coupled, the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor reads the computer instructions from the memory, the electronic device executes: at least one second electronic device receives first information for positioning from the first electronic device respectively. According to the first information for positioning received by at least one second electronic device, the first position information of at least one second electronic device relative to the first electronic device is determined. The second position information of the first user relative to the first electronic device is obtained. According to the first position information and the second position information, the sound field is calibrated to the area indicated by the second position information.
  • obtaining second position information of the first user relative to the first electronic device includes: the first electronic device and at least one second electronic device respectively receive a sound emitted by the first user, and the second position information is determined according to the times at which the first electronic device and the at least one second electronic device receive the sound of the first user.
  • obtaining the second position information of the first user relative to the first electronic device includes: receiving the second position information determined in response to a user operation during a process in which a third electronic device displays a first interface based on the first position information; the first interface is used to display the positional relationship between the first electronic device and at least one second electronic device, and the user operation is used to move the position of an identifier corresponding to the first user displayed on the first interface.
  • when the processor reads the computer instructions from the memory, the electronic device is further caused to execute: sending the first location information to a third electronic device.
  • obtaining second position information of the first user relative to the first electronic device includes: sending second information for positioning to the first user. Determining the second position information according to a sending time of the second information and a receiving time of reflected information corresponding to the second information.
  • obtaining the second position information of the first user relative to the first electronic device includes: determining the second position information of the first user relative to the first electronic device based on the device position of a fourth electronic device carried by the first user.
  • the clocks of the first electronic device and the at least one second electronic device are synchronized, and the sound field is calibrated to the area indicated by the second position information based on the first position information and the second position information, including: adjusting the sound emission time of the first electronic device and the at least one second electronic device based on the first position information and the second position information, so that the time when the sound of the first electronic device and the at least one second electronic device arrives at the area indicated by the second position information is the same or similar.
  • when the processor reads the computer instructions from the memory, the electronic device is further caused to execute: obtaining the third position information of the second user relative to the first electronic device.
  • the first sound field is calibrated to the first area indicated by the second position information
  • the second sound field is calibrated to the second area indicated by the third position information.
  • the first sound field or the second sound field is a sound field formed by part or all of the electronic devices among the first electronic device and the at least one second electronic device.
  • the first area range and the second area range are sound target areas, and an area range other than the first area range and the second area range is a silent area.
  • calibrating the sound field to the area indicated by the second position information includes: determining one or more sound tracks corresponding to one or more sound objects included in the first audio to be played. Based on the first position information and the second position information, re-arranging the one or more sound tracks during the sound field calibration process so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information.
  • when the processor reads the computer instructions from the memory, the electronic device is also caused to execute: in response to the user operation, determining the target sound object among the one or more sound objects selected by the user. Rearranging the one or more sound tracks during the sound field calibration process according to the first position information and the second position information so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information includes: rearranging the target sound track corresponding to the target sound object during the sound field calibration process according to the first position information and the second position information so that the calibrated sound field matches the target sound track within the area indicated by the second position information.
  • determining the first position information of the at least one second electronic device relative to the first electronic device includes: a target second electronic device among the at least one second electronic device receives, at a first time, a first ultrasonic signal sent by the first electronic device at a second time, and in response feeds back a second ultrasonic signal to the first electronic device at a third time, where the first information includes the first ultrasonic signal.
  • the first electronic device receives the second ultrasonic signal at the fourth time. Based on the first time, the second time, the third time, and the fourth time, the distance of the target second electronic device relative to the first electronic device is determined, and the first position information includes the distance.
  • At least one second electronic device receives first information for positioning from the first electronic device, including: at least one second electronic device receives a first ultrasonic signal sent by the first electronic device through a first speaker at a second time, and a third ultrasonic signal sent by the second speaker at a fifth time, and the first information for positioning includes the first ultrasonic signal and the third ultrasonic signal.
  • determining the first position information of the at least one second electronic device relative to the first electronic device includes: any second electronic device among the at least one second electronic device determines its angle relative to the first electronic device based on the distance between the first speaker and the second speaker, the time difference between the second time and the fifth time, the time of receiving the first ultrasonic signal, the time of receiving the third ultrasonic signal, and the propagation speed of the first ultrasonic signal or the third ultrasonic signal, where the first position information includes the angle.
  • calibrating the sound field to the area indicated by the second position information includes: based on the first position information and the second position information, adjusting the playback parameters of the first electronic device and the at least one second electronic device and calibrating the sound field to the area indicated by the second position information, the playback parameters including one or more of the following: a frequency response parameter, a phase parameter, a loudness parameter.
  • the first information is a wireless signal
  • the wireless signal is one or more of the following: an ultrasonic signal, an ultra-wideband UWB signal, a Bluetooth signal, a wireless fidelity Wi-Fi signal, and a millimeter wave signal.
  • the first electronic device or the second electronic device is a smart screen or a speaker.
  • a sound field calibration method is provided, which is applied to a system including a first electronic device and at least one second electronic device.
  • the method includes: at least one second electronic device receives first information for positioning from the first electronic device respectively. According to the first information for positioning received by at least one second electronic device, first position information of at least one second electronic device relative to the first electronic device is determined. According to the preset information and the first position information, placement suggestions are obtained.
  • the first electronic device may first determine the number of second electronic devices, the home environment, etc., and display prompt information to prompt the user about the placement of the first electronic device and the second electronic devices, so that their placement corresponds to the placement requirements of the sound devices in a standard sound field, thereby ensuring the subsequent sound field calibration effect.
  • the method further includes: sending a placement suggestion to a third electronic device, the placement suggestion being used to adjust the placement positions of the first electronic device and at least one second electronic device.
  • the user's third electronic device may display a map according to the placement suggestion. Then, the user may place multiple electronic devices in the space according to the schematic positions of each electronic device in the map, and the multiple electronic devices include the first electronic device.
  • placement suggestions may be obtained to determine whether the placement of the electronic devices in the space needs to be adjusted.
  • the method further includes: sending second information for detection into the space; acquiring acoustic parameters in the space according to the second information for detection. Acquiring placement suggestions according to the preset information and the first position information includes: acquiring placement suggestions according to the preset information, the first position information, and the acoustic parameters.
  • the acoustic parameters include a reflection coefficient, an absorption coefficient, and a transmission coefficient of sound by objects in a space where the first electronic device is located.
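  • The disclosure lists these coefficients without fixing how they are estimated; purely for illustration, a standard room-acoustics relation that such an estimate could build on is Sabine's formula RT60 = 0.161 * V / A, solved here for the room's total absorption:

        def sabine_absorption(volume_m3, rt60_s):
            # Sabine's formula RT60 = 0.161 * V / A, rearranged for the total
            # absorption A (in square-metre sabins) of the room.
            return 0.161 * volume_m3 / rt60_s

        # A 4 m x 5 m x 2.6 m living room with a measured 0.5 s reverberation tail:
        print(sabine_absorption(4 * 5 * 2.6, 0.5))  # ~16.7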
  • the method also includes: calibrating the sound field to a first area according to the first position information, the first area is located in front of the first electronic device, and the first area is one or more of the following: an area corresponding to the display screen size of the first electronic device, an area where the user is located, and an area indicated by the user.
  • by default, it is assumed that the user watches the movie directly in front of the first electronic device (such as a smart screen), so the default position of the calibrated sound field is directly in front of the first electronic device.
  • the default position can be determined according to the size of the display screen of the first electronic device and set as the optimal viewing and listening position.
  • for example, if the display screen size of the first electronic device is 75 inches, the default position is 3 meters to 4 meters directly in front of the smart screen; if the display screen size of the first electronic device is 100 inches, the default position is 5 meters to 6 meters directly in front of the smart screen.
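  • Interpolating the two anchor points above (75 inches -> 3 to 4 m, 100 inches -> 5 to 6 m) gives a hypothetical rule of thumb for other screen sizes; a sketch only, since the disclosure states only these two examples:

        def default_listening_range_m(diagonal_inches):
            # Linear interpolation between the 75-inch and 100-inch anchor
            # points stated above; other sizes are an extrapolation.
            near = 3.0 + (diagonal_inches - 75.0) * (5.0 - 3.0) / 25.0
            return near, near + 1.0

        print(default_listening_range_m(75))   # (3.0, 4.0)
        print(default_listening_range_m(100))  # (5.0, 6.0)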
  • the direction of the first area is determined according to a preset direction of the first electronic device, or is determined in response to a user operation.
  • an electronic device in a fourth aspect, includes: a processor and a memory, the memory and the processor are coupled, the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor reads the computer instructions from the memory, the electronic device executes: at least one second electronic device receives first information for positioning from the first electronic device respectively. According to the first information for positioning received by the at least one second electronic device, the first position information of the at least one second electronic device relative to the first electronic device is determined. According to the preset information and the first position information, placement suggestions are obtained.
  • when the processor reads the computer instructions from the memory, the electronic device is further caused to execute: sending a placement suggestion to a third electronic device, the placement suggestion being used to adjust the placement positions of the first electronic device and at least one second electronic device.
  • when the processor reads the computer instructions from the memory, the electronic device is also caused to execute: sending second information for detection into the space; acquiring acoustic parameters in the space according to the second information for detection. Acquiring placement suggestions according to the preset information and the first position information includes: acquiring placement suggestions according to the preset information, the first position information, and the acoustic parameters.
  • when the processor reads the computer instructions from the memory, the electronic device is also caused to execute: calibrating the sound field to the first area according to the first position information, where the first area is located in front of the first electronic device and is one or more of the following: an area corresponding to the display screen size of the first electronic device, an area where the user is located, and an area indicated by the user.
  • the direction of the first area is determined according to a preset direction of the first electronic device, or is determined in response to a user operation.
  • an electronic device which has the function of implementing the sound field calibration method described in the first aspect and any possible implementation thereof; or, the electronic device has the function of implementing the sound field calibration method described in the third aspect and any possible implementation thereof.
  • This function can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • a computer-readable storage medium stores a computer program (also referred to as an instruction or code), and when the computer program is executed by an electronic device, the electronic device executes the method of the first aspect or any one of the implementations of the first aspect; or, the electronic device executes the method of the third aspect or any one of the implementations of the third aspect.
  • a computer program product is provided.
  • when the computer program product runs on an electronic device, the electronic device executes the method of the first aspect or any one of the embodiments of the first aspect; or, the electronic device executes the method of the third aspect or any one of the embodiments of the third aspect.
  • a circuit system comprising a processing circuit, the processing circuit being configured to execute the method of the first aspect or any one of the embodiments of the first aspect; or, the processing circuit being configured to execute the method of the third aspect or any one of the embodiments of the third aspect.
  • a chip system comprising at least one processor and at least one interface circuit, wherein the at least one interface circuit is used to perform transceiver functions and send instructions to the at least one processor, and when the at least one processor executes the instructions, the at least one processor executes the method of the first aspect or any one of the embodiments of the first aspect; or, the at least one processor executes the method of the third aspect or any one of the embodiments of the third aspect.
  • FIG1 is a schematic diagram of a communication system to which a sound field calibration method is applied according to an embodiment of the present application
  • FIG2A is a schematic diagram of the hardware structure of a first electronic device provided in an embodiment of the present application.
  • FIG2B is a schematic diagram of speaker positions of a first electronic device provided in an embodiment of the present application.
  • FIG3 is a schematic diagram of the “emperor’s seat” in the sound field provided in an embodiment of the present application.
  • FIG4A is a schematic diagram of a rectangular standard sound field provided in an embodiment of the present application.
  • FIG4B is a schematic diagram of a circular standard sound field provided in an embodiment of the present application.
  • FIG5 is a first schematic diagram of an interface provided in an embodiment of the present application.
  • FIG6A is a schematic diagram of a home theater sound field calibration scenario 1 provided in an embodiment of the present application.
  • FIG6B is a schematic diagram of an angle confirmation scenario during a home theater sound field calibration process provided by an embodiment of the present application.
  • FIG6C is a schematic diagram of a distance confirmation scenario in a home theater sound field calibration process provided by an embodiment of the present application.
  • FIG7A is a second schematic diagram of a home theater sound field calibration scenario provided by an embodiment of the present application.
  • FIG. 7B is a schematic diagram of a flow chart of confirming the position relationship between speakers during the home theater sound field calibration process provided by an embodiment of the present application;
  • FIG7C is a schematic diagram of a sound box sound emission and sound collection scene during a home theater sound field calibration process provided by an embodiment of the present application;
  • FIG8 is a third schematic diagram of a home theater sound field calibration scenario provided by an embodiment of the present application.
  • FIG9 is a fourth schematic diagram of a home theater sound field calibration scenario provided by an embodiment of the present application.
  • FIG10 is a fifth schematic diagram of a home theater sound field calibration scenario provided by an embodiment of the present application.
  • FIG11 is a sixth schematic diagram of a home theater sound field calibration scenario provided by an embodiment of the present application.
  • FIG12 is a seventh schematic diagram of a home theater sound field calibration scenario provided by an embodiment of the present application.
  • FIG13 is a second schematic diagram of an interface provided in an embodiment of the present application.
  • FIG14 is a schematic diagram of a sound object provided in an embodiment of the present application.
  • FIG15 is a schematic diagram of a home theater sound field calibration scenario eight provided in an embodiment of the present application.
  • FIG16 is a third schematic diagram of an interface provided in an embodiment of the present application.
  • FIG17 is a fourth schematic diagram of an interface provided in an embodiment of the present application.
  • FIG18 is a first schematic diagram of sound propagation provided in an embodiment of the present application.
  • FIG19 is a second schematic diagram of sound propagation provided in an embodiment of the present application.
  • FIG20 is a schematic diagram of a flow chart of a sound field calibration method provided in an embodiment of the present application.
  • FIG. 21 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • references to "one embodiment” or “some embodiments” etc. described in this specification mean that one or more embodiments of the present application include specific features, structures or characteristics described in conjunction with the embodiment. Therefore, the statements “in one embodiment”, “in some embodiments”, “in some other embodiments”, “in some other embodiments”, etc. that appear in different places in this specification do not necessarily refer to the same embodiment, but mean “one or more but not all embodiments", unless otherwise specifically emphasized in other ways.
  • the terms “including”, “comprising”, “having” and their variations all mean “including but not limited to”, unless otherwise specifically emphasized in other ways.
  • "connection" includes direct connection and indirect connection, unless otherwise specified. "First" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • the words “exemplarily” or “for example” are used to indicate examples, illustrations or explanations. Any embodiment or design described as “exemplarily” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplarily” or “for example” is intended to present related concepts in a specific way.
  • Fig. 1 is a schematic diagram of a communication system for applying a sound field calibration method provided in an embodiment of the present application. As shown in Fig. 1 , a first electronic device 100 and a second electronic device 200 establish a communication connection.
  • a wireless communication connection is established between the first electronic device 100 and the second electronic device 200.
  • the first electronic device 100 can send the sound to be played on the first electronic device 100 to the second electronic device 200 for playback through the wireless communication connection with the second electronic device 200.
  • the sound to be played can be an audio file.
  • the first electronic device 100 and the second electronic device 200 cooperate to play the audio file to provide the user with a home theater audio and video effect.
  • the first electronic device 100 may include, but is not limited to, large-screen display devices (such as smart screens, large-screen devices, etc.), laptop computers (Laptop), smart phones, tablet computers, projection devices, personal digital assistants (PDA), artificial intelligence (AI) devices, wearable devices (such as smart watches, etc.), and other devices.
  • the operating system installed on the first electronic device 100 includes, but is not limited to, or other operating systems.
  • the first electronic device 100 may not be installed with an operating system.
  • the first electronic device 100 may be a fixed device or a portable device. This application does not limit the specific type of the first electronic device 100 or the installed operating system.
  • the second electronic device 200 may include but is not limited to an electronic device with a sound playback function such as a speaker or a wireless speaker.
  • the second electronic device 200 may be installed with an operating system.
  • the operating system installed on the second electronic device 200 may include, but is not limited to, or other operating systems.
  • the second electronic device 200 may not be installed with an operating system. This application does not limit the specific type of the second electronic device 200, whether an operating system is installed, and the type of operating system if an operating system is installed.
  • the first electronic device 100 can establish a wireless communication connection with the second electronic device 200 through wireless communication technology.
  • the wireless communication technology includes but is not limited to at least one of the following: Bluetooth (BT) (e.g., traditional Bluetooth or Bluetooth low energy (BLE)), wireless local area networks (WLAN) (e.g., wireless fidelity (Wi-Fi) networks), near field communication (NFC), ZigBee (Zigbee), frequency modulation (FM), etc.
  • both the first electronic device 100 and the second electronic device 200 support a proximity discovery function.
  • the first electronic device 100 and the second electronic device 200 can discover each other and then establish a wireless communication connection such as a Bluetooth connection, a Wi-Fi peer to peer (P2P) connection, etc.
  • the first electronic device 100 and the second electronic device 200 establish a wireless communication connection via a local area network.
  • the first electronic device 100 and the second electronic device 200 are connected to the same router.
  • the sound field calibration of the home theater is completed through the sound field calibration method described in the following embodiments.
  • the above communication system may not include the first electronic device 100, and the sound field calibration of the communication system composed of multiple second electronic devices 200 is completed through the sound field calibration methods described in the following embodiments.
  • FIG. 2A shows a schematic structural diagram of the first electronic device 100 .
  • the first electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a power management module 140, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, a button 190, an indicator 191, a camera 192, and a display screen 193, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the first electronic device 100.
  • the first electronic device 100 may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • Different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signal to complete the control of instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 may be a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple groups of I2C buses.
  • the processor 110 may be coupled to the touch sensor, the charger, the flash, the camera 192, etc. through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor through the I2C interface, so that the processor 110 communicates with the touch sensor through the I2C bus interface to realize the touch function of the first electronic device 100.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 193 and the camera 192.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the processor 110 and the camera 192 communicate via the CSI interface to implement the shooting function of the first electronic device 100.
  • the processor 110 and the display screen 193 communicate via the DSI interface to implement the display function of the first electronic device 100.
  • the USB interface 130 is an interface that complies with USB standard specifications, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 may be used to transmit data between the first electronic device 100 and a peripheral device, and may also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is only a schematic illustration and does not constitute a structural limitation on the first electronic device 100.
  • the first electronic device 100 may also adopt an interface connection method different from those in the above embodiments, or a combination of multiple interface connection methods.
  • the power management module 140 is used to supply power to the processor 110 and other modules included in the first electronic device 100. In some embodiments, the power management module 140 can be used to receive power supply input to support the operation of the first electronic device 100.
  • the wireless communication function of the first electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the first electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of the antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communications including 2G/3G/4G/5G applied to the first electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least some of the functional modules of the mobile communication module 150 can be set in the processor 110.
  • at least some of the functional modules of the mobile communication module 150 can be set in the same device as at least some of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor then outputs a sound signal through an audio device, or displays an image or video through the display screen 193.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions for application on the first electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication technology (NFC), infrared technology (IR), etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signal and performs filtering, and sends the processed signal to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, modulate the signal, amplify the signal, and convert it into an electromagnetic wave for radiation via the antenna 2.
  • the antenna 1 of the first electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the first electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the first electronic device 100 implements a display function through a GPU, a display screen 193, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 193 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 193 is used to display images, videos, etc.
  • the display screen 193 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
  • the first electronic device 100 may include 1 or N display screens 193, where N is a positive integer greater than 1.
  • the camera 192 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the first electronic device 100 may include 1 or N cameras 192, where N is a positive integer greater than 1.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the first electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos are stored in the external memory card.
  • the internal memory 121 can be used to store computer executable program codes, which include instructions.
  • the internal memory 121 can include a program storage area and a data storage area.
  • the program storage area can store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the data storage area can store data created during the use of the first electronic device 100 (such as audio data, a phone book, etc.), etc.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the first electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be arranged in the processor 110, or some functional modules of the audio module 170 can be arranged in the processor 110.
  • the first electronic device 100 can implement audio functions through the audio module 170 and the application processor, etc. For example, music playing, recording, etc.
  • the audio module 170 can include, for example, a speaker, a receiver, a microphone, etc.
  • the speaker, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the first electronic device 100 can send an ultrasonic signal or play audio through the speaker.
  • the speaker can be a built-in component of the first electronic device 100 or an external accessory of the first electronic device 100.
  • the first electronic device 100 may include one or more speakers, wherein each speaker or a plurality of speakers working together may achieve calibration of a sound field, etc.
  • FIG. 2B shows an exemplary layout of multiple speakers on the first electronic device 100.
  • the front of the first electronic device 100 is the plane where the display screen 193 is located
  • the speaker 21 is located at the left of the top of the first electronic device 100, the top usually being on the side where the display screen is located
  • the speaker 22 is located at the right of the top of the first electronic device 100.
  • the speaker 21 and the speaker 22 can also be left-right symmetrical relative to the central axis of the display screen 193 of the first electronic device 100.
  • the first electronic device 100 sends ultrasonic signals through the speaker 21 and the speaker 22, respectively, and the corresponding second electronic device 200 can receive the ultrasonic signals.
  • the second electronic device 200 can perform positioning calculations based on the received ultrasonic signals to determine the distance and angle relationship between the first electronic device 100 and the second electronic device 200.
  • the second electronic device 200 can also send the time information of receiving the ultrasonic signal to the first electronic device 100, and the first electronic device 100 can perform positioning calculations to determine the distance and angle relationship between the first electronic device 100 and the second electronic device 200.
  • the first electronic device 100 may further include a greater number of speakers.
  • the embodiment of the present application does not specifically limit the number of speakers.
  • the microphone, also called a "mic", is used to convert sound signals into analog audio electrical signals.
  • the first electronic device 100 can collect surrounding sound signals through the microphone.
  • the microphone can be a built-in component of the first electronic device 100 or an external accessory of the first electronic device 100.
  • the first electronic device 100 may include one or more microphones, wherein each microphone or multiple microphones working together can collect sound signals from all directions and convert the collected sound signals into analog audio electrical signals, and can also identify the source of sound, reduce noise, or perform directional recording, etc.
  • the first electronic device 100 collects user voice signals through a microphone to locate the user's position.
  • the sensor module 180 may include a pressure sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like.
  • the pressure sensor is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor can be arranged on the display screen 193.
  • pressure sensors come in many types, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, etc.
  • a capacitive pressure sensor may include at least two parallel plates made of conductive material.
  • the first electronic device 100 determines the intensity of the pressure based on the change in capacitance.
  • the first electronic device 100 detects the intensity of the touch operation based on the pressure sensor.
  • the first electronic device 100 can also calculate the position of the touch based on the detection signal of the pressure sensor.
  • a touch sensor is also called a "touch control device”.
  • the touch sensor can be arranged on the display screen 193, and the touch sensor and the display screen 193 form a touch screen, also called a "touch control screen”.
  • the touch sensor is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 193.
  • the touch sensor can also be arranged on the surface of the first electronic device 100, which is different from the position of the display screen 193.
  • the first electronic device 100 displays a sound field calibration interface through the display screen 193, and automatically completes the sound field calibration after detecting through the touch sensor that the user indicates an operation of sound field calibration on the sound field calibration interface.
  • the key 190 includes a power key, a volume key, etc.
  • the key 190 may be a mechanical key or a touch key.
  • the first electronic device 100 may receive key input and generate key signal input related to user settings and function control of the first electronic device 100.
  • Indicator 191 may be an indicator light, which may be used to indicate a power state, or may be used to indicate a message, a notification, and the like.
  • the second electronic device 200 may have the same or similar hardware structure as the first electronic device 100.
  • the second electronic device 200 may also include more or fewer components than the structure shown in FIG. 2A, or combine certain components, or split certain components, or arrange components differently.
  • the embodiment of the present application does not specifically limit the structure of the second electronic device 200.
  • the following embodiments take the first electronic device 100 as a smart screen and the second electronic device 200 as a speaker (such as including speaker A, speaker B, speaker C, and speaker D) as an example to introduce the sound field calibration method provided in the embodiments of the present application in detail.
  • a "emperor seat” sound field calibration is performed for the user position to provide the user with a better listening experience.
  • the "emperor seat” refers to a position where the sound loudness, stereo surround effect, etc. are balanced in terms of sound angle.
  • the sound field calibration can be used to adjust the sound parameters of the speaker to meet the needs of the "emperor seat”.
  • the best viewing or listening position can be determined first.
  • the “Emperor’s Seat” is set to a position directly in front of the smart screen, 3 meters away from the smart screen, etc.
  • the sound field is calibrated with the set “Emperor’s Seat” as the center position, and the center of the sound field is calibrated to the “Emperor’s Seat”, so that the best viewing and listening position for the user can be gathered at the “Emperor’s Seat”.
  • the standard sound field generally includes a rectangular standard sound field and a circular standard sound field.
  • the positions of multiple sound-emitting devices can be connected to form a rectangle.
  • the positions of multiple sound-emitting devices can be connected to form a circle.
  • users can arrange the smart screen and speakers included in the theater at home according to the corresponding positions of the rectangular standard sound field and the circular standard sound field to ensure the sound field calibration effect.
  • the number of sound-generating devices included in the sound field may be more or less than the number of sound-generating devices included in the standard sound field shown in FIG. 4A or FIG. 4B .
  • the smart screen can respond to user operations and start sound field calibration for the home theater.
  • the smart screen can first display prompt information based on the number of speakers, home environment, etc. to prompt the user to place the speakers. This makes the placement of the speakers and the smart screen correspond to the placement requirements of the sound devices in the standard sound field to ensure that the sound field is calibrated correctly.
  • the user operation includes, for example, an operation on the sound field calibration interface, a voice command, etc.
  • when the smart screen displays the setting menu 51 and detects that the user clicks the sound setting control 511 displayed on the setting menu 51, the sound field calibration interface 502 shown in (b) of FIG. 5 may be displayed.
  • the smart screen displays a prompt box 52 to prompt the user whether to confirm to start the sound field calibration. If it is detected that the user clicks the start control 521, it can be determined that the current user needs to perform sound field calibration on the home theater.
  • the smart screen can automatically generate corresponding speaker placement suggestions based on the number of speakers that have established communication connections, the home environment, etc., so that the placement of the speakers and the smart screen corresponds to the position placement requirements of the sound-emitting devices in the standard sound field, thereby ensuring the subsequent sound field calibration effect.
  • the smart screen displays prompt information 53 to prompt the user to place the speaker.
  • the smart screen detects that the user clicks the confirmation control 531 and determines that the speaker placement has been completed.
  • the user places speaker A, speaker B, speaker C, and speaker D according to prompt information 53 shown in interface 503.
  • the smart screen can display a prompt message to prompt the user to adjust the speaker placement to ensure the sound field calibration effect.
  • the smart screen displays the suggestion for placing four speakers separately around the room.
  • speakers A, B, C, and D can be placed according to the positions in the scene shown in Figure 6A.
  • the smart screen determines that speakers A and C are located on the left side of the smart screen and placed together; speakers B and D are located on the right side of the smart screen and placed together. That is, no speakers are placed behind the sofa. If the smart screen determines that the current speaker placement will result in a poor sound field calibration effect, the smart screen can display a prompt message to prompt the user to rearrange the speakers.
  • the smart screen can automatically calibrate the sound field according to the positional relationship between the smart screen and multiple speakers.
  • the smart screen can send ultrasonic signals in time division through the left and right speakers as shown in Figure 2B.
  • the speaker is equipped with a microphone array, and the speaker can determine the distance and angle relationship between the speaker and the smart screen through a positioning algorithm based on the time difference between the two ultrasonic signals received by the microphone array.
  • the speaker can send the determined distance and angle relationship to the smart screen.
  • the smart screen can determine the geometric relationship of the distance and angle between each speaker and the smart screen, and then calibrate the "emperor's seat" to the default position in front of the smart screen.
  • the default position should be in front of the smart screen, assuming that the user is watching the movie right in front of the smart screen.
  • the default position can be determined according to the display screen size of the smart screen, and the default position can be determined as the best viewing and listening position.
  • the default position is, for example, 3 meters to 4 meters in front of the smart screen.
  • the default position is, for example, 5 meters to 6 meters in front of the smart screen.
  • the smart screen faces the sofa, and four speakers are placed in the direction facing the smart screen.
  • the smart screen has established communication connections with all four speakers.
  • the definitions of the front and back orientations are preset in the smart screen. For example, if the smart screen is configured with two left and right speakers as shown in FIG. 2B, the smart screen can distinguish the left and right positions of the two speakers.
  • the smart screen sends a start positioning command to the speaker to instruct the speaker to start positioning.
  • the left and right speakers of the smart screen send ultrasonic signals in different time periods. According to the left and right positions of the speakers, the smart screen can determine that the speaker is on the left or right side of the smart screen.
  • the speaker can receive the ultrasonic signal sent by the smart screen, and determine the positional relationship between the speaker and the smart screen through the positioning algorithm.
  • the start positioning command sent by the smart screen to the speaker may carry information about the spacing between the smart screen's two speakers, such as the distance D; alternatively, speaker A may obtain the distance D through other means.
  • the two speakers of the smart screen send ultrasonic signals in time division, with a time interval of T.
  • speaker A can obtain the time interval T.
  • the time when speaker A receives the two ultrasonic signals is tR1 and tL1 respectively.
  • the time when speaker A receives the ultrasonic signal sent by speaker SpkR is tR1
  • the time when speaker A receives the ultrasonic signal sent by speaker SpkL is tL1.
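  • as an illustrative sketch (not part of the application itself), the angle of speaker A relative to the smart screen can be derived from these quantities: because tR1 and tL1 are measured on speaker A's local clock, subtracting the known emission interval T from their difference leaves only the acoustic path difference, which a far-field approximation maps to a bearing through the speaker spacing D. The sketch assumes speaker SpkR emits first; all names and values are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound at room temperature

def bearing_from_tdoa(t_r1: float, t_l1: float, interval_t: float,
                      spacing_d: float) -> float:
    """Azimuth of speaker A relative to the smart screen's broadside.

    SpkR emits first and SpkL emits interval_t seconds later, so the
    unknown emission instant cancels in the local-time difference.
    Far-field approximation: path difference ~= D * sin(theta).
    """
    path_diff = SPEED_OF_SOUND * (t_l1 - t_r1 - interval_t)
    sin_theta = np.clip(path_diff / spacing_d, -1.0, 1.0)  # numerical safety
    return float(np.degrees(np.arcsin(sin_theta)))

# Illustrative values: 1.2 m between SpkL and SpkR, 0.5 s emission interval.
print(bearing_from_tdoa(t_r1=1.0029, t_l1=1.5046, interval_t=0.5, spacing_d=1.2))
```

  • this one-way exchange yields the angle only; the absolute distance can come from a round-trip exchange such as the one described next.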
  • one of the speakers of the smart screen sends an ultrasonic signal to speaker A.
  • speaker A replies to the smart screen with an ultrasonic signal.
  • the time when speaker A receives the ultrasonic signal is T1
  • the time when it replies to the ultrasonic signal is T2
  • the time when the smart screen sends the ultrasonic signal is T3
  • the time when the reply ultrasonic signal is received is T4.
  • the smart screen or speaker A can determine the distance between speaker A and the smart screen based on the four time information of T1, T2, T3, and T4.
  • the determined distance information can be sent to the smart screen.
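  • a minimal sketch of this round-trip ranging, assuming T1 and T2 are read from speaker A's clock and T3 and T4 from the smart screen's clock; since only local time intervals enter the formula, the two clocks need not be synchronized. The timestamps below are illustrative.

```python
SPEED_OF_SOUND = 343.0  # m/s

def two_way_range(t1: float, t2: float, t3: float, t4: float) -> float:
    """Distance between speaker A and the smart screen from four timestamps.

    t3 -> t4 is the screen's local round trip; t1 -> t2 is speaker A's
    local reply delay; the difference, halved, is the one-way flight time.
    """
    time_of_flight = ((t4 - t3) - (t2 - t1)) / 2.0
    return SPEED_OF_SOUND * time_of_flight

# Illustrative: roughly 3 m separation with a 10 ms reply delay at speaker A.
print(two_way_range(t1=0.100, t2=0.110, t3=0.0913, t4=0.1188))
```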
  • the speaker can determine the angle information and distance information between the speaker and the smart screen through one or more interactions with the smart screen.
  • the speaker when receiving ultrasonic signals, can collect multiple ultrasonic signals through the microphone array configured therein.
  • the speaker can determine the average value of the angle and distance corresponding to each microphone through a preset algorithm, thereby reducing the impact of signal jitter.
  • the smart screen will calibrate the "emperor's seat” to the default position in front of the smart screen (such as the position of the sofa 3 meters away from the front of the smart screen) according to the angle information and distance information fed back by the speaker. In this way, users can get better audio-visual effects when using the home theater on the sofa.
  • the smart screen can automatically complete the sound field calibration by sending ultrasonic signals, effectively reducing the difficulty of user operation.
  • the smart screen may also determine the position information between the smart screen and the speaker, wherein the position information includes angle information and distance information.
  • the two speakers configured in the smart screen send ultrasonic signals respectively.
  • after speaker A receives the ultrasonic signal, it transmits the received ultrasonic signal back to the smart screen through the communication connection (such as a Wi-Fi connection) between the speaker and the smart screen.
  • the angle information between speaker A and the smart screen can be determined through a preset algorithm.
  • the smart screen sends a receiving instruction to speaker A through the communication connection with speaker A (such as Wi-Fi connection, Bluetooth connection, etc.), which is used to instruct speaker A to start receiving sound, and the smart screen also starts receiving sound.
  • the smart screen sends an ultrasonic signal through any one of the two speakers configured with it.
  • speaker A sends an ultrasonic signal to the smart screen.
  • the smart screen and speaker A record the ultrasonic signals sent by themselves and the other end, and speaker A transmits the recorded ultrasonic signals back to the smart screen through the communication connection with the smart screen.
  • the smart screen determines the position of speaker A and the distance information between speaker A and the smart screen based on the ultrasonic signals recorded by itself and the ultrasonic signals recorded by speaker A.
  • the speaker can determine the angle information and distance information between the speaker and the smart screen through one or more interactions with the smart screen.
  • the speaker and smart screen can collect multiple ultrasonic signals through the microphone array configured therein.
  • the smart screen can determine the average value of the angle and distance corresponding to each microphone through a preset algorithm, thereby reducing the impact of signal jitter.
  • the smart screen can calibrate the sound field to the default position based on the determined angle information and distance information.
  • the sound field calibration can also be completed by the speaker independently.
  • the user configures multiple speakers in the bedroom, and the multiple speakers are connected to form a communication system, and the communication system does not include a smart screen. Then, the speaker can also automatically complete the sound field calibration of the communication system through the above-mentioned ultrasonic signal positioning method.
  • speakers A, B, C, and D form a communication system.
  • one of the speakers may be determined as a master speaker and the other speakers may be slave speakers.
  • the following uses speaker A as the master speaker as an example to illustrate the sound field calibration process.
  • the main speaker also needs to determine the directional order of each slave speaker from the perspective of the main speaker.
  • the master speaker can determine the positional relationship between each slave speaker and the master speaker from the perspective of the master speaker according to the distance between each speaker and the directional order of each slave speaker, and then perform sound field calibration based on the positional relationship.
  • speaker A is the main speaker, and each speaker in the communication system establishes a communication connection (such as a Wi-Fi connection, etc.). Afterwards, speaker A sends an instruction message to each speaker based on the communication connection with each speaker, and the instruction message is used to instruct the speaker to start receiving sound, and to send an ultrasonic signal after waiting for a period of time (i.e., to notify the sound order).
  • speaker A can customize the waiting time for each speaker to wait for sending an ultrasonic signal, and the waiting time of each speaker is different, so that each subsequent speaker can record the ultrasonic signal in sequence.
  • the sound order of each speaker is determined by speaker A. For example, as shown in FIG7B , the sound order of each speaker is speaker A, speaker B, speaker C, and speaker D.
  • speaker A starts to send ultrasonic signals
  • speakers B, C, and D also start to send ultrasonic signals after waiting for a period of time according to the received instruction information.
  • each speaker records the ultrasonic signals sent by itself and other speakers, and can determine the arrival time of the ultrasonic signals sent by each speaker, that is, as shown in Figure 7B, each speaker calculates the intermediate result.
  • speakers B, C, and D can transmit the recorded time information (that is, the determined intermediate result) back to speaker A, and speaker A can determine the distance between every two speakers in the communication system according to the time information recorded by itself and the time information recorded by other speakers.
  • speaker A can determine the azimuth of the slave speaker relative to speaker A by using the time difference between different microphones in the microphone array receiving the same ultrasonic signal sent by the slave speaker. Then, in the process of speaker A determining the distance between each speaker, speaker A can determine the directional order of each speaker from the perspective of speaker A based on the ultrasonic signals sent by each slave speaker.
  • the speaker A can determine the positional relationship between the speakers according to the distance between the speakers and the direction order of each speaker. Then, the speaker A can perform sound field calibration based on the positional relationship.
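  • one way to turn the pairwise distances into a speaker layout (a sketch under stated assumptions, not necessarily the application's algorithm) is classical multidimensional scaling: double-center the squared-distance matrix and take its two leading eigenvectors as 2-D coordinates. The result is unique only up to rotation and reflection, which is exactly the ambiguity the directional order observed by speaker A can resolve.

```python
import numpy as np

def layout_from_distances(dist: np.ndarray) -> np.ndarray:
    """2-D speaker coordinates (up to rotation/reflection) from a
    symmetric matrix of pairwise distances, via classical MDS."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)               # ascending eigenvalues
    top = np.argsort(vals)[::-1][:2]             # two largest components
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Illustrative check: speakers A-D at the corners of a 4 m x 3 m rectangle.
true_xy = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], dtype=float)
dist = np.linalg.norm(true_xy[:, None, :] - true_xy[None, :, :], axis=-1)
print(layout_from_distances(dist).round(2))
```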
  • alternatively, the user may input position information.
  • the speaker located in the left front of the user by default is taken as the main speaker.
  • one or more speakers announce "Please press the play button on the speaker device at your front left" to prompt the user to press the button on the speaker device at the front-left position.
  • the speaker whose button the user presses (such as speaker A) can then be determined to be the main speaker.
  • speaker A can use itself as the coordinate origin to locate the positions of other speakers, such as in front of the user's right, behind the left, behind the right, etc.
  • multiple speakers take turns to broadcast voice prompts to prompt the user to determine the location of the speaker.
  • the corresponding speaker is determined to be the main speaker.
  • multiple speakers take turns to announce in voice, "Where am I from? You can say left front, right front, left back, right back.”
  • the speaker recognizes that the user's voice reply is intended to be the left front through voice recognition, and then the speaker can be marked as the main speaker at the left front position, and other speakers are positioned with this speaker as the coordinate origin.
  • each speaker may make voice announcements in turn, and the position of each speaker may be determined based on the voice recognition capability of each speaker and the user's response.
  • the position of the main speaker is determined by gesture recognition. Multiple speakers take turns to broadcast voice prompts to prompt the user to determine the position of the speaker. When it is detected that the user is located at the left front speaker, the corresponding speaker is determined to be the main speaker.
  • multiple speakers take turns to announce in voice, "Please use gestures to confirm where I am in front of you. If you are in the left front, you can wave your arms from left to right; if you are in the right front, you can wave your arms from right to left; if you are in the left rear, you can wave your arms from top to bottom; if you are in the right rear, you can wave your arms from bottom to top.”
  • the speakers can determine the user's gesture through their gesture recognition capabilities. When the speaker detects the user's gesture of "waving arms from left to right”, it can recognize that the user's gesture setting intention is the left front position, and the speaker can be marked as the main speaker in the left front position, and other speakers are positioned with this speaker as the coordinate origin.
  • each speaker takes turns to broadcast voice prompts, and the position of each speaker is determined according to the user's gestures through the gesture recognition capability of each speaker.
  • voice broadcast content and gestures are all exemplary descriptions, and the embodiments of the present application do not specifically limit the voice broadcast content and gestures.
  • the user can record the orientation information of the main speaker (such as speaker A), such as the direction the front of speaker A faces; or, the user can record in speaker A the positional relationship of each speaker, for example, that speaker B is located to the right of speaker A.
  • the electronic devices in the communication system automatically determine the positional relationship between the devices, realize automatic sound field calibration, simplify user operations, reduce user operation difficulty, and improve user experience.
  • the microphone array configured in the electronic device may include multiple microphones arranged vertically. Based on the vertically arranged microphone array, the spatial height information of the electronic device placement position can be located. During the sound field calibration process, the electronic device performs sound field calibration based on the positional relationship between the electronic devices determined above and the spatial height information, which can improve the accuracy of the sound field calibration.
  • the smart screen can determine the spatial horizontal line.
  • the spatial horizontal line is, for example, one or more of the position of the lower edge of the smart screen, the position of the upper edge, the position of the speaker, the position of the ground, the position of the ceiling, etc.
  • the smart screen is configured with a sky sound speaker that emits sound toward the ceiling, speakers deployed in different directions of 360°, speakers deployed on the back of the smart screen, etc. The smart screen can determine the position of the ground, the position of the ceiling, etc. through the ultrasonic signals sent by these speakers.
  • the smart screen can send an indication message to the speaker to indicate the spatial horizontal line. Later, during the sound field calibration process, the speaker receives an ultrasonic signal sent by one of the speakers of the smart screen. Based on the time difference between each microphone in the vertically arranged microphone array receiving the ultrasonic signal and the distance between each microphone, the height of the speaker relative to the spatial horizontal line can be determined, and the determined spatial height information can be fed back to the smart screen. Then, the smart screen can perform sound field calibration based on the determined positional relationship between the smart screen and each speaker, the acquired spatial height information sent by each speaker, and its own spatial height information to achieve more accurate sound field calibration.
  • the main speaker can determine the spatial horizontal line.
  • the spatial horizontal line is, for example, one or more of the position of the lower edge of the main speaker, the position of the upper edge, the position of the speaker, the position of the ground, the position of the ceiling, etc.
  • the main speaker can send an indication message to the slave speaker to indicate the spatial horizontal line.
  • each speaker can determine the height of the speaker relative to the spatial horizontal line based on the time difference of each microphone in the vertically arranged microphone array receiving the ultrasonic signal, and the distance between each microphone.
  • the main speaker can perform sound field calibration based on the determined spacing, directional order, and spatial height information of each speaker to achieve more accurate sound field calibration.
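  • a sketch of the spatial height estimate from a vertically arranged microphone pair, under a far-field assumption: the arrival-time difference between the upper and lower microphone gives the elevation angle of the ultrasonic source, and the range measured earlier converts it into a height relative to the spatial horizontal line. Spacing, timing, and range values are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def elevation_deg(dt: float, mic_spacing: float) -> float:
    """Elevation angle from the arrival-time difference dt (seconds)
    between the upper and lower microphone of a vertical pair."""
    sin_el = np.clip(SPEED_OF_SOUND * dt / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_el)))

def relative_height(dt: float, mic_spacing: float, range_m: float) -> float:
    """Approximate source height above the array's horizontal line,
    using a range obtained earlier by ultrasonic distance measurement."""
    return range_m * float(np.sin(np.radians(elevation_deg(dt, mic_spacing))))

# Illustrative: 5 cm vertical spacing, the upper mic hears the pulse 36 us
# earlier, and ranging put the source 3 m away.
print(relative_height(dt=36e-6, mic_spacing=0.05, range_m=3.0))
```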
  • For example, by configuring one or more of ultra-wideband (UWB) sensors, millimeter wave sensors, multiple antennas for Wi-Fi protocol positioning, multiple antennas for Bluetooth protocol positioning, etc. on electronic devices, the positional relationship between devices can also be determined automatically.
  • the following text uses the ultrasonic positioning technology of electronic devices as an example to illustrate the positioning process of multiple electronic devices.
  • the process of using other methods to achieve automatic positioning of electronic devices can refer to the process of achieving automatic positioning of electronic devices through ultrasonic positioning technology, which will not be repeated below.
  • the sound field calibration algorithm used by the smart screen can refer to the existing sound field calibration algorithm, and the embodiments of the present application do not make specific limitations or descriptions on this.
  • the sound field calibration process can refer to the sound field calibration process of the communication system including a smart screen, and this embodiment of the application will not be repeated.
  • the smart screen can also locate the user's position and calibrate the sound field to the user's position, thereby providing the user with a better listening experience.
  • the sound field calibration can be performed after the speaker or smart screen is awakened.
  • the user can use a preset wake-up command, such as "Xiaoyi, Xiaoyi", to wake up the speaker and the smart screen.
  • a voice command for sound field calibration is preset, such as "sound field calibration”. Then, when the smart screen and speaker in the communication system detect the preset voice command, they can determine that the user instructs to perform sound field calibration. Then, the smart screen and speaker can determine the time information when the voice command is detected for subsequent positioning of the user.
  • the smart screen and four speakers all detect the voice command issued by the user, such as the time when speaker A detects the voice command is t1, the time when the smart screen detects the voice command is t2, the time when speaker B detects the voice command is t3, the time when speaker D detects the voice command is t4, and the time when speaker C detects the voice command is t5. Afterwards, each speaker can send the time information of receiving the voice command to the smart screen.
  • the smart screen and multiple speakers in the communication system have completed clock synchronization through a high-precision clock synchronization algorithm, and the time error is within a preset range (such as on the order of 1 μs). The smart screen can then determine the arrival-time differences based on the voice-command reception times fed back by the multiple speakers. After that, the smart screen can determine the distance differences between each device and the user based on the speed of sound, and then determine the distances between the user and the smart screen and between the user and each speaker, such as the distances d1-d5 respectively.
  • the smart screen can determine the positional relationship between the user, the smart screen, and the speaker based on the positional relationship between the smart screen and the speaker determined by the ultrasonic positioning technology (such as the distance and angle relationship). Then, the smart screen can adjust the sound effect of the speaker based on the positional relationship between the user, the smart screen, and the speaker, and calibrate the "Emperor's Seat” sound field to the user's position.
  • the sound field is calibrated to the user's position through the time information of the user's voice command detected by each speaker.
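  • with clock-synchronized arrival times and device positions already known from ultrasonic positioning, locating the user is a TDOA multilateration problem. The sketch below is illustrative (a generic least-squares fit, not an algorithm specified by the application); it treats the unknown utterance time t0 as a third unknown alongside the user's coordinates.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s

def locate_user(device_xy: np.ndarray, arrival_t: np.ndarray) -> np.ndarray:
    """Solve for (x, y, t0): user position and unknown utterance time,
    given clock-synchronized voice-command arrival times."""
    def residuals(params):
        x, y, t0 = params
        flight = np.linalg.norm(device_xy - np.array([x, y]), axis=1) / SPEED_OF_SOUND
        return flight + t0 - arrival_t
    guess = [*device_xy.mean(axis=0), float(arrival_t.min())]
    return least_squares(residuals, guess).x

# Illustrative layout: smart screen at the origin, four speakers around a
# 4 m x 3 m room; the user speaks at (1.5, 1.0) at t0 = 0.
devices = np.array([[0.0, 0.0], [-2.0, 1.0], [2.0, 1.0], [-2.0, 3.0], [2.0, 3.0]])
user = np.array([1.5, 1.0])
times = np.linalg.norm(devices - user, axis=1) / SPEED_OF_SOUND
print(locate_user(devices, times).round(3))  # -> approximately [1.5, 1.0, 0.0]
```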
  • in the related art, by contrast, when the user's position changes, the user needs to hold a sound pickup device again to collect audio to re-calibrate the sound field.
  • the sound field calibration method provided in the embodiment of the present application can automatically calibrate the sound field to the user's position in response to the user's voice command when the user's position changes, thereby meeting the user's usage needs while reducing the difficulty of user operation.
  • the user issues a preset voice command, such as "Xiaoyi, Xiaoyi".
  • the smart screen and speaker in the communication system can respond to the detected preset voice command to determine the user's location.
  • the smart screen and the four speakers can determine the user's relative position to themselves through the sound source localization algorithm. After that, the smart screen and the speakers can send ultrasonic signals to the user's position and use the ultrasonic signals to detect the distance between themselves and the user.
  • speaker A uses ultrasonic signals to detect that the distance between itself and the user is d1
  • the smart screen uses ultrasonic signals to detect that the distance between itself and the user is d2
  • speaker B uses ultrasonic signals to detect that the distance between itself and the user is d3
  • speaker D uses ultrasonic signals to detect that the distance between itself and the user is d4
  • speaker C uses ultrasonic signals to detect that the distance between itself and the user is d5.
  • the speaker can send the determined distance between the user and the speaker to the smart screen.
  • the smart screen can obtain the distance between each speaker device and the user, and can determine the positional relationship between the user, the smart screen, and the speaker based on the positional relationship between the smart screen and the speaker determined by the ultrasonic positioning technology (such as the distance and angle relationship).
  • the smart screen can adjust the sound effects of the speaker based on the positional relationship between the user, the smart screen, and the speaker, and calibrate the "Emperor's Seat” sound field to the user's position.
  • the user's position is determined by the user's voice command detected by each electronic device, and then the distance between each device and the user is determined, so that the sound field is calibrated to the user's position.
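  • when each device instead measures an absolute distance to the user (d1-d5 above), the position follows from plain trilateration. In the sketch below (illustrative coordinates), subtracting the first circle equation from the others turns the problem into a small linear least-squares system.

```python
import numpy as np

def trilaterate(device_xy: np.ndarray, dist: np.ndarray) -> np.ndarray:
    """User position from per-device distances: ||u - p_i||^2 = d_i^2,
    linearized by subtracting the first equation from the rest."""
    p1, d1 = device_xy[0], dist[0]
    a = 2.0 * (device_xy[1:] - p1)
    b = (d1 ** 2 - dist[1:] ** 2
         + np.sum(device_xy[1:] ** 2, axis=1) - np.sum(p1 ** 2))
    return np.linalg.lstsq(a, b, rcond=None)[0]

# Illustrative: same room layout as before, distances as each device
# would measure them with ultrasonic ranging.
devices = np.array([[0.0, 0.0], [-2.0, 1.0], [2.0, 1.0], [-2.0, 3.0], [2.0, 3.0]])
user = np.array([1.5, 1.0])
d = np.linalg.norm(devices - user, axis=1)
print(trilaterate(devices, d).round(3))  # -> approximately [1.5, 1.0]
```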
  • in the related art, by contrast, when the user's position changes, the user needs to hold a sound pickup device again to collect audio to re-calibrate the sound field.
  • the sound field calibration method provided in the embodiment of the present application can automatically calibrate the sound field to the user's position in response to the user's voice command when the user's position changes, thereby meeting the user's usage needs while reducing the difficulty of user operation.
  • the smart home system includes sensors (such as millimeter wave sensors) that can be used to locate the user's position. Then, after determining the user's position, these sensors can send the user's position to the smart screen. After that, the smart screen can calibrate the sound field based on the user's position and the determined positional relationship between the smart screen and the speaker (such as the distance and angle relationship), thereby calibrating the "emperor's seat" of the sound field to the user's position.
  • users may carry electronic devices such as mobile phones, and configure UWB sensors, millimeter wave sensors, etc. on the electronic devices to determine the positional relationship between the electronic devices and the smart screen and various speakers.
  • This positional relationship is the positional relationship between the user and the smart screen and various speakers.
  • the smart screen can obtain the positional relationship and perform sound field calibration in combination with the determined positional relationship between the smart screen and the speakers (such as the distance and angle relationship), thereby calibrating the "emperor's seat" of the sound field to the user's location.
  • users may carry wearable devices such as smart watches, smart glasses, smart headphones, etc.
  • Bluetooth, Wi-Fi, UWB sensors, millimeter wave sensors, and the like configured on the wearable device can determine the positional relationship between the wearable device and the smart screen and each speaker. This positional relationship is the positional relationship between the user and the smart screen and each speaker.
  • the smart screen can obtain this positional relationship and perform sound field calibration based on the determined positional relationship between the smart screen and the speaker (such as the distance and angle relationship), thereby calibrating the "emperor's seat” of the sound field to the user's location.
  • adaptive sound field calibration of user position changes can be achieved, thereby providing users with more flexible sound field calibration and improving user experience.
  • multiple users may use the home theater at the same time.
  • the listening experience of multiple users can be taken into account to achieve sound field calibration for multiple center positions (multiple "C positions").
  • the smart screen can determine the user positions of user C1 and user C2, and can determine the positional relationship between the smart screen, the speaker, user C1, and user C2 through one or more of the examples in the above-mentioned multiple embodiments.
  • the smart screen can adjust the playback parameters of multiple speakers and smart screens according to the determined positional relationship, so that both user C1 and user C2 can obtain a better listening experience.
  • the playback parameters include, for example, phase, frequency response, loudness, reverberation and other parameters.
  • the smart screen can adjust the playback parameters of multiple speakers and smart screens so that speakers A and C near user C1 provide a better listening experience for user C1, and speakers B and D near user C2 provide a better listening experience for user C2.
  • the smart screen can provide similar listening experiences for the two users.
  • the smart screen adjusts the loudness of the speakers so that speakers A and C provide audio of a preset loudness to user C1, and the preset loudness meets the audio loudness requirement of user C1. Then, due to the need to take into account the audio loudness requirement of user C2, audio with excessive loudness will not be provided to user C1.
  • the smart screen can reduce the mutual influence of the playback effects of different speakers by adjusting the phase and other methods.
  • the accuracy of the sound emission direction is improved in multi-user scenarios, and the impact of room reverberation on the listening of multiple users is reduced, thereby providing a better listening experience for multiple users and improving the experience of multiple users.
  • the sound waveforms of different speakers are controlled to overlap and cancel each other out, so that the sound target area and the silent area in the space can be divided.
  • the sound waveforms of multiple speakers in the sound target area overlap each other, and the sound waveforms of multiple speakers in the silent area cancel each other out. This achieves the goal of controlling the sound to be played in the sound target area, and no sound or a low sound in the silent area.
  • the smart screen adjusts the playback parameters of the smart screen and multiple speakers according to the determined positional relationship between one or more users, the smart screen, and multiple speakers, so that the living room is the sound target area and the area outside the living room is the silent area.
  • the smart screen adjusts the playback parameters of the device in conjunction with the master bedroom to make it a silent area. In this way, users can use the home theater in the living room without affecting the rest of the users in the master bedroom.
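  • the waveform-cancellation idea can be illustrated in its simplest form: one primary and one secondary speaker and a single quiet point. The secondary speaker plays an inverted copy of the primary signal, delayed and scaled so that both wavefronts arrive at the quiet point with equal magnitude and opposite sign. This free-field sketch (illustrative positions, 1/r spreading only) ignores the room reflections addressed by the acoustic parameter model described later.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def cancellation_params(primary_xy, secondary_xy, quiet_xy):
    """Delay and gain for a secondary speaker playing an inverted copy
    of the primary signal so the two waves cancel at quiet_xy."""
    d1 = float(np.linalg.norm(np.subtract(quiet_xy, primary_xy)))
    d2 = float(np.linalg.norm(np.subtract(quiet_xy, secondary_xy)))
    delay = (d1 - d2) / SPEED_OF_SOUND  # negative means: emit earlier
    gain = d2 / d1                      # match the 1/r amplitude at quiet_xy
    return delay, gain                  # secondary plays -gain * x(t - delay)

# Illustrative: quiet point 4 m from the primary and 1 m from the secondary.
print(cancellation_params((0.0, 0.0), (5.0, 0.0), (4.0, 0.0)))
```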
  • the smart screen can determine the positional relationship between the smart screen, the speaker, and the user through ultrasonic positioning technology, sound source positioning technology, or high-precision clock synchronization technology, or through the methods described in one or more of the above-mentioned exemplary embodiments. Afterwards, the smart screen can adjust the sound emission time of the smart screen and the speaker according to the positional relationship, so that the sounds emitted by the speakers of multiple devices reach the user's ears at the same time phase, thereby bringing a better listening experience to the user.
  • the smart screen determines that the distance between speaker A and the user is d1, the distance between the smart screen and the user is d2, the distance between speaker B and the user is d3, the distance between speaker D and the user is d4, and the distance between speaker C and the user is d5.
  • the smart screen can determine the propagation time for sound to travel from each device to the user's ear based on the determined distance and the speed of sound. For example, after the speaker of speaker A makes a sound, it takes t11 for the sound to reach the user's ear; after the speaker of the smart screen makes a sound, it takes t22; after the speaker of speaker B makes a sound, it takes t33; after the speaker of speaker D makes a sound, it takes t44; and after the speaker of speaker C makes a sound, it takes t55.
  • the smart screen can adjust the sound time of the speakers of speaker A, smart screen, speaker B, speaker D, and speaker C to t1-t5 respectively according to the sound propagation time of each device, so that the time phase of the sound reaching the user's ears is the same or approximately the same, thereby ensuring the user's listening experience.
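  • the emission-time adjustment reduces to a simple rule: each device delays its playback by the difference between the longest direct-path propagation time and its own, so that all direct sounds arrive in the same time phase. A minimal sketch with illustrative distances:

```python
SPEED_OF_SOUND = 343.0  # m/s

def playback_delays(distances):
    """Per-device playback delays (seconds) so that all direct sounds
    reach the user simultaneously: the nearest device waits longest."""
    arrivals = [d / SPEED_OF_SOUND for d in distances]
    latest = max(arrivals)
    return [latest - a for a in arrivals]

# Illustrative distances d1..d5 (meters) between each device and the user.
print([round(t * 1000, 2) for t in playback_delays([2.1, 3.0, 2.6, 4.2, 3.8])])
```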
  • the smart screen can adjust the sound time so that the direct sound and reflected sound of the sound device reach the user's ears at the same time, thereby providing the user with a better listening experience.
  • for the impact of the home environment on sound propagation, please refer to the relevant embodiments below; it will not be repeated here.
  • the home theater scene supports 3D audio playback, which can rearrange the trajectories of sound objects in the video content, thereby bringing a better sound experience to the user.
  • different sounds in the video content may correspond to different sound objects, and the propagation track of the sound in space may correspond to the sound track of the sound object.
  • different sound objects correspond to their own sound content and sound track, and the sound track is generated and changes over time.
  • the smart screen can determine the positional relationship between the smart screen, the speaker, and the user through ultrasonic positioning technology, sound source positioning technology, or high-precision clock synchronization technology, or through the methods described in one or more of the above examples. Then, the smart screen can match the sound track of the sound object in the video content to the spatial scene of the actual home theater, and calibrate the sound field to the user's location, so that the sound presentation effect of the home theater for the video content matches the sound track, thereby providing users with an immersive viewing experience.
  • the smart screen arranges and renders the sound tracks of multiple sound objects.
  • the smart screen calibrates the sound track to the user's listening position. Through the coordination of multiple speakers and the smart screen, the user can get the listening experience of hummingbirds flying around during the video playback.
  • the sound field calibration method provided in the embodiment of the present application can re-arrange and render the sound track of the sound object, and calibrate the sound field to the user's position, so that the user can obtain an immersive listening experience.
  • the smart screen may use the perspective of the target character selected by the user as the user's character perspective, and re-arrange and render the sound object tracks in the video content according to the determined character perspective.
  • the smart screen calibrates the sound field of the character's perspective to the user's listening position, so that the user can participate in the video content from the specific character's perspective and obtain the listening and viewing experience from the specific character's perspective.
  • when the smart screen displays video content, in response to the user selecting character A, the smart screen can use the perspective of character A as the user's character perspective.
  • the smart screen decodes the sound in the video content and can extract each sound object. Different sound objects have their own waveforms, such as the waveforms shown in (1), (2), (3), and (4) in FIG14 .
  • before the sound is output, the smart screen combines the positional relationship between the smart screen, the speaker, and the user, arranges and renders the sound tracks of each sound object according to the character perspective of character A, and arranges the sound tracks to the user's position.
  • the smart screen determines that in the scene shown in Figure 13, the current line "wait a minute" corresponds to the voice of character C, and in the video content, character C is located behind the character A selected by the user. Then, after the sound track is arranged and rendered, the smart screen can play the line through the cooperation of speakers C and D located behind the sofa, so that the user sitting on the sofa can get an immersive listening experience.
  • the smart screen in a multi-user scenario, can integrate the positional relationships of multiple users to provide an immersive listening experience for different users.
  • the smart screen can use the above example method to arrange and render the sound tracks of the sound objects corresponding to the sounds in the video content, so that multiple users can obtain an immersive listening experience from the perspective of the same character.
  • the smart screen can group the speakers in the home theater, and different groups of speakers serve different users.
  • user A corresponds to the first group of speakers
  • user B corresponds to the second group of speakers.
  • the smart screen can arrange and render the sound track of the sound object according to the character perspective selected by user A in the video content, and provide user A with the listening experience of the character perspective selected by user A through the first group of speakers.
  • the smart screen can arrange and render the sound track of the sound object according to the character perspective selected by user B in the video content, and provide user B with the listening experience of the character perspective selected by user B through the second group of speakers.
  • an immersive listening experience is provided for multiple users in a multi-user scenario.
  • the sound field calibration method calibrates the sound field to the user's location and provides the user with an immersive listening experience of the character perspective selected by the user, thereby improving the user's experience.
  • an application with a sound field calibration function can be installed in an electronic device (such as a mobile phone, a smart screen, etc.).
  • the user can directly adjust the "emperor's seat” of the sound field calibration in the home theater through the application to calibrate the sound field to the "emperor's seat” selected by the user to meet the user's personalized usage needs.
  • an application with a sound field calibration function is installed in the mobile phone.
  • after the smart screen determines the positional relationship between the smart screen and each speaker, it can generate map information and send the map information to the mobile phone.
  • the mobile phone can display an interface 1601 according to the received map information.
  • a home theater map is displayed on the interface 1601, which schematically shows the relative position relationship between the smart screen and each speaker, and displays an “emperor seat” icon 161.
  • the mobile phone can determine the “emperor seat” indicated by the user.
  • the mobile phone can send, to the smart screen, information about the position to which the user finally moves the "emperor seat" icon 161.
  • the smart screen can calibrate the sound field to the position corresponding to the information, so that the "emperor's seat” calibrated by the sound field meets the user's needs.
  • the mobile phone displays the "emperor seat” icon 161 at a random position in the home theater map, or displays the "emperor seat” icon 161 at a default display position.
  • the default display position is a fixed display position, a recommended "emperor seat", or an "emperor seat" determined during the last sound field calibration process.
  • the smart screen can perform sound field calibration, determine the best listening position, and display the "emperor seat" icon 161 at the best listening position.
  • the user holds a mobile phone or wears a wearable device, and the user's position can be determined by the positioning function of the mobile phone or wearable device.
  • the user's position is located through UWB sensors, millimeter wave sensors, or other high-precision positioning technologies indoors. After that, the mobile phone can guide the user to the determined best listening position.
  • a prompt message may be displayed to prompt the user to confirm whether to calibrate the sound field to the "emperor's seat” indicated by the current "emperor's seat” icon 161.
  • the mobile phone may send information about the position corresponding to the "emperor's seat” icon 161 to the smart screen.
  • the smart screen can also perform sound field calibration for multiple "emperor seats”.
  • an application with a sound field calibration function is installed in the mobile phone.
  • after determining the positional relationship between the smart screen and each speaker, the smart screen sends the generated map information to the mobile phone.
  • the mobile phone can first display one or a preset number of "Emperor's Seat” icons based on the map information. Afterwards, in response to the user's instruction to create a new "Emperor's Seat” icon, a corresponding number of "Emperor's Seat” icons can be created.
  • the mobile phone displays an "emperor seat” icon 171 and an “emperor seat” icon 172. Afterwards, in response to the user's operation on the two “emperor seat” icons, the mobile phone can determine the two "emperor seats” indicated by the user. Afterwards, the mobile phone can send the information of the two "emperor seats” indicated by the user to the smart screen. The smart screen can complete the sound field calibration of the two "emperor seats” based on the received information.
  • after the smart screen determines the positional relationship between the smart screen and each speaker, it can also directly display the corresponding map. Afterwards, in response to the user moving the "emperor seat" icon on the smart screen, the smart screen can also complete the calibration of the sound field according to the "emperor seat" indicated by the user.
  • the positional relationship between the smart screen and the speaker is automatically determined in a variety of ways, and the sound field calibration of the home theater is flexibly performed in response to user operations, thereby meeting the personalized needs of users, reducing the difficulty of user operation, and improving the user experience.
  • different materials of objects in the space have different abilities to reflect sound. Therefore, during the sound field calibration process, the smart screen needs to refer to the influence of the space environment to improve the accuracy of the sound field calibration.
  • when the incident sound shown in (1) in Figure 18 contacts the surface of an object, it produces reflected sound as shown in (2) in Figure 18, transmitted sound as shown in (3) in Figure 18, and absorbed sound as shown in (4) in Figure 18. Because the flatness of object surfaces differs, the same incident sound can correspond to reflected sounds in multiple different directions. Sound field calibration then needs to calibrate the incident sound, reflected sound, etc. from multiple directions to the same "emperor's seat". Therefore, the smart screen needs to pre-determine the acoustic parameter model corresponding to the current spatial environment, so that it can be used in the subsequent sound field calibration process to counteract the influence of the spatial environment on the sound field calibration.
  • the smart screen can use the acoustic parameter modeling as the input of the sound field calibration process corresponding to each of the above embodiments, thereby improving the accuracy of the sound field calibration process.
  • any electronic device in a home theater can be used as a transmitting device, and after the speaker of the transmitting device sends an ultrasonic signal, other devices as receiving devices can receive the direct sound corresponding to the ultrasonic signal.
  • the receiving device can receive the reverberation sound corresponding to the ultrasonic signal.
  • the receiving device can perform acoustic calculations based on the received ultrasonic signal (such as direct sound and reverberation sound) to determine the corresponding acoustic parameters, where the acoustic parameters include, for example, the decoration material of the home environment and the reflection coefficient, absorption coefficient and transmission coefficient of the home environment to sound.
  • the transmitting device can send ultrasonic signals at different angles, or use different electronic devices as transmitting devices to send ultrasonic signals to complete environmental detection of the current spatial environment.
  • the smart screen can obtain the acoustic parameters sent by each speaker to establish an acoustic parameter model.
  • the smart screen can adjust the playback frequency, response parameters, phase parameters, loudness parameters, etc. of the speakers of the smart screen and each speaker based on the acoustic parameter model to complete the sound field calibration.
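  • as a toy version of such acoustic analysis (an assumption-laden sketch, not the application's method), a reflection coefficient can be estimated from a recorded response to an ultrasonic pulse by comparing the direct-path peak with the first reflection peak and compensating the 1/r spreading loss implied by their arrival times.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def reflection_coefficient(ir: np.ndarray, fs: float) -> float:
    """Rough |R| of the strongest reflector from an impulse response:
    locate the direct peak, then the first reflection after it, and
    compensate the 1/r spreading implied by the two path lengths."""
    i_dir = int(np.argmax(np.abs(ir)))              # direct sound: strongest arrival
    rest = np.abs(ir).copy()
    rest[: i_dir + int(0.002 * fs)] = 0.0           # mask 2 ms around the direct peak
    i_ref = int(np.argmax(rest))                    # strongest later reflection
    r_dir = (i_dir / fs) * SPEED_OF_SOUND           # direct path length
    r_ref = (i_ref / fs) * SPEED_OF_SOUND           # reflected path length
    return float((np.abs(ir[i_ref]) * r_ref) / (np.abs(ir[i_dir]) * r_dir))

# Illustrative synthetic response at 48 kHz: direct path 2 m, reflected
# path 5 m off a surface with |R| = 0.6.
fs = 48_000
ir = np.zeros(fs // 10)
ir[int(2.0 / SPEED_OF_SOUND * fs)] = 1.0 / 2.0
ir[int(5.0 / SPEED_OF_SOUND * fs)] = 0.6 / 5.0
print(round(reflection_coefficient(ir, fs), 2))  # -> 0.6
```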
  • the acoustic parameters determined by the electronic device may also be used alone in the sound field calibration process.
  • the speaker sends ultrasonic signals to different angles and directions through the speaker, and carries angle and direction information in the ultrasonic signal. Afterwards, after receiving the ultrasonic signal, the microphones in other speakers can combine the angle and direction information carried therein to perform acoustic analysis and determine the corresponding acoustic parameters (such as reflection coefficient, absorption coefficient, transmission coefficient, etc.). After completing the ultrasonic environment detection of the whole house, the speaker will feed back the acoustic parameters corresponding to different angles and directions to the smart screen. The smart screen performs acoustic parameter modeling based on the acoustic parameters corresponding to the multiple angles and directions received.
  • multiple speakers can be configured in the smart screen, so the smart screen can send ultrasonic signals through speakers in different directions.
  • the smart screen is equipped with sky sound speakers that emit sound toward the ceiling, speakers deployed in different directions of 360°, speakers deployed on the back of the smart screen, and so on.
  • After receiving an ultrasonic signal, the microphone in a speaker performs acoustic analysis to determine the corresponding acoustic parameters (such as reflection coefficient, absorption coefficient, transmission coefficient, etc.). In this way, full and complete home environment detection is achieved.
  • the smart screen can perform acoustic parameter modeling based on the acoustic parameters fed back by different speakers and combined with the sound direction of the speaker that sends the ultrasonic signal.
  • the transmitting device may carry the sending angle and azimuth of the ultrasonic signal in the transmitted ultrasonic signal, and the receiving device calculates the acoustic parameters corresponding to that angle and azimuth and feeds them back.
  • the transmitting device sends ultrasonic signals to different angles and azimuths, and after receiving the acoustic parameters fed back by the receiving device, matches the angle and azimuth corresponding to the acoustic parameters. That is, the matching of acoustic parameters with angles and azimuths can be completed at the receiving end or at the transmitting end.
  • the electronic device (such as a smart screen) that is ultimately used to establish an acoustic parameter model can obtain the matching relationship and the corresponding acoustic parameters to facilitate the establishment of an acoustic parameter model.
  • a communication connection is established between the smart screen and multiple speakers in the home theater, thereby forming a networking relationship.
  • the establishment of an acoustic parameter model can be achieved.
  • For example, after speaker A in the communication system sends an ultrasonic signal, speaker B can receive it. Through the implementation methods of the above examples, speaker B can determine the positional relationship between speaker A and speaker B. Speaker B can then determine, through acoustic analysis, the acoustic parameters corresponding to the different angles and reflecting objects of the ultrasonic signal in the home environment.
  • speaker A selects other electronic devices in the communication system to send ultrasonic signals, and other electronic devices receive the ultrasonic signals, thereby being able to calculate multiple reflection paths and acoustic parameters corresponding to the reflecting objects, thereby achieving more comprehensive detection of the home environment and improving the accuracy of the acoustic parameter model.
  • the smart screen can be used as a scheduling device in the communication system to determine the sending device that sends the ultrasonic signal.
  • the speakers in the home theater are generally deployed on the ceiling and/or the ground. Then, in the process of establishing the acoustic parameter model, an ultrasonic signal can be sent by the speakers deployed on the ceiling, and the ultrasonic signal can be received by the speakers deployed on the ground. After that, the speakers that receive the ultrasonic signal can calculate the acoustic parameters corresponding to the objects between the ceiling and the ground through acoustic analysis, and feed the acoustic parameters back to the smart screen, which completes the establishment of the acoustic parameter model.
  • an ultrasonic signal can be sent by a speaker deployed on the ground, and the ultrasonic signal can be received by a speaker deployed on the ceiling.
  • the speaker that receives the ultrasonic signal can calculate the acoustic parameters corresponding to the object between the ground and the ceiling through acoustic analysis, and feed the acoustic parameters back to the smart screen, which completes the establishment of the acoustic parameter model.
  • the speakers deployed on the ground may be deployed on a ground stand or a TV cabinet, for example.
  • the speaker or smart screen can send ultrasonic signals through its loudspeakers in different directions in the space, such as up, down, front, back, left, and right — at least six directions. Afterwards, the speaker or smart screen can analyze the size of the current home environment space based on the received ultrasonic reflection signals. For example, the speaker or smart screen determines the size of the home environment space from the time difference between sending an ultrasonic signal and receiving its reflection, as sketched below. Afterwards, the smart screen can obtain the size of the home environment space.
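  • As a concrete illustration of the round-trip calculation mentioned above, the following is a minimal sketch, not taken from the patent: the timestamps, the 343 m/s speed of sound, and the function name are assumptions for illustration.

```python
# Hedged sketch: distance to a reflecting surface from the round trip of an
# ultrasonic pulse. Send/receive timestamps are assumed to share one clock.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees Celsius (assumed)

def surface_distance(t_send: float, t_echo: float) -> float:
    """The pulse travels out and back, so the one-way distance is half of
    the speed of sound times the round-trip time."""
    return SPEED_OF_SOUND * (t_echo - t_send) / 2.0

# Probing the six directions yields the rough extents of the space,
# e.g. depth = front distance + back distance.
front = surface_distance(0.000, 0.0204)  # about 3.5 m
back = surface_distance(0.100, 0.1087)   # about 1.5 m
print(f"room depth is roughly {front + back:.2f} m")
```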
  • the smart screen can determine the positional relationship between the smart screen and each speaker through the methods of the above-mentioned embodiments. Combining this positional relationship with the size of the home environment space, the smart screen can determine the absolute geometric positional relationship between the smart screen and each speaker in the home environment space. Subsequently, during the sound field calibration process, the smart screen can adjust the playback parameters on the speaker device in combination with the absolute geometric positional relationship and the determined acoustic parameter model to perform sound field calibration.
  • the smart screen can determine the positional relationship between the smart screen, each speaker, and the user through the methods of the above-mentioned embodiments. Combining this positional relationship and the size of the home environment space, the smart screen can determine the absolute geometric positional relationship between the smart screen, each speaker, and the user in the home environment space. Subsequently, during the sound field calibration process, the smart screen can adjust the playback parameters on the speaker device in combination with the absolute geometric positional relationship and the determined acoustic parameter model to calibrate the sound field to the user's location.
  • the accuracy of automatic sound field calibration can be improved by determining the size of the home environment space and combining it with acoustic parameter modeling.
  • the electronic device in the communication system may not complete the home environment detection through ultrasonic signals.
  • the electronic device may complete the home environment detection through audible sounds (such as playing a piece of music, etc.).
  • the specific detection process can refer to the above-mentioned ultrasonic signal detection process, which will not be described in detail in the embodiments of this application.
  • In the above embodiments, the sound field calibration process is described using ultrasonic signals as an example. It should be understood that the user's position in the home environment can also be located by ultrasonic signals, and the brightness, on/off state, etc. of lights at different positions can then be controlled according to the user's position.
  • multiple embodiments of the present application can be combined, and the combined scheme can be implemented.
  • some operations in the process of each method embodiment are optionally combined, and/or the order of some operations is optionally changed.
  • the execution order between the steps of each process is only exemplary and does not constitute a restriction on the execution order between the steps.
  • a person of ordinary skill in the art will think of a variety of ways to reorder the operations described herein.
  • the process details involved in a certain embodiment of this article are also applicable to other embodiments in a similar manner, or different embodiments can be used in combination.
  • steps in the method embodiment may be equivalently replaced by other possible steps.
  • some steps in the method embodiment may be optional and may be deleted in certain usage scenarios.
  • other possible steps may be added to the method embodiment.
  • Figure 20 is a flow chart of a sound field calibration method provided in an embodiment of the present application. The method is applied to a system including a first electronic device and at least one second electronic device. As shown in Figure 20, the method includes the following steps.
  • S2001: At least one second electronic device receives first information for positioning from the first electronic device respectively.
  • the first electronic device or the second electronic device is a smart screen or a speaker.
  • the first information is a wireless signal, and the wireless signal is one or more of the following: an ultrasonic signal, a UWB signal, a Bluetooth signal, a Wi-Fi signal, and a millimeter wave signal.
  • At least one second electronic device respectively receives the first ultrasonic signal sent by the first electronic device through the first speaker at a second time, and the third ultrasonic signal sent through the second speaker at a fifth time; the first information used for positioning includes the first ultrasonic signal and the third ultrasonic signal.
  • the first electronic device transmits ultrasonic signals in time-sharing manner through the speaker 21 and the speaker 22. Accordingly, the second electronic device can receive the ultrasonic signals transmitted in time-sharing manner by the first electronic device through the two speakers.
  • S2002 Determine first position information of at least one second electronic device relative to the first electronic device based on first information for positioning received by at least one second electronic device.
  • the first electronic device determines first location information of at least one second electronic device relative to the first electronic device based on first information for positioning received by at least one second electronic device.
  • the second electronic device determines first position information of at least one second electronic device relative to the first electronic device according to the first information for positioning received by at least one second electronic device.
  • the electronic devices in the communication system automatically determine the positional relationship between the devices, realize automatic sound field calibration, simplify user operations, reduce user operation difficulty, and improve user experience.
  • In response to receiving, at a first time, the first ultrasonic signal sent by the first electronic device at a second time, the target second electronic device among the at least one second electronic device feeds back a second ultrasonic signal to the first electronic device at a third time; the first information includes the first ultrasonic signal.
  • the first electronic device receives the second ultrasonic signal at the fourth time. According to the first time, the second time, the third time, and the fourth time, the distance of the target second electronic device relative to the first electronic device is determined, and the first position information includes the distance.
  • For example, one of the speakers of the first electronic device sends an ultrasonic signal to the target second electronic device (speaker A), and after receiving the ultrasonic signal, speaker A replies with an ultrasonic signal to the smart screen. The time at which speaker A receives the ultrasonic signal is T1, the time at which it replies is T2, the time at which the smart screen sends its ultrasonic signal is T3, and the time at which it receives the replied ultrasonic signal is T4. The smart screen or speaker A can then determine the distance between speaker A and the smart screen based on the four time values T1, T2, T3, and T4, as sketched below. If speaker A determines the distance, it can send the determined distance information to the smart screen.
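  • The distance computation from T1, T2, T3, and T4 is a standard two-way ranging calculation. Below is a minimal sketch under invented timestamps; note that T1/T2 are read on speaker A's clock while T3/T4 are read on the smart screen's clock, and subtracting the two intervals cancels any constant offset between the clocks.

```python
SPEED_OF_SOUND = 343.0  # m/s (assumed)

def two_way_distance(t1: float, t2: float, t3: float, t4: float) -> float:
    """T3/T4: smart screen sends and receives the reply (its own clock).
    T1/T2: speaker A receives and replies (its own clock). The reply
    turnaround (t2 - t1) is removed from the measured round trip (t4 - t3)."""
    time_in_flight = (t4 - t3) - (t2 - t1)
    return SPEED_OF_SOUND * time_in_flight / 2.0

# Illustrative values for a 4 m separation (each leg takes about 11.66 ms):
print(two_way_distance(t1=5.01166, t2=5.06166, t3=0.0, t4=0.07332))  # ~4.0
```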
  • the first electronic device may send the first information for positioning in a directional manner, such as sending the first information to a target second electronic device among the at least one second electronic device.
  • the first electronic device sends the first information for positioning in a non-directional manner, but at least one second electronic device can also receive the first information.
  • any second electronic device among at least one second electronic device determines an angle relative to the first electronic device based on the distance between the first speaker and the second speaker, the time difference between the second time and the fifth time, the time of receiving the first ultrasonic signal, the time of receiving the third ultrasonic signal, and the propagation speed of the first ultrasonic signal or the third ultrasonic signal, and the first position information includes the angle.
  • the start positioning instruction sent by the smart screen to the speaker may carry the speaker distance information, such as the distance D; or, speaker A may obtain the distance D by other means.
  • the two speakers of the smart screen send ultrasonic signals in time division, and the time interval is T.
  • the speaker A can obtain the time interval T.
  • The times at which speaker A receives the two ultrasonic signals are tR1 and tL1 respectively: tR1 is the time at which speaker A receives the ultrasonic signal sent by speaker SpkR, and tL1 is the time at which it receives the ultrasonic signal sent by speaker SpkL. A sketch of the resulting angle computation follows.
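  • Using the relation (tL1 − tR1 − T) · Vs = D · sinθ stated later in the description, a hedged sketch of the angle computation could look as follows; the timing values are invented for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # Vs in m/s (assumed)

def angle_to_smart_screen(t_r1: float, t_l1: float, interval: float,
                          spacing: float) -> float:
    """Solve (tL1 - tR1 - T) * Vs = D * sin(theta) for theta in degrees,
    where T is the send interval and D the spacing of the two speakers."""
    path_difference = (t_l1 - t_r1 - interval) * SPEED_OF_SOUND
    ratio = max(-1.0, min(1.0, path_difference / spacing))
    return math.degrees(math.asin(ratio))

# Speakers 1.2 m apart, sent 50 ms apart; received 51.75 ms apart -> ~30 deg.
print(angle_to_smart_screen(t_r1=0.0, t_l1=0.05175, interval=0.050, spacing=1.2))
```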
  • In this way, the first position information between the at least one second electronic device and the first electronic device can be determined, the first position information including the distance and angle of the second electronic device relative to the first electronic device.
  • S2003 Acquire second location information of the first user relative to the first electronic device.
  • the first electronic device determines the second location information of the first user relative to the first electronic device.
  • the second electronic device determines the second location information of the first user relative to the first electronic device.
  • the first electronic device and the at least one second electronic device respectively receive a voice emitted by the first user
  • the second location information is determined according to the time when the first electronic device and the at least one second electronic device receive the voice of the first user.
  • the sound emitted by the first user is, for example, a voice command emitted by the first user.
  • the smart screen and the four speakers all detect the voice command emitted by the user, such as the time when speaker A detects the voice command is t1, the time when the smart screen detects the voice command is t2, the time when speaker B detects the voice command is t3, the time when speaker D detects the voice command is t4, and the time when speaker C detects the voice command is t5.
  • each speaker can send the time information of receiving the voice command to the smart screen.
  • the smart screen can determine the position information of each speaker relative to the smart screen based on the acquired time information.
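  • One plausible way to turn these arrival times into a user position, assuming the device clocks are synchronized and the device coordinates are already known from the ultrasonic positioning step, is a small nonlinear least-squares fit over the user position and the unknown utterance time. The coordinates and timestamps below are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s (assumed)

# Device positions in meters (smart screen, speakers A-D), assumed known.
devices = np.array([[0.0, 0.0], [-2.0, 1.0], [2.0, 1.0], [-2.0, 4.0], [2.0, 4.0]])
# Times each device heard the voice command (t2, t1, t3, t5, t4 in the text).
arrival_times = np.array([0.008867, 0.009334, 0.007289, 0.007850, 0.005256])

def residuals(params):
    x, y, t0 = params  # user position and the unknown moment of utterance
    predicted = t0 + np.linalg.norm(devices - [x, y], axis=1) / SPEED_OF_SOUND
    return predicted - arrival_times

fit = least_squares(residuals, x0=[0.0, 2.0, 0.0])
print("estimated user position:", fit.x[:2])  # roughly (0.5, 3.0) here
```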
  • In this example, the first electronic device is the smart screen, and the second electronic devices are the speakers.
  • the sound field is calibrated to the user's position through the time information of the user's voice command detected by each electronic device.
  • In the prior art, by contrast, when the user's position changes, a handheld sound pickup device has to be used again to collect audio and re-calibrate the sound field.
  • the sound field calibration method provided in the embodiment of the present application can automatically calibrate the sound field to the user's position in response to the user's voice command when the user's position changes, thereby meeting the user's usage needs while reducing the difficulty of user operation.
  • the second position information is determined in response to a user operation while the third electronic device displays a first interface based on the first position information; the first interface is used to display the positional relationship between the first electronic device and the at least one second electronic device, and the user operation is used to move the position of the identifier corresponding to the first user displayed on the first interface.
  • Before the third electronic device displays the first interface, it obtains the first position information sent by the first electronic device.
  • For example, after the smart screen (i.e., the first electronic device) determines the first position information, it can generate map information and send the map information to a mobile phone (i.e., the third electronic device).
  • the mobile phone can display an interface 1601 according to the received map information.
  • a home theater map is displayed on the interface 1601, which schematically shows the relative positional relationship between the smart screen and each speaker, and displays an "emperor's seat" icon 161.
  • the mobile phone can determine the "emperor's seat" indicated by the user.
  • the mobile phone can send the smart screen information about the position to which the user finally moves the "emperor's seat" icon 161.
  • the smart screen can calibrate the sound field to the position corresponding to this information, so that the "emperor's seat" produced by the sound field calibration meets the user's needs.
  • second information for positioning is sent toward the first user, and the second position information is determined according to the sending time of the second information and the receiving time of the reflected information corresponding to the second information.
  • the first electronic device and at least one second electronic device send an ultrasonic signal to the first user, so that each electronic device can determine the second position information between itself and the first user based on the time of sending the ultrasonic signal and the time of receiving the corresponding reflected signal.
  • second location information of the first user relative to the first electronic device is determined based on the device location of a fourth electronic device carried by the first user.
  • the user may carry electronic devices such as mobile phones, and by configuring UWB sensors, millimeter wave sensors, etc. on the electronic device, the positional relationship between the electronic device and the smart screen and each speaker can be determined.
  • the positional relationship is the positional relationship between the user and the smart screen and each speaker.
  • the user may carry wearable devices such as smart watches, smart glasses, smart headphones, etc.
  • sensors such as Bluetooth, Wi-Fi, UWB sensors, millimeter wave sensors, etc. on the wearable device, the positional relationship between the wearable device and the smart screen and each speaker can be determined.
  • the positional relationship is the positional relationship between the user and the smart screen and each speaker.
  • the playback parameters of the first electronic device and at least one second electronic device are adjusted to calibrate the sound field to the area indicated by the second position information, and the playback parameters include one or more of the following: playback frequency, response parameter, phase parameter, loudness parameter.
  • In the prior art, the user needs to hold a sound pickup device to collect audio and manually enter the distance information between the first electronic device and the second electronic device to achieve sound field calibration.
  • In the method provided here, the first electronic device or the second electronic device automatically completes the sound field calibration by sending ultrasonic signals, effectively reducing the difficulty of user operation.
  • the clocks of the first electronic device and the at least one second electronic device are synchronized, and the first electronic device (or the second electronic device) can adjust the sounding times of the first electronic device and the at least one second electronic device according to the first position information and the second position information, so that the times at which the sounds of the first electronic device and the at least one second electronic device reach the area indicated by the second position information are the same or similar.
  • the smart screen determines that the distance between speaker A and the user is d1, the distance between the smart screen and the user is d2, the distance between speaker B and the user is d3, the distance between speaker D and the user is d4, and the distance between speaker C and the user is d5.
  • the smart screen can determine the propagation time of the sound from each device to the user's ear based on the determined distance and the sound propagation speed. For example, after the speaker of speaker A makes a sound, it takes time t11 for the sound to reach the user's ear; for the speaker of the smart screen, t22; for the speaker of speaker B, t33; for the speaker of speaker D, t44; and for the speaker of speaker C, t55.
  • according to each device's sound propagation time, the smart screen can adjust the sounding times of the speakers of speaker A, the smart screen, speaker B, speaker D, and speaker C, so that the sounds reach the user's ears at the same or approximately the same time phase, thereby ensuring the user's listening experience. A sketch of this adjustment follows.
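  • A minimal sketch of this sounding-time adjustment, assuming the distances have already been determined and the device clocks are synchronized (the distance values are illustrative):

```python
SPEED_OF_SOUND = 343.0  # m/s (assumed)

# Distances from each device to the user's ears, e.g. d1..d5 above (meters).
distances = {"speaker A": 3.2, "smart screen": 3.0, "speaker B": 2.5,
             "speaker D": 1.8, "speaker C": 2.7}

travel_time = {name: d / SPEED_OF_SOUND for name, d in distances.items()}
slowest = max(travel_time.values())

# Devices closer to the user start later by the difference to the longest
# path, so every wavefront reaches the listening position together.
for name, t in travel_time.items():
    print(f"{name}: delay playback by {(slowest - t) * 1000:.1f} ms")
```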
  • third position information of the second user relative to the first electronic device is obtained. Based on the first position information, the second position information, and the third position information, the first sound field is calibrated to the first area indicated by the second position information, and the second sound field is calibrated to the second area indicated by the third position information, wherein the first sound field or the second sound field is a sound field formed by part or all of the electronic devices in the first electronic device and at least one second electronic device.
  • the smart screen can determine the user positions of user C1 and user C2, and determine the positional relationship between the smart screen, the speaker, user C1, and user C2. Afterwards, the smart screen can adjust the playback parameters of multiple speakers and the smart screen according to the determined positional relationship, so that both user C1 and user C2 can obtain a better listening experience.
  • the smart screen can adjust the playback parameters of multiple speakers and smart screens so that speakers A and C near user C1 provide a better listening experience for user C1, and speakers B and D near user C2 provide a better listening experience for user C2.
  • the smart screen can provide similar listening experiences for the two users.
  • the accuracy of the pronunciation direction is improved in a multi-user scenario, and the impact of room reverberation on the listening of multiple users is reduced, thereby providing a better listening experience for multiple users and improving the user experience of multiple users.
  • the first area range and the second area range are the sound target areas, and the area outside the first area range and the second area range is the silent area.
  • the sound waveforms of multiple speakers in the sound target area are superimposed on each other, and the sound waveforms of multiple speakers in the silent area cancel each other out. This achieves the control of sound playing in the sound target area, and no sound or a small sound in the silent area.
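  • As a toy illustration of this superposition and cancellation, the sketch below solves for two complex speaker weights at a single frequency so that the sound pressure is 1 at a sample point in the sound target area and 0 at a sample point in the silent area. This is only a conceptual sketch: a real system would use more speakers, a broadband signal, and regularized optimization rather than an exact two-point solve.

```python
import numpy as np

FREQ = 1000.0                        # Hz, single tone for the sketch
WAVENUMBER = 2 * np.pi * FREQ / 343.0

sources = np.array([[-1.0, 0.0], [1.0, 0.0]])  # two speaker positions (m)
bright = np.array([[0.5, 2.0]])                # point in the sound target area
quiet = np.array([[-1.5, 3.0]])                # point in the silent area

def transfer(points, srcs):
    """Free-field transfer functions exp(-j*k*r)/r, source -> point."""
    r = np.linalg.norm(points[:, None, :] - srcs[None, :, :], axis=2)
    return np.exp(-1j * WAVENUMBER * r) / r

H = transfer(np.vstack([bright, quiet]), sources)        # 2 points x 2 sources
weights = np.linalg.solve(H, np.array([1.0 + 0j, 0.0]))  # desired pressures
print("achieved pressures:", np.abs(H @ weights))        # ~[1, 0]
```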
  • the first electronic device determines one or more sound tracks corresponding to one or more sound objects included in the first audio to be played, and rearranges the one or more sound tracks during the sound field calibration process according to the first position information and the second position information, so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information.
  • the sound of multiple hummingbirds flapping their wings corresponds to multiple sound objects.
  • the sound tracks of the multiple sound objects are arranged and rendered.
  • the sound tracks are calibrated to the user's listening position, and through the playback cooperation of the at least one second electronic device and the first electronic device, the user obtains the listening experience of hummingbirds flying all around during video playback.
  • the sound field calibration method provided in the embodiment of the present application can re-arrange and render the sound track of the sound object, and calibrate the sound field to the user's position, so that the user can obtain an immersive listening experience.
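  • A hedged sketch of what rearranging a sound track toward listener-relative directions might look like, using simple amplitude panning over four surrounding speakers; the directions and the panning law are illustrative assumptions, not the patent's rendering algorithm.

```python
import numpy as np

# Listener-relative unit directions of the speakers, derived from the
# positioning step (assumed values).
speaker_dirs = {"A": np.array([-0.7, 0.7]), "B": np.array([0.7, 0.7]),
                "C": np.array([-0.7, -0.7]), "D": np.array([0.7, -0.7])}

def pan_gains(object_dir: np.ndarray) -> dict:
    """Toy panning: a speaker's gain grows with how closely its direction
    matches the sound object's direction; gains are power-normalized."""
    d = object_dir / np.linalg.norm(object_dir)
    raw = {name: max(0.0, float(v @ d)) for name, v in speaker_dirs.items()}
    norm = np.sqrt(sum(g * g for g in raw.values())) or 1.0
    return {name: g / norm for name, g in raw.items()}

# A hummingbird sweeping from the listener's front-left to back-right:
for angle in np.linspace(0.75 * np.pi, -0.25 * np.pi, 5):
    print(pan_gains(np.array([np.cos(angle), np.sin(angle)])))
```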
  • a target sound object among one or more sound objects selected by the user is determined.
  • a target sound track corresponding to the target sound object is rearranged during the sound field calibration process so that the calibrated sound field matches the target sound track within the area indicated by the second position information.
  • when the smart screen displays the video content, in response to the user selecting character A, the smart screen can use the perspective of character A as the user's character perspective.
  • the smart screen decodes the sound in the video content and can extract each sound object, and different sound objects have their own waveforms.
  • the smart screen combines the positional relationship between the smart screen, the speaker, and the user, and arranges and renders the sound track of each sound object according to the character perspective of character A, and arranges the sound track to the user's position.
  • the sound field calibration method calibrates the sound field to the user's location and provides the user with an immersive listening experience of the character perspective selected by the user, thereby improving the user's experience.
  • the first electronic device can also execute the steps and functions performed by the smart screen in the above embodiments, and at least one second electronic device can also execute the steps and functions performed by the speaker in the above embodiments, thereby realizing the sound field calibration method provided in the above embodiments.
  • FIG21 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • the electronic device 2100 may include: a transceiver unit 2101 and a processing unit 2102.
  • the electronic device 2100 may be used to implement the functions of the first electronic device or the second electronic device involved in the above method embodiment.
  • the transceiver unit 2101 is used to support the electronic device 2100 to execute S2001 in Figure 20.
  • the processing unit 2102 is used to support the electronic device 2100 to execute S2002, S2003 and S2004 in Figure 20.
  • the transceiver unit may include a receiving unit and a sending unit, and may be implemented by a transceiver or a transceiver-related circuit component, and may be a transceiver or a transceiver module.
  • the operations and/or functions of each unit in the electronic device 2100 are respectively to implement the corresponding process of the sound field calibration method described in the above method embodiment. All relevant contents of each step involved in the above method embodiment can be referred to the functional description of the corresponding functional unit, and for the sake of brevity, they will not be repeated here.
  • the electronic device 2100 shown in FIG21 may further include a storage unit (not shown in FIG21 ), in which a program or instruction is stored.
  • the electronic device 2100 shown in FIG21 may execute the sound field calibration method described in the above method embodiment.
  • the technical effects of the electronic device 2100 shown in FIG. 21 may refer to the technical effects of the sound field calibration method described in the above method embodiment, and will not be described in detail here.
  • the technical solution provided in the present application may also be a functional unit or chip in the electronic device, or a device used in conjunction with the electronic device.
  • An embodiment of the present application also provides a chip system, including: a processor, the processor is coupled to a memory, the memory is used to store programs or instructions, when the program or instructions are executed by the processor, the chip system implements the method in any of the above method embodiments.
  • the processor in the chip system may be one or more.
  • the processor may be implemented by hardware or by software.
  • the processor may be a logic circuit, an integrated circuit, etc.
  • the processor may be a general-purpose processor implemented by reading software code stored in a memory.
  • the memory in the chip system may also be one or more.
  • the memory may be integrated with the processor or may be separately arranged with the processor, which is not limited in the embodiments of the present application.
  • the memory may be a non-transient processor, such as a read-only memory ROM, which may be integrated with the processor on the same chip or may be arranged on different chips respectively.
  • the embodiments of the present application do not specifically limit the type of memory and the arrangement of the memory and the processor.
  • the chip system may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • each step in the above method embodiment can be completed by an integrated logic circuit of hardware in a processor or by instructions in the form of software.
  • the method steps disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware processor, or by a combination of hardware and software modules in a processor.
  • An embodiment of the present application further provides a computer-readable storage medium in which a computer program is stored. When the computer program is run on a computer, the computer executes the above-mentioned related steps to implement the sound field calibration method in the above-mentioned embodiments.
  • An embodiment of the present application further provides a computer program product. When the computer program product is run on a computer, the computer is enabled to execute the above-mentioned related steps to implement the sound field calibration method in the above-mentioned embodiments.
  • an embodiment of the present application further provides a device.
  • the device may be a component or a module, and the device may include one or more processors and a memory connected to each other, the memory being used to store a computer program.
  • When the computer program is executed by the one or more processors, the device performs the sound field calibration method in the above-mentioned method embodiments.
  • the device, computer-readable storage medium, computer program product or chip provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding methods provided above, and will not be repeated here.
  • the steps of the method or algorithm described in conjunction with the disclosed content of the embodiments of the present application can be implemented in hardware or by a processor executing software instructions.
  • the software instructions can be composed of corresponding software modules, and the software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disks, mobile hard disks, read-only compact disks (CD-ROMs) or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to a processor so that the processor can read information from the storage medium and write information to the storage medium.
  • the storage medium can also be a component of the processor.
  • the processor and the storage medium can be located in an application specific integrated circuit (ASIC).
  • the disclosed method can be implemented in other ways.
  • the device embodiments described above are merely schematic.
  • the division of the modules or units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of modules or units, which can be electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • Computer-readable storage media include, but are not limited to, any of the following: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks or optical disks, and other media that can store program codes.

Landscapes

  • Stereophonic System (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The present application provides a sound field calibration method and an electronic device, and relates to the field of terminal technology. The present application can implement automatic sound field calibration based on positioning technology, reducing the difficulty of user operation. Moreover, as the user's position changes, the sound field can be automatically calibrated to the area indicated by the user's position. The method is applied to a system including a first electronic device and at least one second electronic device, and includes: the at least one second electronic device respectively receives first information for positioning from the first electronic device. Then, according to the first information for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device is determined. In addition, second position information of a first user relative to the first electronic device is acquired. Then, according to the first position information and the second position information, the sound field is calibrated to the area indicated by the second position information.

Description

Sound field calibration method and electronic device
This application claims priority to Chinese Patent Application No. 202211648949.3, entitled "Sound field calibration method and electronic device", filed with the China National Intellectual Property Administration on December 21, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of terminal technology, and in particular to a sound field calibration method and an electronic device.
Background
A home theater is generally formed by networking a smart screen with multiple smart speaker devices, which cooperate to provide the user with a good audio-visual experience.
To guarantee the audio effect of the home theater, its sound field needs to be calibrated. During the calibration process, the user must hold a sound pickup device (such as a mobile phone, a microphone, or professional audio acquisition equipment) to collect audio; only then can the smart screen complete the sound field calibration based on the audio data sent by the pickup device.
In the above calibration process, the sound field calibration depends on the accuracy and completeness of the audio collected by the handheld pickup device, which makes the operation difficult for the user. Moreover, the sound field can only be calibrated to the position where the user held the pickup device; if the user's position changes, the user must hold the pickup device and collect audio again to re-calibrate the sound field.
Summary
To solve the above technical problem, this application provides a sound field calibration method and an electronic device. The technical solution provided by this application implements automatic sound field calibration based on positioning technology, reducing the difficulty of user operation. Moreover, as the user's position changes, the sound field can be automatically calibrated to the area indicated by the user's position.
To achieve the above technical objective, this application provides the following technical solutions:
In a first aspect, a sound field calibration method is provided, applied to a system including a first electronic device and at least one second electronic device. The method includes: the at least one second electronic device respectively receives first information for positioning from the first electronic device; according to the first information for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device is determined; second position information of a first user relative to the first electronic device is acquired; and according to the first position information and the second position information, the sound field is calibrated to the area indicated by the second position information.
In this way, compared with the prior art, in which the user must hold a sound pickup device to collect audio and manually enter the distance information between the first electronic device and the second electronic device to achieve sound field calibration, in the sound field calibration method provided by the embodiments of this application, the first electronic device or the second electronic device automatically acquires position information through the first information for positioning and completes the sound field calibration, effectively reducing the difficulty of user operation.
According to the first aspect, acquiring the second position information of the first user relative to the first electronic device includes: the first electronic device and the at least one second electronic device respectively receive a sound emitted by the first user; and the second position information is determined according to the times at which the first electronic device and the at least one second electronic device receive the sound of the first user.
Exemplarily, the sound emitted by the first user is, for example, a voice command issued by the first user.
In this way, the sound field is calibrated to the user's position through the time information of the user's sound detected by each electronic device. Compared with the prior art, in which a change in the user's position requires the handheld pickup device to collect audio again for re-calibration, the sound field calibration method provided by the embodiments of this application can, when the user's position changes, automatically calibrate the sound field to the user's position in response to the sound emitted by the user, meeting the user's needs while reducing the difficulty of operation.
According to the first aspect or any implementation of the first aspect above, acquiring the second position information of the first user relative to the first electronic device includes: receiving second position information determined by a third electronic device in response to a user operation while displaying a first interface according to the first position information; the first interface is used to display the positional relationship between the first electronic device and the at least one second electronic device, and the user operation is used to move the position of the identifier corresponding to the first user displayed on the first interface.
In some examples, the third electronic device is, for example, an electronic device on which an application with a sound field calibration function is installed.
In this way, the sound field calibration position is moved in response to the user operation, so that the calibration result meets the user's personalized needs.
According to the first aspect or any implementation of the first aspect above, before receiving the second position information determined by the third electronic device in response to a user operation while displaying the first interface according to the first position information, the method further includes: sending the first position information to the third electronic device.
According to the first aspect or any implementation of the first aspect above, acquiring the second position information of the first user relative to the first electronic device includes: sending second information for positioning toward the first user; and determining the second position information according to the sending time of the second information and the receiving time of the reflected information corresponding to the second information.
Exemplarily, the first electronic device and the at least one second electronic device send ultrasonic signals toward the first user, so that each electronic device can determine the second position information between itself and the first user according to the sending time of the ultrasonic signal and the receiving time of the corresponding reflected signal.
According to the first aspect or any implementation of the first aspect above, acquiring the second position information of the first user relative to the first electronic device includes: determining the second position information of the first user relative to the first electronic device according to the device position of a fourth electronic device carried by the first user.
Exemplarily, the user may carry an electronic device such as a mobile phone; by configuring, for example, a UWB sensor or a millimeter-wave sensor on the electronic device, the positional relationship between the electronic device and the smart screen and each speaker can be determined, and this positional relationship is the positional relationship between the user and the smart screen and each speaker.
Alternatively, the user may carry a wearable device such as a smart watch, smart glasses, or smart earphones. Through sensors configured on the wearable device, such as Bluetooth, Wi-Fi, UWB, or millimeter-wave sensors, the positional relationship between the wearable device and the smart screen and each speaker can be determined, and this positional relationship is the positional relationship between the user and the smart screen and each speaker.
In this way, adaptive sound field calibration that follows changes in the user's position is achieved, providing the user with more flexible sound field calibration and improving the user experience.
According to the first aspect or any implementation of the first aspect above, the clocks of the first electronic device and the at least one second electronic device are synchronized, and calibrating the sound field to the area indicated by the second position information according to the first position information and the second position information includes: adjusting, according to the first position information and the second position information, the sounding times of the first electronic device and the at least one second electronic device, so that the times at which the sounds of the first electronic device and the at least one second electronic device reach the area indicated by the second position information are the same or similar.
In some examples, the same or similar times refer to time synchronization on the device clocks. For example, by adjusting the sounding time of each electronic device, all the sounds reach the user's ears at the same or a similar time point (such as 14:00).
In this way, by locating the positional relationship between each electronic device in the home theater and the user and adjusting the sounding times of the speakers of the electronic devices, the sounds reach the user's ears at the same or a similar time phase, improving the user's listening experience.
According to the first aspect or any implementation of the first aspect above, the method further includes: acquiring third position information of a second user relative to the first electronic device; and according to the first position information, the second position information, and the third position information, calibrating a first sound field to a first area indicated by the second position information and calibrating a second sound field to a second area indicated by the third position information, where the first sound field or the second sound field is a sound field formed by some or all of the first electronic device and the at least one second electronic device.
Exemplarily, the first user is user C1 and the second user is user C2, and both users use the home theater. The smart screen (i.e., the first electronic device) can determine the positions of user C1 and user C2 and the positional relationship among the smart screen, the speakers (i.e., the second electronic devices), user C1, and user C2. Then, according to the determined positional relationship, the smart screen can adjust the playback parameters of the multiple speakers and the smart screen so that both user C1 and user C2 obtain a good listening experience.
For example, the smart screen can adjust the playback parameters of the multiple speakers and the smart screen so that speakers A and C near user C1 provide a better listening experience for user C1, and speakers B and D near user C2 provide a better listening experience for user C2. Optionally, if the relative distances and orientations of the smart screen to the two users are similar, the smart screen can provide a similar listening experience for both users.
In this way, through the joint adjustment of the playback parameters of multiple electronic devices, the accuracy of the sounding direction is improved in a multi-user scenario and the influence of room reverberation on the listening of multiple users is reduced, thereby providing a better listening experience for multiple users and improving their experience.
According to the first aspect or any implementation of the first aspect above, the first area range and the second area range are sound target areas, and the area outside the first area range and the second area range is a silent area.
In this way, while guaranteeing the users' listening experience, the division of silent areas reduces the influence of the home theater on users in other areas.
According to the first aspect or any implementation of the first aspect above, calibrating the sound field to the area indicated by the second position information according to the first position information and the second position information includes: determining one or more sound tracks corresponding to one or more sound objects included in first audio to be played; and rearranging the one or more sound tracks during the sound field calibration process according to the first position information and the second position information, so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information.
Exemplarily, the video content contains several flying hummingbirds. The sounds of the hummingbirds flapping their wings correspond to multiple sound objects, and the sound tracks of the multiple sound objects are arranged and rendered. Based on the already determined positional relationship among the smart screen, the speakers, and the user, the sound tracks are calibrated to the user's listening position; through the playback cooperation of the at least one second electronic device and the first electronic device, the user obtains, during video playback, the listening experience of hummingbirds flying all around.
In this way, compared with the prior art, in which all the sound corresponding to the video content in a home theater is emitted from in front of the user and the user cannot obtain an immersive experience, the sound field calibration method provided by the embodiments of this application can rearrange and render the sound tracks of the sound objects and calibrate the sound field to the user's position, giving the user an immersive listening experience.
According to the first aspect or any implementation of the first aspect above, after determining the one or more sound tracks corresponding to the one or more sound objects included in the first audio to be played, the method further includes: in response to a user operation, determining a target sound object selected by the user from the one or more sound objects. Rearranging the one or more sound tracks during the sound field calibration process according to the first position information and the second position information so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information includes: rearranging the target sound track corresponding to the target sound object during the sound field calibration process according to the first position information and the second position information, so that the calibrated sound field matches the target sound track within the area indicated by the second position information.
In this way, compared with the prior art, in which the user cannot select a character perspective during video playback in a home theater, the sound field calibration method provided by the embodiments of this application calibrates the sound field to the user's position and provides the user with an immersive listening experience from the character perspective selected by the user, improving the user's experience.
According to the first aspect or any implementation of the first aspect above, determining the first position information of the at least one second electronic device relative to the first electronic device according to the first information for positioning received by the at least one second electronic device includes: a target second electronic device among the at least one second electronic device, in response to receiving at a first time a first ultrasonic signal sent by the first electronic device at a second time, feeds back a second ultrasonic signal to the first electronic device at a third time, the first information including the first ultrasonic signal; the first electronic device receives the second ultrasonic signal at a fourth time; and according to the first time, the second time, the third time, and the fourth time, the distance of the target second electronic device relative to the first electronic device is determined, the first position information including the distance.
Exemplarily, in the process of determining the first position information, one of the speakers of the first electronic device (such as a smart screen) sends an ultrasonic signal to the target second electronic device (speaker A), and after receiving the ultrasonic signal, speaker A replies to the smart screen with an ultrasonic signal. The time at which speaker A receives the ultrasonic signal is T1, the time at which it replies is T2, the time at which the smart screen sends the ultrasonic signal is T3, and the time at which it receives the replied ultrasonic signal is T4.
Then, the smart screen or speaker A can determine the distance between speaker A and the smart screen according to the four pieces of time information T1, T2, T3, and T4. Optionally, after speaker A determines the distance between speaker A and the smart screen, it can send the determined distance information to the smart screen.
It should be understood that the first electronic device may send the first information for positioning in a directional manner, for example to the target second electronic device among the at least one second electronic device; or the first electronic device may send the first information for positioning in a non-directional manner, in which case the at least one second electronic device can also receive the first information.
According to the first aspect or any implementation of the first aspect above, the at least one second electronic device respectively receiving the first information for positioning from the first electronic device includes: the at least one second electronic device respectively receives a first ultrasonic signal sent by the first electronic device through a first speaker at a second time and a third ultrasonic signal sent through a second speaker at a fifth time, the first information for positioning including the first ultrasonic signal and the third ultrasonic signal.
According to the first aspect or any implementation of the first aspect above, determining the first position information of the at least one second electronic device relative to the first electronic device according to the first information for positioning received by the at least one second electronic device includes: any second electronic device among the at least one second electronic device determines an angle relative to the first electronic device according to the distance between the first speaker and the second speaker, the time difference between the second time and the fifth time, the time of receiving the first ultrasonic signal, the time of receiving the third ultrasonic signal, and the propagation speed of the first ultrasonic signal or the third ultrasonic signal, the first position information including the angle.
In this way, according to the first information for positioning sent by the first electronic device and received by the at least one second electronic device, at least one piece of first position information between the at least one second electronic device and the first electronic device can be determined, the first position information including the distance and angle of the second electronic device relative to the first electronic device.
According to the first aspect or any implementation of the first aspect above, calibrating the sound field to the area indicated by the second position information according to the first position information and the second position information includes: adjusting, according to the first position information and the second position information, the playback parameters of the first electronic device and the at least one second electronic device to calibrate the sound field to the area indicated by the second position information, the playback parameters including one or more of the following: playback frequency, response parameter, phase parameter, and loudness parameter.
According to the first aspect or any implementation of the first aspect above, the first information is a wireless signal, and the wireless signal is one or more of the following: an ultrasonic signal, an ultra-wideband (UWB) signal, a Bluetooth signal, a wireless fidelity (Wi-Fi) signal, and a millimeter-wave signal.
According to the first aspect or any implementation of the first aspect above, the first electronic device or the second electronic device is a smart screen or a speaker.
In a second aspect, an electronic device is provided. The electronic device includes a processor and a memory, the memory coupled to the processor, the memory being used to store computer program code, the computer program code including computer instructions which, when read by the processor from the memory, cause the electronic device to perform: at least one second electronic device respectively receives first information for positioning from a first electronic device; according to the first information for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device is determined; second position information of a first user relative to the first electronic device is acquired; and according to the first position information and the second position information, the sound field is calibrated to the area indicated by the second position information.
According to the second aspect, acquiring the second position information of the first user relative to the first electronic device includes: the first electronic device and the at least one second electronic device respectively receive a sound emitted by the first user; and the second position information is determined according to the times at which the first electronic device and the at least one second electronic device receive the sound of the first user.
According to the second aspect or any implementation of the second aspect above, acquiring the second position information of the first user relative to the first electronic device includes: receiving second position information determined by a third electronic device in response to a user operation while displaying a first interface according to the first position information; the first interface is used to display the positional relationship between the first electronic device and the at least one second electronic device, and the user operation is used to move the position of the identifier corresponding to the first user displayed on the first interface.
According to the second aspect or any implementation of the second aspect above, when the processor reads the computer instructions from the memory, the electronic device is further caused to perform: sending the first position information to the third electronic device.
According to the second aspect or any implementation of the second aspect above, acquiring the second position information of the first user relative to the first electronic device includes: sending second information for positioning toward the first user; and determining the second position information according to the sending time of the second information and the receiving time of the reflected information corresponding to the second information.
According to the second aspect or any implementation of the second aspect above, acquiring the second position information of the first user relative to the first electronic device includes: determining the second position information of the first user relative to the first electronic device according to the device position of a fourth electronic device carried by the first user.
According to the second aspect or any implementation of the second aspect above, the clocks of the first electronic device and the at least one second electronic device are synchronized, and calibrating the sound field to the area indicated by the second position information according to the first position information and the second position information includes: adjusting, according to the first position information and the second position information, the sounding times of the first electronic device and the at least one second electronic device, so that the times at which the sounds of the first electronic device and the at least one second electronic device reach the area indicated by the second position information are the same or similar.
According to the second aspect or any implementation of the second aspect above, when the processor reads the computer instructions from the memory, the electronic device is further caused to perform: acquiring third position information of a second user relative to the first electronic device; and according to the first position information, the second position information, and the third position information, calibrating a first sound field to a first area indicated by the second position information and calibrating a second sound field to a second area indicated by the third position information, where the first sound field or the second sound field is a sound field formed by some or all of the first electronic device and the at least one second electronic device.
According to the second aspect or any implementation of the second aspect above, the first area range and the second area range are sound target areas, and the area outside the first area range and the second area range is a silent area.
According to the second aspect or any implementation of the second aspect above, calibrating the sound field to the area indicated by the second position information according to the first position information and the second position information includes: determining one or more sound tracks corresponding to one or more sound objects included in first audio to be played; and rearranging the one or more sound tracks during the sound field calibration process according to the first position information and the second position information, so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information.
According to the second aspect or any implementation of the second aspect above, when the processor reads the computer instructions from the memory, the electronic device is further caused to perform: in response to a user operation, determining a target sound object selected by the user from the one or more sound objects. Rearranging the one or more sound tracks during the sound field calibration process according to the first position information and the second position information so that the calibrated sound field matches the one or more sound tracks within the area indicated by the second position information includes: rearranging the target sound track corresponding to the target sound object during the sound field calibration process according to the first position information and the second position information, so that the calibrated sound field matches the target sound track within the area indicated by the second position information.
According to the second aspect or any implementation of the second aspect above, determining the first position information of the at least one second electronic device relative to the first electronic device according to the first information for positioning received by the at least one second electronic device includes: a target second electronic device among the at least one second electronic device, in response to receiving at a first time a first ultrasonic signal sent by the first electronic device at a second time, feeds back a second ultrasonic signal to the first electronic device at a third time, the first information including the first ultrasonic signal; the first electronic device receives the second ultrasonic signal at a fourth time; and according to the first time, the second time, the third time, and the fourth time, the distance of the target second electronic device relative to the first electronic device is determined, the first position information including the distance.
According to the second aspect or any implementation of the second aspect above, the at least one second electronic device respectively receiving the first information for positioning from the first electronic device includes: the at least one second electronic device respectively receives the first ultrasonic signal sent by the first electronic device through a first speaker at the second time and a third ultrasonic signal sent through a second speaker at a fifth time, the first information for positioning including the first ultrasonic signal and the third ultrasonic signal.
According to the second aspect or any implementation of the second aspect above, determining the first position information of the at least one second electronic device relative to the first electronic device according to the first information for positioning received by the at least one second electronic device includes: any second electronic device among the at least one second electronic device determines an angle relative to the first electronic device according to the distance between the first speaker and the second speaker, the time difference between the second time and the fifth time, the time of receiving the first ultrasonic signal, the time of receiving the third ultrasonic signal, and the propagation speed of the first ultrasonic signal or the third ultrasonic signal, the first position information including the angle.
According to the second aspect or any implementation of the second aspect above, calibrating the sound field to the area indicated by the second position information according to the first position information and the second position information includes: adjusting, according to the first position information and the second position information, the playback parameters of the first electronic device and the at least one second electronic device to calibrate the sound field to the area indicated by the second position information, the playback parameters including one or more of the following: playback frequency, response parameter, phase parameter, and loudness parameter.
According to the second aspect or any implementation of the second aspect above, the first information is a wireless signal, and the wireless signal is one or more of the following: an ultrasonic signal, an ultra-wideband (UWB) signal, a Bluetooth signal, a wireless fidelity (Wi-Fi) signal, and a millimeter-wave signal.
According to the second aspect or any implementation of the second aspect above, the first electronic device or the second electronic device is a smart screen or a speaker.
For the technical effects corresponding to the second aspect and any implementation thereof, reference may be made to the technical effects corresponding to the first aspect and any implementation thereof, which are not repeated here.
In a third aspect, a sound field calibration method is provided, applied to a system including a first electronic device and at least one second electronic device. The method includes: the at least one second electronic device respectively receives first information for positioning from the first electronic device; according to the first information for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device is determined; and a placement suggestion is acquired according to preset information and the first position information.
Exemplarily, before sound field calibration starts, the first electronic device may first display prompt information according to the number of second electronic devices, the home environment, and so on, to prompt the user about the placement positions of the first electronic device and the second electronic devices, so that the placement of the first electronic device and the second electronic devices corresponds to the placement requirements of the sounding devices in a standard sound field, guaranteeing the subsequent sound field calibration effect.
According to the third aspect, the method further includes: sending the placement suggestion to a third electronic device, the placement suggestion being used to adjust the placement positions of the first electronic device and the at least one second electronic device.
In some examples, the user's third electronic device can display a map according to the placement suggestion. The user can then adjust, according to the indicated positions of the electronic devices on the map, the placement positions of the multiple electronic devices in the space, the multiple electronic devices including the first electronic device.
In some examples, when a new electronic device enters the space, or the position of an electronic device in the space moves, a placement suggestion can be acquired to determine whether the placement of the electronic devices in the space needs to be adjusted.
In this way, by acquiring placement suggestions in advance, the sound field calibration effect is effectively improved.
According to the third aspect or any implementation of the third aspect above, the method further includes: sending second information for detection into the space; and acquiring acoustic parameters in the space according to the second information for detection. Acquiring the placement suggestion according to the preset information and the first position information includes: acquiring the placement suggestion according to the preset information, the first position information, and the acoustic parameters.
In some examples, different object materials in a space differ in their ability to reflect sound and the like. Therefore, the influence of the spatial environment needs to be taken into account during sound field calibration to improve its accuracy.
In some examples, the acoustic parameters include the reflection coefficient, absorption coefficient, and transmission coefficient for sound of the objects in the space where the first electronic device is located.
According to the third aspect or any implementation of the third aspect above, the method further includes: calibrating the sound field to a first area according to the first position information, the first area being located in front of the first electronic device and being one or more of the following: an area corresponding to the display screen size of the first electronic device, the area where the user is located, or an area indicated by the user.
Exemplarily, by default the user watches directly in front of the first electronic device (such as a smart screen), so the default position for sound field calibration should be directly in front of the first electronic device. Moreover, the default position can be determined according to the display screen size of the first electronic device and taken as the optimal viewing and listening position. For example, if the display screen of the first electronic device is 75 inches, the default position is, for example, 3 to 4 meters in front of the smart screen; if the display screen is 100 inches, the default position is, for example, 5 to 6 meters in front of the smart screen.
According to the third aspect or any implementation of the third aspect above, the direction of the first area is determined according to a preset direction of the first electronic device, or is determined in response to a user operation.
In this way, the accuracy of the sound field calibration is guaranteed on the basis of the determined direction.
In a fourth aspect, an electronic device is provided. The electronic device includes a processor and a memory, the memory coupled to the processor, the memory being used to store computer program code, the computer program code including computer instructions which, when read by the processor from the memory, cause the electronic device to perform: at least one second electronic device respectively receives first information for positioning from a first electronic device; according to the first information for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device is determined; and a placement suggestion is acquired according to preset information and the first position information.
According to the fourth aspect, when the processor reads the computer instructions from the memory, the electronic device is further caused to perform: sending the placement suggestion to a third electronic device, the placement suggestion being used to adjust the placement positions of the first electronic device and the at least one second electronic device.
According to the fourth aspect or any implementation of the fourth aspect above, when the processor reads the computer instructions from the memory, the electronic device is further caused to perform: sending second information for detection into the space; and acquiring acoustic parameters in the space according to the second information for detection. Acquiring the placement suggestion according to the preset information and the first position information includes: acquiring the placement suggestion according to the preset information, the first position information, and the acoustic parameters.
According to the fourth aspect or any implementation of the fourth aspect above, when the processor reads the computer instructions from the memory, the electronic device is further caused to perform: calibrating the sound field to a first area according to the first position information, the first area being located in front of the first electronic device and being one or more of the following: an area corresponding to the display screen size of the first electronic device, the area where the user is located, or an area indicated by the user.
According to the fourth aspect or any implementation of the fourth aspect above, the direction of the first area is determined according to a preset direction of the first electronic device, or is determined in response to a user operation.
In a fifth aspect, an electronic device is provided, which has the function of implementing the sound field calibration method described in the first aspect and any possible implementation thereof, or the function of implementing the sound field calibration method described in the third aspect and any possible implementation thereof. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function.
For the technical effects corresponding to the fifth aspect and any implementation thereof, reference may be made to the technical effects corresponding to the first aspect and any implementation thereof, which are not repeated here.
In a sixth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program (which may also be called instructions or code) which, when executed by an electronic device, causes the electronic device to perform the method of the first aspect or any implementation of the first aspect, or causes the electronic device to perform the method of the third aspect or any implementation of the third aspect.
For the technical effects corresponding to the sixth aspect and any implementation thereof, reference may be made to the technical effects corresponding to the first aspect and any implementation thereof, which are not repeated here.
In a seventh aspect, a computer program product is provided which, when run on an electronic device, causes the electronic device to perform the method of the first aspect or any implementation of the first aspect, or causes the electronic device to perform the method of the third aspect or any implementation of the third aspect.
For the technical effects corresponding to the seventh aspect and any implementation thereof, reference may be made to the technical effects corresponding to the first aspect and any implementation thereof, which are not repeated here.
In an eighth aspect, a circuit system is provided. The circuit system includes a processing circuit configured to perform the method of the first aspect or any implementation of the first aspect, or configured to perform the method of the third aspect or any implementation of the third aspect.
For the technical effects corresponding to the eighth aspect and any implementation thereof, reference may be made to the technical effects corresponding to the first aspect and any implementation thereof, which are not repeated here.
In a ninth aspect, a chip system is provided, including at least one processor and at least one interface circuit. The at least one interface circuit is used to perform transceiving functions and send instructions to the at least one processor. When the at least one processor executes the instructions, the at least one processor performs the method of the first aspect or any implementation of the first aspect, or performs the method of the third aspect or any implementation of the third aspect.
For the technical effects corresponding to the ninth aspect and any implementation thereof, reference may be made to the technical effects corresponding to the first aspect and any implementation thereof, which are not repeated here.
Brief Description of the Drawings
Figure 1 is a schematic diagram of a communication system to which a sound field calibration method provided in an embodiment of this application is applied;
Figure 2A is a schematic diagram of the hardware structure of a first electronic device provided in an embodiment of this application;
Figure 2B is a schematic diagram of the speaker positions of a first electronic device provided in an embodiment of this application;
Figure 3 is a schematic diagram of the "emperor's seat" in a sound field provided in an embodiment of this application;
Figure 4A is a schematic diagram of a rectangular standard sound field provided in an embodiment of this application;
Figure 4B is a schematic diagram of a circular standard sound field provided in an embodiment of this application;
Figure 5 is interface schematic diagram 1 provided in an embodiment of this application;
Figure 6A is home theater sound field calibration scenario schematic diagram 1 provided in an embodiment of this application;
Figure 6B is a schematic diagram of an angle determination scenario in a home theater sound field calibration process provided in an embodiment of this application;
Figure 6C is a schematic diagram of a distance determination scenario in a home theater sound field calibration process provided in an embodiment of this application;
Figure 7A is home theater sound field calibration scenario schematic diagram 2 provided in an embodiment of this application;
Figure 7B is a schematic flow diagram of determining the positional relationship between speakers in a home theater sound field calibration process provided in an embodiment of this application;
Figure 7C is a schematic diagram of a speaker sounding and sound pickup scenario in a home theater sound field calibration process provided in an embodiment of this application;
Figure 8 is home theater sound field calibration scenario schematic diagram 3 provided in an embodiment of this application;
Figure 9 is home theater sound field calibration scenario schematic diagram 4 provided in an embodiment of this application;
Figure 10 is home theater sound field calibration scenario schematic diagram 5 provided in an embodiment of this application;
Figure 11 is home theater sound field calibration scenario schematic diagram 6 provided in an embodiment of this application;
Figure 12 is home theater sound field calibration scenario schematic diagram 7 provided in an embodiment of this application;
Figure 13 is interface schematic diagram 2 provided in an embodiment of this application;
Figure 14 is a schematic diagram of sound objects provided in an embodiment of this application;
Figure 15 is home theater sound field calibration scenario schematic diagram 8 provided in an embodiment of this application;
Figure 16 is interface schematic diagram 3 provided in an embodiment of this application;
Figure 17 is interface schematic diagram 4 provided in an embodiment of this application;
Figure 18 is sound propagation schematic diagram 1 provided in an embodiment of this application;
Figure 19 is sound propagation schematic diagram 2 provided in an embodiment of this application;
Figure 20 is a schematic flow chart of a sound field calibration method provided in an embodiment of this application;
Figure 21 is a schematic diagram of the structure of an electronic device provided in an embodiment of this application.
Detailed Description of Embodiments
The technical solutions in the embodiments of this application are described below in conjunction with the drawings in the embodiments of this application. In the description of the embodiments of this application, the terms used in the following embodiments are only for the purpose of describing particular embodiments and are not intended to limit this application. As used in the specification and the appended claims of this application, the singular expressions "a", "an", "the", "the above", "said", and "this" are intended to also include expressions such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of this application, "at least one" and "one or more" mean one, two, or more than two (including two).
Reference in this specification to "one embodiment" or "some embodiments" and the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless otherwise specifically emphasized. The term "connected" includes direct connection and indirect connection, unless otherwise specified. "First" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features.
In the embodiments of this application, words such as "exemplarily" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplarily" or "for example" in the embodiments of this application should not be construed as more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present related concepts in a concrete manner.
Figure 1 is a schematic diagram of a communication system to which a sound field calibration method provided in an embodiment of this application is applied. As shown in Figure 1, a communication connection is established between the first electronic device 100 and the second electronic device 200.
Exemplarily, a wireless communication connection is established between the first electronic device 100 and the second electronic device 200. Through the wireless communication connection with the second electronic device 200, the first electronic device 100 can send sound to be played on the first electronic device 100 to the second electronic device 200 for playback. The sound to be played may be an audio file. The first electronic device 100 and the second electronic device 200 cooperate to play the audio file, providing the user with the audio-visual effect of a home theater.
Exemplarily, the first electronic device 100 may include, but is not limited to, a large-screen display apparatus (such as a smart screen or a large-screen device), a notebook computer, a smartphone, a tablet computer, a projection device, a laptop, a personal digital assistant (PDA), an artificial intelligence (AI) device, a wearable device (such as a smart watch), and other devices. The operating system installed on the first electronic device 100 includes but is not limited to or other operating systems. The first electronic device 100 may also have no operating system installed. In some embodiments, the first electronic device 100 may be a stationary device or a portable device. This application places no limit on the specific type of the first electronic device 100 or the operating system installed on it.
Exemplarily, the second electronic device 200 may include, but is not limited to, electronic devices with a sound playing function such as speakers and wireless speakers. The second electronic device 200 may have an operating system installed; the operating system installed on the second electronic device 200 includes but is not limited to or other operating systems. The second electronic device 200 may also have no operating system installed. This application places no limit on the specific type of the second electronic device 200, on whether an operating system is installed, or, if one is installed, on the type of operating system.
The first electronic device 100 can establish a wireless communication connection with the second electronic device 200 through wireless communication technology. The wireless communication technology includes but is not limited to at least one of the following: Bluetooth (BT) (for example, traditional Bluetooth or Bluetooth low energy (BLE)), wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), near field communication (NFC), Zigbee, frequency modulation (FM), etc.
In some embodiments, both the first electronic device 100 and the second electronic device 200 support the proximity discovery function. Exemplarily, after the first electronic device 100 approaches the second electronic device 200, the first electronic device 100 and the second electronic device 200 can discover each other and then establish a wireless communication connection such as a Bluetooth connection or a Wi-Fi peer-to-peer (P2P) connection.
In some embodiments, the first electronic device 100 and the second electronic device 200 establish a wireless communication connection through a local area network. For example, the first electronic device 100 and the second electronic device 200 are both connected to the same router.
In some embodiments, the number of second electronic devices 200 is one or more, and the one or more second electronic devices 200 and the first electronic device 100 form a home theater. Through the sound field calibration method described in the following embodiments, the sound field calibration of the home theater is completed.
In other embodiments, the above communication system may not include the first electronic device 100; through the sound field calibration method described in the following embodiments, the sound field calibration of a communication system composed of multiple second electronic devices 200 is completed.
Figure 2A shows a schematic structural diagram of the first electronic device 100.
The first electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a power management module 140, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, keys 190, an indicator 191, a camera 192, a display screen 193, and so on.
It should be understood that the structure illustrated in the embodiments of this application does not constitute a specific limitation on the first electronic device 100. In other embodiments of this application, the first electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated into one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses and may be respectively coupled to the touch sensor, the charger, the flash, the camera 192, etc. through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor through an I2C interface, so that the processor 110 and the touch sensor communicate through the I2C bus interface to implement the touch function of the first electronic device 100.
The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 193 and the camera 192. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc. In some embodiments, the processor 110 and the camera 192 communicate through the CSI interface to implement the shooting function of the first electronic device 100, and the processor 110 and the display screen 193 communicate through the DSI interface to implement the display function of the first electronic device 100.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 can be used to transmit data between the first electronic device 100 and peripheral devices, and can also be used to connect other electronic devices, such as AR devices.
It should be understood that the interface connection relationships between the modules illustrated in the embodiments of this application are only schematic and do not constitute a structural limitation on the first electronic device 100. In other embodiments of this application, the first electronic device 100 may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The power management module 140 is used to supply power to modules such as the processor 110 included in the first electronic device 100. In some embodiments, the power management module 140 may be used to receive power supply input and support the operation of the first electronic device 100.
The wireless communication function of the first electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the first electronic device 100 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization; for example, the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antennas can be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the first electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal, and then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device, or displays an image or video through the display screen 193. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide solutions for wireless communication applied on the first electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on.
The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna 2.
In some embodiments, the antenna 1 of the first electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the first electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
The first electronic device 100 implements the display function through the GPU, the display screen 193, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 193 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 193 is used to display images, videos, and so on. The display screen 193 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc. In some embodiments, the first electronic device 100 may include 1 or N display screens 193, where N is a positive integer greater than 1.
The camera 192 is used to capture static images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the first electronic device 100 may include 1 or N cameras 192, where N is a positive integer greater than 1.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the first electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and so on. The data storage area may store data created during the use of the first electronic device 100 (such as audio data, a phone book, etc.), and so on.
In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), etc. The processor 110 executes various functional applications and data processing of the first electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110. The first electronic device 100 can implement audio functions, such as music playing and recording, through the audio module 170 and the application processor. The audio module 170 may include, for example, a speaker, a receiver, a microphone, etc.
The speaker, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The first electronic device 100 can send ultrasonic signals or play audio through the speaker. The speaker may be a built-in component of the first electronic device 100 or an external accessory of the first electronic device 100.
In some embodiments, the first electronic device 100 may include one or more speakers, where each speaker, or multiple speakers in cooperation, can implement sound field calibration and the like.
Exemplarily, Figure 2B shows an example layout of multiple speakers on the first electronic device 100. As shown in Figure 2B, when the first electronic device 100 is placed as shown in the figure, the front of the first electronic device 100 is the plane where the display screen 193 is located; speaker 21 is located at the top of the first electronic device 100 (usually on the side where the display screen is located) toward the left, and speaker 22 is located at the top of the first electronic device 100 toward the right. Further, speaker 21 and speaker 22 may be symmetrical about the center axis of the display screen 193 of the first electronic device 100.
It should be noted that the "up", "down", "left", and "right" described in the following embodiments all refer to the orientations shown in Figure 2B, which will not be repeated later.
In some embodiments, as shown in Figure 2B, the first electronic device 100 sends ultrasonic signals through speaker 21 and speaker 22 respectively, and the corresponding second electronic device 200 can receive the ultrasonic signals. The second electronic device 200 can perform positioning calculation according to the received ultrasonic signals to determine the distance and angle relationship between the first electronic device 100 and the second electronic device 200. Alternatively, the second electronic device 200 may send the time information of receiving the ultrasonic signals to the first electronic device 100, and the first electronic device 100 performs the positioning calculation to determine the distance and angle relationship between the first electronic device 100 and the second electronic device 200.
In some embodiments, the first electronic device 100 may also include a larger number of speakers. The embodiments of this application place no specific limit on the number of speakers.
The microphone, also called a "mic" or "mouthpiece", is used to convert a sound signal into an analog audio electrical signal. The first electronic device 100 can collect surrounding sound signals through the microphone. The microphone may be a built-in component of the first electronic device 100 or an external accessory of the first electronic device 100.
In some embodiments, the first electronic device 100 may include one or more microphones, where each microphone, or multiple microphones in cooperation, can collect sound signals from various directions and convert the collected sound signals into analog audio electrical signals, and can also implement functions such as identifying the sound source, noise reduction, or directional recording.
Exemplarily, the first electronic device 100 collects the user's sound signal through the microphone, thereby locating the user's position.
The sensor module 180 may include a pressure sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and so on.
The pressure sensor is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor may be provided on the display screen 193. There are many types of pressure sensors, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material; when a force acts on the pressure sensor, the capacitance between the electrodes changes, and the first electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 193, the first electronic device 100 detects the intensity of the touch operation through the pressure sensor. The first electronic device 100 can also calculate the touch position according to the detection signal of the pressure sensor.
The touch sensor is also called a "touch device". The touch sensor may be provided on the display screen 193; the touch sensor and the display screen 193 form a touch screen, also called a "touch-controlled screen". The touch sensor is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event, and visual output related to the touch operation can be provided through the display screen 193. In other embodiments, the touch sensor may also be provided on the surface of the first electronic device 100 at a position different from that of the display screen 193.
In some embodiments, the first electronic device 100 displays a sound field calibration interface through the display screen 193, and, after detecting through the touch sensor the user's operation of instructing sound field calibration on the sound field calibration interface, automatically completes the sound field calibration.
The keys 190 include a power key, a volume key, and so on. The keys 190 may be mechanical keys or touch keys. The first electronic device 100 can receive key input and generate key signal input related to the user settings and function control of the first electronic device 100.
The indicator 191 may be an indicator light, which can be used to indicate the power state, and can also be used to indicate messages, notifications, and so on.
In some embodiments, the second electronic device 200 may have a hardware structure the same as or similar to that of the first electronic device 100. The second electronic device 200 may also include more or fewer components than the structure shown in Figure 2A, combine some components, split some components, or arrange the components differently. The embodiments of this application place no specific limit on the structure of the second electronic device 200.
为了便于说明,以下实施例均以第一电子设备100为智慧屏,以第二电子设备200为音箱(如包括音箱A、音箱B、音箱C、音箱D)为例,对本申请实施例提供的声场校准方法进行详细介绍。
在一些实施例中,在家庭影院中,针对用户位置进行“皇帝位”声场校准,从而为用户提供更好的听音效果。一些示例中,“皇帝位”例如是指在声音角度上,声音响度、立体环绕效果等均衡的位置,可通过声场校准,调整音箱的放音参数,从而满足“皇帝位”的需求。
示例性的,如图3所示,在“皇帝位”调试过程中,可先确定最佳的观影或者听音位置。例如,将“皇帝位”设置到智慧屏正前方,距离智慧屏3米远的位置等。之后,以设置的“皇帝位”为中心位置,进行声场校准,将声场中心校准到“皇帝位”,从而使得用户的看与听的最佳位置能够集合到该“皇帝位”。
在一些实施例中,标准声场一般包括矩形标准声场和圆形标准声场。例如,如图4A所示的矩形标准声场,多个发声设备(如包括智慧屏和音箱)所在位置连线可组成矩形。如图4B所示的圆形标准声场,多个发声设备所在位置连线可组成圆形。
在声场校准过程中,用户可家庭将影院中包括的智慧屏和音箱,按照矩形标准声场和圆形标准声场对应的位置进行布置,从而保证声场校准效果。
应理解,声场中包括的发声设备数量,可以比图4A或图4B所示的标准声场中包括的发声设备数量更多或更少。
在一些实施例中,智慧屏可响应于用户操作,开始对家庭影院进行声场校准。可选的,在声场校准开始前,智慧屏可先根据音箱数量、家居环境等,显示提示信息,以提示用户音箱的摆放位置。从而使得音箱和智慧屏的摆放位置对应于标准声场中发声设备的位置摆放需求,以保证后 续声场校准效果。可选的,用户操作例如包括在声场校准界面上的操作、语音命令等。
示例性的,如图5中(a)所示界面501,智慧屏在显示设置菜单51的过程中,检测到用户点击设置菜单51上显示的声音设置控件511的操作,可显示如图5中(b)所示声场校准界面502。在声场校准界面502上,智慧屏显示提示框52,用于提示用户是否确认开始进行声场校准。若检测到用户点击开始控件521的操作,可确定当前用户需要对家庭影院进行声场校准。
可选的,智慧屏在确定用户指示进行声场校准后,可根据已经建立通信连接的音箱数量、家居环境等,自动生成相应的音箱摆放建议,使得音箱和智慧屏的摆放位置对应于标准声场中发声设备的位置摆放需求,从而保证后续声场校准效果。
示例性的,如图5中(c)所示界面503,智慧屏显示提示信息53,用于提示用户音箱的摆放位置。之后,智慧屏检测到用户点击确认控件531的操作后,可确定已完成音箱的摆放。比如,如图6A所示,用户按照界面503所示的提示信息53,摆放音箱A、音箱B、音箱C、音箱D。
可选的,在声场校准过程中,若智慧屏确定当前音箱摆放位置偏差较大,影响声场校准。那么,智慧屏可显示提示信息,以提示用户调整音箱摆放位置,从而保证声场校准效果。
比如,如图5中(c)所示界面503,智慧屏显示的音箱摆放建议为四个音箱分开在房间四周摆放。例如,可按照如图6A所示场景中音箱A、音箱B、音箱C、音箱D的位置摆放。但是,在声场校准过程中,智慧屏确定音箱A和音箱C位于智慧屏左侧,且摆放在一起;音箱B和音箱D位于智慧屏右侧,且摆放在一起。即,没有音箱摆放于沙发后方。智慧屏确定当前音箱摆放位置会导致声场校准效果较差,那么智慧屏可显示提示信息,提示用户重新摆放音箱。
In some embodiments, the smart screen may automatically perform sound field calibration based on the positional relationship between the smart screen and the multiple speakers. Optionally, the smart screen may send ultrasonic signals at different times through the left and right loudspeakers shown in FIG. 2B. Correspondingly, a speaker is configured with a microphone array, and based on the time difference between the two ultrasonic signals received by the microphone array, the speaker can determine the distance and angle relationship between the speaker and the smart screen through a positioning algorithm.
Then, the speaker may send the determined distance and angle relationship to the smart screen. Based on the distance and angle relationships sent by the multiple speakers, the smart screen can determine the geometric relationship of distances and angles among the speakers and the smart screen, and then calibrate the "emperor seat" to a default position in front of the smart screen.
Optionally, it is assumed by default that the user watches directly in front of the smart screen, so the default position should be directly in front of the smart screen. Moreover, the default position may be determined based on the display size of the smart screen and taken as the best viewing and listening position.
For example, if the display size of the smart screen is 75 inches, the default position is, for example, 3-4 meters in front of the smart screen; if the display size is 100 inches, the default position is, for example, 5-6 meters in front of the smart screen.
Exemplarily, in the scenario shown in FIG. 6A, the smart screen faces the sofa, four speakers are placed in the direction the smart screen faces, and the smart screen has established communication connections with all four speakers. In addition, the definitions of the front-facing and back-facing directions are preset in the smart screen. Taking the smart screen configured with the left and right loudspeakers shown in FIG. 2B as an example, the smart screen can distinguish the left and right positions of these two loudspeakers.
During sound field calibration, the smart screen sends a start-positioning instruction to the speakers, instructing them to start positioning. Then, the left and right loudspeakers of the smart screen send ultrasonic signals at different times; based on the left-right positions of the loudspeakers, the smart screen can determine whether a speaker is on the left or right side of the smart screen.
Correspondingly, a speaker can receive the ultrasonic signals sent by the smart screen and determine, through a positioning algorithm, the positional relationship between the speaker and the smart screen.
As shown in FIG. 6B, take speaker A determining the angle relationship between speaker A and the smart screen as an example. The distance between loudspeaker SpkR and loudspeaker SpkL of the smart screen is D.
Optionally, the start-positioning instruction sent by the smart screen to the speakers may carry the loudspeaker distance information, for example, the distance D; alternatively, speaker A may obtain the distance D in other ways.
The two loudspeakers of the smart screen send ultrasonic signals at different times, with a time interval of T. Optionally, speaker A can obtain the time interval T. Speaker A receives the two ultrasonic signals at times tR1 and tL1 respectively; for example, as shown in FIG. 6B, speaker A receives the ultrasonic signal sent by loudspeaker SpkR at time tR1, and receives the ultrasonic signal sent by loudspeaker SpkL at time tL1.
Then, based on the propagation speed Vs of the ultrasonic signal, speaker A can determine the angle θ between speaker A and the smart screen according to the formula (tL1 - tR1 - T)·Vs = D·sinθ. Speaker A may then send the determined angle information to the smart screen.
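As a minimal illustration (not part of the patented method), the angle computation above follows directly from the stated formula; the 343 m/s speed of sound and all timing values below are assumed example figures:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, an assumed value for Vs at room temperature

def angle_from_tdoa(t_r1: float, t_l1: float, send_gap: float, spk_dist: float) -> float:
    """Estimate the angle theta between speaker A and the smart screen.

    t_r1, t_l1: arrival times of the signals from SpkR and SpkL (seconds)
    send_gap:   interval T between the two transmissions (seconds)
    spk_dist:   distance D between SpkR and SpkL (meters)
    Implements (tL1 - tR1 - T) * Vs = D * sin(theta).
    """
    path_diff = (t_l1 - t_r1 - send_gap) * SPEED_OF_SOUND
    sin_theta = max(-1.0, min(1.0, path_diff / spk_dist))  # clamp numeric noise
    return math.degrees(math.asin(sin_theta))

# Example: SpkL fires 0.5 s after SpkR, loudspeakers 1.2 m apart -> ~31 degrees
print(angle_from_tdoa(t_r1=0.0031, t_l1=0.5049, send_gap=0.5, spk_dist=1.2))
```

Clamping sin θ guards against measurement noise pushing the ratio slightly outside [-1, 1].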
As shown in FIG. 6C, in the position information determination process, one of the loudspeakers of the smart screen sends an ultrasonic signal to speaker A, and after receiving the ultrasonic signal, speaker A replies to the smart screen with an ultrasonic signal. Speaker A receives the ultrasonic signal at time T1 and replies at time T2; the smart screen sends the ultrasonic signal at time T3 and receives the replied ultrasonic signal at time T4. Then, the smart screen or speaker A can determine the distance between speaker A and the smart screen based on the four time values T1, T2, T3, and T4. Optionally, after speaker A determines the distance between speaker A and the smart screen, it may send the determined distance information to the smart screen.
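A minimal sketch of this two-way ranging, again with illustrative values; note that no common clock is needed, because T2 - T1 is measured on speaker A's clock while T4 - T3 is measured on the smart screen's:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed

def distance_from_round_trip(t1: float, t2: float, t3: float, t4: float) -> float:
    """Two-way ultrasonic ranging between the smart screen and speaker A.

    t3/t4 are the screen's send/receive times; t1/t2 are speaker A's
    receive/reply times. The reply delay (t2 - t1) is subtracted from
    the round-trip time (t4 - t3), leaving twice the one-way flight time.
    """
    time_of_flight = ((t4 - t3) - (t2 - t1)) / 2.0
    return time_of_flight * SPEED_OF_SOUND

# Example: 40 ms round trip with a 20 ms reply delay -> ~3.43 m
print(distance_from_round_trip(t1=0.010, t2=0.030, t3=0.0, t4=0.040))
```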
In this way, based on the above process in which speaker A determines the angle information and distance information, the other speakers can likewise determine their respective angle and distance information relative to the smart screen.
Optionally, a speaker may determine the angle and distance information between the speaker and the smart screen through one or more interactions with the smart screen.
Optionally, while receiving ultrasonic signals, a speaker can collect multiple copies of the ultrasonic signal through its microphone array. The speaker may use a preset algorithm to determine the average of the angles and distances corresponding to the individual microphones, thereby reducing the impact of signal jitter.
Subsequently, based on the angle and distance information fed back by the speakers, the smart screen calibrates the "emperor seat" to the default position in front of the smart screen (for example, the position of the sofa 3 meters in front of the smart screen). In this way, the user can obtain a better audio-visual effect when using the home theater on the sofa.
Thus, in the prior art, the user needs to hold a sound pickup device to collect audio and manually enter the distance information between the smart screen and the speakers in order to achieve sound field calibration. In contrast, with the sound field calibration method provided in the embodiments of this application, the smart screen can automatically complete sound field calibration by sending ultrasonic signals, effectively reducing the difficulty of user operation.
In other embodiments, the determination of the position information between the smart screen and the speakers may also be completed by the smart screen, where the position information includes angle information and distance information.
Exemplarily, in the scenario shown in FIG. 6A, still taking speaker A as an example, the two loudspeakers of the smart screen send ultrasonic signals respectively; after receiving the ultrasonic signals, speaker A returns them to the smart screen through its communication connection with the smart screen (for example, a Wi-Fi connection). Correspondingly, after receiving the ultrasonic signals returned by speaker A, the smart screen can determine the angle information between speaker A and the smart screen through a preset algorithm.
The smart screen sends a sound pickup instruction to speaker A through its communication connection with speaker A (for example, a Wi-Fi or Bluetooth connection), instructing speaker A to start picking up sound, and the smart screen also starts picking up sound. Then, the smart screen sends an ultrasonic signal through either of its two loudspeakers, and after receiving the ultrasonic signal, speaker A sends an ultrasonic signal to the smart screen. In this process, the smart screen and speaker A each record the ultrasonic signals sent by themselves and by the peer, and speaker A returns its recordings to the smart screen through the communication connection. Then, based on its own recordings and the obtained recordings from speaker A, the smart screen determines the direction of speaker A and the distance between speaker A and the smart screen.
In this way, based on the above process of determining the angle and distance information for speaker A, the angle and distance information between each of the other speakers and the smart screen can likewise be determined.
Optionally, a speaker may determine the angle and distance information between the speaker and the smart screen through one or more interactions with the smart screen.
Optionally, while receiving ultrasonic signals, the speaker and the smart screen can collect multiple copies of the ultrasonic signal through their microphone arrays. The smart screen may use a preset algorithm to determine the average of the angles and distances corresponding to the individual microphones, thereby reducing the impact of signal jitter.
Subsequently, the smart screen may calibrate the sound field to the default position based on the determined angle and distance information.
In other embodiments, sound field calibration may also be completed by the speakers independently. For example, the user configures multiple speakers in a bedroom; the speakers establish connections to form a communication system that does not include a smart screen. The speakers can then automatically complete sound field calibration of the communication system through the above ultrasonic positioning method.
Exemplarily, as shown in FIG. 7A, speaker A, speaker B, speaker C, and speaker D form a communication system. In response to a user operation, one of the speakers may be determined as the master speaker and the others as slave speakers. The following describes the sound field calibration process taking speaker A as the master speaker.
During sound field calibration, the master speaker needs to determine the distance between every two speakers. If the communication system includes N speakers, N(N-1)/2 distances need to be determined. The scenario shown in FIG. 7A includes 4 speakers (that is, N=4), so speaker A needs to determine 6 distances, including the distance LAB between speaker A and speaker B, the distance LAD between speaker A and speaker D, the distance LAC between speaker A and speaker C, the distance LBC between speaker B and speaker C, the distance LBD between speaker B and speaker D, and the distance LCD between speaker C and speaker D.
In addition, the master speaker also needs to determine the directional order of the slave speakers from the master speaker's perspective.
Then, based on the inter-speaker distances and the directional order of the slave speakers, the master speaker can determine the positional relationship between each slave speaker and the master speaker from the master speaker's perspective, and perform sound field calibration based on this positional relationship.
Exemplarily, as shown in FIG. 7B, speaker A is the master speaker, and the speakers in the communication system have established communication connections (for example, Wi-Fi connections). Then, based on these connections, speaker A sends indication information to each speaker, instructing it to start picking up sound and to send an ultrasonic signal after waiting for a period of time (that is, notifying the sounding order). Speaker A may customize the waiting time of each speaker, with a different waiting time for each, so that the speakers can subsequently record the ultrasonic signals one after another. In other words, speaker A determines the sounding order of the speakers; for example, as shown in FIG. 7B, the speakers sound in the order speaker A, speaker B, speaker C, speaker D.
Then, speaker A starts sending its ultrasonic signal, and speaker B, speaker C, and speaker D also start sending their ultrasonic signals after waiting, according to the received indication information. As shown in FIG. 7C, while the ultrasonic signals are being sent, each speaker records the ultrasonic signals sent by itself and by the other speakers, and can determine the arrival time of each received ultrasonic signal; that is, as shown in FIG. 7B, each speaker computes intermediate results. Then, speaker B, speaker C, and speaker D can return the recorded time information (that is, the determined intermediate results) to speaker A, and based on its own recorded time information and the time information recorded by the other speakers, speaker A can determine the distance between every two speakers in the communication system; a sketch of this computation follows below.
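The embodiments do not spell out the arithmetic, but a common way to obtain clock-free pairwise distances from such recordings (akin to the BeepBeep ranging scheme) is sketched below; all timestamps are illustrative, and the small distance from each loudspeaker to its own microphone is ignored:

```python
import itertools

SPEED_OF_SOUND = 343.0  # m/s, assumed

def pairwise_distance(rec_i: dict, rec_j: dict, i: str, j: str) -> float:
    """Distance between speakers i and j from each device's own recording.

    rec_i[k] is the time, on speaker i's local clock, at which it heard
    the chirp emitted by speaker k (likewise rec_j). Each device only
    uses differences of its own timestamps, so the clocks need not be
    synchronized, and the unknown emission times cancel out.
    """
    delta_i = rec_i[j] - rec_i[i]   # elapsed time between the two chirps, heard at i
    delta_j = rec_j[j] - rec_j[i]   # the same two chirps, heard at j
    return SPEED_OF_SOUND * (delta_i - delta_j) / 2.0

# Example with 4 speakers: the master collects everyone's timestamps and
# evaluates all N*(N-1)/2 = 6 pairs (values are made up for illustration).
recordings = {
    "A": {"A": 0.000, "B": 1.0090, "C": 2.0117, "D": 3.0070},
    "B": {"A": 0.0090, "B": 1.000, "C": 2.0065, "D": 3.0085},
    "C": {"A": 0.0117, "B": 1.0065, "C": 2.000, "D": 3.0091},
    "D": {"A": 0.0070, "B": 1.0085, "C": 2.0091, "D": 3.000},
}
for i, j in itertools.combinations(recordings, 2):
    print(i, j, round(pairwise_distance(recordings[i], recordings[j], i, j), 2))
```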
In addition, based on the time differences with which different microphones in its microphone array receive the ultrasonic signal sent by the same slave speaker, speaker A can determine the azimuth of that slave speaker relative to speaker A. Therefore, in the above process of determining the inter-speaker distances, speaker A can determine, based on the received ultrasonic signals from the slave speakers, the directional order of the speakers from speaker A's perspective.
Then, as shown in FIG. 7B, speaker A can determine the positional relationship between the speakers based on the inter-speaker distances and the directional order of the speakers, and can then perform sound field calibration based on this positional relationship.
Optionally, in the master speaker confirmation process, the user may enter its direction information. In the scenario shown in FIG. 7A, take the speaker at the user's front left as the default master speaker as an example.
For example, one or more speakers broadcast by voice "Please press the play button on the speaker device at your front left" to prompt the user to press a button on the front-left speaker device. Then, the speaker that detects the user operation, for example, speaker A, can determine itself to be the master speaker. Subsequently, speaker A can use itself as the coordinate origin to locate the directions of the other speakers, such as the user's front right, back left, and back right.
As another example, multiple speakers take turns broadcasting a voice prompt asking the user to identify the speaker's direction, and when a user confirmation is detected for the speaker at the front left, that speaker is determined to be the master speaker.
For example, multiple speakers take turns broadcasting "Which direction am I in? You can say to me: front left, front right, back left, or back right." When the user replies "front left" by voice, the speaker recognizes through its speech recognition capability that the intent of the user's reply is the front-left direction; that speaker can then be designated the master speaker at the front-left position, and the other speakers are positioned with it as the coordinate origin.
Optionally, after the master speaker is determined, the remaining speakers that have not yet broadcast may skip their voice broadcasts and directly perform positioning with the determined master speaker as the coordinate origin. Alternatively, the speakers take turns broadcasting voice prompts and, through their speech recognition capabilities, determine the direction of each speaker from the user's replies.
As another example, the master speaker's direction is determined through gesture recognition. Multiple speakers take turns broadcasting a voice prompt asking the user to identify the speaker's direction, and when a user confirmation is detected for the speaker at the front left, that speaker is determined to be the master speaker.
For example, multiple speakers take turns broadcasting "Please confirm my direction with a gesture. If I am at your front left, wave your arm from left to right; if at your front right, wave from right to left; if at your back left, wave from top to bottom; if at your back right, wave from bottom to top." A speaker can determine the user's gesture through its gesture recognition capability. When a speaker detects the user's "wave from left to right" gesture, it recognizes that the intent of the gesture is the front-left direction; that speaker can be designated the master speaker at the front-left position, with the other speakers positioned using it as the coordinate origin.
Optionally, after the master speaker is determined, the remaining speakers that have not yet broadcast may skip their voice broadcasts and directly perform positioning with the determined master speaker as the coordinate origin. Alternatively, the speakers take turns broadcasting voice prompts and, through their gesture recognition capabilities, determine the direction of each speaker from the user's gestures.
It should be understood that the above voice broadcast contents and gestures are merely illustrative; the embodiments of this application do not specifically limit them.
As yet another example, the user may enter direction information into the master speaker (for example, speaker A), such as the direction speaker A faces; alternatively, the user may enter into speaker A the positional relationships of the speakers, such as speaker B being to the right of speaker A.
In this way, the electronic devices in the communication system automatically determine the positional relationships among the devices, achieving automatic sound field calibration, simplifying user operation, reducing the difficulty of user operation, and improving user experience.
In some embodiments, the microphone array configured on an electronic device (for example, a smart screen or speaker) may include multiple vertically arranged microphones; based on this vertical microphone array, the spatial height of the device's placement position can be determined. During sound field calibration, the electronic device performs calibration using the positional relationships among the devices determined above together with this spatial height information, which can improve calibration accuracy.
For example, in a scenario where the communication system includes a smart screen and speakers, the smart screen may determine a spatial horizontal reference line. Optionally, the spatial horizontal reference line is, for example, one or more of the following: the position of the lower edge of the smart screen, the position of its upper edge, the position of its loudspeakers, the position of the floor, or the position of the ceiling. Optionally, the smart screen is configured with one or more of the following: a height loudspeaker firing toward the ceiling, loudspeakers deployed in different directions over 360°, or loudspeakers deployed on the back of the smart screen; through ultrasonic signals sent by these loudspeakers, the smart screen can determine the position of the floor, the ceiling, and the like.
The smart screen may send indication information to the speakers to indicate the spatial horizontal reference line. Subsequently, during sound field calibration, a speaker receives the ultrasonic signal sent by one of the smart screen's loudspeakers, and based on the time differences with which the microphones of the vertical array receive the signal and the distances between the microphones, the speaker can determine its height relative to the spatial horizontal reference line and feed this spatial height information back to the smart screen. The smart screen can then perform sound field calibration by combining the determined positional relationship between the smart screen and the speakers, the obtained spatial height information from the speakers, and its own spatial height information, achieving a more accurate calibration.
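A minimal sketch of the height estimate, under a far-field (planar wavefront) assumption across the small vertical array; all values are illustrative:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed

def height_offset(dt: float, mic_spacing: float, src_distance: float) -> float:
    """Height of the sound source relative to the receiving device.

    dt:           arrival-time difference between the top and bottom
                  microphones of the vertical array (seconds; positive
                  when the top microphone hears the signal first)
    mic_spacing:  vertical distance between the two microphones (m)
    src_distance: previously measured distance to the source (m)

    Far-field: the wavefront is planar across the small array, so
    sin(elevation) = c * dt / mic_spacing.
    """
    sin_elev = max(-1.0, min(1.0, SPEED_OF_SOUND * dt / mic_spacing))
    return src_distance * sin_elev

# Example: top mic leads by 87.5 us on a 6 cm array, source 3 m away -> ~1.5 m
print(height_offset(dt=87.5e-6, mic_spacing=0.06, src_distance=3.0))
```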
As another example, in a scenario where the communication system includes only speakers, the master speaker may determine the spatial horizontal reference line. Optionally, the spatial horizontal reference line is, for example, one or more of the following: the position of the lower edge of the master speaker, the position of its upper edge, the position of its loudspeaker, the position of the floor, or the position of the ceiling. The master speaker may send indication information to the slave speakers to indicate the spatial horizontal reference line. Subsequently, during sound field calibration, each speaker can determine its height relative to the spatial horizontal reference line based on the time differences with which the microphones of the vertical array receive the ultrasonic signal and the distances between the microphones.
The master speaker can then perform sound field calibration by combining the determined inter-speaker distances, directional order, and spatial height information, achieving a more accurate calibration.
It should be noted that in the above embodiments, the process by which the electronic devices automatically determine the positional relationships among the devices is described taking ultrasonic positioning technology as an example. It should be understood that the electronic devices in the communication system may also automatically determine the positional relationships among the devices in various other ways.
For example, the devices may automatically determine the positional relationships by being configured with one or more of the following: an ultra wideband (UWB) sensor, a millimeter-wave sensor, multiple antennas for Wi-Fi-based positioning, or multiple antennas for Bluetooth-based positioning.
The following uses ultrasonic positioning technology as an example to illustrate the positioning process of multiple electronic devices; the process of automatic device positioning by other means may refer to the process of automatic device positioning through ultrasonic positioning technology, and is not described again below.
In addition, the sound field calibration algorithm used by the smart screen during calibration may refer to existing sound field calibration algorithms, which are not specifically limited or described in the embodiments of this application.
The following describes the sound field calibration process in detail taking a communication system including a smart screen and speakers as an example. It should be understood that when the communication system does not include a smart screen, the calibration process may refer to that of a communication system including a smart screen, and is not described again in the embodiments of this application.
In some embodiments, during sound field calibration, the smart screen may also locate the user's position and calibrate the sound field to the user's position, thereby providing the user with a better listening experience.
In some examples, the speakers and the smart screen can perform sound field calibration after being woken up. Optionally, the user may wake up the speakers and smart screen through a preset voice command, such as "Xiaoyi, Xiaoyi". Alternatively, a voice command dedicated to sound field calibration is preset, such as "sound field calibration". Then, when the smart screen and speakers in the communication system detect the preset voice command, they can determine that the user has indicated sound field calibration, and they can record the time at which the voice command was detected for subsequent localization of the user.
Exemplarily, as shown in FIG. 8, the smart screen and the 4 speakers all detect the voice command issued by the user; for example, speaker A detects the command at time t1, the smart screen at time t2, speaker B at time t3, speaker D at time t4, and speaker C at time t5. Then, each speaker can send the time at which it received the voice command to the smart screen.
Here, the smart screen and the multiple speakers in the communication system have completed clock synchronization through a high-precision clock synchronization algorithm, with a time error within a preset range (for example, on the order of 1 μs). Then, based on the obtained times at which the speakers received the voice command, the smart screen can determine the time differences. Based on the propagation speed of sound, the smart screen can then determine the range differences between the devices and the user, and further determine the distances between the user and the smart screen and between the user and each speaker, for example, distances d1-d5; a sketch of this computation follows below.
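One straightforward way to turn these arrival times into a user position is a least-squares fit of the range differences; the grid search below is a deliberately simple sketch (a practical implementation would use a faster solver), and the device layout is assumed for illustration:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed

def locate_user(device_pos: np.ndarray, t_arrival: np.ndarray,
                area=(-5.0, 5.0), step: float = 0.05) -> np.ndarray:
    """TDOA localization of the user who spoke the wake-up command.

    device_pos: (N, 2) known positions of the screen/speakers, in meters
                (from the ultrasonic calibration step)
    t_arrival:  (N,) times at which each device heard the command; the
                devices are clock-synchronized, but the utterance time is
                unknown, so only time *differences* carry information.
    """
    xs = np.arange(area[0], area[1], step)
    meas_dd = SPEED_OF_SOUND * (t_arrival - t_arrival[0])   # d_i - d_0
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            p = np.array([x, y])
            d = np.linalg.norm(device_pos - p, axis=1)
            err = np.sum((d - d[0] - meas_dd) ** 2)
            if err < best_err:
                best, best_err = p, err
    return best

devices = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
user_true = np.array([1.0, 2.0])
t = np.linalg.norm(devices - user_true, axis=1) / SPEED_OF_SOUND
print(locate_user(devices, t))  # ~[1.0, 2.0]
```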
Then, combining the positional relationship between the smart screen and the speakers determined above through ultrasonic positioning technology (for example, including distance and angle relationships), the smart screen can determine the positional relationships among the user, the smart screen, and the speakers. Based on these relationships, the smart screen can adjust the speakers' sound effects and calibrate the "emperor seat" sound field to the user's position.
In this way, the sound field is calibrated to the user's position based on the times at which the speakers detected the user's voice command. In the prior art, when the user's position changes, audio must be collected again with a handheld sound pickup device to re-calibrate the sound field. In contrast, the sound field calibration method provided in the embodiments of this application can, when the user's position changes, automatically calibrate the sound field to the user's position in response to the user's voice command, meeting the user's needs while reducing the difficulty of user operation.
In other examples, the user issues a preset voice command, such as "Xiaoyi, Xiaoyi". In response to detecting the preset voice command, the smart screen and the speakers in the communication system can each determine the direction of the user.
Exemplarily, as shown in FIG. 9, the smart screen and the 4 speakers all detect the voice command issued by the user, so the smart screen and the speakers can determine the direction of the user relative to themselves through a sound source localization algorithm. Then, the smart screen and the speakers can send ultrasonic signals toward the user's direction and use the ultrasonic signals to probe the distance between themselves and the user.
For example, as shown in FIG. 9, speaker A uses an ultrasonic signal to determine that its distance to the user is d1, the smart screen determines its distance to the user is d2, speaker B determines d3, speaker D determines d4, and speaker C determines d5. Then, the speakers can send the determined user-to-speaker distances to the smart screen.
Correspondingly, the smart screen can obtain the distance between each speaker device and the user and, combining the positional relationship between the smart screen and the speakers determined above through ultrasonic positioning technology (for example, including distance and angle relationships), determine the positional relationships among the user, the smart screen, and the speakers.
The smart screen can then adjust the speakers' sound effects based on these positional relationships and calibrate the "emperor seat" sound field to the user's position.
In this way, the user's direction is determined from the user's voice command detected by each electronic device, the distance to the user is then determined, and the sound field is calibrated to the user's position. In the prior art, when the user's position changes, audio must be collected again with a handheld sound pickup device to re-calibrate the sound field. In contrast, the sound field calibration method provided in the embodiments of this application can, when the user's position changes, automatically calibrate the sound field to the user's position in response to the user's voice command, meeting the user's needs while reducing the difficulty of user operation.
In other examples, the smart home system includes sensors that can be used to locate the user's position (for example, millimeter-wave sensors). After determining the user's position, these sensors can send it to the smart screen. The smart screen can then perform sound field calibration based on the user's position and the already determined positional relationship between the smart screen and the speakers (for example, including distance and angle relationships), thereby calibrating the "emperor seat" of the sound field to the user's position.
In other examples, the user may carry an electronic device such as a mobile phone; by configuring the device with, for example, a UWB sensor or millimeter-wave sensor, the positional relationship between the device and the smart screen and each speaker can be determined, and this positional relationship is the positional relationship between the user and the smart screen and each speaker. The smart screen can then obtain this positional relationship and, combining it with the already determined positional relationship between the smart screen and the speakers (for example, including distance and angle relationships), perform sound field calibration, thereby calibrating the "emperor seat" of the sound field to the user's position.
In other examples, the user may wear a wearable device, such as a smart watch, smart glasses, or smart earphones. Through sensors configured on the wearable device, such as Bluetooth, Wi-Fi, UWB sensors, or millimeter-wave sensors, the positional relationship between the wearable device and the smart screen and each speaker can be determined, and this positional relationship is the positional relationship between the user and the smart screen and each speaker. The smart screen can then obtain this positional relationship and, combining it with the already determined positional relationship between the smart screen and the speakers (for example, including distance and angle relationships), perform sound field calibration, thereby calibrating the "emperor seat" of the sound field to the user's position.
In this way, through any one or more of the above examples, adaptive sound field calibration that follows changes in the user's position can be achieved, providing the user with more flexible sound field calibration and improving the user experience.
In some embodiments, multiple users may use the home theater at the same time; during sound field calibration, the listening experience of multiple users can be taken into account, achieving multi-sweet-spot sound field calibration.
Exemplarily, in the scenario shown in FIG. 10, two users, user C1 and user C2, are using the home theater. Through one or more of the example embodiments above, the smart screen can determine the positions of user C1 and user C2 and the positional relationships among the smart screen, the speakers, user C1, and user C2. The smart screen can then, based on the determined relationships, adjust the playback parameters of the speakers and the smart screen so that both user C1 and user C2 obtain a good listening experience. The playback parameters include, for example, phase, frequency response, loudness, and reverberation.
For example, in the scenario shown in FIG. 10, by adjusting the playback parameters of the speakers and the smart screen, the smart screen can have speaker A and speaker C, which are close to user C1, provide a better listening experience for user C1, and speaker B and speaker D, which are close to user C2, provide a better listening experience for user C2. Optionally, if the smart screen's relative distance and direction to the two users are similar, the smart screen can provide the two users with a similar listening experience.
For example, by adjusting the speakers' loudness, the smart screen has speaker A and speaker C provide user C1 with audio at a preset loudness that meets user C1's loudness needs, so that user C1 is not given excessively loud audio merely to accommodate user C2's loudness needs.
Further, when different sound waveforms meet at the same moment, the sound is reinforced if two wave crests meet, and cancelled if a crest of one waveform meets a trough of another. Therefore, in adjusting the playback parameters of different devices, the smart screen can reduce the mutual interference between the playback of different speakers by adjusting phase and other parameters.
For example, in the scenario shown in FIG. 10, while speaker A and speaker C provide a better listening experience for user C1, the influence of speaker A and speaker C on speaker B and speaker D providing a better listening experience for user C2 can be reduced.
In this way, through the joint adjustment of the playback parameters of multiple speakers, the accuracy of the sound direction is improved in multi-user scenarios, and the impact of room reverberation on multiple users' listening is reduced, thereby providing every user with a better listening experience and improving the experience of multiple users.
In some embodiments, through joint adjustment of the playback parameters of different speakers, the mutual superposition and cancellation of the speakers' sound waveforms can be controlled to divide the space into sound target zones and quiet zones. In a sound target zone, the waveforms of the multiple speakers superpose; in a quiet zone, they cancel. This makes it possible to play sound within the sound target zone while producing no sound, or only faint sound, in the quiet zone; a single-point sketch follows below.
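As a single-point, free-field illustration of the cancellation idea (real quiet-zone control must handle broadband signals and room reflections), the sketch below nulls a test tone at one point Q by driving a second speaker with a delayed, inverted, gain-matched copy; all distances and frequencies are assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0                     # m/s, assumed
fs, f = 48_000, 1000.0                     # sample rate (Hz), test tone (Hz)
t = np.arange(0.0, 0.05, 1 / fs)
d1, d2 = 2.0, 1.5                          # speaker 1 / speaker 2 distance to Q (m)

# Speaker 2 radiates an inverted copy of speaker 1's tone, pre-delayed by the
# path-length difference and scaled for the 1/r spreading loss, so that the
# two wavefronts arrive at Q equal-and-opposite.
extra_delay = (d1 - d2) / SPEED_OF_SOUND   # seconds speaker 2 must wait
gain = d2 / d1                             # amplitude match at Q

# Free-field pressure contributed by each speaker at Q:
p1 = np.sin(2 * np.pi * f * (t - d1 / SPEED_OF_SOUND)) / d1
p2 = -gain * np.sin(2 * np.pi * f * (t - extra_delay - d2 / SPEED_OF_SOUND)) / d2
print(np.max(np.abs(p1 + p2)))             # ~0: the tone is nulled at Q
```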
Exemplarily, as shown in FIG. 11, based on the determined positional relationships among one or more users, the smart screen, and the multiple speakers, the smart screen adjusts the playback parameters of the smart screen and the speakers so that the living room is a sound target zone and the area outside the living room is a quiet zone. For example, through the joint adjustment of device playback parameters, the smart screen makes the master bedroom a quiet zone. In this way, a user using the home theater in the living room does not disturb the rest of a user in the master bedroom.
Thus, while ensuring the user's listening experience, the division into quiet zones reduces the impact of the home theater on users in other areas.
In some embodiments, through ultrasonic positioning technology, sound source localization technology, high-precision clock synchronization technology, or the methods described in one or more of the embodiments above, the smart screen can determine the positional relationships among the smart screen, the speakers, and the user. The smart screen can then, based on these relationships, adjust the sounding times of the smart screen and the speakers so that the sound emitted by the loudspeakers of the multiple devices reaches the user's ears at the same time phase, bringing the user a better listening experience.
Exemplarily, as shown in FIG. 12, the smart screen determines that the distance between speaker A and the user is d1, between the smart screen and the user is d2, between speaker B and the user is d3, between speaker D and the user is d4, and between speaker C and the user is d5.
Based on the determined distances and the propagation speed of sound, the smart screen can determine the propagation time from a device emitting sound to the sound reaching the user's ears. For example, after speaker A's loudspeaker sounds, the sound takes t11 to reach the user's ears; after the smart screen's loudspeaker sounds, it takes t22; after speaker B's loudspeaker sounds, it takes t33; after speaker D's loudspeaker sounds, it takes t44; and after speaker C's loudspeaker sounds, it takes t55.
The smart screen can then, based on each device's sound propagation time, adjust the sounding times of the loudspeakers of speaker A, the smart screen, speaker B, speaker D, and speaker C to t1-t5 respectively, so that the sound reaches the user's ears at the same or approximately the same time phase, ensuring the user's listening experience; a sketch of this offset computation follows below.
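A minimal sketch of the offset computation: hold back every closer device by the difference in propagation time so that all wavefronts arrive together; the distances are illustrative:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed

def emission_offsets(distances: dict) -> dict:
    """Per-device playback delays that make all speakers' sound reach
    the listener at the same time phase.

    The farthest device starts immediately; every closer device is
    held back by the difference in propagation time.
    """
    t_max = max(distances.values()) / SPEED_OF_SOUND
    return {dev: t_max - d / SPEED_OF_SOUND for dev, d in distances.items()}

# Distances d1-d5 measured during calibration (illustrative values, meters)
d = {"speaker_A": 2.8, "screen": 3.0, "speaker_B": 2.5,
     "speaker_D": 1.6, "speaker_C": 1.4}
for dev, dt in emission_offsets(d).items():
    print(f"{dev}: start {dt * 1000:.2f} ms after the farthest device")
```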
Optionally, during sound propagation, objects in the home environment (such as furniture and electronic devices) reflect the sound. Therefore, in adjusting the sounding times of the smart screen and the speakers, the smart screen can adjust the sounding times so that a device's direct sound and reflected sound reach the user's ears at the same time, providing the user with a better listening experience. It should be noted that for the influence of the home environment on sound propagation, reference may be made to the related embodiments below; details are not repeated here.
In this way, by locating the positional relationships between the electronic devices in the home theater and the user and adjusting the sounding times of the devices' loudspeakers, the sound reaches the user's ears at the same or a similar time phase, improving the user's listening experience.
In some embodiments, the home theater scenario supports 3D audio playback and can re-arrange the trajectories of the sound objects in the video content, bringing the user a better sound experience.
Here, different sounds in the video content may correspond to different sound objects, and the propagation trajectory of a sound in space may correspond to the sound trajectory of the sound object. Each sound object corresponds to its own sound content and sound trajectory, and the sound trajectory is produced and changes over time.
Optionally, through ultrasonic positioning technology, sound source localization technology, high-precision clock synchronization technology, or the methods described in one or more of the embodiments above, the smart screen can determine the positional relationships among the smart screen, the speakers, and the user. The smart screen can then map the sound trajectories of the sound objects in the video content onto the actual spatial scene of the home theater and calibrate the sound field to the user's position, so that the home theater's presentation of the video content's sound matches the sound trajectories, providing the user with an immersive viewing experience.
Exemplarily, the video content contains several hummingbirds in flight. The sounds of the hummingbirds' flapping wings correspond to multiple sound objects, and the smart screen arranges and renders the sound trajectories of these sound objects. Moreover, based on the already determined positional relationships among the smart screen, the speakers, and the user, the smart screen calibrates the sound trajectories to the user's listening position; through the coordinated playback of the speakers and the smart screen, the user experiences the hummingbirds flying all around during video playback.
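The embodiments do not fix a particular rendering algorithm; one simple way to steer such a moving sound object across calibrated speakers is constant-power amplitude panning between the pair of speakers bracketing the object's azimuth, sketched below with assumed speaker azimuths:

```python
import math

def pan_gains(obj_az: float, spk_az: dict) -> dict:
    """Constant-power pan of a sound object at azimuth obj_az (degrees)
    onto the two speakers that bracket it.

    spk_az maps speaker name -> azimuth relative to the listening
    position (obtained from the calibration step).
    """
    names = sorted(spk_az, key=spk_az.get)
    gains = dict.fromkeys(names, 0.0)
    for a, b in zip(names, names[1:] + names[:1]):
        span = (spk_az[b] - spk_az[a]) % 360 or 360
        offset = (obj_az - spk_az[a]) % 360
        if offset <= span:
            frac = offset / span              # 0 -> fully on a, 1 -> fully on b
            gains[a] = math.cos(frac * math.pi / 2)
            gains[b] = math.sin(frac * math.pi / 2)
            return gains
    return gains

# Hummingbird circling the listener: sweep the azimuth over time and
# re-evaluate the gains per audio block.
speakers = {"screen": 0.0, "B": 60.0, "D": 150.0, "C": 210.0, "A": 300.0}
print(pan_gains(105.0, speakers))  # energy split between speakers B and D
```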
Thus, in the prior art, all sound corresponding to video content in a home theater is emitted from in front of the user, and the user cannot obtain an immersive experience. In contrast, the sound field calibration method provided in the embodiments of this application can re-arrange and re-render the sound trajectories of sound objects and calibrate the sound field to the user's position, giving the user an immersive listening experience.
Optionally, in response to the user's operation of selecting a target character, the smart screen can take the perspective of the character selected by the user as the user's character perspective and re-arrange and re-render the sound object trajectories in the video content according to that character perspective.
Moreover, the smart screen calibrates the sound field of that character perspective to the user's listening position, so that the user participates in the video content from the determined character perspective and obtains the listening and viewing experience of that perspective.
Exemplarily, as shown in FIG. 13, while displaying video content, in response to the user's operation of selecting character A, the smart screen can take character A's perspective as the user's character perspective. The smart screen decodes the sound in the video content and can extract the individual sound objects, each with its own waveform, such as the waveforms shown in (1), (2), (3), and (4) of FIG. 14. Before sound output, the smart screen, combining the positional relationships among the smart screen, the speakers, and the user, arranges and renders the sound trajectory of each sound object according to character A's perspective, arranging the sound trajectories to the user's position.
For example, in the home theater scenario shown in FIG. 15, the smart screen determines that in the scenario shown in FIG. 13, the current line "Wait a moment" corresponds to the voice of character C, and in the video content character C is behind character A, whom the user selected. Then, after the sound trajectory arrangement and rendering is completed, the smart screen can complete the playback of this line through the cooperation of speaker C and speaker D located behind the sofa, so that the user sitting on the sofa obtains an immersive listening experience.
In some embodiments, in a multi-user scenario, the smart screen can take the positional relationships of multiple users into account and provide an immersive listening experience for each user.
For example, if different users select the same character perspective, the smart screen can, through the example method above, arrange and render the sound trajectories of the sound objects corresponding to the sounds in the video content, so that multiple users obtain the immersive listening experience of the same character perspective.
As another example, if different users select different character perspectives, the smart screen can group the speakers in the home theater, with different groups serving different users; for example, user A corresponds to the first speaker group and user B to the second group. The smart screen can arrange and render sound object trajectories according to the character perspective user A selected in the video content and, through the first speaker group, provide user A with the listening experience of that perspective; likewise, it can arrange and render sound object trajectories according to the character perspective user B selected and, through the second speaker group, provide user B with the listening experience of that perspective. In this way, in a multi-user scenario, every user is provided with an immersive listening experience.
Thus, in the prior art, the user cannot select a character perspective during video playback in a home theater. In contrast, the sound field calibration method provided in the embodiments of this application calibrates the sound field to the user's position and provides the user with an immersive listening experience from the character perspective the user selected, improving the user experience.
In some embodiments, an application with a sound field calibration function can be installed on an electronic device (such as a mobile phone or smart screen), and the user can directly adjust the "emperor seat" of the home theater's sound field calibration through the application, so that the sound field is calibrated to the "emperor seat" the user selects, meeting the user's personalized needs.
Exemplarily, an application with a sound field calibration function is installed on a mobile phone; after determining the positional relationships between the smart screen and the speakers, the smart screen can generate map information and send it to the phone.
As shown in FIG. 16, based on the received map information, the phone can display interface 1601, on which a home theater map is shown. The map schematically shows the relative positional relationships of the smart screen and the speakers and displays an "emperor seat" icon 161. In response to the user's operation of moving the "emperor seat" icon 161, the phone can determine the "emperor seat" indicated by the user.
Then, the phone can send to the smart screen the information on the position to which the user finally moved the "emperor seat" icon 161. After receiving this information, the smart screen can calibrate the sound field to the corresponding position, so that the calibrated "emperor seat" meets the user's needs.
Optionally, the phone displays the "emperor seat" icon 161 at a random position in the home theater map, or at a default display position, where the default display position is a fixed display position, a recommended "emperor seat", or the "emperor seat" determined in the previous sound field calibration.
Optionally, after determining the positional relationships between the smart screen and the speakers, the smart screen can perform sound field calibration, determine the best listening position, and indicate that the "emperor seat" icon 161 be displayed at that best listening position. Moreover, the user holds a phone or wears a wearable device, and the user's position can be determined through the positioning function of the phone or wearable device, for example, through a UWB sensor, a millimeter-wave sensor, or other high-precision indoor positioning technology. The phone can then guide the user to the determined best listening position.
Optionally, while displaying interface 1601, the phone can display prompt information asking the user to confirm whether to calibrate the sound field to the "emperor seat" currently shown by the "emperor seat" icon 161. After detecting the user's confirmation operation, the phone can send the information on the position corresponding to the "emperor seat" icon 161 to the smart screen.
Optionally, in response to user operations, the smart screen can also perform sound field calibration for multiple "emperor seats".
Exemplarily, as shown in FIG. 17, an application with a sound field calibration function is installed on the phone. After determining the positional relationships between the smart screen and the speakers, the smart screen sends the generated map information to the phone. Based on the map information, the phone can first display one or a preset number of "emperor seat" icons; then, in response to the user's operation of indicating the creation of new "emperor seat" icons, a corresponding number of new icons can be created.
As shown in interface 1701, the phone displays "emperor seat" icon 171 and "emperor seat" icon 172. Then, in response to the user's operations on these two icons, the phone can determine the two "emperor seats" indicated by the user and send their information to the smart screen. Based on the received information, the smart screen can complete sound field calibration for both "emperor seats".
It should be understood that after determining the positional relationships between the smart screen and the speakers, the smart screen can also directly display the corresponding map. Then, in response to the user's operation of moving the "emperor seat" icon on the smart screen, the smart screen can likewise complete sound field calibration according to the "emperor seat" indicated by the user.
In this way, the positional relationships between the smart screen and the speakers are determined automatically in multiple ways, and the home theater's sound field is calibrated flexibly in response to user operations, meeting users' personalized needs, reducing the difficulty of user operation, and improving the user experience.
In some embodiments, different object materials in a space differ in their ability to reflect sound and the like. Therefore, during sound field calibration, the smart screen needs to take the influence of the spatial environment into account to improve the accuracy of the calibration.
Exemplarily, when the incident sound shown in (1) of FIG. 18 contacts an object surface, it produces the reflected sound shown in (2) of FIG. 18, the transmitted sound shown in (3) of FIG. 18, and the absorbed sound shown in (4) of FIG. 18. Moreover, because object surfaces differ in flatness, the same incident sound may correspond to reflected sound in multiple different directions. Sound field calibration then needs to calibrate the incident and reflected sound from multiple directions to the same "emperor seat". Therefore, the smart screen needs to determine in advance the acoustic parameter model corresponding to the current spatial environment for use in subsequent sound field calibration, so as to avoid the spatial environment affecting the calibration.
Optionally, after the acoustic parameters are modeled, the smart screen can use the acoustic parameter model as an input to the sound field calibration processes of the embodiments above, thereby improving the accuracy of the calibration.
In some embodiments, as shown in FIG. 19, any electronic device in the home theater can act as the transmitting device. After the transmitting device's loudspeaker sends an ultrasonic signal, the other devices, as receiving devices, can receive the direct sound corresponding to that ultrasonic signal. In addition, after the ultrasonic signal is reflected by various objects in the room (such as walls, the ceiling, the floor, windows, and other devices), the receiving devices can receive the reverberant sound corresponding to the ultrasonic signal.
Based on the received ultrasonic signal (including, for example, the direct sound and the reverberant sound), a receiving device can perform acoustic calculations to determine the corresponding acoustic parameters. The acoustic parameters include, for example, the decoration materials of the home environment and the reflection, absorption, and transmission coefficients of the furnishings for sound; an illustrative estimate is sketched below.
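As one illustrative way (assumed here, not specified in the embodiments) to extract such a parameter: after undoing the 1/r spreading loss, the amplitude ratio of an identified reflection to the direct arrival gives a rough pressure reflection coefficient, with absorption and transmission accounting for the remaining energy:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed

def reflection_coefficient(a_direct: float, t_direct: float,
                           a_refl: float, t_refl: float) -> float:
    """Rough pressure reflection coefficient of a surface, estimated
    from the direct sound and one identified reflection in the
    recorded ultrasonic response.

    a_direct/a_refl: peak amplitudes of the direct and reflected arrivals
    t_direct/t_refl: their flight times since emission (seconds)
    """
    r_direct = SPEED_OF_SOUND * t_direct   # direct path length
    r_refl = SPEED_OF_SOUND * t_refl       # path length via the surface
    # Undo free-field 1/r decay before comparing amplitudes
    return (a_refl * r_refl) / (a_direct * r_direct)

# Direct arrival after 8.7 ms, wall reflection after 14.6 ms (made-up values)
print(reflection_coefficient(a_direct=0.50, t_direct=0.0087,
                             a_refl=0.18, t_refl=0.0146))
```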
Then, the probing of the current spatial environment is completed by having the transmitting device send ultrasonic signals at different angles, or by having different electronic devices act as the transmitting device; the smart screen can obtain the acoustic parameters sent by the speakers to establish an acoustic parameter model.
Subsequently, during sound field calibration, the smart screen can, based on the acoustic parameter model, adjust the playback frequency, response parameters, phase parameters, loudness parameters, and the like of the loudspeakers of the smart screen and the speakers to complete the calibration.
In some examples, the acoustic parameters determined by the electronic devices can also be used on their own in the sound field calibration process.
In some examples, a speaker sends ultrasonic signals at different angles and directions through its loudspeaker, carrying the angle and direction information in the ultrasonic signal. Then, after the microphones of the other speakers receive the ultrasonic signal, they can perform acoustic analysis in combination with the carried angle and direction information to determine the corresponding acoustic parameters (such as reflection, absorption, and transmission coefficients). After the whole-house ultrasonic environment probing is complete, the speakers feed the acoustic parameters corresponding to the different angles and directions back to the smart screen, which performs acoustic parameter modeling based on the received parameters for the multiple angles and directions.
In other examples, the smart screen may be configured with multiple loudspeakers, so it can send ultrasonic signals through loudspeakers facing different directions. For example, the smart screen is configured with a height loudspeaker firing toward the ceiling, loudspeakers deployed in different directions over 360°, loudspeakers deployed on the back of the smart screen, and the like. Then, after the speakers' microphones receive the ultrasonic signals, they perform acoustic analysis to determine the corresponding acoustic parameters (such as reflection, absorption, and transmission coefficients). This achieves thorough and complete probing of the home environment. Subsequently, the smart screen can perform acoustic parameter modeling based on the acoustic parameters fed back by the different speakers, combined with the sounding directions of the loudspeakers that sent the ultrasonic signals.
It should be noted that in the above two example scenarios, the transmitting device may carry the sending angle and direction of the ultrasonic signal in the signal, and the receiving device calculates and feeds back the acoustic parameters corresponding to that angle and direction. Alternatively, the transmitting device sends ultrasonic signals at different angles and directions and, after receiving the acoustic parameters fed back by the receiving device, matches the parameters to the corresponding angle and direction. That is, the matching of acoustic parameters to angles and directions can be done at either the receiving end or the transmitting end. The electronic device that ultimately establishes the acoustic parameter model (such as the smart screen) can obtain the matching relationship and the corresponding acoustic parameters in order to establish the model.
In other examples, the smart screen and the multiple speakers in the home theater have established communication connections with one another, forming a network; through coordination between the smart screen and the speakers, the acoustic parameter model can be established.
For example, after speaker A in the communication system sends an ultrasonic signal, speaker B in the communication system can receive it. Moreover, through the various implementations illustrated above, speaker B can determine the positional relationship between speaker A and speaker B. Speaker B can then, through acoustic analysis, determine the acoustic parameters of the ultrasonic signal for the different angles and reflecting objects in the home environment.
Then, speaker A selects other electronic devices in the communication system to send ultrasonic signals, and the other devices receive them, so that the acoustic parameters corresponding to multiple reflection paths and reflecting objects can be calculated. This achieves more thorough probing of the home environment and improves the precision of the acoustic parameter model.
Optionally, the smart screen can act as the scheduling device in the communication system, used to determine which transmitting device sends the ultrasonic signal.
In other examples, the speakers in a home theater are generally deployed on the ceiling and/or the floor. Then, in establishing the acoustic parameter model, a speaker deployed on the ceiling can send the ultrasonic signal, and a speaker deployed on the floor receives it. The speaker receiving the ultrasonic signal can then, through acoustic analysis, calculate the acoustic parameters of the objects between the ceiling and the floor and feed them back to the smart screen, which completes the establishment of the acoustic parameter model.
Alternatively, in establishing the acoustic parameter model, a speaker deployed on the floor can send the ultrasonic signal, and a speaker deployed on the ceiling receives it. The speaker receiving the ultrasonic signal can then, through acoustic analysis, calculate the acoustic parameters of the objects between the floor and the ceiling and feed them back to the smart screen, which completes the establishment of the acoustic parameter model.
Optionally, a speaker deployed on the floor may, for example, be placed on a floor stand, a TV cabinet, or the like.
In this way, acoustic parameter modeling improves the accuracy of automatic sound field calibration.
In some embodiments, a speaker or the smart screen can send ultrasonic signals through its loudspeakers toward different directions in the space, for example, toward at least six directions: up, down, front, back, left, and right. Then, based on the received ultrasonic reflection signals, the speaker or smart screen can analyze the size of the current home environment space; for example, it determines the size of the space from the time difference between sending the ultrasonic signal and receiving the reflected signal. The smart screen can then obtain the size of the home environment space; a sketch of the time-of-flight arithmetic follows below.
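A minimal sketch of that arithmetic, with illustrative echo delays for the six probing directions:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed

def wall_distance(t_send: float, t_echo: float) -> float:
    """Distance to a reflecting surface from the round-trip time of an
    ultrasonic pulse (emit, bounce, receive on the same device)."""
    return SPEED_OF_SOUND * (t_echo - t_send) / 2.0

# Probing six directions (up/down/front/back/left/right) gives the room's
# extent along each axis; the echo delays below are made-up values (seconds).
echoes = {"front": 0.0175, "back": 0.0117, "left": 0.0088,
          "right": 0.0146, "up": 0.0131, "down": 0.0044}
dims = {k: wall_distance(0.0, t) for k, t in echoes.items()}
print("room size: "
      f"{dims['front'] + dims['back']:.2f} m x "
      f"{dims['left'] + dims['right']:.2f} m x "
      f"{dims['up'] + dims['down']:.2f} m")
```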
Then, through the methods illustrated in the embodiments above, the smart screen can determine the positional relationships between the smart screen and the speakers. Combining these relationships with the size of the home environment space, the smart screen can determine the absolute geometric positions of the smart screen and the speakers within the space. Subsequently, during sound field calibration, the smart screen can combine these absolute geometric positions with the already determined acoustic parameter model to adjust the playback parameters on the speaker devices and perform the calibration.
Alternatively, through the methods illustrated in the embodiments above, the smart screen can determine the positional relationships among the smart screen, the speakers, and the user. Combining these relationships with the size of the home environment space, the smart screen can determine the absolute geometric positions of the smart screen, the speakers, and the user within the space. Subsequently, during sound field calibration, the smart screen can combine these absolute geometric positions with the already determined acoustic parameter model to adjust the playback parameters on the speaker devices and calibrate the sound field to the user's position.
In this way, determining the size of the home environment space, combined with acoustic parameter modeling, improves the accuracy of automatic sound field calibration.
It should be noted that in the home environment probing process, the electronic devices in the communication system may also complete the probing without ultrasonic signals. For example, an electronic device can complete the probing of the home environment through audible sound (such as playing a piece of music); the specific probing process may refer to the ultrasonic signal probing process above and is not described again in the embodiments of this application.
In addition, in the above embodiments, the sound field calibration process is described using ultrasonic signals. It should be understood that ultrasonic signals can also be used to locate the user's position in the home environment, and then, based on the user's position, to control the brightness, on/off state, and the like of lights at different positions.
In some solutions, multiple embodiments of this application may be combined and the combined solutions implemented. Optionally, some operations in the flows of the method embodiments are optionally combined, and/or the order of some operations is optionally changed. Moreover, the execution order between the steps of each flow is merely exemplary and does not constitute a limitation; other execution orders are also possible between the steps. It is not intended to indicate that the described execution order is the only order in which these operations can be performed. A person of ordinary skill in the art will think of multiple ways to reorder the operations described herein. In addition, it should be noted that the process details involved in one embodiment herein also apply in a similar manner to other embodiments, or different embodiments may be used in combination.
Moreover, some steps in the method embodiments may be equivalently replaced with other possible steps; alternatively, some steps in the method embodiments may be optional and may be deleted in some usage scenarios; alternatively, other possible steps may be added to the method embodiments.
Moreover, the method embodiments may be implemented individually or in combination.
Exemplarily, FIG. 20 is a schematic flowchart of a sound field calibration method provided in an embodiment of this application. The method is applied to a system including a first electronic device and at least one second electronic device. As shown in FIG. 20, the method includes the following steps.
S2001: The at least one second electronic device receives, from the first electronic device respectively, first information used for positioning.
Here, the first electronic device or the second electronic device is a smart screen or a speaker. The first information is a wireless signal, and the wireless signal is one or more of the following: an ultrasonic signal, a UWB signal, a Bluetooth signal, a Wi-Fi signal, or a millimeter-wave signal.
In some embodiments, the at least one second electronic device receives, respectively, the first ultrasonic signal sent by the first electronic device through the first loudspeaker at the second time, and the third ultrasonic signal sent through the second loudspeaker at the fifth time, where the first information used for positioning includes the first ultrasonic signal and the third ultrasonic signal.
Exemplarily, as shown in FIG. 2B, the first electronic device sends ultrasonic signals at different times through loudspeaker 21 and loudspeaker 22 respectively. Correspondingly, the second electronic device can receive the ultrasonic signals sent by the first electronic device through the two loudspeakers at different times.
S2002: Determine, based on the first information used for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device.
In some embodiments, the first electronic device determines, based on the first information used for positioning received by the at least one second electronic device, the first position information of the at least one second electronic device relative to the first electronic device.
Alternatively, the second electronic device determines, based on the first information used for positioning received by the at least one second electronic device, the first position information of the at least one second electronic device relative to the first electronic device.
In this way, the electronic devices in the communication system automatically determine the positional relationships among the devices, achieving automatic sound field calibration, simplifying user operation, reducing the difficulty of user operation, and improving user experience.
In some embodiments, in response to the first ultrasonic signal sent by the first electronic device at the second time and received at the first time, a target second electronic device among the at least one second electronic device feeds back a second ultrasonic signal to the first electronic device at the third time, where the first information includes the first ultrasonic signal. The first electronic device receives the second ultrasonic signal at the fourth time. Based on the first time, the second time, the third time, and the fourth time, the distance of the target second electronic device relative to the first electronic device is determined, where the first position information includes the distance.
Exemplarily, as shown in FIG. 6C, in the first position information determination process, one of the loudspeakers of the first electronic device (for example, a smart screen) sends an ultrasonic signal to the target second electronic device (speaker A); after receiving the ultrasonic signal, speaker A replies to the smart screen with an ultrasonic signal. Speaker A receives the ultrasonic signal at time T1 and replies at time T2; the smart screen sends the ultrasonic signal at time T3 and receives the replied ultrasonic signal at time T4. Then, the smart screen or speaker A can determine the distance between speaker A and the smart screen based on the four time values T1, T2, T3, and T4.
Optionally, after speaker A determines the distance between speaker A and the smart screen, it may send the determined distance information to the smart screen.
It should be understood that the first electronic device may send the first information used for positioning in a directed manner, for example, sending the first information to the target second electronic device among the at least one second electronic device.
Alternatively, the first electronic device sends the first information used for positioning in a non-directed manner, and the at least one second electronic device can still receive the first information.
In some embodiments, any second electronic device among the at least one second electronic device determines its angle relative to the first electronic device based on the distance between the first loudspeaker and the second loudspeaker, the time difference between the second time and the fifth time, the time of receiving the first ultrasonic signal, the time of receiving the third ultrasonic signal, and the propagation speed of the first ultrasonic signal or the third ultrasonic signal, where the first position information includes the angle.
Exemplarily, as shown in FIG. 6B, take speaker A (that is, any second electronic device among the at least one second electronic device) determining the angle relationship between speaker A and the smart screen (that is, the first electronic device) as an example. The distance between loudspeaker SpkR and loudspeaker SpkL of the smart screen is D. Optionally, the start-positioning instruction sent by the smart screen to the speakers may carry the loudspeaker distance information, for example, the distance D; alternatively, speaker A may obtain the distance D in other ways.
Here, the two loudspeakers of the smart screen send ultrasonic signals at different times, with a time interval of T. Optionally, speaker A can obtain the time interval T. Speaker A receives the two ultrasonic signals at times tR1 and tL1 respectively; for example, as shown in FIG. 6B, speaker A receives the ultrasonic signal sent by loudspeaker SpkR at time tR1, and receives the ultrasonic signal sent by loudspeaker SpkL at time tL1.
Then, based on the propagation speed Vs of the ultrasonic signal, speaker A can determine the angle θ between speaker A and the smart screen according to the formula (tL1 - tR1 - T)·Vs = D·sinθ. Speaker A may then send the determined angle information to the smart screen.
In this way, based on the first information used for positioning sent by the first electronic device and received by the at least one second electronic device, at least one piece of first position information between the at least one second electronic device and the first electronic device can be determined, where the first position information includes the distance and angle of the second electronic device relative to the first electronic device.
S2003: Obtain second position information of the first user relative to the first electronic device.
In some embodiments, the first electronic device determines the second position information of the first user relative to the first electronic device. Alternatively, the second electronic device determines the second position information of the first user relative to the first electronic device.
In some scenarios, the first electronic device and the at least one second electronic device each receive the sound made by the first user. The second position information is determined based on the times at which the first electronic device and the at least one second electronic device received the first user's sound.
Exemplarily, the sound made by the first user is, for example, a voice command issued by the first user. As shown in FIG. 8, the smart screen and the 4 speakers all detect the voice command issued by the user; for example, speaker A detects the command at time t1, the smart screen at time t2, speaker B at time t3, speaker D at time t4, and speaker C at time t5.
Then, each speaker can send the time at which it received the voice command to the smart screen. Based on the obtained time information, the smart screen can determine the position information of the user relative to the smart screen. Here, the first electronic device is the smart screen, and the second electronic devices are the speakers.
In this way, the sound field is calibrated to the user's position based on the times at which the electronic devices detected the user's voice command. In the prior art, when the user's position changes, audio must be collected again with a handheld sound pickup device to re-calibrate the sound field. In contrast, the sound field calibration method provided in the embodiments of this application can, when the user's position changes, automatically calibrate the sound field to the user's position in response to the user's voice command, meeting the user's needs while reducing the difficulty of user operation.
In other scenarios, the second position information is received from a third electronic device, determined in response to a user operation while the third electronic device displays a first interface based on the first position information; the first interface is used to display the positional relationship between the first electronic device and the at least one second electronic device, and the user operation is used to move the position of the identifier corresponding to the first user displayed on the first interface. In some examples, before the third electronic device displays the first interface, it obtains the first position information sent by the first electronic device.
Exemplarily, an application with a sound field calibration function is installed on the mobile phone (that is, the third electronic device); after determining the positional relationships between the smart screen (that is, the first electronic device) and the speakers, the smart screen can generate map information and send it to the phone.
As shown in FIG. 16, based on the received map information, the phone can display interface 1601, on which a home theater map is shown. The map schematically shows the relative positional relationships of the smart screen and the speakers and displays an "emperor seat" icon 161. In response to the user's operation of moving the "emperor seat" icon 161, the phone can determine the "emperor seat" indicated by the user.
Then, the phone can send to the smart screen the information on the position to which the user finally moved the "emperor seat" icon 161. After receiving this information, the smart screen can calibrate the sound field to the corresponding position, so that the calibrated "emperor seat" meets the user's needs.
In other scenarios, second information used for positioning is sent to the first user. The second position information is determined based on the sending time of the second information and the receiving time of the reflection information corresponding to the second information.
Exemplarily, the first electronic device and the at least one second electronic device send ultrasonic signals to the first user, so that each electronic device can determine the second position information between itself and the first user based on the sending time of the ultrasonic signal and the receiving time of the corresponding reflected signal.
In still other scenarios, the second position information of the first user relative to the first electronic device is determined based on the device position of a fourth electronic device carried by the first user.
Exemplarily, the user may carry an electronic device such as a mobile phone; by configuring the device with, for example, a UWB sensor or millimeter-wave sensor, the positional relationship between the device and the smart screen and each speaker can be determined, and this positional relationship is the positional relationship between the user and the smart screen and each speaker. Alternatively, the user may wear a wearable device, such as a smart watch, smart glasses, or smart earphones. Through sensors configured on the wearable device, such as Bluetooth, Wi-Fi, UWB sensors, or millimeter-wave sensors, the positional relationship between the wearable device and the smart screen and each speaker can be determined, and this positional relationship is the positional relationship between the user and the smart screen and each speaker.
In this way, adaptive sound field calibration that follows changes in the user's position is achieved, providing the user with more flexible sound field calibration and improving the user experience.
S2004: Calibrate, based on the first position information and the second position information, the sound field to the area indicated by the second position information.
In some embodiments, based on the first position information and the second position information, the playback parameters of the first electronic device and the at least one second electronic device are adjusted to calibrate the sound field to the area indicated by the second position information, where the playback parameters include one or more of the following: playback frequency, response parameters, phase parameters, and loudness parameters.
Thus, in the prior art, the user needs to hold a sound pickup device to collect audio and manually enter the distance information between the first electronic device and the second electronic device in order to achieve sound field calibration. In contrast, with the sound field calibration method provided in the embodiments of this application, the first electronic device or the second electronic device automatically completes the calibration by sending ultrasonic signals, effectively reducing the difficulty of user operation.
In some embodiments, the clocks of the first electronic device and the at least one second electronic device are synchronized. The first electronic device (or the second electronic device) can then, based on the first position information and the second position information, adjust the sounding times of the first electronic device and the at least one second electronic device, so that the sound from the first electronic device and the at least one second electronic device reaches the area indicated by the second position information at the same or a similar time.
Exemplarily, as shown in FIG. 12, the smart screen determines that the distance between speaker A and the user is d1, between the smart screen and the user is d2, between speaker B and the user is d3, between speaker D and the user is d4, and between speaker C and the user is d5.
Based on the determined distances and the propagation speed of sound, the smart screen can determine the propagation time from a device emitting sound to the sound reaching the user's ears. For example, after speaker A's loudspeaker sounds, the sound takes t11 to reach the user's ears; after the smart screen's loudspeaker sounds, it takes t22; after speaker B's loudspeaker sounds, it takes t33; after speaker D's loudspeaker sounds, it takes t44; and after speaker C's loudspeaker sounds, it takes t55.
The smart screen can then, based on each device's sound propagation time, adjust the sounding times of the loudspeakers of speaker A, the smart screen, speaker B, speaker D, and speaker C to t1-t5 respectively, so that the sound reaches the user's ears at the same or approximately the same time phase, ensuring the user's listening experience.
In this way, by locating the positional relationships between the electronic devices in the home theater and the user and adjusting the sounding times of the devices' loudspeakers, the sound reaches the user's ears at the same or a similar time phase, improving the user's listening experience.
In some embodiments, third position information of a second user relative to the first electronic device is obtained. Based on the first position information, the second position information, and the third position information, a first sound field is calibrated to a first area indicated by the second position information, and a second sound field is calibrated to a second area indicated by the third position information, where the first sound field or the second sound field is a sound field formed by some or all of the first electronic device and the at least one second electronic device.
Exemplarily, in the scenario shown in FIG. 10, two users, user C1 and user C2, are using the home theater. The smart screen can determine the positions of user C1 and user C2 and the positional relationships among the smart screen, the speakers, user C1, and user C2. The smart screen can then, based on the determined relationships, adjust the playback parameters of the speakers and the smart screen so that both user C1 and user C2 obtain a good listening experience.
For example, in the scenario shown in FIG. 10, by adjusting the playback parameters of the speakers and the smart screen, the smart screen can have speaker A and speaker C, which are close to user C1, provide a better listening experience for user C1, and speaker B and speaker D, which are close to user C2, provide a better listening experience for user C2. Optionally, if the smart screen's relative distance and direction to the two users are similar, the smart screen can provide the two users with a similar listening experience.
In this way, through the joint adjustment of the playback parameters of multiple electronic devices, the accuracy of the sound direction is improved in multi-user scenarios, and the impact of room reverberation on multiple users' listening is reduced, thereby providing every user with a better listening experience and improving the experience of multiple users.
In some examples, the first area and the second area are sound target zones, and the area outside the first area and the second area is a quiet zone, where the sound waveforms of the multiple speakers superpose in a sound target zone and cancel in a quiet zone. This makes it possible to play sound within the sound target zone while producing no sound, or only faint sound, in the quiet zone.
Thus, while ensuring the user's listening experience, the division into quiet zones reduces the impact of the home theater on users in other areas.
In some embodiments, the first electronic device (or the second electronic device) determines one or more sound trajectories corresponding to one or more sound objects included in the first audio to be played. Based on the first position information and the second position information, the one or more sound trajectories are re-arranged during sound field calibration, so that the calibrated sound field matches the one or more sound trajectories within the area indicated by the second position information.
Exemplarily, the video content contains several hummingbirds in flight. The sounds of the hummingbirds' flapping wings correspond to multiple sound objects, and the sound trajectories of these sound objects are arranged and rendered. Moreover, based on the already determined positional relationships among the smart screen, the speakers, and the user, the sound trajectories are calibrated to the user's listening position; through the coordinated playback of the at least one second electronic device and the first electronic device, the user experiences the hummingbirds flying all around during video playback.
Thus, in the prior art, all sound corresponding to video content in a home theater is emitted from in front of the user, and the user cannot obtain an immersive experience. In contrast, the sound field calibration method provided in the embodiments of this application can re-arrange and re-render the sound trajectories of sound objects and calibrate the sound field to the user's position, giving the user an immersive listening experience.
In some examples, in response to a user operation, a target sound object among the one or more sound objects selected by the user is determined. Based on the first position information and the second position information, the target sound trajectory corresponding to the target sound object is re-arranged during sound field calibration, so that the calibrated sound field matches the target sound trajectory within the area indicated by the second position information.
Exemplarily, as shown in FIG. 13, while displaying video content, in response to the user's operation of selecting character A, the smart screen can take character A's perspective as the user's character perspective. As shown in FIG. 14, the smart screen decodes the sound in the video content and can extract the individual sound objects, each with its own waveform. Before sound output, the smart screen, combining the positional relationships among the smart screen, the speakers, and the user, arranges and renders the sound trajectory of each sound object according to character A's perspective, arranging the sound trajectories to the user's position.
Thus, in the prior art, the user cannot select a character perspective during video playback in a home theater. In contrast, the sound field calibration method provided in the embodiments of this application calibrates the sound field to the user's position and provides the user with an immersive listening experience from the character perspective the user selected, improving the user experience.
In addition, the first electronic device may also perform the steps and functions performed by the smart screen in the above embodiments, and the at least one second electronic device may also perform the steps and functions performed by the speakers in the above embodiments, thereby implementing the sound field calibration method provided in the above embodiments.
The sound field calibration method provided in the embodiments of this application has been described in detail above with reference to FIG. 3 to FIG. 20. The electronic device provided in the embodiments of this application is described in detail below with reference to FIG. 21.
In a possible design, FIG. 21 is a schematic structural diagram of an electronic device provided in an embodiment of this application. As shown in FIG. 21, the electronic device 2100 may include a transceiver unit 2101 and a processing unit 2102. The electronic device 2100 may be configured to implement the functions of the first electronic device or the second electronic device involved in the above method embodiments.
Optionally, the transceiver unit 2101 is configured to support the electronic device 2100 in performing S2001 in FIG. 20.
Optionally, the processing unit 2102 is configured to support the electronic device 2100 in performing S2002, S2003, and S2004 in FIG. 20.
The transceiver unit may include a receiving unit and a sending unit, may be implemented by a transceiver or transceiver-related circuit components, and may be a transceiver or a transceiver module. The operations and/or functions of the units in the electronic device 2100 are respectively intended to implement the corresponding flows of the sound field calibration method described in the above method embodiments. All relevant content of the steps involved in the above method embodiments can be cited in the function descriptions of the corresponding functional units; for brevity, details are not repeated here.
Optionally, the electronic device 2100 shown in FIG. 21 may further include a storage unit (not shown in FIG. 21) in which programs or instructions are stored. When the transceiver unit 2101 and the processing unit 2102 execute the programs or instructions, the electronic device 2100 shown in FIG. 21 can perform the sound field calibration method described in the above method embodiments.
For the technical effects of the electronic device 2100 shown in FIG. 21, reference may be made to the technical effects of the sound field calibration method described in the above method embodiments; details are not repeated here.
In addition to the form of the electronic device 2100, the technical solution provided in this application may also be a functional unit or chip in an electronic device, or an apparatus used in cooperation with an electronic device.
An embodiment of this application further provides a chip system, including a processor, where the processor is coupled to a memory, the memory is configured to store programs or instructions, and when the programs or instructions are executed by the processor, the chip system implements the method in any of the above method embodiments.
Optionally, there may be one or more processors in the chip system. The processor may be implemented by hardware or by software. When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in a memory.
Optionally, there may also be one or more memories in the chip system. The memory may be integrated with the processor or disposed separately from the processor, which is not limited in the embodiments of this application. Exemplarily, the memory may be a non-transitory memory, such as a read-only memory ROM, which may be integrated on the same chip as the processor or disposed on different chips; the embodiments of this application do not specifically limit the type of memory or the arrangement of the memory and the processor.
Exemplarily, the chip system may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
It should be understood that the steps in the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The method steps disclosed in combination with the embodiments of this application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
An embodiment of this application further provides a computer-readable storage medium in which a computer program is stored. When the computer program is run on a computer, the computer is caused to perform the above related steps to implement the sound field calibration method in the above embodiments.
An embodiment of this application further provides a computer program product. When the computer program product is run on a computer, the computer is caused to perform the above related steps to implement the sound field calibration method in the above embodiments.
In addition, an embodiment of this application further provides an apparatus. The apparatus may specifically be a component or module, and the apparatus may include one or more connected processors and a memory, where the memory is configured to store a computer program. When the computer program is executed by the one or more processors, the apparatus is caused to perform the sound field calibration method in the above method embodiments.
The apparatus, computer-readable storage medium, computer program product, or chip provided in the embodiments of this application are all configured to perform the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above; details are not repeated here.
The steps of the methods or algorithms described in combination with the disclosure of the embodiments of this application may be implemented by hardware, or by a processor executing software instructions. The software instructions may be composed of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read only memory (ROM), an erasable programmable ROM (EPROM), an electrically EPROM (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an application specific integrated circuit (ASIC).
Through the description of the above implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions can be assigned to different functional modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the modules or units is merely a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, modules, or units, and may be electrical, mechanical, or in other forms.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The computer-readable storage medium includes but is not limited to any of the following: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or various other media that can store program code.
The above are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. A sound field calibration method, applied to a system comprising a first electronic device and at least one second electronic device, wherein the method comprises:
    receiving, by the at least one second electronic device respectively from the first electronic device, first information used for positioning;
    determining, based on the first information used for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device;
    obtaining second position information of a first user relative to the first electronic device; and
    calibrating, based on the first position information and the second position information, a sound field to an area indicated by the second position information.
  2. The method according to claim 1, wherein the obtaining second position information of a first user relative to the first electronic device comprises:
    receiving, by the first electronic device and the at least one second electronic device respectively, a sound made by the first user; and
    determining the second position information based on times at which the first electronic device and the at least one second electronic device received the first user's sound.
  3. The method according to claim 1, wherein the obtaining second position information of a first user relative to the first electronic device comprises:
    receiving the second position information determined by a third electronic device in response to a user operation while displaying a first interface based on the first position information, wherein the first interface is used to display the positional relationship between the first electronic device and the at least one second electronic device, and the user operation is used to move the position of an identifier corresponding to the first user displayed on the first interface.
  4. The method according to claim 3, wherein before the receiving the second position information determined by the third electronic device in response to the user operation while displaying the first interface based on the first position information, the method further comprises:
    sending the first position information to the third electronic device.
  5. The method according to claim 1, wherein the obtaining second position information of a first user relative to the first electronic device comprises:
    sending, to the first user, second information used for positioning; and
    determining the second position information based on a sending time of the second information and a receiving time of reflection information corresponding to the second information.
  6. The method according to claim 1, wherein the obtaining second position information of a first user relative to the first electronic device comprises:
    determining the second position information of the first user relative to the first electronic device based on a device position of a fourth electronic device carried by the first user.
  7. The method according to any one of claims 1-6, wherein clocks of the first electronic device and the at least one second electronic device are synchronized, and the calibrating, based on the first position information and the second position information, a sound field to an area indicated by the second position information comprises:
    adjusting, based on the first position information and the second position information, sounding times of the first electronic device and the at least one second electronic device, so that the sound emitted by the first electronic device and the at least one second electronic device reaches the area indicated by the second position information at the same or a similar time.
  8. The method according to any one of claims 1-7, wherein the method further comprises:
    obtaining third position information of a second user relative to the first electronic device; and
    calibrating, based on the first position information, the second position information, and the third position information, a first sound field to a first area indicated by the second position information, and a second sound field to a second area indicated by the third position information, wherein the first sound field or the second sound field is a sound field formed by some or all of the first electronic device and the at least one second electronic device.
  9. The method according to claim 8, wherein the first area and the second area are sound target zones, and the area outside the first area and the second area is a quiet zone.
  10. The method according to any one of claims 1-9, wherein the calibrating, based on the first position information and the second position information, a sound field to an area indicated by the second position information comprises:
    determining one or more sound trajectories corresponding to one or more sound objects included in first audio to be played; and
    re-arranging, based on the first position information and the second position information, the one or more sound trajectories during sound field calibration, so that the calibrated sound field matches the one or more sound trajectories within the area indicated by the second position information.
  11. The method according to claim 10, wherein after the determining one or more sound trajectories corresponding to one or more sound objects included in the first audio to be played, the method further comprises:
    determining, in response to a user operation, a target sound object selected by the user from the one or more sound objects;
    wherein the re-arranging, based on the first position information and the second position information, the one or more sound trajectories during sound field calibration, so that the calibrated sound field matches the one or more sound trajectories within the area indicated by the second position information, comprises:
    re-arranging, based on the first position information and the second position information, a target sound trajectory corresponding to the target sound object during sound field calibration, so that the calibrated sound field matches the target sound trajectory within the area indicated by the second position information.
  12. The method according to any one of claims 1-11, wherein the determining, based on the first information used for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device comprises:
    feeding back, by a target second electronic device among the at least one second electronic device, in response to a first ultrasonic signal sent by the first electronic device at a second time and received at a first time, a second ultrasonic signal to the first electronic device at a third time, wherein the first information comprises the first ultrasonic signal;
    receiving, by the first electronic device, the second ultrasonic signal at a fourth time; and
    determining, based on the first time, the second time, the third time, and the fourth time, a distance of the target second electronic device relative to the first electronic device, wherein the first position information comprises the distance.
  13. The method according to claim 12, wherein the receiving, by the at least one second electronic device respectively from the first electronic device, first information used for positioning comprises:
    receiving, by the at least one second electronic device respectively, the first ultrasonic signal sent by the first electronic device through a first loudspeaker at the second time, and a third ultrasonic signal sent through a second loudspeaker at a fifth time, wherein the first information used for positioning comprises the first ultrasonic signal and the third ultrasonic signal.
  14. The method according to claim 13, wherein the determining, based on the first information used for positioning received by the at least one second electronic device, first position information of the at least one second electronic device relative to the first electronic device comprises:
    determining, by any second electronic device among the at least one second electronic device, an angle relative to the first electronic device based on a distance between the first loudspeaker and the second loudspeaker, a time difference between the second time and the fifth time, a time of receiving the first ultrasonic signal, a time of receiving the third ultrasonic signal, and a propagation speed of the first ultrasonic signal or the third ultrasonic signal, wherein the first position information comprises the angle.
  15. The method according to any one of claims 1-14, wherein the calibrating, based on the first position information and the second position information, a sound field to an area indicated by the second position information comprises:
    adjusting, based on the first position information and the second position information, playback parameters of the first electronic device and the at least one second electronic device to calibrate the sound field to the area indicated by the second position information, wherein the playback parameters comprise one or more of the following: playback frequency, response parameters, phase parameters, and loudness parameters.
  16. The method according to any one of claims 1-15, wherein the first information is a wireless signal, and the wireless signal is one or more of the following: an ultrasonic signal, an ultra wideband UWB signal, a Bluetooth signal, a wireless fidelity Wi-Fi signal, or a millimeter-wave signal.
  17. The method according to any one of claims 1-16, wherein the first electronic device or the second electronic device is a smart screen or a speaker.
  18. An electronic device, comprising: a processor and a memory, wherein the memory is coupled to the processor, the memory is configured to store computer program code, the computer program code comprises computer instructions, and when the processor reads the computer instructions from the memory, the electronic device is caused to perform the method according to any one of claims 1-17.
  19. A computer-readable storage medium, wherein the computer-readable storage medium comprises a computer program, and when the computer program is run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-17.
  20. A computer program product, wherein when the computer program product is run on a computer, the computer is caused to perform the method according to any one of claims 1-17.
PCT/CN2023/134737 2022-12-21 2023-11-28 声场校准方法及电子设备 WO2024131484A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211648949.3A CN118233821A (zh) 2022-12-21 2022-12-21 声场校准方法及电子设备
CN202211648949.3 2022-12-21

Publications (1)

Publication Number Publication Date
WO2024131484A1 true WO2024131484A1 (zh) 2024-06-27

Family

ID=91511671

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/134737 WO2024131484A1 (zh) 2022-12-21 2023-11-28 声场校准方法及电子设备

Country Status (2)

Country Link
CN (1) CN118233821A (zh)
WO (1) WO2024131484A1 (zh)

Also Published As

Publication number Publication date
CN118233821A (zh) 2024-06-21

Similar Documents

Publication Publication Date Title
EP4044609A1 (en) Cross-device content projection method and electronic device
CN110622123B (zh) 一种显示方法及装置
US9075572B2 (en) Media enhancement dock
CN112437190A (zh) 数据分享的方法、图形用户界面、相关装置及系统
WO2020259542A1 (zh) 一种显示设备的控制方法及相关装置
CN107211213B (zh) 基于扬声器的位置信息输出音频信号的方法和设备
WO2021017909A1 (zh) 一种通过nfc标签实现功能的方法、电子设备及系统
WO2021104114A1 (zh) 一种提供无线保真WiFi网络接入服务的方法及电子设备
WO2022048599A1 (zh) 音箱位置调节方法、音频渲染方法和装置
CN108882139A (zh) 参数配置方法以及系统
WO2022001147A1 (zh) 一种定位方法及电子设备
US20170238114A1 (en) Wireless speaker system
WO2022028537A1 (zh) 一种设备识别方法及相关装置
WO2022135527A1 (zh) 一种视频录制方法及电子设备
CN110572799A (zh) 一种同时响应的方法及设备
CN113921002A (zh) 一种设备控制方法及相关装置
WO2022062999A1 (zh) 一种分配声道的方法及相关设备
WO2021197354A1 (zh) 一种设备的定位方法及相关装置
WO2024131484A1 (zh) 声场校准方法及电子设备
US20240171802A1 (en) First electronic device and method for displaying control window of second electronic device
CN114598984B (zh) 立体声合成方法和系统
WO2022228059A1 (zh) 一种定位方法和装置
CN118233822A (zh) 声场校准方法及电子设备
WO2022143310A1 (zh) 一种双路投屏的方法及电子设备
WO2022052760A1 (zh) 一种组合音箱的配置方法、音箱和电子设备