WO2024046182A1 - Audio playback method, system, and related device

Audio playback method, system, and related device

Info

Publication number
WO2024046182A1
Authority
WIPO (PCT)
Prior art keywords
electronic device
data
channel
audio data
speaker
Application number
PCT/CN2023/114402
Other languages
English (en)
French (fr)
Inventor
陈丽
胡少武
常晶
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024046182A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 1/00 Two-channel systems

Definitions

  • the present application relates to the field of terminal technology, and in particular, to an audio playback method, system and related devices.
  • Electronic device A (for example, a mobile phone, tablet or PC) can play audio data through its own speaker, or electronic device A can send the audio data to electronic device B.
  • Electronic device B (for example, smart glasses, neck-mounted smart headphones or a smart watch) then plays the audio data through its own speaker.
  • In either case, only one electronic device plays the audio data at a time.
  • the present application provides an audio playback method, system and related devices, which enable a first electronic device to establish a communication connection with a second electronic device.
  • the first electronic device obtains the device information of the second electronic device and determines one or more sound effect modes based on that device information; the one or more sound effect modes include a first sound effect mode.
  • the first electronic device may, in response to an input selecting the first sound effect mode, process the sound source data of the first electronic device based on the first sound effect mode to obtain first audio data and second audio data.
  • the first electronic device plays the first audio data while the second electronic device plays the second audio data, thereby realizing cooperative playback of the sound source data by the first electronic device and the second electronic device.
  • this application provides an audio playback method.
  • the method is applied to an audio playback system.
  • the audio playback system includes a first electronic device and a second electronic device.
  • the first electronic device establishes a communication connection with the second electronic device.
  • the method includes:
  • the first electronic device displays a collaborative playback control, and the collaborative playback control is used to instruct the first electronic device and the second electronic device to jointly play audio source data;
  • the first electronic device receives a first input for the collaborative playback control
  • the first electronic device displays a plurality of sound effect mode options in response to the first input, and the plurality of sound effect mode options include a first sound effect mode option;
  • the first electronic device marks the first sound effect mode option in response to the second input for the first sound effect mode option
  • the first electronic device sends the second audio data to the second electronic device
  • the second electronic device plays the second audio data
  • the first electronic device plays the first audio data
  • both the first audio data and the second audio data include at least part of the content of the sound source data.
  • In this way, multiple electronic devices play audio data together, breaking through the limitation of the speaker layout of a single electronic device and allowing multiple electronic devices to support more sound effect modes. For example, a more pronounced left- and right-ear surround effect can be achieved through the speakers of multiple electronic devices. Moreover, corresponding sound effect modes can be provided to the user according to the device type of the second electronic device, so that different sound effect modes are realized for different electronic devices. For example, if the second electronic device is a pair of smart glasses, then, since the smart glasses include a speaker on the left temple and a speaker on the right temple, sound effect modes based on left and right channels, left and right surround channels, vocal dialogue and other processing methods can be provided, thereby enhancing the loudness, sense of surround or vocal clarity of the sound source data, strengthening the user's immersion and sense of envelopment when watching movies, and improving the user experience.
  • If the second electronic device is a smart watch, since the smart watch is worn on the wrist, low-frequency audio signals can be enhanced to achieve low-frequency vibration transmission, strengthening the user's perception of rhythm and improving the user experience.
  • the sound source data may be sent to the first electronic device by other electronic devices (eg, a second electronic device).
  • both the first audio data and the second audio data include at least part of the content of the sound source data.
  • At least part of the content may be data of some channels and/or data of some frequency bands in the sound source data.
  • Before the first electronic device receives the first input, the first electronic device receives a third input for playing the sound source data; or,
  • after the first electronic device receives the second input, the first electronic device receives a third input for playing the sound source data.
  • In this way, the user can select the desired sound effect mode before the sound source data is played, or select or switch the sound effect mode while the sound source data is playing.
  • When the first sound effect mode is any one of the rhythm enhancement mode, the dialogue enhancement mode, the surround enhancement mode, the all-around enhancement mode or the smart enhancement mode, at least part of the channels included in the first audio data are different from at least part of the channels included in the second audio data; and/or, when the first sound effect mode is the loudness enhancement mode, at least part of the channels of the first audio data are the same as at least part of the channels of the second audio data.
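As a concrete illustration of the channel split implied by these modes, the following Python sketch maps each sound effect mode to the channels kept on the first device and the channels sent to the second device. The dictionary layout, the function name split_channels, and the string channel labels are illustrative assumptions, not the patent's implementation; the mode-to-channel pairings themselves follow the descriptions in this summary.

```python
# Hypothetical sketch: per-mode channel split between the first electronic device
# and the second electronic device. Labels: L/R = front left/right,
# Ls/Rs = left/right surround, C = center, LFE = low-frequency channel.

MODE_SPLIT = {
    # mode:             (channels on first device, channels on second device)
    "rhythm_enhance":   (["L", "R"], ["LFE"]),
    "dialogue_enhance": (["L", "R"], ["C"]),
    "loudness_enhance": (["L", "R"], ["L", "R"]),   # same channels on both devices
    "surround_enhance": (["L", "R"], ["Ls", "Rs"]),
    "all_around":       (["L", "R"], ["Ls", "Rs", "C"]),
}

def split_channels(source, mode):
    """source: dict mapping a channel label to its sample buffer."""
    first_ch, second_ch = MODE_SPLIT[mode]
    first_audio = {ch: source[ch] for ch in first_ch if ch in source}
    second_audio = {ch: source[ch] for ch in second_ch if ch in source}
    return first_audio, second_audio

if __name__ == "__main__":
    source = {ch: [0.0] * 4 for ch in ("L", "R", "Ls", "Rs", "C", "LFE")}
    first, second = split_channels(source, "surround_enhance")
    print(sorted(first), sorted(second))  # ['L', 'R'] ['Ls', 'Rs']
```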
  • When the first sound effect mode option is the rhythm enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the low-frequency channel. In this way, the playback of low-frequency sounds can be enhanced.
  • When the second electronic device plays the second audio data, the method further includes:
  • the second electronic device converts the second audio data into a pulse signal; the second electronic device transmits the pulse signal to the motor of the second electronic device; and the motor of the second electronic device vibrates.
  • In this way, the second electronic device can vibrate as the frequency of the second audio data changes while playing the second audio data, thereby enhancing the user's tactile experience. A sketch of one possible conversion follows.
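The patent states only that the second audio data is converted into a pulse signal that drives the motor; it does not specify the conversion. One plausible reading is an envelope follower that emits one pulse amplitude per audio frame, as in this hedged sketch; the frame size, threshold, and function name audio_to_pulses are assumptions.

```python
# Hypothetical sketch: derive a motor-drive pulse train from audio samples.
import math

def audio_to_pulses(samples, frame=256, threshold=0.2):
    """Return one pulse amplitude in [0.0, 1.0] per frame, following the envelope."""
    pulses = []
    for i in range(0, len(samples), frame):
        window = samples[i:i + frame]
        peak = max(abs(s) for s in window) if window else 0.0
        pulses.append(peak if peak >= threshold else 0.0)  # gate out quiet frames
    return pulses

if __name__ == "__main__":
    tone = [math.sin(2 * math.pi * 50 * n / 8000) for n in range(8000)]  # 50 Hz, 1 s
    print(audio_to_pulses(tone)[:5])  # the motor pulses track the low-frequency beat
```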
  • When the second electronic device includes multiple speakers, the second electronic device uses all of the speakers to play the second audio data.
  • When the first sound effect mode option is the dialogue enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the center channel.
  • When the first sound effect mode option is the loudness enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data also includes the data of the left channel and the data of the right channel.
  • When the second electronic device includes one speaker, the second electronic device uses that speaker to play the left channel data and the right channel data of the second audio data; or,
  • when the second electronic device includes 2 speakers, and the 2 speakers include a first speaker and a second speaker, the second electronic device uses the first speaker to play the data of the left channel and uses the second speaker to play the data of the right channel; or,
  • when the second electronic device includes 3 or more speakers, and the 3 or more speakers include a first speaker, a second speaker and a third speaker, the first speaker is used to play the data of the left channel, the second speaker is used to play the data of the right channel, and the third speaker is used to play both the left channel data and the right channel data of the second audio data.
  • In this way, the speakers can be used reasonably to play the channel data of the second audio data, and the speaker resources can be fully utilized, as illustrated in the sketch below.
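The speaker-count cases above can be summarized in a small routing function. This is a minimal sketch under the loudness enhancement mode, where the second audio data carries the left (L) and right (R) channels; the handling of a fourth or later speaker is an assumption, since the summary only enumerates the one-, two-, and three-speaker cases.

```python
# Hypothetical sketch of speaker-count-based routing on the second electronic device.

def route_to_speakers(num_speakers):
    """Return a dict mapping speaker index -> list of channel labels to play."""
    if num_speakers == 1:
        return {0: ["L", "R"]}                     # one speaker plays both channels
    if num_speakers == 2:
        return {0: ["L"], 1: ["R"]}                # first speaker L, second speaker R
    routing = {0: ["L"], 1: ["R"], 2: ["L", "R"]}  # third speaker plays both channels
    for idx in range(3, num_speakers):
        routing[idx] = ["L", "R"]                  # assumed: extra speakers get the mix
    return routing

if __name__ == "__main__":
    print(route_to_speakers(3))  # {0: ['L'], 1: ['R'], 2: ['L', 'R']}
```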
  • When the first sound effect mode option is the surround enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the left surround channel and the data of the right surround channel. In this way, the sense of surround can be enhanced.
  • When the second electronic device includes 2 speakers, and the 2 speakers include a first speaker and a second speaker, the second electronic device uses the first speaker to play the data of the left surround channel and uses the second speaker to play the data of the right surround channel; or,
  • when the second electronic device includes 3 or more speakers, and the 3 or more speakers include a first speaker, a second speaker and a third speaker, the first speaker is used to play the data of the left surround channel, the second speaker is used to play the data of the right surround channel, and the third speaker is used to play both the left surround channel data and the right surround channel data of the second audio data.
  • When the first electronic device includes 2 speakers, and the 2 speakers include a fourth speaker and a fifth speaker, the first electronic device uses the fourth speaker to play the data of the left channel and uses the fifth speaker to play the data of the right channel; or,
  • when the first electronic device includes 3 or more speakers, and the 3 or more speakers include a fourth speaker, a fifth speaker and a sixth speaker, the fourth speaker is used to play the data of the left channel, the fifth speaker is used to play the data of the right channel, and the sixth speaker is used to play both the left channel data and the right channel data of the first audio data.
  • When the first sound effect mode option is the all-around enhancement mode option, the first audio data includes left channel data and right channel data, and the second audio data includes the data of the left surround channel, the data of the right surround channel and the data of the center channel.
  • When the second electronic device includes 2 speakers, and the two speakers include a first speaker and a second speaker, the second electronic device uses the first speaker to play the data of the left surround channel and the data of the center channel, and uses the second speaker to play the data of the right surround channel and the data of the center channel; or,
  • when the second electronic device includes 3 or more speakers, and the 3 or more speakers include a first speaker, a second speaker and a third speaker, the first speaker is used to play the data of the left surround channel, the second speaker is used to play the data of the right surround channel, and the third speaker is used to play the data of the center channel.
  • When the first electronic device includes 2 speakers, and the 2 speakers include a fourth speaker and a fifth speaker, the first electronic device uses the fourth speaker to play the data of the left surround channel and the data of the center channel, and uses the fifth speaker to play the data of the right surround channel and the data of the center channel; or,
  • when the first electronic device includes 3 or more speakers, and the 3 or more speakers include a fourth speaker, a fifth speaker and a sixth speaker, the fourth speaker is used to play the data of the left surround channel, the fifth speaker is used to play the data of the right surround channel, and the sixth speaker is used to play the data of the center channel.
  • That the first electronic device displays multiple sound effect mode options in response to the first input specifically includes:
  • in response to the first input, the first electronic device obtains the multiple sound effect mode options based on the device type of the second electronic device and a stored correspondence between device types and sound effect mode options;
  • wherein the device type of the second electronic device corresponds to the multiple sound effect modes (a sketch of such a correspondence follows the device-type examples below).
  • In this way, when establishing connections with electronic devices of different device types, the first electronic device can provide the user with sound effect mode options suitable for the first electronic device and the second electronic device.
  • For example, when the device type of the second electronic device is a smart watch, playing surround channel data on the smart watch cannot produce a good surround effect; therefore, the first electronic device does not provide the surround enhancement mode when the second electronic device is a smart watch.
  • When the device type of the second electronic device is a smart watch, the multiple sound effect mode options include one or more of a smart enhancement mode option, a loudness enhancement mode option, a rhythm enhancement mode option and a dialogue enhancement mode option;
  • the smart enhancement mode corresponding to the smart enhancement mode option is a combination of the rhythm enhancement mode and the loudness enhancement mode. In this way, the sound source data can be processed based on the smart enhancement mode when the user is not sure which sound effect mode to select.
  • In the smart enhancement mode, the audio data played by the first electronic device includes left channel data and right channel data, and
  • the audio data played by the second electronic device includes left channel data, right channel data and low-frequency channel data.
  • When the device type of the second electronic device is smart glasses, a neck-mounted speaker or a Bluetooth headset,
  • the multiple sound effect mode options include one or more of a smart enhancement mode option, a loudness enhancement mode option, a rhythm enhancement mode option, a dialogue enhancement mode option, a surround enhancement mode option and an all-around enhancement mode option;
  • the smart enhancement mode corresponding to the smart enhancement mode option is a combination of the rhythm enhancement mode and the loudness enhancement mode, or a combination of the surround enhancement mode and the dialogue enhancement mode.
  • When the smart enhancement mode is a combination of the surround enhancement mode and the dialogue enhancement mode,
  • the audio data played by the first electronic device includes the data of the left channel and the data of the right channel, and
  • the audio data played by the second electronic device includes the data of the left surround channel, the data of the right surround channel and the data of the center channel; or,
  • when the smart enhancement mode is a combination of the rhythm enhancement mode and the loudness enhancement mode,
  • the audio data played by the first electronic device includes left channel data and right channel data, and
  • the audio data played by the second electronic device includes left channel data, right channel data and low-frequency channel data.
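The stored correspondence between device types and sound effect mode options referenced above might look like the following table-driven sketch. The device-type keys, list contents and fallback are assumptions drawn from the examples in this summary (a watch gets no surround option; glasses, neck speakers and headsets get the full set).

```python
# Hypothetical sketch of the device-type -> sound effect mode option correspondence.

DEVICE_MODE_OPTIONS = {
    "smart_watch":       ["smart", "loudness", "rhythm", "dialogue"],
    "smart_glasses":     ["smart", "loudness", "rhythm", "dialogue", "surround", "all_around"],
    "neck_speaker":      ["smart", "loudness", "rhythm", "dialogue", "surround", "all_around"],
    "bluetooth_headset": ["smart", "loudness", "rhythm", "dialogue", "surround", "all_around"],
}

# The combination the smart mode expands to, per device type (from the summary).
SMART_MODE_COMBO = {
    "smart_watch": ("rhythm", "loudness"),
    # For glasses / neck speakers / headsets the combination depends on the content;
    # see the content-based selection sketch further below.
}

def options_for(device_type):
    # Fallback list for unknown device types is an assumption.
    return DEVICE_MODE_OPTIONS.get(device_type, ["smart", "loudness"])

if __name__ == "__main__":
    print(options_for("smart_watch"))  # no surround or all-around option for a watch
```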
  • In this way, when the first electronic device plays a video, the surround sound and the human voice can be highlighted, so that the user can be immersed in enjoying the video;
  • when the first electronic device plays audio, the low frequencies can be highlighted to make the music more dynamic.
  • The first electronic device determines the smart enhancement mode based on whether the played video content is related to a music scene. In this way, even if the first electronic device plays a video, since the video content is related to a music scene, the smart enhancement mode can be set to a combination of the rhythm enhancement mode and the loudness enhancement mode to highlight the low frequencies and increase the loudness, bringing the user a better music listening experience.
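The content-based choice of the smart enhancement combination described above can be expressed as a short decision function. This is a sketch; the is_music_scene flag stands in for whatever content classification the device actually performs, which the patent does not detail.

```python
# Hypothetical sketch: choose the smart-enhancement combination from the content.

def smart_mode_combo(playing_video, is_music_scene=False):
    if playing_video and not is_music_scene:
        return ("surround", "dialogue")  # video: highlight surround and human voice
    return ("rhythm", "loudness")        # music (or music-scene video): highlight bass

if __name__ == "__main__":
    print(smart_mode_combo(playing_video=True))                       # ('surround', 'dialogue')
    print(smart_mode_combo(playing_video=True, is_music_scene=True))  # ('rhythm', 'loudness')
```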
  • When processing the second audio data, the first electronic device enhances the loudness of quiet sounds in the second audio data. In this way, quiet sounds can be highlighted and the sense of detail enhanced: users can hear quiet sounds and track the movements of game characters while playing games, improving the gaming experience, and can hear subtle background sounds, such as wind and insects, when watching movies, which makes the picture more vivid.
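One simple way to realize the quiet-sound enhancement described above is upward compression: samples below a threshold are boosted and everything else passes through. The threshold and gain values here are illustrative assumptions; the patent does not specify the processing.

```python
# Hypothetical sketch: boost quiet samples (simple upward compression).

def enhance_quiet(samples, threshold=0.1, gain=2.0):
    out = []
    for s in samples:
        if 0 < abs(s) < threshold:
            s = max(-1.0, min(1.0, s * gain))  # boost quiet sample, clamp to [-1, 1]
        out.append(s)
    return out

if __name__ == "__main__":
    print(enhance_quiet([0.05, 0.5, -0.02]))  # [0.1, 0.5, -0.04]
```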
  • The plurality of sound effect mode options include a smart enhancement mode option that is marked by default; that the first electronic device marks the first sound effect mode option in response to the second input for the first sound effect mode option specifically includes:
  • the first electronic device, in response to the second input for the first sound effect mode option, unmarks the smart enhancement mode option and marks the first sound effect mode option.
  • In this way, the first electronic device can provide a default smart enhancement option and process the sound source data more intelligently when the user is not sure which sound effect mode to select.
  • The method further includes: when the posture of the second electronic device and/or the first electronic device changes, the first audio data does not change, and the second audio data sent by the first electronic device to the second electronic device changes as the posture changes.
  • In this way, the first electronic device can process the second audio data so that the virtual sound source simulated by the sound signal played by the second electronic device is always located at the first electronic device; the sound source and the picture are always in the same position, making it easy for the user to move back to a suitable position for viewing the picture.
  • The method further includes: when the posture of the second electronic device and/or the first electronic device changes, the second audio data does not change, and the first audio data played by the first electronic device changes as the posture changes.
  • In this way, the first electronic device can process the first audio data based on the posture change so that the virtual sound source simulated by the sound signal played by the first electronic device is always in front of the user's line of sight; no matter how the user moves, the sounds heard by the left and right ears will not be incongruous.
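Keeping the virtual sound source anchored while the listener's posture changes amounts to counter-rotating the rendered source direction by the measured head rotation. The 2-D azimuth-only sketch below is a simplification (a real renderer would re-pan or re-filter the audio with HRTFs); the angle convention is an assumption.

```python
# Hypothetical sketch: counter-rotate the virtual source azimuth by the head yaw
# so the source stays fixed in the room (angles in degrees, 2-D case).

def compensated_azimuth(source_azimuth_deg, head_yaw_deg):
    """Azimuth of the virtual source relative to the rotated head, in [-180, 180)."""
    return (source_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

if __name__ == "__main__":
    # The first device sits straight ahead (0 deg). The user turns 30 deg left:
    # the source must now be rendered 30 deg to the user's right.
    print(compensated_azimuth(0.0, -30.0))  # 30.0
```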
  • When the position of the second electronic device and/or the first electronic device changes, the beam direction of the acoustic signal emitted by the speaker of the first electronic device changes with the change in position.
  • In this way, the first electronic device can direct the beam toward the second electronic device, so that the user can hear the sound of the first electronic device more clearly.
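Steering an acoustic beam toward the second device is classically done with delay-and-sum beamforming: each speaker's signal is delayed so the wavefronts add up in the target direction. The sketch below computes per-speaker delays for a simple line array; the geometry, speaker spacing, and broadside angle convention are assumptions, not details from the patent.

```python
# Hypothetical sketch: per-speaker delays for delay-and-sum beam steering.
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def steering_delays(speaker_x, target_angle_deg):
    """speaker_x: positions (m) along a line; angle measured from broadside."""
    theta = math.radians(target_angle_deg)
    raw = [x * math.sin(theta) / SPEED_OF_SOUND for x in speaker_x]
    base = min(raw)
    return [d - base for d in raw]  # normalize so the smallest delay is zero

if __name__ == "__main__":
    # Two speakers 20 cm apart, listener 25 degrees off broadside:
    print(steering_delays([0.0, 0.2], 25.0))  # second speaker delayed ~0.25 ms
```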
  • When the distance between the first electronic device and the second electronic device is a first distance, the volume at which the first electronic device plays the first audio data is a first volume;
  • when the distance between the first electronic device and the second electronic device is a second distance, the volume at which the first electronic device plays the first audio data is a second volume;
  • the first distance is smaller than the second distance, and the first volume is lower than the second volume.
  • In this way, the first electronic device can set its volume based on the distance to the second electronic device (which can also be understood as the user).
  • When the first electronic device detects that the user is close to the first electronic device, it reduces the volume of the first electronic device to avoid interfering with the playback of the second audio data.
  • When the first electronic device detects that the user is far away from the first electronic device, it increases the volume of the first electronic device so that the user can still hear the sound played by the first electronic device.
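The distance-to-volume rule above (closer user, lower volume on the first device) can be sketched as a monotone mapping. The linear ramp and the near/far limits are illustrative assumptions; the patent only fixes the ordering of the two volumes.

```python
# Hypothetical sketch: scale the first device's volume with the user's distance.

def volume_for_distance(distance_m, near=0.5, far=5.0, v_min=0.2, v_max=1.0):
    """Monotonically increasing: a closer user gets a lower first-device volume."""
    if distance_m <= near:
        return v_min
    if distance_m >= far:
        return v_max
    frac = (distance_m - near) / (far - near)
    return v_min + frac * (v_max - v_min)

if __name__ == "__main__":
    first_volume = volume_for_distance(1.0)   # first distance (closer)
    second_volume = volume_for_distance(3.0)  # second distance (farther)
    assert first_volume < second_volume       # matches the claimed ordering
    print(first_volume, second_volume)
```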
  • When the first electronic device displays the multiple sound effect mode options in response to the first input, the method further includes:
  • the first electronic device displays a percentage bar.
  • If the first sound effect mode option is the loudness enhancement mode option, the rhythm enhancement mode option or the dialogue enhancement mode option: when the value of the percentage bar is a first value, the volume at which the second electronic device plays the second audio data is a third volume; when the value of the percentage bar is a second value, the volume at which the second electronic device plays the second audio data is a fourth volume; the first value is smaller than the second value, and the third volume is lower than the fourth volume.
  • If the first sound effect mode option is the all-around enhancement mode option or the surround enhancement mode option: when the value of the percentage bar is a third value, the distance between the simulated sound source and the user is a third distance; when the value of the percentage bar is a fourth value, the distance between the simulated sound source and the user is a fourth distance; the third value is smaller than the fourth value, and the third distance is smaller than the fourth distance.
  • In this way, the effect of the sound effect mode can be set through the percentage bar, and the user can adjust the value of the percentage bar to select a suitable playback effect.
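The percentage bar's two roles described above can be sketched as a single mapping: in the loudness, rhythm, and dialogue modes a larger value means a higher second-device volume, while in the surround and all-around modes it means a farther simulated source. The maximum-distance constant and the linear scaling are assumptions.

```python
# Hypothetical sketch of the percentage bar's effect per sound effect mode.

MAX_SIMULATED_DISTANCE_M = 3.0  # illustrative upper bound for the virtual source

def apply_percentage(mode, percent):
    """percent in [0, 100]; returns ('volume', v) or ('distance', d)."""
    frac = max(0, min(100, percent)) / 100.0
    if mode in ("loudness", "rhythm", "dialogue"):
        return ("volume", frac)  # larger value -> higher second-device volume
    if mode in ("surround", "all_around"):
        return ("distance", frac * MAX_SIMULATED_DISTANCE_M)  # farther source
    raise ValueError(f"unknown mode: {mode}")

if __name__ == "__main__":
    print(apply_percentage("dialogue", 30))  # ('volume', 0.3)
    print(apply_percentage("surround", 80))  # ('distance', 2.4)
```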
  • this application provides an audio playback system, including a first electronic device and a second electronic device; wherein
  • the first electronic device is configured to implement the method steps performed by the first electronic device in the first aspect, and
  • the second electronic device is configured to implement the method steps performed by the second electronic device in the first aspect.
  • this application provides another audio playback method, which method is applied to an audio playback system.
  • the audio playback system includes a first electronic device and a second electronic device.
  • the method includes:
  • In response to an operation of establishing a communication connection between the first electronic device and the second electronic device, the first electronic device cooperates with the second electronic device to play the sound source data.
  • the first electronic device plays the first audio data
  • the second electronic device plays the second audio data.
  • Both the first audio data and the second audio data include at least part of the content of the sound source data, and the first audio data is different from the second audio data.
  • the first audio data and the second audio data are different in part of the channel data and/or part of the frequency band.
  • the sound source data may be sent to the first electronic device by other electronic devices (eg, a second electronic device, a server, etc.).
  • both the first audio data and the second audio data include at least part of the content of the sound source data.
  • At least part of the content may be data of some channels and/or data of some frequency bands in the sound source data.
  • Before the first electronic device receives the first input, the first electronic device receives a third input for playing the sound source data; or,
  • after the first electronic device receives the second input, the first electronic device receives a third input for playing the sound source data.
  • In this way, the user can select the desired sound effect mode before the sound source data is played, or select or switch the sound effect mode while the sound source data is playing.
  • When the first sound effect mode is any one of the rhythm enhancement mode, the dialogue enhancement mode, the surround enhancement mode, the all-around enhancement mode or the smart enhancement mode, at least part of the channels included in the first audio data are different from at least part of the channels included in the second audio data; and/or, when the first sound effect mode is the loudness enhancement mode, at least part of the channels of the first audio data are the same as at least part of the channels of the second audio data.
  • When the first sound effect mode option is the rhythm enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the low-frequency channel. In this way, the playback of low-frequency sounds can be enhanced.
  • When the second electronic device plays the second audio data, the method further includes:
  • the second electronic device converts the second audio data into a pulse signal; the second electronic device transmits the pulse signal to the motor of the second electronic device; and the motor of the second electronic device vibrates. In this way, the second electronic device can vibrate as the frequency of the second audio data changes while playing the second audio data, thereby enhancing the user's tactile experience.
  • When the second electronic device includes multiple speakers, the second electronic device uses all of the speakers to play the second audio data.
  • When the first sound effect mode option is the dialogue enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the center channel.
  • When the first sound effect mode option is the loudness enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data also includes the data of the left channel and the data of the right channel.
  • When the second electronic device includes one speaker, the second electronic device uses that speaker to play the left channel data and the right channel data of the second audio data; or,
  • when the second electronic device includes 2 speakers, and the 2 speakers include a first speaker and a second speaker, the second electronic device uses the first speaker to play the data of the left channel and uses the second speaker to play the data of the right channel; or,
  • when the second electronic device includes 3 or more speakers, and the 3 or more speakers include a first speaker, a second speaker and a third speaker, the first speaker is used to play the data of the left channel, the second speaker is used to play the data of the right channel, and the third speaker is used to play both the left channel data and the right channel data of the second audio data.
  • In this way, the speakers can be used reasonably to play the channel data of the second audio data, and the speaker resources can be fully utilized.
  • When the first sound effect mode option is the surround enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the left surround channel and the data of the right surround channel. In this way, the sense of surround can be enhanced.
  • When the second electronic device includes 2 speakers, and the 2 speakers include a first speaker and a second speaker, the second electronic device uses the first speaker to play the data of the left surround channel and uses the second speaker to play the data of the right surround channel; or,
  • when the second electronic device includes 3 or more speakers, and the 3 or more speakers include a first speaker, a second speaker and a third speaker, the first speaker is used to play the data of the left surround channel, the second speaker is used to play the data of the right surround channel, and the third speaker is used to play both the left surround channel data and the right surround channel data of the second audio data.
  • When the first electronic device includes 2 speakers, and the 2 speakers include a fourth speaker and a fifth speaker, the first electronic device uses the fourth speaker to play the data of the left channel and uses the fifth speaker to play the data of the right channel; or, when the first electronic device includes 3 or more speakers, and the 3 or more speakers include a fourth speaker, a fifth speaker and a sixth speaker, the fourth speaker is used to play the data of the left channel, the fifth speaker is used to play the data of the right channel, and the sixth speaker is used to play both the left channel data and the right channel data of the first audio data.
  • When the first sound effect mode option is the all-around enhancement mode option, the first audio data includes left channel data and right channel data, and the second audio data includes the data of the left surround channel, the data of the right surround channel and the data of the center channel.
  • When the second electronic device includes 2 speakers, and the two speakers include a first speaker and a second speaker, the second electronic device uses the first speaker to play the data of the left surround channel and the data of the center channel, and uses the second speaker to play the data of the right surround channel and the data of the center channel; or,
  • when the second electronic device includes 3 or more speakers, and the 3 or more speakers include a first speaker, a second speaker and a third speaker, the first speaker is used to play the data of the left surround channel, the second speaker is used to play the data of the right surround channel, and the third speaker is used to play the data of the center channel.
  • When the first electronic device includes 2 speakers, and the 2 speakers include a fourth speaker and a fifth speaker, the first electronic device uses the fourth speaker to play the data of the left surround channel and the data of the center channel, and uses the fifth speaker to play the data of the right surround channel and the data of the center channel; or, when the first electronic device includes 3 or more speakers, and the 3 or more speakers include a fourth speaker, a fifth speaker and a sixth speaker, the fourth speaker is used to play the data of the left surround channel, the fifth speaker is used to play the data of the right surround channel, and the sixth speaker is used to play the data of the center channel.
  • That the first electronic device displays multiple sound effect mode options in response to the first input specifically includes: in response to the first input, the first electronic device obtains the multiple sound effect mode options based on the device type of the second electronic device and the stored correspondence between device types and sound effect mode options; wherein the device type of the second electronic device corresponds to the multiple sound effect modes.
  • In this way, when establishing connections with electronic devices of different device types, the first electronic device can provide the user with sound effect mode options suitable for the first electronic device and the second electronic device. For example, when the device type of the second electronic device is a smart watch, playing surround channel data on the smart watch cannot produce a good surround effect; therefore, the first electronic device does not provide the surround enhancement mode when the second electronic device is a smart watch.
  • When the device type of the second electronic device is a smart watch, the multiple sound effect mode options include one or more of a smart enhancement mode option, a loudness enhancement mode option, a rhythm enhancement mode option and a dialogue enhancement mode option;
  • the smart enhancement mode corresponding to the smart enhancement mode option is a combination of the rhythm enhancement mode and the loudness enhancement mode. In this way, the sound source data can be processed based on the smart enhancement mode when the user is not sure which sound effect mode to select.
  • In the smart enhancement mode, the audio data played by the first electronic device includes left channel data and right channel data, and
  • the audio data played by the second electronic device includes left channel data, right channel data and low-frequency channel data.
  • When the device type of the second electronic device is smart glasses, a neck-mounted speaker or a Bluetooth headset,
  • the multiple sound effect mode options include one or more of a smart enhancement mode option, a loudness enhancement mode option, a rhythm enhancement mode option, a dialogue enhancement mode option, a surround enhancement mode option and an all-around enhancement mode option;
  • the smart enhancement mode corresponding to the smart enhancement mode option is a combination of the rhythm enhancement mode and the loudness enhancement mode, or a combination of the surround enhancement mode and the dialogue enhancement mode.
  • When the smart enhancement mode is a combination of the surround enhancement mode and the dialogue enhancement mode,
  • the audio data played by the first electronic device includes the data of the left channel and the data of the right channel, and
  • the audio data played by the second electronic device includes the data of the left surround channel, the data of the right surround channel and the data of the center channel; or,
  • when the smart enhancement mode is a combination of the rhythm enhancement mode and the loudness enhancement mode,
  • the audio data played by the first electronic device includes left channel data and right channel data, and
  • the audio data played by the second electronic device includes left channel data, right channel data and low-frequency channel data.
  • The first electronic device determines the smart enhancement mode based on whether the played video content is related to a music scene. In this way, even if the first electronic device plays a video, since the video content is related to a music scene, the smart enhancement mode can be set to a combination of the rhythm enhancement mode and the loudness enhancement mode to highlight the low frequencies and increase the loudness, bringing the user a better music listening experience.
  • When processing the second audio data, the first electronic device enhances the loudness of quiet sounds in the second audio data. In this way, quiet sounds can be highlighted and the sense of detail enhanced: users can hear quiet sounds and track the movements of game characters while playing games, improving the gaming experience, and can hear subtle background sounds, such as wind and insects, when watching movies, which makes the picture more vivid.
  • The plurality of sound effect mode options include a smart enhancement mode option that is marked by default; that the first electronic device marks the first sound effect mode option in response to the second input for the first sound effect mode option specifically includes:
  • the first electronic device, in response to the second input for the first sound effect mode option, unmarks the smart enhancement mode option and marks the first sound effect mode option. In this way, the first electronic device can provide a default smart enhancement option and process the sound source data more intelligently when the user is not sure which sound effect mode to select.
  • The method further includes: when the posture of the second electronic device and/or the first electronic device changes, the first audio data does not change, and the second audio data sent by the first electronic device to the second electronic device changes as the posture changes.
  • In this way, the first electronic device can process the second audio data so that the virtual sound source simulated by the sound signal played by the second electronic device is always located at the first electronic device; the sound source and the picture are always in the same position, making it easy for the user to move back to a suitable position for viewing the picture.
  • The method further includes: when the posture of the second electronic device and/or the first electronic device changes, the second audio data does not change, and the first audio data played by the first electronic device changes as the posture changes.
  • In this way, the first electronic device can process the first audio data based on the posture change so that the virtual sound source simulated by the sound signal played by the first electronic device is always in front of the user's line of sight; no matter how the user moves, the sounds heard by the left and right ears will not be incongruous.
  • When the position of the second electronic device and/or the first electronic device changes, the beam direction of the acoustic signal emitted by the speaker of the first electronic device changes with the change in position.
  • In this way, the first electronic device can direct the beam toward the second electronic device, so that the user can hear the sound of the first electronic device more clearly.
  • When the distance between the first electronic device and the second electronic device is a first distance, the volume at which the first electronic device plays the first audio data is a first volume;
  • when the distance between the first electronic device and the second electronic device is a second distance, the volume at which the first electronic device plays the first audio data is a second volume;
  • the first distance is smaller than the second distance, and the first volume is lower than the second volume.
  • In this way, the first electronic device can set its volume based on the distance to the second electronic device (which can also be understood as the user).
  • When the first electronic device detects that the user is close to the first electronic device, the volume of the first electronic device is reduced to avoid interfering with the playback of the second audio data.
  • When the first electronic device detects that the user is far away from the first electronic device, it increases the volume of the first electronic device so that the user can still hear the sound played by the first electronic device.
  • The method further includes: the first electronic device displays a percentage bar. If the first sound effect mode option is the loudness enhancement mode option, the rhythm enhancement mode option or the dialogue enhancement mode option, when the value of the percentage bar is a first value, the volume at which the second electronic device plays the second audio data is a third volume, and when the value of the percentage bar is a second value, the volume at which the second electronic device plays the second audio data is a fourth volume; the first value is smaller than the second value, and the third volume is lower than the fourth volume. If the first sound effect mode option is the all-around enhancement mode option or the surround enhancement mode option, when the value of the percentage bar is a third value, the distance between the simulated sound source and the user is a third distance, and when the value of the percentage bar is a fourth value, the distance between the simulated sound source and the user is a fourth distance; the third value is smaller than the fourth value, and the third distance is smaller than the fourth distance. In this way, the effect of the sound effect mode can be set through the percentage bar, and the user can adjust the value of the percentage bar to select a suitable playback effect.
  • this application provides an audio playback method, which is characterized in that the method includes:
  • the first electronic device displays a collaborative playback control, and the collaborative playback control is used to instruct the first electronic device and the second electronic device to jointly play audio source data;
  • the first electronic device receives a first input for the collaborative playback control
  • the first electronic device displays a plurality of sound effect mode options in response to the first input, and the plurality of sound effect mode options include a first sound effect mode option;
  • the first electronic device marks the first sound effect mode option in response to the second input for the first sound effect mode option
  • the first electronic device sends the second audio data to the second electronic device
  • the first electronic device plays the first audio data
  • both the first audio data and the second audio data include at least part of the content of the sound source data.
  • the first electronic device can provide different sound effect modes to the user and realize collaborative playback of audio data by multiple devices.
  • The method further includes: before the first electronic device receives the first input, the first electronic device receives a third input for playing the sound source data; or, after the first electronic device receives the second input, it receives a third input for playing the sound source data.
  • When the first sound effect mode is any one of the rhythm enhancement mode, the dialogue enhancement mode, the surround enhancement mode, the all-around enhancement mode or the smart enhancement mode, at least part of the channels included in the first audio data are different from at least part of the channels included in the second audio data; and/or, when the first sound effect mode is the loudness enhancement mode, at least part of the channels of the first audio data are the same as at least part of the channels of the second audio data.
  • When the first sound effect mode option is the rhythm enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the low-frequency channel.
  • When the first sound effect mode option is the dialogue enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the center channel.
  • When the first sound effect mode option is the loudness enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data also includes the data of the left channel and the data of the right channel.
  • When the first sound effect mode option is the surround enhancement mode option, the first audio data includes the data of the left channel and the data of the right channel, and the second audio data includes the data of the left surround channel and the data of the right surround channel.
  • When the first electronic device includes 2 speakers, and the 2 speakers include a fourth speaker and a fifth speaker, the first electronic device uses the fourth speaker to play the data of the left channel and uses the fifth speaker to play the data of the right channel; or, when the first electronic device includes 3 or more speakers, and the 3 or more speakers include a fourth speaker, a fifth speaker and a sixth speaker, the fourth speaker is used to play the data of the left channel, the fifth speaker is used to play the data of the right channel, and the sixth speaker is used to play both the left channel data and the right channel data of the first audio data.
  • When the first sound effect mode option is the all-around enhancement mode option, the first audio data includes left channel data and right channel data, and the second audio data includes the data of the left surround channel, the data of the right surround channel and the data of the center channel.
  • That the first electronic device displays multiple sound effect mode options in response to the first input specifically includes: in response to the first input, the first electronic device obtains the multiple sound effect mode options based on the device type of the second electronic device and the stored correspondence between device types and sound effect mode options; wherein the device type of the second electronic device corresponds to the multiple sound effect modes.
  • When the device type of the second electronic device is a smart watch, the multiple sound effect mode options include one or more of a smart enhancement mode option, a loudness enhancement mode option, a rhythm enhancement mode option and a dialogue enhancement mode option;
  • the smart enhancement mode corresponding to the smart enhancement mode option is a combination of the rhythm enhancement mode and the loudness enhancement mode.
  • When the device type of the second electronic device is smart glasses, a neck-mounted speaker or a Bluetooth headset, the multiple sound effect mode options include one or more of a smart enhancement mode option, a loudness enhancement mode option, a rhythm enhancement mode option, a dialogue enhancement mode option, a surround enhancement mode option and an all-around enhancement mode option;
  • the smart enhancement mode corresponding to the smart enhancement mode option is a combination of the rhythm enhancement mode and the loudness enhancement mode, or a combination of the surround enhancement mode and the dialogue enhancement mode.
  • The plurality of sound effect mode options include a smart enhancement mode option that is marked by default; that the first electronic device marks the first sound effect mode option in response to the second input for the first sound effect mode option specifically includes: the first electronic device, in response to the second input for the first sound effect mode option, unmarks the smart enhancement mode option and marks the first sound effect mode option.
  • The method further includes: when the posture of the second electronic device and/or the first electronic device changes, the second audio data sent by the first electronic device remains unchanged, and the first audio data played by the first electronic device changes as the posture changes.
  • the method further includes: when the posture of the second electronic device and/or the first electronic device changes, the first audio data does not change, and The second audio data sent by the first electronic device to the second electronic device changes as the posture changes.
  • the method further includes: when the position of the second electronic device and/or the first electronic device changes, the beam direction of the acoustic signal emitted by the speaker of the first electronic device changes as the position changes.
  • The method further includes: when the distance between the first electronic device and the second electronic device is a first distance, the volume at which the first electronic device plays the first audio data is a first volume; when the distance between the first electronic device and the second electronic device is a second distance, the volume at which the first electronic device plays the first audio data is a second volume; the first distance is smaller than the second distance, and the first volume is lower than the second volume.
  • The present application provides an electronic device, including one or more processors, multiple speakers, and one or more memories; wherein the one or more memories and the multiple speakers are coupled to the one or more processors, and the one or more memories are used to store computer program code.
  • The computer program code includes computer instructions.
  • Embodiments of the present application provide a computer storage medium that includes computer instructions.
  • When the computer instructions are run on a first electronic device, they cause the first electronic device to execute the audio playback method in any of the possible implementations of the fourth aspect.
  • the present application provides a chip system.
  • the chip system is applied to a first electronic device.
  • the chip system includes one or more processors.
  • The processor is used to call computer instructions to cause the first electronic device to execute the audio playback method in any possible implementation of the fourth aspect above.
  • The present application provides a computer program product containing instructions that, when run on a first electronic device, cause the first electronic device to perform the audio playback method in any of the possible implementations of the fourth aspect.
  • Figure 1A is a schematic diagram of the hardware structure of an electronic device 100 provided by an embodiment of the present application.
  • Figure 1B is a software structure diagram of an electronic device 100 provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the hardware structure of an electronic device 200 provided by an embodiment of the present application.
  • Figure 3 is a schematic flow chart of an audio playback method provided by an embodiment of the present application.
  • Figure 4 is a schematic flow chart provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of an application scenario provided by the embodiment of the present application.
  • Figures 6A-6D are a set of interface schematic diagrams provided by embodiments of the present application.
  • Figures 7A-7E are schematic diagrams of a scenario provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of another application scenario provided by the embodiment of the present application.
  • Figures 9A-9C are another set of interface schematic diagrams provided by embodiments of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, unless otherwise specified, "plurality" means two or more.
  • the embodiment of the present application provides an audio playback method.
  • the electronic device 100 establishes a communication connection with the electronic device 200.
  • The electronic device 100 obtains the device information of the electronic device 200 and determines one or more sound effect modes based on the device information of the electronic device 200.
  • The one or more sound effect modes include the first sound effect mode.
  • the electronic device 100 may respond to the input of selecting the first sound effect mode and process the sound source data of the electronic device 100 based on the first sound effect mode to obtain the first audio data played by the electronic device 100 and the second audio data played by the electronic device 200 .
  • the electronic device 100 plays the first audio data and the electronic device 200 plays the second audio data at the same time, thereby realizing the electronic device 100 and the electronic device 200 cooperatively playing the sound source data.
  • In this way, multiple electronic devices play audio data together, breaking through the limitation of the speaker layout of a single electronic device and allowing multiple electronic devices to support more sound effect modes. For example, a more pronounced left- and right-ear surround effect can be achieved through the speakers of multiple electronic devices. Moreover, corresponding sound effect modes can be provided to the user according to the device type of the electronic device 200, so that different sound effect modes are realized for different electronic devices. For example, if the electronic device 200 is a pair of smart glasses, then, since the smart glasses include a speaker on the left temple and a speaker on the right temple, sound effect modes based on left and right channels, left and right surround channels, vocal dialogue and other processing methods can be provided.
  • If the electronic device 200 is a smart watch, since the smart watch is worn on the wrist, low-frequency audio signals can be enhanced to achieve low-frequency vibration propagation, strengthening the user's perception of rhythm and improving the user experience.
  • The electronic device 200 may be a wearable device. In this way, due to the portable nature of the wearable device, it is convenient for users to use the sound effect modes provided by the electronic device 100 and the electronic device 200 in various application scenarios to achieve collaborative playback of audio source data.
  • sound source data may be sent to electronic device 100 by other electronic devices (eg, electronic device 200).
  • both the first audio data and the second audio data include at least part of the content of the sound source data. At least part of the content may be data of some channels and/or data of some frequency bands in the sound source data.
  • FIG. 1A is a schematic diagram of the hardware structure of the electronic device 100 provided by an embodiment of the present application.
  • The electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, a mixed reality (MR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device and/or a smart city device.
  • the electronic device 100 may be a mobile phone or a tablet computer.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (GPU), and an image signal processor. (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to control instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from this memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 can be coupled with the audio module 170 through the I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
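  • As an illustration of the "sample, quantize and encode" step mentioned above, the following sketch quantizes a float signal in [-1, 1] into 16-bit PCM samples. This is a hedged example for clarity only; the sample rate, bit depth, and function name are assumptions, not values taken from this application.

```python
import numpy as np

def to_pcm16(analog: np.ndarray) -> np.ndarray:
    """Quantize a float signal in [-1, 1] to 16-bit PCM samples (illustrative)."""
    clipped = np.clip(analog, -1.0, 1.0)
    return np.round(clipped * 32767).astype(np.int16)

# Example: one second of a 440 Hz sine tone sampled at 48 kHz (assumed values).
t = np.arange(48000) / 48000.0
pcm = to_pcm16(0.5 * np.sin(2 * np.pi * 440.0 * t))
```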
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus that converts the data to be transmitted between serial and parallel forms.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface to implement the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through the CSI interface to implement the shooting function of the electronic device 100 .
  • the processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, display screen 194, wireless communication module 160, audio module 170, sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationships between the modules illustrated in this embodiment are only schematic illustrations and do not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142, it can also provide power to the electronic device 100 through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide solutions for wireless communication applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or the satellite based augmentation systems (SBAS).
  • the electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise and brightness. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external non-volatile memory to expand the storage capacity of the electronic device 100 .
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement the data storage function. For example, save music, video and other files in external non-volatile memory.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100 .
  • the internal memory 121 may include a program storage area and a data storage area. Among them, the program storage area can store an operating system and at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the electronic device 100 (such as audio data, phone book, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • Speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to hands-free calls.
  • the speaker 170A may be used to play the first audio data.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the electronic device 100 answers a call or receives a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
  • Microphone 170C, also called a "mic" or "mike", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • pressure sensor 180A may be disposed on display screen 194 .
  • the gyro sensor 180B may be used to determine the motion posture of the electronic device 100 .
  • In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 180B.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 100 and be used in horizontal and vertical screen switching, pedometer and other applications.
  • Air pressure sensor 180C is used to measure air pressure.
  • Magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may utilize the magnetic sensor 180D to detect opening and closing of the flip holster.
  • Distance sensor 180F for measuring distance.
  • Proximity light sensor 180G may be used to determine whether there is an object near electronic device 100 .
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • Fingerprint sensor 180H is used to collect fingerprints.
  • Temperature sensor 180J is used to detect temperature.
  • Touch sensor 180K is also known as a "touch device".
  • the touch sensor 180K can be disposed on the display screen 194.
  • the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touchscreen".
  • the touch sensor 180K is used to detect a touch operation on or near the touch sensor 180K.
  • Bone conduction sensor 180M can acquire vibration signals.
  • the buttons 190 include a power button, a volume button, etc.
  • The buttons 190 may be mechanical buttons or touch buttons.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate vibration prompts.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • This embodiment takes an operating system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 1B is a software structure block diagram of the electronic device 100 of this embodiment.
  • the layered architecture divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the operating system is divided into four layers, from top to bottom: application layer, application framework layer, runtime and system library, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include camera, gallery, calendar, calling, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include window manager, content provider, view system, resource manager, notification manager, smart device identification module, multi-channel processing module, sound effect mode display module, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • Said data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager may also display notifications in the status bar at the top of the system in the form of charts or scroll bar text (for example, notifications for applications running in the background), or display notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • the smart device identification module can be used to obtain the device information of the electronic device 200, determine the device type of the electronic device 200 (for example, headphones, glasses, watches, etc.) based on the device information of the electronic device 200 (for example, the name of the electronic device 200, or the capability information of the electronic device 200), and then determine one or more sound effect modes that the electronic device 100 and the electronic device 200 can implement based on the device type. For example, the electronic device 100 may be provided with multiple sound effect modes as shown in Table 1 below.
  • For example, the smart device identification module can determine that the electronic device 100 and the electronic device 200 can implement the smart enhancement mode, the all-around enhancement mode, the surround enhancement mode, the dialogue enhancement mode, the loudness enhancement mode, and the rhythm enhancement mode.
  • If the device type of the electronic device 200 is a watch, the smart device identification module can determine that the electronic device 100 and the electronic device 200 can implement the smart enhancement mode, the dialogue enhancement mode, and the rhythm enhancement mode.
  • the smart device identification module can directly obtain the audio channel information supported by the electronic device 200 from the electronic device 200.
  • For example, the electronic device 200 may only support playing mono-channel audio data, or the electronic device 200 may support playing two-channel audio data, or the electronic device 200 may support audio data with more channels, for example, 5.1-channel audio data (including the left channel, right channel, center channel, left surround channel, right surround channel and low-frequency channel), 7.1-channel audio data (including the left channel, right channel, center channel, front left surround channel, front right surround channel, rear left surround channel, rear right surround channel and low-frequency channel), etc.
  • the smart device identification module can directly determine one or more supported sound effect modes based on the audio channel information supported by the electronic device 200 .
  • For example, if the electronic device 200 only supports playing mono-channel audio data, the sound effect modes obtained by the smart device identification module are the loudness enhancement mode, the rhythm enhancement mode, and the dialogue enhancement mode.
  • If the electronic device 200 supports playing audio data with two or more channels, the sound effect modes obtained by the smart device identification module are the smart enhancement mode, the all-around enhancement mode, the surround enhancement mode, the dialogue enhancement mode, the loudness enhancement mode and/or the rhythm enhancement mode.
  • In some other embodiments, the smart device identification module can obtain the number of speakers of the electronic device 200 and determine the supported sound effect modes. For example, if the number of speakers of the electronic device 200 is one, the sound effect modes obtained by the smart device identification module are the rhythm enhancement mode and the dialogue enhancement mode. If the number of speakers of the electronic device 200 is greater than one, the sound effect modes obtained by the smart device identification module are the smart enhancement mode, the all-around enhancement mode, the surround enhancement mode, the dialogue enhancement mode, the loudness enhancement mode and/or the rhythm enhancement mode.
  • It can be understood that, if the electronic device 200 has only one speaker and cannot support the playback of two-channel audio data, the sound effect modes obtained by the smart device identification module are only the rhythm enhancement mode and the dialogue enhancement mode.
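  • The identification logic described above can be summarized with the following sketch, which maps the device information reported by the electronic device 200 to a list of selectable sound effect modes. This is a minimal illustration under the stated rules; the dictionary keys and the function name are assumptions, not an API from this application.

```python
ALL_MODES = ["smart enhancement", "all-around enhancement", "surround enhancement",
             "dialogue enhancement", "loudness enhancement", "rhythm enhancement"]

def supported_sound_modes(device_info: dict) -> list:
    """Derive selectable sound effect modes from the reported device information."""
    channels = device_info.get("channels")   # supported channel count, if reported
    speakers = device_info.get("speakers")   # number of speakers, if reported
    if channels is not None:
        # Mono-only devices are limited to loudness/rhythm/dialogue enhancement.
        if channels < 2:
            return ["loudness enhancement", "rhythm enhancement", "dialogue enhancement"]
        return ALL_MODES
    if speakers is not None:
        # A single-speaker device cannot play two-channel audio data.
        if speakers == 1:
            return ["rhythm enhancement", "dialogue enhancement"]
        return ALL_MODES
    if device_info.get("type") == "watch":
        return ["smart enhancement", "dialogue enhancement", "rhythm enhancement"]
    return ALL_MODES

# Example: a single-speaker device only offers rhythm and dialogue enhancement.
print(supported_sound_modes({"speakers": 1}))
```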
  • the multi-channel processing module can be used to process the sound source data based on the sound effect mode to obtain the first audio data played by the electronic device 100 and the second audio data played by the electronic device 200 .
  • the sound effect mode display module can be used to display the sound effect mode obtained by the intelligent device identification module.
  • Runtime can refer to all code libraries, frameworks, etc. required when the program is running.
  • the runtime includes a series of function libraries required to run C programs.
  • the runtime includes a core library and a virtual machine.
  • the core library can include functional functions that the Java language needs to call.
  • the application layer and application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and application framework layer as binary files.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support multiple audio and video encoding formats, such as: MPEG-4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer at least includes display drivers, camera drivers, audio drivers, sensor drivers, etc.
  • the following exemplifies the workflow of the software and hardware of the electronic device 100 in conjunction with a photographing scene.
  • When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes the touch operation into a raw input event (including touch coordinates, the timestamp of the touch operation, and other information). Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation and the control corresponding to the click operation as a camera application icon control as an example, the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer. Camera 193 captures still images or video.
  • the electronic device 200 involved in the embodiment of the present application may be an electronic device including a speaker.
  • the electronic device 200 may be smart glasses, Bluetooth headsets (for example, neck-mounted earphones), neck-mounted Bluetooth speakers, smart watches, smart helmets, etc., which are not limited in the embodiments of the present application.
  • FIG. 2 is a schematic diagram of the hardware structure of the electronic device 200 provided by the embodiment of the present application.
  • the electronic device 200 may include but is not limited to a processor 210, an audio module 220, a speaker 220A, a microphone 220B, a wireless communication module 230, an internal memory 240, an external memory interface 245, a power management module 250, a sensor module 260, etc.
  • the sensor module 260 may include a pressure sensor 260A, a touch sensor 260B, an inertial measurement unit (IMU) 260C, etc.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 200.
  • the electronic device 200 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 210 is generally used to control the overall operation of the electronic device 200 and may include one or more processing units.
  • the processor 210 may include a central processing unit, an application processor, a modem processor, a baseband processor, etc. Among them, different processing units can be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to control instruction fetching and execution.
  • the processor 210 may also be provided with a memory for storing instructions and data.
  • processor 210 may include one or more interfaces.
  • the interface may include I2C interface, I2S interface, PCM interface, UART interface, MIPI, GPIO interface, SIM interface, and/or USB interface, SPI interface, etc.
  • the electronic device 200 can implement audio functions through the audio module 220, the speaker 220A, the microphone 220B, and the application processor. For example, play audio.
  • the audio module 220 is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal. The audio module 220 may also be used to encode and decode audio signals. In some embodiments, the audio module 220 may be provided in the processor 210, or some functional modules of the audio module 220 may be provided in the processor 210.
  • Speaker 220A, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 200 can listen to music through the speaker 220A, or listen to calls.
  • the speaker 220A may be used to play the second audio data.
  • Microphone 220B, also called a "mic" or "mike", is used to convert sound signals into electrical signals.
  • the microphone 220B may be used to collect nearby sound signals, for example, sound signals generated when the electronic device 100 plays the first audio data.
  • the electronic device 200 may include a wireless communication function.
  • the electronic device 200 may receive and play audio data from other electronic devices (such as the electronic device 100).
  • the wireless communication function may be implemented through an antenna (not shown), the wireless communication module 230, a modem processor (not shown), a baseband processor (not shown), and the like.
  • Antennas are used to transmit and receive electromagnetic wave signals. Multiple antennas may be included in the electronic device 200, and each antenna may be used to cover a single or multiple communication frequency bands.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • After the low-frequency baseband signal is processed by the baseband processor, it is passed to the application processor.
  • the application processor outputs a sound signal through an audio device (eg, speaker 220A, etc.).
  • the wireless communication module 230 can provide solutions for wireless communication applied to the electronic device 200, including wireless local area network (WLAN) (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 230 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 230 receives electromagnetic waves through the antenna, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210 .
  • the wireless communication module 230 can also receive the signal to be sent from the processor 210, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna for radiation.
  • the wireless communication module 230 may be used to transmit communication signals, including receiving and sending communication signals, such as audio data, control signaling, etc.
  • the electronic device 200 can establish communication connections with other electronic devices, such as the electronic device 100, etc. through the wireless communication module 230.
  • Internal memory 240 may be used to store computer executable program code, which includes instructions.
  • the processor 210 executes instructions stored in the internal memory 240 to execute various functional applications and data processing of the electronic device 200.
  • the internal memory 240 may include a program storage area and a data storage area. Among them, the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function), etc.
  • the storage data area may store data created during use of the electronic device 200 (such as audio data, etc.).
  • the internal memory 240 may also include high-speed random access memory, non-volatile memory, such as at least one magnetic disk storage device, flash memory device, general-purpose flash memory, etc.
  • Power management module 250 may be used to receive charging input from the charger.
  • the power management module 250 can charge the electronic device 200 and can also provide power to various components of the electronic device 200 .
  • the electronic device 200 is equipped with one or more sensors, including but not limited to a pressure sensor 260A, a touch sensor 260B, an inertial measurement unit 260C, and the like.
  • the pressure sensor 260A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensors 260A such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, etc.
  • the electronic device 200 detects the intensity of the touch operation according to the pressure sensor 260A.
  • the electronic device 200 may also calculate the touched position based on the detection signal of the pressure sensor 260A.
  • touch operations acting on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity smaller than the first pressure threshold acts on the pressure sensor 260A, an instruction to pause the audio is executed.
  • touch operations that act on the same touch location but have different touch operation durations may correspond to different operation instructions. For example: when a touch operation whose duration is less than the first time threshold is applied to the pressure sensor 260A, a confirmation instruction is executed. When a touch operation with a duration greater than or equal to the first time threshold acts on the pressure sensor 260A, a power on/off instruction is executed.
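  • The pressure and duration thresholds described above can be read as a small dispatch table. The sketch below is only illustrative; the threshold values and instruction names are assumptions rather than values from this application.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure, assumed value
FIRST_TIME_THRESHOLD_S = 1.0    # seconds, assumed value

def dispatch_touch(pressure: float, duration_s: float) -> str:
    """Map a touch on pressure sensor 260A to an operation instruction."""
    if duration_s >= FIRST_TIME_THRESHOLD_S:
        return "power on/off"    # long press
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "pause audio"     # light, short touch
    return "confirm"             # firm, short touch

print(dispatch_touch(pressure=0.3, duration_s=0.2))  # -> "pause audio"
```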
  • Touch sensor 260B is also called a "touch device". Touch sensor 260B is used to detect touch operations on or near it. Touch sensor 260B may pass the detected touch operation to the application processor to determine the touch event type. The electronic device 200 may provide auditory output related to the touch operation through the audio module 220. The electronic device 200 may send instructions corresponding to the touch operation to other electronic devices that have established communication connections.
  • the inertial measurement unit 260C is a sensor used to detect and measure acceleration and rotational motion, and may include an accelerometer, an angular velocity meter (or gyroscope), etc.
  • the accelerometer can detect the acceleration of the electronic device 200 in various directions (generally three axes). When the electronic device 200 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 200 and be used in somatosensory game scenarios, horizontal and vertical screen switching, pedometer and other applications.
  • the gyroscope can be used to determine the motion posture of the electronic device 200 .
  • In some embodiments, the angular velocities of the electronic device 200 about three axes (i.e., the x, y, and z axes) may be determined through the gyroscope. The gyroscope can also be used for navigation, somatosensory game scenarios, camera anti-shake, etc. For example, the electronic device 200 may track its own movement according to the IMU or the like.
  • the inertial measurement unit 260C may be used to detect pose data of the electronic device 200 , and the pose data may be used to represent the offset position of the electronic device 200 relative to the display screen of the electronic device 100 .
  • the electronic device 200 can send the pose data to the electronic device 100, and the electronic device 100 can process the sound source data based on the pose data, so that the user perceives that the sound source is located at the electronic device 100, or so that the user perceives that the sound source is located directly in front of the user's line of sight.
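  • One way to use the pose data, sketched below under simplifying assumptions: the pose is reduced to a horizontal yaw angle of the listener's head relative to the display of the electronic device 100, and constant-power panning re-weights a stereo frame so the perceived source stays anchored at the device. This is not the algorithm of this application, only an illustration of pose-dependent audio processing.

```python
import numpy as np

def pan_stereo(frame: np.ndarray, yaw_rad: float) -> np.ndarray:
    """frame: (n, 2) stereo samples; yaw_rad: head yaw, 0 = facing the screen."""
    # Map yaw in [-pi/2, pi/2] to a pan position in [0, 1]; 0.5 = centered.
    pan = float(np.clip(0.5 - yaw_rad / np.pi, 0.0, 1.0))
    left_gain = np.cos(pan * np.pi / 2)    # constant-power panning law
    right_gain = np.sin(pan * np.pi / 2)
    out = frame.astype(np.float64).copy()
    out[:, 0] *= left_gain
    out[:, 1] *= right_gain
    return out
```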
  • Here, the electronic device 200 being smart glasses is taken as an example for description. Smart glasses are worn on the user's head. In addition to the optical correction, light adjustment, or decoration functions of ordinary glasses, they can also have communication functions and audio playback functions.
  • the electronic device 200 can establish a communication connection with the electronic device 100, and the electronic device 200 can receive audio data sent by the electronic device 100 and play the audio data through a speaker.
  • the communication connection may be a wireless connection, for example, a wireless communication connection established through the wireless communication module 230 .
  • the communication connection may be a wired connection, such as a universal serial bus (USB) connection, a high definition multimedia interface (HDMI) connection, etc. This embodiment does not limit the type of communication connection.
  • the electronic device 200 can transmit data with other electronic devices through these communication connection methods. For example, when a communication connection is established between the electronic device 200 and a communication device, and the communication device is on a call with another communication device, the electronic device 200 can be used to answer the call, listen to music, etc.
  • the electronic device 200 may also include lenses, which may be transparent lenses or lenses of other colors, spectacle lenses with optical correction functions, lenses with adjustable filtering functions, sunglasses, or other lenses with decorative effects.
  • the sensor module 260 may also include a bone conduction sensor, and the bone conduction sensor may also be used as an audio playback device for outputting sound to the user.
  • When the audio playback device is a bone conduction sensor, the two temples of the electronic device 200 may be provided with resisting parts, and the bone conduction sensor may be disposed at the position of the resisting parts.
  • the resisting portion resists the skull in front of the ear, thereby generating vibrations so that sound waves are conducted to the inner ear via the skull and bony labyrinth.
  • the position of the resisting part is directly close to the skull, which can reduce vibration loss and allow users to hear audio more clearly.
  • the electronic device 200 may be a smart watch.
  • the electronic device 200 may also include a watch strap and a watch face.
  • the dial may include the above-mentioned display screen for displaying images, videos, controls, text information, etc.
  • the watch strap can be used to fix the electronic device 200 to the limbs of the human body for easy wearing.
  • the electronic device 200 may also include a motor, and the motor may generate a vibration prompt.
  • the motor can be used to vibrate for incoming calls or for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • Touch operations acting on different areas of the electronic device 200 may also correspond to different vibration feedback effects of the motor.
  • Different application scenarios (for example, time reminders, receiving information, alarm clocks, games, etc.) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the electronic device 200 can generate a corresponding vibration feedback effect based on the audio data, so that the user perceives the electronic device 200 to vibrate as the audio data is played.
  • the frequency of vibration and the intensity of the vibration are determined by the audio data.
  • the electronic device 200 can obtain the motor pulse signal based on the audio data, thereby generating a corresponding vibration feedback effect.
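  • A minimal sketch of deriving a motor pulse signal from audio data, assuming the vibration intensity follows the short-time energy envelope of the audio; the frame length and normalization are assumptions, not details from this application.

```python
import numpy as np

def motor_pulses(audio: np.ndarray, sample_rate: int, frame_ms: int = 20) -> np.ndarray:
    """Return one vibration intensity in [0, 1] per frame of audio."""
    frame = max(1, int(sample_rate * frame_ms / 1000))
    n_frames = len(audio) // frame
    env = np.array([np.sqrt(np.mean(audio[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])  # RMS energy per frame
    peak = env.max() if env.size else 1.0
    return env / peak if peak > 0 else env
```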
  • the electronic device 200 may be a neck-mounted earphone, a neck-mounted Bluetooth speaker, etc., and the electronic device 200 may further include a neck brace, so that the electronic device 200 can be hung around the user's neck.
  • the electronic device 200 may be a Bluetooth speaker, and the electronic device 100 and the electronic device 200 may jointly realize the playback of audio data.
  • the electronic device 100 may process the sound source data based on the maximum number of channels supported by the electronic device 200 to obtain the second audio data. For example, if the electronic device 200 supports playing 5.1-channel audio data, the speaker of the electronic device 100 plays the first audio data including the left channel and the right channel, and the speaker of the electronic device 200 plays the second audio data including the 5.1 channels.
  • If the electronic device 200 supports playing 7.1-channel audio data, the speaker of the electronic device 100 plays the first audio data including the left channel and the right channel, and the speaker of the electronic device 200 plays the second audio data including the 7.1 channels, and so on.
  • the electronic device 100 may determine that the electronic device 100 and the electronic device 200 support the smart enhancement mode, the all-around enhancement mode, the surround enhancement mode, the dialogue enhancement mode, the loudness enhancement mode and/or the rhythm enhancement mode based on the device type of the electronic device 200 being a speaker.
  • the electronic device 100 can process the sound source data based on the sound effect mode selected by the user to obtain first audio data and second audio data. The description of processing the sound source data can be found in the following embodiments and will not be described again here.
  • the electronic device 200 can be a mobile phone, tablet, etc.
  • the electronic device 100 can be used to play videos (that is, play video images and audio data of the videos simultaneously), and the electronic device 200 can be used to play audio sent by the electronic device 100 , which is obtained based on the video played by the electronic device 100 .
  • the electronic device 100 and the electronic device 200 can simultaneously play audio data obtained based on the same sound source data, allowing the user to hear the sounds played by the electronic device 100 and the electronic device 200 at the same time, thereby achieving different audio playback effects.
  • In some embodiments, in order to allow the user to better hear the sound of the electronic device 100, the electronic device 200 is not an electronic device that prevents the user from hearing the sound of the electronic device 100, such as in-ear headphones, over-ear headphones, etc.
  • If the electronic device 200 has a noise reduction function, when the noise reduction function of the electronic device 200 is turned on, the user will not be able to hear the sound of the electronic device 100. Therefore, when implementing the audio playback method provided by the embodiments of the present application, the electronic device 200 turns off the noise reduction function. It can be understood that the electronic device 200 can turn on the noise reduction function when the user chooses to play the sound source data only through the electronic device 200, and the electronic device 200 can turn off the noise reduction function when the user chooses to play the sound source data through the electronic device 100 and the electronic device 200 collaboratively.
  • the noise reduction function can be used to reduce interference from external noise.
  • When the user selects the electronic device 100 and the electronic device 200 to play sound source data together, the user can turn off the noise reduction function and listen to the sound signals played by the electronic device 100 and the electronic device 200 at the same time to enhance the sense of surround.
  • the electronic device 200 may provide a transparency mode switch control to turn on/off the transparency mode.
  • the transparency mode switch control may be a control displayed on the display screen of the electronic device 200, and the electronic device 200 may receive touch input for the control to turn on/off the transparency mode.
  • the electronic device 200 may provide a transparency mode switch, which may be a physical switch, and the switch may turn on/off the transparency mode when receiving a user's toggle input.
  • If the electronic device 200 is an electronic device that covers the user's ears, such as headphones (e.g., in-ear headphones, over-ear headphones), the user can also turn on the transparency mode so that the user can hear the sound played by the electronic device 100.
  • the electronic device 100 can increase the playback volume of the electronic device 100 and/or decrease the playback volume of the electronic device 200 so that the user can hear the sounds played by the electronic device 100 and the electronic device 200 at the same time.
  • the electronic device 200 is a portable device provided with a speaker, such as smart glasses, Bluetooth headsets, smart watches, etc.
  • the electronic device 100 and the electronic device 200 can implement the audio playback method provided by the embodiments of the present application in many scenarios (for example, airports, hotels, outdoor scenes, etc.) to provide users with an auditory experience in multiple sound effect modes.
  • the audio playback method includes the following steps:
  • the electronic device 100 establishes a communication connection with the electronic device 200.
  • the electronic device 100 displays a first control.
  • the first control is used to trigger the electronic device 100 to display one or more sound effect mode options corresponding to the sound effect mode.
  • the electronic device 100 establishes a communication connection with the electronic device 200 .
  • the electronic device 100 and the electronic device 200 can establish a connection in a wireless/wired manner.
  • the communication connection established in a wireless manner may include but is not limited to a Wi-Fi communication connection, a Bluetooth communication connection, a peer-to-peer (P2P) communication connection, etc.
  • the Bluetooth communication connection can be realized through one or more Bluetooth communication solutions including but not limited to classic Bluetooth (basic rate/enhanced data rate, BR/EDR) or Bluetooth low energy (bluetooth low energy, BLE).
  • the electronic device 100 may display the first control in response to establishing a communication connection with the electronic device 200 .
  • the first control may be used to trigger the electronic device 100 to acquire and display the sound effect modes supported by the electronic device 100 and the electronic device 200 . It can also be understood that the first control can trigger the electronic device 100 and the electronic device 200 to cooperatively play the audio data of the electronic device 100 .
  • the electronic device 100 when it displays the first control, it may also display the second control.
  • the second control may be used to trigger the electronic device 100 or the electronic device 200 to play the audio data of the electronic device 100 .
  • the electronic device 200 when the electronic device 200 is headphones or glasses, the second control can be used to trigger the electronic device 100 to send audio data to the electronic device 200 and play the audio data through the electronic device 200 .
  • the second control when the electronic device 200 is a watch, the second control may be used to trigger the electronic device 100 to play the audio data of the electronic device 100 .
  • the first control may also be used to trigger the electronic device 100 or the electronic device 200 to play the audio data of the electronic device 100 .
  • In some embodiments, when the electronic device 100 displays the first control, a designated area A and a designated area B are also displayed.
  • The electronic device 100 may trigger the electronic device 100 and the electronic device 200 to cooperatively play the audio data of the electronic device 100 when detecting that the first control is dragged within the designated area A.
  • the electronic device 100 may trigger the electronic device 100 or the electronic device 200 to play the audio data of the electronic device 100 when detecting that the first control is dragged within the designated area B.
  • In some embodiments, when the electronic device 100 displays the first control, it also displays a third control.
  • the first control may represent the electronic device 200 and the third control may represent the electronic device 100 .
  • The electronic device 100 may trigger the electronic device 100 or the electronic device 200 to play the audio data of the electronic device 100 when detecting that the distance between the first control and the third control is greater than a specified distance.
  • The electronic device 100 may trigger the electronic device 100 and the electronic device 200 to cooperatively play the audio data of the electronic device 100 when detecting that the distance between the first control and the third control is less than the specified distance.
  • For example, the electronic device 100 may detect that the distance between the first control and the third control is less than the specified distance after receiving an input from the user dragging the first control to the vicinity of the third control.
  • For another example, when the distance between the first control and the third control is less than the specified distance, the electronic device 100 may detect that the distance between the first control and the third control is greater than the specified distance after receiving an input from the user dragging the first control away from the vicinity of the third control.
  • the electronic device 100 receives the first input for the first control.
  • the first input may be a click or a double-click on the first control, or the first input may be a voice command input, etc., which is not limited in the embodiments of the present application.
  • the electronic device 100 obtains and displays one or more sound effect mode options, including the first mode option, and the one or more sound effect mode options correspond one-to-one to the one or more sound effect modes.
  • sound channels refer to mutually independent audio signals collected at different spatial locations during sound recording.
  • the number of sound channels can be understood as the number of sound sources during sound recording.
  • 5.1 channels include left channel, right channel, left surround channel, right surround channel, center channel and low-frequency channel.
• The sound frequency range of the left channel, right channel, left surround channel, right surround channel and center channel is 20 Hz-20 kHz, and the sound frequency of the low-frequency channel is lower than 150 Hz.
  • the sound effect modes may include, but are not limited to, all-around enhancement mode, surround enhancement mode, rhythm enhancement mode, vocal enhancement mode, and loudness enhancement mode.
• The channels included in the first audio data are different from the channels included in the second audio data. It can be understood that, because the two include different channels, the first audio data and the second audio data may differ in one or more of amplitude, waveform and frequency.
  • the channels of the first audio data are the same as the channels of the second audio data.
  • the amplitude of the first audio data and the amplitude of the second audio data are different.
• The channels included in the first audio data played by the electronic device 100 and the channels included in the second audio data played by the electronic device 200 are as shown in Table 1:
• In the all-around enhancement mode, the first audio data obtained based on the sound source data includes the left channel, the right channel and the center channel, and the second audio data includes the left surround channel, the right surround channel and the center channel.
• In the surround enhancement mode, the first audio data obtained based on the sound source data includes the left channel and the right channel, and the second audio data includes the left surround channel and the right surround channel.
• In the vocal enhancement mode (dialogue enhancement mode), the first audio data obtained based on the sound source data includes the left channel and the right channel, and the second audio data includes only the center channel. In this way, the clarity of dialogue can be increased by the electronic device 200.
• In the loudness enhancement mode, the first audio data obtained based on the sound source data includes the left channel and the right channel, and the second audio data also includes the left channel and the right channel. In this way, audio loudness can be increased by the electronic device 200.
• In the rhythm enhancement mode, the first audio data obtained based on the sound source data includes the left channel and the right channel, and the second audio data includes the low-frequency channel. In this way, the heavy bass can be played through the electronic device 200, so that the user can feel the power brought by the heavy bass.
• In other embodiments, the electronic device 200 plays the first audio data, and the electronic device 100 plays the second audio data.
  • the electronic device 200 only includes one speaker.
  • the electronic device 200 may use the speaker to play data of the low-frequency channel.
  • the electronic device 200 may use the speaker to play the center channel data.
  • the electronic device 200 needs to use the speaker to play the data of the left channel and the data of the right channel.
• The electronic device 200 can superimpose the data of the left channel and the data of the right channel, and use the speaker to play the superimposed data.
• Alternatively, the electronic device 200 may downmix the data of the left channel and the data of the right channel to obtain data including only the mono channel, and use the speaker to play the mono data; both approaches are sketched below.
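• As a non-limiting illustration, the two single-speaker approaches above can be sketched as follows in Python; the function names, the equal channel weights and the 0.5 anti-clipping scale are illustrative assumptions, not part of the claimed method:

```python
import numpy as np

def superimpose(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Superimpose the left-channel data and right-channel data directly,
    # then play the result through the single speaker.
    return left + right

def downmix_to_mono(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Downmix the two channels into mono data; the 0.5 scale (an assumed
    # choice) keeps the summed samples from clipping.
    return 0.5 * (left + right)

left = np.array([0.2, 0.4, -0.1])
right = np.array([0.1, -0.2, 0.3])
mono = downmix_to_mono(left, right)  # data played by the single speaker
```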
  • the electronic device 200 includes two speakers.
• In the rhythm enhancement mode, the electronic device 200 can use all speakers to play the data of the low-frequency channel.
• In the dialogue enhancement mode, the electronic device 200 may use all speakers to play the data of the center channel.
• In the loudness enhancement mode, the electronic device 200 uses the left speaker to play the data of the left channel, and the right speaker to play the data of the right channel.
• In the surround enhancement mode, the electronic device 200 uses the left speaker to play the data of the left surround channel, and the right speaker to play the data of the right surround channel.
• In the all-around enhancement mode, the electronic device 200 can use the left speaker to play the data of the left surround channel and the data of the center channel, and the right speaker to play the data of the right surround channel and the data of the center channel.
• In some embodiments, the electronic device 200 includes three speakers. In the rhythm enhancement mode, the electronic device 200 can use all speakers to play the data of the low-frequency channel. In the dialogue enhancement mode, the electronic device 200 may use all speakers to play the data of the center channel. In the loudness enhancement mode, the electronic device 200 uses the left speaker to play the data of the left channel, the right speaker to play the data of the right channel, and the other speaker to play the data of the left channel and the data of the right channel. In the surround enhancement mode, the electronic device 200 uses the left speaker to play the data of the left surround channel, the right speaker to play the data of the right surround channel, and the other speaker to play the data of the left surround channel and the data of the right surround channel. In the all-around enhancement mode, the electronic device 200 can use the left speaker to play the data of the left surround channel, the right speaker to play the data of the right surround channel, and the other speaker (herein called the center speaker) to play the data of the center channel.
• The electronic device 200 can use all the speakers to play the data of the low-frequency channel in the rhythm enhancement mode.
• In the dialogue enhancement mode, the data of the center channel is played using all speakers.
  • the electronic device 200 uses the left speaker to play the data of the left channel, the right speaker to play the data of the right channel, and uses other speakers to play the data of the left channel and the data of the right channel.
  • the electronic device 200 uses the left speaker to play the data of the left surround channel, the right speaker to play the data of the right surround channel, and uses other speakers to play the data of the left surround channel and the data of the right surround channel.
  • the electronic device 200 can use the left speaker to play the data of the left surround channel, the right speaker to play the data of the right surround channel, the center speaker to play the data of the center channel, and the other speakers to play the left surround channel data, right surround channel data, and center channel data, or use other speakers to play the center channel data.
  • the electronic device 200 may use the left speaker to play the data of the left surround channel and the data of the center channel, and the right speaker to play the data of the right surround channel and the data of the center channel.
  • the electronic device 100 includes two speakers.
• In the surround enhancement mode, rhythm enhancement mode, dialogue enhancement mode or loudness enhancement mode, the electronic device 100 can use the left speaker to play the data of the left channel, and the right speaker to play the data of the right channel.
• In the all-around enhancement mode, the electronic device 100 can use the left speaker to play the data of the left channel and the data of the center channel, and the right speaker to play the data of the right channel and the data of the center channel.
• The electronic device 100 can also use the left speaker to play the data of the left channel, the right speaker to play the data of the right channel, and another speaker to play the data of the left channel and the data of the right channel.
• Alternatively, the electronic device 100 can use the left speaker to play the data of the left channel, the right speaker to play the data of the right channel, and the other speaker (which can be called a center speaker here) to play the data of the center channel.
• The electronic device 100 can likewise use the left speaker to play the data of the left channel, the right speaker to play the data of the right channel, and the other speakers to play the data of the left channel and the data of the right channel.
• The electronic device 100 can also use the left speaker to play the data of the left channel, the right speaker to play the data of the right channel, the center speaker to play the data of the center channel, and the other speakers to play the data of the left surround channel, the data of the right surround channel and the data of the center channel, or use the other speakers to play the data of the center channel.
  • the speaker located near the user's left ear may be called a left speaker and may be used to play the data of the left surround channel of the second audio data or the data of the left channel.
• The speaker located near the user's right ear may be called a right speaker and may be used to play the data of the right surround channel or the data of the right channel of the second audio data.
  • the center speaker is used to play data of the center channel.
  • the position of the center speaker is not limited.
  • the center speaker may be one or more speakers located between the left and right speakers.
  • the left speaker may be located at the left temple
  • the right speaker may be located at the right temple
  • the center speaker may be located at the nose pad.
  • the electronic device 100 is located in front of the user's line of sight, and the electronic device 200 is worn on the user's head.
  • the electronic device 100 is responsible for the playback of the front left channel and the front right channel of the audio source data
  • the electronic device 200 is responsible for the playback of the rear left surround channel and the rear right surround channel of the audio source data.
• Since the speaker of the electronic device 100 is located in front of the user's line of sight and the speaker of the electronic device 200 is located near the user's ear, based on the relative positional relationship between the electronic device 100, the electronic device 200 and the user, having the electronic device 100 responsible for playing the front channels and the electronic device 200 responsible for the rear channels simulates the playback scenario of 3D surround sound. The user feels located at the center of the 3D space and can hear sounds coming from both the front and the rear, giving the user an immersive experience.
• The electronic device 100 can obtain the device information of the electronic device 200, determine the device type of the electronic device 200 based on the device information, and then determine, based on the device type of the electronic device 200 (for example, headphones, glasses, watch or speaker), one or more sound effect modes that the electronic device 100 and the electronic device 200 can implement.
  • the sound effect modes determined by the electronic device 100 include a dialogue enhancement mode, a loudness enhancement mode and a rhythm enhancement mode.
  • the sound effect modes determined by the electronic device 100 include an all-around enhancement mode, a surround enhancement mode, a dialogue enhancement mode, a loudness enhancement mode, and a rhythm enhancement mode.
  • the electronic device 100 and the electronic device 200 play audio in a certain sound effect mode please refer to the embodiment shown in Table 1 above, and will not be described again here.
  • the electronic device 100 may obtain the device information of the electronic device 200 when establishing a communication connection with the electronic device 200.
  • the device information of the electronic device 200 may include but is not limited to the device name of the electronic device 200.
• The device name of the electronic device 200 may include the device type of the electronic device 200. For example, when the device name of the electronic device 200 is "xwatch", the electronic device 100 determines that the device name includes "watch" and may accordingly determine that the device type of the electronic device 200 is a watch. For another example, when the device name of the electronic device 200 includes a character string such as "eye" or "glass", it can be determined on this basis that the device type of the electronic device 200 is glasses.
• When the device name of the electronic device 200 includes a character string such as "audio", it can be determined on this basis that the device type of the electronic device 200 is a speaker.
• When the device name of the electronic device 200 includes a character string such as "ear", "earphone" or "headphone", it can be determined on this basis that the device type of the electronic device 200 is a headset.
• In other embodiments, the device name of the electronic device 200 is the product model set by the manufacturer.
• In this case, based on the stored correspondence between device names and device types, the electronic device 100 can obtain the corresponding device type from the device name of the electronic device 200; both lookups are sketched below.
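• A non-limiting sketch of the two lookups described above, in Python; the keyword table, the example model number "XW-200" and the function name are illustrative assumptions:

```python
from typing import Optional

# Substring heuristics from the embodiments above (keyword list assumed).
TYPE_KEYWORDS = [
    ("watch", "watch"),
    ("eye", "glasses"),
    ("glass", "glasses"),
    ("audio", "speaker"),
    ("earphone", "headset"),
    ("headphone", "headset"),
    ("ear", "headset"),
]

# Hypothetical stored correspondence between product models and device types.
MODEL_TO_TYPE = {"XW-200": "watch"}

def device_type(device_name: str) -> Optional[str]:
    name = device_name.lower()
    for keyword, dev_type in TYPE_KEYWORDS:
        if keyword in name:
            return dev_type
    # Fall back to the manufacturer product-model correspondence.
    return MODEL_TO_TYPE.get(device_name)

print(device_type("xwatch"))   # -> "watch"
print(device_type("Eyewear"))  # -> "glasses"
```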
  • the electronic device 200 stores the device type of the electronic device 200 , and the electronic device 100 can directly obtain the device type of the electronic device 200 from the electronic device 200 .
  • the electronic device 100 may obtain the device information of the electronic device 200 in response to the input to the first control.
  • the electronic device 100 can obtain the channel information supported by the electronic device 200 or the number of speakers of the electronic device 200, and determine one or more sound effect modes that the electronic device 100 and the electronic device 200 can implement. For details, reference may be made to the embodiment shown in FIG. 1B , which will not be described again here.
  • the electronic device 100 can also provide a smart enhanced mode.
  • the electronic device 100 can set the sound effect mode to the smart enhanced mode.
• The intelligent enhanced mode is a combination of one or more of the sound effect modes supported by the electronic device 100 and the electronic device 200. In this way, even when the user does not select a sound effect mode, the electronic device 100 can process the played sound source data in the intelligent enhanced mode, so that the electronic device 100 and the electronic device 200 play sounds together.
• For example, the smart enhancement mode may be a combination of the surround enhancement mode and the dialogue enhancement mode. That is to say, the first audio data includes the left channel and the right channel, and the second audio data includes the left surround channel, the right surround channel and the center channel.
• Alternatively, the smart enhancement mode may be a combination of the loudness enhancement mode and the rhythm enhancement mode, that is, the first audio data includes the left channel and the right channel, and the second audio data includes the left surround channel, the right surround channel and the low-frequency channel.
  • the electronic device 100 may set the smart enhancement mode to be a combination of the rhythm enhancement mode and the loudness enhancement mode, that is, the first audio data includes a left channel and a right channel.
  • the second audio data includes a left channel, a right channel and a low frequency channel.
• Since the smart enhanced mode is obtained by the electronic device 100 based on the sound effect modes supported by the electronic device 100 and the electronic device 200, the one or more sound effect modes determined by the electronic device 100 always include the smart enhanced mode.
  • the electronic device 100 may display the sound effect mode options corresponding to the one or more sound effect modes.
  • the one or more sound effect mode options include the first mode option.
• The one or more sound effect modes include the first mode, and the first mode option corresponds to the first mode.
  • the electronic device 100 receives the second input of selecting the first mode option.
  • the electronic device 100 may receive a second input of selecting the first mode option and determine to process the sound source data based on the first mode.
  • the second input may be a single click, a double click, a voice command input, etc.
  • the electronic device 100 obtains the first audio data played by the electronic device 100 and the second audio data played by the electronic device 200 based on the sound source data and the first mode.
• When the electronic device 100 is playing the sound source data, the electronic device 100 can, in response to the second input, process the sound source data based on the first mode to obtain the first audio data and the second audio data.
  • the electronic device 100 may receive an input from the user to play the sound source data before receiving the second input.
  • the electronic device 100 sets the electronic device 100 (and the electronic device 200) to play the audio data in the first mode.
  • the electronic device 100 can process the sound source data based on the first mode to obtain the first audio data and the second audio data.
  • the electronic device 100 obtains the number of channels of the sound source data.
  • the electronic device 100 obtains the number of channels of the audio source data to be played.
• The electronic device 100 determines whether the number of channels of the audio source data is less than or equal to 2. When the electronic device 100 determines that the number of channels of the audio source data is less than or equal to 2, step S403 is executed; when the electronic device 100 determines that the number of channels of the audio source data is greater than 2, step S406 is executed.
  • the audio source data is a two-channel audio source.
  • the two-channel audio source includes the left channel and the right channel.
  • the electronic device 100 determines whether the sound source data is a two-channel sound source, that is, whether the sound source data only includes the left channel and the right channel.
• When the electronic device 100 determines that the audio source data is not a two-channel sound source, step S404 may be performed; when the electronic device 100 determines that the audio source data is a two-channel sound source, step S405 may be performed.
• The electronic device 100 copies the mono sound source data to obtain two-channel sound source data.
• When the electronic device 100 determines that the sound source data is not a two-channel sound source, that is, the sound source data is mono sound source data, the electronic device 100 can copy the mono sound source data to obtain two-channel sound source data.
• Specifically, the electronic device 100 can directly copy the mono sound source data to obtain two pieces of mono sound source data, use one piece as the left-channel audio data and the other as the right-channel audio data, and thereby obtain sound source data including the left channel and the right channel. Alternatively, after the electronic device 100 copies and obtains the two pieces of mono sound source data, it can process them through a specified algorithm and adjust one or more of the phase difference, amplitude and frequency of the two pieces of sound source data, to obtain the sound source data including the left channel and the right channel (see the sketch below).
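• A minimal sketch of the copy-based approach in Python, assuming the "specified algorithm" simply applies a small amplitude adjustment to one copy; the 0.95 gain is an illustrative assumption:

```python
import numpy as np

def mono_to_two_channel(mono: np.ndarray):
    # Copy the mono sound source data into two pieces: one used as the
    # left-channel audio data, the other as the right-channel audio data.
    left = mono.copy()
    right = mono.copy()
    # Optionally adjust one or more of phase difference, amplitude and
    # frequency of the two pieces; here only the amplitude of the right
    # copy is scaled, as an assumed example.
    right = 0.95 * right
    return left, right

left, right = mono_to_two_channel(np.array([0.1, 0.3, -0.2]))
```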
  • the electronic device 100 processes the two-channel sound source data based on the first mode to obtain first audio data and second audio data.
• For example, for the all-around enhancement mode, the electronic device 100 can upmix the two-channel audio source data to obtain 5.1-channel audio source data, and then superimpose the low-frequency channel onto the left channel, the right channel and the center channel. From the processed sound source data, it extracts the data of the left channel, the data of the right channel and the data of the center channel to obtain the first audio data, and extracts the data of the left surround channel, the data of the right surround channel and the data of the center channel to obtain the second audio data.
• The electronic device 100 can also use the two-channel sound source data directly as the first audio data, and process the two-channel sound source data through a relevant algorithm to obtain second audio data including the left surround channel and the right surround channel.
• The electronic device 100 can use the two-channel sound source data as the first audio data, and can also extract the data of the low-frequency channel in the two-channel sound source data to obtain second audio data including the low-frequency channel.
• The electronic device 100 can use the two-channel sound source data as the first audio data, and can also process the two-channel sound source data through a related algorithm to extract the center channel data in the two-channel sound source data, to obtain second audio data including the center channel.
  • the electronic device 100 may use the two-channel sound source data as the first audio data and the second audio data.
  • 5.1 channels include left channel, right channel, center channel, left surround channel, right surround channel and low frequency channel.
• The electronic device 100 can determine whether the number of channels of the sound source data is less than or equal to 5.1.
  • the electronic device 100 may execute step S407 when it is determined that the number of channels of the sound source data is less than or equal to 5.1; the electronic device 100 may execute step S409 when it is determined that the number of channels of the audio source data is greater than 5.1.
• A number of channels less than 5.1 can be understood as the sound source data including fewer channels than the 5.1 channels.
• For example, when the channels of the sound source data are 4 channels, 4.1 channels, 3 channels, 3.1 channels or 2.1 channels, the number of channels is less than 5.1.
• 4-channel can be understood as the sound source data including the left channel, right channel, left surround channel and right surround channel; 4.1-channel can be understood as the sound source data including the left channel, right channel, low-frequency channel, left surround channel and right surround channel, and so on.
• A number of channels greater than 5.1 can be understood as the sound source data including more channels than the 5.1 channels.
• For example, when the channels of the audio source data are 7.1 channels or 10.1 channels, the number of channels is greater than 5.1.
  • 7.1 channel can be understood as the sound source data including left channel, right channel, left front surround channel, right front surround channel, left rear surround channel, right rear surround channel, center channel and low frequency channel.
  • the audio source data is a 5.1-channel audio source.
  • the electronic device 100 can determine whether the sound source data is a 5.1-channel sound source.
• The electronic device 100 may perform step S410 when determining that the sound source data is 5.1-channel sound source data; the electronic device 100 may perform step S408 when determining that the sound source data is not 5.1-channel sound source data.
  • the electronic device 100 determines that the number of channels of the audio source data is less than 5.1, and can use a specified upmixing algorithm to upmix the audio source data to obtain 5.1-channel audio source data.
  • the electronic device 100 determines that the number of channels of the audio source data is greater than 5.1, and can downmix the audio source data to obtain 5.1-channel audio source data through a specified downmixing algorithm.
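• The channel-count decisions of steps S402-S410 above can be sketched as follows; the routing function is an illustrative assumption that uses the document's 2 / 5.1 channel-count shorthand (e.g. 4.1 < 5.1 < 7.1), and the actual upmixing/downmixing algorithms are left unspecified:

```python
def route(channel_count: float) -> str:
    # Return which step of the flow applies to a source with the given
    # channel count, per steps S402-S410.
    if channel_count <= 2:                       # S402
        if channel_count < 2:
            return "S404: copy mono source into a two-channel source"
        return "S405: process the two-channel source directly"
    if channel_count < 5.1:                      # S406/S407
        return "S408: upmix to a 5.1-channel source"
    if channel_count > 5.1:
        return "S409: downmix to a 5.1-channel source"
    return "S410: process the 5.1-channel source directly"

for count in (1, 2, 2.1, 4.1, 5.1, 7.1):
    print(count, "->", route(count))
```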
  • the electronic device 100 processes the 5.1-channel sound source data based on the first mode to obtain first audio data and second audio data.
  • the electronic device 100 can extract the left surround channel, the right surround channel and the center channel from the 5.1-channel sound source data to obtain the second audio data.
• The electronic device 100 can also superimpose the low-frequency channel in the sound source data onto the left channel, the right channel and the center channel, and then extract the left channel, the right channel and the center channel from the superimposed sound source data to obtain the first audio data. It can be understood that if the low-frequency channel were superimposed onto the left and right surround channels, the surround effect would be affected. Therefore, in the preferred embodiment of the present application, the low-frequency channel is only superimposed onto the left channel, the right channel and the center channel (see the sketch below).
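• A minimal sketch of this all-around-mode split in Python; the channel-name keys are illustrative assumptions, and the low-frequency data is added sample-for-sample (any gain weighting is omitted):

```python
import numpy as np

def split_all_around(source: dict):
    # `source` maps channel names of the 5.1-channel sound source data
    # ("L", "R", "C", "Ls", "Rs", "LFE") to numpy sample arrays.
    lfe = source["LFE"]
    # First audio data: superimpose the low-frequency channel onto the
    # left, right and center channels only, leaving the surround channels
    # (and hence the surround effect) untouched.
    first = {name: source[name] + lfe for name in ("L", "R", "C")}
    # Second audio data: the left surround, right surround and center
    # channels, extracted as-is.
    second = {name: source[name] for name in ("Ls", "Rs", "C")}
    return first, second

src = {name: np.zeros(4) for name in ("L", "R", "C", "Ls", "Rs", "LFE")}
first, second = split_all_around(src)
```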
  • the electronic device 100 can directly extract the data of the corresponding channels of the audio source data to obtain first audio data and second audio data.
  • the electronic device 100 can extract the data of the left surround channel and the data of the right surround channel in the 5.1-channel sound source data to obtain the second audio data.
  • the electronic device 100 can sequentially superimpose the low-frequency channel and the center channel of the sound source data to the left channel and the right channel. Then extract the left channel data and the right channel data of the processed sound source data to obtain the first audio data.
  • the unprocessed audio source data only includes the left channel, the right channel, the left surround channel and the right surround channel
  • the electronic device 100 can directly extract the data of the corresponding channels of the audio source data to obtain the first audio data and Second audio data.
  • the electronic device 100 can process the left channel, right channel, left surround channel, right surround channel and center channel of the 5.1-channel sound source data through a downmixing algorithm, and downmix Get the data of the left channel and the data of the right channel.
  • the processed audio source data only includes left channel, right channel and low-frequency channel.
  • the electronic device 100 can extract the data of the left channel and the data of the right channel of the processed sound source data to obtain the first audio data.
• The electronic device 100 can also extract the data of the low-frequency channel in the processed sound source data to obtain second audio data including the low-frequency channel.
  • the unprocessed sound source data only includes the left channel, the right channel and the low-frequency channel
  • the electronic device 100 can directly extract the data of the corresponding channels of the sound source data to obtain the first audio data and the second audio data.
  • the electronic device 100 can process the left channel, right channel, left surround channel, right surround channel and low-frequency channel of the 5.1-channel sound source data through a related downmixing algorithm, and downmix Get the data of the left channel and the data of the right channel.
  • the processed audio source data only includes the left channel, right channel and center channel.
  • the electronic device 100 can extract the data of the left channel and the data of the right channel of the processed sound source data to obtain the first audio data.
• The electronic device 100 can also extract the data of the center channel in the processed sound source data to obtain second audio data including the center channel.
  • the unprocessed sound source data only includes the left channel, the right channel and the center channel, the electronic device 100 can directly extract the data of the corresponding channels of the sound source data to obtain the first audio data and the second audio data.
• The electronic device 100 can process the 5.1-channel sound source data through a related downmixing algorithm, and downmix it into sound source data including only the left channel and the right channel.
• The electronic device 100 can then use the processed sound source data as both the first audio data and the second audio data.
  • the first mode is the intelligent enhancement mode
  • the sound effect modes supported by the electronic device 100 and the electronic device 200 include an all-around enhancement mode, a surround enhancement mode, a dialogue enhancement mode, a loudness enhancement mode and a rhythm enhancement mode.
  • the intelligent enhancement mode may be a combination of the rhythm enhancement mode and the loudness enhancement mode.
• The electronic device 100 may process the sound source data based on the rhythm enhancement mode to obtain the data of the low-frequency channel, and process the sound source data based on the loudness enhancement mode to obtain the data of the left channel and the data of the right channel.
  • the electronic device 100 may transmit the second audio data including the data of the low-frequency channel, the data of the left channel, and the data of the right channel to the electronic device 200 .
  • the smart enhancement mode may be a combination of the surround enhancement mode and the dialogue enhancement mode.
• The electronic device 100 may process the sound source data based on the dialogue enhancement mode to obtain the data of the center channel, and process the sound source data based on the surround enhancement mode to obtain the data of the left surround channel and the data of the right surround channel.
  • the electronic device 100 may transmit the second audio data including the data of the center channel, the data of the left surround channel, and the data of the right surround channel to the electronic device 200 .
  • the sound effect modes supported by the electronic device 100 and the electronic device 200 include a dialogue enhancement mode, a rhythm enhancement mode and a loudness enhancement mode.
  • the intelligent enhancement mode may be a combination of the rhythm enhancement mode and the loudness enhancement mode.
• The electronic device 100 may process the sound source data based on the rhythm enhancement mode to obtain the data of the low-frequency channel, and process the sound source data based on the loudness enhancement mode to obtain the data of the left channel and the data of the right channel.
  • the electronic device 100 may transmit the second audio data including the data of the low-frequency channel, the data of the left channel, and the data of the right channel to the electronic device 200 .
  • the electronic device 100 may process the sound source data based on all supported sound effect modes to obtain data of all channels in which the electronic device 100 and the electronic device 200 support all sound effect modes.
  • the electronic device 100 may send data of all channels (which may be collectively referred to as second audio data) to the electronic device 200 .
  • the electronic device 100 can also send the current sound effect mode to the electronic device 200, and the electronic device 200 can play the data of the corresponding sound channel based on the current sound effect mode.
  • the electronic device 100 may determine which channels are included in the audio data played by the electronic device 200 based on the currently selected sound effect mode, and instruct the electronic device 200 to play the data of the corresponding audio channels.
  • the electronic device 100 can send the data of different channels that the electronic device 200 may use to the electronic device 200, and the electronic device 200 can play the audio data of the channel indicated by the sound effect mode based on the current sound effect mode.
  • the electronic device 100 may instruct the electronic device 200 to play data of one or more sound channels based on the sound effect mode.
• In this way, when the sound effect mode of the electronic device 100 changes, the electronic device 100 does not need to process the sound source data again according to the modified sound effect mode, and the electronic device 200 can directly play the audio data corresponding to the changed sound effect mode.
• For example, the second audio data sent by the electronic device 100 to the electronic device 200 includes the left channel data, right channel data, left surround channel data, right surround channel data, center channel data and low-frequency channel data obtained based on the multiple sound effect modes.
  • the electronic device 100 may notify the electronic device 200 to play the data of the left surround channel, the data of the right surround channel, and the data of the center channel when playing a video.
• The electronic device 200 may also be notified to play the data of the low-frequency channel.
  • the electronic device 100 sends the second audio data to the electronic device 200.
  • the electronic device 100 can play the first audio data
  • the electronic device 200 can play the second audio data.
  • the electronic device 200 may also perform an audio processing operation on the second audio data, and then play the processed second audio data.
  • the audio processing operation may be to adjust the loudness of the second audio data.
  • the electronic device 200 may identify small signal audio in the second audio data based on the amplitude of the second audio data and increase the loudness of small sounds.
• The small-signal audio is an audio signal in the second audio data whose loudness is -35 dB or below. Since the loudness of small-signal audio is low and not obvious to human ears, increasing the loudness of small-signal audio helps users hear it more clearly and enhances the user's perception of audio details.
• For example, when the second audio data is game audio, the small-signal audio can be the sound of environmental changes triggered by the game character in the game scene (for example, the rustling sound of the game character passing through the grass, the footsteps of the game character, the sound of a passing car, etc.).
• The electronic device 200 can thus increase the volume of small-signal audio, enhance the immersion of the game, and improve the user's gaming experience; a sketch of such a boost follows below.
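• A minimal sketch of a small-signal boost, assuming a per-sample amplitude check against the -35 dB threshold and an assumed 6 dB boost; a practical implementation would more likely measure loudness frame by frame:

```python
import numpy as np

def boost_small_signals(samples: np.ndarray,
                        threshold_db: float = -35.0,
                        boost_db: float = 6.0) -> np.ndarray:
    # Convert each sample's amplitude to dB relative to full scale.
    level_db = 20.0 * np.log10(np.abs(samples) + 1e-12)
    # Small-signal audio: at or below the -35 dB threshold named above.
    is_small = level_db <= threshold_db
    out = samples.astype(float)
    out[is_small] *= 10.0 ** (boost_db / 20.0)  # raise loudness of small sounds
    return out

boosted = boost_small_signals(np.array([0.5, 0.004, -0.3, 0.002]))
```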
  • the second audio data is audio data provided by a video application
  • the small signal audio may be environmental sounds in the video (for example, insects, birds, wind, etc.).
• In some embodiments, the electronic device 200 performs the above audio processing operation on the second audio data only when the electronic device 100 is in the loudness enhancement mode.
• In some embodiments, when playing a video, the electronic device 100 can use a video recognition algorithm to identify the positions of multiple characters in the video picture, and can extract the vocal data of the characters at those positions from the audio source data of the video.
• The electronic device 100 can thereby obtain second audio data including the vocal data (that is, the data of the center channel) of one or more characters closer to the display screen in the video picture, and first audio data including the vocal data (that is, the data of the center channel) of one or more characters farther from the display screen in the video picture.
• In this way, since the distance between the electronic device 100 and the user is greater than the distance between the electronic device 200 and the user, the electronic device 100 plays the voices of the characters farther from the display screen and the electronic device 200 plays the voices of the characters closer to the display screen, allowing the user to perceive voices at different locations and improving the user's immersion when watching videos.
  • the first audio data includes the vocal data of at least one character
  • the second audio data includes the vocal data of all characters except the character corresponding to the vocal data of the first audio data.
  • the electronic device 100 can put the vocal data of the person closest to the display screen into the second audio data, and put the vocal data of the other two people into the first audio data.
  • the electronic device 100 can put the vocal data of the two people closest to the display screen into the second audio data, and put the vocal data of another person into the first audio data. This is not limited in the embodiment of the present application.
  • the sound source data includes multiple center channels, and the multiple center channels correspond one-to-one to the characters in the video picture.
  • the data of a center channel is the vocal data of a character in the video picture.
  • the electronic device 100 can identify the position of each character in the video picture, and obtain the second audio data including the vocal data of one or more characters closer to the display screen in the video picture (that is, the data of the center channel), and obtain The first audio data includes the vocal data of one or more characters far away from the display screen in the video frame (that is, the data of the center channel).
  • the electronic device 100 may determine whether the played video is related to the music scene when playing the audio source data of the video.
  • music scenes can include but are not limited to concert scenes, music video (MV) scenes, singing competition scenes, performance scenes, etc.
  • the electronic device 100 may set the smart enhancement mode to a combination of the rhythm enhancement mode and the loudness enhancement mode when it is determined that the video is related to the music scene.
  • the electronic device 100 may set the smart enhancement mode to a combination of the dialogue enhancement mode and the surround enhancement mode when it is determined that the video has nothing to do with the music scene.
  • the electronic device 100 may determine whether the video is related to the music scene based on the name of the video. For example, when the name of the video includes but is not limited to words related to music such as "singing", “music”, “playing”, “song”, etc., it is determined that the video is related to the music scene.
• Alternatively, the electronic device 100 can use an image recognition algorithm to identify whether a character in the video picture is performing actions such as singing or playing a musical instrument. When the electronic device 100 recognizes that a character in the video picture is playing or singing, the electronic device 100 can determine that the video is related to the music scene. The name-based check is sketched below.
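• A non-limiting sketch of the name-based check; the keyword list and the mode names are illustrative assumptions, and the image-recognition branch is omitted:

```python
MUSIC_KEYWORDS = ("singing", "music", "playing", "song")

def is_music_scene(video_name: str) -> bool:
    # A video whose name contains a music-related word is treated as
    # related to the music scene.
    name = video_name.lower()
    return any(keyword in name for keyword in MUSIC_KEYWORDS)

def smart_mode_combination(video_name: str):
    # Music scene: rhythm + loudness; otherwise: dialogue + surround.
    if is_music_scene(video_name):
        return ("rhythm enhancement", "loudness enhancement")
    return ("dialogue enhancement", "surround enhancement")

print(smart_mode_combination("Live song festival"))  # rhythm + loudness
```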
  • the audio source data may be audio data in a video file, and the audio source data may also be audio data in an audio file corresponding to the video file.
  • the sound source data may be stored in the storage of the electronic device 100, or the sound source data may be obtained by the electronic device 100 from other electronic devices (eg, server, electronic device 200, etc.).
• The electronic device 100 or the electronic device 200 can simultaneously adjust the volumes of the electronic device 100 and the electronic device 200 after receiving an input to adjust the volume. In this way, when the electronic device 100 and the electronic device 200 play the sound source data together, the situation is avoided in which the volume of one electronic device decreases (or increases) while the volume of the other remains unchanged, which would make the overall sound heard by the user uncoordinated, or even make the quieter electronic device inaudible.
• For example, the electronic device 100 may, when receiving an input to increase (or decrease) the volume, increase (or decrease) the amplitude of both the first audio data and the second audio data sent to the electronic device 200, so as to achieve the effect of increasing (or decreasing) the volume on both electronic devices.
  • the electronic device 100 may set the volume of the electronic device 100 to the adjusted volume value through a system service (such as a volume adjustment service).
• Alternatively, the electronic device 100 may send the adjusted volume value to the electronic device 200, and the electronic device 200 may also set its volume to the adjusted volume value; both strategies are sketched below.
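• A minimal sketch of both synchronization strategies; `set_system_volume` and `send_volume_to_peer` are hypothetical stand-ins for the system volume-adjustment service and the link to the electronic device 200:

```python
import numpy as np

def adjust_by_amplitude(first_audio: np.ndarray, second_audio: np.ndarray,
                        gain: float):
    # Strategy 1: scale the amplitude of both streams, so the volume of
    # the electronic device 100 and the electronic device 200 changes
    # together (gain > 1 increases volume, gain < 1 decreases it).
    return first_audio * gain, second_audio * gain

def adjust_by_volume_value(volume: int,
                           set_system_volume, send_volume_to_peer):
    # Strategy 2: set the local volume through a system service and
    # forward the same adjusted value so the peer device sets it too.
    set_system_volume(volume)
    send_volume_to_peer(volume)
```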
• The electronic device 100 may adjust the volume at which it plays the first audio data based on the distance between the electronic device 100 and the electronic device 200. In this way, when the user wears the electronic device 200 and the distance between the electronic device 200 and the electronic device 100 changes due to the user's movement, the electronic device 100 can adjust its playback volume to ensure the user's listening experience, so that the sound of the electronic device 100 does not suddenly become quieter or louder to the user.
  • the electronic device 100 can obtain the distance between the electronic device 100 and the electronic device 200 .
  • the electronic device 100 can obtain the distance between the electronic device 100 and the electronic device 200 through Wi-Fi ranging or other methods.
  • the electronic device 100 may increase the volume of the electronic device 100 when detecting that the distance between the electronic device 100 and the electronic device 200 increases.
  • the electronic device 100 may reduce the volume of the electronic device 100 when detecting that the distance between the electronic device 100 and the electronic device 200 decreases.
• The electronic device 100 may also adjust the volume at which the electronic device 200 plays the second audio data based on the distance between the electronic device 100 and the electronic device 200. For example, the electronic device 100 may adjust the volume of the second audio data played by the electronic device 200 by adjusting the amplitude of the second audio data. A sketch of the distance-based adjustment follows below.
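• A minimal sketch of the distance-based adjustment, assuming Wi-Fi ranging supplies the distance and assuming a simple proportional rule; the embodiment only states the direction of the change, so the mapping itself is an assumption:

```python
def volume_for_distance(base_volume: float, base_distance: float,
                        distance: float) -> float:
    # Increase the playback volume when the electronic device 200 moves
    # farther from the electronic device 100, decrease it when the
    # distance shrinks; proportional scaling is an assumed mapping.
    if base_distance <= 0:
        return base_volume
    return base_volume * (distance / base_distance)

# e.g. a starting volume of 40 at 1.0 m becomes 60 when the wearer
# moves to 1.5 m, and 20 at 0.5 m.
print(volume_for_distance(40.0, 1.0, 1.5))  # -> 60.0
print(volume_for_distance(40.0, 1.0, 0.5))  # -> 20.0
```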
• Below, the audio playback method provided by the embodiment of the present application is exemplarily introduced in combination with application scenarios.
  • the electronic device 100 may be a tablet computer, and the electronic device 200 may be smart glasses. It should be noted that the electronic device 100 is not limited to a tablet computer, and may also be other electronic devices including a display screen and multiple speakers, such as a mobile phone.
  • the electronic device 100 includes one or more speakers, and the one or more speakers include speaker A and speaker B.
  • speaker A is located on the left side of the display screen
  • speaker B is located on the right side of the display screen.
  • the electronic device 200 includes one or more speakers including speaker C and speaker D.
  • the speaker C is located near the user's left ear
  • the speaker D is located near the user's right ear.
  • Both the electronic device 100 and the electronic device 200 support playing audio data including left and right channels. It should be noted that due to the limitation of the viewing angle depicted in Figure 5 , speaker B and speaker D are blocked and cannot be shown in Figure 5 .
  • the steps for the electronic device 100 and the electronic device 200 to play audio data in the first mode are as follows:
  • the electronic device 100 and the electronic device 200 enable the Bluetooth function.
  • the electronic device 100 can display one or more Bluetooth device options in the Bluetooth interface.
  • the Bluetooth device option can represent a nearby searched Bluetooth device and can be used to trigger the electronic device 100 to establish a communication connection with the electronic device corresponding to the Bluetooth device option.
  • the one or more Bluetooth device options include a Bluetooth device option for triggering the establishment of a Bluetooth connection with the electronic device 200 .
  • the electronic device 100 may establish a Bluetooth connection with the electronic device 200 in response to input of a Bluetooth device option for triggering establishment of a Bluetooth connection with the electronic device 200 .
• The Bluetooth connection here is only an example; the communication connection is not limited to a Bluetooth connection.
  • the electronic device 100 and the electronic device 200 can also establish other wireless or wired communication connections, which is not limited in the embodiment of the present application.
  • the electronic device 100 may display the Bluetooth interface 601 as shown in FIG. 6A after establishing a Bluetooth connection with the electronic device 200 .
  • the Bluetooth interface 601 displayed by the electronic device 100 may include one or more Bluetooth device options.
  • the one or more Bluetooth device options include a Bluetooth device option 603, which may also display a device name of the electronic device 200.
  • the device name of the electronic device 200 is "Eyewear”.
  • the Bluetooth device option 603 can also display connection status information.
  • the connection status information can be used to indicate the connection status of the electronic device 100 and the electronic device 200 (for example, not connected, connected, etc.). Here, the connection status is connected.
  • Bluetooth interface 601 may also include tab 604 that includes independent audio options 605 and collaborative audio options 606.
  • the independent audio option 605 can be used to trigger the electronic device 100 to play sound through the electronic device 200 .
  • Collaborative audio option 606 may be used to trigger electronic device 100 to play sounds simultaneously through electronic device 100 and electronic device 200 .
  • the tab 604 can also display a device image of the electronic device 200 .
  • Electronic device 100 may receive input (eg, a click) for collaborative audio option 606 and, in response to the input, determine one or more sound effect modes supported by electronic device 100 and electronic device 200 .
  • the electronic device 100 can determine that the device type of the electronic device 200 is smart glasses based on the device name of the electronic device 200, and determine one or more sound effect modes based on the device type of the electronic device 200.
• For a detailed description of the determined one or more sound effect modes, please refer to the above embodiments; details are not repeated here.
• The electronic device 100 may display the sound effect selection interface 611 as shown in FIG. 6B after determining the one or more sound effect modes supported by the electronic device 100 and the electronic device 200.
  • the electronic device 100 displays a sound effect selection interface 611 .
  • the sound effect selection interface 611 includes a sound effect mode list 612 .
  • the sound effect mode list 612 includes one or more sound effect mode options.
• The one or more sound effect mode options correspond to the one or more sound effect modes determined by the electronic device 100.
  • the sound effect mode option may include an identification of the sound effect mode, and the sound effect mode option may be used to trigger the electronic device 100 to process the sound source data based on the sound effect mode indicated by the sound effect mode option.
  • the one or more sound effect mode options may include, but are not limited to, sound effect mode option 612A, sound effect mode option 612B, sound effect mode option 612C, sound effect mode option 612D, sound effect mode option 612E, and sound effect mode option 612F.
  • the sound effect mode option 612A corresponds to the intelligent enhancement mode
  • the sound effect mode option 612B corresponds to the loudness enhancement mode
  • the sound effect mode option 612C corresponds to the surround enhancement mode
  • the sound effect mode option 612D corresponds to the dialogue enhancement mode
  • the sound effect mode option 612E corresponds to the rhythm enhancement mode
  • the sound effect mode option 612F corresponds to the all-around enhanced mode
  • the one or more sound effect mode options may include the name of the corresponding sound effect mode.
  • sound effect mode option 612A is selected.
  • the sound effect selection interface 611 may also include a return control, which may be used to trigger the electronic device 100 to return to the previous level interface.
  • the electronic device 100 may receive an input to return to the desktop and display the desktop including one or more application icons.
  • the application icons can include a music application icon.
  • the electronic device 100 may receive input for the music application icon and display the music playing interface 621.
  • the music playback interface 621 may include but is not limited to song names, playback controls, etc.
  • the playback control may be used to trigger the electronic device 100 to play the song indicated by the song name.
• After the electronic device 100 receives the input for the playback control, in response to the input, the electronic device 100 processes the sound source data (i.e., the audio data of the song) based on the intelligent enhanced mode to obtain the first audio data and the second audio data.
• After the electronic device 100 obtains the first audio data and the second audio data, it can send the second audio data to the electronic device 200.
  • the electronic device 100 and the electronic device 200 may start playing respective audio data at the same time, as shown in FIG. 6C.
• The electronic device 100 plays the first audio data through speaker A and speaker B, and the electronic device 200 plays the second audio data through speaker C and speaker D. It can be understood that while the electronic device 100 is playing the song, it cancels the display of the playback control and displays the pause control 623 instead.
  • the pause control 623 can be used to trigger the electronic device 100 and the electronic device 200 to stop playing audio data.
  • the electronic device 100 can provide users with multiple sound effect modes to achieve different audio playback effects and improve user experience.
• In some embodiments, the sound effect modes supported by the electronic device 100 and the electronic device 200 are the dialogue enhancement mode, the loudness enhancement mode and the rhythm enhancement mode.
• In this case, the electronic device 100 only displays the sound effect mode options corresponding to the intelligent enhanced mode, the dialogue enhancement mode, the loudness enhancement mode and the rhythm enhancement mode.
• When the electronic device 200 includes at least three speakers, they are located respectively at the left temple, the right temple and the nose pad.
• In some embodiments, the sound effect mode supported by the electronic device 100 and the electronic device 200 is the all-around enhancement mode.
• The electronic device 200 can play the center channel data of the second audio data through the speaker at the nose pad, play the left channel data of the second audio data through the speaker near the user's left ear, and play the right channel data of the second audio data through the speaker near the user's right ear.
  • the electronic device 100 may only display sound effect mode options corresponding to the smart enhanced mode and the all-round enhanced mode.
• The electronic device 100 may also display a percentage bar when displaying the one or more sound effect mode options.
• The percentage bar may be used to control how the electronic device 100 processes the sound source data to obtain the second audio data: the electronic device 100 adjusts the second audio data based on the value of the percentage bar. In this way, the electronic device 100 can receive the user's adjustment of the percentage bar, set the strength of the sound effect mode, and achieve a suitable effect.
  • the amplitude of the second audio data of the electronic device 200 can be changed by changing the value of the percentage column, so as to adjust the volume of the second audio data played by the electronic device 200 .
  • the larger the value of the surround factor the farther the position of the virtual sound source simulated by the surround channel data is from the user, and the more obvious the surround effect will be. That is, as the surround factor increases, when the user listens to the electronic device 200 playing surround channel data, he/she perceives that the distance between the user and the sound source also increases.
• The larger the value in the percentage column, the larger the value of the surround factor, and the more obvious the surround effect.
• The smaller the value in the percentage column, the smaller the value of the surround factor, and the less obvious the surround effect.
  • the amplitude of the second audio data of the electronic device 200 can be changed by changing the value of the percentage column, so as to adjust the volume of the second audio data played by the electronic device 200 .
  • the electronic device 200 may control the motor vibration based on the second audio data.
  • the intensity of the motor vibration is determined by the amplitude of the second audio data.
  • the amplitude of the second audio data of the electronic device 200 can be changed by changing the value in the percentage column to adjust the volume of the second audio data played by the electronic device 200 .
  • the electronic device 100 can synchronously adjust multiple sound effect modes in the smart enhanced mode based on the value in the percentage column.
  • the loudness, surround sound range, vibration intensity, etc. of the second audio data played by the electronic device 200 can be adjusted according to the percentage value.
• As long as the value of the percentage column is greater than zero, the electronic device 100 and the electronic device 200 play the corresponding audio data according to the sound effect mode.
• For example, the percentage column of the electronic device 100 is divided into ten equal parts, and the electronic device 100 may set the initial value of the percentage column of each sound effect mode to 50%. If the value of the percentage column is adjusted to be greater than 50%, the influence factor of the corresponding channel processing method in the sound effect mode increases; if the value of the percentage column is adjusted to be less than 50%, the influence factor of the corresponding channel processing method in the sound effect mode decreases.
• In the surround enhancement mode, the influence factor can be understood as the surround factor in the surround channel processing algorithm.
• When the value of the percentage column is greater than 50%, the value of the surround factor in the surround channel processing algorithm can be increased according to the preset correspondence to enhance the expansion effect of the surround channel; the expansion effect is stronger than that of the sound effect mode with a percentage value of 50%.
• When the value of the percentage column is less than 50%, the value of the surround factor in the surround channel processing algorithm can be reduced according to the preset correspondence, weakening the expansion effect of the surround channel so that it is weaker than that of the sound effect mode with a percentage value of 50%.
• In this way, the electronic device 100 and the electronic device 200 playing the audio data together sounds better than a single electronic device playing the audio data alone, giving users a better experience.
  • for example, when the value of the percentage column is 50%, the loudness of the second audio data is 5 dB; when the value of the percentage column is adjusted to 20%, the loudness coefficient takes the value 0.4 and the loudness of the second audio data is adjusted to 2 dB. Here, the influence factor can be regarded as the loudness coefficient; a sketch of this correspondence follows.
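The quoted numbers are consistent with a simple proportional rule (coefficient = percent / 50, so 50% gives 1.0 and 20% gives 0.4). The sketch below reproduces the example under that assumed rule; the patent does not spell out the actual correspondence table.

```python
# A minimal sketch reproducing the document's loudness example under the
# assumed rule coefficient = percent / 50 (an assumption that happens to
# match the quoted numbers, not a rule stated in the patent).

def loudness_coefficient(percent: float) -> float:
    """Influence factor acting as a loudness coefficient (1.0 at 50%)."""
    return percent / 50.0

base_loudness_db = 5.0              # loudness at the 50% default value
coeff = loudness_coefficient(20.0)  # 0.4
print(base_loudness_db * coeff)     # 2.0 -> matches "adjusted to 2 dB"
```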
  • the electronic device 100 may display a percentage column 641 when displaying one or more sound effect mode options.
  • the percentage column 641 may be used to change the intensity of the sound effect mode selected by the electronic device 100.
  • the percentage column 641 may include a slider 641A.
  • the electronic device 100 may receive an input from the user dragging the slider 641A to the left to reduce the intensity of the sound effect mode.
  • the electronic device 100 may receive an input from the user dragging the slider 641A to the right to increase the intensity of the sound effect mode. It can be understood that the specific operation of reducing/increasing the intensity of the sound effect mode can be referred to the above embodiments, and will not be described again here.
  • the percentage column 641 may also include numerical information 641B.
  • the numerical information 641B may be used to represent the value of the percentage column, to indicate to the user the degree to which the current sound effect mode takes effect.
  • the value of the numerical information 641B in the percentage column 641 of the electronic device 100 is a default value of 50%.
  • the electronic device 100 can directly send the second audio data to the electronic device 200, and the electronic device 200 can play the second audio data.
  • the electronic device 100 can reduce the value of the percentage column 641 after receiving the input of dragging the slider 641A to the left; for example, the value of the numerical information 641B is reduced to 20%.
  • the electronic device 100 can process the second audio data based on the value of the percentage column and then send it to the electronic device 200, and the electronic device 200 can play the processed second audio data. It can be understood that the surround effect in a scene where the value of the percentage column is 50% is stronger than the surround effect in a scene where the value is 20%. For example, when the value of the percentage column is 20%, the distance between the simulated virtual sound source and the user may be 20 cm, so that the user feels that the width of the sound source extends 0.2 m to each side along the central axis of the electronic device 100; when the value is 50%, the distance between the virtual sound source simulated by the surround channel data and the user may be 50 cm, so that the user feels that the width of the sound source extends 0.5 m to each side along the central axis of the electronic device 100.
  • the farther the virtual sound source is from the user, the more obvious the surround effect. It can be understood that the distance may also take other values, which is not limited in the embodiments of this application.
  • the electronic device 100 can process the first audio data based on the pose data of the electronic device 200 when the relative positions of the electronic device 200 and the electronic device 100 change, so that the user perceives that the sound source is located directly in front of the user's line of sight.
  • the second audio data can likewise be processed based on the pose data of the electronic device 200 so that the user perceives that the sound source is located at the electronic device 100. In this way, the user can perceive the location of the sound source, which gives the user a better sense of space.
  • for example, the relative positions of the electronic device 200 and the electronic device 100 change because the rotation of the user's head changes the position of the electronic device 200.
  • the electronic device 200 can send the pose data of the electronic device 200 to the electronic device 100, and the electronic device 100 can obtain an angle coefficient based on the pose data.
  • the angle coefficient can represent the orientation of the electronic device 200 relative to the electronic device 100 .
  • positional movement of the electronic device 200 can cause the relative positions of the electronic device 200 and the electronic device 100 to change, and positional movement of the electronic device 100 may also cause the relative positions of the two devices to change.
  • in these cases, the electronic device 100 may obtain the angle coefficient based on the pose data of the electronic device 200, or obtain the angle coefficient based on the pose data of the electronic device 100 together with the pose data of the electronic device 200.
  • the electronic device 100 can use the angle coefficient to change the phase difference between the left channel data and the right channel data in the first audio data, so that when the user hears the sound played by the electronic device 100, the user feels that the sound source is located in front of the user's line of sight.
  • specifically, the electronic device 100 can obtain the angle coefficient based on the pose data of the electronic device 200, and then process the first audio data based on the angle coefficient to obtain corrected first audio data. The electronic device 100 plays the first audio data corrected based on the angle coefficient, so that the user feels that the sound source is in front of the user's line of sight, as shown in FIG. 7B.
  • the first audio data played by the electronic device 100 can make the user feel that the sound source is in front of the user's line of sight.
  • as shown in (a) of FIG. 7C, speaker A plays the data of the left channel of the first audio data, speaker B plays the data of the right channel of the first audio data, and the sound wave signals played by speaker A and speaker B can make the user perceive the virtual sound source shown in (a) of FIG. 7C.
  • the corrected first audio data played by the electronic device 100 can also make the user feel that the sound source is located in front of the user's line of sight, as shown in (b) of FIG. 7C: speaker A plays the data of the left channel of the first audio data, speaker B plays the data of the right channel of the first audio data, and the sound wave signals played by speaker A and speaker B can make the user perceive the virtual sound source shown in (b) of FIG. 7C. It can be understood that the position of the virtual sound source shown in (a) of FIG. 7C is the same as the position of the virtual sound source shown in (b) of FIG. 7C. In this way, no matter how the user moves, the sound is heard as coming from directly in front of the user's line of sight. One way such a correction could be realized is sketched below.
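As a rough illustration of how an angle coefficient could change the left/right phase difference, the following sketch delays one channel by an interaural-time-difference estimate derived from the head-rotation angle. The head radius, the simple r·sin(θ)/c model, and the whole-sample delay are assumptions for illustration; the patent only states that the phase difference is adjusted.

```python
# A minimal sketch of correcting stereo data with an angle coefficient by
# shifting the phase (here: an integer-sample delay) between the channels.
import numpy as np

SAMPLE_RATE = 48_000
HEAD_RADIUS_M = 0.09     # assumed effective head radius
SPEED_OF_SOUND = 343.0   # m/s

def corrected_stereo(left: np.ndarray, right: np.ndarray, angle_rad: float):
    """Delay one channel by the interaural time difference for angle_rad."""
    itd = HEAD_RADIUS_M * np.sin(angle_rad) / SPEED_OF_SOUND   # seconds
    shift = int(round(abs(itd) * SAMPLE_RATE))                 # whole samples
    if itd > 0:     # source should appear to the left -> delay the right ear
        right = np.concatenate([np.zeros(shift), right])[: len(right)]
    elif itd < 0:   # source should appear to the right -> delay the left ear
        left = np.concatenate([np.zeros(shift), left])[: len(left)]
    return left, right

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)
l, r = corrected_stereo(tone.copy(), tone.copy(), np.deg2rad(30))
```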
  • the electronic device 100 can process the first audio data based on the relative positions of the electronic device 100 and the electronic device 200 so that the virtual sound source simulated by the sound signal of the electronic device 100 is always located in front of the user's line of sight. In this way, the user experiences audio playback in which the sound always follows the user's movement.
  • the electronic device 100 can adjust one or more of the amplitude and the phase difference between the data of the left channel and the data of the right channel played by the electronic device 100 based on the relative position relationship, so that the sound signal of the electronic device 100 simulates a virtual sound source located in front of the user's line of sight.
  • the electronic device 100 may adjust the beam direction of the sound signal of the speaker of the electronic device 100 to face the direction of the electronic device 200. In this way, since the user wears the electronic device 200, adjusting the beam direction of the sound signal of the electronic device 100 toward the electronic device 200 makes it easier for the user to hear the sound signal played by the electronic device 100.
  • taking north as the reference direction as an example, the change of the sound beam of the electronic device 100 before and after movement is illustrated: when the electronic device 100 is located at clock orientation 1 of the electronic device 200 (for example, the 12 o'clock direction), the angle between the beam direction of the sound wave beam emitted by speaker A and the north direction is angle 1, and the angle between the beam direction of the sound wave beam emitted by speaker B and the north direction is angle 2; when the electronic device 100 is located at clock orientation 2 of the electronic device 200 (for example, the 10 o'clock direction) and the electronic device 100 plays the first audio data, the angle between the beam direction of the sound wave beam emitted by speaker A and the north direction is angle 3, and the angle between the beam direction of the sound wave beam emitted by speaker B and the north direction is angle 4. Clock orientation 1 is different from clock orientation 2, angle 1 is different from angle 3, and angle 2 is different from angle 4.
  • it should be noted that the north direction is only a reference direction used to illustrate the direction of the sound signal beam; any other direction can also be used as the reference direction, which is not limited in the embodiments of this application. A delay-and-sum sketch of such beam steering follows.
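The beam steering described here could be realized with a delay-and-sum arrangement, as in the following sketch. The speaker spacing, the broadside geometry, and the clock-to-angle conversion are illustrative assumptions, not parameters from the patent.

```python
# A minimal delay-and-sum sketch of steering a two-speaker beam toward the
# direction of the electronic device 200.
import numpy as np

SAMPLE_RATE = 48_000
SPEAKER_SPACING_M = 0.15   # assumed distance between speaker A and speaker B
SPEED_OF_SOUND = 343.0

def steering_delays(target_angle_deg: float):
    """Per-speaker delays (s) that tilt the main lobe toward target_angle_deg,
    measured from the array broadside (the device's central axis)."""
    tau = SPEAKER_SPACING_M * np.sin(np.deg2rad(target_angle_deg)) / SPEED_OF_SOUND
    # Delay speaker A or speaker B depending on which side the target is on.
    return (tau, 0.0) if tau >= 0 else (0.0, -tau)

def apply_delay(signal: np.ndarray, delay_s: float) -> np.ndarray:
    n = int(round(delay_s * SAMPLE_RATE))
    return np.concatenate([np.zeros(n), signal])[: len(signal)]

# Device 200 moves from roughly the 12 o'clock to the 10 o'clock direction:
for clock_angle in (0.0, -60.0):   # degrees relative to broadside (assumed)
    d_a, d_b = steering_delays(clock_angle)
    print(f"target {clock_angle:+.0f} deg -> delays A={d_a*1e6:.0f} us, B={d_b*1e6:.0f} us")
```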
  • the electronic device 100 can use the angle coefficient to change the phase difference between the left channel data and the right channel data in the second audio data, so that when the user hears the sound played by the electronic device 200, the user feels that the sound source is located at the electronic device 100.
  • specifically, the electronic device 100 can obtain the angle coefficient based on the pose data of the electronic device 200, and then process the second audio data based on the angle coefficient to obtain corrected second audio data. The electronic device 100 can send the corrected second audio data to the electronic device 200, and the electronic device 200 can play the second audio data corrected based on the angle coefficient, so that the user feels that the sound source is located at the electronic device 100, as shown in FIG. 7D.
  • the second audio data played by the electronic device 200 can make the user feel that the sound source is located at the electronic device 100.
  • as shown in (a) of FIG. 7E, speaker C plays the data of the left channel of the second audio data, speaker D plays the data of the right channel of the second audio data, and the sound wave signals played by speaker C and speaker D can make the user perceive the virtual sound source shown in (a) of FIG. 7E.
  • the corrected second audio data played by the electronic device 200 can also make the user feel that the sound source is located at the electronic device 100, as shown in (b) of FIG. 7E: speaker C plays the data of the left channel of the second audio data, speaker D plays the data of the right channel of the second audio data, and the sound wave signals played by speaker C and speaker D can make the user perceive the virtual sound source shown in (b) of FIG. 7E. It can be understood that the position of the virtual sound source shown in (a) of FIG. 7E is the same as the position of the virtual sound source shown in (b) of FIG. 7E. In this way, it can be ensured that no matter how the user moves, the sound is heard as coming from the electronic device 100.
  • the electronic device 100 can adjust the sound signal of the speaker of the electronic device 200 so that the virtual sound source simulated by the sound signal played by the electronic device 200 is located at the electronic device 100.
  • the sound signal emitted by the electronic device 200 can thus simulate a virtual sound source located at the electronic device 100, making the user think that the sound is emitted by the electronic device 100.
  • the electronic device 100 may adjust one or more of the phase difference and the amplitude of the data of the left channel and the data of the right channel of the second audio data of the electronic device 200 based on the position of the electronic device 200, so as to adjust the position of the virtual sound source simulated by the sound signal played by the electronic device 200.
  • FIG. 7E also illustrates the relationship between the beam direction of the sound signal played by the electronic device 200 and the north direction. In (a) of FIG. 7E, taking north as the reference direction as an example, the change of the sound beam of the electronic device 200 before and after movement is illustrated: when the electronic device 100 is located at clock orientation 1 of the electronic device 200 (for example, the 12 o'clock direction), the angle between the beam direction of the sound wave beam emitted by speaker C and the north direction is angle 1, and the angle between the beam direction of the sound wave beam emitted by speaker D and the north direction is angle 2; when the electronic device 100 is located at clock orientation 2 of the electronic device 200 (for example, the 10 o'clock direction) and the electronic device 200 plays the second audio data, the angle between the beam direction of the sound wave beam emitted by speaker C and the north direction is angle 3, and the angle between the beam direction of the sound wave beam emitted by speaker D and the north direction is angle 4. Clock orientation 1 is different from clock orientation 2, angle 1 is different from angle 3, and angle 2 is different from angle 4.
  • it should be noted that the north direction is only a reference direction used to illustrate the direction of the sound signal beam; any other direction can also be used as the reference direction, which is not limited in the embodiments of this application.
  • the electronic device 100 can adjust the beam direction of the sound signal of the electronic device 100 based on changes in the relative positions of the electronic device 100 and the electronic device 200, and set the beam direction toward the electronic device 200.
  • since the electronic device 200 is worn on a body part of the user, there is no need to change the beam direction of the sound signal of the electronic device 200; the beam direction of the sound signal emitted by the speaker of the electronic device 200 is always toward the user.
  • the electronic device 200 can change the position of the virtual sound source by changing the phase difference of the left and right channel data in the second audio data only when it supports stereo playback. Therefore, only when the electronic device 200 plays audio data including a left channel and a right channel, or audio data including a left surround channel and a right surround channel, can the electronic device 100 process the second audio data based on the angle coefficient so that the user feels that the sound source is located at the electronic device 100.
  • the electronic device 100 may process the first audio data based on the angle coefficient, or may process the second audio data based on the angle coefficient.
  • when the sound effect mode of the electronic device 100 is the dialogue enhancement mode, the rhythm enhancement mode, or an intelligent enhancement mode including one or more of the dialogue enhancement mode and the rhythm enhancement mode, the electronic device 100 can process only the first audio data based on the angle coefficient; the second audio data cannot be processed based on the angle coefficient, since in these modes the second audio data does not carry a stereo channel pair. This eligibility check is sketched below.
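The constraint above reduces to a check on which channels the second audio data carries. A minimal sketch, with illustrative channel names that are not identifiers from the patent:

```python
# Angle-coefficient correction of the second audio data is possible only when
# it contains a stereo pair (left/right or left-surround/right-surround).

STEREO_PAIRS = ({"left", "right"}, {"left_surround", "right_surround"})

def can_correct_second_audio(channels: set[str]) -> bool:
    return any(pair <= channels for pair in STEREO_PAIRS)

print(can_correct_second_audio({"left_surround", "right_surround", "center"}))  # True
print(can_correct_second_audio({"low_frequency"}))  # False: rhythm enhancement
print(can_correct_second_audio({"center"}))         # False: dialogue enhancement
```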
  • the electronic device 200 can send the pose data of the electronic device 200 to the electronic device 100 every preset time (for example, 1 s), or the electronic device 100 can obtain the pose data of the electronic device 200 from the electronic device 200 every preset time. In this way, the electronic device 100 can process the first audio data or the second audio data based on the pose data of the electronic device 200 at each preset interval.
  • alternatively, the electronic device 200 may send the pose data of the electronic device 200 to the electronic device 100 only when it detects positional movement of the electronic device 200. In this way, the power consumption of the electronic device 100 in adjusting the first audio data or the second audio data based on the pose data can be reduced. Both reporting strategies are sketched below.
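Both reporting strategies can be summarized in a small decision function. The `Pose` type, the yaw-only representation, and the movement threshold below are assumptions for illustration only:

```python
# A minimal sketch contrasting periodic pose uploads with movement-triggered
# uploads from the electronic device 200 to the electronic device 100.
from dataclasses import dataclass

@dataclass
class Pose:
    yaw_deg: float   # assumed 1-D orientation of the electronic device 200

MOVEMENT_THRESHOLD_DEG = 2.0   # assumed "position moved" threshold

def should_send(last_sent: Pose | None, current: Pose,
                seconds_since_last: float, period_s: float = 1.0,
                on_move_only: bool = False) -> bool:
    """Decide whether device 200 uploads its pose data to device 100."""
    if on_move_only:   # movement-triggered reporting (saves power)
        return last_sent is None or \
            abs(current.yaw_deg - last_sent.yaw_deg) > MOVEMENT_THRESHOLD_DEG
    return seconds_since_last >= period_s   # plain periodic reporting (e.g. 1 s)

print(should_send(Pose(0.0), Pose(0.5), seconds_since_last=1.2))   # True (period elapsed)
print(should_send(Pose(0.0), Pose(0.5), 1.2, on_move_only=True))   # False (barely moved)
```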
  • next, the audio playback method provided by the embodiments of this application is introduced with reference to another application scenario.
  • the electronic device 100 may be a tablet computer, and the electronic device 200 may be a smart watch. It should be noted that the electronic device 100 is not limited to a tablet computer, and may also be other electronic devices including a display screen and multiple speakers, such as a mobile phone.
  • the electronic device 100 includes one or more speakers, and the one or more speakers include speaker E and speaker F.
  • speaker E is located on the left side of the display screen
  • speaker F is located on the right side of the display screen.
  • the electronic device 200 includes at least one speaker including a speaker G. When the electronic device 200 is worn, the speaker G is located near the user's wrist.
  • the electronic device 100 supports playing audio data including left and right channels, and the electronic device 200 only supports playing mono audio data. It should be noted that due to the limitation of the viewing angle in FIG. 8, the speaker F is blocked and cannot be shown in FIG. 8.
  • the steps for the electronic device 100 and the electronic device 200 to play audio data in the first mode are as follows:
  • the electronic device 100 and the electronic device 200 enable the Bluetooth function.
  • the electronic device 100 can display one or more Bluetooth device options in the Bluetooth interface.
  • the Bluetooth device option can represent a nearby searched Bluetooth device and can be used to trigger the electronic device 100 to establish a communication connection with the electronic device corresponding to the Bluetooth device option.
  • the one or more Bluetooth device options include a Bluetooth device option for triggering the establishment of a Bluetooth connection with the electronic device 200.
  • the electronic device 100 may respond to an input for the Bluetooth device option for triggering the establishment of a Bluetooth connection with the electronic device 200, and establish a Bluetooth connection with the electronic device 200.
  • the Bluetooth connection is only an example; the electronic device 100 and the electronic device 200 can also establish other wireless or wired communication connections, which is not limited in the embodiments of this application.
  • the electronic device 100 may display the Bluetooth interface 901 as shown in FIG. 9A after establishing a Bluetooth connection with the electronic device 200.
  • the Bluetooth interface 901 displayed by the electronic device 100 may include one or more Bluetooth device options.
  • the one or more Bluetooth device options include a Bluetooth device option 903.
  • the Bluetooth device option 903 may also display a device name of the electronic device 200.
  • the device name of the electronic device 200 is "Iwatch".
  • the Bluetooth device option 903 can also display connection status information.
  • the connection status information can be used to indicate the connection status of the electronic device 100 and the electronic device 200 (for example, not connected, connected, etc.). Here, the connection status is connected.
  • the Bluetooth interface 901 may also include a tab 904 that includes an independent audio option 905 and a collaborative audio option 906.
  • the independent audio option 905 can be used to trigger the electronic device 100 to play sounds by itself.
  • Collaborative audio option 906 may be used to trigger electronic device 100 to play sounds through electronic device 100 and electronic device 200 simultaneously.
  • the tab 904 can also display a device image of the electronic device 200.
  • the electronic device 100 may receive an input (e.g., a click) for the collaborative audio option 906 and, in response to the input, determine one or more sound effect modes supported by the electronic device 100 and the electronic device 200.
  • for the process of the electronic device 100 determining the one or more sound effect modes, please refer to the above embodiments; details are not described again here.
  • the electronic device 100 may display the sound effect selection interface 911 as shown in FIG. 9B after determining the one or more sound effect modes supported by the electronic device 100 and the electronic device 200.
  • the electronic device 100 displays a sound effect selection interface 911.
  • the sound effect selection interface 911 includes a sound effect mode list 912.
  • the sound effect mode list 912 includes one or more sound effect mode options.
  • the one or more sound effect mode options correspond to the one or more sound effect modes determined by the electronic device 100.
  • the sound effect mode option may include an identification of the sound effect mode, and the sound effect mode option may be used to trigger the electronic device 100 to process the sound source data based on the sound effect mode indicated by the sound effect mode option.
  • the one or more sound effect mode options may include, but are not limited to, sound effect mode option 912A, sound effect mode option 912B, and sound effect mode option 912C.
  • the sound effect mode option 912A corresponds to the intelligent enhancement mode
  • sound effect mode option 912B corresponds to the rhythm enhancement mode
  • sound effect mode option 912C corresponds to the dialogue enhancement mode.
  • the description of each sound effect mode can be found in the embodiment shown in Figure 3, and will not be described again here.
  • the sound effect mode option 912A is selected.
  • the sound effect selection interface 911 may also include a return control, which may be used to trigger the electronic device 100 to return to the previous level interface.
  • the electronic device 100 may also display a percentage column in the sound effect selection interface 911.
  • the electronic device 100 may receive an input to return to the desktop and display the desktop including one or more application icons, where the one or more application icons may include a music application icon.
  • the electronic device 100 may receive an input for the music application icon and display the music playing interface 921.
  • the music playback interface 921 may include but is not limited to song names, playback controls, etc.
  • the playback control may be used to trigger the electronic device 100 to play the song indicated by the song name.
  • after the electronic device 100 receives the input for the playback control, in response to the input, the electronic device 100 processes the sound source data (i.e., the audio data of the song) based on the intelligent enhancement mode to obtain the first audio data and the second audio data.
  • after the electronic device 100 obtains the first audio data and the second audio data, it can send the second audio data to the electronic device 200.
  • the electronic device 100 and the electronic device 200 may start playing respective audio data at the same time, as shown in FIG. 9C.
  • the electronic device 100 plays the first audio data through the speaker E and the speaker F, and the electronic device 200 plays the second audio data through the speaker G. It can be understood that when the electronic device 100 is playing a song, the electronic device 100 cancels the display of the playback control and displays a pause control 923.
  • the pause control 923 can be used to trigger the electronic device 100 and the electronic device 200 to stop playing audio data.
  • the electronic device 200 can obtain a motor pulse signal based on the second audio data when playing the second audio data, and by inputting the motor pulse signal into the motor of the electronic device 200, control the motor to vibrate at a corresponding vibration frequency as the frequency of the audio signal in the second audio data changes. In this way, the electronic device 200 can add a tactile dimension for the user and play the audio source data through hearing and touch simultaneously, improving the user experience.
  • the electronic device 100 may also provide a rhythm enhancement mode.
  • the electronic device 100 can extract the data of the low-frequency channel in the sound source data, and send the second audio data including the data of the low-frequency channel to the electronic device 200.
  • the electronic device 200 can control the motor vibration based on the second audio data. In this way, even if the electronic device 200 does not include a speaker, the electronic device 200 can provide the user with a better listening experience through vibration. One way to extract the low-frequency channel is sketched below.
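Extracting the low-frequency channel could be done with an ordinary low-pass filter over a downmix of the source, as in this sketch; the 120 Hz cutoff and the 4th-order Butterworth design are assumed values, not taken from the patent.

```python
# A minimal sketch of deriving a low-frequency channel for the rhythm
# enhancement mode from stereo source data.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48_000

def extract_low_frequency(stereo: np.ndarray, cutoff_hz: float = 120.0) -> np.ndarray:
    """stereo has shape (n_samples, 2); returns mono low-frequency channel data."""
    mono = stereo.mean(axis=1)   # downmix left/right before filtering
    sos = butter(4, cutoff_hz, btype="low", fs=SAMPLE_RATE, output="sos")
    return sosfilt(sos, mono)

# second_audio_data = extract_low_frequency(source_pcm)  # then sent to device 200
```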
  • the electronic device 200 can set the vibration intensity of the motor to 5 levels, gradually strengthening from level 1 to level 5. The electronic device 200 can divide the second audio data into five amplitude ranges according to the amplitude of the second audio data, with the amplitude value gradually increasing from amplitude range 1 to amplitude range 5. The vibration intensity corresponding to audio data falling in amplitude range 1 of the second audio data is level 1, the vibration intensity corresponding to audio data falling in amplitude range 2 is level 2, and so on. In this way, the intensity of the motor vibration changes as the amplitude of the audio data changes, as in the sketch below.
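The five-range mapping can be sketched directly; uniform bucket edges over the full amplitude scale are an assumption, since the patent does not give the range boundaries.

```python
# A minimal sketch mapping the amplitude of the second audio data to the
# motor's vibration levels 1..5 via five equal amplitude ranges.
import numpy as np

def vibration_level(frame: np.ndarray, full_scale: float = 1.0) -> int:
    """Map one frame of the second audio data to a vibration level in 1..5."""
    peak = float(np.max(np.abs(frame)))
    edges = np.linspace(0.0, full_scale, 6)   # amplitude ranges 1..5
    return int(np.clip(np.searchsorted(edges, peak, side="right"), 1, 5))

print(vibration_level(np.array([0.05, -0.1])))   # small amplitude -> level 1
print(vibration_level(np.array([0.7, -0.95])))   # large amplitude -> level 5
```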
  • the electronic device 100 can provide users with multiple sound effect modes to achieve different audio playback effects and improve user experience.
  • when the electronic device 100 establishes communication connections with two or more electronic devices that support playing audio data, the electronic device 100 can cooperate with the two or more electronic devices to play the audio source data.
  • the two or more electronic devices that support playing audio data may be wearable devices. In this way, it is convenient for users to realize coordinated sound production by multiple devices through portable electronic devices in various scenarios and enhance the sense of surround.
  • the electronic device 100 may display one or more sound effect modes, and the one or more sound effect modes may instruct the electronic device 100 to process the sound source data to obtain audio data played by the electronic device 100 and the two or more electronic devices.
  • the one or more sound effect modes may include a sound effect mode in which the electronic device 100 plays the first audio data and the two or more electronic devices jointly play the second audio data (or in which some of the two or more electronic devices play the first audio data and the other electronic devices play the second audio data).
  • the one or more sound effect modes may include sound effect modes in which each electronic device plays different audio data obtained based on the sound source data, that is, the electronic device 100 and the two or more electronic devices play different audio data.
  • for example, the electronic device 100 is a mobile phone, and the multiple electronic devices that support playing audio data may include but are not limited to a smart watch, smart glasses, headphones, etc.
  • the mobile phone can display collaborative playback controls after establishing communication connections with both smart watches and smart glasses.
  • the collaborative playback controls can be used to trigger the mobile phone, smart watches and smart glasses to collaboratively play audio source data.
  • the electronic device 100 may display one or more sound effect mode options; for example, the electronic device 100 provides a sound effect mode option 1.
  • when sound effect mode option 1 is selected, the mobile phone can be used to play the left channel data and the right channel data obtained based on the sound source data, the smart watch can be used to play the low-frequency channel data obtained based on the sound source data, and the smart glasses can be used to play the left surround channel data and the right surround channel data obtained based on the sound source data, as in the sketch below.
  • multi-device multi-channel collaborative playback can be achieved through portable devices, bringing users a surround sound field experience.
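The channel assignment behind sound effect mode option 1 amounts to routing channel subsets of the source to each device. A minimal sketch with illustrative device and channel names (the labels are not identifiers from the patent):

```python
# Splitting a 5.1-style source so each device plays its own channel subset.

SOURCE_CHANNELS = ["left", "right", "center", "left_surround",
                   "right_surround", "low_frequency"]

ASSIGNMENT = {
    "phone":         ["left", "right"],
    "smart_watch":   ["low_frequency"],
    "smart_glasses": ["left_surround", "right_surround"],
}

def split_for_devices(frames: dict[str, list[float]]) -> dict[str, dict]:
    """frames maps channel name -> PCM samples; returns per-device channel data."""
    return {device: {ch: frames[ch] for ch in chans}
            for device, chans in ASSIGNMENT.items()}

demo = {ch: [0.0, 0.1] for ch in SOURCE_CHANNELS}
for device, data in split_for_devices(demo).items():
    print(device, sorted(data))
```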
  • the description of the audio data played by each electronic device corresponding to sound effect mode option 1 is only an example and does not constitute a specific limitation.
  • the electronic device 100 may also receive user input to select the electronic device 100 to cooperate with one or more electronic devices among the plurality of electronic devices that support playing audio data to play the audio source data.

Abstract

本申请公开了一种音频播放方法、系统及相关装置。第一电子设备与第二电子设备建立通信连接,第一电子设备获取第二电子设备的设备信息,并基于该第二电子设备的设备信息确定出一个或多个音效模式,一个或多个音效模式包括第一音效模式。第一电子设备可以响应于选中第一音效模式的输入,基于第一音效模式,处理第一电子设备的音源数据,得到第一音频数据和第二音频数据。第一电子设备播放第一音频数据并且第二电子设备同时播放第二音频数据,实现第一电子设备和第二电子设备协同播放音源数据。这样,由多个电子设备共同播放音频数据,打破了单个电子设备的扬声器布局的限制,使得多个电子设备支持更多的音效模式。

Description

一种音频播放方法、系统及相关装置
本申请要求于2022年08月30日提交中国专利局、申请号为202211049076.4、申请名称为“一种音频播放方法、系统及相关装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端技术领域,尤其涉及一种音频播放方法、系统及相关装置。
背景技术
当前,多个电子设备之间建立有通信连接时,电子设备A(例如,手机、平板、PC等)可以通过自身的扬声器播放音频数据,或者,电子设备A可以将音频数据发送到电子设备B(例如,智能眼镜、挂颈式智能耳机、智能手表等),通过电子设备B的扬声器播放该音频数据。在同一时刻,只有一个电子设备可以播放该音频数据。
发明内容
本申请提供了一种音频播放方法、系统及相关装置,实现了第一电子设备与第二电子设备建立通信连接,第一电子设备获取第二电子设备的设备信息,并基于该第二电子设备的设备信息确定出一个或多个音效模式,一个或多个音效模式包括第一音效模式。第一电子设备可以响应于选中第一音效模式的输入,基于第一音效模式,处理第一电子设备的音源数据,得到第一音频数据和第二音频数据。第一电子设备播放第一音频数据并且第二电子设备同时播放第二音频数据,实现第一电子设备和第二电子设备协同播放音源数据。
第一方面,本申请提供了一种音频播放方法,方法应用于音频播放系统,音频播放系统包括第一电子设备和第二电子设备,第一电子设备与第二电子设备建立通信连接,方法包括:
第一电子设备显示协同播放控件,协同播放控件用于指示第一电子设备与第二电子设备共同播放音源数据;
第一电子设备接收针对协同播放控件的第一输入;
第一电子设备响应于第一输入,显示多个音效模式选项,多个音效模式选项包括第一音效模式选项;
第一电子设备响应于针对第一音效模式选项的第二输入,标记第一音效模式选项;
第一电子设备将第二音频数据发送至第二电子设备;
第二电子设备播放第二音频数据;
当第二电子设备播放第二音频数据时,第一电子设备播放第一音频数据,第一音频数据和第二音频数据均至少包括音源数据的至少部分内容。
这样,由多个电子设备共同播放音频数据,打破了单个电子设备的扬声器布局的限制,使得多个电子设备支持更多的音效模式,例如,可以通过多个电子设备的扬声器实现较明显的左右耳环绕效果。并且,可以根据第二电子设备的设备类型,给用户提供对应的音效模式,实现不同电子设备的不同音效模式。例如,若第二电子设备为智能眼镜,由于智能眼镜包括位于左镜腿的扬声器和右镜腿的扬声器,可以提供基于左右声道、左右环绕声道、人声对白等多种处理方式得到的音效模式,从而增强音源数据的响度、环绕感或者人声清晰度等,使得用户观影沉浸感和包围感增强,提升用户体验。再例如,若第二电子设备为智能手表,由于智能手表位于手腕,可通过低频音频信号增强,实现低频震感传播,加强用户节奏感感知效果,提升用户体验。
在一种可能的实现方式中,声源数据可以由其他电子设备(例如,第二电子设备)发送至第一电子设备。
在一种可能的实现方式中,第一音频数据和第二音频数据均至少包括音源数据的至少部分内容。其中,至少部分内容可以为音源数据中部分声道的数据和/或部分频段的数据。
在一种可能的实现方式中,在第一电子设备接收第一输入之前,第一电子设备接收到播放音源数据的第三输入;或者,
第一电子设备在接收到第二输入之后,接收到播放音源数据的第三输入。
这样,用户可以在播放音源数据之前,选择需要的音效模式。也可以在播放音源数据时,选择需要的音效模式,或者切换音效模式。
在一种可能的实现方式中,当第一音效模式为节奏增强模式、对白增强模式、环绕增强模式、全能增强模式或智能增强模式中的任一种时,第一音频数据包括的至少部分声道与第二音频数据包括的至少部分声道不相同;或/和,当第一音效模式为响度增强模式时,第一音频数据的至少部分声道与第二音频数据的至少部分声道相同。
在一种可能的实现方式中,第一音效模式选项为节奏增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括低频声道的数据。这样,可以增强低频声音的播放。
在一种可能的实现方式中,第二电子设备播放第二音频时,方法还包括:
第二电子设备将第二音频数据转换为脉冲信号;第二电子设备将脉冲信号传输给第二电子设备的马达;第二电子设备的马达振动。
这样,第二电子设备可以在播放第二音频数据的同时,随着第二音频数据的频率变化而振动,增强用户触觉体验。
在一种可能的实现方式中,第二电子设备包括多个扬声器,第二电子设备使用所有扬声器播放第二音频数据。
在一种可能的实现方式中,第一音效模式选项为对白增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括中置声道的数据。这样,可以体现人声,在视频播放时,突出人物台词。
在一种可能的实现方式中,第一音效模式选项为响度增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左声道的数据和右声道的数据。
在一种可能的实现方式中,当第二电子设备包括1个扬声器,第二电子设备使用1个扬声器播放第二音频数据的左声道的数据和右声道的数据;或者,
当第二电子设备包括2个扬声器,2个扬声器包括第一扬声器和第二扬声器,第二电子设备使用第一扬声器播放左声道的数据,使用第二扬声器播放右声道的数据;或者,
若第二电子设备包括3个及以上扬声器,3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用第一扬声器播放左声道的数据,使用第二扬声器播放右声道的数据,使用第三扬声器播放第二音频数据的左声道的数据和右声道的数据。
这样,可以基于第二电子设备的扬声器数量,合理使用扬声器进行第二音频数据的声道数据的播放,充分利用扬声器资源。
在一种可能的实现方式中,第一音效模式选项为环绕增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左环绕声道的数据和右环绕声道的数据。这样,可以增强环绕感。
在一种可能的实现方式中,第二电子设备包括2个扬声器,2个扬声器包括第一扬声器和第二扬声器,第二电子设备使用第一扬声器播放左环绕声道的数据,使用第二扬声器播放右环绕声道的数据;或者,
当第二电子设备包括3个及以上扬声器,3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用第一扬声器播放左环绕声道的数据,使用第二扬声器播放右环绕声道的数据,使用第三扬声器播放第二音频数据的左环绕声道的数据和右环绕声道的数据。
在一种可能的实现方式中,当第一电子设备包括2个扬声器,2个扬声器包括第四扬声器和第五扬声器,第一电子设备使用第四扬声器播放左声道的数据,使用第五扬声器播放右声道的数据;或者,
当第一电子设备包括3个及以上扬声器,3个及以上扬声器包括第四扬声器、第五扬声器和第六扬声器,使用第四扬声器播放左声道的数据,使用第五扬声器播放右声道的数据,使用第六扬声器播放第二音频数据的左声道的数据和右声道的数据。
在一种可能的实现方式中,第一音效模式选项为全能增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左环绕声道的数据、右环绕声道的数据和中置声道的数据。
在一种可能的实现方式中,当第二电子设备包括2个扬声器,2个扬声器包括第一扬声器和第二扬声器,第二电子设备使用第一扬声器播放左环绕声道的数据和中置声道的数据,使用第二扬声器播放右环绕声道的数据和中置声道的数据;或者,
当第二电子设备包括3个及以上扬声器,3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用第一扬声器播放左环绕声道的数据,使用第二扬声器播放右环绕声道的数据,使用第三扬声器播放中置声道的数据。
在一种可能的实现方式中,当第一电子设备包括2个扬声器,2个扬声器包括第四扬声器和第五扬声器,第一电子设备使用第四扬声器播放左环绕声道的数据和中置声道的数据,使用第五扬声器播放右环绕 声道的数据和中置声道的数据;或者,
当第一电子设备包括3个及以上扬声器,3个及以上扬声器包括第四扬声器、第五扬声器和第六扬声器,使用第四扬声器播放左环绕声道的数据,使用第五扬声器播放右环绕声道的数据,使用第六扬声器播放中置声道的数据。
在一种可能的实现方式中,响应于第一输入,第一电子设备显示多个音效模式选项,具体包括:
第一电子设备响应于第一输入,基于第二电子设备的设备类型与存储的设备类型与音效模式选项的对应关系,获取多个音效模式选项;
其中,第二电子设备的设备类型与多个音效模式相对应。
这样,第一电子设备可以在与不同设备类型的电子设备建立连接时,给用户提供适合第一电子设备与第二电子设备的音效模式选项。例如,电子设备的设备类型为智能手表时,智能手表播放环绕声道的数据,也无法带来好的环绕效果,因此,第一电子设备在第二电子设备为智能手表时,不提供环绕增强模式。
在一种可能的实现方式中,第二电子设备的设备类型为智能手表,多个音效模式选项包括智能增强模式选项、响度增强模式选项、节奏增强模式选项和对白增强模式选项中的一种或几种,智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合。这样,用户可以在不知道选择某个音效模式时,基于智能增强模式处理音源数据。
在一种可能的实现方式中,智能增强模式对应的第一电子设备播放的音频数据包括左声道的数据和右声道的数据,第二电子设备播放的音频数据包括左声道的数据、右声道的数据和低频声道的数据。
在一种可能的实现方式中,第二电子设备的设备类型为智能眼镜、挂颈式音箱或蓝牙耳机,多个音效模式选项包括智能增强模式选项、响度增强模式选项、节奏增强模式选项、对白增强模式选项、环绕增强模式选项和全能增强模式选项中的一种或几种,智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合,或者,为环绕增强模式与对白增强模式的组合。
在一种可能的实现方式中,当音源数据属于视频文件,智能增强模式为环绕增强模式与对白增强模式的组合,智能增强模式对应的第一电子设备播放的音频数据包括左声道的数据和右声道的数据,第二电子设备播放的音频数据包括左环绕声道的数据、右环绕声道的数据和中置声道的数据;或者,
当音源数据属于音频文件,智能增强模式为节奏增强模式与响度增强模式的组合,智能增强模式对应的第一电子设备播放的音频数据包括左声道的数据和右声道的数据,第二电子设备播放的音频数据包括左声道的数据、右声道的数据和低频声道的数据。
这样,当第一电子设备播放视频时,可以突出环绕和人声,使得用户可以沉浸享受视频。当第一电子设备播放音频时,可以突出低频,音乐更加动感。
在一种可能的实现方式中,第一电子设备基于播放的视频内容与音乐场景是否相关,确定智能增强模式。这样,即使第一电子设备播放视频,由于视频内容与音源场景相关,也可以将智能增强模式设置为节奏增强模式与响度增强模式的组合,突出低频和增加响度,给用户带来更佳的音乐收听体验。
在一种可能的实现方式中,第一电子设备处理第二音频数据时,加强第二音频数据的细小音频数据的响度。这样,可以突出细小声音,增强细节感。这样,用户在玩游戏可以听到细小声音,观察游戏人物动向,提升游戏体验。用户可以在看电影时,听到明显的背景音,例如,风声、虫鸣等,更加有画面感。
在一种可能的实现方式中,在第一电子设备接收第二输入之前,多个音效模式选项包括被标记的智能增强模式选项;第一电子设备响应于针对第一音效模式选项的第二输入,标记第一音效模式选项,具体包括:
第一电子设备响应于针对第一音效模式选项的第二输入,取消标记智能增强模式选项,并标记第一音效模式选项。
这样,第一电子设备可以提供默认的智能增强选项,在用户不确定选择某个音效模式时,更加智能地智能处理音源数据。
在一种可能的实现方式中,当第二电子设备包括第一扬声器与第二扬声器,且第一音效模式选项为响度增强模式选项、环绕增强模式选项或全能增强模式选项;方法还包括:当第二电子设备和/或第一电子设备姿态变化,第一音频数据不变,并且第一电子设备发送给第二电子设备的第二音频数据随着姿态变化而变化。
这样,第一电子设备和/或第一电子设备姿态变化时,第一电子设备可以处理第二音频数据,使得第二电子设备播放的声音信号模拟得到的虚拟音源始终位于第一电子设备处,声源和画面始终处于同一位置,便于用户移动回到适宜观看画面的位置。
在一种可能的实现方式中,当第一电子设备包括第四扬声器和第五扬声器;方法还包括:当第二电子设备和/或第一电子设备姿态变化,第二音频数据不变,并且第一电子设备播放的第一音频数据随着姿态变化而变化。
这样,第一电子设备可以基于姿态变化,处理第一音频数据,使得第一电子设备播放的声音信号模拟得到的虚拟音源始终处于用户视线前方,无论用户如何移动,也不会出现左右耳收听声音不协调的情形。
在一种可能的实现方式中,当第二电子设备和/或第一电子设备位置变化,第一电子设备的扬声器发出的声波信号的波束方向随着位置变化而变化。
这样,第一电子设备可以将波束方向朝向第二电子设备处,使得用户可以更加清晰地听到第一电子设备的声音。
在一种可能的实现方式中,第一电子设备与第二电子设备的距离为第一距离,第一电子设备播放第一音频数据的音量为第一音量;第一电子设备与第二电子设备的距离为第二距离,第一电子设备播放第一音频数据的音量为第二音量,第一距离小于第二距离,且第一音量小于第二音量。
这样,第一电子设备可以基于与第二电子设备(也能理解为用户)之间的距离,设置第一电子设备的音量。第一电子设备检测到用户与第一电子设备接近时,减小第一电子设备的音量,避免干扰第二音频数据的播放。第一电子设备检测到用户与第一电子设备远离时,增大第一电子设备的音量,避免用户听不清第一电子设备播放的声音。
在一种可能的实现方式中,第一电子设备响应于第一输入,显示多个音效模式时,方法还包括:
第一电子设备显示百分比栏;
若第一音效模式选项为响度增强模式选项、节奏增强模式选项或对白增强模式选项,百分比栏的值为第一值时,第二电子设备播放第二音频数据的音量为第三音量,百分比栏的值为第二值时,第二电子设备播放第二音频数据的音量为第四音量,第一值小于第二值,且第三音量低于第四音量;
若第一音效模式选项为全能增强模式选项或环绕模式选项,百分比栏的值为第三值,第二电子设备播放第二音频数据时模拟音源与用户的距离为第三距离,百分比栏的值为第四值,第二电子设备播放第二音频数据时模拟音源与用户的距离为第四距离,第三值小于第四值且第三距离小于第四距离。
这样,可以通过百分比栏设置音效模式的效果,用户可以调整百分比栏的值,选择适合的播放效果。
第二方面,本申请提供了一种音频播放系统,包括:第一电子设备和第二电子设备;其中,
第一电子设备,被配置为用于实现上述第一方面中第一电子设备执行的方法步骤;
第二电子设备,被配置为用于实现上述第一方面中第二电子设备执行的方法步骤。
第三方面,本申请提供另了一种音频播放方法,该方法应用于音频播放系统,音频播放系统包括第一电子设备和第二电子设备,方法包括:
第一电子设备响应于第一电子设备与第二电子设备建立通信连接的操作,与第二电子设备协同播放音源数据。
在一种可能的实现方式中,第一电子设备播放第一音频数据,第二电子设备播放第二音频数据,第一音频数据和第二音频数据均至少包括音源数据的至少部分内容,第一音频数据和第二音频数据不同。
在一种可能的实现方式中,第一音频数据与第二音频数据的部分声道的数据和/或部分频段的数据不同。
在一种可能的实现方式中,声源数据可以由其他电子设备(例如,第二电子设备、服务器等)发送至第一电子设备。
在一种可能的实现方式中,第一音频数据和第二音频数据均至少包括音源数据的至少部分内容。其中,至少部分内容可以为音源数据中部分声道的数据和/或部分频段的数据。
在一种可能的实现方式中,在第一电子设备接收第一输入之前,第一电子设备接收到播放音源数据的第三输入;或者,
第一电子设备在接收到第二输入之后,接收到播放音源数据的第三输入。
这样,用户可以在播放音源数据之前,选择需要的音效模式。也可以在播放音源数据时,选择需要的音效模式,或者切换音效模式。
在一种可能的实现方式中,当第一音效模式为节奏增强模式、对白增强模式、环绕增强模式、全能增强模式或智能增强模式中的任一种时,第一音频数据包括的至少部分声道与第二音频数据包括的至少部分声道不相同;或/和,当第一音效模式为响度增强模式时,第一音频数据的至少部分声道与第二音频数据的 至少部分声道相同。
在一种可能的实现方式中,第一音效模式选项为节奏增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括低频声道的数据。这样,可以增强低频声音的播放。
在一种可能的实现方式中,第二电子设备播放第二音频时,方法还包括:
第二电子设备将第二音频数据转换为脉冲信号;第二电子设备将脉冲信号传输给第二电子设备的马达;第二电子设备的马达振动。这样,第二电子设备可以在播放第二音频数据的同时,随着第二音频数据的频率变化而振动,增强用户触觉体验。
在一种可能的实现方式中,第二电子设备包括多个扬声器,第二电子设备使用所有扬声器播放第二音频数据。
在一种可能的实现方式中,第一音效模式选项为对白增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括中置声道的数据。这样,可以体现人声,在视频播放时,突出人物台词。
在一种可能的实现方式中,第一音效模式选项为响度增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左声道的数据和右声道的数据。
在一种可能的实现方式中,当第二电子设备包括1个扬声器,第二电子设备使用1个扬声器播放第二音频数据的左声道的数据和右声道的数据;或者,当第二电子设备包括2个扬声器,2个扬声器包括第一扬声器和第二扬声器,第二电子设备使用第一扬声器播放左声道的数据,使用第二扬声器播放右声道的数据;或者,若第二电子设备包括3个及以上扬声器,3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用第一扬声器播放左声道的数据,使用第二扬声器播放右声道的数据,使用第三扬声器播放第二音频数据的左声道的数据和右声道的数据。这样,可以基于第二电子设备的扬声器数量,合理使用扬声器进行第二音频数据的声道数据的播放,充分利用扬声器资源。
在一种可能的实现方式中,第一音效模式选项为环绕增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左环绕声道的数据和右环绕声道的数据。这样,可以增强环绕感。
在一种可能的实现方式中,第二电子设备包括2个扬声器,2个扬声器包括第一扬声器和第二扬声器,第二电子设备使用第一扬声器播放左环绕声道的数据,使用第二扬声器播放右环绕声道的数据;或者,当第二电子设备包括3个及以上扬声器,3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用第一扬声器播放左环绕声道的数据,使用第二扬声器播放右环绕声道的数据,使用第三扬声器播放第二音频数据的左环绕声道的数据和右环绕声道的数据。
在一种可能的实现方式中,当第一电子设备包括2个扬声器,2个扬声器包括第四扬声器和第五扬声器,第一电子设备使用第四扬声器播放左声道的数据,使用第五扬声器播放右声道的数据;或者,当第一电子设备包括3个及以上扬声器,3个及以上扬声器包括第四扬声器、第五扬声器和第六扬声器,使用第四扬声器播放左声道的数据,使用第五扬声器播放右声道的数据,使用第六扬声器播放第二音频数据的左声道的数据和右声道的数据。
在一种可能的实现方式中,第一音效模式选项为全能增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左环绕声道的数据、右环绕声道的数据和中置声道的数据。
在一种可能的实现方式中,当第二电子设备包括2个扬声器,2个扬声器包括第一扬声器和第二扬声器,第二电子设备使用第一扬声器播放左环绕声道的数据和中置声道的数据,使用第二扬声器播放右环绕声道的数据和中置声道的数据;或者,当第二电子设备包括3个及以上扬声器,3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用第一扬声器播放左环绕声道的数据,使用第二扬声器播放右环绕声道的数据,使用第三扬声器播放中置声道的数据。
在一种可能的实现方式中,当第一电子设备包括2个扬声器,2个扬声器包括第四扬声器和第五扬声器,第一电子设备使用第四扬声器播放左环绕声道的数据和中置声道的数据,使用第五扬声器播放右环绕声道的数据和中置声道的数据;或者,当第一电子设备包括3个及以上扬声器,3个及以上扬声器包括第四扬声器、第五扬声器和第六扬声器,使用第四扬声器播放左环绕声道的数据,使用第五扬声器播放右环绕声道的数据,使用第六扬声器播放中置声道的数据。
在一种可能的实现方式中,响应于第一输入,第一电子设备显示多个音效模式选项,具体包括:第一电子设备响应于第一输入,基于第二电子设备的设备类型与存储的设备类型与音效模式选项的对应关系,获取多个音效模式选项;其中,第二电子设备的设备类型与多个音效模式相对应。这样,第一电子设备可以在与不同设备类型的电子设备建立连接时,给用户提供适合第一电子设备与第二电子设备的音效模式选 项。例如,电子设备的设备类型为智能手表时,智能手表播放环绕声道的数据,也无法带来好的环绕效果,因此,第一电子设备在第二电子设备为智能手表时,不提供环绕增强模式。
在一种可能的实现方式中,第二电子设备的设备类型为智能手表,多个音效模式选项包括智能增强模式选项、响度增强模式选项、节奏增强模式选项和对白增强模式选项中的一种或几种,智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合。这样,用户可以在不知道选择某个音效模式时,基于智能增强模式处理音源数据。
在一种可能的实现方式中,智能增强模式对应的第一电子设备播放的音频数据包括左声道的数据和右声道的数据,第二电子设备播放的音频数据包括左声道的数据、右声道的数据和低频声道的数据。
在一种可能的实现方式中,第二电子设备的设备类型为智能眼镜、挂颈式音箱或蓝牙耳机,多个音效模式选项包括智能增强模式选项、响度增强模式选项、节奏增强模式选项、对白增强模式选项、环绕增强模式选项和全能增强模式选项中的一种或几种,智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合,或者,为环绕增强模式与对白增强模式的组合。
在一种可能的实现方式中,当音源数据属于视频文件,智能增强模式为环绕增强模式与对白增强模式的组合,智能增强模式对应的第一电子设备播放的音频数据包括左声道的数据和右声道的数据,第二电子设备播放的音频数据包括左环绕声道的数据、右环绕声道的数据和中置声道的数据;或者,
当音源数据属于音频文件,智能增强模式为节奏增强模式与响度增强模式的组合,智能增强模式对应的第一电子设备播放的音频数据包括左声道的数据和右声道的数据,第二电子设备播放的音频数据包括左声道的数据、右声道的数据和低频声道的数据。这样,当第一电子设备播放视频时,可以突出环绕和人声,使得用户可以沉浸享受视频。当第一电子设备播放音频时,可以突出低频,音乐更加动感。
在一种可能的实现方式中,第一电子设备基于播放的视频内容与音乐场景是否相关,确定智能增强模式。这样,即使第一电子设备播放视频,由于视频内容与音源场景相关,也可以将智能增强模式设置为节奏增强模式与响度增强模式的组合,突出低频和增加响度,给用户带来更佳的音乐收听体验。
在一种可能的实现方式中,第一电子设备处理第二音频数据时,加强第二音频数据的细小音频数据的响度。这样,可以突出细小声音,增强细节感。这样,用户在玩游戏可以听到细小声音,观察游戏人物动向,提升游戏体验。用户可以在看电影时,听到明显的背景音,例如,风声、虫鸣等,更加有画面感。
在一种可能的实现方式中,在第一电子设备接收第二输入之前,多个音效模式选项包括被标记的智能增强模式选项;第一电子设备响应于针对第一音效模式选项的第二输入,标记第一音效模式选项,具体包括:
第一电子设备响应于针对第一音效模式选项的第二输入,取消标记智能增强模式选项,并标记第一音效模式选项。这样,第一电子设备可以提供默认的智能增强选项,在用户不确定选择某个音效模式时,更加智能地智能处理音源数据。
在一种可能的实现方式中,当第二电子设备包括第一扬声器与第二扬声器,且第一音效模式选项为响度增强模式选项、环绕增强模式选项或全能增强模式选项;方法还包括:当第二电子设备和/或第一电子设备姿态变化,第一音频数据不变,并且第一电子设备发送给第二电子设备的第二音频数据随着姿态变化而变化。这样,第一电子设备和/或第一电子设备姿态变化时,第一电子设备可以处理第二音频数据,使得第二电子设备播放的声音信号模拟得到的虚拟音源始终位于第一电子设备处,声源和画面始终处于同一位置,便于用户移动回到适宜观看画面的位置。
在一种可能的实现方式中,当第一电子设备包括第四扬声器和第五扬声器;方法还包括:当第二电子设备和/或第一电子设备姿态变化,第二音频数据不变,并且第一电子设备播放的第一音频数据随着姿态变化而变化。这样,第一电子设备可以基于姿态变化,处理第一音频数据,使得第一电子设备播放的声音信号模拟得到的虚拟音源始终处于用户视线前方,无论用户如何移动,也不会出现左右耳收听声音不协调的情形。
在一种可能的实现方式中,当第二电子设备和/或第一电子设备位置变化,第一电子设备的扬声器发出的声波信号的波束方向随着位置变化而变化。这样,第一电子设备可以将波束方向朝向第二电子设备处,使得用户可以更加清晰地听到第一电子设备的声音。
在一种可能的实现方式中,第一电子设备与第二电子设备的距离为第一距离,第一电子设备播放第一音频数据的音量为第一音量;第一电子设备与第二电子设备的距离为第二距离,第一电子设备播放第一音频数据的音量为第二音量,第一距离小于第二距离,且第一音量小于第二音量。这样,第一电子设备可以基于与第二电子设备(也能理解为用户)之间的距离,设置第一电子设备的音量。第一电子设备检测到用 户与第一电子设备接近时,减小第一电子设备的音量,避免干扰第二音频数据的播放。第一电子设备检测到用户与第一电子设备远离时,增大第一电子设备的音量,避免用户听不清第一电子设备播放的声音。
在一种可能的实现方式中,第一电子设备响应于第一输入,显示多个音效模式时,方法还包括:第一电子设备显示百分比栏;若第一音效模式选项为响度增强模式选项、节奏增强模式选项或对白增强模式选项,百分比栏的值为第一值时,第二电子设备播放第二音频数据的音量为第三音量,百分比栏的值为第二值时,第二电子设备播放第二音频数据的音量为第四音量,第一值小于第二值,且第三音量低于第四音量;若第一音效模式选项为全能增强模式选项或环绕模式选项,百分比栏的值为第三值,第二电子设备播放第二音频数据时模拟音源与用户的距离为第三距离,百分比栏的值为第四值,第二电子设备播放第二音频数据时模拟音源与用户的距离为第四距离,第三值小于第四值且第三距离小于第四距离。这样,可以通过百分比栏设置音效模式的效果,用户可以调整百分比栏的值,选择适合的播放效果。
第四方面,本申请提供了一种音频播放方法,其特征在于,方法包括:
第一电子设备显示协同播放控件,协同播放控件用于指示第一电子设备与第二电子设备共同播放音源数据;
第一电子设备接收针对协同播放控件的第一输入;
第一电子设备响应于第一输入,显示多个音效模式选项,多个音效模式选项包括第一音效模式选项;
第一电子设备响应于针对第一音效模式选项的第二输入,标记第一音效模式选项;
第一电子设备将第二音频数据发送至第二电子设备;
当第二电子设备播放第二音频数据时,第一电子设备播放第一音频数据,第一音频数据和第二音频数据均至少包括音源数据的至少部分内容。
这样,第一电子设备可以给用户提供不同的音效模式,实现多设备协同播放音频数据。
在一种可能的实现方式中,方法还包括:在第一电子设备接收第一输入之前,第一电子设备接收到播放音源数据的第三输入;或者,第一电子设备在接收到第二输入之后,接收到播放音源数据的第三输入。
在一种可能的实现方式中,当第一音效模式为节奏增强模式、对白增强模式、环绕增强模式、全能增强模式或智能增强模式中的任一种时,第一音频数据包括的至少部分声道与第二音频数据包括的至少部分声道不相同;或/和,当第一音效模式为响度增强模式时,第一音频数据的至少部分声道与第二音频数据的至少部分声道相同。
在一种可能的实现方式中,第一音效模式选项为节奏增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括低频声道的数据。
在一种可能的实现方式中,第一音效模式选项为对白增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括中置声道的数据。
在一种可能的实现方式中,第一音效模式选项为响度增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左声道的数据和右声道的数据。
在一种可能的实现方式中,第一音效模式选项为环绕增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左环绕声道的数据和右环绕声道的数据。
在一种可能的实现方式中,当第一电子设备包括2个扬声器,2个扬声器包括第四扬声器和第五扬声器,第一电子设备使用第四扬声器播放左声道的数据,使用第五扬声器播放右声道的数据;或者,当第一电子设备包括3个及以上扬声器,3个及以上扬声器包括第四扬声器、第五扬声器和第六扬声器,使用第四扬声器播放左声道的数据,使用第五扬声器播放右声道的数据,使用第六扬声器播放第二音频数据的左声道的数据和右声道的数据。
在一种可能的实现方式中,第一音效模式选项为全能增强模式选项,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左环绕声道的数据、右环绕声道的数据和中置声道的数据。
在一种可能的实现方式中,响应于第一输入,第一电子设备显示多个音效模式选项,具体包括:第一电子设备响应于第一输入,基于第二电子设备的设备类型与存储的设备类型与音效模式选项的对应关系,获取多个音效模式选项;其中,第二电子设备的设备类型与多个音效模式相对应。
在一种可能的实现方式中,当第二电子设备的设备类型为智能手表,多个音效模式选项包括智能增强模式选项、响度增强模式选项、节奏增强模式选项和对白增强模式选项中的一种或几种,智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合。
在一种可能的实现方式中,当第二电子设备的设备类型为智能眼镜、挂颈式音箱或蓝牙耳机,多个音 效模式选项包括智能增强模式选项、响度增强模式选项、节奏增强模式选项、对白增强模式选项、环绕增强模式选项和全能增强模式选项中的一种或几种,智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合,或者,为环绕增强模式与对白增强模式的组合。
在一种可能的实现方式中,在第一电子设备接收第二输入之前,多个音效模式选项包括被标记的智能增强模式选项;第一电子设备响应于针对第一音效模式选项的第二输入,标记第一音效模式选项,具体包括:第一电子设备响应于针对第一音效模式选项的第二输入,取消标记智能增强模式选项,并标记第一音效模式选项。
在一种可能的实现方式中,当第二电子设备包括第一扬声器与第二扬声器,第一音效模式选项为响度增强模式选项、环绕增强模式选项或全能增强模式选项;方法还包括:当第二电子设备和/或第一电子设备姿态变化,第一电子设备发送的第二音频数据不变,并且第一电子设备播放的第一音频数据随着姿态变化而变化。
在一种可能的实现方式中,当第一电子设备包括第四扬声器和第五扬声器;方法还包括:当第二电子设备和/或第一电子设备姿态变化,第一音频数据不变,并且第一电子设备发送给第二电子设备的第二音频数据随着姿态变化而变化。
在一种可能的实现方式中,方法还包括:当第二电子设备和/或第一电子设备位置变化,第一电子设备的扬声器发出的声波信号的波束方向随着位置变化而变化。
在一种可能的实现方式中,方法还包括:第一电子设备与第二电子设备的距离为第一距离,第一电子设备播放第一音频数据的音量为第一音量;第一电子设备与第二电子设备的距离为第二距离,第一电子设备播放第一音频数据的音量为第二音量,第一距离小于第二距离,且第一音量小于第二音量。
第五方面,本申请提供了一种电子设备,包括:一个或多个处理器、多个扬声器和一个或多个存储器;其中,一个或多个存储器、多个扬声器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器在执行计算机指令时,使得第一电子设备执行如第四方面任一项可能的实现方式中的音频播放方法。
第六方面,本申请实施例提供了一种计算机存储介质,包括计算机指令,当计算机指令在第一电子设备上运行时,使得第一电子设备执行上述第四方面任一项可能的实现方式中的音频播放方法。
第七方面,本申请提供了一种芯片系统,芯片系统应用于第一电子设备,芯片系统包括一个或多个处理器,处理器用于调用计算机指令以使得第一电子设备执行上述第四方面任一项可能的实现方式中的音频播放方法。
第八方面,本申请提供了一种包含指令的计算机程序产品,当计算机程序在第一电子设备上运行时,使得第一电子设备执行上述第四方面任一项可能的实现方式中的音频播放方法。
附图说明
图1A为本申请实施例提供的一种电子设备100的硬件结构示意图;
图1B为本申请实施例提供的一种电子设备100的软件结构图;
图2为本申请实施例提供的一种电子设备200的硬件结构示意图;
图3为本申请实施例提供的一种音频播放方法的流程示意图;
图4为本申请实施例提供的一种流程示意图;
图5为本申请实施例提供的一种应用场景示意图;
图6A-图6D为本申请实施例提供的一组界面示意图;
图7A-图7E为本申请实施例提供的一种场景示意图;
图8为本申请实施例提供的另一种应用场景示意图;
图9A-图9C为本申请实施例提供的另一组界面示意图。
具体实施方式
下面将结合附图对本申请实施例中的技术方案进行清楚、详尽地描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为暗示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征,在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例提供了一种音频播放方法。电子设备100与电子设备200建立通信连接,电子设备100获取电子设备200的设备信息,并基于该电子设备200的设备信息确定出一个或多个音效模式,一个或多个音效模式包括第一音效模式。电子设备100可以响应于选中第一音效模式的输入,基于第一音效模式,处理电子设备100的音源数据,得到电子设备100播放的第一音频数据,与电子设备200播放的第二音频数据。电子设备100播放第一音频数据并且电子设备200同时播放第二音频数据,实现电子设备100和电子设备200协同播放音源数据。
这样,由多个电子设备共同播放音频数据,打破了单个电子设备的扬声器布局的限制,使得多个电子设备支持更多的音效模式,例如,可以通过多个电子设备的扬声器实现较明显的左右耳环绕效果。并且,可以根据电子设备200的设备类型,给用户提供对应的音效模式,实现不同电子设备的不同音效模式。例如,若电子设备200为智能眼镜,由于智能眼镜包括位于左镜腿的扬声器和右镜腿的扬声器,可以提供基于左右声道、左右环绕声道、人声对白等多种处理方式得到的音效模式,从而增强音源数据的响度、环绕感或者人声清晰度等,使得用户观影沉浸感和包围感增强,提升用户体验。再例如,若电子设备200为智能手表,由于智能手表位于手腕,可通过低频音频信号增强,实现低频震感传播,加强用户节奏感感知效果,提升用户体验。
在一些实施例中,电子设备200为可穿戴设备。这样,由于可穿戴设备便于携带的特性,便于用户在各个应用场景,使用电子设备100与电子设备200提供的音效模式,实现音源数据的协同播放。
在一些实施例中,声源数据可以由其他电子设备(例如,电子设备200)发送至电子设备100。
在一些示例中,第一音频数据和第二音频数据均至少包括音源数据的至少部分内容。其中,至少部分内容可以为音源数据中部分声道的数据和/或部分频段的数据。
图1A为本申请实施例提供的电子设备100的硬件结构示意图。
电子设备100可以是手机、平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、混合现实(mixed reality,MR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备、智能家居设备和/或智慧城市设备,本申请实施例对该电子设备100的具体类型不作特殊限制。
可选地,在本申请一些实施例中,电子设备100可以为手机、平板电脑。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再 次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备100供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1 的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部的非易失性存储器,实现扩展电子设备100的存储能力。外部的非易失性存储器通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部的非易失性存储器中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。在本申请实施例中,扬声器170A可以用于播放第一音频数据。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备100姿态,应用于横竖屏切换,计步器等应用。
气压传感器180C用于测量气压。磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。距离传感器180F,用于测量距离。接近光传感器180G可以用于确定电子设备100附近是否有物体。环境光传感器180L用于感知环境光亮度。指纹传感器180H用于采集指纹。温度传感器180J用于检测温度。触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。骨传导传感器180M可以获取振动信号。按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。
下面介绍本申请实施例提供的电子设备100的软件结构图。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本实施例以分层架构的操作系统为例,示例性说明电子设备100的软件结构。
图1B是本实施例的电子设备100的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将操作系统分为四层,从上至下分别为应用程序层,应用程序框架层,运行时(runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图1B所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
应用程序框架层可以包括窗口管理器,内容提供器,视图系统,资源管理器,通知管理器,智能设备识别模块,多声道处理模块,音效模式显示模块等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
智能设备识别模块可以用于获取电子设备200的设备信息,基于电子设备200的设备信息(例如,电子设备200的名称,或者电子设备200的能力信息),确定出电子设备200的设备类型,再基于电子设备200的设备类型(例如,耳机、眼镜、手表等)确定出电子设备100和电子设备200可以实现的一个或多个音效模式。例如,若电子设备100设置有下表1所示的多个音效模式。若电子设备200的设备类型为蓝牙耳机、智能眼镜、挂颈式蓝牙音响等,智能设备识别模块可以确定出电子设备100和电子设备200可以实现智能增强模式、全能增强模式、环绕增强模式、对白增强模式、响度增强模式、节奏增强模式,再例如,电子设备200的设备类型为手表,智能设备识别模块可以确定出电子设备100和电子设备200可以实现智能增强模式、对白增强模式、节奏增强模式。
在一些实施例中,智能设备识别模块可以直接从电子设备200处获取电子设备200支持的声道信息,例如,电子设备200仅支持播放单声道的音频数据,或者,电子设备200支持播放双声道的音频数据,或者,电子设备200支持更多声道的音频数据,例如,5.1声道(包括左声道、右声道、中置声道、左环绕声道、右环绕声道和低频)的音频数据,7.1声道(包括左声道、右声道、中置声道、左前方环绕声道、右前方环绕声道、左后方环绕声道、右后方环绕声道和低频)的音频数据等等。智能设备识别模块可以直接基于电子设备200支持的声道信息,确定出支持的一个或多个音效模式。例如,若电子设备200仅支持播放单声道的音频数据,智能设备识别模块得到的音效模式为响度增强模式、节奏增强模式和对白增强模式。再例如,若电子设备200支持播放双声道及更多声道的音频数据,智能设备识别模块得到的音效模式为智能增强模式、全能增强模式、环绕增强模式、对白增强模式、响度增强模式和/或节奏增强模式。
在另一些实施例中,智能设备识别模块可以获取电子设备200的扬声器数量信息,确定出支持的音效模式。例如,若电子设备200的扬声器数量为1个,智能设备识别模块得到的音效模式为节奏增强模式和对白增强模式。若电子设备200的扬声器数量大于1个,智能设备识别模块得到的音效模式为智能增强模式、全能增强模式、环绕增强模式、对白增强模式、响度增强模式和/或节奏增强模式。
需要说明的是,在一些示例中,即使电子设备200的扬声器数量大于1个,由于该多个扬声器位于电子设备200的同一部件上,电子设备200也无法支持播放双声道的音频数据,智能设备识别模块得到的音效模式仅为节奏增强模式和对白增强模式。
多声道处理模块可以用于基于音效模式,处理音源数据,得到电子设备100播放的第一音频数据和电子设备200播放的第二音频数据。
音效模式显示模块可以用于显示智能设备识别模块得到的音效模式。
运行时可以指程序运行时所需的一切代码库、框架等。例如,对于C语言来说,运行时包括一系列C程序运行所需的函数库。对于Java语言来说,运行时包括核心库和虚拟机,核心库可包括Java语言需要调用的功能函数。应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的Java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H。264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动等等。
下面结合捕获拍照场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头193捕获静态图像或视频。
接下来介绍本申请实施例提供的一种电子设备200的硬件架构示意图。
其中,本申请实施例涉及的电子设备200可以为包括扬声器的电子设备。例如,电子设备200可以为智能眼镜,蓝牙耳机(例如,挂颈式耳机),挂脖式蓝牙音响,智能手表,智能头盔等等,本申请实施例对此不作限制。
图2为本申请实施例提供的电子设备200的硬件结构示意图。
如图2所示,电子设备200可以包括但不限于处理器210,音频模块220,扬声器220A,麦克风220B,无线通信模块230,内部存储器240,外部存储器接口245,电源管理模块250,传感器模块260等。其中,传感器模块260可以包括压力传感器260A,触摸传感器260B,惯性测量单元260C(inertial measurement unit,IMU)等。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器210通常用于控制电子设备200的整体操作,可以包括一个或多个处理单元。例如:处理器110可以包括中央处理器,应用处理器,调制解调处理器,基带处理器等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。处理器210中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器210可以包括一个或多个接口。接口可以包括I2C接口,I2S接口,PCM接口,UART接口,MIPI,GPIO接口,SIM接口,和/或USB接口,SPI接口等。其中,处理器210以及包括的接口的描述可以参见图1A所示实施例中处理器110的描述,在此不再赘述。
电子设备200可以通过音频模块220,扬声器220A,麦克风220B,以及应用处理器等实现音频功能。例如,播放音频。
音频模块220用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块220还可以用于对音频信号编码和解码。在一些实施例中,音频模块220可以设置于处理器210中,或将音频模块220的部分功能模块设置于处理器210中。
扬声器220A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备200可以通过扬声器220A收听音乐,或收听通话。在本申请实施例中,扬声器220A可以用于播放第二音频数据。
麦克风220B,也称“话筒”,“传声器”,用于将声音信号转换为电信号。在本申请实施例中,麦克风220B可以用于采集附近的声音信号,例如,电子设备100播放第一音频数据时产生的声音信号。
电子设备200可以包含无线通信功能,比如,电子设备200可以从其它电子设备(比如电子设备100)接收音频数据并播放。无线通信功能可以通过天线(未示出),无线通信模块230,调制解调处理器(未示出)以及基带处理器(未示出)等实现。天线用于发射和接收电磁波信号。电子设备200中可以包含多个天线,每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(例如扬声器220A等)输出声音信号。
无线通信模块230可以提供应用在电子设备200上的包括无线局域网WLAN(如Wi-Fi网络),蓝牙BT,全球导航卫星系统GNSS,调频FM,近距离无线通信技术NFC,红外技术IR等无线通信的解决方案。无线通信模块230可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块230经由天线接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器210。无线通信模块230还可以从处理器210接收待发送的信号,对其进行调频,放大,经天线转为电磁波辐射出去。在一些实施例中,无线通信模块230可以用于传输通信信号,包括接收、发送通信信号,如音频数据、控制信令等。电子设备200可以通过无线通信模块230与其他电子设备,如电子设备100等建立通信连接。
内部存储器240可以用于存储计算机可执行程序代码,该可执行程序代码包括指令。处理器210通过运行存储在内部存储器240的指令,从而执行电子设备200的各种功能应用以及数据处理。内部存储器240可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能)等。存储数据区可存储电子设备200使用过程中所创建的数据(比如音频数据等)等。此外,内部存储器240还可以包括高速随机存取存储器,非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器等。
电源管理模块250可以用于从充电器接收充电输入。电源管理模块250可以给电子设备200充电,还可以给电子设备200的各个部件供电。
电子设备200上装备有一个或多个传感器,包括但不限于压力传感器260A,触摸传感器260B,惯性测量单元260C等。
其中,压力传感器260A用于感受压力信号,可以将压力信号转换成电信号。压力传感器260A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。当有触摸操作作用于电子设备200,电子设备200根据压力传感器260A检测所述触摸操作强度。电子设备200也可以根据压力传感器260A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于压力传感器260A时,执行暂停音频的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于压力传感器260A时,执行关闭音频的指令。在一些实施例中,作用于相同触摸位置,但不同触摸操作时间长度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作时间长度小于第一时间阈值的触摸操作作用于压力传感器260A时,执行确认的指令。当有触摸操作时间长度大于或等于第一时间阈值的触摸操作作用于压力传感器260A时,执行开机/关机的指令。
触摸传感器260B,也称“触控器件”。触摸传感器260B用于检测作用于其上或附近的触摸操作。触摸传感器260B可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。电子设备200可以通过音频模块220提供与触摸操作相关的听觉输出。电子设备200可以将触摸操作对应的指令发送给建立通信连接的其他电子设备。
惯性测量单元260C是用来检测和测量加速度与旋转运动的传感器,可以包括加速度计、角速度计(或称陀螺仪)等。加速度计可检测电子设备200在各个方向上(一般为三轴)加速度的大小。当电子设备200静止时可检测出重力的大小及方向。还可以用于识别电子设备200的姿态,应用于体感游戏场景,横竖屏切换,计步器等应用。陀螺仪可以用于确定电子设备200的运动姿态。在一些实施例中,可以通过陀螺仪确定电子设备200围绕三个轴(即,x,y和z轴)的角速度。陀螺仪还可以用于导航,体感游戏场景,相机防抖等。例如,电子设备200可以根据IMU等来跟踪电子设备200的移动。
在本申请的一些实施例中,惯性测量单元260C可以用于检测电子设备200的位姿数据,位姿数据可以用于表示电子设备200相对电子设备100的显示屏的偏移位置。例如,电子设备200可以将位姿数据发送给电子设备100,电子设备100可以基于位姿数据处理音源数据,使得用户感知到音源处于电子设备100处,或者,使得用户感知到音源位于用户的视线正前方。
在一些实施例中,以电子设备200为智能眼镜为例进行说明。智能眼镜佩戴在用户头部,除了具备普通眼镜具备的光学矫正、调节可视光线或装饰等功能,还可以具备通信功能,音频播放功能。例如,该电子设备200可以和电子设备100建立通信连接,电子设备200可以接收电子设备100发送的音频数据,并通过扬声器播放该音频数据。该通信连接可以为无线连接,例如,通过无线通信模块230建立的无线通信连接。或者,该通信连接可以为有线连接,例如,通用串行总线(universal serial bus,USB)连接、高清多媒体接口(high definition multimedia interface,HDMI)连接等。本实施例对通信连接的类型不作限制。该电子设备200可以通过这些通信连接方式和其他电子设备进行数据传输。例如,当电子设备200和通讯设备之间建立有通信连接时,若该通讯设备和其他通讯设备进行通话,用户可以通过电子设备200接听通话,收听音乐等。其中,电子设备200还可以包括镜片,该镜片可以是透明镜片或其他颜色的镜片,可以是带有光学矫正功能的眼镜镜片,可以是具备可调节滤光功能的镜片,可以是墨镜或其他具有装饰效果的镜片。
在一些示例中,电子设备200为智能眼镜时,传感器模块260还可以包括骨传导传感器,骨传导传感器也可以作为音频播放器件,用于向用户输出声音。当音频播放器件为骨传导传感器时,电子设备200的两个镜腿可以设有抵持部,骨传导传感器可以设置于该抵持部位置处。当用户佩戴电子设备200时,抵持部抵持耳朵前侧颅骨,进而产生振动使得声波经由颅骨和骨迷路传导至内耳。抵持部的位置直接贴近颅骨,可以减少振动损耗,使得用户更加清晰地听取音频。
在另一些实施例中,电子设备200可以为智能手表。该电子设备200还可以包括有表带和表盘。表盘可以包括有上述显示屏,以用于显示图像、视频、控件、文字信息等等。表带可以用于将电子设备200固定在人体四肢部位以便于穿戴。
其中,若电子设备200为智能手表,电子设备200还可以包括马达,马达可以产生振动提示。马达可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于电子设备200不同区域的触摸操作,马达也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。在本申请实施例中,电子设备200可以基于音频数据,产生对应的振动反馈效果,使得用户感知到电子设备200随着音频数据的播放而振动,振动的频次,震感的强度都由音频数据决定。例如,电子设备200可以基于音频数据得到马达脉冲信号,从而产生对应的振动反馈效果。
在另一些实施例中,电子设备200可以为挂颈式耳机、挂颈式蓝牙音箱等,电子设备200还可以包括颈托,使得电子设备200可以被用户挂在脖子上。
在另一些实施例中,电子设备200可以为蓝牙音响,电子设备100和电子设备200可以共同实现音频数据的播放。在一些示例中,电子设备100可以基于电子设备200支持播放的最大声道数量,处理音源数据,得到第二音频数据。例如,若电子设备200支持播放5.1声道的音频数据,电子设备100的扬声器播放包括左声道和右声道的第一音频数据,电子设备200的扬声器播放包括5.1声道的第二音频数据。若电子设备200支持播放7.1声道的音频数据,电子设备100的扬声器播放包括左声道和右声道的第一音频数据,电子设备200的扬声器播放包括7.1声道的第二音频数据,以此类推。在另一些实施例中,电子设备100可以基于电子设备200的设备类型为音响,确定出电子设备100和电子设备200支持智能增强模式、全能增强模式、环绕增强模式、对白增强模式、响度增强模式和/或节奏增强模式。电子设备100可以基于用户选中的音效模式,处理音源数据,得到第一音频数据和第二音频数据。其中,该处理音源数据的描述可以参见以下实施例,在此不再赘述。
在一些示例中,电子设备100为AR,VR,MR设备时,电子设备200可以为手机,平板等。其中,电子设备100可以用于播放视频(即同时播放视频画面,和视频的音频数据),电子设备200可以用于播放电子设备100发送的音频,该音频基于电子设备100播放的视频得到。
需要说明的是,由于实施本申请提供的音频播放方法时,电子设备100和电子设备200可以同时播放基于同一音源数据得到的音频数据,使得用户可以同时听到电子设备100和电子设备200播放的声音,实现不同的音频播放效果。在本申请实施例的优选方案中,为了让用户可以更好地听到电子设备100的声音,电子设备200不应为会影响用户听到电子设备100的声音的电子设备,例如,入耳式耳机,耳罩式耳机等。
在一些示例中,若电子设备200具备降噪功能,当电子设备200的降噪功能开启时,用户将不能听到电子设备100的声音,因此,电子设备200实现本申请实施例提供的音频播放方法时,电子设备200关闭降噪功能。可以理解的是,电子设备200可以在用户选择仅通过电子设备200播放音源数据时,开启降噪功能,电子设备200可以在用户选择通过电子设备100和电子设备200协同播放音源数据时,关闭降噪功能。这样,用户可以在选择仅通过电子设备200播放音源数据时,通过降噪功能,降低外界杂音的干扰。用户可以在选择通过电子设备100和电子设备200协同播放音源数据时,关闭降噪功能,同时收听电子设备100和电子设备200播放的声音信号,增强环绕感。
在另一些实施例中,若电子设备200包括通透模式,电子设备200开启通透模式时,电子设备200可以通过麦克风采集附近的声音信号,并且通过扬声器播放麦克风采集的声音信号。这样,可以便于用户收听除了电子设备200播放的第二音频数据发出的声音以外的声音,即,电子设备100播放第一音频数据发出的声音。在一些示例中,电子设备200可以提供通透模式开关控件,开启/关闭通透模式。例如,该通透模式开关控件可以为电子设备200的显示屏显示的控件,电子设备200可以接收针对该控件的触摸输入,开启/关闭通透模式。在另一些示例中,电子设备200可以提供通透模式开关,该开关可以为物理开关,该开关接收到用户的拨动输入时,开启/关闭通透模式。这样,若电子设备200为耳机(例如,入耳式耳机、耳罩式耳机)等遮挡用户双耳的电子设备时,也可以通过开启通透模式,使得用户可以听见电子设备100播放的声音。
在一些示例中,电子设备100可以通过增大电子设备100的播放音量,和/或减小电子设备200的播放音量,使得用户可以同时听到电子设备100和电子设备200播放的声音。
在一些实施例中,电子设备200为设置有扬声器的便携式设备,例如,智能眼镜、蓝牙耳机、智能手表等。这样,由于电子设备200的便携性,电子设备100和电子设备200可以在众多场景(例如,机场,酒店,户外等场景)实施本申请实施例提供的音频播放方法,给用户提供多种音效模式的听觉体验。
接下来介绍本申请实施例提供的音频播放方法的流程示意图。
如图3所示,该音频播放方法包括以下步骤:
S301.电子设备100与电子设备200建立通信连接,电子设备100显示第一控件,第一控件用于触发电子设备100显示一个或多个音效模式对应的音效模式选项。
电子设备100与电子设备200建立通信连接。其中,电子设备100与电子设备200可以以无线/有线方式建立连接。其中,以无线方式建立的通信连接可以包括但不限于Wi-Fi通信连接、蓝牙通信连接、点对点(peer to peer,P2P)通信连接等。其中,蓝牙通信连接可以通过包括但不限于经典蓝牙(basic rate/enhanced data rate,BR/EDR)或蓝牙低功耗(bluetooth low energy,BLE)中一项或多项蓝牙通信的解决方案实现。电子设备100可以响应于和电子设备200建立通信连接,显示第一控件。第一控件可以用于触发电子设备100获取并显示电子设备100与电子设备200支持的音效模式。也可以理解为,第一控件可以触发电子设备100和电子设备200协同播放电子设备100的音频数据。
在一些示例中,电子设备100在显示第一控件时,还可以显示第二控件。第二控件可以用于触发电子设备100或电子设备200播放电子设备100的音频数据。例如,当电子设备200为耳机或眼镜时,第二控件可以用于触发电子设备100将音频数据发送至电子设备200,并通过电子设备200播放音频数据。再例如,当电子设备200为手表时,第二控件可以用于触发电子设备100播放电子设备100的音频数据。
在另一些示例中,第一控件也可以用于触发电子设备100或电子设备200播放电子设备100的音频数据。
例如,电子设备100显示第一控件时,还显示指定区域A和指定区域B。电子设备100可以在检测到第一控件被拖动至指定区域A范围内时,触发电子设备100和电子设备200协同播放电子设备100的音频数据。电子设备100可以在检测到第一控件被拖动至指定区域B范围内时,触发电子设备100或电子设备200播放电子设备100的音频数据。
再例如,电子设备100在显示第一控件时,还显示第三控件,第一控件可以表示电子设备200,第三控件可以表示电子设备100。该第一控件和第三控件之间的距离大于指定距离。电子设备100可以在检测到第一控件与第三控件的距离大于指定距离时,触发电子设备100或电子设备200播放电子设备100的音频数据。电子设备100可以在检测到第一控件与第三控件的距离小于指定距离时,触发电子设备100和电子设备200协同播放电子设备100的音频数据。其中,电子设备100可以在第一控件和第三控件之间的距离大于指定距离时,在接收到用户将第一控件拖动至第三控件附近的输入后,检测到第一控件与第三控件的距离小于指定距离。电子设备100可以在第一控件和第三控件之间的距离小于指定距离时,在接收到用户拖动第一控件离开第三控件附近的输入后,检测到第一控件与第三控件的距离大于指定距离。
S302.电子设备100接收到针对第一控件的第一输入。
该第一输入可以为针对第一控件的单击或双击等输入,或者,该第一输入可以为语音指令输入等,本申请实施例对此不作限定。
S303.电子设备100基于电子设备200的设备信息,得到并显示一个或多个音效模式选项,包括第一模式选项,一个或多个音效模式选项和一个或多个音效模式一一对应。
接下来以第一音频数据和第二音频数据包括5.1声道中的多个声道的数据为例,介绍本申请实施例提供的多种音效模式。在本申请实施例中,声道是指声音在录制时在不同空间位置采集的相互独立的音频信号,声道数可以理解为声音录制时的音源数量。其中,5.1声道包括左声道、右声道、左环绕声道、右环绕声道、中置声道和低频声道。其中,左声道、右声道、左环绕声道、右环绕声道和中置声道的声音频率范围为20Hz-20KHz,低频声道的声音频率低于150Hz。
在一种可能的实现方式中,音效模式可以包括但不限于全能增强模式、环绕增强模式、节奏增强模式、对白增强模式、响度增强模式。
其中,在节奏增强模式、对白增强模式、环绕增强模式或全能增强模式下,第一音频数据包括的声道与第二音频数据包括的声道不相同。可以理解的是,由于第一音频数据包括的声道与第二音频数据包括的声道不相同,第一音频数据和第二音频数据的振幅、波形、频率中的一项或多项不同。
在响度增强模式下,第一音频数据的声道与第二音频数据的声道相同。可选的,在响度增强模式下,第一音频数据的振幅和第二音频数据的振幅不同。
在一些示例中,各个音效模式下,电子设备100播放的第一音频数据包括的声道,电子设备200播放的第二音频数据包括的声道如表1所示,
表1

音效模式 | 第一音频数据包括的声道(电子设备100播放) | 第二音频数据包括的声道(电子设备200播放)
全能增强模式 | 左声道、右声道、中置声道 | 左环绕声道、右环绕声道、中置声道
环绕增强模式 | 左声道、右声道 | 左环绕声道、右环绕声道
对白增强模式 | 左声道、右声道 | 中置声道
响度增强模式 | 左声道、右声道 | 左声道、右声道
节奏增强模式 | 左声道、右声道 | 低频声道
其中,如表1所示,全能增强模式中,基于音源数据得到的第一音频数据包括左声道、右声道以及中置声道。基于音源数据得到的第二音频数据包括左环绕声道、右环绕声道和中置声道。这样,在全能增强模式下,可以通过增加扬声器数量(即,增加了电子设备200的扬声器)来达到声场更好的包围感,并且,由于全能增强模式中,电子设备100和电子设备200播放的音频数据包括中置声道,中置声道包括音频数据中的人声,可以用于增加对白的清晰度。
环绕增强模式中,基于音源数据得到的第一音频数据包括左声道、右声道。基于音源数据得到的第二音频数据包括左环绕声道、右环绕声道。这样,在环绕增强模式下,可以通过增加扬声器数量(即,增加了电子设备200的扬声器)来达到声场更好的包围感。
对白增强模式中,基于音源数据得到的第一音频数据包括左声道、右声道。基于音源数据得到的第二音频数据仅包括中置声道。这样,可以通过电子设备200增加对白的清晰度。
响度增强模式中,基于音源数据得到的第一音频数据包括左声道、右声道。基于音源数据得到的第二音频数据包括左声道、右声道。这样,可以通过电子设备200增加音频响度。
节奏增强模式中,基于音源数据得到的第一音频数据包括左声道、右声道。基于音源数据得到的第二音频数据包括低频声道。这样,可以通过电子设备200播放重低音,使得用户可以感受到重低音带来的力量感。
在一些示例中,电子设备200播放第一音频数据,电子设备100播放第二音频数据。
其中,若电子设备200仅包括1个扬声器,在节奏增强模式下,电子设备200可以使用该扬声器播放低频声道的数据。在对白增强模式下,电子设备200可以使用该扬声器播放中置声道的数据。在响度增强模式下,电子设备200需要使用该扬声器播放左声道的数据和右声道的数据。具体的,电子设备200可以将左声道的数据和右声道的数据叠加在一起,使用扬声器播放该叠加在一起的左声道的数据和右声道的数据。或者,电子设备200可以将左声道的数据和右声道的数据进行下混,得到仅包括单声道的数据,并使用扬声器播放该单声道的数据。
若电子设备200包括两个扬声器。在节奏增强模式下,电子设备200可以使用所有扬声器播放低频声道的数据。在对白增强模式下,电子设备200可以使用所有扬声器播放中置声道的数据。在响度增强模式下,电子设备200使用左扬声器播放左声道的数据,右扬声器播放右声道的数据。在环绕增强模式下,电子设备200使用左扬声器播放左环绕声道的数据,右扬声器播放右环绕声道的数据。在全能增强模式下,电子设备200可以使用左扬声器播放左环绕声道的数据和中置声道的数据,使用右扬声器播放右环绕声道的数据和中置声道的数据。
在以下实施例中,使用一个扬声器播放多个声道的数据时,可以理解为扬声器播放该多个声道叠加得到的数据,或者,扬声器播放基于该多个声道下混得到的数据。为了减少重复的描述,以下将不再重复解释使用一个扬声器播放多个声道的数据的具体描述。
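作为上述叠加与下混处理的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,等权平均的下混方式为本文假设的最简实现,仅用于说明):

```python
# 示意:把多个声道的数据交给一个扬声器播放时的叠加与下混

def superpose(channels):
    """逐采样直接叠加多个声道的数据。"""
    return [sum(s) for s in zip(*channels)]

def downmix_to_mono(channels):
    """逐采样取平均得到单声道数据,避免叠加后幅值过大。"""
    n = len(channels)
    return [sum(s) / n for s in zip(*channels)]

left, right = [0.2, 0.4, -0.1], [0.0, 0.2, 0.3]
print(superpose([left, right]))        # 叠加后的数据
print(downmix_to_mono([left, right]))  # 下混后的单声道数据
```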
若电子设备200包括三个扬声器,在节奏增强模式下,电子设备200可以使用所有扬声器播放低频声道的数据。在对白增强模式下,电子设备200可以使用所有扬声器播放中置声道的数据。在响度增强模式下,电子设备200使用左扬声器播放左声道的数据,右扬声器播放右声道的数据,使用另一个扬声器播放左声道的数据和右声道的数据。在环绕增强模式下,电子设备200使用左扬声器播放左环绕声道的数据,右扬声器播放右环绕声道的数据,使用另一个扬声器播放左环绕声道的数据和右环绕声道的数据。在全能增强模式下,电子设备200可以使用左扬声器播放左环绕声道的数据,使用右扬声器播放右环绕声道的数据,使用另一个扬声器(在此可以称为中置扬声器)播放中置声道的数据。
以此类推,当电子设备200包括更多数量的扬声器时,电子设备200可以在节奏增强模式下,使用所有扬声器播放低频声道的数据。在对白增强模式下,使用所有扬声器播放中置声道的数据。在响度增强模式下,电子设备200使用左扬声器播放左声道的数据,右扬声器播放右声道的数据,使用其他的扬声器播放左声道的数据和右声道的数据。在环绕增强模式下,电子设备200使用左扬声器播放左环绕声道的数据,右扬声器播放右环绕声道的数据,使用其他的扬声器播放左环绕声道的数据和右环绕声道的数据。在全能增强模式下,电子设备200可以使用左扬声器播放左环绕声道的数据,使用右扬声器播放右环绕声道的数据,使用中置扬声器播放中置声道的数据,使用其他扬声器播放左环绕声道的数据、右环绕声道的数据和中置声道的数据,或者,使用其他扬声器播放中置声道的数据。可选的,电子设备200可以使用左扬声器播放左环绕声道的数据和中置声道的数据,使用右扬声器播放右环绕声道的数据和中置声道的数据。
其中,若电子设备100包括两个扬声器,在环绕增强模式、节奏增强模式、对白增强模式或响度增强模式下,电子设备100可以使用左扬声器播放左声道的数据,使用右扬声器播放右声道的数据。在全能增强模式下,电子设备100可以使用左扬声器播放左声道的数据和中置声道的数据,使用右扬声器播放右声道的数据和中置声道的数据。
若电子设备100包括三个扬声器,其中,在环绕增强模式、节奏增强模式、对白增强模式或响度增强模式下,电子设备100可以使用左扬声器播放左声道的数据,使用右扬声器播放右声道的数据,使用另一个扬声器播放左声道的数据和右声道的数据。在全能增强模式下,电子设备100可以使用左扬声器播放左声道的数据,使用右扬声器播放右声道的数据,使用另一个扬声器(在此可以称为中置扬声器)播放中置声道的数据。
以此类推,当电子设备100包括更多数量的扬声器时,在环绕增强模式、节奏增强模式、对白增强模式或响度增强模式下,电子设备100可以使用左扬声器播放左声道的数据,使用右扬声器播放右声道的数据,使用其他扬声器播放左声道的数据和右声道的数据。在全能增强模式下,电子设备100可以使用左扬声器播放左声道的数据,使用右扬声器播放右声道的数据,使用中置扬声器播放中置声道的数据,使用其他扬声器播放左声道的数据、右声道的数据和中置声道的数据,或者,使用其他扬声器播放中置声道的数据。
在一些示例中,电子设备200被佩戴时,位于用户左耳附近的扬声器可称为左扬声器,可以用于播放第二音频数据的左环绕声道的数据或左声道的数据,位于用户右耳附近的扬声器可称为右扬声器,可以用于播放第二音频数据的右环绕声道的数据或右声道的数据。
其中,中置扬声器用于播放中置声道的数据,该中置扬声器的位置不做限定,在一些示例中,中置扬声器可以为位于左右扬声器中间的一个或多个扬声器。例如,若电子设备200为智能眼镜,左扬声器可以位于左镜腿,右扬声器可以位于右镜腿,中置扬声器可以位于鼻托处。
在一些实施例中,电子设备100位于用户视线前方,电子设备200佩戴于用户头部。电子设备100负责音源数据的前端左声道和前端右声道的播放,电子设备200负责音源数据的后端左环绕声道和后端右环绕声道的播放。这样,由于电子设备100的扬声器位于用户视线前方,电子设备200的扬声器位于用户耳朵附近,基于电子设备100、电子设备200与用户的相对位置关系,电子设备100负责播放前端声道,电子设备200负责后端声道,模拟3D环绕音的播放情景,可以使得用户感受自身处于3D空间的中心位置,可以听到前方和后方传来的声音,让用户获得沉浸式体验。
其中,电子设备100可以获取电子设备200的设备信息,基于电子设备200的设备信息,确定出电子设备200的设备类型,再基于电子设备200的设备类型(例如,耳机、眼镜、手表或音响等)确定出电子设备100和电子设备200可以实现的一个或多个音效模式。其中,若电子设备200的设备类型为手表,电子设备100确定出的音效模式包括对白增强模式、响度增强模式和节奏增强模式。若电子设备200的设备类型为耳机、眼镜、音响,电子设备100确定出的音效模式包括全能增强模式、环绕增强模式、对白增强模式、响度增强模式和节奏增强模式。其中,电子设备100和电子设备200在某个音效模式下,播放音频的描述可以参见上述表1所示实施例,在此不再赘述。
其中,电子设备100可以在和电子设备200建立通信连接时,获取电子设备200的设备信息,该电子设备200的设备信息可以包括但不限于电子设备200的设备名称。在一些示例中,电子设备200的设备名称包括电子设备200的设备类型,例如,电子设备200的设备名称为“xwatch”时,电子设备100确定出电子设备200的设备名称包括“watch”,可以据此确定出电子设备200的设备类型为手表。再例如,电子设备200的设备名称包括“eye”、“glass”等字符串时,可以据此确定出电子设备200的设备类型为眼镜。电子设备200的设备名称包括“audio”等字符串时,可以据此确定出电子设备200的设备类型为音响。电子设备200的设备名称包括“ear”、“earphone”、“headphone”等字符串时,可以据此确定出电子设备200的设备类型为耳机。
在另一些示例中,电子设备200的设备名称为厂家设置的产品型号,电子设备100可以基于存储有设备名称和设备型号的对应关系,基于电子设备200的设备名称和该对应关系得到对应的设备类型。可选的,电子设备200存储有电子设备200的设备类型,电子设备100可以直接从电子设备200处获取电子设备200的设备类型。可选的,电子设备100可以响应于针对第一控件的输入,获取电子设备200的设备信息。
在一些实施例中,电子设备100可以获取电子设备200支持的声道信息或电子设备200的扬声器数量信息,确定出电子设备100和电子设备200可以实现的一个或多个音效模式。具体的,可以参见图1B所示实施例,在此不再赘述。
在一种可能的实现方式中,电子设备100还可以提供智能增强模式,当电子设备100未接收到选择任一音效模式的输入时,电子设备100可以将音效模式设置为智能增强模式。智能增强模式为电子设备100和电子设备200支持的一个或多个音效模式中的某一个或多个的组合。这样,电子设备100可以在用户未选择音效模式时,电子设备100也可以以智能增强模式处理播放的音源数据,使得电子设备100和电子设备200共同播放声音。
具体的,当电子设备100和电子设备200支持的音效模式包括全能增强模式、环绕增强模式、对白增强模式、响度增强模式和节奏增强模式时,电子设备100可以基于是否播放视频,确定出智能增强模式。其中,当电子设备100同时播放视频和音频时,智能增强模式可以为环绕增强模式和对白增强模式的组合,也就是说,第一音频数据包括左声道、右声道,第二音频数据包括左环绕声道、右环绕声道和中置声道。当电子设备100仅播放音频时,智能增强模式可以为响度增强模式和节奏增强模式的组合,也就是说,第一音频数据包括左声道、右声道,第二音频数据包括左声道、右声道和低频声道。
当电子设备100和电子设备200支持的音效模式为对白增强模式、节奏增强模式或响度增强模式时,电子设备100可以将智能增强模式设置为节奏增强模式和响度增强模式的组合,也就是说,第一音频数据包括左声道和右声道。第二音频数据包括左声道、右声道和低频声道。
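作为上述智能增强模式组合选择的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,函数名与接口均为本文假设,声道划分取自上文描述):

```python
# 示意:根据支持的音效模式与是否播放视频,确定智能增强模式下两端的声道划分

def smart_mode_channels(supported_modes, playing_video):
    if "环绕增强模式" in supported_modes and playing_video:
        # 播放视频:环绕增强模式 + 对白增强模式的组合
        return (["左声道", "右声道"],
                ["左环绕声道", "右环绕声道", "中置声道"])
    # 仅播放音频,或仅支持对白/节奏/响度:节奏增强模式 + 响度增强模式的组合
    return (["左声道", "右声道"],
            ["左声道", "右声道", "低频声道"])
```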
需要说明的是,由于智能增强模式为电子设备100基于电子设备100和电子设备200支持的音效模式得到的,电子设备100确定出的一个或多个音效模式始终包括智能增强模式。
电子设备100可以显示该一个或多个音效模式对应的音效模式选项,一个或多个音效模式选项包括第一模式选项,一个或多个音效模式包括第一模式,第一模式选项和第一模式相对应。
S304.电子设备100接收到选中第一模式选项的第二输入。
电子设备100显示包括第一模式选项的一个或多个音效模式选项时,可以接收选中第一模式选项的第二输入,确定出基于第一模式处理音源数据。其中,第二输入可以为单击、双击、语音指令输入等。
S305.电子设备100基于音源数据和第一模式,得到电子设备100播放的第一音频数据和电子设备200播放的第二音频数据。
其中,电子设备100正在播放音源数据,电子设备100可以响应于第二输入,基于第一模式处理音源数据,得到第一音频数据和第二音频数据。在一些示例中,电子设备100可以在接收到第二输入之前,接收用户播放音源数据的输入。
或者,电子设备100在接收到第二输入后,将电子设备100(和电子设备200)设置为在第一模式下播放音频数据。电子设备100可以在接收到播放音源数据的输入后,基于第一模式处理音源数据,得到第一音频数据和第二音频数据。
其中,电子设备100基于第一模式,得到第一音频数据和第二音频数据的流程如图4所示:
S401.电子设备100获取音源数据的声道数量。
电子设备100获取待播放的音源数据的声道数量。
S402.音源数据的声道数量是否小于等于2。
电子设备100判断音源数据的声道数量是否小于等于2,当电子设备100判定出音源数据的声道数量小于等于2时,执行步骤S403;当电子设备100判定出音源数据的声道数量大于2时,执行步骤S406。
S403.音源数据是否为双声道音源,双声道包括左声道与右声道。
电子设备100判断音源数据是否为双声道音源,即,音源数据是否仅包括左声道和右声道。当电子设备100判定出音源数据不是双声道音源时,可以执行步骤S404;当电子设备100判定出音源数据为双声道音源时,可以执行步骤S405。
S404.电子设备100复制单声道音源数据,得到双声道音源数据。
电子设备100判定出不是双声道音源,在此,可以理解为电子设备100判定出音源数据为单声道音源数据。电子设备100可以复制单声道音源数据,得到双声道音源数据。
在一些示例中,电子设备100可以直接拷贝单声道音源数据,得到两段单声道音源数据,并将其中一段单声道音源数据作为左声道的音频数据,另一段作为右声道的音频数据,得到包括左声道和右声道的音源数据。或者,电子设备100拷贝得到两段单声道音源数据后,可以针对该两段单声道音源数据通过指定算法进行处理,调整该两段音源数据的相位差、幅值和频率中的一项或多项,得到包括左声道和右声道的音源数据。
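作为上述复制处理的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,对其中一段数据做幅值调整的方式为本文假设,仅用于说明):

```python
# 示意:复制单声道音源数据得到双声道音源数据

def mono_to_stereo(samples, right_gain=1.0):
    left = list(samples)                        # 一段作为左声道的数据
    right = [s * right_gain for s in samples]   # 另一段作为右声道的数据,可调整幅值
    return {"左声道": left, "右声道": right}

print(mono_to_stereo([0.1, 0.3, -0.2]))
```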
S405.电子设备100基于第一模式,处理双声道音源数据,得到第一音频数据和第二音频数据。
若第一模式为全能增强模式,电子设备100可以将双声道音源数据上混得到5.1声道音源数据,再将低频声道叠加至左声道、右声道和中置声道,并从该5.1声道音源数据中提取左声道的数据、右声道的数据和中置声道的数据,得到第一音频数据,从该5.1声道音源数据中提取左环绕声道的数据、右环绕声道的数据和中置声道的数据,得到第二音频数据。
若第一模式为环绕增强模式,电子设备100可以将双声道音源数据作为第一音频数据,电子设备100还可以将双声道音源数据,经过相关算法处理,得到包括左环绕声道和右环绕声道的第二音频数据。
若第一模式为节奏增强模式,电子设备100可以将双声道音源数据作为第一音频数据,电子设备100还可以提取双声道音源数据中的低频声道的数据,得到包括低频声道的第二音频数据。
若第一模式为对白增强模式,电子设备100可以将双声道音源数据作为第一音频数据,电子设备100还可以通过相关算法处理双声道音源数据,提取双声道音源数据中的中置声道的数据,得到包括中置声道的第二音频数据。
若第一模式为响度增强模式,电子设备100可以将双声道音源数据作为第一音频数据和第二音频数据。
S406.音源数据的声道数量是否小于等于5.1,其中,5.1声道包括左声道、右声道、中置声道、左环绕声道、右环绕声道与低频声道。
电子设备100可以判断音源数据的声道数量是否小于等于5.1。电子设备100可以在判定出音源数据的声道数量小于等于5.1时,执行步骤S407;电子设备100可以在判定出音源数据的声道数量大于5.1时,执行步骤S409。
其中,声道数量小于5.1可以理解为音源数据包括的声道的数量小于5.1声道包括的声道的数量。例如,音源数据的声道为4声道、4.1声道、3声道、3.1声道或2.1声道时,声道数量小于5.1。其中,4声道可以理解为音源数据包括左声道、右声道、左环绕声道和右环绕声道,4.1声道可以理解为音源数据包括左声道、右声道、低频声道、左环绕声道和右环绕声道,以此类推。
同理,声道数量大于5.1可以理解为音源数据包括的声道的数量大于5.1声道包括的声道的数量。例如,音源数据的声道为7.1声道、10.1声道时,声道数量大于5.1。其中,7.1声道可以理解为音源数据包括左声道、右声道、左前环绕声道、右前环绕声道、左后环绕声道、右后环绕声道、中置声道和低频声道。
S407.音源数据是否为5.1声道音源。
电子设备100可以判断音源数据是否为5.1声道音源。电子设备100可以在判定出音源数据为5.1声道音源数据时,执行步骤S410;电子设备100可以在判定出音源数据不为5.1声道音源数据时,执行步骤S408。
S408.将音源数据上混得到5.1声道音源数据。
电子设备100判定出音源数据的声道数量小于5.1,可以经过指定上混算法,将音源数据上混得到5.1声道音源数据。
S409.将音源数据下混得到5.1声道音源数据。
电子设备100判定出音源数据的声道数量大于5.1,可以经过指定下混算法,将音源数据下混得到5.1声道音源数据。
S410.电子设备100基于第一模式,处理5.1声道音源数据,得到第一音频数据和第二音频数据。
若第一模式为全能增强模式,电子设备100可以从5.1声道音源数据中提取左环绕声道、右环绕声道和中置声道,得到第二音频数据。电子设备100还可以将音源数据中的低频声道叠加至左声道、右声道和中置声道。再从该叠加后的音源数据中提取左声道、右声道和中置声道,得到第一音频数据。可以理解的是,若将低频声道叠加到左环绕声道和右环绕声道,会影响环绕效果,因此,在本申请优选实施例中,只将低频声道叠加至左声道、右声道和中置声道。可选的,若未处理的音源数据仅包括左声道、右声道、左环绕声道、右环绕声道和中置声道,电子设备100可以直接提取音源数据的对应声道的数据得到第一音频数据和第二音频数据。
若第一模式为环绕增强模式,电子设备100可以提取5.1声道音源数据中的左环绕声道的数据和右环绕声道的数据,得到第二音频数据。电子设备100可以将音源数据的低频声道和中置声道依次叠加至左声道、右声道。再提取处理后的音源数据的左声道的数据和右声道的数据,得到第一音频数据。可选的,若未处理的音源数据仅包括左声道、右声道、左环绕声道和右环绕声道,电子设备100可以直接提取音源数据的对应声道的数据得到第一音频数据和第二音频数据。
若第一模式为节奏增强模式,电子设备100可以将5.1声道音源数据的左声道、右声道、左环绕声道、右环绕声道和中置声道经过下混算法处理,下混得到左声道的数据和右声道的数据。经过处理后的音源数据仅包括左声道、右声道和低频声道。之后,电子设备100可以提取处理后的音源数据的左声道的数据和右声道的数据,得到第一音频数据,电子设备100还可以提取处理后的音源数据中的低频声道的数据,得到包括低频声道的第二音频数据。可选的,若未处理的音源数据仅包括左声道、右声道和低频声道,电子设备100可以直接提取音源数据的对应声道的数据得到第一音频数据和第二音频数据。
若第一模式为对白增强模式,电子设备100可以将5.1声道音源数据的左声道、右声道、左环绕声道、右环绕声道和低频声道经过相关下混算法处理,下混得到左声道的数据和右声道的数据。经过处理后的音源数据仅包括左声道、右声道和中置声道。之后,电子设备100可以提取处理后的音源数据的左声道的数据和右声道的数据,得到第一音频数据,电子设备100还可以提取处理后的音源数据中的中置声道的数据,得到包括中置声道的第二音频数据。可选的,若未处理的音源数据仅包括左声道、右声道和中置声道,电子设备100可以直接提取音源数据的对应声道的数据得到第一音频数据和第二音频数据。
若第一模式为响度增强模式,电子设备100可以将5.1声道音源数据经过相关下混算法处理,下混得到仅包括左声道和右声道的音源数据。电子设备100可以将处理后的音源数据作为第一音频数据和第二音频数据。
若第一模式为智能增强模式,且电子设备100和电子设备200支持的音效模式包括全能增强模式、环绕增强模式、对白增强模式、响度增强模式和节奏增强模式,电子设备100在播放音频时,智能增强模式可以为节奏增强模式和响度增强模式的组合,电子设备100可以基于节奏增强模式处理音源数据,得到低频声道的数据。并基于响度增强模式处理音源数据,得到左声道的数据和右声道的数据。电子设备100可以将包括低频声道的数据、左声道的数据和右声道的数据的第二音频数据发送给电子设备200。
电子设备100在播放视频时,智能增强模式可以为环绕增强模式和对白增强模式的组合,电子设备100可以基于对白增强模式处理音源数据,得到中置声道的数据。并基于环绕增强模式处理音源数据,得到左环绕声道的数据和右环绕声道的数据。电子设备100可以将包括中置声道的数据、左环绕声道的数据和右环绕声道的数据的第二音频数据发送给电子设备200。
同理,若电子设备100和电子设备200支持的音效模式包括对白增强模式、节奏增强模式和响度增强模式。智能增强模式可以为节奏增强模式和响度增强模式的组合,电子设备100可以基于节奏增强模式处理音源数据,得到低频声道的数据。并基于响度增强模式处理音源数据,得到左声道的数据和右声道的数据。电子设备100可以将包括低频声道的数据、左声道的数据和右声道的数据的第二音频数据发送给电子设备200。
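作为理解图4整体流程的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,其中的上混/下混为占位实现:缺失声道以静音填充、多余声道丢弃;声道提取方案取自上文表1,仅用于说明分支与提取逻辑):

```python
# 示意:按图4的流程处理音源数据,得到第一音频数据和第二音频数据

CH51 = ["左声道", "右声道", "中置声道", "左环绕声道", "右环绕声道", "低频声道"]

MODE_PLAN = {
    "全能增强模式": (["左声道", "右声道", "中置声道"],
               ["左环绕声道", "右环绕声道", "中置声道"]),
    "环绕增强模式": (["左声道", "右声道"], ["左环绕声道", "右环绕声道"]),
    "对白增强模式": (["左声道", "右声道"], ["中置声道"]),
    "节奏增强模式": (["左声道", "右声道"], ["低频声道"]),
    "响度增强模式": (["左声道", "右声道"], ["左声道", "右声道"]),
}

def to_51(src):
    """占位的上混/下混:把任意声道布局规整为5.1声道。"""
    base = next(iter(src.values()))
    silence = [0.0] * len(base)
    return {c: list(src.get(c, silence)) for c in CH51}

def process(src, mode):
    """src为{声道名: 采样序列}的字典,返回(第一音频数据, 第二音频数据)。"""
    if len(src) == 1:                  # 单声道:先复制得到双声道
        mono = next(iter(src.values()))
        src = {"左声道": list(mono), "右声道": list(mono)}
    if set(src) != set(CH51):          # 非5.1:上混或下混为5.1
        src = to_51(src)
    first_ch, second_ch = MODE_PLAN[mode]
    return ({c: src[c] for c in first_ch},
            {c: src[c] for c in second_ch})

first, second = process({"单声道": [0.1, 0.2]}, "节奏增强模式")
print(first, second)
```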
在一些实施例中,电子设备100可以基于支持的所有音效模式,处理音源数据,得到电子设备100和电子设备200支持的所有音效模式的所有声道的数据。电子设备100可以将所有声道的数据(可以统称为第二音频数据)发送给电子设备200。电子设备100还可以将当前所处的音效模式发送给电子设备200,电子设备200可以基于当前的音效模式,播放对应的声道的数据。或者,电子设备100可以基于当前选中的音效模式,确定出电子设备200播放的音频数据包括哪些声道,并指示电子设备200播放对应的声道的数据。这样,电子设备100可以将电子设备200可能使用的不同声道的数据都发送给电子设备200,电子设备200可以基于当前的音效模式,播放音效模式指示的声道的音频数据。或者,电子设备100可以基于音效模式指示电子设备200播放某一个或多个声道的数据。当电子设备100的音效模式发生改变时,电子设备100不需要再次按照修改后的音效模式处理音源数据,电子设备200可以直接播放改变后的音效模式对应的音频数据。
例如,若电子设备100和电子设备200支持的音效模式包括全能增强模式、环绕增强模式、对白增强模式、响度增强模式和节奏增强模式,电子设备100发送给电子设备200的第二音频数据包括基于该多个音效模式得到的左声道的数据、右声道的数据、左环绕声道的数据、右环绕声道的数据、中置声道的数据和低频声道的数据。例如,电子设备100处于智能增强模式时,电子设备100可以在播放视频时,通知电子设备200播放左环绕声道的数据、右环绕声道的数据以及中置声道的数据。再例如,电子设备100处于节奏增强模式时,通知电子设备200播放低频声道的数据。
S306.电子设备100将第二音频数据发送给电子设备200。
电子设备100可以在得到第一音频数据和第二音频数据后,将第二音频数据发送至电子设备200。电子设备100可以播放第一音频数据,电子设备200可以播放第二音频数据。
在一些实施例中,电子设备200接收到第二音频数据后,还可以针对第二音频数据进行音频处理操作,再播放该处理后的第二音频数据。例如,音频处理操作可以为调整第二音频数据的响度。
在一些示例中,电子设备200可以基于第二音频数据的振幅,识别第二音频数据中的小信号音频,并提高细小声音的响度。其中,小信号音频为第二音频数据中,响度处于-35dB及以下范围内的音频信号。这样,由于小信号音频的响度较小,人耳感知不明显,提高小信号音频的响度,有利于用户更加清晰地收听小信号音频,加强用户对音频细节的感知。例如,若第二音频数据为游戏应用的音频数据,小信号音频可以为游戏角色引发游戏场景中环境变化的声音(例如,游戏角色经过草丛发出的窸窣声、游戏角色的脚步声、汽车驶过的声音等等)。电子设备200可以提高小信号音频的音量,加强游戏的沉浸感,提高用户游戏体验。再例如,若第二音频数据为视频应用提供的音频数据,该小信号音频可以为视频中的环境音(例如,虫鸣、鸟鸣、风声等等)。在本申请的优选方案中,仅在电子设备100处于响度增强模式时,电子设备200才执行该针对第二音频数据的音频处理操作。
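作为上述小信号识别与提升处理的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,逐采样判断为本文的简化写法,实际实现通常按帧计算响度;-35dB阈值取自上文,增益取值为假设):

```python
# 示意:识别响度低于-35dB的小信号并提升其响度
import math

def boost_small_signals(samples, threshold_db=-35.0, gain=2.0):
    out = []
    for s in samples:
        level_db = 20 * math.log10(abs(s)) if s != 0 else float("-inf")
        out.append(s * gain if level_db <= threshold_db else s)
    return out

print(boost_small_signals([0.5, 0.01, -0.005]))  # 仅放大幅值很小的采样
```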
在一些实施例中,电子设备100可以在播放视频时,通过视频识别算法,识别视频画面中多个人物的位置,并且可以从视频的音源数据中提取该位置的人物的人声数据。电子设备100可以据此得到包括在视频画面中离显示屏较近的一个或多个人物的人声数据(即中置声道的数据)的第二音频数据,得到包括在视频画面中离显示屏较远的一个或多个人物的人声数据(即中置声道的数据)的第一音频数据。这样,电子设备100和用户之间的距离大于电子设备200和用户之间的距离,通过电子设备100播放离显示屏更远的人物的人声,电子设备200播放离显示屏更近的人物的人声,使得用户可以感受到不同位置的人声,提高用户看视频的沉浸感。
需要说明的是,第一音频数据包括至少一个人物的人声数据,第二音频数据包括除了第一音频数据的人声数据对应的人物以外的所有人物的人声数据。以视频画面包括3个人为例,电子设备100可以将离显示屏最近的1个人的人声数据放入第二音频数据,将另外2个人的人声数据放入第一音频数据。或者,电子设备100可以将离显示屏最近的2个人的人声数据放入第二音频数据,将另外1个人的人声数据放入第一音频数据,本申请实施例对此不作限定。
在一些示例中,音源数据包括多个中置声道,该多个中置声道与视频画面中的人物一一对应。其中,一个中置声道的数据为视频画面中一个人物的人声数据。电子设备100可以识别视频画面中各个人物的位置,得到包括在视频画面中离显示屏较近的一个或多个人物的人声数据(即中置声道的数据)的第二音频数据,得到包括在视频画面中离显示屏较远的一个或多个人物的人声数据(即中置声道的数据)的第一音频数据。
在一些示例中,电子设备100可以在播放视频的音源数据时,判断播放的视频与音乐场景是否相关。其中,音乐场景可以包括但不限于演唱会场景、音乐短片(music video,MV)场景、歌唱比赛场景、演奏场景等等。电子设备100可以在判断出视频与音乐场景相关时,将智能增强模式设置为节奏增强模式与响度增强模式的组合。电子设备100可以在判断出视频与音乐场景无关时,将智能增强模式设置为对白增强模式与环绕增强模式的组合。
在一些示例中,电子设备100可以基于视频的名称判断该视频与音乐场景是否相关。例如,当视频的名称包括但不限于“唱”、“音乐”、“演奏”、“歌”等与音乐相关的字词时,判定出该视频与音乐场景相关。
在另一些示例中,电子设备100可以通过图像识别算法,识别视频画面中是否有人物发生唱歌、演奏乐器等动作,电子设备100可以在识别出视频画面中的人物在演奏、唱歌时,判定出视频与音乐场景相关。
需要说明的是,音源数据可以为视频文件中的音频数据,音源数据也可以为视频文件对应的音频文件中的音频数据。
还需要说明的是,音源数据可以存储在电子设备100的存储器中,或者,音源数据可以为电子设备100从其他电子设备(例如,服务器,电子设备200等)处获取的。
在一些实施例中,在电子设备100和电子设备200协同播放音源数据时,电子设备100或电子设备200可以在接收到调节音量的输入后,同时调整电子设备100和电子设备200的音量。这样,电子设备100和电子设备200共同播放音源数据时,不会出现一个电子设备的音量减小(或增大),另一个电子设备的音量不变,导致用户听到的声音音量不协调,甚至不能听到音量较小的电子设备的声音的情形。
例如,电子设备100可以在接收到增加(或减少)音量的输入时,增大(或减小)第一音频数据以及发送给电子设备200的第二音频数据的振幅,达到增加(或减少)电子设备的音量的效果。或者,电子设备100可以在接收到调节音量的输入时,通过系统服务(例如音量调节服务),将电子设备100的音量设置为调节后的音量值。并且,电子设备100可以将调节后的音量值发送给电子设备200,电子设备200可以将音量也设置为该调节后的音量值。
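作为上述同步调节音量逻辑的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,远端设备代理remote及其set_volume接口均为本文假设):

```python
# 示意:协同播放时同步调节两端设备的音量

class CoPlayVolume:
    def __init__(self, remote):
        self.local_volume = 0.5
        self.remote = remote            # 假设的远端设备代理,提供set_volume()

    def on_volume_input(self, value):
        self.local_volume = value       # 电子设备100将自身音量设置为调节后的值
        self.remote.set_volume(value)   # 并将调节后的音量值发送给电子设备200
```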
在一些实施例中,电子设备100可以基于电子设备100与电子设备200的距离调整电子设备100播放第一音频数据的音量。这样,用户在佩戴电子设备200时,由于用户的移动导致电子设备200和电子设备100的距离改变,电子设备100可以通过调整播放音量,保证用户的收听体验,使得用户不会听到电子设备100的声音突然变小或变大。
具体的,电子设备100可以获取电子设备100和电子设备200的距离。例如,电子设备100可以通过Wi-Fi测距等方式获取电子设备100与电子设备200的距离。电子设备100可以在检测到电子设备100与电子设备200的距离增加时,增大电子设备100的音量。电子设备100可以在检测到电子设备100与电子设备200的距离减小时,减小电子设备100的音量。
在另一些实施例中,电子设备100可以基于电子设备100与电子设备200的距离调整电子设备200播放第二音频数据的音量。例如,电子设备100可以通过调整第二音频数据的振幅,调整电子设备200播放第二音频数据的音量。
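作为上述基于距离调整音量逻辑的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,线性映射关系与各参数取值均为本文假设):

```python
# 示意:基于两台设备的距离调整播放音量

def volume_for_distance(distance_m, base_volume=0.5,
                        ref_distance_m=1.0, slope=0.2):
    """距离增大时增大音量,距离减小时减小音量,结果限制在[0, 1]。"""
    v = base_volume + slope * (distance_m - ref_distance_m)
    return max(0.0, min(1.0, v))

print(volume_for_distance(2.0))  # 距离变远,音量增大
print(volume_for_distance(0.5))  # 距离变近,音量减小
```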
接下来结合应用场景,示例性地介绍本申请实施例提供的音频播放方法。
在一些应用场景中,以电子设备200为智能眼镜为例,介绍本申请实施例提供的音频播放方法。
如图5所示,电子设备100可以为平板电脑,电子设备200为智能眼镜。需要说明的是,不限于平板电脑,电子设备100还可以为其他包括显示屏和多个扬声器的电子设备,例如,手机。
其中,电子设备100包括一个或多个扬声器,该一个或多个扬声器包括扬声器A和扬声器B。其中,扬声器A位于显示屏的左侧,扬声器B位于显示屏的右侧。电子设备200包括一个或多个扬声器,该一个或多个扬声器包括扬声器C和扬声器D。其中,电子设备200被佩戴时,扬声器C位于用户的左耳附近,扬声器D位于用户的右耳附近。电子设备100和电子设备200都支持播放包括左右声道的音频数据。需要说明的是,由于图5中描绘视角的限制,扬声器B和扬声器D被遮挡,无法在图5中示出。
其中,电子设备100和电子设备200在第一模式下播放音频数据的步骤如下所示:
电子设备100和电子设备200开启蓝牙功能。电子设备100可以在蓝牙界面中显示一个或多个蓝牙设备选项,蓝牙设备选项可以表示附近搜索到的蓝牙设备,可以用于触发电子设备100和该蓝牙设备选项对应的电子设备建立通信连接。该一个或多个蓝牙设备选项包括用于触发和电子设备200建立蓝牙连接的蓝牙设备选项。
电子设备100可以响应于针对用于触发和电子设备200建立蓝牙连接的蓝牙设备选项的输入,和电子设备200建立蓝牙连接。蓝牙连接仅为示例,不限于蓝牙连接,电子设备100和电子设备200也可以建立其他无线或有线通信连接,本申请实施例对此不作限定。
电子设备100可以在和电子设备200建立蓝牙连接后,显示如图6A所示的蓝牙界面601。
示例性的,如图6A所示,电子设备100显示的蓝牙界面601可以包括一个或多个蓝牙设备选项。该一个或多个蓝牙设备选项包括蓝牙设备选项603,该蓝牙设备选项603还可以显示有电子设备200的设备名称,在此,电子设备200的设备名称为“Eyewear”。可选的,蓝牙设备选项603还可以显示连接状态信息,该连接状态信息可以用于表示电子设备100和电子设备200的连接状态(例如,未连接,已连接等),在此,连接状态为已连接。
蓝牙界面601还可以包括选项卡604,选项卡604包括独立音频选项605和协同音频选项606。其中,独立音频选项605可以用于触发电子设备100通过电子设备200播放声音。协同音频选项606可以用于触发电子设备100通过电子设备100和电子设备200同时播放声音。可选的,选项卡604还可以显示电子设备200的设备图像。
电子设备100可以接收针对协同音频选项606的输入(例如单击),响应于该输入,确定电子设备100和电子设备200支持的一个或多个音效模式。其中,电子设备100可以基于电子设备200的设备名称,确定出电子设备200的设备类型为智能眼镜,并基于电子设备200的设备类型确定一个或多个音效模式,该确定一个或多个音效模式的具体描述可以参见上述实施例,在此不再赘述。
电子设备100可以在确定出电子设备100和电子设备200支持的一个或多个音效模式后,显示如图6B所示的音效选择界面611。
如图6B所示,电子设备100显示音效选择界面611,音效选择界面611包括音效模式列表612,该音效模式列表612包括一个或多个音效模式选项,该一个或多个音效模式选项和电子设备100确定出的一个或多个音效模式相对应。音效模式选项可以包括音效模式的标识,音效模式选项可以用于触发电子设备100基于音效模式选项指示的音效模式处理音源数据。其中,该一个或多个音效模式选项可以包括但不限于音效模式选项612A、音效模式选项612B、音效模式选项612C、音效模式选项612D、音效模式选项612E以及音效模式选项612F。其中,音效模式选项612A对应智能增强模式,音效模式选项612B对应响度增强模式,音效模式选项612C对应环绕增强模式,音效模式选项612D对应对白增强模式,音效模式选项612E对应节奏增强模式,音效模式选项612F对应全能增强模式,其中,该一个或多个音效模式选项可以包括对应的音效模式的名称。其中,各个音效模式的描述可以参见图3所示实施例,在此不再赘述。在此,音效模式选项612A处于选中状态。可选的,音效选择界面611还可以包括返回控件,返回控件可以用于触发电子设备100返回上一级界面。
接下来以音效模式选项612A被选中为例,介绍后续步骤。示例性的,当电子设备100的音效模式选项612A被选中后,电子设备100可以接收到返回桌面的输入,显示包括一个或多个应用图标的桌面,该一个或多个应用图标可以包括音乐应用图标。电子设备100可以接收到针对音乐应用图标的输入,显示音乐播放界面621。
其中,音乐播放界面621可以包括但不限于歌曲名称,播放控件等。其中,播放控件可以用于触发电子设备100播放歌曲名称指示的歌曲。
电子设备100接收到针对播放控件的输入后,响应于该输入,基于智能增强模式处理音源数据(即,歌曲的音频数据),得到第一音频数据和第二音频数据,其中,电子设备100基于音效模式处理音源数据的描述可以参见图4所示实施例,在此不再赘述。
电子设备100得到第一音频数据和第二音频数据后,可以将第二音频数据发送至电子设备200。电子设备100和电子设备200可以同时开始播放各自的音频数据,如图6C所示。电子设备100通过扬声器A和扬声器B播放第一音频数据,电子设备200通过扬声器C和扬声器D播放第二音频数据。可以理解的是,电子设备100正在播放歌曲,电子设备100取消显示播放控件,并显示暂停控件623,暂停控件623可以用于触发电子设备100和电子设备200停止播放音频数据。
这样,电子设备100可以给用户提供多种音效模式,实现不同的音频播放效果,提升用户体验。
在一些示例中,若电子设备200仅包括一个扬声器,电子设备100和电子设备200支持的音效模式为对白增强模式、响度增强模式或节奏增强模式。电子设备100只显示智能增强模式、对白增强模式、响度增强模式或节奏增强模式对应的音效模式选项。
在另一些示例中,若电子设备200包括至少三个扬声器,分别位于左右镜腿和鼻托处。电子设备100和电子设备200支持的音效模式为全能增强模式,电子设备200可以通过鼻托处的扬声器播放第二音频数据的中置声道的数据,通过用户左耳附近的扬声器播放第二音频数据中的左声道的数据,通过用户右耳附近的扬声器播放第二音频数据中的右声道的数据。电子设备100可以只显示智能增强模式和全能增强模式对应的音效模式选项。
在一些实施例中,电子设备100在显示一个或多个音效模式选项时,还可以显示百分比栏,该百分比栏可以用于在电子设备100处理音源数据得到第二音频数据时,基于百分比栏的数值调整第二音频数据。这样,电子设备100可以接收用户对百分比栏的调整,设置音效模式的实现效果,选择适合的实现效果的音效模式。
例如,在响度增强模式中,可以通过改变百分比栏的数值,改变电子设备200的第二音频数据的振幅,以调整电子设备200播放第二音频数据的音量。其中,百分比栏的数值越大,电子设备200播放第二音频数据的音量越高,百分比栏的数值越小,电子设备200播放第二音频数据的音量越低。
在环绕增强模式中,可以通过改变百分比栏的数值,改变环绕声道处理算法中的环绕因子的值,以调整环绕音效的播放效果。其中,环绕因子的值越大,环绕声道的数据模拟的虚拟音源的位置距离用户越远,环绕效果越明显。即,随着环绕因子的增大,用户收听电子设备200播放环绕声道的数据时,感知到自身与音源的距离也随之增大。其中,百分比栏的数值越大,环绕因子的值越大,环绕效果越明显。百分比栏的数值越小,环绕因子的值越小,环绕效果越不明显。
在对白增强模式中,可以通过改变百分比栏的数值,改变电子设备200的第二音频数据的振幅,以调整电子设备200播放第二音频数据的音量。其中,百分比栏的数值越大,电子设备200播放第二音频数据的音量越高,人声越突出。百分比栏的数值越小,电子设备200播放第二音频数据的音量越低,人声越不明显。
在节奏增强模式中,可以通过改变百分比栏的数值,改变电子设备200的第二音频数据的振幅,以调整电子设备200播放第二音频数据的音量。其中,百分比栏的数值越大,电子设备200播放第二音频数据的音量越高,低频越明显。百分比栏的数值越小,电子设备200播放第二音频数据的音量越低,低频越不明显。在一些示例中,电子设备200可以基于第二音频数据,控制马达振动。马达振动的强度由第二音频数据的振幅决定,当电子设备100基于百分比栏的数值调整第二音频数据的振幅,电子设备200振动的强度也随之改变。
在全能增强模式中,可以通过改变百分比栏的数值,改变电子设备200的第二音频数据的振幅,以调整电子设备200播放第二音频数据的音量。并且改变环绕声道处理算法中的环绕因子的值,以调整环绕音效的播放效果。其中,基于百分比栏的数值调整振幅和环绕因子的具体描述可以参见上述实施例,在此不再赘述。
在智能增强模式中,电子设备100可以基于百分比栏的数值,同步调整智能增强模式中的多个音效模式。
这样,可以根据百分比数值调整电子设备200播放的第二音频数据的响度,环绕声范围,震感强度等。
需要说明的是,百分比栏的值始终大于零,电子设备100和电子设备200按照音效模式,播放对应的音频数据。
在一些示例中,电子设备100的百分比栏被划分为十等分,电子设备100可以将各个音效模式的百分比栏的初始值设置为50%。如果百分比栏的值被调整为大于50%,则对应音效模式下声道处理的效果根据影响因子增强,如果百分比栏的值被调整为小于50%,则对应音效模式下声道处理的效果根据影响因子减弱。
例如,在环绕增强模式中,影响因子可以理解为环绕声道处理算法中的环绕因子。当百分比栏的值被调整到100%时,环绕声道处理算法中的环绕因子的值可以按照预先设置的对应关系增大,加强环绕声道的扩展效果,该扩展效果强于百分比栏的值为50%时的音效模式的效果。当百分比栏的值被调整到10%时,环绕声道处理算法中的环绕因子的值可以按照预先设置的对应关系减小,减弱环绕声道的扩展效果,该扩展效果弱于百分比栏的值为50%时的音效模式的效果。
这样,即使百分比栏的值为当前音效模式的最小值(即10%),环绕音效的环绕效果弱于百分比栏的值为50%时的音效模式的效果,相对于未开启协同发声模式,电子设备100和电子设备200共同播放音频数据,仍比单个电子设备单独播放音频数据带给用户的体验更好。
再例如,在响度增强模式、对白增强模式、节奏增强模式中,当百分比栏的值为50%时,第二音频数据的响度为5dB。当百分比栏的值被调整为小于50%时,例如被调整为20%,第二音频数据的响度被调整为2dB,在此,影响因子可以看做响度系数,当百分比栏的值为20%,响度系数的值为0.4。
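作为上述百分比栏数值到影响因子换算的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,以50%为基准的线性映射为本文假设,与上文20%对应响度系数0.4的例子一致):

```python
# 示意:百分比栏数值到影响因子的线性映射

def factor_from_percent(percent, base=1.0):
    """percent取(0, 100],50对应基准值;大于50增强,小于50减弱。"""
    return base * (percent / 50.0)

print(factor_from_percent(20))   # 0.4,对应上文响度系数为0.4的例子
print(factor_from_percent(100))  # 2.0,环绕因子增大,环绕效果更明显
```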
示例性的,如图6D所示,电子设备100在显示一个或多个音效模式选项时,可以显示有百分比栏641。百分比栏641可以用于改变电子设备100选择的音效模式的强度。百分比栏641可以包括滑块641A。电子设备100可以接收用户向左拖动滑块641A的输入,减小音效模式的强度。电子设备100可以接收用户向右拖动滑块641A的输入,增大音效模式的强度。可以理解的是,该减小/增大音效模式的强度的具体操作可以参见上述实施例,在此不再赘述。
可选的,百分比栏641还可以包括数值信息641B,数值信息641B可以用于表示百分比栏的数值,提示用户当前音效模式的效果所处的强度。
在此,电子设备100的百分比栏641的数值信息641B的值为默认数值50%。电子设备100可以直接将第二音频数据发送给电子设备200,电子设备200可以播放该第二音频数据。
电子设备100可以在接收到向左拖动滑块641A的输入时,降低百分比栏641的值,例如,该数值信息641B的值被降低为20%,电子设备100可以基于该百分比栏的数值,处理第二音频数据,再将第二音频数据发送至电子设备200,电子设备200可以播放处理后的第二音频数据。可以理解的是,百分比栏的值为50%的场景中环绕音效的效果比百分比栏的值为20%的场景中环绕音效的效果强。例如,图6D所示场景中,模拟的虚拟音源和用户的距离可以为20厘米,使得用户感觉音源的声音的宽度在电子设备100中轴线向左右延伸各0.2m的距离。在百分比栏的值由50%减小到20%后,虚拟声道的数据模拟的虚拟音源和用户的距离可以为50厘米,使得用户感觉音源的声音的宽度在电子设备100中轴线向左右延伸各0.5m的距离。虚拟音源距离用户越远,环绕效果越明显。可以理解的是,上述百分比栏的值和距离的数值的对应关系仅为示例,具体实现中该距离可以为其他数值,本申请实施例对此不作限定。
在一种可能的实现方式中,电子设备100可以在电子设备200和电子设备100的相对位置发生变化时,基于电子设备200的位姿数据处理第一音频数据,使得用户感知到音源位于用户的视线正前方。或者,基于电子设备200的位姿数据处理第二音频数据,使得用户感知到音源处于电子设备100处。这样,可以让用户感知音源位置,给用户带来更好的空间感。
如图7A所示,电子设备200和电子设备100的相对位置改变,在此,位置改变的原因为用户的头部转动导致电子设备200的位置发生变化。电子设备200可以将电子设备200的位姿数据发送给电子设备100,电子设备100可以基于位姿数据得到角度系数,角度系数可以表示电子设备200相对于电子设备100所处的方位。
不限于电子设备200位置移动导致电子设备200和电子设备100的相对位置改变,电子设备100的位置移动,也可以导致电子设备200和电子设备100的相对位置改变。无论哪种情况,电子设备100都可以基于电子设备200的位姿数据得到角度系数。可选的,电子设备100可以基于电子设备100的位姿数据和电子设备200的位姿数据,得到角度系数。
在一些示例中,电子设备100可以使用角度系数,改变第一音频数据中左声道的数据和右声道的数据的相位差,使得用户听到电子设备100播放的声音时,感觉音源位于用户视线前方。
具体的,电子设备200从图5所示的位置移动到如图7A所示的位置时,电子设备100可以基于电子设备200的位姿数据,得到角度系数,再基于角度系数处理第一音频数据,得到矫正后的第一音频数据。电子设备100播放基于角度系数矫正后的第一音频数据,使得用户感受到音源位置位于用户视线前方,如图7B所示。其中,电子设备200处于图5所示的位置时,电子设备100播放的第一音频数据,可以让用户感受到音源位于用户视线前方,如图7C中的(a)所示,扬声器A播放第一音频数据的左声道的数据,扬声器B播放第一音频数据的右声道的数据,扬声器A和扬声器B播放的声波信号可以使得用户认为音源处于图7C中的(a)所示的虚拟音源处。
电子设备200处于图7A或图7B所示的位置时,电子设备100播放的矫正后的第一音频数据,也可以让用户感受到音源位于用户视线前方,如图7C中的(b)所示,扬声器A播放第一音频数据的左声道的数据,扬声器B播放第一音频数据的右声道的数据,扬声器A和扬声器B播放的声波信号可以使得用户认为音源处于图7C中的(b)所示的虚拟音源处。可以理解的是,图7C中的(a)所示的虚拟音源的位置和图7C中的(b)所示的虚拟音源的位置相同。这样,可以保证用户无论如何移动,都能听到来自视线正前方的声音。
也就是说,电子设备100可以基于电子设备100与电子设备200的相对位置,处理第一音频数据,使得电子设备100的声音信号模拟的虚拟音源始终位于用户视线前方。这样,可以使得虚拟音源始终位于用户视线前方,让用户感受到声音永远跟随用户移动的音频播放体验。在一些示例中,电子设备100可以基于该相对位置关系,调节电子设备100播放的左声道的数据与右声道的数据的振幅、相位差中的一项或多项,得到使用电子设备100的声音信号模拟的位于用户视线前方的虚拟音源。
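作为上述基于角度系数调整左右声道相位差的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,以双耳时间差近似换算采样级延迟,其中双耳间距ear_gap_m、声速等参数取值均为本文假设):

```python
# 示意:基于角度系数对左右声道施加采样级延迟,近似搬移虚拟音源的方位
import math

def correct_stereo(left, right, angle_rad, sample_rate=48000,
                   ear_gap_m=0.2, sound_speed=343.0):
    delay_s = (ear_gap_m / sound_speed) * math.sin(angle_rad)
    n = int(round(abs(delay_s) * sample_rate))
    if n == 0:
        return left, right
    pad = [0.0] * n
    if delay_s > 0:      # 相对向右偏转:延迟右声道
        return left, pad + right[:len(right) - n]
    return pad + left[:len(left) - n], right  # 相对向左偏转:延迟左声道
```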
在一些实施例中,电子设备100可以调整电子设备100的扬声器的声音信号的波束方向,使其朝向电子设备200的方向。这样,由于用户佩戴电子设备200,将电子设备100的声音信号的波束方向调整对齐电子设备200的方向,便于用户收听电子设备100播放的声音信号。
例如,如图7C中的(a)所示,以北向为基准方向为例,示例性说明移动前后电子设备100的声音的波束变化。该电子设备100位于该电子设备200的时钟方位1(例如,12点钟方向),该电子设备100播放该第一音频数据时该扬声器A发送的声波波束的波束方向与北向的夹角为角度1,且该扬声器B发送的声波波束的波束方向与北向的夹角为角度2;
如图7C中的(b)所示,该电子设备100位于该电子设备200的时钟方位2(例如,10点钟方向),该电子设备100播放该第一音频数据时扬声器A发送的声波波束的波束方向与北向的夹角为角度3,且该扬声器B发送的声波波束的波束方向与北向的夹角为角度4,该时钟方位1与该时钟方位2不同,并且该角度1与该角度3不同,并且该角度2与该角度4不同。需要说明的是,北向仅为说明声音信号波束方向的对照方向,不限于北向,也可以使用其他任意方向作为对照方向,本申请实施例对此不作限定。
在另一些示例中,电子设备100可以使用角度系数,改变第二音频数据中左声道的数据和右声道的数据的相位差,使得用户听到电子设备200播放的声音时,感觉音源位于电子设备100处。
具体的,电子设备200从图5所示的位置移动到如图7A所示的位置时,电子设备100可以基于电子设备200的位姿数据,得到角度系数,再基于角度系数处理第二音频数据,得到矫正后的第二音频数据。电子设备100可以将矫正后的第二音频数据发送至电子设备200,电子设备200可以播放该基于角度系数矫正后的第二音频数据,使得用户感受到音源位置位于电子设备100处,如图7D所示。其中,电子设备200处于图5所示的位置时,电子设备200播放的第二音频数据,可以让用户感受到音源位于电子设备100处,如图7E中的(a)所示,扬声器C播放第二音频数据的左声道的数据,扬声器D播放第二音频数据的右声道的数据,扬声器C和扬声器D播放的声波信号可以使得用户认为音源处于图7E中的(a)所示的虚拟音源处。
电子设备200处于图7A或图7D所示的位置时,电子设备200播放的矫正后的第二音频数据,也可以让用户感受到音源位于电子设备100处,如图7E中的(b)所示,扬声器C播放第二音频数据的左声道的数据,扬声器D播放第二音频数据的右声道的数据,扬声器C和扬声器D播放的声波信号可以使得用户认为音源处于图7E中的(b)所示的虚拟音源处。可以理解的是,图7E中的(a)所示的虚拟音源的位置和图7E中的(b)所示的虚拟音源的位置相同。这样,可以保证用户无论如何移动,都能听到来自电子设备100所在位置的声音。
也就是说,电子设备100可以调整电子设备200的扬声器的声音信号,使得电子设备200播放的声音信号模拟得到的虚拟音源位于电子设备100处。这样,无论用户怎么移动,都可以通过电子设备200发出的声音信号,模拟虚拟音源位于电子设备100处,让用户认为声音由电子设备100发出。在一些示例中,电子设备100可以基于电子设备200的位置调整电子设备200的第二音频数据的左声道的数据与右声道的数据的相位差、振幅中的一项或多项,以调整电子设备200播放的声音信号模拟得到的虚拟音源的位置。
其中,通过图7E示例性说明电子设备200播放的声音信号的波束方向与北向的相对关系。如图7E中的(a)所示,以北向为基准方向为例,示例性说明移动前后电子设备200的声音的波束变化。该电子设备100位于该电子设备200的时钟方位1(例如,12点钟方向),该电子设备200播放该第二音频数据时该扬声器C发送的声波波束的波束方向与北向的夹角为角度1,且该扬声器D发送的声波波束的波束方向与北向的夹角为角度2;
如图7E中的(b)所示,该电子设备100位于该电子设备200的时钟方位2(例如,10点钟方向),该电子设备200播放该第二音频数据时扬声器C发送的声波波束的波束方向与北向的夹角为角度3,且该扬声器D发送的声波波束的波束方向与北向的夹角为角度4,该时钟方位1与该时钟方位2不同,并且该角度1与该角度3不同,并且该角度2与该角度4不同。需要说明的是,北向仅为说明声音信号波束方向的对照方向,不限于北向,也可以使用其他任意方向作为对照方向,本申请实施例对此不作限定。
同理,电子设备100可以基于电子设备100与电子设备200的相对位置的变化,调整电子设备100的声音信号的波束方向,将该波束方向设置为朝向电子设备200的方向。
可以理解的是,由于电子设备200佩戴于用户某个身体部位,因此,不需要改变电子设备200的声音信号的波束方向,电子设备200的扬声器发出的声音信号的波束方向始终朝向用户。
需要说明的是,由于电子设备200仅在支持播放立体声时,才可以改变第二音频数据中左右声道的数据的相位差,改变虚拟音源的位置。因此,电子设备100仅在电子设备200播放包括左声道、右声道的音频数据,或,电子设备200播放包括左环绕声道、右环绕声道的音频数据时,才可以基于角度系数处理第二音频数据,使得用户感觉到音源位于电子设备100所在的位置。也就是说,电子设备100的音效模式为全能增强模式、响度增强模式、环绕增强模式,或包括该全能增强模式、响度增强模式、环绕增强模式中的一个或多个模式组成的智能增强模式时,电子设备100可以基于角度系数处理第一音频数据,或者,可以基于角度系数处理第二音频数据。电子设备100的音效模式为对白增强模式、节奏增强模式,或包括该对白增强模式、节奏增强模式中的一个或多个模式组成的智能增强模式时,电子设备100仅可以基于角度系数处理第一音频数据,不可以基于角度系数处理第二音频数据。
在一些实施例中,电子设备200可以每隔预设时间(例如,1s)给电子设备100发送电子设备200的位姿数据,或者,电子设备100可以每隔预设时间从电子设备200处获取电子设备200的位姿数据。这样,电子设备100可以每隔预设时间基于电子设备200位姿数据,处理第一音频数据或第二音频数据。在另一些实施例中,电子设备200可以在检测到电子设备200的位置移动时,向电子设备100发送电子设备200的位姿数据。这样,可以节约电子设备100基于位姿数据调整第一音频数据或第二音频数据的功耗。
在另一些应用场景中,以电子设备200为智能手表为例,介绍本申请实施例提供的音频播放方法。
如图8所示,电子设备100可以为平板电脑,电子设备200为智能手表。需要说明的是,不限于平板电脑,电子设备100还可以为其他包括显示屏和多个扬声器的电子设备,例如,手机。
其中,电子设备100包括一个或多个扬声器,该一个或多个扬声器包括扬声器E和扬声器F。其中,扬声器E位于显示屏的左侧,扬声器F位于显示屏的右侧。电子设备200包括至少一个扬声器,该至少一个扬声器包括扬声器G。其中,电子设备200被佩戴时,扬声器G位于用户的手腕附近。电子设备100支持播放包括左右声道的音频数据,电子设备200仅支持播放单声道的音频数据。需要说明的是,由于图8中描绘视角的限制,扬声器F被遮挡,图8中无法示出扬声器F。
其中,电子设备100和电子设备200在第一模式下播放音频数据的步骤如下所示:
电子设备100和电子设备200开启蓝牙功能。电子设备100可以在蓝牙界面中显示一个或多个蓝牙设备选项,蓝牙设备选项可以表示附近搜索到的蓝牙设备,可以用于触发电子设备100和该蓝牙设备选项对应的电子设备建立通信连接。该一个或多个蓝牙设备选项包括用于触发和电子设备200建立蓝牙连接的蓝牙设备选项。
电子设备100可以响应于针对用于触发和电子设备200建立蓝牙连接的蓝牙设备选项的输入,和电子 设备200建立蓝牙连接。蓝牙连接仅为示例,不限于蓝牙连接,电子设备100和电子设备200也可以建立其他无线或有线通信连接,本申请实施例对此不作限定。
电子设备100可以在和电子设备200建立蓝牙连接后,显示如图9A所示的蓝牙界面901。
示例性的,如图9A所示,电子设备100显示的蓝牙界面901可以包括一个或多个蓝牙设备选项。该一个或多个蓝牙设备选项包括蓝牙设备选项903,该蓝牙设备选项903还可以显示有电子设备200的设备名称,在此,电子设备200的设备名称为“Iwatch”。可选的,蓝牙设备选项903还可以显示连接状态信息,该连接状态信息可以用于表示电子设备100和电子设备200的连接状态(例如,未连接,已连接等),在此,连接状态为已连接。
蓝牙界面901还可以包括选项卡904,选项卡904包括独立音频选项905和协同音频选项906。其中,独立音频选项905可以用于触发电子设备100自行播放声音。协同音频选项906可以用于触发电子设备100通过电子设备100和电子设备200同时播放声音。可选的,选项卡904还可以显示电子设备200的设备图像。
电子设备100可以接收针对协同音频选项906的输入(例如单击),响应于该输入,确定电子设备100和电子设备200支持的一个或多个音效模式。其中,电子设备100确定一个或多个音效模式的具体描述可以参见上述实施例,在此不再赘述。
电子设备100可以在确定出电子设备100和电子设备200支持的一个或多个音效模式后,显示如图9B所示的音效选择界面911。
如图9B所示,电子设备100显示音效选择界面911,音效选择界面911包括音效模式列表912,该音效模式列表912包括一个或多个音效模式选项,该一个或多个音效模式选项和电子设备100确定出的一个或多个音效模式相对应。音效模式选项可以包括音效模式的标识,音效模式选项可以用于触发电子设备100基于音效模式选项指示的音效模式处理音源数据。其中,该一个或多个音效模式选项可以包括但不限于音效模式选项912A、音效模式选项912B以及音效模式选项912C。其中,音效模式选项912A对应智能增强模式,音效模式选项912B对应节奏增强模式,音效模式选项912C对应对白增强模式,其中,各个音效模式的描述可以参见图3所示实施例,在此不再赘述。在此,音效模式选项912A处于选中状态。可选的,音效选择界面911还可以包括返回控件,返回控件可以用于触发电子设备100返回上一级界面。
在一些实施例中,电子设备100还可以在音效选择界面911中显示百分比栏。
接下来以音效模式选项912A被选中为例,介绍后续步骤。示例性的,当电子设备100的音效模式选项912A被选中后,电子设备100可以接收到返回桌面的输入,显示包括一个或多个应用图标的桌面,该一个或多个应用图标可以包括音乐应用图标。电子设备100可以接收到针对音乐应用图标的输入,显示音乐播放界面921。其中,音乐播放界面921可以包括但不限于歌曲名称,播放控件等。其中,播放控件可以用于触发电子设备100播放歌曲名称指示的歌曲。
电子设备100接收到针对播放控件的输入后,响应于该输入,基于智能增强模式处理音源数据(即,歌曲的音频数据),得到第一音频数据和第二音频数据,其中,电子设备100基于音效模式处理音源数据的描述可以参见图4所示实施例,在此不再赘述。
电子设备100得到第一音频数据和第二音频数据后,可以将第二音频数据发送至电子设备200。电子设备100和电子设备200可以同时开始播放各自的音频数据,如图9C所示。电子设备100通过扬声器E和扬声器F播放第一音频数据,电子设备200通过扬声器G播放第二音频数据。可以理解的是,电子设备100正在播放歌曲,电子设备100取消显示播放控件,并显示暂停控件923,暂停控件923可以用于触发电子设备100和电子设备200停止播放音频数据。
在一些示例中,若智能增强模式为节奏增强模式,或者,电子设备100的音效模式选项912B被选中,电子设备200可以在播放第二音频数据时,基于第二音频数据得到马达脉冲信号,并将马达脉冲信号输入到电子设备200的马达中,电子设备200可以随着第二音频数据中音频信号的频率变化,控制马达以对应的振动频率振动。这样,电子设备200可以给用户增加触觉上的感受,以听觉和触觉同时播放音源数据,提升用户体验。
在一些实施例中,若电子设备200不包括扬声器,仅包括马达,电子设备100也可以提供节奏增强模式。电子设备100可以提取音源数据中的低频声道的数据,将包括低频声道的数据的第二音频数据发送给电子设备200,电子设备200可以基于第二音频数据控制马达振动。这样,即使电子设备200不包括扬声器,电子设备200也可以通过振动带给用户更好的体验效果。
在一些示例中,电子设备200可以将马达振动的震感强度设置为5个级别,从级别1至级别5,震感逐渐加强。电子设备200可以按照第二音频数据的振幅的大小,将第二音频数据划分出5个振幅范围,从振幅范围1至振幅范围5,振幅值逐渐增大。其中,第二音频数据中落在振幅范围1的音频数据对应的震感强度为级别1,第二音频数据中落在振幅范围2的音频数据对应的震感强度为级别2,以此类推。这样,可以实现马达振动的强度随着音频数据的振幅变化而变化。
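作为上述振幅范围到震感级别映射的参考,下面给出一段示意性的Python代码(并非本申请原文的一部分,等宽划分的范围边界为本文假设):

```python
# 示意:将第二音频数据的振幅等宽划分为5个范围,映射为5个震感级别

def vibration_level(amplitude, max_amp=1.0):
    ratio = min(abs(amplitude), max_amp) / max_amp
    return min(5, int(ratio * 5) + 1)   # 落在第k个振幅范围的数据对应级别k

print([vibration_level(a) for a in (0.05, 0.3, 0.95)])  # [1, 2, 5]
```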
这样,电子设备100可以给用户提供多种音效模式,实现不同的音频播放效果,提升用户体验。
在一些实施例中,电子设备100可以在与2个及以上支持播放音频数据的电子设备建立通信连接后,与该2个及以上的电子设备协同播放音源数据。
在一些实施例中,该2个及以上的支持播放音频数据的电子设备可以为可穿戴设备。这样,可以便于用户在多种场景,通过随身携带的电子设备实现多设备协同发声,增强环绕感。
其中,该电子设备100与2个及以上的电子设备协同播放音源数据的描述可以参见上述电子设备100与电子设备200协同播放音源数据的描述。其中,电子设备100可以显示一个或多个音效模式,该一个或多个音效模式可以指示电子设备100处理音源数据,得到电子设备100以及该2个及以上的电子设备播放的音频数据。例如,该一个或多个音效模式可以包括电子设备100播放上述第一音频数据,该2个及以上的电子设备共同播放上述第二音频数据(或者,该2个及以上的电子设备中的部分电子设备播放上述第一音频数据,另一部分电子设备播放上述第二音频数据)的音效模式。或者,该一个或多个音效模式可以包括各个电子设备播放基于音源数据得到的不同的音频数据的音效模式,即,电子设备100、该2个及以上的电子设备播放的音频数据都不同。
例如,若电子设备100为手机,多个支持播放音频数据的电子设备可以包括但不限于智能手表、智能眼镜、耳机等。手机可以在与智能手表以及智能眼镜都建立通信连接后,显示协同播放控件,协同播放控件可以用于触发手机、智能手表以及智能眼镜协同播放音源数据。电子设备100可以显示一个或多个音效模式选项,电子设备100提供音效模式选项1。当音效模式选项1被选中时,手机可以用于播放基于音源数据得到的左声道的数据以及右声道的数据,智能手表可以用于播放基于音源数据得到的低频声道的数据,智能眼镜可以用于播放基于音源数据得到的左环绕声道的数据、右环绕声道的数据。这样,在多种场景下,通过便携式设备实现多设备多声道协同播放,给用户带来环绕声场的体验效果。可以理解的是,音效模式选项1对应的各个电子设备播放的音频数据的描述仅为示例,不应对此构成具体限定。
可选的,电子设备100还可以接收用户输入,选择电子设备100与该多个支持播放音频数据的电子设备中的一个或多个电子设备协同播放音源数据。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (25)

  1. 一种音频播放方法,其特征在于,所述方法应用于音频播放系统,所述音频播放系统包括第一电子设备和第二电子设备,所述第一电子设备与所述第二电子设备建立通信连接,所述方法包括:
    所述第一电子设备显示协同播放控件,所述协同播放控件用于指示所述第一电子设备与所述第二电子设备共同播放音源数据;
    所述第一电子设备接收针对所述协同播放控件的第一输入;
    所述第一电子设备响应于所述第一输入,显示多个音效模式选项,所述多个音效模式选项包括第一音效模式选项;
    所述第一电子设备响应于针对所述第一音效模式选项的第二输入,标记所述第一音效模式选项;
    所述第一电子设备将第二音频数据发送至所述第二电子设备;
    所述第二电子设备播放所述第二音频数据;
    当所述第二电子设备播放所述第二音频数据时,所述第一电子设备播放第一音频数据,所述第一音频数据和所述第二音频数据均至少包括所述音源数据的至少部分内容。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在所述第一电子设备接收所述第一输入之前,所述第一电子设备接收到播放所述音源数据的第三输入;或者,
    所述第一电子设备在接收到所述第二输入之后,接收到播放所述音源数据的第三输入。
  3. 根据权利要求1或2所述的方法,其特征在于,
    当所述第一音效模式为节奏增强模式、对白增强模式、环绕增强模式、全能增强模式或智能增强模式中的任一种时,所述第一音频数据包括的至少部分声道与所述第二音频数据包括的至少部分声道不相同;或/和,
    当所述第一音效模式为响度增强模式时,所述第一音频数据的至少部分声道与所述第二音频数据的至少部分声道相同。
  4. 根据权利要求3所述的方法,其特征在于,所述第一音效模式选项为所述节奏增强模式选项,所述第一音频数据包括左声道的数据和右声道的数据,所述第二音频数据包括低频声道的数据。
  5. 根据权利要求4所述的方法,其特征在于,所述第二电子设备包括马达;
    所述第二电子设备播放所述第二音频数据时,所述方法还包括:
    所述第二电子设备将所述第二音频数据转换为脉冲信号;
    所述第二电子设备将所述脉冲信号传输给所述第二电子设备的马达;
    所述第二电子设备的所述马达振动。
  6. 根据权利要求3所述的方法,其特征在于,所述第一音效模式选项为所述对白增强模式选项,所述第一音频数据包括左声道的数据和右声道的数据,所述第二音频数据包括中置声道的数据。
  7. 根据权利要求3所述的方法,其特征在于,所述第一音效模式选项为所述响度增强模式选项,所述第一音频数据包括左声道的数据和右声道的数据,所述第二音频数据包括左声道的数据和右声道的数据。
  8. 根据权利要求7所述的方法,其特征在于,当所述第二电子设备包括1个扬声器,所述第二电子设备使用所述1个扬声器播放所述第二音频数据的左声道的数据和右声道的数据;或者,
    当所述第二电子设备包括2个扬声器,所述2个扬声器包括第一扬声器和第二扬声器,所述第二电子设备使用所述第一扬声器播放所述左声道的数据,使用所述第二扬声器播放所述右声道的数据;或者,
    若所述第二电子设备包括3个及以上扬声器,所述3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用所述第一扬声器播放所述左声道的数据,使用所述第二扬声器播放所述右声道的数据,使用所述第三扬声器播放所述第二音频数据的所述左声道的数据和所述右声道的数据。
  9. 根据权利要求3所述的方法,其特征在于,所述第一音效模式选项为所述环绕增强模式选项,所述第一音频数据包括左声道的数据和右声道的数据,所述第二音频数据包括左环绕声道的数据和右环绕声道的数据。
  10. 根据权利要求3所述的方法,其特征在于,所述第一音效模式选项为所述全能增强模式选项,所述第一音频数据包括左声道的数据和右声道的数据,所述第二音频数据包括左环绕声道的数据、右环绕声道的数据和中置声道的数据。
  11. 根据权利要求10所述的方法,其特征在于,当所述第二电子设备包括2个扬声器,所述2个扬声器包括第一扬声器和第二扬声器,所述第二电子设备使用所述第一扬声器播放所述左环绕声道的数据和所述中置声道的数据,使用所述第二扬声器播放所述右环绕声道的数据和所述中置声道的数据;或者,
    当所述第二电子设备包括3个及以上扬声器,所述3个及以上扬声器包括第一扬声器、第二扬声器和第三扬声器,使用所述第一扬声器播放所述左环绕声道的数据,使用所述第二扬声器播放所述右环绕声道的数据,使用所述第三扬声器播放所述中置声道的数据。
  12. 根据权利要求1-3中任一项所述的方法,其特征在于,响应于所述第一输入,所述第一电子设备显示多个音效模式选项,具体包括:
    所述第一电子设备响应于所述第一输入,基于所述第二电子设备的设备类型与存储的所述设备类型与音效模式选项的对应关系,获取所述多个音效模式选项;
    其中,所述第二电子设备的设备类型与所述多个音效模式相对应。
  13. 根据权利要求12所述的方法,其特征在于,所述第二电子设备的设备类型为智能手表,所述多个音效模式选项包括所述智能增强模式选项、所述响度增强模式选项、所述节奏增强模式选项和所述对白增强模式选项中的一种或几种,所述智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合。
  14. 根据权利要求12所述的方法,其特征在于,所述智能增强模式对应的所述第一电子设备播放的音频数据包括左声道的数据和右声道的数据,所述第二电子设备播放的音频数据包括左声道的数据、右声道的数据和低频声道的数据。
  15. 根据权利要求12所述的方法,其特征在于,所述第二电子设备的设备类型为智能眼镜、挂颈式音箱或蓝牙耳机,所述多个音效模式选项包括所述智能增强模式选项、所述响度增强模式选项、所述节奏增强模式选项、所述对白增强模式选项、所述环绕增强模式选项和所述全能增强模式选项中的一种或几种,所述智能增强模式选项对应的智能增强模式为节奏增强模式与响度增强模式的组合,或者,为环绕增强模式与对白增强模式的组合。
  16. 根据权利要求15所述的方法,其特征在于,当所述音源数据属于视频文件,所述智能增强模式为所述环绕增强模式与所述对白增强模式的组合,所述智能增强模式对应的所述第一电子设备播放的音频数据包括左声道的数据和右声道的数据,所述第二电子设备播放的音频数据包括左环绕声道的数据、右环绕声道的数据和中置声道的数据;或者,
    当所述音源数据属于音频文件,所述智能增强模式为所述节奏增强模式与所述响度增强模式的组合,所述智能增强模式对应的所述第一电子设备播放的音频数据包括左声道的数据和右声道的数据,所述第二电子设备播放的音频数据包括左声道的数据、右声道的数据和低频声道的数据。
  17. 根据权利要求3-16中任一项所述的方法,其特征在于,在所述第一电子设备接收所述第二输入之前,所述多个音效模式选项包括被标记的所述智能增强模式选项;所述第一电子设备响应于针对所述第一音效模式选项的第二输入,标记所述第一音效模式选项,具体包括:
    所述第一电子设备响应于针对所述第一音效模式选项的第二输入,取消标记所述智能增强模式选项,并标记所述第一音效模式选项。
  18. 根据权利要求3-17中任一项所述的方法,其特征在于,当所述第二电子设备包括第一扬声器与第二扬声器,且所述第一音效模式选项为所述响度增强模式选项、所述环绕增强模式选项或所述全能增强模式选项;所述方法还包括:
    当所述第二电子设备和/或所述第一电子设备姿态变化,所述第一音频数据不变,并且所述第一电子设备发送给所述第二电子设备的所述第二音频数据随着所述姿态变化而变化。
  19. 根据权利要求1-17中任一项所述的方法,其特征在于,当所述第一电子设备包括第四扬声器和第五扬声器;所述方法还包括:
    当所述第二电子设备和/或所述第一电子设备姿态变化,所述第二音频数据不变,并且所述第一电子设备播放的所述第一音频数据随着所述姿态变化而变化。
  20. 根据权利要求1-19中任一项所述的方法,其特征在于,所述方法还包括:
    当所述第二电子设备和/或所述第一电子设备位置变化,所述第一电子设备的扬声器发出的声波信号的波束方向随着位置变化而变化。
  21. 根据权利要求1-20中任一项所述的方法,其特征在于,所述方法还包括:
    所述第一电子设备与所述第二电子设备的距离为第一距离,所述第一电子设备播放所述第一音频数据的音量为第一音量;所述第一电子设备与所述第二电子设备的距离为第二距离,所述第一电子设备播放所述第一音频数据的音量为第二音量,所述第一距离小于所述第二距离,且所述第一音量小于所述第二音量。
  22. 根据权利要求1-21中任一项所述的方法,其特征在于,所述第一电子设备响应于所述第一输入,显示多个音效模式时,所述方法还包括:
    所述第一电子设备显示百分比栏;
    若所述第一音效模式选项为响度增强模式选项、节奏增强模式选项或对白增强模式选项,所述百分比栏的值为第一值时,所述第二电子设备播放所述第二音频数据的音量为第三音量,所述百分比栏的值为第二值时,所述第二电子设备播放所述第二音频数据的音量为第四音量,所述第一值小于所述第二值,且所述第三音量低于所述第四音量;
    若所述第一音效模式选项为全能增强模式选项或环绕增强模式选项,所述百分比栏的值为第三值,所述第二电子设备播放所述第二音频数据时模拟音源与用户的距离为第三距离,所述百分比栏的值为第四值,所述第二电子设备播放所述第二音频数据时模拟音源与用户的距离为第四距离,所述第三值小于所述第四值且所述第三距离小于所述第四距离。
  23. 一种音频播放系统,其特征在于,包括:第一电子设备和第二电子设备;其中,
    第一电子设备,被配置为用于实现上述权利要求1-22中所述第一电子设备执行的方法步骤;
    第二电子设备,被配置为用于实现上述权利要求1-22中所述第二电子设备执行的方法步骤。
  24. 一种音频播放方法,应用于第一电子设备,其特征在于,所述方法包括:
    所述第一电子设备显示协同播放控件,所述协同播放控件用于指示所述第一电子设备与所述第二电子设备共同播放音源数据;
    所述第一电子设备接收针对所述协同播放控件的第一输入;
    所述第一电子设备响应于所述第一输入,显示多个音效模式选项,所述多个音效模式选项包括第一音效模式选项;
    所述第一电子设备响应于针对所述第一音效模式选项的第二输入,标记所述第一音效模式选项;
    所述第一电子设备将第二音频数据发送至所述第二电子设备;
    当所述第二电子设备播放所述第二音频数据时,所述第一电子设备播放第一音频数据,所述第一音频数据和所述第二音频数据均至少包括所述音源数据的至少部分内容。
  25. 一种电子设备,为第一电子设备,其特征在于,包括:一个或多个处理器、多个扬声器和一个或多个存储器;其中,一个或多个存储器、所述多个扬声器与一个或多个处理器耦合,所述一个或多个存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述一个或多个处理器在执行所述计算机指令时,使得所述第一电子设备执行如权利要求24所述的方法。
PCT/CN2023/114402 2022-08-30 2023-08-23 一种音频播放方法、系统及相关装置 WO2024046182A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211049076.4A CN117676448A (zh) 2022-08-30 2022-08-30 一种音频播放方法、系统及相关装置
CN202211049076.4 2022-08-30

Publications (1)

Publication Number Publication Date
WO2024046182A1 true WO2024046182A1 (zh) 2024-03-07

Family

ID=90073812

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114402 WO2024046182A1 (zh) 2022-08-30 2023-08-23 一种音频播放方法、系统及相关装置

Country Status (2)

Country Link
CN (1) CN117676448A (zh)
WO (1) WO2024046182A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105163241A (zh) * 2015-09-14 2015-12-16 小米科技有限责任公司 音频播放方法及装置、电子设备
CN106502620A (zh) * 2016-10-26 2017-03-15 宇龙计算机通信科技(深圳)有限公司 多媒体文件的多终端协同播放方法和终端
CN108170400A (zh) * 2017-12-27 2018-06-15 上海传英信息技术有限公司 音乐播放控制方法与终端
CN113890932A (zh) * 2020-07-02 2022-01-04 华为技术有限公司 一种音频控制方法、系统及电子设备
WO2022110939A1 (zh) * 2020-11-27 2022-06-02 荣耀终端有限公司 一种设备推荐方法及电子设备

Also Published As

Publication number Publication date
CN117676448A (zh) 2024-03-08

Similar Documents

Publication Publication Date Title
CN113873378B (zh) 一种耳机噪声处理方法、装置及耳机
WO2021147415A1 (zh) 实现立体声输出的方法及终端
WO2021083128A1 (zh) 一种声音处理方法及其装置
WO2021000817A1 (zh) 环境音处理方法及相关装置
CN115729511A (zh) 一种播放音频的方法及电子设备
US11798234B2 (en) Interaction method in virtual reality scenario and apparatus
CN111103975B (zh) 显示方法、电子设备及系统
CN114422935B (zh) 音频处理方法、终端及计算机可读存储介质
US11870941B2 (en) Audio processing method and electronic device
CN111065020B (zh) 音频数据处理的方法和装置
EP4203447A1 (en) Sound processing method and apparatus thereof
WO2022089563A1 (zh) 一种声音增强方法、耳机控制方法、装置及耳机
WO2024046182A1 (zh) 一种音频播放方法、系统及相关装置
CN115641867A (zh) 语音处理方法和终端设备
CN114691064A (zh) 一种双路投屏的方法及电子设备
WO2024027259A1 (zh) 信号处理方法及装置、设备控制方法及装置
WO2023185698A1 (zh) 一种佩戴检测方法及相关装置
US20240045651A1 (en) Audio Output Method, Media File Recording Method, and Electronic Device
WO2023197997A1 (zh) 穿戴设备、拾音方法及装置
CN116320880B (zh) 音频处理方法和装置
WO2024066933A1 (zh) 扬声器控制方法及设备
WO2022242301A1 (zh) 振动描述文件的生成方法、装置、设备及可读存储介质
WO2022242299A1 (zh) 驱动波形的调整方法及装置、电子设备、可读存储介质
CN116048241A (zh) 一种提示方法、扩展现实设备及介质
CN115549715A (zh) 一种通信方法、相关电子设备及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23859215

Country of ref document: EP

Kind code of ref document: A1