WO2023185589A1 - Volume control method and electronic device

Volume control method and electronic device

Info

Publication number
WO2023185589A1
Authority
WO
WIPO (PCT)
Prior art keywords
volume
audio data
audio
electronic device
output
Application number
PCT/CN2023/083111
Other languages
French (fr)
Chinese (zh)
Inventor
陈刚 (Chen Gang)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023185589A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72442User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Definitions

  • Embodiments of the present application relate to the field of terminal equipment, and in particular, to a volume control method and electronic equipment.
  • Embodiments of the present application provide a volume control method and electronic device.
  • the electronic device can automatically adjust the volume of the audio data to keep the volume of the audio data within an appropriate volume range, effectively improving the user's listening experience.
  • embodiments of the present application provide a volume control method.
  • the method includes: the electronic device obtains first audio data.
  • When the electronic device detects that the first volume of the first audio data does not meet the first output volume range, the electronic device obtains the first volume parameter corresponding to the first audio data based on the first volume and the first output volume range.
  • the first volume is the average volume of audio data within a preset duration of the first audio data, and the first output volume range is acquired in advance.
  • the electronic device corrects the first audio data based on the first volume parameter to obtain the second audio data.
  • the average volume of the second audio data is the second volume, and the second volume is within the first output volume range. Then, the electronic device plays the second audio data.
  • In this way, the electronic device can correct the audio data based on the volume parameter so that the volume of the audio data is adjusted into the output volume range, thereby avoiding the problem that different audio data is played by the electronic device at a volume that is too high or too low, and effectively improving the user's listening experience.
  • the audio data may be music or audio corresponding to a video.
  • the volume may be greater than the maximum value of the output volume range, or the volume may be less than the minimum value of the output volume range.
  • the volume of the second audio data is different from that of the first audio data.
  • the preset duration can be set according to actual needs, and is not limited in this application.
  • the electronic device plays the second audio data, and the playing volume is within the output volume range.
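  • As a purely illustrative aid (the publication does not give formulas), the following Kotlin sketch shows one way the first volume could be measured as an RMS average over the preset duration and tested against a previously acquired output volume range; the type and function names (OutputVolumeRange, averageVolumeDb, needsCorrection) are assumptions, not part of the publication.

```kotlin
import kotlin.math.log10
import kotlin.math.sqrt

// Output volume range acquired in advance (values in dBFS); illustrative type name.
data class OutputVolumeRange(val minDb: Double, val maxDb: Double)

// Average (RMS) volume, in dBFS, of 16-bit PCM samples covering the preset duration.
fun averageVolumeDb(samples: ShortArray): Double {
    var sumSquares = 0.0
    for (s in samples) {
        val v = s.toDouble()
        sumSquares += v * v
    }
    val rms = sqrt(sumSquares / samples.size) / Short.MAX_VALUE
    return 20.0 * log10(rms.coerceAtLeast(1e-9)) // guard against log(0) for silence
}

// "Does not meet the output volume range": the average volume lies outside [min, max].
fun needsCorrection(firstVolumeDb: Double, range: OutputVolumeRange): Boolean =
    firstVolumeDb > range.maxDb || firstVolumeDb < range.minDb
```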
  • the method further includes: receiving an adjustment operation when the electronic device plays the second audio data, and the adjustment operation is used to adjust the volume of the second audio data.
  • During the process from the beginning to the end of the adjustment operation, the electronic device collects the volume of the second audio data according to the first cycle duration.
  • the electronic device obtains the second output volume range based on the collected volume of the second audio data.
  • the electronic device can intensively collect the volume of the audio data when the user adjusts the volume, so as to update the output volume range based on the collected volume. That is to say, during the process of playing audio data, the electronic device can detect the user's behavior and update the volume output range, so that the output volume range always meets the user's listening habits.
  • The electronic device obtains the second output volume range based on the collected volume of the second audio data, including: obtaining the average volume of the second audio data collected from the beginning to the end of the adjustment operation. In the case where the adjustment operation is used to indicate increasing the volume of the second audio data: if the average volume of the collected second audio data is greater than the minimum value of the first output volume range, the minimum value of the second output volume range is the average volume of the collected second audio data, and the maximum value of the second output volume range is the maximum value of the first output volume range; if the average volume of the collected second audio data is less than the minimum value of the first output volume range, the second output volume range is equal to the first output volume range. Or, in the case where the adjustment operation is used to indicate turning down the volume of the second audio data: if the average volume of the collected second audio data is less than the maximum value of the first output volume range, the maximum value of the second output volume range is the average volume of the collected second audio data, and the minimum value of the second output volume range is the minimum value of the first output volume range; if the average volume of the collected second audio data is greater than the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
  • In this way, the electronic device can dynamically update the output volume range based on different adjustment scenarios, so that the output volume range always meets the user's needs, that is, the user's listening habits.
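  • A minimal sketch of the range-update rule just described, reusing the OutputVolumeRange type from the earlier sketch; the enum and function names are assumptions.

```kotlin
// Direction of the user's adjustment operation (illustrative).
enum class VolumeDirection { UP, DOWN }

// adjustedAvgDb is the average of the volumes sampled at the first cycle duration
// between the start and the end of the adjustment operation.
fun updateRangeAfterAdjustment(
    current: OutputVolumeRange,   // first output volume range
    adjustedAvgDb: Double,        // average volume of the collected second audio data
    direction: VolumeDirection
): OutputVolumeRange = when (direction) {
    VolumeDirection.UP ->
        if (adjustedAvgDb > current.minDb)
            current.copy(minDb = adjustedAvgDb)  // raise the minimum, keep the maximum
        else current                             // below the old minimum: keep the range
    VolumeDirection.DOWN ->
        if (adjustedAvgDb < current.maxDb)
            current.copy(maxDb = adjustedAvgDb)  // lower the maximum, keep the minimum
        else current                             // above the old maximum: keep the range
}
```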
  • The method further includes: when the electronic device plays the second audio data, the electronic device collects the volume of the second audio data according to the second cycle duration; the second cycle duration is longer than the first cycle duration.
  • the electronic device obtains the second output volume range based on the collected volume of the second audio data. In this way, the electronic device can collect the volume through sparse collection, dynamically update the output volume range based on the volume, and reduce the power consumption caused by volume collection.
  • If the volume of the collected second audio data is greater than the maximum value of the first output volume range, the minimum value of the second output volume range is the minimum value of the first output volume range, and the maximum value of the second output volume range is the volume of the collected second audio data; or, if the volume of the collected second audio data is less than the minimum value of the first output volume range, the maximum value of the second output volume range is the maximum value of the first output volume range, and the minimum value of the second output volume range is the volume of the collected second audio data; or, if the volume of the collected second audio data is greater than or equal to the minimum value of the first output volume range and less than or equal to the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
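  • A hedged sketch of this sparse-sampling update, again reusing the OutputVolumeRange type: a single volume sample taken at the (longer) second cycle duration either stretches the range or leaves it unchanged.

```kotlin
fun updateRangeFromSparseSample(current: OutputVolumeRange, sampleDb: Double): OutputVolumeRange =
    when {
        sampleDb > current.maxDb -> current.copy(maxDb = sampleDb) // extend the range upward
        sampleDb < current.minDb -> current.copy(minDb = sampleDb) // extend the range downward
        else -> current                                            // already inside the range
    }
```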
  • The method further includes: the electronic device obtains third audio data, wherein the average volume of the audio data within the preset duration of the third audio data is the third volume; the electronic device detects that the third volume does not satisfy the second output volume range, and obtains the second volume parameter corresponding to the third audio data based on the third volume and the second output volume range; the electronic device corrects the third audio data based on the second volume parameter to obtain fourth audio data; the average volume of the fourth audio data is the fourth volume, and the fourth volume is within the second output volume range; the electronic device plays the fourth audio data.
  • In this way, the electronic device can obtain the volume parameter corresponding to the audio based on the updated output volume range and correct the audio based on the volume parameter, so that the volume of the audio is always kept within the volume range that the user is accustomed to, satisfying the user's listening experience.
  • The electronic device detects that the first volume does not satisfy the first output volume range, and obtains the first volume parameter corresponding to the first audio data based on the first volume and the first output volume range, including: if the first volume is greater than the maximum value of the first output volume range, the electronic device obtains the first volume parameter based on the first volume and the maximum value of the first output volume range; or, if the first volume is less than the minimum value of the first output volume range, the electronic device obtains the first volume parameter based on the first volume and the minimum value of the first output volume range.
  • the electronic device can obtain the volume parameters of the audio based on different correspondences between the volume and the output volume range. For example, when the input volume is large, the output volume can be reduced through the volume parameter so that the output volume is within the output volume range. If the input volume is small, use the volume parameter to increase the output volume so that the output volume is within the output volume range.
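  • The following sketch illustrates one plausible way to turn the average volume and the violated bound into a correction gain (the "volume parameter"); the dB-to-linear conversion is an assumption and reuses the OutputVolumeRange type from the earlier sketch.

```kotlin
import kotlin.math.pow

fun volumeParameter(firstVolumeDb: Double, range: OutputVolumeRange): Double {
    val targetDb = when {
        firstVolumeDb > range.maxDb -> range.maxDb // too loud: aim at the maximum value
        firstVolumeDb < range.minDb -> range.minDb // too quiet: aim at the minimum value
        else -> firstVolumeDb                      // already inside the range: unity gain
    }
    return 10.0.pow((targetDb - firstVolumeDb) / 20.0) // dB difference -> linear gain
}
```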
  • The electronic device corrects the first audio data based on the first volume parameter to obtain the second audio data, including: the electronic device obtains the second audio data based on the first audio data, the first volume parameter, and the output volume parameter; the output volume parameter includes at least one of the following: a track volume parameter, a stream volume parameter, and a master volume; the track volume parameter is used to indicate the set volume of the application that plays the second audio data; the stream volume parameter is used to indicate the set volume of the audio stream corresponding to the first audio data; the master volume is used to indicate the set volume of the electronic device.
  • In this way, the electronic device can obtain the output volume corresponding to the input volume of the audio data based on at least one set volume of the electronic device itself (i.e., the output volume parameter), and correct the output volume of the audio based on the volume parameter, so that the output volume of the audio remains within the output volume range.
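  • A hedged sketch of how the correction gain might be combined with the track volume, stream volume, and master volume and applied to 16-bit PCM samples; the multiplicative combination is an assumption for illustration only.

```kotlin
fun correctAudio(
    dataIn: ShortArray,      // input audio data (data_in)
    volumeParameter: Double, // first volume parameter (correction gain)
    trackVolume: Double,     // set volume of the playing application, 0.0..1.0
    streamVolume: Double,    // set volume of the corresponding audio stream, 0.0..1.0
    masterVolume: Double     // set volume of the electronic device, 0.0..1.0
): ShortArray {
    val gain = volumeParameter * trackVolume * streamVolume * masterVolume
    return ShortArray(dataIn.size) { i ->
        (dataIn[i] * gain).toInt()
            .coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt()) // clip to 16-bit range
            .toShort()
    }
}
```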
  • The method further includes: the electronic device obtains fifth audio data, wherein the average volume of the audio data within the preset duration of the fifth audio data is the fifth volume; the electronic device detects that the fifth volume does not satisfy the first output volume range, and obtains the third volume parameter corresponding to the fifth audio data based on the fifth volume and the first output volume range; the electronic device corrects the fifth audio data based on the third volume parameter to obtain sixth audio data, wherein the average volume of the sixth audio data is the sixth volume, and the sixth volume is within the first output volume range; the electronic device sends the sixth audio data to another electronic device; the electronic device performs data interaction with the other electronic device through a wireless connection; the electronic device detects that the connection with the other electronic device is disconnected, and the electronic device obtains the audio data to be played in the fifth audio data, wherein the average volume of the audio data within the preset duration of the audio data to be played is the seventh volume; the electronic device detects that the seventh volume does not meet the first output volume range, and obtains the fourth volume parameter corresponding to the audio data to be played based on the seventh volume and the first output volume range.
  • In this way, each electronic device can obtain the volume parameter corresponding to the audio data based on its own output volume range.
  • When the electronic device stops cooperating with other electronic devices, the electronic device can still obtain the volume parameter of the audio data based on its own output volume range and correct the audio data through the volume parameter, so that the volume of the audio data remains within the output volume range of the electronic device. This prevents the problem that, in a collaborative scenario in which the device volume has been adjusted, the audio volume played by a single device is too loud or too quiet after switching from multiple devices to the single device.
  • The method further includes: the electronic device obtains eighth audio data, wherein the average volume of the audio data within the preset duration of the eighth audio data is the ninth volume; the eighth audio data is different from the first audio data; the ninth volume is different from the first volume; the electronic device detects that the ninth volume does not meet the first output volume range, and obtains the fifth volume parameter corresponding to the eighth audio data based on the ninth volume and the first output volume range; the fifth volume parameter is different from the first volume parameter; the electronic device corrects the eighth audio data based on the fifth volume parameter to obtain ninth audio data, wherein the average volume of the ninth audio data is the tenth volume, and the tenth volume is within the first output volume range; the electronic device plays the ninth audio data.
  • The electronic device can obtain corresponding volume parameters according to different sound sources, so that when different audio data is played on the electronic device, the electronic device can automatically adjust the volume of the audio data through the volume parameters; users can thus keep the playback volume of audio data within their accustomed volume range without manual adjustment, thereby effectively improving the user experience.
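  • A small illustrative sketch of keeping one volume parameter per sound source, reusing the OutputVolumeRange type and the volumeParameter sketch above; the source identifier is a hypothetical key, not from the publication.

```kotlin
class VolumeParameterCache(private val range: OutputVolumeRange) {
    private val gains = mutableMapOf<String, Double>() // key: illustrative source identifier

    // Different audio data (different songs or applications) each get their own gain.
    fun gainFor(sourceId: String, averageVolumeDb: Double): Double =
        gains.getOrPut(sourceId) { volumeParameter(averageVolumeDb, range) }
}
```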
  • The electronic device obtains the first audio data, including: the electronic device obtains the first audio data from the target application; or, the electronic device receives the first audio data sent by the second electronic device.
  • the electronic device can automatically adjust the volume of the audio data of the application in the device.
  • the electronic device can also automatically adjust the audio data sent by other electronic devices so that the audio played by the electronic device remains within the output volume range of the electronic device.
  • The electronic device plays the second audio data, including: the electronic device plays the second audio data through the speaker; or, the electronic device plays the second audio data through the earphone connected to the electronic device.
  • the embodiments of the present application can be applied to local playback scenarios and headphone playback scenarios.
  • Embodiments of the present application provide an electronic device.
  • The electronic device includes: one or more processors, a memory, and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when the computer program is executed by the one or more processors, the electronic device is caused to execute the following steps: obtain the first audio data; detect that the first volume of the first audio data does not meet the first output volume range, and obtain the first volume parameter corresponding to the first audio data based on the first volume and the first output volume range.
  • The first volume is the average volume of the audio data within the preset duration of the first audio data, and the first output volume range is obtained in advance; the first audio data is corrected based on the first volume parameter to obtain the second audio data, wherein the average volume of the second audio data is the second volume, and the second volume is within the first output volume range; and the second audio data is played.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: when the electronic device plays the second audio data, receive an adjustment operation, where the adjustment operation is used to adjust the volume of the second audio data; during the process from the beginning to the end of the adjustment operation, collect the volume of the second audio data according to the first cycle duration; and obtain the second output volume range based on the collected volume of the second audio data.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtain the average volume of the second audio data collected from the beginning to the end of the adjustment operation. In the case where the adjustment operation is used to indicate increasing the volume of the second audio data: if the average volume of the collected second audio data is greater than the minimum value of the first output volume range, the minimum value of the second output volume range is the average volume of the collected second audio data, and the maximum value of the second output volume range is the maximum value of the first output volume range; if the average volume of the collected second audio data is less than the minimum value of the first output volume range, the second output volume range is equal to the first output volume range. Or, in the case where the adjustment operation is used to indicate turning down the volume of the second audio data: if the average volume of the collected second audio data is less than the maximum value of the first output volume range, the maximum value of the second output volume range is the average volume of the collected second audio data, and the minimum value of the second output volume range is the minimum value of the first output volume range; if the average volume of the collected second audio data is greater than the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: when the electronic device plays the second audio data, collect the volume of the second audio data according to the second cycle duration, where the second cycle duration is greater than the first cycle duration; and obtain the second output volume range based on the collected volume of the second audio data.
  • If the volume of the collected second audio data is greater than the maximum value of the first output volume range, the minimum value of the second output volume range is the minimum value of the first output volume range, and the maximum value of the second output volume range is the volume of the collected second audio data; or, if the volume of the collected second audio data is less than the minimum value of the first output volume range, the maximum value of the second output volume range is the maximum value of the first output volume range, and the minimum value of the second output volume range is the volume of the collected second audio data; or, if the volume of the collected second audio data is greater than or equal to the minimum value of the first output volume range and less than or equal to the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtain third audio data, wherein the average volume of the audio data within the preset duration of the third audio data is the third volume; detect that the third volume does not meet the second output volume range, and obtain the second volume parameter corresponding to the third audio data based on the third volume and the second output volume range; correct the third audio data based on the second volume parameter to obtain fourth audio data, wherein the average volume of the fourth audio data is the fourth volume, and the fourth volume is within the second output volume range; and play the fourth audio data.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: if the first volume is greater than the maximum value of the first output volume range, obtain the first volume parameter based on the first volume and the maximum value of the first output volume range; or, if the first volume is less than the minimum value of the first output volume range, obtain the first volume parameter based on the first volume and the minimum value of the first output volume range.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtain the second audio data based on the first audio data, the first volume parameter, and the output volume parameter; the output volume parameter includes at least one of the following: a track volume parameter, a stream volume parameter, and a master volume; the track volume parameter is used to indicate the set volume of the application that plays the second audio data; the stream volume parameter is used to indicate the set volume of the audio stream corresponding to the first audio data; the master volume is used to indicate the set volume of the electronic device.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtain fifth audio data, wherein the average volume of the audio data within the preset duration of the fifth audio data is the fifth volume; detect that the fifth volume does not meet the first output volume range, and obtain the third volume parameter corresponding to the fifth audio data based on the fifth volume and the first output volume range; correct the fifth audio data based on the third volume parameter to obtain sixth audio data, wherein the average volume of the sixth audio data is the sixth volume, and the sixth volume is within the first output volume range; send the sixth audio data to another electronic device, where the electronic device performs data interaction with the other electronic device through a wireless connection; detect that the connection with the other electronic device is disconnected, and obtain the audio data to be played in the fifth audio data, wherein the average volume of the audio data within the preset duration of the audio data to be played is the seventh volume; and detect that the seventh volume does not meet the first output volume range, and obtain the fourth volume parameter corresponding to the audio data to be played based on the seventh volume and the first output volume range.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtain eighth audio data, wherein the average volume of the audio data within the preset duration of the eighth audio data is the ninth volume; the eighth audio data is different from the first audio data; the ninth volume is different from the first volume; detect that the ninth volume does not satisfy the first output volume range, and obtain the fifth volume parameter corresponding to the eighth audio data based on the ninth volume and the first output volume range; the fifth volume parameter is different from the first volume parameter; correct the eighth audio data based on the fifth volume parameter to obtain ninth audio data, wherein the average volume of the ninth audio data is the tenth volume, and the tenth volume is within the first output volume range; and play the ninth audio data.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtain the first audio data from the target application; or, receive the first audio data sent by the second electronic device.
  • When the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: play the second audio data through the speaker; or, play the second audio data through the earphone connected to the electronic device.
  • the second aspect and any implementation manner of the second aspect respectively correspond to the first aspect and any implementation manner of the first aspect.
  • the technical effects corresponding to the second aspect and any implementation manner of the second aspect may be referred to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, which will not be described again here.
  • embodiments of the present application provide an audio processing method.
  • The method includes: the first electronic device obtains first sub-audio data and first orientation information, where the first orientation information is used to indicate the relative position between the first electronic device and the earphone, and the first electronic device performs data interaction with the earphone through a wireless connection; the first electronic device receives the second sub-audio data and the second orientation information sent by the second electronic device, where the second orientation information is used to indicate the relative position between the second electronic device and the earphone; the first electronic device mixes the first sub-audio data and the second sub-audio data based on the first orientation information and the second orientation information to obtain the first audio data; and the first electronic device sends the first audio data to the earphone to play the first audio data through the earphone.
  • the electronic device can mix the audio data of multiple electronic devices through the orientation information to achieve stereo playback of the earphones, so that the user can hear the stereo effect of audio from multiple devices when using the earphones.
  • the wireless connection may be maintained based on the Bluetooth protocol or the Wi-Fi protocol.
  • The first electronic device mixes the first sub-audio data and the second sub-audio data based on the first orientation information and the second orientation information, including: based on the first orientation information, the first electronic device obtains the third sub-audio data of the first sub-audio data corresponding to the first channel of the earphone and the fourth sub-audio data of the first sub-audio data corresponding to the second channel of the earphone, where the phase, timbre, sound level, and/or audio starting position between the third sub-audio data and the fourth sub-audio data are different; based on the second orientation information, the first electronic device obtains the fifth sub-audio data of the second sub-audio data corresponding to the first channel of the earphone and the sixth sub-audio data of the second sub-audio data corresponding to the second channel of the earphone, where the phase, timbre, sound level, and/or audio starting position between the fifth sub-audio data and the sixth sub-audio data are different; the first electronic device obtains the seventh sub-audio data and the eighth sub-audio data; and the first electronic device sends the seventh sub-audio data and the eighth sub-audio data to the earphone, so that the first channel of the earphone plays the seventh sub-audio data and the second channel of the earphone plays the eighth sub-audio data.
  • the electronic device can determine the phase difference, timbre difference, sound level difference and time difference of the two-channel audio of the earphone based on the orientation information, thereby achieving a two-channel stereo sound effect of the earphone.
  • The audio starting position is the playback time at which the audio starts to play in a single channel of the earphone.
  • The first orientation information includes distance information and direction information between the first electronic device and the earphone, and the second orientation information includes distance information and direction information between the second electronic device and the earphone.
  • the electronic device can adjust the audio of each electronic device in the mixed audio based on the distance and direction between each device and the earphones to achieve a stereo effect.
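  • The following simplified Kotlin sketch shows one way level and starting-position (delay) differences could be derived from distance and direction to spatialize each device's sub-audio and mix the per-channel copies; the panning and attenuation formulas are assumptions, not taken from the publication.

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin

// Relative position between a device and the earphone (illustrative representation).
data class Orientation(val distanceM: Double, val azimuthRad: Double)

// Split one device's sub-audio into left/right copies whose level and starting
// position (delay) depend on direction and distance.
fun spatialize(sub: DoubleArray, o: Orientation, sampleRate: Int): Pair<DoubleArray, DoubleArray> {
    val attenuation = 1.0 / (1.0 + o.distanceM)            // a farther device sounds quieter
    val pan = o.azimuthRad.coerceIn(-PI / 2, PI / 2)       // -90 deg (left) .. +90 deg (right)
    val leftGain = attenuation * cos((pan + PI / 2) / 2)   // constant-power panning
    val rightGain = attenuation * sin((pan + PI / 2) / 2)
    val delay = (o.distanceM / 343.0 * sampleRate).toInt() // later audio starting position
    val left = DoubleArray(sub.size + delay)
    val right = DoubleArray(sub.size + delay)
    for (i in sub.indices) {
        left[i + delay] = sub[i] * leftGain
        right[i + delay] = sub[i] * rightGain
    }
    return left to right
}

// Mix the per-channel copies from the two devices into one channel of the output.
fun mixChannel(a: DoubleArray, b: DoubleArray): DoubleArray =
    DoubleArray(maxOf(a.size, b.size)) { i ->
        (a.getOrElse(i) { 0.0 } + b.getOrElse(i) { 0.0 }) / 2.0
    }
```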
  • the distance between the first electronic device and the earphone is smaller than the distance between the second electronic device and the earphone.
  • the first electronic device is the master device in the embodiment of the present application
  • the second electronic device is the slave device in the embodiment of the present application.
  • the quality of communication between the master device and the headset is better than the communication quality between the slave device and the headset.
  • The method further includes: the first electronic device obtains third orientation information, where the third orientation information is used to indicate the relative position between the first electronic device and the earphone, and the third orientation information is different from the first orientation information; the first electronic device receives fourth orientation information sent by the second electronic device, where the fourth orientation information is used to indicate the relative position between the second electronic device and the earphone, and the fourth orientation information is different from the second orientation information; the first electronic device mixes the first sub-audio data and the second sub-audio data based on the third orientation information and the fourth orientation information to obtain second audio data; and the first electronic device sends the second audio data to the earphone to play the second audio data through the earphone.
  • the electronic device can also adjust the mixing effect based on the real-time position between each electronic device and the headphones to achieve a more realistic stereo effect.
  • Before the first electronic device sends the first audio data to the earphone, the method further includes: the electronic device detects that the first volume does not meet the first output volume range, and obtains the first volume parameter corresponding to the first audio data based on the first volume and the first output volume range; the first output volume range is obtained in advance; the first volume is the average volume of the audio data within the preset duration of the first audio data.
  • The electronic device corrects the first audio data based on the first volume parameter to obtain corrected first audio data, wherein the average volume of the corrected first audio data is the second volume, and the second volume is within the first output volume range. In this way, the electronic device can correct the mixed audio to keep the output volume of the mixed audio within the output volume range, thereby realizing automatic adjustment of the volume of the audio data and effectively improving the user experience.
  • The method further includes: the first electronic device receives a first operation, where the first operation is used to instruct switching from the mixing mode of the first electronic device and the second electronic device to the single-device mode; in response to the first operation, the first electronic device sends first instruction information to the second electronic device, where the first instruction information is used to instruct the second electronic device to stop sending the second sub-audio data; and the first electronic device sends the first sub-audio data to the earphone to play the first sub-audio data through the earphone. In this way, the electronic device can switch between the mixing mode and the single-device playback mode, as sketched below.
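  • A hedged sketch of that mode switch on the master device; the transport callbacks and the instruction encoding are purely illustrative assumptions.

```kotlin
class MixingModeController(
    private val sendToSlave: (ByteArray) -> Unit,     // hypothetical transport to the second device
    private val sendToEarphone: (ByteArray) -> Unit   // hypothetical transport to the earphone
) {
    private val stopSendingSubAudio = byteArrayOf(0x01) // assumed encoding of the first instruction

    fun switchToSingleDeviceMode(localSubAudioFrames: Sequence<ByteArray>) {
        sendToSlave(stopSendingSubAudio)                   // second device stops sending its sub-audio
        localSubAudioFrames.forEach { sendToEarphone(it) } // play the first sub-audio via the earphone
    }
}
```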
  • the electronic device can also correct the audio data to be played so that the volume of the audio data remains within the output volume range.
  • Embodiments of the present application provide an electronic device.
  • The electronic device includes: one or more processors, a memory, and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when the computer program is executed by the one or more processors, the electronic device is caused to execute the method in the third aspect or any possible implementation of the third aspect.
  • the present application provides a computer-readable medium for storing a computer program, the computer program comprising instructions for performing the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a computer-readable medium for storing a computer program, the computer program including instructions for executing the method in the third aspect or any possible implementation of the third aspect.
  • the present application provides a computer program, the computer program comprising instructions for performing a method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a computer program, which includes instructions for executing the method in the third aspect or any possible implementation of the third aspect.
  • this application provides a chip, which includes a processing circuit and transceiver pins.
  • The transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the first aspect or any possible implementation of the first aspect to control the receiving pin to receive signals and to control the sending pin to send signals.
  • this application provides a chip, which includes a processing circuit and transceiver pins.
  • The transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the third aspect or any possible implementation of the third aspect to control the receiving pin to receive signals and to control the sending pin to send signals.
  • Figure 1 shows a schematic diagram of the hardware structure of the electronic device
  • Figure 2 shows a schematic diagram of the software structure of the electronic device
  • Figure 3 is an exemplary module interaction diagram
  • Figure 4 is a schematic diagram of an exemplary user interface
  • Figure 5 is an exemplary output volume adjustment schematic diagram
  • Figure 6 is a schematic diagram of an exemplary user interface
  • Figure 7 is a schematic diagram of an exemplary volume control method
  • Figure 8 is a schematic diagram illustrating an exemplary output volume range acquisition method
  • Figures 9a to 9b are schematic diagrams of module interaction of an exemplary volume control method
  • Figure 9c is an exemplary output volume adjustment schematic diagram
  • Figure 10 is an exemplary multi-device collaboration scenario
  • Figures 11a to 11b are schematic diagrams of module interaction of an exemplary volume control method
  • Figures 12a to 12b are exemplary output volume adjustment schematic diagrams
  • Figures 13a to 13b are schematic diagrams of the principles of an exemplary mixing scene
  • Figure 14 is a schematic diagram of an exemplary application scenario
  • Figure 15 is an exemplary voting schematic diagram
  • Figure 16 is a schematic diagram of an exemplary application scenario
  • Figure 17 is a schematic diagram of an exemplary user interface
  • Figure 18 is a schematic diagram of module interaction of an exemplary mixing scene
  • Figure 19 is a schematic diagram of an exemplary mixing process
  • Figure 20 is a schematic diagram of audio data processing of an exemplary audio mixing scenario
  • Figure 21a is a schematic diagram of audio data processing of an exemplary audio mixing scenario
  • Figure 21b is a schematic diagram of the effect of an exemplary mixing scene
  • Figure 22 is a schematic diagram of audio data processing of an exemplary audio mixing scenario
  • Figure 23 is a schematic diagram of audio data processing of an exemplary audio mixing scenario
  • Figure 24 is a schematic flowchart of a control method in an exemplary switching mode scenario
  • Figure 25 is a schematic flowchart of a control method in an exemplary switching mode scenario
  • Figure 26 is a schematic flowchart of a control method in an exemplary switching mode scenario
  • Figure 27 is a schematic diagram of an exemplary fade-in and fade-out process
  • Figure 28 is a schematic structural diagram of an exemplary device.
  • A and/or B can represent three situations: A exists alone, A and B exist simultaneously, and B exists alone.
  • first and second in the description and claims of the embodiments of this application are used to distinguish different objects, rather than to describe a specific order of objects.
  • first target object, the second target object, etc. are used to distinguish different target objects, rather than to describe a specific order of the target objects.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • The electronic device 100 shown in FIG. 1 is only an example of an electronic device; the electronic device 100 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration.
  • The various components may be implemented in hardware (including one or more signal processing and/or application-specific integrated circuits), software, or a combination of hardware and software.
  • In the embodiments of the present application, a mobile phone is taken as an example of the electronic device for explanation. In other embodiments, the electronic device may also be a tablet, a speaker, a wearable device, a smart home device, or another device, which is not limited in this application.
  • The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 110 . If the processor 110 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, etc.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), BeiDou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100 .
  • the internal memory 121 may include a program storage area and a data storage area. Among them, the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the electronic device 100 (such as audio data, phone book, etc.).
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to hands-free calls.
  • The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the electronic device 100 answers a call or receives a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the headphone interface 170D may be a USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the buttons 190 include a power button, a volume button, etc.
  • Key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 2 is a software structure block diagram of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture of the electronic device 100 divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime and system libraries, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include camera, gallery, calendar, calling, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include window manager, content provider, media manager, phone manager, resource manager, notification manager, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 100 .
  • For example, call status management (including connected, hung up, etc.).
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • The notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll-bar text (for example, notifications of applications running in the background), or display notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • the media manager which can also be called a media service, is used to manage audio data and image data, such as controlling the data flow direction of audio data and image data and writing audio streams and image streams to MP4 files.
  • the media manager can be used to adjust the output volume of audio data, mix audio data for multi-device audio output scenarios, etc.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer at least includes display driver, Wi-Fi driver, Bluetooth driver, camera driver, audio driver, sensor driver, etc.
  • the components shown in FIG. 2 do not constitute specific limitations on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • Figure 3 is an exemplary module interaction schematic diagram.
  • the video application outputs the audio data of audio A (which may also be called audio data A) to the media manager.
  • the media manager receives audio data input by other applications, which may be called input audio data (denoted as data_in in this application).
  • the size of the audio data can also be understood as the amplitude corresponding to the audio data, that is, the volume of the audio. Therefore, in this embodiment of the present application, data_in can be used to represent input audio data, and can also be used to represent the input volume of audio.
  • the media manager can obtain the output volume of audio A (which may also be called the audio output volume) based on the input volume of audio A.
  • the audio data output by the media manager to other modules or applications can be called output audio data, which is represented as data_out in the embodiment of this application. Similar to the input audio data, the amplitude of the output audio data is the volume of the output audio. Therefore, in this embodiment of the present application, data_out can be used to represent output audio data, or can also be used to represent the output volume of audio.
• the solution in the embodiments of the present application mainly describes the control of volume. Therefore, in the embodiments of the present application, data_in is mainly used to represent the input volume of the audio, and data_out is mainly used to represent the output volume of the audio; the description will not be repeated below.
  • data_in is the input volume of audio A.
  • the volume corresponding to the audio data generated by the video application is the input volume.
  • track_volume (track volume) represents the volume of the application.
  • the volume adjustment option in the music application can be used to adjust the output volume of the music being played by the music application.
  • this volume does not affect the volume of other media and is only effective for the audio played by the music application.
  • stream_volume represents the volume of a certain stream.
  • the Android system includes 10 types of streams, including but not limited to: media streams, call streams, etc.
  • the sound and vibration option interface 401 in the setting options of the mobile phone includes a volume option 402.
  • the volume options 402 include but are not limited to: "Incoming calls, messages, notifications" options 4021, "Alarm clock” options 4022, "Music, videos, games” options 4023, “Calls” options 4024, and “Smart voice” options 4025, etc.
  • the "call” option 4024 corresponds to the call stream and is used to adjust the output volume of the call.
  • Alarm clock option 4022 corresponds to the alarm clock stream, which is used to adjust the output volume of the alarm clock.
  • the "incoming call, message, notification” option 4021 corresponds to the incoming call, message, and notification stream volume alias (stream volume alias). stream volume alias is used to set the volume of the same group of streams.
  • the slider of the "Incoming Call, Message, Notification” option 4021 to set the volume of the "Incoming Call, Message, Notification” stream voloum alias, it can be understood as being used to set the volume of the incoming call stream (i.e., the volume of the incoming call prompt), the information stream The volume (that is, the information prompt volume) and the volume of the notification stream (that is, the notification prompt volume).
  • the volume of incoming call alerts, message alerts, and notification alerts will be adjusted accordingly, but other streams, such as alarm clocks and call volumes will not be adjusted.
  • the "Music, Video, Game” option 4023 corresponds to the "Music, Video, Game” stream voloum alias, which can also be called the media stream voloum alias.
  • the media can be set , that is, the volume of each stream (including music stream, video stream, game stream) in the "music, video, game” stream volume alias.
  • master_volume (master volume) is used to set all stream_volume and track_volume. This value can be written into the device file corresponding to the audio device (ie, the sound card file) to control the volume of all objects. This value can also not be written to the sound card file, but used as a multiplier factor to affect the volume of all objects.
  • the factors that affect the output volume include but are not limited to at least one of the following: input volume, stream volume, track volume, and master volume.
  • factors other than input volume can also be called output volume parameters.
• the user can adjust the output volume by adjusting any of the output volume parameters.
  • the media manager outputs the data_out of audio A (that is, the output audio data of audio A) to the audio driver.
• the audio driver can play audio A through the playback device (such as a speaker), and the playback volume is the volume corresponding to data_out, which can also be understood as the amplitude corresponding to the audio data of audio A.
  • the media manager can perform corresponding encoding and other processing on the audio A.
  • the specific processing method can refer to the existing technical embodiments. This application will not provide a detailed description, and the description will not be repeated below.
  • the video application switches audio A to audio B in response to user operations.
  • the video application outputs the input audio data of audio B (expressed as data_in(B)) to the media manager.
• the media manager obtains the output audio data of audio B based on formula (1), which can also be understood as obtaining the output volume of audio B (denoted as data_out(B)).
  • the output volume of audio B depends on the input volume of audio B.
• for example, if the input volume of audio B is much smaller than the input volume of audio A, the output volume of audio B is much smaller than the output volume of audio A.
  • the user can adjust the volume of media stream volume alias (including music stream, video stream, and game stream) through the volume keys to increase the output volume of audio B.
  • the volume adjustment box 602 includes a volume bar for indicating the media volume. It should be noted that in the embodiment of this application, the volume key is set to adjust the media volume as an example for explanation. In other embodiments, the volume can also be adjusted in any other way, which is not limited in this application.
  • the media manager increases the value of stream_volume in the output volume parameter, so that the output volume of audio B, that is, data_out(B), increases.
  • the video application switches back to audio A in response to the received user operation, correspondingly, the video application outputs audio data of audio A to the media manager (ie, input audio data).
• the media manager obtains the output volume of audio A based on the current (that is, adjusted) output volume parameters, in which the value of stream_volume has been increased. As a result, the output volume of audio A becomes very loud. This phenomenon can be called a popping sound, and it affects the user experience.
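• For illustration only, the following is a minimal sketch (in Java) of a formula (1)-style computation and of the popping phenomenon described above. It assumes that formula (1) is a simple multiplicative model (data_out = data_in × track_volume × stream_volume × master_volume); the class and method names are illustrative, not the actual implementation.

```java
// Illustrative sketch only: assumes formula (1) is a simple multiplicative model.
// All names here are illustrative, not the actual implementation.
public final class BaselineVolume {
    /**
     * Derives the output volume from the input volume and the output volume
     * parameters (track volume, stream volume, master volume).
     */
    public static double outputVolume(double dataIn,
                                      double trackVolume,
                                      double streamVolume,
                                      double masterVolume) {
        return dataIn * trackVolume * streamVolume * masterVolume;
    }

    public static void main(String[] args) {
        // Audio A is loud, audio B is quiet; the user raises stream_volume for B,
        // then switches back to A, which now plays much louder (the popping problem).
        double streamVolume = 0.5;
        System.out.println(outputVolume(0.8, 1.0, streamVolume, 1.0)); // audio A
        System.out.println(outputVolume(0.2, 1.0, streamVolume, 1.0)); // audio B (quiet)
        streamVolume = 1.0; // user turns the media volume up for audio B
        System.out.println(outputVolume(0.8, 1.0, streamVolume, 1.0)); // audio A pops
    }
}
```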
• embodiments of the present application provide a volume control method that can control the audio output volume within a preset range, where the preset range is set according to user needs, so that the problem of popping sound when switching audio can be solved and the user experience can be effectively improved.
  • Figure 7 is a schematic diagram of an exemplary volume control method. Please refer to Figure 7 , which specifically includes:
• when the mobile phone is playing audio data or outputting audio data to other devices, the mobile phone can subscribe to the output volume to obtain the output volume.
  • the output volume in the embodiment of the present application is represented as "data_out”.
• the media manager in the mobile phone can obtain the output volume corresponding to the output audio data based on the input volume corresponding to the audio input by the application (i.e., the input audio data).
  • the media manager can output audio data to the audio driver for playback through speakers and other devices.
  • the media manager can also output audio data to a Wi-Fi driver or Bluetooth driver to transfer it to other devices for playback. That is to say, the output volume in the embodiment of the present application can be understood as the volume when the mobile phone plays audio, or the output volume corresponding to the mobile phone outputting audio data to other devices.
• a volume parameter (expressed as volume_coefficient in this application) is introduced.
  • appropriate volume parameters are set to control the output volume within the range required by the user.
  • the volume parameters can be obtained and saved by the media manager, for example, they can be saved in the memory (or other locations, which are not limited by this application).
• the specific acquisition method of the volume parameters will be described in detail in the following embodiments.
  • the media manager may be set with a default volume parameter, for example the volume parameter may be 0.5.
• this default volume parameter can be used by the media manager to process audio before the volume parameter is obtained based on the method described below. For example, when the phone is turned on for the first time and audio is played for the first time, the media manager can obtain the output volume based on the default volume parameter to avoid popping sounds during the first playback. Optionally, if the media manager does not set a default volume parameter, the media manager can obtain the output volume based on formula (1) when playing audio for the first time, which is not limited in this application.
  • factors that affect the output volume include but are not limited to at least one of the following: input volume, stream volume, track volume, master volume, and volume parameters.
  • stream volume (stream_volume), track volume (track_volume), and master volume (master_volume) can be called output volume parameters.
• the user can adjust any of the output volume parameters to adjust the output volume.
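• A minimal sketch, assuming that formula (2) extends the multiplicative model above with the volume parameter (volume_coefficient); the exact form and the names are assumptions for illustration.

```java
// Illustrative sketch only: assumes formula (2) adds the volume parameter
// (volume_coefficient) as one more multiplicative factor; names are illustrative.
public final class CoefficientVolume {
    public static double outputVolume(double dataIn,
                                      double trackVolume,
                                      double streamVolume,
                                      double masterVolume,
                                      double volumeCoefficient) {
        // data_out = data_in * track_volume * stream_volume * master_volume * volume_coefficient
        return dataIn * trackVolume * streamVolume * masterVolume * volumeCoefficient;
    }
}
```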
  • the mobile phone subscribes to the output volume and can sample the output volume at a preset sampling period to obtain the output volume.
  • the mobile phone can set two types of sampling periods, including a first sampling period and a second sampling period.
  • the first sampling period can also be called a sparse sampling period
  • the second sampling period can also be called a dense sampling period.
  • the sampling period duration of the first sampling period is longer than the period duration of the second sampling period.
• the period duration of the first sampling period may be on the order of seconds, and the period duration of the second sampling period may be on the order of milliseconds.
• the mobile phone can detect the user's behavior to detect whether an output volume adjustment behavior occurs, which can also be understood as detecting whether the behavior of adjusting the output volume parameters described above occurs.
• in one example, when the mobile phone plays audio data and does not detect that the user adjusts the output volume, the mobile phone collects the output volume in the first sampling period. In another example, when the mobile phone plays audio data and detects that the user adjusts the output volume, the mobile phone collects the output volume in the second sampling period.
• the user adjustment of the output volume described above may include but is not limited to at least one of the following: clicking the volume key; clicking or dragging each volume option in the setting interface (such as each slide bar in Figure 4); clicking the volume keys of the remote control; adjusting the volume through voice commands or gestures, etc., which are not limited in this application.
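• The following sketch illustrates how the two sampling periods described above might be switched when a user volume-adjustment behavior is detected; the period values and class names are assumptions for illustration.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: samples the output volume with a sparse (seconds) period by
// default and switches to a dense (milliseconds) period while a user volume
// adjustment is in progress. Period values and names are assumptions.
public final class OutputVolumeSampler {
    private static final long SPARSE_PERIOD_MS = 2_000; // first sampling period (assumed)
    private static final long DENSE_PERIOD_MS = 50;     // second sampling period (assumed)

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> task;

    /** Reads the current data_out and feeds it to the range-update step. */
    private void sample() {
        // read data_out here and update the output volume range
    }

    public synchronized void startSparseSampling() {
        restart(SPARSE_PERIOD_MS);
    }

    /** Invoked when a user volume-adjustment behavior is detected. */
    public synchronized void onUserAdjustmentStarted() {
        restart(DENSE_PERIOD_MS);
    }

    /** Invoked when the adjustment ends; falls back to sparse sampling. */
    public synchronized void onUserAdjustmentEnded() {
        restart(SPARSE_PERIOD_MS);
    }

    private void restart(long periodMs) {
        if (task != null) {
            task.cancel(false);
        }
        task = scheduler.scheduleAtFixedRate(this::sample, 0, periodMs, TimeUnit.MILLISECONDS);
    }
}
```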
  • a smart volume control adjustment option can be set in the mobile phone, and the user can use the volume control function adjustment option to instruct the mobile phone to execute the volume control scheme in the embodiment of the present application during audio playback.
  • the volume control function adjustment option can be set in at least one of the drop-down menu, the control center, the negative screen, and the sound and vibration setting interface, which is not limited in this application.
  • the mobile phone obtains and saves the output volume range.
  • the mobile phone can obtain the output volume range.
  • the output volume range includes the maximum output volume and the minimum output volume.
• the mobile phone can obtain the maximum value of the output volume (expressed as data_out_max in this application) and the minimum value of the output volume (expressed as data_out_min in this application) based on formula (3) and formula (4), respectively, to obtain the output volume range.
• data_out_max = Math.max(data_out_max, data_out)    (3)
• data_out_min = Math.min(data_out_min, data_out)    (4)
• after the mobile phone collects the output volume, it compares the output volume with the maximum and minimum values of the saved output volume range.
  • the collected output volume is greater than the maximum value of the saved output volume range, the collected output volume is the maximum value of the new output volume range, and the minimum value of the output volume range remains unchanged. That is, the mobile phone updates the saved output volume range.
  • the maximum value of the updated output volume range is the collected output volume, and the minimum value is still the minimum value of the previously saved output volume range.
• in another example, if the collected output volume is within the saved output volume range, the updated output volume range is the same as the previous output volume range; that is, the maximum value of the updated output volume range is still the maximum value of the previous output volume range, and the minimum value of the updated output volume range is still the minimum value of the previous output volume range.
  • the collected output volume is less than the minimum value of the saved output volume range, the collected output volume is the minimum value of the new output volume range, and the maximum value of the output volume range remains unchanged. That is, the mobile phone updates the saved output volume range.
  • the maximum value of the updated output volume range is the maximum value of the previously saved output volume range.
• the minimum value of the updated output volume range is the collected output volume.
• after each sparse collection (that is, collection according to the first sampling period), the mobile phone updates the output volume range according to the collected output volume, and the updated output volume range may be the same as or different from the previous output volume range.
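• A minimal sketch of the sparse-sampling update of the output volume range according to formulas (3) and (4); names are illustrative.

```java
// Illustrative sketch of the sparse-sampling update of the output volume range,
// following formulas (3) and (4): the saved maximum only grows and the saved
// minimum only shrinks; a sample inside the range leaves the range unchanged.
public final class OutputVolumeRange {
    private double dataOutMax;
    private double dataOutMin;
    private boolean initialized;

    /** Updates the saved range with one sparsely collected output volume. */
    public void updateSparse(double dataOut) {
        if (!initialized) {
            dataOutMax = dataOut;
            dataOutMin = dataOut;
            initialized = true;
            return;
        }
        dataOutMax = Math.max(dataOutMax, dataOut); // formula (3)
        dataOutMin = Math.min(dataOutMin, dataOut); // formula (4)
    }

    public double max() { return dataOutMax; }
    public double min() { return dataOutMin; }
}
```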
  • the mobile phone when the mobile phone collects the output volume in the second sampling period, the mobile phone can also obtain the output volume range.
• if the mobile phone (specifically, the media manager) detects that the user has increased the output volume (for example, the user clicks the volume key to increase the media stream parameter, i.e., stream_volume), the mobile phone (specifically, the media manager; hereinafter referred to simply as the media manager, which will not be described again below) collects the output volume in the second sampling period and obtains the minimum output volume value of the output volume range based on formula (5) and formula (6).
• the maximum output volume value of the output volume range is still the maximum value of the output volume range obtained last time.
• data_out = Math.average(data_out1, data_out2, ...)    (5)
• data_out_min = Math.max(data_out_min, data_out)    (6)
• for example, when the mobile phone detects that the user adjusts the output volume (i.e., adjusts the output volume parameters mentioned above), during the process of the user increasing the output volume (i.e., from the time the user adjustment is detected to the end of the adjustment), the mobile phone collects the output volume in the second sampling period. From the time the user starts adjusting the volume to the end of the adjustment, the mobile phone collects n output volumes in the second sampling period, including data_out1, data_out2, ..., and the mobile phone obtains the average output volume based on the n collected output volumes. Then, the mobile phone obtains the minimum value of the output volume range based on formula (6).
• specifically, the mobile phone compares the minimum value of the output volume range saved last time with the average output volume obtained this time. In one example, if the average output volume is greater than the minimum value of the previously saved output volume range, the average output volume is used as the minimum value of the new output volume range, and the maximum value is still the maximum value of the previously obtained output volume range. In another example, if the average output volume is less than the minimum value of the previously saved output volume range, the minimum value of the previously saved output volume range is used as the minimum value of the new output volume range, and the maximum value is still the maximum value of the previously obtained output volume range; that is to say, the new output volume range is consistent with the previously saved output volume range.
• if the mobile phone (specifically, the media manager) detects that the user has turned down the output volume (for example, the user clicks the volume key to turn down the media stream parameter, i.e., stream_volume), the mobile phone collects the output volume in the second sampling period and obtains the maximum output volume value of the output volume range (expressed as data_out_max in this application) based on formula (7) and formula (8).
  • the minimum output volume value of the output volume range is still the minimum value of the output volume range obtained last time.
• data_out = Math.average(data_out1, data_out2, ...)    (7)
• data_out_max = Math.min(data_out_max, data_out)    (8)
• for example, when the mobile phone detects that the user has turned down the output volume (that is, from the time the user begins to adjust the volume to the end of the adjustment), the mobile phone collects the output volume in the second sampling period. From the time the user starts adjusting the volume to the end of the adjustment, the mobile phone collects n output volumes in the second sampling period, including data_out1, data_out2, ..., and the mobile phone obtains the average output volume based on the n collected output volumes. Then, the mobile phone obtains the maximum value of the output volume range based on formula (8). Specifically, the mobile phone compares the maximum value of the output volume range saved last time with the average output volume obtained this time.
• in one example, if the average output volume is less than the maximum value of the previously saved output volume range, the average output volume is used as the maximum value of the new output volume range, and the minimum value is still the minimum value of the previously obtained output volume range.
• in another example, if the average output volume is greater than the maximum value of the previously saved output volume range, the maximum value of the previously saved output volume range is used as the maximum value of the new output volume range, and the minimum value is still the minimum value of the previously obtained output volume range; that is to say, the new output volume range is consistent with the previously saved output volume range.
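• A minimal sketch of the dense-sampling update according to formulas (5) to (8): the densely collected output volumes are averaged, and the average only raises the saved minimum (when the user turns the volume up) or only lowers the saved maximum (when the user turns the volume down). Names are illustrative.

```java
import java.util.List;

// Illustrative sketch of the dense-sampling update described above.
public final class DenseRangeUpdate {

    private static double average(List<Double> samples) {
        return samples.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    /** User turned the volume up: update the minimum, keep the maximum. */
    public static double updatedMin(double savedMin, List<Double> denseSamples) {
        double dataOut = average(denseSamples); // formula (5)
        return Math.max(savedMin, dataOut);     // formula (6)
    }

    /** User turned the volume down: update the maximum, keep the minimum. */
    public static double updatedMax(double savedMax, List<Double> denseSamples) {
        double dataOut = average(denseSamples); // formula (7)
        return Math.min(savedMax, dataOut);     // formula (8)
    }
}
```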
• the media manager collects the output volume generated by the media manager.
• for example, the media manager may execute the step of obtaining the output volume range for the first time.
• the media manager collects in a sparse collection period, and the collected output volume is data_out1; this value can be regarded as both the minimum value and the maximum value of the output volume range.
• the media manager detects that the collection period has arrived, and the media manager collects the output volume; for example, the collected output volume is data_out2.
• the media manager compares data_out2 with data_out1.
• the media manager determines that the maximum value of the output volume range is data_out2 and the minimum value is data_out1, that is, (data_out1, data_out2). For example, at time t3, the media manager detects that the collection period has arrived, and the media manager collects the output volume; for example, the collected output volume is data_out3. Based on formula (3) and formula (4), the media manager determines that data_out3 is greater than the maximum value of the currently saved output volume range (data_out2). The media manager updates the output volume range: the maximum value of the updated output volume range is data_out3, and the minimum value is still data_out1, that is, (data_out1, data_out3).
• the media manager detects that the user has turned down the output volume, and then the media manager collects continuously in the dense collection period from time t4 to the end of the user adjustment (for example, time t7), and averages the collected output volumes; for example, the average value obtained is data_out4.
  • the media manager obtains the maximum value of the output volume range based on formula (7) and formula (8). Assuming that data_out4 is greater than the maximum value of the currently saved output volume range (data_out3), the maximum value of the updated output volume range is still data_out3, that is, the updated output volume range is (data_out1, data_out3).
• the collection interval and volume in Figure 8 are only illustrative examples and are not limited in this application. It should be further noted that, although not shown in Figure 8, after the intensive collection (that is, after time t7), the media manager continues collection in the sparse collection period and updates the output volume range.
  • each time the media manager obtains a new output volume range it saves the new output volume range.
  • the media manager can overwrite the previous output volume range to save memory usage, which is not limited in this application.
  • the mobile phone detects the audio change and obtains the volume parameters based on the new audio input volume and output volume range.
  • audio changes in the embodiment of this application may include but are not limited to: switching audio source files, switching audio source devices, switching output devices, etc.
• audio source file switching optionally refers to a change in the audio file received by the media manager.
  • the mobile phone is playing song A.
  • the audio output by the music application to the media manager is the audio data of song A.
  • the mobile phone switches to play song B in response to the user operation.
• the audio output by the music application to the media manager is switched to the audio data of song B, which is an audio source switching.
  • the switching of audio source devices may be the switching of audio source devices during the audio delivery service in a multi-device scenario.
  • the mobile phone transmits song A to the vehicle-mounted device through a wireless connection (which can be a Wi-Fi connection or a Bluetooth connection, which is not limited in this application and will not be repeated below), and the vehicle-mounted device receives and plays song A.
• then, the user uses a tablet to connect to the vehicle-mounted device, and uses the tablet to play song A to the vehicle-mounted device.
  • the audio source device is switched from the mobile phone to the tablet.
  • output device switching may be switching of output devices during audio delivery services in a multi-device scenario.
  • the mobile phone plays song A to the car-mounted device through a wireless connection, and in response to the received user operation, the mobile phone plays song A to the tablet through a wireless connection with the tablet.
  • switching its output device from a car-mounted device to a tablet device is an output device switching.
  • the mobile phone detects any of the above situations, that is, after determining that the audio has changed, the mobile phone executes the volume parameter acquisition step. That is to say, after the audio is changed, the mobile phone obtains the volume parameters corresponding to the audio based on the method described below. Before the audio is changed, the mobile phone no longer needs to obtain the volume parameters of the audio.
• after detecting the audio change, the mobile phone (specifically, the media manager) obtains the input volume of the changed audio (for the concept, refer to the above; it will not be described again here).
  • the mobile phone can obtain the corresponding volume parameters based on the relationship between the input volume and the output volume range.
  • the volume parameter can be used to adjust the output volume of the audio.
  • the output volume of the audio can be adjusted to within the output volume range by setting corresponding volume parameters for different audios (ie, different input volumes).
  • the specific method is as follows:
  • the mobile phone obtains the average input volume of the audio.
• for example, still taking the scene of playing audio on a mobile phone as an example, after the mobile phone (specifically, the media manager) detects the audio change, it can obtain the input volume corresponding to a preset length of audio data at the beginning of the changed audio before playing the changed audio.
  • the preset length can be, for example, 5 seconds, which can be set based on actual needs, and is not limited in this application.
  • the mobile phone can average the input volume of the obtained audio data of a preset length to obtain the average input volume.
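• A minimal sketch of obtaining the average input volume over a preset length of audio data; treating the input volume as per-frame amplitude values is an assumption for illustration.

```java
// Illustrative sketch: averages the input volume over a preset length (for example,
// the first 5 seconds) of the changed audio to obtain data_in_average.
public final class InputVolumeAverager {
    public static double averageInputVolume(double[] amplitudes, int sampleRate, int presetSeconds) {
        int frames = Math.min(amplitudes.length, sampleRate * presetSeconds);
        double sum = 0;
        for (int i = 0; i < frames; i++) {
            sum += Math.abs(amplitudes[i]); // amplitude treated as the input volume of a frame
        }
        return frames == 0 ? 0 : sum / frames;
    }
}
```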
  • the mobile phone obtains the audio volume parameters based on the relationship between the average audio input volume and the output volume range.
• in one example, if the average input volume of the audio is greater than the maximum value of the output volume range, the mobile phone obtains the volume parameter of the audio (expressed as volume_coefficient in this application; for the concept, refer to the above, and details are not repeated here) based on formula (9):
  • data_out_max is the maximum value of the output volume range mentioned above.
  • data_in_average is the average input volume of the obtained audio.
  • G_max is a system constant and can be set according to actual needs. For example, in the embodiment of this application, G_max is 0.5, which is not limited in this application.
• in another example, if the average input volume of the audio is less than the minimum value of the output volume range, the mobile phone obtains the volume parameter of the audio based on formula (10):
  • data_out_min is the minimum value of the output volume range mentioned above.
  • data_in_average is the average input volume of the obtained audio.
  • G_min is a system constant and can be set according to actual needs. For example, in the embodiment of this application, G_min is 0.5, which is not limited in this application.
• if the mobile phone detects that the average input volume of the audio does not exceed the output volume range, that is, the average input volume of the audio is greater than or equal to the minimum value of the output volume range and less than or equal to the maximum value of the output volume range, then the volume parameter of the audio is equal to 1.
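• The bodies of formulas (9) and (10) are not reproduced in this excerpt. The sketch below only illustrates the three-way selection described above and assumes, purely for illustration, scaling forms built from the stated parameters (G_max, data_out_max, data_in_average and G_min, data_out_min, data_in_average).

```java
// Illustrative sketch of selecting the volume parameter (volume_coefficient).
// The bodies of formulas (9) and (10) are NOT given in this excerpt; the scaling
// forms used below are assumptions for illustration only.
public final class VolumeCoefficient {
    private static final double G_MAX = 0.5; // system constant (example value)
    private static final double G_MIN = 0.5; // system constant (example value)

    public static double compute(double dataInAverage, double dataOutMin, double dataOutMax) {
        if (dataInAverage > dataOutMax) {
            // Average input volume above the range: formula (9) (assumed form).
            return G_MAX * dataOutMax / dataInAverage;
        }
        if (dataInAverage < dataOutMin) {
            // Average input volume below the range: formula (10) (assumed form).
            return G_MIN * dataOutMin / dataInAverage;
        }
        // Within the output volume range: the volume parameter equals 1.
        return 1.0;
    }
}
```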
  • S704 The mobile phone obtains the new audio output volume based on the volume parameters, input volume and output volume parameters.
• after the mobile phone obtains the volume parameter of the changed audio (i.e., the new audio), it can obtain the changed output volume based on formula (2).
  • the "new audio” described in the embodiment of the present application in the audio source change scenario is audio data that is different from the audio before switching.
  • the "new audio” may be different audio data from before the switch, or it may be the same audio data as before the switch.
  • the media manager both are considered new. Audio.
  • the mobile phone detects the output device switching. After the output device is switched, the music application may re-output audio data to the media manager.
  • the audio data may be the same as or different from the audio data before the switch, which is not limited by this application.
• after the mobile phone obtains the audio output volume, it can output the audio and the audio output volume to the audio driver for playback, or it can output the audio and the audio output volume to other devices through the communication module for playback.
  • the video application plays audio A in response to the received user operation.
  • the video application outputs the input audio data of audio A to the media manager, and the corresponding input volume can be expressed as data_in(A).
  • the video application can divide the audio A into segments and transmit them to the media manager segment by segment.
  • the media manager receives and caches the received audio A.
• the specific transmission method may refer to existing technical embodiments and is not limited in this application.
  • the media manager receives input audio data for audio A.
• the media manager can obtain the volume parameter of audio A based on the input volume corresponding to the input audio data and the currently saved (that is, the most recently saved) output volume range.
  • the media manager obtains the average input volume (data_in_average(A)) of the audio data of the preset length (for example, the first 5 seconds) of the beginning of audio A.
  • the Media Manager compares Audio A's average input volume to the output volume range.
  • the output volume range currently saved by the media manager is (data_out_min1, data_out_max1) as an example.
  • the media manager detects that the average input volume (data_in_average(A)) of audio A is within the output volume range, that is, data_out_min1 ⁇ data_in_average(A) ⁇ data_out_max1, then the media manager determines the volume parameter corresponding to audio A (denoted as volume_coefficient (A)) is 1.
  • the media manager obtains the output volume corresponding to the output audio data of audio A (denoted as data_out(A)) based on formula (2). Please refer to Figure 9a. After the media manager obtains data_out(A), the media manager outputs the output audio data of audio A to the audio driver. The audio driver plays the output audio data of audio A through the speaker of the mobile phone, and the playback volume is the value corresponding to data_out(A).
• the media manager collects the output volume of audio A (i.e., the output volume generated by the media manager) in the first sampling period (i.e., the sparse sampling period mentioned above), and updates the output volume range based on the collected output volume.
  • the video application responds to the received user operation, changes the played audio, and switches audio A to audio B.
  • the video application outputs the input audio data of audio B to the media manager, and the corresponding input volume is represented as data_in(B).
  • the media manager determines the audio source change in response to receiving input audio data for Audio B.
  • the media manager determines that the audio source has changed, it will re-execute the volume parameter acquisition process, that is, the media manager acquires the volume parameters of audio B.
  • the media manager obtains the average input volume (data_in_average(B)) of the audio data of the preset length (for example, the first 5 seconds) at the beginning of audio B.
  • the media manager compares Audio B's average input volume to the output volume range.
• the output volume range currently saved by the media manager is (data_out_min1, data_out_max1) as an example; in other words, the output volume range obtained by the media manager during the playing of audio A has not changed.
• the media manager detects that the average input volume (data_in_average(B)) of audio B is outside the output volume range and is less than the minimum value data_out_min1 of the output volume range.
  • the media manager may obtain the volume parameter of audio B (denoted as volume_coefficient(B)) based on formula (10).
  • the media manager can obtain the output volume (denoted as data_out(B)) corresponding to the audio data of audio B based on formula (2). Please refer to Figure 9b.
• after the media manager obtains data_out(B), the media manager outputs the output audio data of audio B to the audio driver.
  • the audio driver plays the output audio data of audio B through the speaker of the mobile phone, and the playback volume is the value corresponding to data_out(B).
• the above example only takes the case where the average input volume of audio B is less than the minimum value of the output volume range as an example. If the average input volume of audio B is greater than the maximum value of the output volume range, the media manager can obtain the volume parameter of audio B based on formula (9). The steps for the other parts are the same and the examples will not be repeated here.
• the media manager collects the output volume of audio B in the first sampling period (i.e., the sparse sampling period mentioned above), and updates the output volume range based on the collected output volume.
• the media manager executes the volume parameter acquisition process only after detecting an audio change. Therefore, the output volume range obtained during audio playback does not affect the currently played audio; before the audio playback ends or other audio is switched to, the output volume is calculated based on the volume parameter already obtained. The output volume range updated during playback will be used in the step of obtaining the volume parameter after the audio changes.
  • the video application switches Audio B back to Audio A in response to the received user action.
• the media manager repeats the process in Figure 9a.
  • the video application outputs the input audio data of audio A to the media manager, where the input volume corresponding to the input audio data of audio A is data_in(A).
  • the media manager determines the audio source change in response to receiving input audio data for audio A.
  • the media manager will re-execute the volume parameter acquisition process. For example, the media manager obtains the average input volume of audio A (data_in_average(A)).
  • Media Manager compares Audio A's average input volume to the output volume range.
  • the currently saved output volume range is (data_out_min1, data_out_max1) as an example.
  • the media manager detects that the average input volume of audio A is within the output volume range.
• the media manager can determine that the volume parameter (volume_coefficient(A)) corresponding to audio A is 1, and obtains the output volume of audio A based on the volume parameter of audio A, the input volume of audio A, and the output volume parameters.
• the mobile phone can obtain the volume parameter corresponding to audio B based on the input volume of audio B, so that while the output volume parameters (i.e., the parameters in the dotted box, including stream volume (stream_volume), track volume (track_volume), and master volume (master_volume)) remain unchanged, that is, while the output volume parameters are the same as before switching to audio B, the mobile phone can adjust the output volume of audio B to within the output volume range through the volume parameter.
  • the media manager can re-obtain the corresponding volume parameters based on the switched audio.
• even if the output volume parameters are the same, by setting an appropriate volume parameter, the output volume of audio A can also fall within the output volume range obtained based on the user's listening habits. That is to say, in this embodiment of the present application, even if the input volume of the audio after switching is smaller than the input volume of the audio before switching, by setting the corresponding volume parameter, the user does not need to increase the volume.
• when the audio is switched back to the previous audio, since the output volume parameters have not changed, there will be no popping problem.
  • the audio output volume described in the embodiment of the present application is within the output volume range can be understood to mean that the average audio volume is within the output volume range, or it can be understood that most of the audio data volume is within the output volume range.
  • the volume of a small part of the audio may be greater or less than the output volume range.
• if the output volume corresponding to all of the audio data is within the output volume range, then during the process of playing the audio data, the output volume is always maintained within the output volume range. If the output volume corresponding to part of the audio data is not within the output volume range, the electronic device can update the output volume range when a volume that does not meet the output volume range is collected.
  • the above scenario is based on a scenario where the output volume range remains unchanged.
  • the media manager collects the output volume in the second sampling period (that is, the intensive sampling period) during the process of the user increasing the volume.
  • the media manager can update the output volume range based on the collected output volume.
  • the specific acquisition method can be referred to above and will not be repeated here.
  • the output volume range may also change during the sparse collection period, and this application will not give examples one by one.
  • the updated output volume range may be the same as or different from the previous output volume range, which is not limited in this application.
  • the media manager can obtain the volume parameters corresponding to audio B based on the updated output volume range.
  • the media manager will collect the audio in an intensive collection cycle in the manner described above, and update the output volume range.
• the media manager can obtain the volume parameter corresponding to audio A based on the currently saved (i.e., the most recently updated) output volume range.
  • the video application may also switch to other audio in response to the received user operation.
  • the specific method is consistent with switching to audio A, and this application will not give examples one by one.
  • the audio played when switching back to audio A is the same as the audio played before switching to audio B.
  • the video application may also adopt a method of resuming playback at a break point. For example, if audio A1 was played before switching to audio B, the media manager can obtain the volume parameter corresponding to audio A1 and obtain the corresponding output volume. After the video application switches audio B back to audio A, it can optionally play audio A2 in audio A.
  • the data of audio A2 and audio A1 are different, and the input volume can be the same or different.
  • the media manager can obtain the corresponding volume parameter based on the input volume of audio A2, and obtain the output volume of audio A2.
  • the specific implementation is similar to that in Figure 9a and Figure 9b and will not be described again here.
  • audio switching may be to switch the currently playing audio A to audio B in response to a received user operation during the playing of audio A.
  • audio B may be played after the audio A is played (for example, after a period of time), which is not limited in this application.
• optionally, when the user adjusts the output volume parameters to adjust the audio output volume and the media manager obtains the output volume range, the media manager can average the newly obtained maximum value of the output volume range with the previously obtained maximum value of the output volume range to serve as the updated maximum value of the output volume range.
• accordingly, the media manager averages the newly obtained minimum value of the output volume range with the previously obtained minimum value of the output volume range as the updated minimum value of the output volume range, thus preventing the output volume range from fluctuating excessively when the user adjusts the volume.
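• A minimal sketch of the smoothing variant described above, averaging the new and previously saved range endpoints; names are illustrative.

```java
// Illustrative sketch: the newly obtained range endpoints are averaged with the
// previously saved endpoints so that a single volume adjustment does not make the
// output volume range fluctuate excessively.
public final class SmoothedRange {
    public static double smoothedMax(double previousMax, double newMax) {
        return (previousMax + newMax) / 2.0;
    }

    public static double smoothedMin(double previousMin, double newMin) {
        return (previousMin + newMin) / 2.0;
    }
}
```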
  • the volume control method in the embodiment of the present application is not only applied in the scenario where the mobile phone plays audio as mentioned above (which can also be understood as a single-device scenario), but can also be applied in a multi-device collaboration scenario.
  • Figure 10 is an exemplary multi-device collaboration scenario. Please refer to Figure 10.
  • the mobile phone outputs audio data of audio A to the TV through a wireless connection with the TV.
  • the TV receives and plays the audio data of audio A.
  • the device types and device quantities in Figure 10 are only illustrative examples.
  • the scenario may be that mobile phone A is wirelessly connected to a TV and a tablet respectively, and outputs the audio data of audio A to the TV and the tablet respectively.
  • Both the TV and the tablet can play the audio data of audio A.
• the processing method of each device in such a scenario is the same as that of each device in the scenario in Figure 10, and this application will not give examples one by one.
  • the wireless connection described in the embodiments of this application may be maintained based on Bluetooth protocol or Wi-Fi protocol, which is not limited by this application.
  • the wireless connection is a Wi-Fi connection as an example for explanation.
  • the specific establishment process of the wireless connection may refer to existing technical embodiments, and will not be described in detail in this application.
  • the video application of the mobile phone determines to play audio A in response to the received user operation.
  • the video application of the mobile phone outputs the input audio data A1 of audio A to the media manager of the mobile phone, and the corresponding input volume is recorded as data_in(A1).
  • the media manager of the mobile phone receives the input audio data of audio A and obtains the volume parameters of audio A. Specifically, the media manager of the mobile phone obtains the average input volume of audio A (data_in_average(A1)). The media manager compares the average input volume of audio A with the currently saved output volume range and obtains the corresponding volume parameter (volume_coefficient(A1)). For specific details, please refer to the relevant content in Figure 9a and will not be described again here.
• the media manager of the mobile phone can obtain the output volume data_out(A1) of audio A based on the input volume, volume parameter, and current output volume parameters of audio A (including stream volume (stream_volume), track volume (track_volume), and master volume (master_volume)).
  • the media manager of the mobile phone can collect the output volume of audio A, and update the output volume range based on the collected output volume.
  • the specific method can be referred to above and will not be described again here.
  • the media manager of the mobile phone outputs the output audio data A1 of audio A to the Wi-Fi driver, where the output volume corresponding to the output audio data A1 is data_out(A1).
  • the Wi-Fi driver of the mobile phone outputs the output audio data A1 of audio A to the Wi-Fi driver of the TV.
  • the TV's Wi-Fi driver optionally outputs the output audio data A1 of the audio A to the TV's screen projection application (it can also be other collaborative applications, which is not limited in this application).
  • the screen projection application outputs the output audio data A1 of audio A to the media manager of the TV.
• for the media manager of the TV, the audio data it receives is all recorded as input audio data; therefore, the audio data of audio A received by the TV's media manager is represented as input audio data A2 of audio A, and the corresponding input volume is data_in(A2). Here, data_in(A2) is equal to data_out(A1).
  • the media manager of the TV obtains the volume parameter of audio A on the TV side based on the received input volume data_in(A2) of audio A.
  • the TV's media manager can obtain the average input volume of audio A (data_in_average(A2)) based on the input volume of audio A, data_in(A2).
• data_in_average(A2) may be the same as or different from data_in_average(A1), which is not limited in this application.
  • the media manager of the TV compares the average input volume of audio A (data_in_average(A2)) with the output volume range currently saved on the TV side.
• the output volume range on the TV side may be the same as or different from the output volume range on the mobile phone side, which is not limited in this application.
  • the media manager of the TV can obtain the volume parameter (volume_coefficient(A2)) of audio A on the TV side based on the comparison result.
• the volume parameter (volume_coefficient(A2)) may be the same as or different from the volume parameter (volume_coefficient(A1)), which is not limited in this application.
• the media manager of the TV can obtain the output volume data_out(A2) of audio A on the TV side based on the input volume data_in(A2) of audio A, the volume parameter (volume_coefficient(A2)), and the output volume parameters of the TV side (which may be the same as or different from those of the mobile phone side).
• it should be noted that for the specific details of obtaining each parameter, refer to the relevant content in the above embodiments; the description will not be repeated here.
  • the media manager of the TV can output audio data A2 of audio A to the audio driver, and the corresponding output volume is data_out(A2).
  • the audio driver controls the speaker (or other playback device) to play the audio data of audio A, and the playback volume is data_out(A2).
  • the media manager of the TV can also collect the output volume of audio A and update the output volume range.
• in a multi-device collaboration scenario, the TV can adjust the output volume of the audio to the listening volume range that the user is accustomed to on the TV side, that is, the output volume range of the TV side, through the volume parameter corresponding to the audio.
• regardless of the audio input volume obtained by the TV (i.e., the output volume on the mobile phone side), the audio output volume on the TV side can be adjusted to within the output volume range by obtaining an appropriate volume parameter.
  • the user can adjust the output volume of the TV through the TV's remote control.
  • the media manager of the TV increases the output volume parameters (including stream volume (stream_volume), track volume (track_volume), and master volume (master_volume)) on the TV side in response to the received user operation.
  • the corresponding changes are the output volume on the TV side and the output volume range on the TV side, but the output volume and output volume range on the mobile phone side will not be changed.
• when the mobile phone cancels the audio transmission with the TV (the wireless connection may be maintained or disconnected, which is not limited in this application), if the mobile phone continues to play audio A on the mobile phone side, the mobile phone can obtain the output volume of audio A through the process in Figure 9a.
  • the media manager on the mobile phone detects the audio change (ie, the output device changes).
• before the mobile phone obtains the output volume of audio A, it needs to obtain the volume parameter of audio A again based on the method in Figure 9a, and obtains the output volume of audio A based on the new volume parameter (i.e., volume parameter A3).
• the volume parameter A3 may be the same as or different from the volume parameter A1.
  • the mobile phone can play audio A through the audio driver of the mobile phone.
  • the playback volume is the output volume obtained by the mobile phone based on the output volume range and volume parameters on the mobile phone side.
• on the TV side, after the audio transmission with the mobile phone is cancelled, if the TV plays audio (such as audio B), the output volume of audio B played by the TV is obtained based on the output volume range updated on the TV side and the corresponding volume parameter.
• the output volume on the device A side is usually increased, that is, the output volume parameter on the device A side is increased, so that the input volume of device B is increased, thereby increasing the output volume on the device B side.
• after device A and device B are disconnected (the collaboration between the devices may also be canceled, which is not limited in this application), device A continues to play audio A. Since the output volume parameter on the device A side has been increased, when device A plays audio A, it will continue to use the increased output volume parameter to obtain the output volume of audio A, which may cause a popping sound when device A plays audio A.
• in the embodiments of the present application, when device A and device B obtain the output volume, they obtain it based on their respective output volume ranges, output volume parameters, and obtained volume parameters. That is to say, whether the volume output by device A to device B is larger or smaller, device B can obtain an appropriate volume parameter to adjust the output volume of audio A on the device B side to within the output volume range of the device B side. In other words, the user does not need to adjust the output volume parameters of device A or device B; through the volume parameter, device B can adjust the output volume of audio A on device B to within the output volume range to meet the user's listening habits.
• in addition, the output volume parameters and volume parameters of device A and device B are independent of each other and do not affect each other. Therefore, after device A and device B are disconnected, when device A or device B plays other audio, device A or device B will re-obtain the corresponding volume parameter for the played audio to adjust the output volume of the new audio to within the output volume range of the respective device, without causing the problem of popping sound, effectively improving the user experience.
  • the user can adjust the output volume of device B through device A and/or device B.
  • the user can adjust the output volume of audio A on the device B side by adjusting the output volume parameter on the device A side (for the principle, please refer to the above and will not be repeated here).
  • device A can obtain the corresponding volume parameters based on the output volume range obtained after adjusting the volume, and further obtain the output volume.
  • the corresponding volume parameters can still be obtained according to the saved output volume parameters and output volume range, and the output volume can be obtained.
  • the user can adjust the output volume of audio A on the device B side by adjusting the output volume parameter on the device B side.
  • the output volume parameter on the device B side can be adjusted.
• after device A is disconnected from device B, when device A is playing audio, its output volume parameters and output volume range are not affected (that is, the parameters in the dotted box are the same). Therefore, when device A plays audio, the output volume it plays is still within the output volume range, which can effectively avoid the popping problem that occurs after device switching.
  • FIGs 13a and 13b are exemplary schematic diagrams of the principles. Please refer to Figure 13a.
  • the scene includes devices such as mobile phones, headphones, tablets, and televisions. It should be noted that the number and type of devices in Figure 13a are only illustrative examples and are not limited in this application.
• a mobile phone, as a central device, can obtain the audio data sent by each slave device.
• the mobile phone can connect to a tablet and a TV through a wireless connection (such as a Wi-Fi connection, or other connection methods, which are not limited in this application).
  • the audio data of audio A sent by the tablet and the audio data of audio B sent by the TV are received.
  • the input volume corresponding to the audio data of audio A is data_in(A)
  • the input volume corresponding to the audio data of audio B is data_in(B).
• the mobile phone can mix the audio of the mobile phone (such as audio C), the audio of the TV (audio B), and the audio of the tablet (audio A) to obtain the mixed audio data.
  • the output volume corresponding to the mixed audio is data_out (X).
  • the mobile phone can output the audio data of the mixed audio to the headset to play the mixed audio through the headset, and the playback volume is data_out(X).
• the input volume of the audio sent by each device is the volume adjusted based on the volume control method described above.
  • the mobile phone also adjusts the output volume of the mixed audio based on the volume control method in this application, so that the output volume of the mixed audio is controlled within the output volume range of the mobile phone.
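• A minimal sketch of mixing at the central device; the simple sum-and-clip mixing and the application of a single volume parameter to the mixed audio are assumptions for illustration.

```java
// Illustrative sketch: the central device sums the (already volume-adjusted) frames
// from each device and applies its own volume control so that the mixed output
// volume stays within the central device's output volume range.
public final class Mixer {
    public static double[] mix(double[] phoneAudio, double[] tvAudio, double[] tabletAudio,
                               double volumeCoefficient) {
        int length = Math.min(phoneAudio.length, Math.min(tvAudio.length, tabletAudio.length));
        double[] mixed = new double[length];
        for (int i = 0; i < length; i++) {
            double sample = phoneAudio[i] + tvAudio[i] + tabletAudio[i];
            // Apply the volume parameter obtained for the mixed audio, then clip.
            sample *= volumeCoefficient;
            mixed[i] = Math.max(-1.0, Math.min(1.0, sample));
        }
        return mixed;
    }
}
```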
• the central device in the embodiments of the present application can obtain the mixed audio based on the relative position (including distance and/or angle) between each device and the earphones, so that the mixed audio can achieve a stereo effect on the earphone side.
• in this way, the user can hear the audio of each device in the network (including the mobile phone, tablet, and TV) through the earphones, and the sound effect of each audio is close to the auditory effect when the user does not use earphones; that is, the sound played in the earphones can achieve the spatial hearing effect of the distance and direction of the sound.
  • the central device is optionally used to connect and interact with slave devices, to issue instructions to each slave device, and to obtain audio data from the slave devices.
  • the central device is also used to connect and interact with the headset to obtain instructions from the headset and transmit audio data to the headset.
  • Slave devices are devices in the network other than the central device.
  • the networking can be Wi-Fi networking, Bluetooth networking, or a hybrid networking of Wi-Fi and Bluetooth.
  • the connection between the mobile phone and the TV can be a Wi-Fi connection.
  • the connection between the mobile phone and the tablet may be a Bluetooth connection, which is not limited in this application.
  • each device in the network in this embodiment of the present application has the same account. The specific determination method of the central device and the slave device will be described in detail in the following embodiments.
  • Figure 14 is a schematic diagram of an exemplary scene. Please refer to Figure 14.
  • a Wi-Fi network formed between a mobile phone, a TV, and a tablet is used as an example for explanation. That is, the wireless connections of each device in the network are maintained based on the Wi-Fi protocol. For example, after the TV and tablet in the user's home are turned on, they can be automatically discovered and connected (or manually connected, which will not be described here) to form a home network.
  • the home network may also include other devices, such as Bluetooth speakers and other smart home devices, which are not limited in this application.
• the mobile phone executes the Wi-Fi discovery process, and after discovering each device (including the TV and the tablet) in the Wi-Fi network, automatically connects to each device to join the Wi-Fi network.
  • the embodiments of this application only briefly describe the structure and establishment process of the network. For the specific connection process, reference can be made to existing technical embodiments, which are not limited in this application.
  • the earphones can automatically connect to the electronic device.
  • the headset automatically connects to the last connected device (such as a tablet) as an example for explanation.
  • the earphones can also select the nearest device for connection, which is not limited in this application.
  • the headset establishes a Bluetooth connection with the tablet.
  • the specific establishment process may refer to existing technical embodiments, and will not be described in detail in this application.
  • each device in the network can initiate a voting process to select the central device and slave devices.
  • Figure 15 is an exemplary voting schematic diagram. Please refer to Figure 15.
  • Each device in the network (including mobile phones, tablets, and TVs) sends (for example, broadcast messages) voting information to other devices in the network.
  • Ballot information includes device information, device capability information and location information. Among them, the device information includes but is not limited to: device model, device name, device address and other information.
  • the capability information of the device includes but is not limited to: the communication type supported by the device, whether it supports the mixing function, etc.
  • the location information is optionally distance information between the device and the headset.
  • the distance information can be measured through Bluetooth ranging, Ultra Wide Band (UWB), etc., which is not limited in this application.
  • the device may also be called a candidate device or an alternative device during the election phase, which is not limited by this application.
  • each device in the network can receive voting information sent by other devices. Take the mobile phone as an example.
  • the mobile phone sends voting information to the TV and tablet.
  • the voting information includes relevant information of the mobile phone.
  • the mobile phone will also receive the voting information sent by the TV and the voting information sent by the tablet.
  • the voting information sent by the TV includes the relevant information of the TV
  • the voting information sent by the tablet includes the relevant information of the tablet.
  • each device in the network can have preset voting rules, and the voting rules can be set according to actual needs.
  • the device closest to the headset can be selected based on the location information in each ballot, which is not limited in this application.
  • each device selects a mobile phone as the central device according to the preset voting rules.
  • the mobile phone selects the mobile phone itself as the central device according to the preset voting rules based on its own device information, location information, etc., as well as the received voting information from the TV and the voting information from the tablet.
  • the preset rules are the same as those of mobile phones, and the obtained voting information is also the same. Therefore, the central device selected by each device in the network is consistent.
  • mobile phones are all selected as the central device.
  • other candidate devices that are not the central device in the network serve as slave devices.
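  • The following is a minimal sketch, in Python, of a deterministic election rule of the kind described above; the ballot fields and the tie-breaking order are illustrative assumptions, the only requirement being that every device applies the same preset rule to the same set of ballots so that all devices select the same central device.

```python
from dataclasses import dataclass

@dataclass
class Ballot:
    device_id: str               # e.g. the device address from the device information
    supports_mixing: bool        # capability information
    distance_to_headset: float   # location information, in meters

def elect_central_device(ballots: list[Ballot]) -> str:
    """Every device runs this on the same ballot set, so the result is identical."""
    # Prefer devices that support mixing, then the one closest to the headset,
    # then break remaining ties by device_id so the choice is deterministic.
    candidates = [b for b in ballots if b.supports_mixing] or ballots
    winner = min(candidates, key=lambda b: (b.distance_to_headset, b.device_id))
    return winner.device_id
```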
  • after the central device is selected, the headset can be switched to the central device, that is, the headset disconnects from the tablet and establishes a Bluetooth connection with the mobile phone (i.e., the central device).
  • here, the case where the headset is initially connected to a non-central device (the tablet) is used as an example for explanation.
  • the device currently connected to the headset may or may not be a central device, which is not limited in this application.
  • a handshake process is periodically executed between the central device and each slave device; that is, at a trigger time in each cycle (for example, 5 s, which can be set according to actual needs and is not limited in this application), the central device and each slave device exchange detection information to detect whether the status of the central device is normal.
  • if a slave device determines that the status of the central device is abnormal, each device in the network re-executes the voting process, and the central device selected after re-voting is different from the previous central device.
  • the headphones will be connected to the new central device, and the new central device will continue to perform the steps described in the embodiments below, such as the mixing step.
  • each device can obtain relative position information with the headset.
  • the relative position information includes distance information and/or angle information from the earphone.
  • the relative position between the mobile phone and the earphone is obtained: distance A and angle A.
  • the relative position between the TV and the headphones is obtained: distance B and angle B.
  • the relative position between the tablet and the earphones is obtained: distance C, angle C.
  • each device can obtain angle information based on the Angle of Arrival (AOA) algorithm or the Angle of Departure (AOD) algorithm, UWB and other measurement methods, which are not limited in this application.
  • each slave device in the network (such as a TV and a tablet) sends the obtained relative position information to the central device.
  • each device can periodically acquire relative position information, and each slave device sends the relative position information acquired in each cycle to the central device.
  • FIG 17 is a schematic diagram of an exemplary user interface. Please refer to (1) of Figure 17.
  • the sound and vibration setting interface 1701 includes a mixing setting option box 1702. The user can click this option to start the mixing function of the mobile phone.
  • a flat panel or TV can also have a mixing function.
  • the mixing function entry in the TV or tablet can prompt that the central device is the mobile phone, to prompt the user to perform the operation on the mobile phone.
  • the central device can also synchronize relevant information to the slave devices, so that the user's operation can also be performed on a slave device; the slave device sends instructions generated in response to the received user operation to the central device, and the central device then issues the relevant control instructions within the network.
  • the mobile phone starts the mixing function in response to the received user operation.
  • the mobile phone can calculate the relative positions between all devices in the network and the headset based on the most recently obtained relative position information between the mobile phone and the headset, as well as the most recently obtained relative position information between the other slave devices and the headset.
  • the mobile phone may use the direction of the focus device that the user is operating as the direction directly in front of the user, or may use the direction in which the earphones are facing as the direction directly in front of the user, which is not limited in this application.
  • the mobile phone displays the obtained relative orientations between all devices and the earphones in the mixing setting option box 1702 .
  • (2) of Figure 17 is only a schematic example.
  • the distance and orientation between each device and the headset can be identified in the figure, and information such as icons of each device can also be displayed. This application is not limited.
  • the user can manually adjust the relative position between each device and the headset through the interface provided in (2) of Figure 17 .
  • the relative position displayed in the interface may be inaccurate due to measurement errors and other issues.
  • the user can adjust the relative position to the headset by dragging the corresponding device icon. Taking the tablet as an example, the user can drag the icon of the tablet to increase the angle between the tablet and the headset.
  • the mobile phone, in response to the received user operation, calculates the angle between the dragged tablet icon and the headset icon, and saves the new relative position information of the tablet, namely the distance information previously sent by the tablet together with the updated angle information.
  • the mobile phone can send the new relative position information of the tablet to the tablet.
  • the user can remove the device through the interface in (2) of Figure 17 .
  • the phone responds to the received user's long press operation and displays an option box.
  • the option box can include a delete option, and the user can click the delete option to delete the tablet.
  • the mobile phone determines to remove the tablet from the mixing scene, and the mobile phone cancels the display of the tablet icon in the mixing setting option box 1702. Moreover, the mobile phone will not receive the audio sent by the tablet during the subsequent mixing process.
  • the mobile phone can also send instruction information to the tablet to instruct the tablet to stop sending relative position, audio and other information, and the tablet stops sending audio and other information to the mobile phone.
  • the audio mix does not include the audio corresponding to the tablet. It should be noted that this removal solution only removes the tablet's audio from the mix, and the tablet is still in the network.
  • the user can re-enable the mix function to trigger each device to re-execute the relative position acquisition process described above.
  • the mobile phone (or another device such as the tablet) can send trigger information to each device in the network after receiving the user's click on the mixing setting option, to trigger each device to perform the voting process described above; after the voting process is completed, the headset is connected to the central device.
  • each device in the network performs the relative position acquisition process described above.
  • the mobile phone (or other devices such as tablets) can send trigger information to each slave device to trigger each slave device to obtain the relative position to the headset.
  • Each slave device feeds back the obtained relative position information to the mobile phone.
  • the mobile phone calculates the relative position of each device in the network based on the relative position information between itself and the headset and the received relative position information, and displays the result in the mixing setting option box; this can effectively reduce the calculation burden of each device and reduce data interaction.
  • this method is less real-time than the method described above, and it may take a few seconds before the relative position of each device is displayed in the display box.
  • the headset can continue to connect to the tablet.
  • after the mobile phone receives the user's click on the mixing option, the mobile phone establishes a connection with the headset.
  • the connection between the earphones and the tablet can be maintained or disconnected, which is not limited in this application.
  • after the central device determines the orientation of each device, it can perform the mixing process while the devices in the network are playing audio, thereby achieving the effect shown in Figure 13b.
  • the user plays games on the mobile phone (that is, the game audio is played on the mobile phone), the video is played on the tablet, and the music is played on the TV.
  • the user's human ears can hear the game audio from the mobile phone, the video audio from the tablet, and the music audio from the TV.
  • the earphones can send wearing instruction information to the mobile phone.
  • the mobile phone (i.e., the central device) sends mixing trigger instruction information to the slave devices (the tablet and the TV) to instruct each device to stop playing audio locally and output its audio data to the mobile phone, so that the mobile phone mixes the audio and outputs it to the headphones for playback, as shown in Figure 13a.
  • the mobile phone and each slave device can perform soft clock synchronization.
  • the soft clock synchronization optionally synchronizes the system time between the mobile phone and each slave device, to avoid audio synchronization problems caused by network delay. After soft clock synchronization is performed on the mobile phone, tablet, and TV, the system time of the devices is consistent. It should be noted that the embodiment of this application only takes synchronizing the system time by a soft clock as an example for explanation, and this application does not limit this. It should be further noted that the soft clock synchronization step can be executed at any time after the central device is elected and before the mobile phone performs mixing, which is not limited by this application.
  • the tablet plays audio A through its own audio device (such as a speaker).
  • its internal processing flow follows the volume control method described above, that is, the media manager in the tablet adjusts the output volume through the volume parameters; for specific details, please refer to the description above, which will not be repeated here.
  • the tablet then determines that the audio has changed, that is, the output device is switched from the tablet's own audio device to the mobile phone.
  • the video application in the tablet outputs the input audio data of audio A to the media manager, and the corresponding input volume is data_in(A).
  • the media manager will re-execute the volume parameter acquisition process after determining that the audio has changed. For example, the media manager may determine the volume parameter of audio A based on the input volume of audio A and the output volume range currently saved by the tablet. Next, the media manager obtains the output volume (data_out(A)) of audio A based on the input volume, volume parameters, and output volume parameters of audio A.
  • the media manager of the tablet adds soft clock information to the output audio data of audio A.
  • the soft clock information is the time information after soft clock synchronization as described above.
  • the specific method of adding soft clock information may refer to existing technical embodiments, and will not be described in detail in this application.
  • the media manager of the tablet outputs the output audio data of audio A after adding the soft clock information to the Wi-Fi driver.
  • the output volume corresponding to the output audio data of audio A is (data_out(A)).
  • the Wi-Fi driver transmits the output audio data of audio A (clock information has been added and will not be repeated below) to the mobile phone.
  • the Wi-Fi driver of the mobile phone receives the output audio data of audio A.
  • the mobile phone's Wi-Fi driver outputs the output audio data of audio A to the media manager.
  • the audio data of audio A received by the media manager is, from the media manager's point of view, input audio data; for consistency, it will still be referred to as the output audio data of audio A in the description below.
  • the mobile phone side plays the game audio through the speaker. After the mobile phone detects that the user is wearing headphones, it can be determined that the output device is switched to headphones, that is, the audio changes.
  • the mobile phone side also re-executes the output volume acquisition process described above for the audio that needs to be played on the mobile phone side. Specifically, please refer to Figure 18.
  • the game application outputs the input audio data of audio C to the media manager of the mobile phone, where the input volume corresponding to the input audio data of audio C is data_in(C).
  • the media manager of the mobile phone can obtain the output volume of audio C.
  • the media manager of the mobile phone can determine the volume parameters of audio C based on the input volume of audio C (data_in(C)) and the output volume range currently saved by the mobile phone.
  • the media manager obtains the output volume (data_out(C)) of audio C based on the input volume, volume parameters, and output volume parameters of audio C.
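  • As an illustrative sketch (assuming the simple multiplicative relation implied by the description, with hypothetical variable names), the output audio data may be obtained by scaling the input audio data by the volume parameter and the output volume parameter:

```python
import numpy as np

def apply_volume(data_in: np.ndarray, volume_param: float, output_volume_param: float) -> np.ndarray:
    """Scale input audio samples so their average volume falls within the output range."""
    return data_in * volume_param * output_volume_param

# e.g. data_out_C = apply_volume(data_in_C, volume_param_C, output_volume_param)
```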
  • the mobile phone can perform a mixing process based on the output audio data of audio C of the mobile phone, the output audio data of audio A sent by the received tablet, and the output audio data of audio B sent by the TV to obtain the audio of the mixed audio. data.
  • Figure 19 is a schematic diagram of an exemplary mixing process. Please refer to Figure 19, which specifically includes:
  • the mobile phone can align audio A, audio B, and audio C based on its own soft clock, as well as the received soft clock information in audio A and audio B, so that the audio starting points of audio A, audio B, and audio C are synchronized.
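  • A minimal sketch of such soft-clock-based alignment is shown below; the per-stream start timestamps and the zero-padding strategy are illustrative assumptions.

```python
import numpy as np

def align_streams(streams: list[tuple[float, np.ndarray]], sample_rate: int) -> list[np.ndarray]:
    """streams: (synchronized start time in seconds, samples). Pad the later streams
    with leading zeros so all streams share the earliest starting point."""
    earliest = min(t for t, _ in streams)
    aligned = []
    for start, samples in streams:
        offset = int(round((start - earliest) * sample_rate))
        aligned.append(np.concatenate([np.zeros(offset, dtype=samples.dtype), samples]))
    return aligned
```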
  • the media manager of the mobile phone can calculate the time difference, sound level difference, phase difference, and timbre difference between the audio streams based on the relative position information of each device (including the mobile phone, TV, and tablet), thereby realizing a stereo effect in the mix of audio from different devices.
  • the time difference may be the time difference between the sound reaching the user's two ears (it may also be played by two earphones).
  • when the time difference reaches about 0.6 ms, the user perceives that the sound comes entirely from one side. That is to say, by adjusting the time difference between the audio output by the two earphones, the user can be made to perceive that the audio source is shifted in a certain direction.
  • the sound level difference is optionally such that the sound level on the side close to the sound source is larger and the sound level on the other side is smaller.
  • the sound level difference between the audio heard by the user's two ears can reach about 25dB.
  • the sound level difference between the two earphones can be adjusted; for example, the sound level of the audio in one channel of the earphones can be increased while the sound level of the other channel remains unchanged or is reduced, thereby allowing the user to perceive that the audio source is shifted in a certain direction.
  • the phase difference may be the phase difference between the audio signals received by the two earphones. It should be noted that even if the sound level and time received by the two earphones are the same, adjusting the phase between the audio received by the two earphones can also make the user perceive that the audio source is shifted in a certain direction.
  • the timbre difference may be the difference in timbre (ie, frequency) between audio signals received by the two earphones.
  • the higher the frequency of the audio the greater the attenuation when it goes around the head and reaches the other ear.
  • the timbre of the audio received by the two earphones can be adjusted so that the user perceives that the source of the audio is shifted in a certain direction.
  • Exemplarily, the output audio data of audio A, the output audio data of audio B, and the output audio data of audio C obtained by the media manager of the mobile phone are shown in Figure 20, where each number represents 4 bits, and each sampling period is 16 bits, occupying two grid lengths in Figure 20. It should be noted that the audio data shown in Figure 20 is only a schematic example and is not limited in this application.
  • the media manager of the mobile phone can obtain the time difference of each audio in the left and right channels of the headset based on the orientation between each device and the headset (that is, the angle information in the relative position information), and adjust the time difference of the audio in the left and right channels so that the virtual sound source perceived by the user is shifted in a certain direction, approximating the relative position between the actual sound source (such as the tablet) and the headset.
  • the tablet is in front and right of the earphones, and the angle between the tablet and the earphones is angle C.
  • the audio output to the left channel can be delayed by 3 sampling periods. That is to say, the starting point of the audio of the right channel differs from the starting point of the audio of the left channel by 3 sampling periods.
  • the right channel of the headset plays audio A first, and after 3 sampling periods, the left channel plays audio A, thereby realizing the time difference between the left channel and the right channel of audio A in the headset.
  • since there is a time difference between the audio received by the left and right channels, the user auditorily perceives that the sound source of audio A, that is, the virtual sound source, is in front of the user's right, close to the actual position of the tablet relative to the earphones.
  • the time difference adjustment principle of the audio between the TV and the mobile phone can also be referred to Figure 21b, and the description will not be repeated below.
  • the TV is directly in front of the earphones, that is, the angle between it and the earphones (ie, angle B) is 90 degrees.
  • the audio output by audio B to the left channel of the earphone is consistent with the audio output to the right channel of the earphone, so that the virtual sound source is directly in front of the user's auditory perception.
  • the mobile phone is in the left front of the earphone, and the angle between the mobile phone and the earphone is angle A.
  • the phone's media manager can delay the audio output to the right channel by 3 sampling periods. That is to say, the starting point of the audio of the left channel differs from the starting point of the audio of the right channel by 3 sampling periods.
  • the left channel of the headset plays audio C first, and after 3 sampling periods, the right channel plays audio C, thereby realizing the time difference of audio C between the left and right channels of the headset. Since there is a time difference between the audio received by the left and right channels, the user auditorily perceives that the sound source of audio C, that is, the virtual sound source, is in front of the user's left.
  • the time difference is adjusted based on the orientation information.
  • the media manager adjusts the audio delay of the left and right channels to realize the offset of the direction of the virtual sound source.
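  • The delay-based time-difference adjustment described above can be sketched as follows; the helper name and the choice of delay in sampling periods are illustrative assumptions.

```python
import numpy as np

def apply_time_difference(mono: np.ndarray, delay_samples: int) -> tuple[np.ndarray, np.ndarray]:
    """Return (left, right). A positive delay_samples delays the left channel,
    shifting the perceived source to the right; a negative value delays the right channel."""
    pad = np.zeros(abs(delay_samples), dtype=mono.dtype)
    if delay_samples >= 0:
        left = np.concatenate([pad, mono])    # left channel starts later
        right = np.concatenate([mono, pad])
    else:
        left = np.concatenate([mono, pad])
        right = np.concatenate([pad, mono])   # right channel starts later
    return left, right

# e.g. audio A (tablet, front right of the user): delay the left channel by 3 sampling periods.
# left_A, right_A = apply_time_difference(audio_A, delay_samples=3)
```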
  • the relative position information corresponding to each device in the network may include distance information and/or angle information. If the relative position information includes both distance information and angle information, the media manager of the central device (i.e., the mobile phone) can further adjust each audio based on the distance information. For example, still taking the scenario shown in Figure 16 as an example, the media manager of the mobile phone can obtain the distance attenuation value of audio C of the mobile phone based on the distance information between the mobile phone and the headset (i.e., distance A).
  • D is the distance value between the device and the headset
  • D_min is the minimum distance value among the distance values between each device and the headset.
  • that is, the media manager uses the distance value of the device closest to the headset as a benchmark to calculate the volume attenuation values of the other devices. This calculation method is only an illustrative example and is not limited by this application.
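  • The exact attenuation formula is not reproduced in this excerpt; the sketch below only illustrates the stated idea that the device closest to the headset (distance D_min) serves as the zero-attenuation benchmark and farther devices are attenuated more. The logarithmic distance law used here is an assumption, not the formula of this application.

```python
import math
import numpy as np

def distance_attenuation_db(D: float, D_min: float) -> float:
    """Attenuation in dB relative to the closest device (assumed inverse-distance law)."""
    return 20.0 * math.log10(D / D_min) if D > D_min else 0.0

def apply_attenuation(samples: np.ndarray, attenuation_db: float) -> np.ndarray:
    gain = 10.0 ** (-attenuation_db / 20.0)   # convert dB attenuation to a linear gain
    return samples * gain
```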
  • the media manager may add the attenuation value corresponding to the distance information to the audio data of each audio before performing the steps shown in Figure 21a, or after performing the steps shown in Figure 21a.
  • before performing the steps shown in Figure 21a, the media manager can add the distance attenuation value corresponding to audio A to the audio data of audio A shown in Figure 20, add the distance attenuation value corresponding to audio B to the audio data of audio B, and add the distance attenuation value corresponding to audio C to the audio data of audio C.
  • the distance attenuation value corresponding to audio B is optionally 0.
  • the media manager may continue to execute the process in Figure 21a based on the obtained results corresponding to each audio.
  • the media manager can add attenuation values to the audio data of the left and right channels of each audio respectively.
  • taking audio A as an example, the media manager can add the distance attenuation value corresponding to audio A to the audio data of the left channel to obtain the output audio data of the left channel, and add the distance attenuation value corresponding to audio A to the audio data of the right channel to obtain the output audio data of the right channel.
  • the media manager processes the audio data of the left and right channels of each audio in sequence, and based on the processed results, continues to execute the process in Figure 22.
  • S1903 performs linear mixing of the two-channel audio data from multiple devices.
  • the media manager of the mobile phone superimposes the output audio data of audio A, the output audio data of audio B, and the output audio data of audio C corresponding to the right channel to obtain the mixed audio of the right channel. Furthermore, the media manager superimposes the output audio data of audio A, the output audio data of audio B, and the output audio data of audio C corresponding to the left channel to obtain the mixed audio of the left channel. Optionally, in order to prevent the superimposed audio data from overflowing, the media manager can average the superimposed audio data of the left and right channels to obtain the output audio data of the mixed audio of the left and right channels.
  • the corresponding output volume is data_out(X1).
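  • A minimal sketch of the linear mixing and averaging step, assuming equal-length per-channel streams (names are illustrative):

```python
import numpy as np

def linear_mix(channel_streams: list[np.ndarray]) -> np.ndarray:
    """channel_streams: equal-length arrays for one channel (e.g. all left channels)."""
    stacked = np.stack(channel_streams).astype(np.float64)
    return stacked.mean(axis=0)   # averaging prevents the superimposed data from overflowing

# left_mix  = linear_mix([left_A, left_B, left_C])
# right_mix = linear_mix([right_A, right_B, right_C])
```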
  • the media manager can also adjust the timbre difference, phase difference, and/or sound level difference to achieve stereo sound effects. This application will not give examples one by one.
  • the method of delaying the sampling point to achieve the time difference in the embodiment of the present application is only an illustrative example.
  • the media manager can also obtain stereo sound effects based on the HRTF (Head-Related Transfer Function) algorithm, which is not limited in this application.
  • after the media manager of the mobile phone obtains the output audio data of the mixed audio of the left and right channels, the media manager can use the volume parameter to adjust the output volume of the mixed audio (data_out(X1)) to within the output volume range of the mobile phone.
  • the media manager can obtain the volume parameter of the audio of the left channel based on the output volume and output volume range of the audio of the left channel obtained in Figure 22.
  • the media manager multiplies the audio data of the left channel by the output volume parameter and the volume parameter (that is, as shown in formula (2) above) to obtain the output audio data of the left channel, with a corresponding output volume of data_out(X2), thereby adjusting the output volume of the left-channel audio to within the output volume range.
  • the volume parameter corresponding to the audio of the right channel is the same as that of the left channel, and the media manager can multiply the audio data of the right channel by the output volume parameter and the volume parameter to obtain the output audio data of the right channel, with a corresponding output volume of data_out(X2), thereby adjusting the output volume of the right-channel audio to within the output volume range. It should be noted that in other embodiments, the audio data of the left and right channels may be multiplied only by the volume parameter to adjust the output volume, which is not limited in this application.
  • the media manager outputs the obtained output audio data of the mixed audio (including the output audio data of the left channel and the right channel) to the Bluetooth driver.
  • the output volume of the mixed audio is data_out(X2).
  • the Bluetooth driver can output the output audio data of the mixed audio to the headphones through the Bluetooth connection.
  • the left channel of the headset plays the output audio data of the mixed audio corresponding to the left channel above (the audio data of the left channel shown in Figure 23), and the corresponding playback volume is data_out (X2).
  • the right channel of the headset plays the output audio data of the mixed audio corresponding to the right channel above (the audio data of the right channel shown in Figure 23), and the corresponding playback volume is data_out (X2).
  • when the mobile phone mixes audio based on the obtained relative position information, the mobile phone always performs calculations based on the obtained relative positions. That is to say, in this scenario, the position of each sound source represented by the stereo sound played by the headphones is optionally as shown in Figure 16, that is, the position of the sound source remains unchanged.
  • in other embodiments, the relative position to the earphones can be obtained periodically, and a slave device can periodically (the period can be set based on actual needs and is not limited in this application) send relative position information to the central device. After obtaining the relative position information, the mobile phone can mix the audio of each device based on the newly obtained relative position information.
  • in this way, the central device can adjust the mixing effect of the mixed audio based on the relative position information obtained in real time, to adjust the relative position between each virtual sound source and the earphones. For example, when the user wears the headphones and walks around the room, the mobile phone can adjust the attenuation value and time difference (or the timbre difference, etc., which is not limited in this application) of each audio corresponding to the left and right channels of the headphones based on the changes in the relative position between each device and the headphones, thereby realizing virtual sound source position transformation, obtaining a more realistic stereo effect, and improving the user experience.
  • a control method is also provided to support the audio changing scene in the mixing scene played by multiple devices.
  • audio changes include but are not limited to: switching modes, switching devices, and switching audio sources.
  • the switching mode is optionally switching between a multi-device mixing mode and a single-device mode.
  • the switching device is optionally a switching of the audio source device in single device mode. Switching the sound source may optionally be switching the sound source played by at least one device in the multi-device mixing mode or the sound source device in the single device mode.
  • Figure 24 is a schematic flowchart of a control method in an exemplary switching mode scenario. Please refer to Figure 24. It specifically includes:
  • the headset sends switching mode instruction information to the mobile phone.
  • the earphones in the embodiments of the present application can provide control schemes corresponding to the various switching functions described above.
  • the user can pinch the earphones to indicate the switching mode.
  • the user operation described in the embodiment of the present application can also be the user's voice input.
  • the user speaks a specified voice instruction to the earphone pickup device (such as a microphone).
  • the earphone can detect the user's voice instruction and output it to the mobile phone, and the mobile phone can recognize the voice command.
  • each user operation in the embodiment of this application is only a schematic example, and this application does not limit it, and the description will not be repeated below.
  • the headset can send switching mode instruction information to the central device (i.e., the mobile phone).
  • the current mode of the network is the mixing mode, that is, the mode shown in Figure 13a and Figure 13b, as an example.
  • the mobile phone receives the switching mode indication information and can determine to change the current mode, that is, to switch the mixing mode to the single-device mode.
  • the mobile phone receives the switching mode indication information and can determine to switch the current mode, that is, the single-device mode to the mixing mode.
  • the specific solution will be described in steps S2404 to S2407.
  • the user operations and gestures described in this application are only illustrative examples.
  • the user can tap the earphone to indicate switching modes, which is not limited by this application.
  • S2402a The mobile phone sends pause playback instruction information to the TV.
  • S2402b The mobile phone sends pause playback instruction information to the tablet.
  • the mobile phone can send pause playback instruction information to the TV and the tablet respectively, to instruct the TV and the tablet to pause audio playback.
  • the TV and tablet stop transmitting audio data to the central device (i.e., the mobile phone), and the TV and tablet will not play audio on their own devices.
  • the mobile phone outputs audio C to the headset.
  • the mobile phone outputs the output audio data of audio C to the earphone.
  • the mobile phone can still make audio adjustments to the audio C of the mobile phone, such as the time difference adjustment of the audio of the left and right channels mentioned above, in order to simulate the real sound source direction.
  • the media manager of the mobile phone can adjust the output volume of audio C based on the volume parameter corresponding to audio C (the volume parameters can be obtained as described above and will not be described again here).
  • the media manager then adjusts the time difference of audio C in the left and right channels (it may also be a difference in timbre, etc., which is not limited in this application) and the output volume attenuation, etc. based on the relative position information of the mobile phone and the earphones.
  • in other embodiments, when the mobile phone is in single-device mode, it does not need to adjust the audio source direction; that is, it can directly output the audio data according to the process in Figure 9a, so that the audio data of the left and right channels played by the headphones and their corresponding output volumes are the same, which is not limited in this application.
  • the audio source device in the switched single-device mode is the central device by default.
  • the user can also control the switching of audio source devices in single device mode through headphones or a central device (i.e., mobile phone). The specific implementation will be explained in Figure 25.
  • the headset outputs switching mode instruction information to the mobile phone.
  • the user can pinch the earphones again (it may also be other operations, which are not limited in this application) to indicate switching modes.
  • the headset sends switching mode instruction information to the central device (i.e., the mobile phone).
  • the mobile phone receives the switching mode instruction information and determines to switch the current mode, that is, the single device mode to the mixing mode.
  • the user controls through the earphone as an example. In other embodiments, the user can also control on the central device, which is not limited in this application.
  • the mobile phone sends continue play instruction information to the TV.
  • the mobile phone sends continue playback instruction information to the tablet.
  • the mobile phone determines to switch the single device mode to the mixing mode, it sends continue playback instruction information to the TV and tablet respectively to instruct the TV and tablet to continue transmitting the corresponding audio to the mobile phone.
  • the TV outputs audio B to the mobile phone.
  • the tablet outputs audio A to the mobile phone.
  • the TV performs resumed transmission, that is, the TV continues to send to the mobile phone the audio from the point at which playback was paused.
  • the mobile phone outputs mixed audio to the headset.
  • the mobile phone performs the mixing process described above based on the audio data of audio C corresponding to the mobile phone, the received audio data of audio B from the TV, and the received audio data of audio A from the tablet; for details of the mixing process, please refer to the description above, which will not be repeated here.
  • Figure 25 is a schematic flowchart of a control method in an exemplary switching mode scenario. Please refer to Figure 25, which specifically includes:
  • the headset sends switching device instruction information to the mobile phone.
  • the headset responds to a received user operation (for example, tapping three times, etc., which is not limited in this application), and sends switching device instruction information to the mobile phone to instruct the switching of the audio source device.
  • the process in Figure 25 is implemented in single device mode. That is to say, the networking mode needs to be switched to single device mode before the process in Figure 25 can be implemented.
  • the optional default audio source device is the central device (i.e., the mobile phone). The user can instruct the central device to switch the current audio source device to a specified device, such as the TV.
  • S2502 The mobile phone sends continue play instruction information to the TV.
  • the mobile phone determines to switch the audio source device to the television. It should be noted that if the user controls through the headset, the mobile phone can respond to the received switching device instruction information and switch the audio source devices in sequence. The order may be based on the distance from the earphones, or may be based on other rule settings, which is not limited in this application. For example, after receiving the switching device instruction message, the mobile phone can switch the audio source device to the TV in sequence. If the mobile phone receives the device switching instruction message again, it can switch the audio source device to the tablet according to the sequence. Of course, the user can also control the audio source device to switch to a specified device on the mobile phone. This application does not limit this, and the description will not be repeated below.
  • the mobile phone will stop transmitting the mobile phone's audio to the headset in response to the received switching device instruction.
  • the tablet is still in the paused playback state.
  • both the TV and tablet will pause audio playback.
  • the mobile phone can send a continue play instruction to the TV to instruct the TV to play audio.
  • the TV outputs audio B to the mobile phone.
  • the TV continues to send audio B to the mobile phone in response to the received continue playing instruction information.
  • the audio B output by the TV can be the audio from the moment at which playback was paused, that is, playback resumes from the breakpoint; it may also be re-outputting audio B from the beginning, which is not limited in this application.
  • the mobile phone outputs audio B to the headset.
  • the mobile phone receives the audio data of audio B sent by the TV, processes the audio data, and outputs the output audio data corresponding to audio B to the earphones.
  • the mobile phone can perform processing based on the cross-device transmission scheme in Figure 17.
  • the mobile phone can process audio B based on the mixing scheme. The processing method is similar to the description in S2403 and will not be described again here.
  • S2505 The headset sends switching device instruction information to the mobile phone.
  • the user can control the earphones multiple times to sequentially switch audio source devices in single device mode.
  • the headset sends switching device instruction information to the mobile phone in response to the received user operation.
  • S2506 The mobile phone sends pause playback instruction information to the TV.
  • S2507 The mobile phone sends continue playback instruction information to the tablet.
  • the tablet outputs audio A to the mobile phone.
  • the mobile phone determines that the audio source device needs to be switched from the TV to the tablet.
  • the mobile phone sends a pause playback instruction message to the TV to instruct the TV to pause audio playback.
  • the mobile phone sends continue playback instruction information to the tablet to instruct the tablet to continue playing audio.
  • the TV pauses audio playback, that is, it no longer transmits audio data to the mobile phone.
  • the tablet sends the paused audio data to the mobile phone.
  • the mobile phone outputs audio A to the headset.
  • Figure 26 is a schematic flowchart of a control method in an exemplary switching mode scenario. Please refer to Figure 26, which specifically includes:
  • the headset sends switching audio source instruction information to the mobile phone.
  • the headset receives a user operation (the user operation can be set according to actual needs, and is not limited in this application), and the user operation is used to instruct switching of sound sources.
  • the headset responds to the received user operation and sends audio source switching instruction information to the mobile phone.
  • S2602 The mobile phone sends audio source switching instruction information to the TV.
  • the mobile phone in response to the received audio source switching instruction information, sends the audio source switching instruction information to the TV.
  • the TV outputs audio D to the mobile phone.
  • the TV switches the output audio source, for example, switches audio B to audio D, and outputs the audio data corresponding to audio D to the mobile phone.
  • after the TV detects the audio change (i.e., audio source switching), the TV side re-executes the volume parameter acquisition process and adjusts the output volume of audio D; specific details can be found above and will not be repeated here.
  • the tablet outputs audio A to the mobile phone.
  • the TV switches the audio source, but the tablet does not receive the instruction to switch the audio source. Accordingly, the tablet continues to output audio data corresponding to audio A to the mobile phone.
  • the mobile phone outputs mixed audio to the headset.
  • the mobile phone will perform the mixing process described above. It should be noted that for mobile phones, it also detects audio source switching, that is, the audio source input by the TV is switched. Correspondingly, when mixing on the mobile phone, the volume parameter acquisition process also needs to be re-executed. After the mobile phone obtains the audio data of the mixed audio, it outputs the audio data to the headset, and the headset plays the audio data of the mixed audio.
  • audio source switching can also be implemented.
  • the principle is similar to that in Figure 26.
  • the central device also sends the audio source switching instruction information to the current audio source device. It can be understood that, in the embodiment of this application, the control information in the network is all delivered by the central device to each slave device; for specific implementation, please refer to the description above, which will not be repeated here.
  • the mobile phone in response to the received audio source switching instruction information, can send the audio source switching instruction information to each device in the network.
  • the specific implementation of switching audio sources between slave devices and mobile phones in the network is similar to that in Figure 26, and will not be described again here.
  • the audio change scenarios in the embodiments of the present application include audio source switching, output device switching, and audio source device switching (including audio source device switching in a multi-device collaboration scenario and audio source device switching in a mixing scenario), etc.
  • during an audio change, the audio played by the device may have a transition gap of several seconds, causing the audio to be incoherent and affecting the user's audio-visual experience.
  • the embodiment of the present application also provides an audio change transition solution, which can make the transition smooth after the audio change and avoid the breakpoint problem caused by the audio change.
  • the mobile phone is still used as an audio output device as an example.
  • the mobile phone can use the Hanning window to implement the fade-in and fade-out of the audio switching.
  • the audio played by the mobile phone before switching the audio source is audio A
  • the audio played after the switching is audio B
  • the media manager takes the audio data of a preset duration (e.g., 3 s) before the switching time point of audio A (i.e., the fade-out part shown in the figure), and takes the audio data of the preset duration at the beginning of audio B (i.e., the fade-in part shown in the figure).
  • the media manager sets the Hanning window, and the length of the Hanning window is the preset duration, such as 3s.
  • the Hanning window may include a first sub-window (ie, the first half) and a second sub-window (ie, the second half).
  • the length of the first sub-window is the same as the length of the second sub-window.
  • the media manager processes the switched audio, that is, the audio data of the preset duration of audio B based on the first sub-window, so that the output volume of the audio data of the fade-in part gradually increases.
  • the media manager processes the audio before switching, that is, the audio data of the preset duration of audio A based on the second sub-window, so that the output volume of the audio data of the fade-out part gradually decreases.
  • the media manager can multiply the audio data corresponding to the fade-in part of audio B by the first sub-window of the Hanning window (i.e., the first half of the window function) to obtain the audio data of the fade-in part (referred to as the fade-in audio data). And the media manager multiplies the audio data corresponding to the fade-out part of audio A by the second sub-window of the Hanning window (i.e., the second half of the window function) to obtain the audio data of the fade-out part (referred to as the fade-out audio data).
  • the media manager can overlay the obtained fade-in audio data and fade-out audio data to obtain the audio data played by the audio change.
  • the media manager obtains the fade-out audio data of audio A and obtains the fade-in audio data of audio B in the manner described above.
  • the media manager can superimpose the fade-in audio data and the fade-out audio data to obtain the fade-in and fade-out audio data, thereby achieving a smooth transition of the audio while maintaining the original audio length.
  • the audio data transmitted by the media manager to the audio driver is the superimposed audio shown in Figure 27.
  • after the switch, the audio that the user hears is the fade-in and fade-out part, in which the audio data of audio A gradually decreases and the audio data of audio B gradually increases; after the fade-in and fade-out part is played, the audio data of audio B continues to be played.
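  • A minimal sketch of this fade-in/fade-out transition is shown below, assuming both clips share one sample rate and that each Hanning sub-window spans the preset duration; the function and variable names are illustrative.

```python
import numpy as np

def crossfade(tail_of_A: np.ndarray, head_of_B: np.ndarray, sample_rate: int, seconds: float = 3.0) -> np.ndarray:
    """Superimpose the faded-out tail of audio A and the faded-in head of audio B."""
    n = int(seconds * sample_rate)
    window = np.hanning(2 * n)     # symmetric Hanning window
    fade_in = window[:n]           # first sub-window: rises from 0 toward 1
    fade_out = window[n:]          # second sub-window: falls from 1 toward 0
    faded_out = tail_of_A[-n:] * fade_out
    faded_in = head_of_B[:n] * fade_in
    return faded_out + faded_in    # the transition segment played at the switch point
```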
  • the electronic device includes corresponding hardware and/or software modules that perform each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software.
  • FIG. 28 shows a schematic block diagram of a device 2800 according to an embodiment of the present application.
  • the device 2800 may include: a processor 2801 and a transceiver/transceiver pin 2802, and optionally, a memory 2803.
  • the bus 2804 includes a power bus, a control bus, and a status signal bus in addition to a data bus; however, for the sake of clarity, the various buses are all referred to as bus 2804 in the figure.
  • the memory 2803 may be used to store instructions in the foregoing method embodiments.
  • the processor 2801 can be used to execute instructions in the memory 2803, and control the receiving pin to receive signals, and control the transmitting pin to send signals.
  • the device 2800 may be the electronic device or a chip of the electronic device in the above method embodiment.
  • This embodiment also provides a computer storage medium that stores computer instructions.
  • when the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the method in the above embodiments.
  • This embodiment also provides a computer program product.
  • when the computer program product is run on a computer, the computer is caused to perform the above related steps to implement the method in the above embodiments.
  • embodiments of the present application also provide a device.
  • This device may be a chip, a component or a module.
  • the device may include a connected processor and a memory.
  • the memory is used to store computer execution instructions.
  • the processor can execute computer execution instructions stored in the memory, so that the chip executes the methods in each of the above method embodiments.
  • the electronic equipment, computer storage media, computer program products, or chips provided in this embodiment are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Provided in the embodiments of the present application are a volume control method and an electronic device. The method comprises: an electronic device acquiring first audio data to be played, wherein the average volume of the first audio data within a preset duration is a first volume; when the first volume does not meet a first output volume range, the electronic device acquiring a volume parameter corresponding to the first audio data, and correcting the first audio data on the basis of the volume parameter, so as to obtain second audio data, the volume of which is within the first output volume range; and the electronic device playing the second audio data. Therefore, when the electronic device plays audio data, automatic volume adjustment is realized, such that the volume of the audio data is adjusted to be within an output volume range, thereby meeting the auditory experience requirement of the user.

Description

Volume control method and electronic device
This application claims priority to the Chinese patent application filed with the State Intellectual Property Office of China on March 28, 2022, with application number 202210310062.7 and application title "Volume Control Method and Electronic Device", the entire content of which is incorporated into this application by reference.
Technical field
Embodiments of the present application relate to the field of terminal equipment, and in particular, to a volume control method and electronic equipment.
Background
With the development of terminal device technology, the application scenarios of terminal media services are becoming more and more extensive. Users can use terminal devices to listen to music, watch videos, etc. However, when a user uses a terminal device to play audio (which can be music or the audio in a video), since the input volume of different audio may differ, the output volume of the audio may also differ, and the user needs to manually adjust the volume so that the output volume of the audio is within an appropriate volume range, which affects the user experience.
Summary of the invention
Embodiments of the present application provide a volume control method and an electronic device. In this method, the electronic device can automatically adjust the volume of the audio data so that the volume of the audio data is kept within an appropriate volume range, effectively improving the user's listening experience.
In a first aspect, embodiments of the present application provide a volume control method. The method includes: an electronic device obtains first audio data. Then, when the electronic device detects that a first volume of the first audio data does not meet a first output volume range, the electronic device obtains a first volume parameter corresponding to the first audio data based on the first volume and the first output volume range. The first volume is the average volume of the audio data within a preset duration of the first audio data, and the first output volume range is acquired in advance. The electronic device corrects the first audio data based on the first volume parameter to obtain second audio data. The average volume of the second audio data is a second volume, and the second volume is within the first output volume range. Then, the electronic device plays the second audio data. In this way, the electronic device can correct the audio data based on the volume parameter so that the volume of the audio data is adjusted to within the output volume range, thereby avoiding the problem of the volume being too high or too low when different audio data are played on the electronic device, effectively improving the user's listening experience.
For example, the audio data may be music or the audio corresponding to a video.
For example, the volume not meeting the output volume range is optionally that the volume is greater than the maximum value of the output volume range, or that the volume is less than the minimum value of the output volume range.
For example, the volume of the second audio data is different from that of the first audio data.
For example, the preset duration can be set according to actual needs, which is not limited in this application.
For example, the electronic device plays the second audio data, and the playing volume is within the output volume range.
According to the first aspect, the method further includes: when the electronic device plays the second audio data, receiving an adjustment operation, where the adjustment operation is used to adjust the volume of the second audio data. During the period from the start to the end of the adjustment operation, the electronic device collects the volume of the second audio data according to a first cycle duration. The electronic device obtains a second output volume range based on the collected volume of the second audio data. In this way, the electronic device can densely collect the volume of the audio data while the user adjusts the volume, so as to update the output volume range according to the collected volume. That is to say, during the process of playing the audio data, the electronic device can update the output volume range by detecting the user's behavior, so that the output volume range always meets the user's listening habits.
根据第一方面,或者以上第一方面的任意一种实现方式,电子设备基于采集到的第二音频数据的音量,得到第二输出音量范围,包括:获取调节操作开始至结束过程中采集到的第二音频数据的音量的平均音量;在调节操作用于指示调大第二音频数据的音量的情况下,若采集到的第二音频数据的音量的平均音量大于第一输出音量范围的最小值,第二输出音量范围的最小值为采集到的第二音频数据的音量的平均音量,第二输出音量范围的最大值为第一输出音量范围的最大值;若采集到的第二音频数据的音量的平均音量小于第一输出音量范围的最小值,第二输出音量范围等于第一输出音量范围;或者,在调节操作用于指示调小第二音频数据的音量的情况下,若采集到的第二音频数据的音量的平均音量小于第一输出音量范围的最大值,第二输出音量范围的最大值为采集到的第二音频数据的音量的平均音量,第二输出音量范围的最小值为第一输出音量范围的最小值;若采集到的第二音频数据的音量的平均音量大于第一输出音量范围的最大值,第二输出音量范围等于第一输出音量范围。这样,电子设备可基于不同的调节场景,动态更新输出音量范围,从而使得输出音量范围始终满足用户需求,即满足用户的听觉习惯。According to the first aspect, or any implementation of the above first aspect, the electronic device obtains the second output volume range based on the collected volume of the second audio data, including: obtaining the second output volume range collected from the beginning to the end of the adjustment operation. The average volume of the volume of the second audio data; when the adjustment operation is used to indicate increasing the volume of the second audio data, if the average volume of the collected second audio data is greater than the minimum value of the first output volume range , the minimum value of the second output volume range is the average volume of the collected second audio data, and the maximum value of the second output volume range is the maximum value of the first output volume range; if the collected second audio data The average volume of the volume is less than the minimum value of the first output volume range, and the second output volume range is equal to the first output volume range; or, in the case where the adjustment operation is used to indicate turning down the volume of the second audio data, if the collected The average volume of the second audio data is less than the maximum value of the first output volume range. The maximum value of the second output volume range is the average volume of the collected second audio data. The minimum value of the second output volume range is The minimum value of the first output volume range; if the average volume of the collected second audio data is greater than the maximum value of the first output volume range, the second output volume range is equal to the first output volume range. In this way, the electronic device can dynamically update the output volume range based on different adjustment scenarios, so that the output volume range always meets the user's needs, that is, the user's listening habits.
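One way to express this update rule in code is sketched below; the tuple representation of the range and the "up"/"down" direction flag are assumptions made for readability.

```python
def update_range_after_adjustment(avg_adjusted_volume, direction, first_range):
    """Return the second output volume range from the average volume sampled while
    the user adjusted playback (sketch; names and types are assumed)."""
    lo, hi = first_range
    if direction == "up":
        # User turned the volume up: raise the range floor if the adjusted level is above it.
        return (avg_adjusted_volume, hi) if avg_adjusted_volume > lo else (lo, hi)
    # User turned the volume down: lower the range ceiling if the adjusted level is below it.
    return (lo, avg_adjusted_volume) if avg_adjusted_volume < hi else (lo, hi)
```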
According to the first aspect, or any implementation of the first aspect above, the method further includes: while the electronic device is playing the second audio data, collecting the volume of the second audio data at a second period, where the second period is longer than the first period; and obtaining the second output volume range based on the collected volume of the second audio data. In this way, the electronic device can sample the volume sparsely and dynamically update the output volume range based on the sampled volume, while reducing the power consumption caused by volume sampling.
According to the first aspect, or any implementation of the first aspect above, if the collected volume of the second audio data is greater than the maximum value of the first output volume range, the minimum value of the second output volume range is the minimum value of the first output volume range and the maximum value of the second output volume range is the collected volume of the second audio data; or, if the collected volume of the second audio data is less than the minimum value of the first output volume range, the maximum value of the second output volume range is the maximum value of the first output volume range and the minimum value of the second output volume range is the collected volume of the second audio data; or, if the collected volume of the second audio data is greater than or equal to the minimum value and less than or equal to the maximum value of the first output volume range, the second output volume range equals the first output volume range. In this way, the electronic device can dynamically update the output volume range based on the collected volume.
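The sparse-sampling update above amounts to widening the range whenever a sample falls outside it; a small sketch, with the tuple range representation again assumed:

```python
def update_range_from_sample(sampled_volume, first_range):
    lo, hi = first_range
    if sampled_volume > hi:         # sample above the range: extend the maximum
        return (lo, sampled_volume)
    if sampled_volume < lo:         # sample below the range: extend the minimum
        return (sampled_volume, hi)
    return (lo, hi)                 # sample inside the range: keep it unchanged
```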
According to the first aspect, or any implementation of the first aspect above, the method further includes: the electronic device obtains third audio data, where the average volume of the audio data within the preset duration of the third audio data is a third volume; the electronic device detects that the third volume does not fall within the second output volume range, and obtains, based on the third volume and the second output volume range, a second volume parameter corresponding to the third audio data; the electronic device corrects the third audio data based on the second volume parameter to obtain fourth audio data, where the average volume of the fourth audio data is a fourth volume and the fourth volume is within the second output volume range; and the electronic device plays the fourth audio data. In this way, the electronic device can obtain the volume parameter corresponding to the audio based on the updated output volume range and correct the audio based on that parameter, so that the volume of the audio always stays within the volume range the user is accustomed to, satisfying the user's listening experience.
According to the first aspect, or any implementation of the first aspect above, detecting that the first volume does not fall within the first output volume range and obtaining, based on the first volume and the first output volume range, the first volume parameter corresponding to the first audio data includes: if the first volume is greater than the maximum value of the first output volume range, the electronic device obtains the first volume parameter based on the first volume and the maximum value of the first output volume range; or, if the first volume is less than the minimum value of the first output volume range, the electronic device obtains the first volume parameter based on the first volume and the minimum value of the first output volume range. In this way, the electronic device can obtain the audio's volume parameter based on the different relationships between the volume and the output volume range. For example, when the input volume is high, the output volume can be reduced through the volume parameter so that the output volume falls within the output volume range; when the input volume is low, the output volume is raised through the volume parameter so that the output volume falls within the output volume range.
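Read as a gain, the volume parameter could for instance be the ratio between the nearest range boundary and the measured volume; this ratio form is an assumption of the sketch, not a limitation of the embodiments.

```python
def first_volume_parameter(first_volume, out_min, out_max):
    if first_volume > out_max:                      # too loud: attenuate towards the range maximum
        return out_max / first_volume
    if first_volume < out_min:                      # too quiet: amplify towards the range minimum
        return out_min / max(first_volume, 1e-9)
    return 1.0                                      # already in range: unity gain
```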
According to the first aspect, or any implementation of the first aspect above, correcting the first audio data based on the first volume parameter to obtain the second audio data includes: the electronic device obtains the second audio data based on the first audio data, the first volume parameter and an output volume parameter; the output volume parameter includes at least one of the following: a track volume parameter, a stream volume parameter and a master volume; the track volume parameter indicates the set volume of the application playing the second audio data; the stream volume parameter indicates the set volume of the audio stream corresponding to the first audio data; and the master volume indicates the set volume of the electronic device. In this way, the electronic device can obtain, based on at least one of its own set volumes (that is, the output volume parameter), the output volume corresponding to the input volume of the audio data, and correct the audio's output volume based on the volume parameter so that the output volume stays within the output volume range.
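One plausible way the correction composes with the device's own volume settings is as a chain of gains, with the volume parameter applied last; the multiplicative model and the names below are assumptions of this sketch.

```python
def corrected_sample(sample, volume_parameter,
                     track_volume=1.0, stream_volume=1.0, master_volume=1.0):
    # Track, stream and master volumes scale the sample as usual; the volume
    # parameter then pulls the resulting output level back into the output range.
    return sample * track_volume * stream_volume * master_volume * volume_parameter
```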
According to the first aspect, or any implementation of the first aspect above, the method further includes: the electronic device obtains fifth audio data, where the average volume of the audio data within the preset duration of the fifth audio data is a fifth volume; the electronic device detects that the fifth volume does not fall within the first output volume range, and obtains, based on the fifth volume and the first output volume range, a third volume parameter corresponding to the fifth audio data; the electronic device corrects the fifth audio data based on the third volume parameter to obtain sixth audio data, where the average volume of the sixth audio data is a sixth volume and the sixth volume is within the first output volume range; the electronic device sends the sixth audio data to another electronic device, the two devices exchanging data over a wireless connection; the electronic device detects that the connection to the other electronic device is disconnected, and obtains the audio data yet to be played in the fifth audio data, where the average volume of the audio data within the preset duration of the audio data to be played is a seventh volume; the electronic device detects that the seventh volume does not fall within the first output volume range, and obtains, based on the seventh volume and the first output volume range, a fourth volume parameter corresponding to the audio data to be played; the electronic device corrects the audio data to be played based on the fourth volume parameter to obtain seventh audio data, where the average volume of the seventh audio data is an eighth volume and the eighth volume is within the first output volume range; and the electronic device plays the seventh audio data. In this way, in a scenario where multiple devices play audio data collaboratively, each electronic device can obtain the volume parameter corresponding to the audio data based on its own output volume range. When the electronic device stops cooperating with the other electronic devices, it can still obtain the volume parameter of the audio data based on its own output volume range and correct the audio data with that parameter, so that the volume of the audio data stays within the device's output volume range. This prevents the audio played by a single device from being too loud or too quiet after the device volume has been adjusted in a multi-device collaboration scenario and playback then switches from multiple devices to a single device.
According to the first aspect, or any implementation of the first aspect above, the method further includes: the electronic device obtains eighth audio data, where the average volume of the audio data within the preset duration of the eighth audio data is a ninth volume; the eighth audio data is different from the first audio data, and the ninth volume is different from the first volume; the electronic device detects that the ninth volume does not fall within the first output volume range, and obtains, based on the ninth volume and the first output volume range, a fifth volume parameter corresponding to the eighth audio data, the fifth volume parameter being different from the first volume parameter; the electronic device corrects the eighth audio data based on the fifth volume parameter to obtain ninth audio data, where the average volume of the ninth audio data is a tenth volume and the tenth volume is within the first output volume range; and the electronic device plays the ninth audio data. In this way, in an audio-source switching scenario, the electronic device can obtain the corresponding volume parameter for each source, so that when different audio data is played on the electronic device, the device can automatically adjust its volume through the volume parameter. The playback volume of the audio data thus stays within the volume range the user is accustomed to without any manual adjustment, effectively improving the user experience.
According to the first aspect, or any implementation of the first aspect above, obtaining the first audio data includes: the electronic device obtains the first audio data from a target application; or, the electronic device receives the first audio data sent by a second electronic device. In this way, the electronic device can automatically adjust the volume of audio data from applications on the device itself, and can also automatically adjust audio data sent by other electronic devices, so that the audio it plays stays within its own output volume range.
According to the first aspect, or any implementation of the first aspect above, playing the second audio data includes: the electronic device plays the second audio data through a speaker; or, the electronic device plays the second audio data through an earphone connected to the electronic device. In this way, the embodiments of this application can be applied both to local playback scenarios and to earphone playback scenarios.
In a second aspect, embodiments of the present application provide an electronic device. The electronic device includes: one or more processors, a memory, and one or more computer programs, where the one or more computer programs are stored in the memory, and when executed by the one or more processors, cause the electronic device to perform the following steps: obtaining first audio data; detecting that a first volume of the first audio data does not fall within a first output volume range, and obtaining, based on the first volume and the first output volume range, a first volume parameter corresponding to the first audio data, where the first volume is the average volume of the audio data within a preset duration of the first audio data and the first output volume range is obtained in advance; correcting the first audio data based on the first volume parameter to obtain second audio data, where the average volume of the second audio data is a second volume and the second volume is within the first output volume range; and playing the second audio data.
According to the second aspect, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: receiving an adjustment operation while playing the second audio data, where the adjustment operation is used to adjust the volume of the second audio data; from the start to the end of the adjustment operation, collecting the volume of the second audio data at a first period; and obtaining a second output volume range based on the collected volume of the second audio data.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining the average of the volumes of the second audio data collected from the start to the end of the adjustment operation; when the adjustment operation indicates increasing the volume of the second audio data, if the average of the collected volumes is greater than the minimum value of the first output volume range, the minimum value of the second output volume range is that average and the maximum value of the second output volume range is the maximum value of the first output volume range, and if the average of the collected volumes is less than the minimum value of the first output volume range, the second output volume range equals the first output volume range; or, when the adjustment operation indicates decreasing the volume of the second audio data, if the average of the collected volumes is less than the maximum value of the first output volume range, the maximum value of the second output volume range is that average and the minimum value of the second output volume range is the minimum value of the first output volume range, and if the average of the collected volumes is greater than the maximum value of the first output volume range, the second output volume range equals the first output volume range.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: while playing the second audio data, collecting the volume of the second audio data at a second period, where the second period is longer than the first period; and obtaining the second output volume range based on the collected volume of the second audio data.
According to the second aspect, or any implementation of the second aspect above, if the collected volume of the second audio data is greater than the maximum value of the first output volume range, the minimum value of the second output volume range is the minimum value of the first output volume range and the maximum value of the second output volume range is the collected volume of the second audio data; or, if the collected volume of the second audio data is less than the minimum value of the first output volume range, the maximum value of the second output volume range is the maximum value of the first output volume range and the minimum value of the second output volume range is the collected volume of the second audio data; or, if the collected volume of the second audio data is greater than or equal to the minimum value and less than or equal to the maximum value of the first output volume range, the second output volume range equals the first output volume range.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining third audio data, where the average volume of the audio data within the preset duration of the third audio data is a third volume; detecting that the third volume does not fall within the second output volume range, and obtaining, based on the third volume and the second output volume range, a second volume parameter corresponding to the third audio data; correcting the third audio data based on the second volume parameter to obtain fourth audio data, where the average volume of the fourth audio data is a fourth volume and the fourth volume is within the second output volume range; and playing the fourth audio data.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: if the first volume is greater than the maximum value of the first output volume range, obtaining the first volume parameter based on the first volume and the maximum value of the first output volume range; or, if the first volume is less than the minimum value of the first output volume range, obtaining the first volume parameter based on the first volume and the minimum value of the first output volume range.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining the second audio data based on the first audio data, the first volume parameter and an output volume parameter; the output volume parameter includes at least one of the following: a track volume parameter, a stream volume parameter and a master volume; the track volume parameter indicates the set volume of the application playing the second audio data; the stream volume parameter indicates the set volume of the audio stream corresponding to the first audio data; and the master volume indicates the set volume of the electronic device.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining fifth audio data, where the average volume of the audio data within the preset duration of the fifth audio data is a fifth volume; detecting that the fifth volume does not fall within the first output volume range, and obtaining, based on the fifth volume and the first output volume range, a third volume parameter corresponding to the fifth audio data; correcting the fifth audio data based on the third volume parameter to obtain sixth audio data, where the average volume of the sixth audio data is a sixth volume and the sixth volume is within the first output volume range; sending the sixth audio data to another electronic device, the electronic device and the other electronic device exchanging data over a wireless connection; detecting that the connection to the other electronic device is disconnected, and obtaining the audio data yet to be played in the fifth audio data, where the average volume of the audio data within the preset duration of the audio data to be played is a seventh volume; detecting that the seventh volume does not fall within the first output volume range, and obtaining, based on the seventh volume and the first output volume range, a fourth volume parameter corresponding to the audio data to be played; correcting the audio data to be played based on the fourth volume parameter to obtain seventh audio data, where the average volume of the seventh audio data is an eighth volume and the eighth volume is within the first output volume range; and playing the seventh audio data.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining eighth audio data, where the average volume of the audio data within the preset duration of the eighth audio data is a ninth volume, the eighth audio data is different from the first audio data, and the ninth volume is different from the first volume; detecting that the ninth volume does not fall within the first output volume range, and obtaining, based on the ninth volume and the first output volume range, a fifth volume parameter corresponding to the eighth audio data, the fifth volume parameter being different from the first volume parameter; correcting the eighth audio data based on the fifth volume parameter to obtain ninth audio data, where the average volume of the ninth audio data is a tenth volume and the tenth volume is within the first output volume range; and playing the ninth audio data.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining the first audio data from a target application; or, receiving the first audio data sent by a second electronic device.
According to the second aspect, or any implementation of the second aspect above, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps: playing the second audio data through a speaker; or, playing the second audio data through an earphone connected to the electronic device.
The second aspect and any implementation of the second aspect correspond to the first aspect and any implementation of the first aspect, respectively. For the technical effects corresponding to the second aspect and any of its implementations, refer to the technical effects of the first aspect and the corresponding implementation of the first aspect, which are not repeated here.
In a third aspect, embodiments of the present application provide an audio processing method. The method includes: a first electronic device obtains first sub-audio data and first orientation information, where the first orientation information indicates the relative position between the first electronic device and an earphone, and the first electronic device and the earphone exchange data over a wireless connection; the first electronic device receives second sub-audio data and second orientation information sent by a second electronic device, where the second orientation information indicates the relative position between the second electronic device and the earphone; the first electronic device mixes the first sub-audio data and the second sub-audio data based on the first orientation information and the second orientation information to obtain first audio data; and the first electronic device sends the first audio data to the earphone so that the first audio data is played through the earphone. In this way, the electronic device can mix the audio data of multiple electronic devices based on the orientation information to achieve stereo playback on the earphone, so that the user hears a stereo rendering of the multi-device audio when wearing the earphone.
For example, the wireless connection may be maintained based on the Bluetooth protocol or the Wi-Fi protocol.
According to the third aspect, mixing the first sub-audio data and the second sub-audio data based on the first orientation information and the second orientation information includes: the first electronic device obtains, based on the first orientation information, third sub-audio data of the first sub-audio data corresponding to a first channel of the earphone and fourth sub-audio data of the first sub-audio data corresponding to a second channel of the earphone, where the third sub-audio data and the fourth sub-audio data differ in phase, timbre, sound level and/or audio start position; the first electronic device obtains, based on the second orientation information, fifth sub-audio data of the second sub-audio data corresponding to the first channel of the earphone and sixth sub-audio data of the second sub-audio data corresponding to the second channel of the earphone, where the fifth sub-audio data and the sixth sub-audio data differ in phase, timbre, sound level and/or audio start position; the first electronic device obtains seventh sub-audio data based on the third sub-audio data and the fifth sub-audio data, and obtains eighth sub-audio data based on the fourth sub-audio data and the sixth sub-audio data; the first audio data includes the seventh sub-audio data and the eighth sub-audio data; and the first electronic device sends the seventh sub-audio data and the eighth sub-audio data to the earphone, the seventh sub-audio data being played through the first channel of the earphone and the eighth sub-audio data being played through the second channel of the earphone. In this way, the electronic device can use the orientation information to determine the phase difference, timbre difference, sound-level difference and time difference between the two channels of the earphone, achieving a two-channel stereo effect on the earphone.
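As an illustration of channel-wise mixing driven by orientation, the sketch below uses equal-power panning from each device's direction plus a simple distance attenuation; it is a stand-in for the phase, timbre, sound-level and start-position differences described above, whose exact derivation is not specified here, and all names and the panning model are assumptions.

```python
import math

def split_to_channels(samples, distance, azimuth_deg):
    """Derive first-channel / second-channel versions of one device's audio from its
    orientation relative to the headset (equal-power panning; an assumed model)."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)   # map -90..+90 degrees to 0..90
    attenuation = 1.0 / max(distance, 1.0)             # farther devices sound quieter
    left = [s * math.cos(theta) * attenuation for s in samples]
    right = [s * math.sin(theta) * attenuation for s in samples]
    return left, right

def mix_two_devices(first_sub, first_pos, second_sub, second_pos):
    """first_pos / second_pos are (distance, azimuth_deg) of each device relative to the headset."""
    a_l, a_r = split_to_channels(first_sub, *first_pos)    # third / fourth sub-audio data
    b_l, b_r = split_to_channels(second_sub, *second_pos)  # fifth / sixth sub-audio data
    n = min(len(a_l), len(b_l))
    left = [a_l[i] + b_l[i] for i in range(n)]              # seventh sub-audio data (first channel)
    right = [a_r[i] + b_r[i] for i in range(n)]             # eighth sub-audio data (second channel)
    return left, right
```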
For example, the audio start position is the playback time of the audio within a single channel of the earphone.
According to the third aspect, or any implementation of the third aspect above, the first orientation information includes distance information and direction information between the first electronic device and the earphone, and the second orientation information includes distance information and direction information between the second electronic device and the earphone. In this way, the electronic device can adjust the audio of each device within the mixed audio based on the distance and direction between each device and the earphone to achieve a stereo effect.
According to the third aspect, or any implementation of the third aspect above, the distance between the first electronic device and the earphone is smaller than the distance between the second electronic device and the earphone. For example, the first electronic device is the master device in the embodiments of this application, and the second electronic device is the slave device; the communication quality between the master device and the earphone is better than that between the slave device and the earphone.
According to the third aspect, or any implementation of the third aspect above, the method further includes: the first electronic device obtains third orientation information, where the third orientation information indicates the relative position between the first electronic device and the earphone and is different from the first orientation information; the first electronic device receives fourth orientation information sent by the second electronic device, where the fourth orientation information indicates the relative position between the second electronic device and the earphone and is different from the second orientation information; the first electronic device mixes the first sub-audio data and the second sub-audio data based on the third orientation information and the fourth orientation information to obtain second audio data; and the first electronic device sends the second audio data to the earphone so that the second audio data is played through the earphone. In this way, the electronic device can also adjust the mixing effect based on the real-time positions of the electronic devices relative to the earphone, achieving a stereo effect that is closer to reality.
According to the third aspect, or any implementation of the third aspect above, before the first electronic device sends the first audio data to the earphone, the method further includes: the electronic device detects that a first volume does not fall within a first output volume range, and obtains, based on the first volume and the first output volume range, a first volume parameter corresponding to the first audio data, where the first output volume range is obtained in advance and the first volume is the average volume of the audio data within a preset duration of the first audio data; the electronic device corrects the first audio data based on the first volume parameter to obtain corrected first audio data, where the average volume of the corrected first audio data is a second volume and the second volume is within the first output volume range. In this way, the electronic device can correct the mixed audio so that its output volume stays within the output volume range, automatically adjusting the volume of the audio data and effectively improving the user experience.
According to the third aspect, or any implementation of the third aspect above, the method further includes: the first electronic device receives a first operation, where the first operation instructs switching from the mixing mode of the first electronic device and the second electronic device to a single-device mode; in response to the first operation, the first electronic device sends first indication information to the second electronic device, where the first indication information instructs the second electronic device to stop sending the second sub-audio data; and the first electronic device sends the first sub-audio data to the earphone so that the first sub-audio data is played through the earphone. In this way, the electronic device can switch between the mixing mode and the single-device playback mode.
For example, in a single-device playback scenario, the electronic device can likewise correct the audio data to be played so that the volume of the audio data stays within the output volume range.
In a fourth aspect, embodiments of the present application provide an electronic device. The electronic device includes: one or more processors, a memory, and one or more computer programs, where the one or more computer programs are stored in the memory, and when executed by the one or more processors, cause the electronic device to execute instructions of the method in the third aspect or any possible implementation of the third aspect.
In a fifth aspect, the present application provides a computer-readable medium for storing a computer program, where the computer program includes instructions for executing the method in the first aspect or any possible implementation of the first aspect.
In a sixth aspect, the present application provides a computer-readable medium for storing a computer program, where the computer program includes instructions for executing the method in the third aspect or any possible implementation of the third aspect.
In a seventh aspect, the present application provides a computer program, where the computer program includes instructions for executing the method in the first aspect or any possible implementation of the first aspect.
In an eighth aspect, the present application provides a computer program, where the computer program includes instructions for executing the method in the third aspect or any possible implementation of the third aspect.
In a ninth aspect, the present application provides a chip, where the chip includes a processing circuit and transceiver pins. The transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the first aspect or any possible implementation of the first aspect to control the receive pin to receive signals and to control the transmit pin to send signals.
In a tenth aspect, the present application provides a chip, where the chip includes a processing circuit and transceiver pins. The transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the third aspect or any possible implementation of the third aspect to control the receive pin to receive signals and to control the transmit pin to send signals.
Description of drawings
Figure 1 shows a schematic diagram of the hardware structure of an electronic device;
Figure 2 shows a schematic diagram of the software structure of an electronic device;
Figure 3 is a schematic diagram of exemplary module interaction;
Figure 4 is a schematic diagram of an exemplary user interface;
Figure 5 is a schematic diagram of exemplary output volume adjustment;
Figure 6 is a schematic diagram of an exemplary user interface;
Figure 7 is a schematic diagram of an exemplary volume control method;
Figure 8 is a schematic diagram of an exemplary way of obtaining the output volume range;
Figures 9a-9b are schematic diagrams of module interaction of an exemplary volume control method;
Figure 9c is a schematic diagram of exemplary output volume adjustment;
Figure 10 shows an exemplary multi-device collaboration scenario;
Figures 11a-11b are schematic diagrams of module interaction of an exemplary volume control method;
Figures 12a-12b are schematic diagrams of exemplary output volume adjustment;
Figures 13a-13b are schematic diagrams of the principles of an exemplary audio mixing scenario;
Figure 14 is a schematic diagram of an exemplary application scenario;
Figure 15 is a schematic diagram of an exemplary election by voting;
Figure 16 is a schematic diagram of an exemplary application scenario;
Figure 17 is a schematic diagram of an exemplary user interface;
Figure 18 is a schematic diagram of module interaction of an exemplary audio mixing scenario;
Figure 19 is a schematic diagram of an exemplary audio mixing process;
Figure 20 is a schematic diagram of audio data processing in an exemplary audio mixing scenario;
Figure 21a is a schematic diagram of audio data processing in an exemplary audio mixing scenario;
Figure 21b is a schematic diagram of the effect of an exemplary audio mixing scenario;
Figure 22 is a schematic diagram of audio data processing in an exemplary audio mixing scenario;
Figure 23 is a schematic diagram of audio data processing in an exemplary audio mixing scenario;
Figure 24 is a schematic flowchart of a control method in an exemplary mode-switching scenario;
Figure 25 is a schematic flowchart of a control method in an exemplary mode-switching scenario;
Figure 26 is a schematic flowchart of a control method in an exemplary mode-switching scenario;
Figure 27 is a schematic diagram of exemplary fade-in and fade-out processing;
Figure 28 is a schematic structural diagram of an exemplary apparatus.
Detailed description of embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The term "and/or" in this document merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
The terms "first", "second" and the like in the description and claims of the embodiments of this application are used to distinguish different objects, not to describe a specific order of the objects. For example, a first target object and a second target object are used to distinguish different target objects, not to describe a specific order of the target objects.
In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of this application should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present a related concept in a concrete manner.
In the description of the embodiments of this application, unless otherwise specified, "multiple" means two or more. For example, multiple processing units means two or more processing units, and multiple systems means two or more systems.
Figure 1 shows a schematic structural diagram of the electronic device 100. It should be understood that the electronic device 100 shown in Figure 1 is only one example of an electronic device; the electronic device 100 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration. The components shown in Figure 1 may be implemented in hardware including one or more signal-processing and/or application-specific integrated circuits, in software, or in a combination of hardware and software. It should be noted that in the embodiments of this application, a mobile phone is used as the example of the electronic device. In other embodiments, the electronic device may also be a tablet, a speaker, a wearable device, a smart home device, or the like, which is not limited in this application.
The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated into one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals based on instruction operation codes and timing signals to control instruction fetching and instruction execution.
The processor 110 may also be provided with a memory for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110 and thus improves system efficiency.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130.
The power management module 141 is configured to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and so on.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信 的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。The mobile communication module 150 can provide wireless communication including 2G/3G/4G/5G applied to the electronic device 100. s solution. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。The wireless communication module 160 can provide applications on the electronic device 100 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (bluetooth, BT), and global navigation satellites. System (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc. The GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi) -zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The camera 193 is used to capture still images or videos. An object passes through the lens to generate an optical image, which is projected onto the photosensitive element.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, to save music, video, and other files in the external memory card.
The internal memory 121 can be used to store computer-executable program code, where the executable program code includes instructions. The processor 110 runs the instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store an operating system and an application required by at least one function (such as a sound playback function or an image playback function). The data storage area can store data created during use of the electronic device 100 (such as audio data and a phone book).
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
The speaker 170A, also called the "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or a hands-free call through the speaker 170A.
The receiver 170B, also called the "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or receives a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
The microphone 170C, also called the "mic", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C which, in addition to collecting sound signals, can also implement a noise reduction function. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and the like.
The headphone interface 170D is used to connect wired headphones. The headphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The buttons 190 include a power button, volume buttons, and the like. The buttons 190 may be mechanical buttons or touch buttons. The electronic device 100 can receive button inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of this application take an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
FIG. 2 is a block diagram of the software structure of the electronic device 100 according to an embodiment of this application.
The layered architecture of the electronic device 100 divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 2, the application packages may include applications such as camera, gallery, calendar, calls, maps, navigation, WLAN, Bluetooth, music, video, and messaging.
The application framework layer provides an application programming interface (API) and a programming framework for applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a media manager, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying a picture.
The phone manager is used to provide communication functions of the electronic device 100, for example, management of the call state (including connected, hung up, and the like).
The resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables an application to display notification information in the status bar. It can be used to convey informational messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, and the like. The notification manager may also present notifications in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of dialog windows, for example, text prompted in the status bar, a prompt tone, a vibration of the electronic device, or a blinking indicator light.
The media manager, which can also be called the media service, is used to manage audio data and image data, for example, to control the data flow of audio data and image data and to write audio streams and image streams into MP4 files. In the embodiments of this application, the media manager can be used to adjust the output volume of audio data, and to mix audio data in multi-device audio output scenarios.
The system libraries may include multiple functional modules, for example, a surface manager, media libraries, a 3D graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer includes at least a display driver, a Wi-Fi driver, a Bluetooth driver, a camera driver, an audio driver, and a sensor driver.
It can be understood that the components shown in FIG. 2 do not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
With reference to FIG. 2, FIG. 3 is a schematic diagram of an exemplary module interaction. Referring to FIG. 3, the following takes a mobile phone playing audio A of a video application as an example. Exemplarily, the video application outputs the audio data of audio A (which may also be called audio data A) to the media manager. In the embodiments of this application, the audio data that the media manager receives from other applications may be called input audio data (denoted as data_in in this application). It should be noted that the magnitude of the audio data, which can also be understood as the amplitude corresponding to the audio data, is the volume of the audio. Therefore, in the embodiments of this application, data_in can be used to represent the input audio data, and can also be used to represent the input volume of the audio.
The media manager can obtain the output volume of audio A (which may also be called the audio output volume) based on the input volume of audio A. It should be noted that the audio data that the media manager outputs to other modules or applications may be called output audio data, denoted as data_out in the embodiments of this application. Similar to the input audio data, the amplitude of the output audio data is the volume of the output audio. Therefore, in the embodiments of this application, data_out can be used to represent the output audio data, and can also be used to represent the output volume of the audio. It should be further noted that the solutions in the embodiments of this application mainly describe volume control; therefore, in the embodiments of this application, data_in is mainly used to represent the input volume of the audio, and data_out is mainly used to represent the output volume of the audio. This will not be repeated below.
Exemplarily, the media manager can obtain the output volume (data_out) of audio A based on formula (1):
data_out=data_in*master_volume*stream_volume*track_volume   (1)
Here, data_in is the input volume of audio A. For example, the volume corresponding to the audio data generated by the video application is the input volume.
track_volume (track volume) represents the volume of an application. For example, the volume adjustment option in a music application can be used to adjust the output volume of the music being played by that application; this volume does not affect the volume of other media and is effective only for the audio played by the music application.
stream_volume (stream volume) represents the volume of a certain stream. Exemplarily, taking the Android system as an example, the Android system includes 10 types of streams, including but not limited to media streams, call streams, and the like. For example, as shown in FIG. 4, the sound and vibration interface 401 in the settings of the mobile phone includes a volume option 402. The volume option 402 includes, but is not limited to, an "incoming calls, messages, notifications" option 4021, an "alarm" option 4022, a "music, videos, games" option 4023, a "calls" option 4024, and a "smart voice" option 4025. The "calls" option 4024 corresponds to the call stream and is used to adjust the output volume of calls. The "alarm" option 4022 corresponds to the alarm stream and is used to adjust the output volume of the alarm. Exemplarily, the "incoming calls, messages, notifications" option 4021 corresponds to the incoming call, message, and notification stream volume alias. A stream volume alias is used to set the volume of the same group of streams. For example, adjusting the slider of the "incoming calls, messages, notifications" option 4021 to set the volume of the "incoming calls, messages, notifications" stream volume alias can be understood as setting the volume of the incoming call stream (that is, the incoming call alert volume), the volume of the message stream (that is, the message alert volume), and the volume of the notification stream (that is, the notification alert volume). In other words, the incoming call alert volume, the message alert volume, and the notification alert volume are adjusted accordingly, but other streams, such as the alarm and call volumes, are not adjusted. As another example, the "music, videos, games" option 4023 corresponds to the "music, videos, games" stream volume alias, which can also be called the media stream volume alias. By adjusting the slider of the "music, videos, games" option 4023, the volume of each stream in the media, that is, the "music, videos, games" stream volume alias (including the music stream, the video stream, and the game stream), can be set.
master_volume (master volume) is used to set all of stream_volume and track_volume. This value can be written into the device file corresponding to the audio device (that is, the sound card file) to control the volume of all objects. Alternatively, this value may not be written into the sound card file, but used as a multiplication factor that affects the volume of all objects.
It can be seen from formula (1) that the factors affecting the output volume include, but are not limited to, at least one of the following: input volume, stream volume, track volume, and master volume. In the embodiments of this application, the factors other than the input volume (including stream volume, track volume, and master volume) can also be called output volume parameters, and the user can adjust any one of the output volume parameters to adjust the output volume.
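The grouping behavior of stream volume aliases described above can be illustrated with a short sketch. This is a minimal, hypothetical Java example, not the actual Android AudioManager API: the Stream and Alias types, the alias grouping, and the applyAliasVolume method are assumptions made purely for illustration.

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of stream volume aliases: adjusting one alias
// updates every stream that belongs to the same group, and no other stream.
public class StreamVolumeAliasDemo {
    enum Stream { RING, SMS, NOTIFICATION, ALARM, CALL, MUSIC, VIDEO, GAME }
    enum Alias { RING_SMS_NOTIFICATION, ALARM, CALL, MEDIA }

    // Assumed grouping, mirroring options 4021-4024 in FIG. 4.
    static final Map<Alias, List<Stream>> GROUPS = new EnumMap<>(Alias.class);
    static {
        GROUPS.put(Alias.RING_SMS_NOTIFICATION, List.of(Stream.RING, Stream.SMS, Stream.NOTIFICATION));
        GROUPS.put(Alias.ALARM, List.of(Stream.ALARM));
        GROUPS.put(Alias.CALL, List.of(Stream.CALL));
        GROUPS.put(Alias.MEDIA, List.of(Stream.MUSIC, Stream.VIDEO, Stream.GAME));
    }

    static final Map<Stream, Double> streamVolume = new EnumMap<>(Stream.class);

    // Setting an alias volume writes the same value to each member stream only.
    static void applyAliasVolume(Alias alias, double volume) {
        for (Stream s : GROUPS.get(alias)) {
            streamVolume.put(s, volume);
        }
    }

    public static void main(String[] args) {
        applyAliasVolume(Alias.MEDIA, 0.8);  // music, video, and game streams change
        applyAliasVolume(Alias.ALARM, 0.3);  // the alarm stream is unaffected by the media alias
        System.out.println(streamVolume);
    }
}
```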
Still referring to FIG. 3, exemplarily, the media manager outputs the data_out of audio A (that is, the output audio data of audio A) to the audio driver, and the audio driver can play audio A through a playback device (for example, a speaker), where the playback volume is the volume corresponding to data_out, which can also be understood as the amplitude corresponding to the audio data of audio A. It should be noted that the media manager may perform corresponding encoding and other processing on audio A; for the specific processing, reference may be made to existing technical embodiments, which are not described in detail in this application and are not repeated below.
Referring to FIG. 5, taking a mobile phone playing audio A of a video application as an example, the video application switches audio A to audio B in response to a user operation. Specifically, the video application outputs the input audio data of audio B (denoted as data_in(B)) to the media manager, and the media manager obtains the output audio data of audio B based on formula (1), which can also be understood as obtaining the output volume of audio B (denoted as data_out(B)). Before the user adjusts the output volume parameters (that is, the stream volume, track volume, and master volume), the output volume of audio B depends on the input volume of audio B. Assuming that the input volume of audio B is much smaller than the input volume of audio A (data_in(A)), that is, data_in(B)<data_in(A), then with the same output volume parameters (that is, the parameters in the dashed box), the output volume of audio B is much smaller than the output volume of audio A. The user can increase the output volume of audio B by adjusting the output volume parameters. For example, the user can use the volume buttons to adjust the volume of the media stream volume alias (including the music stream, the video stream, and the game stream) so as to increase the output volume of audio B. For example, as shown in FIG. 6, when the user presses a volume button, the video application interface 601 displays a volume adjustment box 602, which includes a volume bar indicating the media volume. It should be noted that the embodiments of this application are described by taking the volume buttons being configured to adjust the media volume as an example; in other embodiments, the volume may also be adjusted in any other way, which is not limited in this application.
Still referring to FIG. 5, exemplarily, in response to the received user operation, the media manager increases the value of stream_volume in the output volume parameters, so that the output volume of audio B, that is, data_out(B), increases. When the video application switches back to audio A in response to a received user operation, the video application correspondingly outputs the audio data of audio A (that is, the input audio data) to the media manager. The media manager obtains the output volume of audio A based on the current, that is, adjusted, output volume parameters (in which the value of stream_volume has been increased). As a result, the output volume of audio A becomes very loud. This phenomenon can be called popping, and it degrades the user experience.
To solve the volume control problem in audio playback scenarios, the embodiments of this application provide a volume control method that can keep the audio output volume within a preset range, where the preset range is set according to user needs, so that the popping problem when switching audio can be solved and the user experience is effectively improved.
FIG. 7 is a schematic diagram of an exemplary volume control method. Referring to FIG. 7, the method specifically includes the following steps.
S701: The mobile phone subscribes to the output volume.
Exemplarily, in a scenario where the mobile phone is playing audio data or outputting audio data to another device, the mobile phone can subscribe to the output volume in order to obtain the output volume. As described above, the output volume in the embodiments of this application is denoted as "data_out", and the media manager in the mobile phone can obtain the output volume corresponding to the output audio data based on the input volume corresponding to the audio input by an application (that is, the input audio data).
The media manager can output the audio data to the audio driver for playback through a device such as a speaker. The media manager can also output the audio data to the Wi-Fi driver or the Bluetooth driver for transmission to another device for playback. That is to say, the output volume in the embodiments of this application can be understood as the volume at which the mobile phone plays audio, or the output volume corresponding to the audio data that the mobile phone outputs to another device.
Exemplarily, the mobile phone can obtain the output volume based on formula (2):
data_out=data_in*master_volume*stream_volume*track_volume*volume_coefficient   (2)
volume_coefficient (volume parameter) is used to influence the output volume. In the embodiments of this application, a suitable volume parameter is set so as to keep the output volume within the range required by the user. Optionally, in the embodiments of this application, the volume parameter can be obtained and saved by the media manager, for example, in memory (it can also be saved elsewhere, which is not limited in this application); the specific way of obtaining the volume parameter is described in detail in the following embodiments. Optionally, the media manager may be configured with a default volume parameter, for example, 0.5. The default volume parameter can be used by the media manager to process audio before a volume parameter is obtained in the manner described below. For example, when the mobile phone is powered on for the first time and plays audio for the first time, the media manager can obtain the output volume based on the default volume parameter to avoid popping during the first playback. Optionally, if the media manager is not configured with a default volume parameter, the media manager can obtain the output volume based on formula (1) when playing audio for the first time, which is not limited in this application.
Exemplarily, it can be seen from formula (2) that, in the embodiments of this application, the factors affecting the output volume include, but are not limited to, at least one of the following: input volume, stream volume, track volume, master volume, and the volume parameter. In the embodiments of this application, as described above, the stream volume (stream_volume), track volume (track_volume), and master volume (master_volume) can be called output volume parameters, and the user can adjust any one of the output volume parameters to adjust the output volume.
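As a concrete illustration of formula (2), the sketch below computes the output volume from the input volume, the three output volume parameters, and the volume parameter. This is a minimal Java sketch of the relationship only; the class and method names are assumptions and do not correspond to an actual Android API.

```java
// Minimal sketch of formula (2): data_out = data_in * master * stream * track * coefficient.
public final class OutputVolume {

    // All values are treated here as linear multiplicative factors, as in the document's formulas.
    public static double compute(double dataIn,
                                 double masterVolume,
                                 double streamVolume,
                                 double trackVolume,
                                 double volumeCoefficient) {
        return dataIn * masterVolume * streamVolume * trackVolume * volumeCoefficient;
    }

    public static void main(String[] args) {
        // Example: default volume coefficient of 0.5 mentioned in the text.
        double dataOut = compute(0.9, 1.0, 0.6, 1.0, 0.5);
        System.out.println("data_out = " + dataOut); // 0.27
    }
}
```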
In the embodiments of this application, when the mobile phone subscribes to the output volume, it can sample the output volume at a preset sampling period in order to obtain the output volume. Optionally, the mobile phone can be configured with two types of sampling periods, including a first sampling period and a second sampling period, where the first sampling period can also be called the sparse sampling period and the second sampling period can also be called the dense sampling period. The period length of the first sampling period is longer than that of the second sampling period. For example, the period length of the first sampling period may be on the order of seconds, and the period length of the second sampling period may be on the order of milliseconds.
For example, taking the scenario in which the mobile phone plays audio data, while the mobile phone is playing audio data, it can monitor user behavior to detect whether an action of adjusting the output volume occurs, which can also be understood as detecting whether an action of adjusting the output volume parameters described above occurs.
In one example, when the mobile phone is playing audio data and no user adjustment of the output volume is detected, the mobile phone samples the output volume at the first sampling period. In another example, when the mobile phone is playing audio data and a user adjustment of the output volume is detected, the mobile phone samples the output volume at the second sampling period.
Exemplarily, the user adjustment of the output volume described above may include, but is not limited to, at least one of the following: pressing a volume button; tapping or dragging a volume option on the settings interface (for example, any of the sliders in FIG. 4); pressing a volume button on a remote control; adjusting the volume through a voice command or a gesture; and the like, which is not limited in this application.
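The choice between the sparse and dense sampling periods described above can be sketched as follows. This is a hedged Java illustration only; the concrete period lengths (2 s and 50 ms) and the userAdjustingVolume flag are assumptions, not values taken from the document.

```java
import java.time.Duration;

// Minimal sketch: sample the output volume sparsely during normal playback and
// densely while the user is adjusting the output volume parameters.
public class OutputVolumeSampler {
    // Assumed example values: the document only says "seconds" vs "milliseconds".
    static final Duration SPARSE_PERIOD = Duration.ofSeconds(2);
    static final Duration DENSE_PERIOD = Duration.ofMillis(50);

    private volatile boolean userAdjustingVolume = false;

    void onVolumeAdjustStarted() { userAdjustingVolume = true; }
    void onVolumeAdjustFinished() { userAdjustingVolume = false; }

    Duration currentSamplingPeriod() {
        return userAdjustingVolume ? DENSE_PERIOD : SPARSE_PERIOD;
    }
}
```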
It should be noted that a smart volume control option can be provided on the mobile phone, and the user can use this smart volume control option to instruct the mobile phone to execute the volume control solution in the embodiments of this application while playing audio. Optionally, the smart volume control option can be provided in at least one of the drop-down menu, the control center, the leftmost screen, and the sound and vibration settings interface, which is not limited in this application.
S702: The mobile phone obtains and saves the output volume range.
In the embodiments of this application, while the mobile phone samples the output volume at the first sampling period, the mobile phone can obtain the output volume range. The output volume range includes a maximum output volume and a minimum output volume. The mobile phone can obtain the maximum output volume (denoted as data_out_max in this application) and the minimum output volume (denoted as data_out_min in this application) based on formula (3) and formula (4), respectively, so as to obtain the output volume range.
data_out_max=Math.max(data_out_max,data_out)  (3)
data_out_min=Math.min(data_out_min,data_out)  (4)
That is to say, after the mobile phone samples the output volume, it compares the output volume with the maximum value and the minimum value of the saved output volume range, respectively.
In one example, if the sampled output volume is greater than the maximum value of the saved output volume range, the sampled output volume becomes the maximum value of the new output volume range, and the minimum value of the output volume range remains unchanged. That is, the mobile phone updates the saved output volume range: the maximum value of the updated output volume range is the sampled output volume, and the minimum value is still the minimum value of the previously saved output volume range.
In another example, if the sampled output volume is smaller than the maximum value of the saved output volume range and greater than the minimum value of the saved output volume range, the updated output volume range is the same as the previous output volume range; that is, the maximum value of the updated output volume range is still the maximum value of the previous output volume range, and the minimum value of the updated output volume range is still the minimum value of the previous output volume range.
In yet another example, if the sampled output volume is smaller than the minimum value of the saved output volume range, the sampled output volume becomes the minimum value of the new output volume range, and the maximum value of the output volume range remains unchanged. That is, the mobile phone updates the saved output volume range: the maximum value of the updated output volume range is the maximum value of the previously saved output volume range, and the minimum value of the updated output volume range is the sampled output volume.
It can be understood that, in the embodiments of this application, after each sparse sample (that is, a sample taken at the first sampling period), the mobile phone updates the output volume range according to the sampled output volume, and the updated output volume range may be the same as or different from the previous output volume range.
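A minimal sketch of the sparse-sampling update in formulas (3) and (4): each new sample can only widen the stored range. This is an illustrative Java class; the class name, the setter methods, and the way the first sample initializes the range are assumptions.

```java
// Sketch of updating the output volume range during sparse sampling,
// following formulas (3) and (4): the range can only grow.
public class OutputVolumeRange {
    private double dataOutMax = Double.NEGATIVE_INFINITY;
    private double dataOutMin = Double.POSITIVE_INFINITY;

    // Called for every sparse sample of data_out.
    public void onSparseSample(double dataOut) {
        dataOutMax = Math.max(dataOutMax, dataOut);  // formula (3)
        dataOutMin = Math.min(dataOutMin, dataOut);  // formula (4)
    }

    public double max() { return dataOutMax; }
    public double min() { return dataOutMin; }

    // Setters used by the dense-sampling update sketched later.
    public void setMax(double v) { dataOutMax = v; }
    public void setMin(double v) { dataOutMin = v; }
}
```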
In the embodiments of this application, while the mobile phone samples the output volume at the second sampling period, the mobile phone can likewise obtain the output volume range. In one example, if the mobile phone (specifically, the media manager) detects that the user turns the output volume up, for example, the user presses a volume button to increase the media stream parameter (that is, stream_volume), then while the user is adjusting the volume, the mobile phone samples the output volume at the second sampling period and obtains the minimum output volume of the output volume range (denoted as data_out_min in this application) based on formula (5) and formula (6). The maximum output volume of the output volume range remains the maximum value of the previously obtained output volume range.
data_out=Math.average(data_out1,data_out2,…)  (5)
data_out_min=Math.max(data_out_min,data_out)  (6)
Referring to formula (5), as described above, when the mobile phone detects that the user is adjusting the output volume (that is, adjusting the output volume parameters described above), then while the user is turning the output volume up (that is, from the moment the user is detected to start adjusting the volume until the adjustment ends), the mobile phone samples the output volume at the second sampling period. From the moment the user starts adjusting the volume until the adjustment ends, the mobile phone collects n output volumes at the second sampling period, including data_out1, data_out2, and so on, and obtains the average output volume based on the n collected output volumes. Then, the mobile phone obtains the minimum value of the output volume range based on formula (6). Specifically, the mobile phone compares the minimum value of the previously saved output volume range with the average output volume obtained this time. In one example, if the average output volume is greater than the minimum value of the previously saved output volume range, the average output volume becomes the minimum value of the new output volume range, and the maximum value remains the maximum value of the previously obtained output volume range. In another example, if the average output volume is smaller than the minimum value of the previously saved output volume range, the minimum value of the previously saved output volume range is taken as the minimum value of the new output volume range, and the maximum value remains the maximum value of the previously obtained output volume range; that is to say, the new output volume range is the same as the previously saved output volume range.
In another example, if the mobile phone (specifically, the media manager) detects that the user turns the output volume down, for example, the user presses a volume button to decrease the media stream parameter (that is, stream_volume), then while the user is adjusting the volume, the mobile phone samples the output volume at the second sampling period and obtains the maximum output volume of the output volume range (denoted as data_out_max in this application) based on formula (7) and formula (8). The minimum output volume of the output volume range remains the minimum value of the previously obtained output volume range.
data_out=Math.average(data_out1,data_out2,…)  (7)
data_out_max=Math.min(data_out_max,data_out)  (8)
Referring to formula (7), while the mobile phone detects that the user is turning the output volume down (that is, from the moment the user is detected to start adjusting the volume until the adjustment ends), the mobile phone samples the output volume at the second sampling period. From the moment the user starts adjusting the volume until the adjustment ends, the mobile phone collects n output volumes at the second sampling period, including data_out1, data_out2, and so on, and obtains the average output volume based on the n collected output volumes. Then, the mobile phone obtains the maximum value of the output volume range based on formula (8). Specifically, the mobile phone compares the maximum value of the previously saved output volume range with the average output volume obtained this time. In one example, if the average output volume is smaller than the maximum value of the previously saved output volume range, the average output volume becomes the maximum value of the new output volume range, and the minimum value remains the minimum value of the previously obtained output volume range. In another example, if the average output volume is greater than the maximum value of the previously saved output volume range, the maximum value of the previously saved output volume range is taken as the maximum value of the new output volume range, and the minimum value remains the minimum value of the previously obtained output volume range; that is to say, the new output volume range is the same as the previously saved output volume range.
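The dense-sampling behavior in formulas (5) through (8) can be sketched as follows: while the user adjusts the volume, the samples are averaged, and the average then tightens one end of the range depending on the direction of the adjustment. This Java sketch assumes the OutputVolumeRange class from the earlier range sketch and simple method names; since Math.average in the formulas is pseudocode, the average is computed explicitly here.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the dense-sampling update: samples collected while the user
// adjusts the volume are averaged, then formulas (5)/(6) or (7)/(8) are applied.
public class DenseSamplingUpdater {
    private final List<Double> samples = new ArrayList<>();

    // Called for every dense sample (second sampling period) during adjustment.
    public void onDenseSample(double dataOut) {
        samples.add(dataOut);
    }

    // Called once the user finishes adjusting the volume.
    // volumeTurnedUp = true  -> formulas (5) and (6): raise the range minimum.
    // volumeTurnedUp = false -> formulas (7) and (8): lower the range maximum.
    public void onAdjustmentFinished(boolean volumeTurnedUp, OutputVolumeRange range) {
        if (samples.isEmpty()) {
            return;
        }
        double average = samples.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        if (volumeTurnedUp) {
            range.setMin(Math.max(range.min(), average));   // formula (6)
        } else {
            range.setMax(Math.min(range.max(), average));   // formula (8)
        }
        samples.clear();
    }
}
```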
For example, referring to FIG. 8, while the mobile phone is playing audio, the media manager samples the output volume it generates. At time t1, assume that the media manager has not yet saved an output volume range; that is, the media manager may be performing the step of obtaining the output volume range for the first time. Exemplarily, the media manager samples at the sparse sampling period, and the sampled output volume is data_out1; this value can be regarded as both the minimum value and the maximum value. At time t2, the media manager detects that the sampling period has arrived and samples the output volume; for example, the sampled output volume is data_out2. The media manager compares data_out2 with data_out1. Assuming data_out2 is greater than data_out1, the media manager determines that the maximum value of the output volume range is data_out2 and the minimum value is data_out1, that is, (data_out1, data_out2). Exemplarily, at time t3, the media manager detects that the sampling period has arrived and samples the output volume; for example, the sampled output volume is data_out3. Based on formula (3) and formula (4), the media manager determines that data_out3 is greater than the maximum value of the currently saved output volume range (data_out2), so the media manager updates the output volume range: the maximum value of the updated output volume range is data_out3, and the minimum value is still data_out1, that is, (data_out1, data_out3). At time t4, the media manager detects that the user turns the output volume down, so from time t4 until the user adjustment ends (for example, time t7), the media manager samples at the dense sampling period and averages the multiple sampled output volumes; for example, the obtained average value is data_out4. The media manager obtains the maximum value of the output volume range based on formula (7) and formula (8). Assuming data_out4 is greater than the maximum value of the currently saved output volume range (data_out3), the maximum value of the updated output volume range is still data_out3, that is, the updated output volume range is (data_out1, data_out3). It should be noted that the sampling intervals and volumes in FIG. 8 are only illustrative examples and are not limited in this application. It should be further noted that, although not shown in FIG. 8, exemplarily, after the dense sampling (that is, after time t7), the media manager continues sampling at the sparse sampling period and keeps updating the output volume range.
Exemplarily, each time the media manager obtains a new output volume range, it saves the new output volume range. Optionally, the media manager can overwrite the previous output volume range to reduce memory usage, which is not limited in this application.
S703: The mobile phone detects an audio change and obtains a volume parameter based on the input volume of the new audio and the output volume range.
Exemplarily, an audio change in the embodiments of this application may include, but is not limited to, an audio source file switch, an audio source device switch, and an output device switch. An audio source file switch is optionally a switch of the audio file received by the media manager. For example, the mobile phone is playing song A, and correspondingly, the audio output by the music application to the media manager is the audio data of song A. The mobile phone switches to playing song B in response to a user operation, and correspondingly, the audio output by the music application to the media manager switches to the audio data of song B; this is an audio source switch. Exemplarily, an audio source device switch is optionally a switch of the audio source device in an audio casting service in a multi-device scenario. For example, the mobile phone casts song A to a vehicle-mounted device through a wireless connection (which can be a Wi-Fi connection or a Bluetooth connection; this is not limited in this application and is not repeated below), and the vehicle-mounted device receives and plays song A. The user then connects a tablet to the vehicle-mounted device and uses the tablet to cast song A to the vehicle-mounted device; for the vehicle-mounted device, its audio source device switches from the mobile phone to the tablet. Exemplarily, an output device switch is optionally a switch of the output device in an audio casting service in a multi-device scenario. For example, the mobile phone casts song A to a vehicle-mounted device through a wireless connection, and in response to a received user operation, the mobile phone casts song A to a tablet through a wireless connection with the tablet. Correspondingly, for the mobile phone, its output device switches from the vehicle-mounted device to the tablet; this is an output device switch.
In the embodiments of this application, when the mobile phone detects any of the above situations, that is, after determining that the audio has changed, the mobile phone executes the volume parameter obtaining step. That is to say, after the audio changes, the mobile phone obtains the volume parameter corresponding to that audio based on the method described below; until that audio changes again, the mobile phone does not need to obtain the volume parameter of that audio again.
In the embodiments of this application, after detecting an audio change, the mobile phone (specifically, the media manager) obtains the input volume of the changed audio (for the concept, refer to the description above, which is not repeated here). The mobile phone can obtain the corresponding volume parameter based on the relationship between the input volume and the output volume range. The volume parameter can be used to adjust the output volume of the audio; in the embodiments of this application, a corresponding volume parameter can be set for each different audio (that is, each different input volume) so as to adjust the output volume of the audio into the output volume range. The specific method is as follows.
1) The mobile phone obtains the average input volume of the audio.
Exemplarily, still taking the scenario of a mobile phone playing audio as an example, after the mobile phone (specifically, the media manager) detects an audio change and before it plays the changed audio, it can obtain the input volume corresponding to a preset length of audio data of that audio (that is, the changed audio). The preset length can be, for example, 5 seconds, and can be set based on actual needs, which is not limited in this application. The mobile phone can average the input volumes of the obtained preset-length audio data to obtain the average input volume.
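A minimal sketch of step 1): average the input volume over the first few seconds of the new audio. This Java snippet is illustrative only; representing the preset length as a number of PCM samples and using absolute sample amplitude as a stand-in for input volume are assumptions.

```java
// Sketch of step 1): compute data_in_average over a preset-length prefix of the
// new audio, using absolute sample amplitude as a stand-in for input volume.
public final class InputVolumeAverager {

    // presetLengthSamples would correspond to, e.g., the first 5 seconds of audio.
    public static double averageInputVolume(short[] pcmSamples, int presetLengthSamples) {
        int n = Math.min(presetLengthSamples, pcmSamples.length);
        if (n == 0) {
            return 0.0;
        }
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += Math.abs(pcmSamples[i]);
        }
        return sum / n;  // data_in_average
    }
}
```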
2) The mobile phone obtains the volume parameter of the audio based on the relationship between the average input volume of the audio and the output volume range.
In one example, if the mobile phone detects that the average input volume of the audio is outside the output volume range and is greater than the maximum value of the output volume range, the mobile phone obtains the volume parameter of the audio (denoted as volume_coefficient in this application; for the concept, refer to the description above, which is not repeated here) based on formula (9), where data_out_max is the maximum value of the output volume range described above, data_in_average is the average input volume of the obtained audio, and G_max is a system constant that can be set according to actual needs; for example, in the embodiments of this application, G_max is 0.5, which is not limited in this application.
In another example, if the mobile phone detects that the average input volume of the audio is outside the output volume range and is smaller than the minimum value of the output volume range, the mobile phone obtains the volume parameter of the audio based on formula (10), where data_out_min is the minimum value of the output volume range described above, data_in_average is the average input volume of the obtained audio, and G_min is a system constant that can be set according to actual needs; for example, in the embodiments of this application, G_min is 0.5, which is not limited in this application.
In yet another example, if the mobile phone detects that the average input volume of the audio does not exceed the output volume range, that is, the average input volume of the audio is greater than or equal to the minimum value of the output volume range and smaller than or equal to the maximum value of the output volume range, the volume parameter of the audio is equal to 1.
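Formulas (9) and (10) themselves appear as equation images in the original and are not reproduced in this text, so the sketch below only mirrors the case analysis described above and substitutes a simple proportional scaling (coefficient = G * bound / data_in_average) as an assumed placeholder for the two formulas; the exact expressions should be taken from the original equations.

```java
// Sketch of step 2): choose the volume coefficient from the relationship between
// the average input volume and the saved output volume range.
// NOTE: the bodies of the two out-of-range branches are an ASSUMED placeholder
// (proportional scaling); formulas (9) and (10) are not reproduced in this text.
public final class VolumeCoefficient {
    static final double G_MAX = 0.5;  // system constant, example value from the text
    static final double G_MIN = 0.5;  // system constant, example value from the text

    public static double compute(double dataInAverage, double dataOutMin, double dataOutMax) {
        if (dataInAverage <= 0) {
            return 1.0;  // degenerate input; leave the volume unchanged
        }
        if (dataInAverage > dataOutMax) {
            // Case of formula (9): input is louder than the allowed maximum.
            return G_MAX * dataOutMax / dataInAverage;      // assumed placeholder
        } else if (dataInAverage < dataOutMin) {
            // Case of formula (10): input is quieter than the allowed minimum.
            return G_MIN * dataOutMin / dataInAverage;      // assumed placeholder
        } else {
            // Within the output volume range: no correction is applied.
            return 1.0;
        }
    }
}
```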
S704: The mobile phone obtains the output volume of the new audio based on the volume parameter, the input volume, and the output volume parameters.
Exemplarily, after the mobile phone obtains the volume parameter of the changed audio (that is, the new audio), it can obtain the changed output volume based on formula (2). For a specific description, refer to the description of formula (2), which is not repeated here. It should be noted that, in an audio source change scenario, the "new audio" described in the embodiments of this application is audio data that differs from the audio before the switch. In the audio source device switch and output device switch scenarios, the "new audio" may be audio data that differs from that before the switch, or it may be the same audio data as before the switch; for the media manager, both are regarded as new audio. For example, in an output device switch scenario, the mobile phone detects the output device switch. After the output device switches, the music application may re-output audio data to the media manager, and this audio data may be the same as or different from the audio data before the switch, which is not limited in this application.
After the mobile phone obtains the output volume of the audio, it can output the audio together with its output volume to the audio driver for playback, or output the audio together with its output volume to another device through the communication module for playback.
为使本领域人员更好的理解本申请实施例中的音频控制方法,下面以具体实施例对图7中的音频控制方法进行详细说明。In order to enable those in the art to better understand the audio control method in the embodiment of the present application, the audio control method in Figure 7 will be described in detail below with specific embodiments.
在本场景中,以手机播放音频的场景为例进行说明。请参照图9a,视频应用响应于接收到的用户操作,播放音频A。具体的,视频应用向媒体管理器输出音频A的输入音频数据,对应的输入音量可以表示为data_in(A)。可选地,视频应用可以将音频A分段划分,并逐段传输给媒体管理器,媒体管理器接收并缓存接收到的音频A,具体传输方式可参照已有技术实施例,本申请不做限定。In this scenario, the scenario of audio playing on a mobile phone is used as an example. Referring to Figure 9a, the video application plays audio A in response to the received user operation. Specifically, the video application outputs the input audio data of audio A to the media manager, and the corresponding input volume can be expressed as data_in(A). Optionally, the video application can divide the audio A into segments and transmit them to the media manager segment by segment. The media manager receives and caches the received audio A. The specific transmission method may refer to the existing technical embodiments and is not covered in this application. limited.
媒体管理器接收音频A的输入音频数据。媒体管理器可基于输入音频数据所对应的 输入音量参数与当前保存(即最近一次保存的)的输出音量范围,获取音频A的音量参数。具体的,媒体管理器获取音频A的开头的预设长度(例如开头5秒)的音频数据的平均输入音量(data_in_average(A))。The media manager receives input audio data for audio A. The media manager can be based on the input audio data corresponding to Enter the volume parameters and the currently saved (that is, the most recently saved) output volume range to obtain the volume parameters of audio A. Specifically, the media manager obtains the average input volume (data_in_average(A)) of the audio data of the preset length (for example, the first 5 seconds) of the beginning of audio A.
The media manager compares the average input volume of audio A with the output volume range. In this example, the output volume range currently saved by the media manager is (data_out_min1, data_out_max1). Assuming the media manager detects that the average input volume of audio A (data_in_average(A)) is within the output volume range, that is, data_out_min1 ≤ data_in_average(A) ≤ data_out_max1, the media manager determines that the volume parameter corresponding to audio A (denoted as volume_coefficient(A)) is 1.
Based on formula (2), the media manager obtains the output volume corresponding to the output audio data of audio A (denoted as data_out(A)). Referring to Figure 9a, after the media manager obtains data_out(A), it outputs the output audio data of audio A to the audio driver. The audio driver plays the output audio data of audio A through the speaker of the mobile phone, and the playback volume is the value corresponding to data_out(A).
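The Figure 9a steps described above can be sketched as follows. The definition of the average input volume as a mean absolute amplitude, the 48 kHz sample rate, the placeholder samples, and the example range values are assumptions for illustration; the text itself only states that the average of a preset-length opening segment (for example the first 5 seconds) is compared with the saved range, and that an in-range average yields a volume parameter of 1.

```python
# Minimal sketch of obtaining the volume parameter for audio A (in-range case).

def average_input_volume(samples, sample_rate, seconds=5.0):
    """Approximate the average input volume as the mean absolute amplitude of the opening segment."""
    head = samples[:int(sample_rate * seconds)]
    return sum(abs(s) for s in head) / max(len(head), 1)

def in_saved_range(avg, data_out_min, data_out_max):
    return data_out_min <= avg <= data_out_max

# Placeholder data standing in for the beginning of audio A.
audio_a = [0.3, -0.4, 0.35, -0.25] * 1000
avg_a = average_input_volume(audio_a, sample_rate=48_000)
volume_coefficient_a = 1.0 if in_saved_range(avg_a, 0.2, 0.5) else None  # None: handled by the out-of-range case
print(avg_a, volume_coefficient_a)
```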
Still referring to Figure 9a, while the mobile phone plays audio A, the media manager samples the output volume of audio A (that is, the output volume generated by the media manager) at the first sampling period (the sparse sampling period described above), and updates the output volume range based on the collected output volume. For the specific collection and acquisition methods, refer to the descriptions of S701 and S702, which are not repeated here.
Referring to Figure 9b, in an example, the video application changes the played audio in response to a received user operation, switching from audio A to audio B. The video application outputs the input audio data of audio B to the media manager, and the corresponding input volume is expressed as data_in(B).
In response to receiving the input audio data of audio B, the media manager determines that the audio source has changed. Accordingly, after determining that the audio source has changed, the media manager re-executes the volume parameter acquisition process, that is, it obtains the volume parameter of audio B. Specifically, the media manager obtains the average input volume (data_in_average(B)) of the audio data of a preset length at the beginning of audio B (for example, the first 5 seconds).
The media manager compares the average input volume of audio B with the output volume range. In this example, the output volume range currently saved by the media manager is still (data_out_min1, data_out_max1); that is, the output volume range obtained while playing audio A has not changed.
In this example, assume the media manager detects that the average input volume of audio B (data_in_average(B)) is outside the output volume range and is less than the minimum value data_out_min1 of the range. The media manager can obtain the volume parameter of audio B (denoted as volume_coefficient(B)) based on formula (10).
Then, the media manager can obtain the output volume corresponding to the audio data of audio B (denoted as data_out(B)) based on formula (2). Referring to Figure 9b, after the media manager obtains data_out(B), it outputs the audio data of audio B to the audio driver. The audio driver plays the output audio data of audio B through the speaker of the mobile phone, and the playback volume is the value corresponding to data_out(B).
It should be noted that the above description only takes the case where the average input volume of audio B is less than the minimum value of the output volume range as an example. If the average input volume of audio B is instead greater than the maximum value of the output volume range, the media manager likewise obtains the volume parameter of audio B based on the corresponding formula; the remaining steps are the same and are not repeated here.
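For orientation only, the sketch below shows one plausible way a volume parameter could be derived when the average input volume falls outside the range. Formula (10) is not reproduced in this excerpt, so the scaling rule used here (scale the average to the nearest boundary of the saved range) is purely an assumption.

```python
# Hypothetical out-of-range handling; the embodiment's formula (10) may differ.

def volume_parameter_out_of_range(data_in_average: float,
                                  data_out_min: float,
                                  data_out_max: float) -> float:
    if data_in_average < data_out_min:
        return data_out_min / data_in_average   # boost a too-quiet source
    if data_in_average > data_out_max:
        return data_out_max / data_in_average   # attenuate a too-loud source
    return 1.0                                  # already within the range

# Example: audio B averages 0.2 while the saved range is (0.5, 0.8),
# so the coefficient boosts it back into the range.
print(volume_parameter_out_of_range(0.2, 0.5, 0.8))  # 2.5
```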
It should be further noted that, as shown in Figure 9b, while the mobile phone plays audio B, the media manager collects the output volume of audio B at the first sampling period (the sparse sampling period described above) and updates the output volume range based on the collected output volume. For the specific collection and acquisition methods, refer to the descriptions of S701 and S702, which are not repeated here. It should be noted that, as described above, the media manager executes the volume parameter acquisition process only after detecting an audio change. Therefore, the output volume range obtained during playback does not affect the audio currently being played; until that audio finishes or is switched to other audio, the output volume is calculated based on the volume parameter that has already been obtained. The output volume range updated during playback is used in the step of obtaining the volume parameter after the audio changes.
In this example, the video application switches audio B back to audio A in response to a received user operation, and the process in Figure 9a is repeated. Specifically, the video application outputs the input audio data of audio A to the media manager, where the corresponding input volume is data_in(A). In response to receiving the input audio data of audio A, the media manager determines that the audio source has changed and re-executes the volume parameter acquisition process. For example, the media manager obtains the average input volume of audio A (data_in_average(A)) and compares it with the output volume range. In this example, the currently saved output volume range is still (data_out_min1, data_out_max1). The media manager detects that the average input volume of audio A is within the output volume range, so it determines that the volume parameter corresponding to audio A (volume_coefficient(A)) is 1, and obtains the output volume of audio A based on the volume parameter of audio A, the input volume of audio A, and the output volume parameters. For details not described here, refer to Figure 9a.
Referring to Figure 9c, in this embodiment of the application, while audio A is playing, its corresponding volume parameter is volume_coefficient(A). After audio A is switched to audio B, that is, after the audio source changes, the mobile phone can obtain the volume parameter corresponding to audio B based on the input volume of audio B. With the output volume parameters unchanged (the parameters in the dashed box, including stream volume (stream_volume), track volume (track_volume), and master volume (master_volume)), that is, the same output volume parameters as before switching to audio B, the mobile phone can use the volume parameter to adjust the output volume of audio B into the output volume range. (It should be noted that, because the input volume of audio B may fluctuate, the output volume of part of the audio may fall outside the range, but the difference between that output volume and the output volume range is small and can be ignored.) In other words, when the input volume of audio B is less than the input volume of audio A (the input volume of audio B may also be greater than that of audio A; the principle is similar and is not repeated here), the user does not need to adjust the volume manually. That is, as shown in Figure 9c, with the same output volume parameters, the mobile phone can obtain an appropriate volume parameter so that the output volume of audio B falls within the output volume range obtained according to the user's listening habits. Still referring to Figure 9c, when the mobile phone switches the audio back to audio A, the media manager can re-obtain the corresponding volume parameter based on the switched audio. With the same output volume parameters, by setting an appropriate volume parameter, the output volume of audio A also falls within the output volume range obtained according to the user's listening habits. That is, in this embodiment of the application, even if the input volume of the audio after switching is less than the input volume of the audio before switching, setting the corresponding volume parameter means the user does not need to turn the volume up. Accordingly, when the audio is switched back to the previous audio, the output volume parameters have not changed, so no popping (sudden excessive loudness) occurs.
With the volume control method in this embodiment of the application, the user does not need to adjust the volume repeatedly, and no popping occurs after the volume is adjusted, which effectively improves the user experience. It should be noted that "the output volume of the audio is within the output volume range" as described in this embodiment can be understood as meaning that the average volume of the audio is within the output volume range, or that most of the volume of the audio data is within the output volume range; that is, because of the high- and low-frequency variations of audio data, the volume of a small part of the audio may be greater or less than the output volume range. It should be further noted that, as described above, the electronic device can update the output volume range while playing audio. Accordingly, if the output volumes corresponding to all of the audio data are within the output volume range, the output volume remains within the output volume range throughout playback of that audio data. For example, if the output volume corresponding to part of the audio data is not within the output volume range, the electronic device can update the output volume range when it collects a volume that does not satisfy the output volume range.
It should be noted that the above scenario is described with the output volume range remaining unchanged. In one possible implementation, if the user adjusts the volume (for example, turns it up) while the mobile phone plays audio A, then after detecting that the user is turning the volume up, the media manager collects the output volume at the second sampling period (the dense sampling period) while the user adjusts the volume, and updates the output volume range based on the collected output volume. For the specific acquisition method, refer to the description above, which is not repeated here. In other embodiments, the output volume range may also change during the sparse sampling period; examples are not given one by one in this application. Optionally, the updated output volume range may be the same as or different from the previous output volume range, which is not limited in this application. Accordingly, after the mobile phone switches to audio B, the media manager can obtain the volume parameter corresponding to audio B based on the updated output volume range. Of course, if the user adjusts the volume while the mobile phone plays audio B, the media manager collects the output volume at the dense sampling period in the manner described above and updates the output volume range. After the mobile phone switches back to audio A, the media manager can obtain the volume parameter corresponding to audio A based on the currently saved (that is, most recently updated) output volume range.
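A minimal sketch of the two sampling periods follows. The concrete period lengths are not given in this excerpt, so the values below are placeholders chosen only for the example.

```python
# Placeholder period values; the embodiment does not fix them.
SPARSE_PERIOD_S = 2.0   # first sampling period: normal playback
DENSE_PERIOD_S = 0.2    # second sampling period: while the user is adjusting the volume

def next_sampling_period(user_is_adjusting_volume: bool) -> float:
    """Select the period at which the media manager samples the output volume."""
    return DENSE_PERIOD_S if user_is_adjusting_volume else SPARSE_PERIOD_S

print(next_sampling_period(False), next_sampling_period(True))  # 2.0 0.2
```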
It should be further noted that the above scenario takes switching audio B back to audio A as an example. In another possible implementation, the video application may instead switch to other audio in response to a received user operation; the specific method is the same as switching back to audio A, and examples are not given one by one in this application. It should also be noted that, in the above, switching back to audio A optionally means that the played audio is the same as the audio played before switching to audio B. In other embodiments, the video application may also resume playback from a break point. For example, if audio A1 was played before switching to audio B, the media manager obtains the volume parameter corresponding to audio A1 and the corresponding output volume. After the video application switches audio B back to audio A, what is played is optionally audio A2 within audio A; the data of audio A2 differs from that of audio A1, and their input volumes may be the same or different. The media manager can obtain the corresponding volume parameter based on the input volume of audio A2 and then obtain the output volume of audio A2. The specific implementation is similar to that in Figures 9a and 9b and is not repeated here.
It should be further noted that the audio switching (or change) described in the embodiments of this application may be switching the currently playing audio A to audio B in response to a received user operation while audio A is playing. In other embodiments, audio B may instead be played after audio A finishes (for example, after an interval), which is not limited in this application.
It should be further noted that, when the user adjusts the output volume parameters to adjust the output volume of the audio, the media manager, when obtaining the output volume range, may average the maximum value of the newly obtained output volume range with the maximum value of the previously obtained output volume range and use the result as the maximum value of the updated output volume range. Accordingly, the media manager averages the minimum value of the newly obtained output volume range with the minimum value of the previously obtained output volume range and uses the result as the minimum value of the updated output volume range, thereby preventing the output volume range from fluctuating excessively because the user adjusts the volume.
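This smoothing step can be sketched directly; only the averaging of the old and new boundaries is stated in the text, and the numeric values below are illustrative.

```python
# Smooth the saved output volume range against a newly measured one so a single
# user adjustment cannot swing the range too far.

def update_output_volume_range(saved_range: tuple[float, float],
                               new_range: tuple[float, float]) -> tuple[float, float]:
    saved_min, saved_max = saved_range
    new_min, new_max = new_range
    return ((saved_min + new_min) / 2.0, (saved_max + new_max) / 2.0)

# Example: saved range (0.4, 0.7), newly measured range (0.6, 0.9).
print(update_output_volume_range((0.4, 0.7), (0.6, 0.9)))  # (0.5, 0.8)
```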
In addition to the scenario described above in which a mobile phone plays audio (which can also be understood as a single-device scenario), the volume control method in the embodiments of this application can also be applied in a multi-device collaboration scenario. Figure 10 shows an exemplary multi-device collaboration scenario. Referring to Figure 10, the mobile phone outputs the audio data of audio A to the TV through a wireless connection with the TV, and the TV receives and plays the audio data of audio A. It should be noted that the device types and the number of devices in Figure 10 are only illustrative examples. For example, in other embodiments, mobile phone A may be wirelessly connected to a TV and a tablet respectively and output the audio data of audio A to each of them; both the TV and the tablet can play the audio data of audio A, and each device is processed in the same way as the devices in the scenario of Figure 10, so examples are not given one by one. It should be further noted that the wireless connections described in the embodiments of this application may be maintained based on the Bluetooth protocol, the Wi-Fi protocol, or the like, which is not limited in this application. In this example, the wireless connection is a Wi-Fi connection; for the specific establishment process of the wireless connection, refer to existing technical embodiments, which are not repeated here.
Referring to Figure 11a, in an example, the video application of the mobile phone determines to play audio A in response to a received user operation. The video application of the mobile phone outputs the input audio data A1 of audio A to the media manager of the mobile phone, and the corresponding input volume is recorded as data_in(A1).
The media manager of the mobile phone receives the input audio data of audio A and obtains the volume parameter of audio A. Specifically, the media manager of the mobile phone obtains the average input volume of audio A (data_in_average(A1)), compares it with the currently saved output volume range, and obtains the corresponding volume parameter (volume_coefficient(A1)). For details, refer to the related content of Figure 9a, which is not repeated here.
The media manager of the mobile phone can obtain the output volume of audio A, data_out(A1), based on the input volume of audio A, the volume parameter, and the current output volume parameters (including stream volume (stream_volume), track volume (track_volume), and master volume (master_volume)).
Still referring to Figure 11a, the media manager of the mobile phone can collect the output volume of audio A and update the output volume range based on the collected output volume. For the specific method, refer to the description above, which is not repeated here.
For example, the media manager of the mobile phone outputs the output audio data A1 of audio A to the Wi-Fi driver, where the output volume corresponding to the output audio data A1 is data_out(A1). The Wi-Fi driver of the mobile phone outputs the output audio data A1 of audio A to the Wi-Fi driver of the TV. The Wi-Fi driver of the TV optionally outputs the output audio data A1 of audio A to the screen projection application of the TV (it may also be another collaborative application, which is not limited in this application).
For example, the screen projection application outputs the output audio data A1 of audio A to the media manager of the TV. It should be noted that, in the embodiments of this application, any audio data received by a media manager is recorded as input audio data; therefore, the audio data of audio A received by the TV's media manager is represented as the input audio data A2 of audio A, and the corresponding input volume is data_in(A2), where data_in(A2) is equal to data_out(A1).
For example, the media manager of the TV obtains the volume parameter of audio A on the TV side based on the received input volume of audio A, data_in(A2). The TV's media manager can obtain the average input volume of audio A (data_in_average(A2)) based on data_in(A2). It should be noted that this average input volume is obtained from the input volume of audio A on the TV side, that is, from the output volume of audio A on the mobile phone side; data_in_average(A2) may be the same as or different from data_in_average(A1), which is not limited in this application.
For example, the media manager of the TV compares the average input volume of audio A (data_in_average(A2)) with the output volume range currently saved on the TV side. The output volume range on the TV side may be the same as or different from the output volume range on the mobile phone side, which is not limited in this application.
The media manager of the TV can obtain the volume parameter of audio A on the TV side (volume_coefficient(A2)) based on the comparison result. The volume parameter volume_coefficient(A2) may be the same as or different from volume_coefficient(A1), which is not limited in this application.
For example, the media manager of the TV can obtain the output volume of audio A on the TV side, data_out(A2), based on the input volume of audio A data_in(A2), the volume parameter volume_coefficient(A2), and the TV-side output volume parameters (which may be the same as or different from those on the mobile phone side). It should be noted that, for the specific details of obtaining each parameter, refer to the related content in the above embodiments, which is not repeated here.
The media manager of the TV can output the audio data A2 of audio A to the audio driver, with the corresponding output volume data_out(A2). The audio driver controls the speaker (or another playback device) to play the audio data of audio A, and the playback volume is data_out(A2).
It should be noted that, as shown in Figure 11a, the media manager of the TV can likewise collect the output volume of audio A and update the output volume range. For the specific implementation, refer to the related content in the above embodiments, which is not repeated here.
In the embodiments of this application, in the multi-device collaboration scenario, the TV can use the volume parameter corresponding to the audio to adjust the audio's output volume to the listening volume range the user is accustomed to on the TV side, that is, the TV-side output volume range. Whether the input volume of the audio obtained by the TV (that is, the output volume on the mobile phone side) is large or small, the TV can obtain an appropriate volume parameter to adjust the audio's output volume on the TV side into the output volume range.
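The chain described for Figure 11a can be summarized with the simplified sketch below, which folds the volume parameter and the output volume parameters into a single clamp onto each device's saved range purely for illustration; the real computation goes through formulas (2) and (10) and the separate output volume parameters, which are not reproduced in this excerpt, and the numeric values are placeholders.

```python
# Each device independently brings the level it receives into its own saved range.

def map_into_range(level: float, saved_range: tuple[float, float]) -> float:
    low, high = saved_range
    if level < low:
        return low      # an appropriate volume parameter boosts it into the range
    if level > high:
        return high     # or attenuates it into the range
    return level

data_out_a1 = map_into_range(0.30, (0.25, 0.45))        # phone side
data_out_a2 = map_into_range(data_out_a1, (0.5, 0.7))   # TV side: data_in(A2) = data_out(A1)
print(data_out_a1, data_out_a2)                          # 0.3 0.5
```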
In the embodiments of this application, while the TV plays audio A, the user can adjust the TV's output volume through the TV's remote control. For example, in response to a received user operation, the media manager of the TV increases the TV-side output volume parameters (including stream volume (stream_volume), track volume (track_volume), and master volume (master_volume)). What changes accordingly are the output volume on the TV side and the TV-side output volume range; the output volume and output volume range on the mobile phone side are not changed. Therefore, as shown in Figure 11b, after the mobile phone cancels the audio transmission to the TV (the wireless connection may be maintained or disconnected, which is not limited in this application), if the mobile phone continues to play audio A locally, it can obtain the output volume of audio A based on the process in Figure 9a. After the mobile phone cancels the audio transmission to the TV, the media manager on the mobile phone side detects an audio change (that is, an output device change); when obtaining the output volume of audio A, the mobile phone re-obtains the volume parameter of audio A in the manner of Figure 9a and obtains the output volume of audio A based on the new volume parameter (volume parameter A3), where volume parameter A3 may be the same as or different from volume parameter A1. The mobile phone can play audio A through its audio driver, and the playback volume is the output volume obtained by the mobile phone based on the mobile-phone-side output volume range and volume parameter. On the TV side, after the audio transmission with the mobile phone is cancelled, if the TV plays audio (for example, audio B), the output volume of audio B played on the TV side is obtained based on the TV-side updated output volume range and the corresponding volume parameter.
Figures 12a and 12b are used below to analyze the effect of the volume control solution of this application in a multi-device scenario. Referring to Figure 12a, consider a multi-device scenario that does not use the volume control solution of the embodiments of this application. After device A obtains the output audio data of audio A based on formula (1) (where the corresponding output volume is data_out(A)), it transmits the audio data of audio A to device B. Based on the received output volume of audio A (data_out(A)) and the output volume parameters on the device B side, device B obtains the output volume of audio A on the device B side (data_out(B)) according to formula (1). For example, assume that the output volume on the device B side is low. In the prior art, this is usually handled by turning up the output volume on the device A side, that is, increasing the output volume parameters on the device A side, in other words increasing the volume input to device B, so as to increase the output volume on the device B side. When device A is disconnected from device B (the collaboration between the devices may also simply be cancelled, which is not limited in this application), device A continues to play audio A. Because the output volume parameters on the device A side have been increased, device A continues to use the increased output volume parameters to obtain the output volume of audio A, which may cause popping when device A plays audio A.
Referring to Figure 12b, in the multi-device scenario that uses the volume control solution of the embodiments of this application, both device A and device B obtain their output volumes based on their respective output volume ranges, output volume parameters, and the obtained volume parameters. That is, even if the volume output from device A to device B is large or small, device B can still obtain an appropriate volume parameter and use it to adjust the output volume of audio A on the device B side into the device-B-side output volume range. In other words, the user does not need to adjust the output volume parameters on either device A or device B; through the volume parameter, device B can adjust the output volume of audio A on its side into the output volume range so as to match the user's listening habits. Moreover, the output volume parameters and volume parameters of device A and device B, that is, the factors that affect their respective output volumes, are independent of each other and do not affect each other. Therefore, after device A and device B are disconnected, when device A or device B plays other audio, each of them re-obtains the corresponding volume parameter for the played audio so as to adjust the output volume of the new audio into its own output volume range; no popping occurs, which effectively improves the user experience.
In addition, in the embodiments of this application, the user can adjust the output volume on the device B side through device A and/or device B. In one example, the user can adjust the output volume of audio A on the device B side by adjusting the output volume parameters on the device A side (for the principle, refer to the description above, which is not repeated here). After device A is disconnected from device B, when device A plays audio, device A can obtain the corresponding volume parameter based on the output volume range obtained after the volume adjustment and further obtain the output volume. For device B, its output volume parameters are not affected; after disconnecting from device A, it can still obtain the corresponding volume parameter and the output volume according to the saved output volume parameters and output volume range. In another example, the user can adjust the output volume of audio A on the device B side by adjusting the output volume parameters on the device B side. Referring to Figure 12b, after device A is disconnected from device B, when device A plays audio, its output volume parameters and output volume range are unaffected (that is, the parameters in the dashed box are the same); therefore, the output volume at which device A plays audio is still within the output volume range, which effectively avoids the popping problem after device switching.
The volume control solution in the embodiments of this application can also be applied in a mixing scenario with multiple playback devices, so as to implement adaptive volume adjustment in the mixing scenario. Figures 13a and 13b are schematic diagrams of the principle. Referring to Figure 13a, the scenario includes devices such as a mobile phone, earphones, a tablet, and a TV. It should be noted that the number and types of devices in Figure 13a are only illustrative examples and are not limited in this application. For example, the mobile phone, acting as the center device, can obtain the audio data sent by each slave device; for example, through wireless connections with the tablet and the TV (for example Wi-Fi connections, or other connection types, which are not limited in this application), the mobile phone receives the audio data of audio A sent by the tablet and the audio data of audio B sent by the TV. The input volume corresponding to the audio data of audio A is data_in(A), and the input volume corresponding to the audio data of audio B is data_in(B). As the center device, the mobile phone can mix the mobile phone's own audio (for example, audio C), the TV's audio (audio B), and the tablet's audio (audio A) to obtain mixed audio data, and the output volume corresponding to the mixed audio is data_out(X). The mobile phone can output the audio data of the mixed audio to the earphones, so that the mixed audio is played through the earphones at the volume data_out(X). In this scenario, the input volume of the audio sent by each device is the volume adjusted based on the volume control method described above. Moreover, during mixing, the mobile phone also adjusts the output volume of the mixed audio based on the volume control method of this application, so that the output volume of the mixed audio is kept within the mobile-phone-side output volume range.
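A minimal sketch of the mixing step on the center device follows, assuming the mix is a plain per-sample average of the already volume-controlled streams, followed by the same range-based adjustment described above. The actual mixing algorithm is not specified in this excerpt, and the sample values are placeholders.

```python
# Mix several volume-controlled streams, then keep the mix inside the saved range.

def mix_streams(streams):
    n = min(len(s) for s in streams)
    return [sum(s[i] for s in streams) / len(streams) for i in range(n)]

def adjust_into_range(samples, saved_range):
    avg = sum(abs(x) for x in samples) / max(len(samples), 1)
    low, high = saved_range
    coeff = low / avg if avg < low else high / avg if avg > high else 1.0
    return [x * coeff for x in samples]

phone_game = [0.2, -0.2, 0.1]
tv_music = [0.4, 0.3, -0.5]
tablet_video = [0.1, 0.0, -0.1]
mixed = adjust_into_range(mix_streams([phone_game, tv_music, tablet_video]), (0.3, 0.6))
print(mixed)
```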
Referring to Figure 13b, during mixing, the center device in the embodiments of this application can obtain the mixed audio based on the relative positions (including distance and/or angle) between each device and the earphones, so that a stereo effect can be achieved on the earphone side. That is, in the embodiments of this application, the user can hear, through the earphones, the audio of each device in the network (including the mobile phone, the tablet, and the TV), and the sound effect of each audio is close to the auditory effect the user would perceive without the earphones; in other words, the sound played in the earphones can reproduce the spatial auditory effect of distance and direction.
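One way the distance and angle of each device could position its audio in the mix is sketched below; the inverse-distance gain and constant-power pan are common choices assumed here for illustration, not details taken from the embodiment.

```python
import math

def spatialize(sample: float, distance_m: float, angle_deg: float) -> tuple[float, float]:
    """Place one sample in the stereo field based on the device's distance and angle."""
    gain = 1.0 / max(distance_m, 1.0)                # farther devices sound quieter
    pan = math.radians((angle_deg + 90.0) / 2.0)     # map -90..90 degrees to 0..pi/2
    left = sample * gain * math.cos(pan)
    right = sample * gain * math.sin(pan)
    return left, right

# Example: a TV 3 m away, 45 degrees to the user's right.
print(spatialize(0.5, distance_m=3.0, angle_deg=45.0))
```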
For example, the center device is optionally used to connect to and exchange data with the slave devices, so as to issue instructions to each slave device and obtain audio data from the slave devices. The center device is also used to connect to and exchange data with the earphones, so as to obtain instructions from the earphones and transmit audio data to the earphones. The slave devices are the devices in the network other than the center device. It should be noted that the network may be a Wi-Fi network, a Bluetooth network, or a hybrid Wi-Fi and Bluetooth network; for example, the connection between the mobile phone and the TV may be a Wi-Fi connection, and the connection between the mobile phone and the tablet may be a Bluetooth connection, which is not limited in this application. Optionally, the devices in the network in the embodiments of this application use the same account. The specific way of determining the center device and the slave devices is described in detail in the following embodiments.
The mixing scenario in Figures 13a and 13b is described in detail below with specific embodiments. Figure 14 is a schematic diagram of an exemplary scenario. Referring to Figure 14, in the embodiments of this application, a Wi-Fi network formed by the mobile phone, the TV, and the tablet is taken as an example; that is, the wireless connections between the devices in the network are maintained based on the Wi-Fi protocol. For example, after the TV and the tablet in the user's home are powered on, they can automatically discover each other and connect (they can also be connected manually, which is not repeated here) to form a home network. Of course, the home network may also include other devices, for example other smart home devices such as Bluetooth speakers, which is not limited in this application. Still referring to Figure 14, in an example, after the user brings the mobile phone home, the mobile phone executes the Wi-Fi discovery process, and after discovering the devices in the Wi-Fi network (including the TV and the tablet), it automatically connects to them to join the network. The embodiments of this application only briefly describe the structure and establishment of the network; for the specific connection process, refer to existing technical embodiments, which are not limited in this application.
Continuing to refer to Figure 14, after the user opens the earphone case, the earphones can automatically connect to an electronic device. In the embodiments of this application, the earphones automatically connecting to the last connected device (for example, the tablet) is taken as an example. In other embodiments, the earphones may also select the nearest device to connect to, which is not limited in this application.
For example, the earphones establish a Bluetooth connection with the tablet; for the specific establishment process, refer to existing technical embodiments, which are not repeated here. In the embodiments of this application, after the earphones are connected to the tablet, that is, after the earphones are connected to a device in the network, the devices in the network can initiate a voting process to elect the center device and the slave devices. Figure 15 is a schematic diagram of the voting process. Referring to Figure 15, each device in the network (including the mobile phone, the tablet, and the TV) sends ballot information to the other devices in the network (for example, by broadcasting a message). The ballot information includes device information, device capability information, and position information. The device information includes but is not limited to the device model, device name, and device address. The device capability information includes but is not limited to the communication types supported by the device and whether it supports the mixing function. The position information is optionally the distance between the device and the earphones. Optionally, the distance can be measured through Bluetooth ranging, ultra-wideband (UWB), or other methods, which is not limited in this application. Optionally, in the embodiments of this application, a device in the election phase may also be called a candidate device or an alternative device, which is not limited in this application.
For example, each device in the network can receive the ballot information sent by the other devices. Taking the mobile phone as an example, the mobile phone sends ballot information containing its own information to the TV and the tablet, and it also receives the ballot information sent by the TV (containing the TV's information) and the ballot information sent by the tablet (containing the tablet's information).
For example, each device in the network can be configured with a preset voting rule, which can be set according to actual requirements; for example, the device closest to the earphones may be selected based on the position information in each ballot, which is not limited in this application. In this example, each device selects the mobile phone as the center device according to the preset voting rule. Still taking the mobile phone as an example, based on its own device information and position information, the received ballot information from the TV and from the tablet, and the preset voting rule, the mobile phone selects itself as the center device. Other devices, for example the tablet, use the same preset rule and receive the same ballot information; therefore, the center device selected by each device in the network is consistent, for example all of them select the mobile phone as the center device. The other candidate devices in the network that are not the center device serve as slave devices.
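Because every candidate applies the same preset rule to the same set of ballots, the election is deterministic. The sketch below assumes the rule is simply "the mixing-capable candidate closest to the earphones wins, ties broken by device address"; the embodiment leaves the rule configurable, so this is only one possible instantiation, and the ballot fields and values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Ballot:
    name: str
    address: str
    supports_mixing: bool
    distance_to_earphones_m: float

def elect_center(ballots: list[Ballot]) -> Ballot:
    """Every device runs the same rule on the same ballots, so all arrive at the same center."""
    capable = [b for b in ballots if b.supports_mixing]
    return min(capable, key=lambda b: (b.distance_to_earphones_m, b.address))

ballots = [
    Ballot("phone", "AA:01", True, 0.8),
    Ballot("tv", "AA:02", True, 3.5),
    Ballot("tablet", "AA:03", True, 1.6),
]
print(elect_center(ballots).name)  # phone
```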
In the embodiments of this application, after the center device is elected, the earphones can switch to the center device, that is, disconnect from the tablet and establish a Bluetooth connection with the mobile phone (the center device).
It should be noted that the embodiments of this application take the case where the tablet is not the center device as an example. In other embodiments, the device to which the earphones are currently connected may or may not be the center device, which is not limited in this application.
It should be further noted that, after the center device is elected, the center device and each slave device periodically perform a handshake process; that is, at each period trigger time (for example, every 5 s, which can be set according to actual requirements and is not limited in this application), the center device and each slave device exchange probe information to check whether the center device is operating normally. If the center device is in an abnormal state, for example offline, a slave device that does not receive the probe information from the center device (or the probe response returned by the center device) at the period trigger time can determine that the center device is abnormal, and the devices in the network re-execute the voting process; the center device after the re-vote is different from the previous one.
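The liveness check can be sketched as a simple timeout, assuming the 5 s period mentioned above and treating one missed probe as abnormal; these details are not fixed by the embodiment.

```python
PROBE_PERIOD_S = 5.0  # example period mentioned in the text; configurable in practice

def center_is_alive(last_probe_time_s: float, now_s: float) -> bool:
    """A slave treats the center device as abnormal if no probe arrived within one period."""
    return (now_s - last_probe_time_s) <= PROBE_PERIOD_S

print(center_is_alive(last_probe_time_s=100.0, now_s=104.0))  # True
print(center_is_alive(last_probe_time_s=100.0, now_s=106.0))  # False -> re-run the election
```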
It should be further noted that, after the center device is switched, the earphones connect to the new center device, and the new center device continues to perform the steps described in the following embodiments, for example the mixing step.
For example, after the voting process ends, that is, after the center device is elected, each device can obtain relative position information with respect to the earphones. Optionally, the relative position information includes distance information and/or angle information with respect to the earphones. For example, referring to Figure 16, by measurement, the mobile phone obtains its relative position with respect to the earphones as distance A and angle A; the TV obtains distance B and angle B; and the tablet obtains distance C and angle C. Optionally, each device can obtain the angle information based on measurement methods such as the Angle of Arrival (AoA) algorithm, the Angle of Departure (AoD) algorithm, or UWB, which is not limited in this application. For the specific measurement methods, refer to existing technical embodiments, which are not repeated here.
For example, each slave device in the network (for example the TV and the tablet) sends the obtained relative position information to the center device. Optionally, each device can obtain the relative position information periodically, and each slave device sends the relative position information obtained in each period to the center device.
Figure 17 is a schematic diagram of an exemplary user interface. Referring to (1) of Figure 17, the sound and vibration settings interface 1701 includes a mixing settings option box 1702, and the user can tap this option to enable the mixing function of the mobile phone. It should be noted that the tablet or the TV may also provide a mixing function. After the center device is elected, the mixing function on the TV or the tablet can indicate that the center device is the mobile phone, so as to prompt the user to operate on the mobile phone. Of course, in other embodiments, the center device may also synchronize the relevant information to the slave devices, so that the operation the user would perform on the mobile phone can also be performed on a slave device; the slave device then sends the instruction generated in response to the received user operation to the center device, so that the center device issues the corresponding control instructions within the network.
Still referring to (1) of Figure 17, the mobile phone enables the mixing function in response to a received user operation. The mobile phone can calculate the relative orientations between all devices in the network and the earphones based on the most recently obtained relative position information between the mobile phone and the earphones and the most recently obtained relative position information, sent by each of the other slave devices, between those devices and the earphones. Optionally, the mobile phone may treat the direction of the focus device the user is currently operating as the direction directly in front of the user, or it may treat the facing direction of the earphones as the direction directly in front of the user, which is not limited in this application.
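One plausible way to turn the measured (distance, angle) pairs into the layout shown in the option box is sketched below, using the focus device's angle as the user's front; the actual layout logic is not specified in this excerpt, so the reference choice, coordinate convention, and values are assumptions.

```python
import math

def relative_layout(measurements: dict[str, tuple[float, float]], front_device: str):
    """measurements maps device name -> (distance_m, absolute_angle_deg); the front device defines 0 degrees."""
    front_angle = measurements[front_device][1]
    layout = {}
    for name, (dist, angle) in measurements.items():
        bearing = (angle - front_angle + 180.0) % 360.0 - 180.0  # -180..180, 0 = straight ahead
        x = dist * math.sin(math.radians(bearing))
        y = dist * math.cos(math.radians(bearing))
        layout[name] = (round(x, 2), round(y, 2))
    return layout

print(relative_layout({"phone": (0.8, 10.0), "tv": (3.5, 70.0), "tablet": (1.6, -40.0)}, "phone"))
```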
Referring to (2) of Figure 17, in an example, the mobile phone displays the obtained relative orientations between all devices and the earphones in the mixing settings option box 1702. It should be noted that (2) of Figure 17 is only a schematic example; in other embodiments, the figure may indicate the distance and orientation between each device and the earphones, and may also display information such as each device's icon, which is not limited in this application.
Optionally, the user can manually adjust the relative position between each device and the earphones through the interface provided in (2) of Figure 17. For example, the relative positions displayed in the interface may be inaccurate because of measurement errors, and the user can drag the corresponding device icon to adjust its relative position with respect to the earphones. Taking the tablet as an example, the user can drag the tablet's icon to increase the angle between the tablet and the earphones; in response to the received user operation, the mobile phone calculates the angle between the dragged tablet icon and the earphone icon and saves the tablet's new relative position information, that is, the distance previously sent by the tablet together with the updated angle. The mobile phone can send the tablet's new relative position information to the tablet.
In one possible implementation, the user can remove a device through the interface in (2) of Figure 17. For example, during mix playback (or at any time before mix playback), the user may drag the tablet's icon and slide it off the screen. In other embodiments, another operation may be used; for example, after a long press, the mobile phone responds by displaying an option box that may include a delete option, and the user can tap the delete option to remove the tablet. In an example, in response to the received operation, the mobile phone determines that the tablet is to be removed from the mixing scenario and stops displaying the tablet's icon in the mixing settings option box 1702. In the subsequent mixing process, the mobile phone no longer receives the audio sent by the tablet. Optionally, the mobile phone may instead send indication information to the tablet instructing it to stop sending its relative position, audio, and other information, and the tablet stops sending such information to the mobile phone. In other words, the mixed audio no longer includes the audio corresponding to the tablet. It should be noted that this removal only excludes the tablet's audio from the mix; the tablet remains in the network. Optionally, if the user needs to add the tablet's audio back into the mix, the user can re-enable the mixing function to trigger each device to re-execute the relative position acquisition process described above.
In one possible implementation, after receiving the user's operation of tapping the mixing settings option, the mobile phone (or another device such as the tablet) can send trigger information to each device in the network to trigger the devices to execute the voting process described above, and after the voting process ends, the earphones connect to the center device. The devices in the network then execute the relative position acquisition process described above.
In another possible implementation, no further processing is performed after the center device is elected. After receiving the user's operation of tapping the mixing settings option, the mobile phone (or another device such as the tablet) can send trigger information to each slave device to trigger it to obtain its relative position with respect to the earphones. Each slave device feeds the obtained relative position information back to the mobile phone, and the mobile phone calculates the relative position of each device in the network based on the relative position information between itself and the earphones and the received relative position information, and displays the result in the mixing settings option box. This can effectively reduce the computation burden of each device and reduce data exchange. However, this approach is less real-time than the one described above, and it may take a few seconds before the relative positions of the devices are displayed in the display box.
在又一种可能的实现方式中,在选出中心设备后,耳机可以继续连接平板。手机接收到用户点击混音选项操作后,手机与耳机建立连接。可选地,耳机与平板之间的连接可以保持,也可以断开,本申请不做限定。In another possible implementation, after selecting the central device, the headset can continue to connect to the tablet. After the mobile phone receives the user's click on the mixing option, the mobile phone establishes a connection with the headset. Optionally, the connection between the earphones and the tablet can be maintained or disconnected, which is not limited in this application.
Exemplarily, after the central device determines the orientation of each device, it may execute the mixing procedure while the devices in the network are playing audio, thereby achieving the effect shown in Figure 13b.
Exemplarily, the following description takes, as an example, a case in which the user plays a game on the mobile phone (that is, the mobile phone plays game audio), the tablet plays a video, and the TV plays music. Before the user wears the headset, the user's ears can hear the game audio from the mobile phone, the video audio from the tablet, and the music audio from the TV. After the user puts on the headset, the headset may send wearing indication information to the mobile phone. In response to the received wearing indication information, the mobile phone (that is, the central device) determines that the user is wearing the headset, and the mobile phone sends mixing trigger indication information to the slave devices (the tablet and the TV) to instruct each device to stop playing audio and output its audio data to the mobile phone, so that the mobile phone performs mixing and outputs the result to the headset for playback, as shown in Figure 13a.
Specifically, in this embodiment of the application, the mobile phone and each slave device may perform soft clock synchronization. The soft clock synchronization is optionally synchronization of the system time between the mobile phone and each slave device, to avoid the audio desynchronization problem caused by network delay. After the mobile phone, the tablet, and the TV perform soft clock synchronization, the system time of the devices is consistent. It should be noted that in this embodiment of the application, only synchronization of the system time by the soft clock is used as an example for description, which is not limited in this application. It should be further noted that the soft clock synchronization step may be executed at any moment after the central device is elected and before the mobile phone performs mixing, which is not limited in this application.
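The embodiment does not specify how the soft clock offset is measured, so the following is only a minimal illustrative sketch assuming an NTP-style two-way exchange over the existing link; the function names and the shape of the exchange are assumptions, not part of the described method.

```python
import time

def estimate_clock_offset(send_request, local_clock=time.time):
    """Estimate the slave-to-master clock offset with one two-way exchange.

    `send_request` is assumed to return the master's timestamps (t2, t3):
    t2 = master receive time, t3 = master reply time. t1 and t4 are taken
    on the slave. This mirrors a basic NTP-style exchange and is only an
    illustrative assumption about the soft clock synchronization step.
    """
    t1 = local_clock()                 # slave: request sent
    t2, t3 = send_request()            # master: request received / reply sent
    t4 = local_clock()                 # slave: reply received
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    return offset                      # add to the slave clock to match the master

def to_master_time(local_timestamp, offset):
    """Convert a locally stamped audio frame time to the shared soft clock."""
    return local_timestamp + offset
```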
The following description uses the interaction procedure between the tablet and the mobile phone as an example; the processing procedure and interaction procedure between the TV side and the mobile phone are the same and are not described again here. Referring to Figure 18, exemplarily, before the tablet receives the mixing trigger indication information, the tablet plays audio A through its own audio device (for example, a speaker). While the tablet plays audio A, its internal processing procedure follows the volume control method described above, that is, the media manager in the tablet adjusts the output volume through the volume parameter; for details, refer to the description above, which is not repeated here. After the tablet receives the mixing trigger indication information, it determines that an audio change occurs, that is, the output device is switched: the output device is changed from the tablet's audio device to the mobile phone.
Continuing to refer to Figure 18, exemplarily, the video application in the tablet outputs the input audio data of audio A to the media manager, and the corresponding input volume is data_in(A). As described above, after determining that an audio change occurs, the media manager re-executes the volume parameter acquisition procedure. Exemplarily, the media manager may determine the volume parameter of audio A based on the input volume of audio A and the output volume range currently saved by the tablet. Then, the media manager obtains the output volume of audio A (data_out(A)) based on the input volume, the volume parameter, and the output volume parameter of audio A; for details, refer to the foregoing embodiments, which are not repeated here.
Next, in the mixing scenario, the media manager of the tablet adds soft clock information to the output audio data of audio A. The soft clock information is the time information obtained after the soft clock synchronization described above. For the specific manner of adding the soft clock information, refer to existing technical embodiments, which are not described again in this application.
Exemplarily, the media manager of the tablet outputs, to the Wi-Fi driver, the output audio data of audio A to which the soft clock information has been added, where the output volume corresponding to the output audio data of audio A is data_out(A). The Wi-Fi driver of the tablet transmits the output audio data of audio A (with the clock information added, which is not repeated below) to the mobile phone.
Still referring to Figure 18, exemplarily, the Wi-Fi driver of the mobile phone receives the output audio data of audio A, and the Wi-Fi driver of the mobile phone outputs the output audio data of audio A to the media manager. It should be noted that, similar to the description above, the audio data of audio A received by the media manager is, from the perspective of the media manager, the input audio data of audio A. For ease of description, the description below still refers to the output audio data of audio A and does not replace this term with the input audio data of audio A.
Exemplarily, on the mobile phone side, before sending the mixing trigger indication information, the mobile phone plays the game audio through its speaker. After the mobile phone detects that the user is wearing the headset, it can determine that the output device is switched to the headset, that is, an audio change occurs. The mobile phone side likewise re-executes the output volume acquisition procedure described above for the audio to be played on the mobile phone side. Specifically, referring to Figure 18, the game application outputs the input audio data of audio C to the media manager of the mobile phone, where the input volume corresponding to the input audio data of audio C is data_in(C).
Exemplarily, the media manager of the mobile phone may obtain the output volume of audio C. Specifically, the media manager of the mobile phone may determine the volume parameter of audio C based on the input volume of audio C (data_in(C)) and the output volume range currently saved by the mobile phone. Then, the media manager obtains the output volume of audio C (data_out(C)) based on the input volume, the volume parameter, and the output volume parameter of audio C; for details, refer to the foregoing embodiments, which are not repeated here.
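As a minimal sketch of this per-stream volume step: the exact volume parameter formula is given earlier in this document (formula (2)), so the clamping rule below is an assumption; only the final multiplication of the samples by the volume parameter and output volume parameter follows the text directly.

```python
import numpy as np

def compute_volume_parameter(peak_in, out_range):
    """Pick a gain that brings the peak input level into [out_min, out_max].

    Illustrative only: the patent derives the volume parameter from the
    input volume and the saved output volume range; the exact rule used
    here (scale the peak to the nearest range boundary) is an assumption.
    """
    out_min, out_max = out_range
    if peak_in > out_max:
        return out_max / peak_in          # attenuate loud input
    if 0 < peak_in < out_min:
        return out_min / peak_in          # boost quiet input
    return 1.0                            # already inside the range

def apply_output_volume(samples, volume_param, output_volume_param):
    """data_out = data_in * volume parameter * output volume parameter."""
    data_in = np.asarray(samples, dtype=np.float32)
    return data_in * volume_param * output_volume_param

# e.g. data_out_C = apply_output_volume(data_in_C, vol_param, out_vol_param)
```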
Then, the mobile phone may execute the mixing procedure based on the output audio data of the mobile phone's audio C, the received output audio data of audio A sent by the tablet, and the received output audio data of audio B sent by the TV, to obtain the audio data of the mixed audio.
Figure 19 is a schematic diagram of an exemplary mixing procedure. Referring to Figure 19, the procedure specifically includes the following steps.
S1901: Audio soft clock alignment.
Exemplarily, the mobile phone may align audio A, audio B, and audio C based on its own soft clock, the received soft clock in audio A, and the received soft clock in audio B, so that the audio start points of audio A, audio B, and audio C are synchronized. For the specific alignment manner, refer to existing technical embodiments, which are not described again in this application.
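The alignment itself is left to existing techniques; the following is only an illustrative sketch assuming each stream carries a start timestamp on the shared soft clock, and that alignment trims the earlier streams so all three begin at the latest common start time.

```python
import numpy as np

def align_streams(streams, sample_rate):
    """Align audio streams whose frames are stamped on the shared soft clock.

    `streams` maps a name ("A", "B", "C") to (start_time_s, samples).
    Illustrative assumption: alignment trims the head of the earlier
    streams so that all streams start at the latest start time.
    """
    latest_start = max(start for start, _ in streams.values())
    aligned = {}
    for name, (start, samples) in streams.items():
        skip = int(round((latest_start - start) * sample_rate))
        aligned[name] = np.asarray(samples, dtype=np.float32)[skip:]
    # Truncate to the shortest stream so later per-sample mixing lines up.
    n = min(len(s) for s in aligned.values())
    return {name: s[:n] for name, s in aligned.items()}
```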
S1902: Calculate two-channel audio data separately according to the device orientation.
In this embodiment of the application, the media manager of the mobile phone may calculate the time difference, sound level difference, phase difference, timbre difference, and the like between the audio streams based on the relative position information of each device (including the mobile phone, the TV, and the tablet), thereby achieving a stereo effect after the audio of the different devices is mixed.
Exemplarily, the time difference is optionally the time difference between the sound arriving at the user's two ears (or being played by the two earpieces). When the time difference reaches about 0.6 ms, the user perceives the sound as coming entirely from one side. In other words, by adjusting the time difference between the audio output by the two earpieces, the user can be made to perceive that the sound source of the audio is shifted toward a certain direction.
Exemplarily, the sound level difference optionally means that the sound level on the side close to the sound source is larger and the sound level on the other side is smaller. When the sound source is on one side of the user, the sound level difference between the audio heard by the user's two ears (or played by the two earpieces) can reach about 25 dB. In this embodiment of the application, the sound level difference of the audio between the two earpieces may be adjusted, for example by increasing the sound level of the audio in one channel of the headset while keeping the sound level of the other channel unchanged or decreasing it, so that the user perceives that the sound source of the audio is shifted toward a certain direction.
Exemplarily, the phase difference is optionally the phase difference between the audio received by the two earpieces. It should be noted that even if the sound level and arrival time of the sound received by the two earpieces are the same, adjusting the phase between the audio received by the two earpieces can likewise make the user perceive that the sound source of the audio is shifted toward a certain direction.
Exemplarily, the timbre difference is optionally the difference in timbre (that is, frequency content) between the audio received by the two earpieces. The higher the frequency of the audio, the greater the attenuation when it travels around the head to reach the other ear. Correspondingly, in this embodiment of the application, the timbre of the audio received by the two earpieces may be adjusted so that the user perceives that the sound source of the audio is shifted toward a certain direction.
The mixing procedure in this embodiment of the application is described in detail below by taking, as an example, adjusting the time difference of the audio output to the left and right channels of the headset based on the direction information in the relative position. Referring to Figure 20, exemplarily, the output audio data of audio A, the output audio data of audio B, and the output audio data of audio C obtained by the media manager of the mobile phone are shown in Figure 20, where each digit represents 4 bits, and each sampling period is 16 bits, that is, occupies the length of two cells in Figure 20. It should be noted that the audio data shown in Figure 20 is only a schematic example and is not limited in this application.
The media manager of the mobile phone may obtain, based on the orientation between each device and the headset (that is, the angle information in the relative position information), the time difference of each audio stream between the left and right channels of the headset, and may adjust the time difference of the audio between the left and right channels to create a virtual position of the sound source in the user's hearing, so that the virtual sound source is shifted toward a certain direction and approximates the relative position between the actual sound source (for example, the tablet) and the mobile phone.
For example, taking the orientation between each device and the headset in Figure 16 as an example, the tablet is at the right front of the headset, and the included angle between the tablet and the headset is angle C. Referring to (1) of Figure 21a, for audio A of the tablet, the audio output to the left channel may be delayed by 3 sampling periods. That is, the start point of the right-channel audio and the start point of the left-channel audio differ by 3 sampling periods. This can be understood as: the right channel of the headset plays audio A first, and after 3 sampling periods, the left channel plays audio A, thereby implementing the time difference of audio A between the left channel and the right channel of the headset. As shown in Figure 21b, because there is a time difference between the audio received by the left and right channels, the sound source can be adjusted so that the user aurally perceives the sound source of audio A, that is, the virtual sound source, as being at the right front of the user, close to the actual orientation between the tablet and the headset. The principle of adjusting the time difference for the audio of the TV and the mobile phone can likewise refer to Figure 21b and is not repeated below.
Referring to (2) of Figure 21a, exemplarily, still taking the orientation in Figure 16 as an example, the TV is directly in front of the headset, that is, the included angle between the TV and the headset (that is, angle B) is 90 degrees. Correspondingly, the audio of audio B output to the left channel of the headset is identical to the audio output to the right channel of the headset, so that the virtual sound source is directly in front of the user in the user's auditory perception.
Referring to (3) of Figure 21a, exemplarily, still taking the orientation in Figure 16 as an example, the mobile phone is at the left front of the headset, and the included angle between the mobile phone and the headset is angle A. The media manager of the mobile phone may delay the audio output to the right channel by 3 sampling periods. That is, the start point of the left-channel audio and the start point of the right-channel audio differ by 3 sampling periods. This can be understood as: the left channel of the headset plays audio C first, and after 3 sampling periods, the right channel plays audio C, thereby implementing the time difference of audio C between the left channel and the right channel of the headset. Because there is a time difference between the audio received by the left and right channels, the sound source can be adjusted so that the user aurally perceives the sound source of audio C, that is, the virtual sound source, as being at the left front of the user.
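A minimal sketch of this per-device channel delay is shown below; the 3-sample delays match the example in the text, while the helper name and the way a delay value would be derived from the angle are assumptions (the text leaves that mapping unspecified).

```python
import numpy as np

def pan_by_delay(mono, delay_samples):
    """Create left/right channels whose only difference is a small delay.

    Positive `delay_samples` delays the LEFT channel (source perceived to
    the right), negative delays the RIGHT channel (source perceived to the
    left), and zero keeps both channels identical (source straight ahead).
    """
    mono = np.asarray(mono, dtype=np.float32)
    pad = np.zeros(abs(delay_samples), dtype=np.float32)
    if delay_samples > 0:        # right leads, left delayed
        right = np.concatenate([mono, pad])
        left = np.concatenate([pad, mono])
    elif delay_samples < 0:      # left leads, right delayed
        left = np.concatenate([mono, pad])
        right = np.concatenate([pad, mono])
    else:
        left = right = mono
    return left, right

# audio A (tablet, right front):  left_A, right_A = pan_by_delay(audio_A, 3)
# audio B (TV, straight ahead):   left_B, right_B = pan_by_delay(audio_B, 0)
# audio C (phone, left front):    left_C, right_C = pan_by_delay(audio_C, -3)
```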
It should be noted that this example only describes adjusting the time difference based on the orientation information, that is, the media manager adjusts the delay of the audio in the left and right channels to shift the direction of the virtual sound source. In a possible implementation, as described above, the relative position information corresponding to each device in the network may include distance information and/or angle information. In a scenario where the relative position information includes both distance information and angle information, the media manager of the central device (that is, the mobile phone) may further adjust each audio stream based on the distance information. Exemplarily, still taking the scenario shown in Figure 16 as an example, the media manager of the mobile phone may obtain the distance attenuation value of the mobile phone's audio C based on the distance information between the mobile phone and the headset (that is, distance A); the media manager may obtain the distance attenuation value of the TV's audio B based on the received distance information between the TV and the headset (that is, distance B); and the media manager obtains the distance attenuation value of the tablet's audio A based on the received distance information between the tablet and the headset (that is, distance C). Exemplarily, the media manager may obtain the distance attenuation value based on formula (11):
Lp = 20·lg(D/D_min)   (11)
In formula (11), D is the distance value between a device and the headset, and D_min is the minimum distance value among the distance values between the devices and the headset. That is, in this embodiment of the application, the media manager uses the device with the smallest distance as the reference to calculate the volume attenuation values of the other devices. This calculation manner is only a schematic example and is not limited in this application.
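Formula (11) can be computed directly; the sketch below only evaluates the per-device attenuation in dB, and the device names in the usage comment are illustrative.

```python
import math

def distance_attenuation_db(distances):
    """Per-device attenuation Lp = 20*lg(D / D_min), as in formula (11).

    `distances` maps a device name to its distance to the headset. The
    device closest to the headset gets 0 dB; farther devices get a positive
    attenuation value. How the dB value is applied to the samples is
    covered in the surrounding text, not here.
    """
    d_min = min(distances.values())
    return {name: 20.0 * math.log10(d / d_min) for name, d in distances.items()}

# e.g. distance_attenuation_db({"phone": 1.2, "tv": 0.8, "tablet": 2.0})
# -> {"phone": ~3.5 dB, "tv": 0.0 dB, "tablet": ~8.0 dB}
```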
Exemplarily, the media manager may apply the attenuation value corresponding to the distance information to the audio data of each audio stream either before performing the steps shown in Figure 21a or after performing the steps shown in Figure 21a. For example, before performing the steps shown in Figure 21a, the media manager may apply, to the audio data of audio A shown in Figure 20, the distance attenuation value corresponding to audio A; apply, to the audio data of audio B, the distance attenuation value corresponding to audio B; and apply, to the audio data of audio C, the distance attenuation value corresponding to audio C. Because the distance corresponding to the TV is the smallest among the device distances, correspondingly, as described above, the distance attenuation value corresponding to audio B is optionally 0. The media manager may continue to execute the procedure in Figure 21a based on the results obtained for each audio stream.
In another example, after the media manager performs the steps of Figure 21a, that is, after the audio data of the left and right channels corresponding to each audio stream is obtained, the media manager may apply the attenuation value to the audio data of the left and right channels of each audio stream separately. Taking audio A as an example, the media manager may apply the distance attenuation value corresponding to audio A to the left-channel audio data to obtain the output audio data of the left channel, and apply the distance attenuation value corresponding to audio A to the right-channel audio data to obtain the output audio data of the right channel. The media manager processes the left- and right-channel audio data of each audio stream in turn, and continues to execute the procedure in Figure 22 based on the processed results.
S1903: Perform linear mixing on the two-channel audio data of the multiple devices.
Exemplarily, referring to Figure 22, the media manager of the mobile phone superimposes the output audio data of audio A, the output audio data of audio B, and the output audio data of audio C corresponding to the right channel to obtain the mixed audio of the right channel. Likewise, the media manager superimposes the output audio data of audio A, the output audio data of audio B, and the output audio data of audio C corresponding to the left channel to obtain the mixed audio of the left channel. Optionally, to prevent the superimposed audio data from overflowing, the media manager may average the superimposed audio data of the left and right channels to obtain the output audio data of the mixed audio for each of the left and right channels, where the corresponding output volume is data_out(X1).
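A minimal sketch of this superposition-and-average step is shown below; the averaging is the overflow precaution named in the text, and any further scaling choices are assumptions of the sketch.

```python
import numpy as np

def linear_mix(channel_streams):
    """Superimpose the per-device data of one channel and average the sum.

    `channel_streams` is a list of equally long sample arrays for one
    channel (for example, the right-channel data of audio A, B, and C
    after the delay and attenuation steps).
    """
    stack = np.stack([np.asarray(s, dtype=np.float32) for s in channel_streams])
    return stack.sum(axis=0) / len(channel_streams)

# right_mix = linear_mix([right_A, right_B, right_C])   # data_out(X1), right channel
# left_mix  = linear_mix([left_A,  left_B,  left_C])    # data_out(X1), left channel
```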
It should be noted that in this embodiment of the application, only adjusting the time difference between the left and right channels is used to change the orientation of the virtual sound source. In other embodiments, the media manager may also adjust the timbre difference, phase difference, and/or sound level difference, and so on, to achieve the stereo effect; examples are not given one by one in this application.
It should be further noted that delaying sampling points to implement the time difference in this embodiment of the application is only a schematic example. In other embodiments, the media manager may also obtain the stereo sound effect based on an HRTF (Head-Related Transfer Function) algorithm, which is not limited in this application.
Continuing to refer to Figure 18, exemplarily, after the media manager of the mobile phone obtains the output audio data of the mixed audio of the left and right channels, the media manager may perform volume parameter adjustment on the output volume of the mixed audio (data_out(X1)), to bring the output volume of the mixed audio (data_out(X1)) into the output volume range of the mobile phone.
Optionally, as shown in Figure 23, the media manager may obtain the volume parameter of the left-channel audio based on the output volume of the left-channel audio obtained in Figure 22 and the output volume range; for the specific acquisition manner, refer to the volume parameter acquisition manner described above, which is not repeated here. The media manager multiplies the left-channel audio data by the output volume parameter and the volume parameter (that is, as shown in formula (2) above) to obtain the output audio data of the left channel, with a corresponding output volume of data_out(X2), thereby optimizing the output volume and bringing the output volume of the left-channel audio into the output volume range. Exemplarily, because the audio data of the left and right channels differ only by a delay and their output volumes are actually the same, the volume parameter corresponding to the right-channel audio is the same as that of the left channel. The media manager may multiply the right-channel audio data by the output volume parameter and the volume parameter to obtain the output audio data of the right channel, with a corresponding output volume of data_out(X2), thereby optimizing the output volume and bringing the output volume of the right-channel audio into the output volume range. It should be noted that in other embodiments, the left- and right-channel audio data may also be multiplied only by the volume parameter to adjust the output volume, which is not limited in this application.
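A sketch of this post-mix step follows; deriving the gain once from the left channel and reusing it for the right channel follows the text, while the peak-based gain rule is an assumption (the patent refers back to its own volume parameter formula for the exact computation).

```python
import numpy as np

def adjust_mix_volume(left_mix, right_mix, out_range, output_volume_param=1.0):
    """Bring the mixed output (data_out(X1)) into the phone's output range.

    The volume parameter is computed from the left channel and reused for
    the right channel, since the two channels differ only by a delay and
    share the same level. The resulting output volume is data_out(X2).
    """
    left_mix = np.asarray(left_mix, dtype=np.float32)
    right_mix = np.asarray(right_mix, dtype=np.float32)
    peak = float(np.max(np.abs(left_mix))) or 1.0
    out_min, out_max = out_range
    if peak > out_max:
        volume_param = out_max / peak
    elif peak < out_min:
        volume_param = out_min / peak
    else:
        volume_param = 1.0
    gain = volume_param * output_volume_param
    return left_mix * gain, right_mix * gain      # data_out(X2) for both channels
```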
Continuing to refer to Figure 18, exemplarily, the media manager outputs the obtained output audio data of the mixed audio (including the output audio data of the left channel and the right channel) to the Bluetooth driver, where the output volume of the mixed audio is data_out(X2). The Bluetooth driver may output the output audio data of the mixed audio to the headset through the Bluetooth connection. Specifically, the left channel of the headset plays the output audio data of the mixed audio corresponding to the left channel described above (the left-channel audio data shown in Figure 23), with a corresponding playback volume of data_out(X2); the right channel of the headset plays the output audio data of the mixed audio corresponding to the right channel described above (the right-channel audio data shown in Figure 23), with a corresponding playback volume of data_out(X2).
It should be noted that in the foregoing embodiment, when the mobile phone mixes the audio based on the obtained relative position information, the mobile phone always performs the calculation based on the relative positions obtained at that time. That is, in this scenario, the positions of the sound sources represented by the stereo sound played by the headset are optionally as shown in Figure 16; in other words, the positions of the sound sources remain unchanged. In a possible implementation, while the devices in the network exchange audio data, the relative position with respect to the headset may be obtained periodically, and each slave device may periodically (the period may be set based on actual requirements and is not limited in this application) send the relative position information to the central device. After obtaining new relative position information, the mobile phone may mix the audio of the devices based on the newly obtained relative position information; the specific mixing manner is the same as above and is not repeated here. In this way, in this embodiment of the application, the central device can adjust the mixing effect of the mixed audio based on the relative position information obtained in real time, so as to adjust the relative position between each virtual sound source and the headset. For example, while the user wears the headset and walks around the room, the mobile phone can adjust, based on the change of the relative position between each device and the headset, the attenuation value and the time difference (or the timbre difference, and so on, which is not limited in this application) of each audio stream in the left and right channels of the headset, thereby changing the positions of the virtual sound sources, obtaining a stereo effect that better matches reality, and improving the user experience.
An embodiment of the present application further provides a control method to support audio change scenarios in a mixing scenario with multi-device playback. In the mixing scenario with multi-device playback, audio changes include, but are not limited to: switching the mode, switching the device, and switching the audio source. Exemplarily, switching the mode is optionally switching between the multi-device mixing mode and the single-device mode; switching the device is optionally switching the audio source device in the single-device mode; and switching the audio source is optionally switching the audio source played by at least one device in the multi-device mixing mode, or by the audio source device in the single-device mode.
The foregoing switching scenarios are described one by one below with specific embodiments. Figure 24 is a schematic flowchart of the control method in an exemplary mode switching scenario. Referring to Figure 24, the method specifically includes the following steps.
S2401: The headset sends mode switching indication information to the mobile phone.
Exemplarily, the headset in this embodiment of the application may provide control schemes corresponding to the various switching functions described above; for example, the user may pinch the headset to indicate mode switching. Optionally, the user operation described in this embodiment of the application may also be a voice input of the user; for example, the user speaks a specified voice instruction toward the sound pickup apparatus of the headset (for example, a microphone), the headset detects the user instruction and outputs the instruction to the mobile phone, and the mobile phone can recognize the voice instruction. It should be noted that the user operations in the embodiments of this application are only schematic examples and are not limited in this application, and are not described repeatedly below.
Exemplarily, after receiving the user operation, the headset may send mode switching indication information to the central device (that is, the mobile phone). In this embodiment, the current mode of the network is the mixing mode, that is, the mode shown in Figure 13a and Figure 13b, which is used as an example for description. Correspondingly, after receiving the mode switching indication information, the mobile phone can determine to switch the current mode, that is, the mixing mode, to the single-device mode. Of course, if the current mode in the network is the single-device mode, the mobile phone, after receiving the mode switching indication information, can determine to switch the current mode, that is, the single-device mode, to the mixing mode; the specific scheme is described in steps S2404 to S2407.
It should be noted that the user operations and gestures described in this application are only schematic examples; for example, the user may tap the headset to indicate mode switching, which is not limited in this application.
S2402a: The mobile phone sends pause playback indication information to the TV.
S2402b: The mobile phone sends pause playback indication information to the tablet.
Exemplarily, after the mobile phone, in response to the received mode switching indication information, determines to switch the current mode, that is, the mixing mode, to the single-device mode, the mobile phone may send pause playback indication information to the TV and the tablet respectively, to instruct the TV and the tablet to pause audio playback. In response to the received pause playback indication information, the TV and the tablet stop transmitting audio data to the central device (that is, the mobile phone); moreover, the TV and the tablet do not play audio on their own devices either.
S2403: The mobile phone outputs audio C to the headset.
Exemplarily, the mobile phone outputs the output audio data of audio C to the headset. In one example, the mobile phone may still perform audio adjustment on the mobile phone's audio C, for example the adjustment of the time difference of the audio between the left and right channels described above, to simulate the real orientation of the sound source. Specifically, the media manager of the mobile phone may adjust the output volume of audio C based on the volume parameter corresponding to audio C (for acquisition of the volume parameter, refer to the description above, which is not repeated here). The media manager then adjusts, based on the relative position information between the mobile phone and the headset, the time difference of audio C between the left and right channels (or the timbre difference and so on, which is not limited in this application), the output volume attenuation, and the like. In another example, in the single-device mode, the mobile phone may also skip the adjustment of the sound source orientation, that is, directly output the audio data according to the procedure in Figure 9a; in other words, the left- and right-channel audio data played by the headset and the corresponding output volumes are the same, which is not limited in this application.
It should be noted that in this embodiment of the application, after the mixing mode is switched to the single-device mode, the audio source device in the single-device mode is the central device by default. In other embodiments, the user may also control the switching of the audio source device in the single-device mode through the headset or the central device (that is, the mobile phone); the specific implementation is described in Figure 25.
S2404: The headset outputs mode switching indication information to the mobile phone.
Exemplarily, the user may again pinch the headset (or perform another operation, which is not limited in this application) to indicate mode switching. In response to the received user operation, the headset sends mode switching indication information to the central device (that is, the mobile phone). After receiving the mode switching indication information, the mobile phone determines to switch the current mode, that is, the single-device mode, to the mixing mode. It should be noted that this example describes control by the user through the headset; in other embodiments, the user may also perform the control on the central device, which is not limited in this application.
S2405a: The mobile phone sends continue playback indication information to the TV.
S2405b: The mobile phone sends continue playback indication information to the tablet.
Exemplarily, after determining to switch the single-device mode to the mixing mode, the mobile phone sends continue playback indication information to the TV and the tablet respectively, to instruct the TV and the tablet to continue transmitting the corresponding audio to the mobile phone.
S2406a: The TV outputs audio B to the mobile phone.
S2406b: The tablet outputs audio A to the mobile phone.
Exemplarily, in response to the received continue playback indication information sent by the mobile phone, the TV resumes transmission from the breakpoint, that is, the TV continues to send to the mobile phone the audio following the moment at which playback was paused. The same applies to the tablet, which is not described again here.
S2407: The mobile phone outputs the mixed audio to the headset.
Exemplarily, the mobile phone executes the mixing procedure described above based on the audio data of audio C corresponding to the mobile phone itself, the received audio data of audio B from the TV, and the received audio data of audio A from the tablet; for the specific implementation, refer to the description above, which is not repeated here.
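As a rough illustration of how a central device might dispatch the S2401–S2407 messages, the sketch below models the mode toggle as a tiny state machine; the message names, class structure, and transport are all assumptions for illustration and are not taken from the described method.

```python
from enum import Enum, auto

class Mode(Enum):
    MIXING = auto()
    SINGLE_DEVICE = auto()

class CentralDevice:
    """Toy dispatcher for the S2401-S2407 mode switching flow."""

    def __init__(self, slaves):
        self.mode = Mode.MIXING
        self.slaves = slaves            # e.g. ["tv", "tablet"]

    def on_switch_mode_indication(self):
        if self.mode is Mode.MIXING:
            # S2402a/S2402b: tell slaves to pause, then play only local audio (S2403).
            for dev in self.slaves:
                self.send(dev, "pause_playback")
            self.mode = Mode.SINGLE_DEVICE
        else:
            # S2405a/S2405b: ask slaves to resume streaming, then mix again (S2407).
            for dev in self.slaves:
                self.send(dev, "continue_playback")
            self.mode = Mode.MIXING

    def send(self, device, message):
        print(f"-> {device}: {message}")   # placeholder for the Wi-Fi link
```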
Figure 25 is a schematic flowchart of the control method in an exemplary device switching scenario. Referring to Figure 25, the method specifically includes the following steps.
S2501: The headset sends device switching indication information to the mobile phone.
Exemplarily, in the single-device mode, in response to a received user operation (for example, three taps, which is not limited in this application), the headset sends device switching indication information to the mobile phone to indicate switching of the audio source device.
It should be noted that the procedure in Figure 25 is implemented in the single-device mode; that is, the network needs to switch the mode to the single-device mode first before the procedure in Figure 25 can be carried out. As described above, after the mixing mode is switched to the single-device mode, the default audio source device is optionally the central device (that is, the mobile phone); in the single-device mode, the user may instruct the central device to switch the current audio source device to a specified device, for example, the TV.
S2502: The mobile phone sends continue playback indication information to the TV.
Exemplarily, in response to the received device switching indication information, the mobile phone determines to switch the audio source device to the TV. It should be noted that if the user performs control through the headset, the mobile phone may, in response to the received device switching indication information, switch the audio source device in a predefined order. The order may be based on the distance to the headset, or may be set based on other rules, which is not limited in this application. For example, after receiving the device switching indication information, the mobile phone may switch the audio source device to the TV according to the order; if the mobile phone receives the device switching indication information again, it may switch the audio source device to the tablet according to the order. Of course, the user may also control, on the mobile phone, the switching of the audio source device to a specified device, which is not limited in this application and is not described repeatedly below.
Exemplarily, in this embodiment, in response to the received device switching indication, the mobile phone stops transmitting the mobile phone's audio to the headset, and the tablet remains in the paused playback state. As described above, after the mixing mode is switched to the single-device mode, both the TV and the tablet pause audio playback. Exemplarily, the mobile phone may send a continue playback indication to the TV to instruct the TV to play audio.
S2503: The TV outputs audio B to the mobile phone.
Exemplarily, in response to the received continue playback indication information, the TV continues to send audio B to the mobile phone. It should be noted that the audio B output by the TV may be the audio following the moment at which playback was paused, that is, playback resumed from the breakpoint; or audio B may be output again from the beginning, which is not limited in this application.
S2504: The mobile phone outputs audio B to the headset.
Exemplarily, the mobile phone receives the audio data of audio B sent by the TV, processes the audio data, and outputs the output audio data corresponding to audio B to the headset. In one example, the mobile phone may perform the processing based on the cross-device transmission scheme of Figure 17; for the specific implementation, refer to the description of Figure 17, which is not repeated here. In another example, the mobile phone may process audio B based on the mixing scheme; the processing manner is similar to the description in S2403 and is not repeated here.
S2505: The headset sends device switching indication information to the mobile phone.
Exemplarily, as described above, the user may control the headset multiple times to switch the audio source device sequentially in the single-device mode. In response to the received user operation, the headset sends device switching indication information to the mobile phone.
S2506: The mobile phone sends pause playback indication information to the TV.
S2507: The mobile phone sends continue playback indication information to the tablet.
S2508: The tablet outputs audio A to the mobile phone.
Exemplarily, in response to the received device switching indication information, the mobile phone determines that the audio source device needs to be switched from the TV to the tablet. Correspondingly, the mobile phone sends pause playback indication information to the TV to instruct the TV to pause audio playback, and the mobile phone sends continue playback indication information to the tablet to instruct the tablet to continue playing audio.
Exemplarily, in response to the received pause playback indication information, the TV pauses audio playback, that is, it no longer transmits audio data to the mobile phone. In response to the received continue playback indication information, the tablet sends the audio data following the pause to the mobile phone.
S2509: The mobile phone outputs audio A to the headset.
The specific description is similar to S2504 and is not repeated here.
It should be noted that, although not described in Figures 24 to 26, in each solution, before each device outputs audio data, the audio data needs to be processed according to the volume control scheme in the embodiments of the present application to adjust the output volume of the audio data; for the specific implementation, refer to the description above, which is not repeated here.
Figure 26 is a schematic flowchart of the control method in an exemplary audio source switching scenario. Referring to Figure 26, the method specifically includes the following steps.
S2601: The headset sends audio source switching indication information to the mobile phone.
Exemplarily, the headset receives a user operation (the user operation may be set according to actual requirements and is not limited in this application), where the user operation is used to indicate switching of the audio source. In response to the received user operation, the headset sends audio source switching indication information to the mobile phone.
S2602: The mobile phone sends audio source switching indication information to the TV.
Exemplarily, in response to the received audio source switching indication information, the mobile phone sends audio source switching indication information to the TV.
S2603a: The TV outputs audio D to the mobile phone.
Exemplarily, in response to the received audio source switching indication information, the TV switches the output audio source, for example switches audio B to audio D, and outputs the audio data corresponding to audio D to the mobile phone. It should be noted that, for the TV, after the TV detects the audio change (that is, the audio source switching), the TV side re-executes the volume parameter acquisition procedure and adjusts the output volume of audio D to obtain the output volume of audio D; for details, refer to the description above, which is not repeated here.
S2603b: The tablet outputs audio A to the mobile phone.
Exemplarily, in this example the TV switches its audio source while the tablet does not receive an indication to switch its audio source; correspondingly, the tablet continues to output the audio data corresponding to audio A to the mobile phone.
S2604: The mobile phone outputs the mixed audio to the headset.
Exemplarily, the mobile phone executes the mixing procedure described above. It should be noted that the mobile phone likewise detects the audio source switching, that is, the audio source input from the TV is switched. Correspondingly, when mixing, the mobile phone side also needs to re-execute the volume parameter acquisition procedure. After obtaining the audio data of the mixed audio, the mobile phone outputs the audio data to the headset, and the headset plays the audio data of the mixed audio.
In a possible implementation, audio source switching may also be implemented in the single-device mode. The principle is similar to that in Figure 26: the central device likewise sends audio source switching indication information to the current audio source device. It can be understood that in the embodiments of this application, the control information in the network is delivered by the central device to each slave device; for the specific implementation, refer to the description above, which is not repeated here.
In another possible implementation, in the mixing mode, in response to the received audio source switching indication information, the mobile phone may send audio source switching indication information to each device in the network. Each slave device in the network, as well as the mobile phone, switches its audio source; the specific implementation is similar to that in Figure 26 and is not described again here.
It should be noted that, for the parts not described in Figures 24 to 26, refer to the relevant content in the foregoing embodiments; the description is not repeated here.
In a possible implementation, the audio change scenarios in the embodiments of the present application include audio source switching, output device switching, audio source device switching (including audio source device switching in a multi-device collaboration scenario and in a mixing scenario), and the like. Exemplarily, after an audio change, the audio played by the device may have a transition period of several seconds, causing the audio to be incoherent and affecting the user's audio-visual experience. An embodiment of the present application therefore further provides an audio change transition solution, which enables a smooth transition after the audio change and avoids the breakpoint problem caused by the audio change. Specifically, still taking the mobile phone as the audio output device as an example, after detecting an audio change, the mobile phone (specifically, the media manager) may use a Hanning window to implement a fade-in/fade-out for the audio switching. For example, as shown in Figure 27, the audio played by the mobile phone before the audio source is switched is audio A, and the audio played after the switching is audio B. The media manager takes the audio data of a preset duration (for example, 3 s) before the switching time point of audio A (that is, the fade-out part shown in the figure), and takes the audio data of the same preset duration (that is, 3 s) at the beginning of audio B (that is, the fade-in part shown in the figure). The media manager sets a Hanning window whose length is the preset duration, for example 3 s. As shown in Figure 27, the Hanning window may include a first sub-window (that is, the first half) and a second sub-window (that is, the second half), and the length of the first sub-window is the same as the length of the second sub-window. Based on the first sub-window, the media manager processes the audio after the switching, that is, the audio data of the preset duration of audio B, so that the output volume of the fade-in part of the audio data gradually increases. Based on the second sub-window, the media manager processes the audio before the switching, that is, the audio data of the preset duration of audio A, so that the output volume of the fade-out part of the audio data gradually decreases. Specifically, the media manager may obtain the audio data of the fade-in effect and the audio data of the fade-out effect based on the following formulas:
Fade-in audio data = fade-in audio data × first sub-window window function   (12)
Fade-out audio data = fade-out audio data × second sub-window window function   (13)
According to formula (12) and formula (13), the media manager may multiply the audio data corresponding to the fade-in part of audio B by the first sub-window of the Hanning window (that is, the first half of the window function) to obtain the audio data of the fade-in part (referred to as the fade-in audio data for short), and multiply the audio data corresponding to the fade-out part of audio A by the second sub-window of the Hanning window (that is, the second half of the window function) to obtain the audio data of the fade-out part.
The media manager may superimpose the obtained fade-in audio data and fade-out audio data to obtain the audio data to be played at the audio change. For example, still referring to Figure 27, exemplarily, take the case in which the mobile phone switches to audio B while playing audio A. The media manager obtains the fade-out audio data of audio A and the fade-in audio data of audio B in the manner described above, and may superimpose the fade-in audio data and the fade-out audio data to obtain the cross-fade audio data, thereby achieving a smooth transition of the audio while maintaining the original audio length. Correspondingly, the audio data transmitted by the media manager to the audio driver is the superimposed audio shown in Figure 27; during the audio switching, what is played in the headset is the cross-fade part. Correspondingly, in the cross-fade part heard by the user, the audio data of audio A gradually decreases and the audio data of audio B gradually increases. After the cross-fade part finishes playing, the audio data of audio B continues to be played.
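A minimal sketch of this cross-fade, following formulas (12) and (13), is shown below; the segment extraction and the 48 kHz figure in the usage comment are assumptions for illustration.

```python
import numpy as np

def hanning_crossfade(fade_out_seg, fade_in_seg):
    """Cross-fade two equal-length segments with the two halves of a Hanning window.

    `fade_out_seg` is the tail of audio A (preset duration before the switch
    point); `fade_in_seg` is the head of audio B (same preset duration).
    Following formulas (12) and (13): the fade-in segment is weighted by the
    first (rising) half of the window, the fade-out segment by the second
    (falling) half, and the two results are superimposed.
    """
    fade_out_seg = np.asarray(fade_out_seg, dtype=np.float32)
    fade_in_seg = np.asarray(fade_in_seg, dtype=np.float32)
    n = len(fade_out_seg)
    window = np.hanning(2 * n)
    rising, falling = window[:n], window[n:]
    return fade_in_seg * rising + fade_out_seg * falling

# e.g. with a 3 s window at 48 kHz: each segment holds n = 3 * 48000 samples.
```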
It can be understood that, in order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. In combination with the algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software.
In one example, Figure 28 shows a schematic block diagram of a device 2800 according to an embodiment of the present application. The device 2800 may include a processor 2801 and a transceiver/transceiver pin 2802, and optionally a memory 2803.
The components of the device 2800 are coupled together through a bus 2804, where the bus 2804 includes, in addition to a data bus, a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are all referred to as the bus 2804 in the figure.
Optionally, the memory 2803 may be used to store the instructions in the foregoing method embodiments. The processor 2801 may be used to execute the instructions in the memory 2803, control the receiving pin to receive signals, and control the transmitting pin to send signals.
The device 2800 may be the electronic device or a chip of the electronic device in the foregoing method embodiments.
All relevant content of each step involved in the foregoing method embodiments may be cited in the functional description of the corresponding functional module, and details are not repeated here.
This embodiment also provides a computer storage medium storing computer instructions. When the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the method in the foregoing embodiments.
This embodiment also provides a computer program product. When the computer program product is run on a computer, the computer is caused to perform the above related steps to implement the method in the foregoing embodiments.
In addition, the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module. The apparatus may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions. When the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the methods in the foregoing method embodiments.
The electronic device, computer storage medium, computer program product, or chip provided in this embodiment is used to execute the corresponding method provided above. Therefore, for the beneficial effects that it can achieve, reference may be made to the beneficial effects of the corresponding method provided above, and details are not repeated here.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific implementations. The above specific implementations are only illustrative and not restrictive. Inspired by the present application, a person of ordinary skill in the art can make many other forms without departing from the purpose of the present application and the scope protected by the claims, all of which fall within the protection of the present application.
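As a purely illustrative sketch added for readability, and not part of the original disclosure, the volume correction recited in the claims below — comparing the average volume of a segment of audio data against a pre-acquired output volume range, deriving a volume parameter, and correcting the audio data with it — could look roughly as follows. The dBFS-based gain computation, the function names, and the assumption that samples are floats in [-1, 1] are all assumptions of this example:

```python
import numpy as np

def average_volume_dbfs(samples: np.ndarray) -> float:
    """Average volume of a segment, expressed in dBFS (0 dBFS = full scale)."""
    rms = np.sqrt(np.mean(np.square(samples), dtype=np.float64))
    return 20.0 * np.log10(max(rms, 1e-9))  # guard against log of zero

def correct_volume(samples: np.ndarray, vol_min_db: float, vol_max_db: float) -> np.ndarray:
    """Bring a segment whose average volume falls outside [vol_min_db, vol_max_db]
    back into that range by applying a single gain (the 'volume parameter')."""
    current = average_volume_dbfs(samples)
    if vol_min_db <= current <= vol_max_db:
        return samples                            # already within the output volume range
    target = vol_max_db if current > vol_max_db else vol_min_db
    gain = 10.0 ** ((target - current) / 20.0)    # volume parameter as a linear gain
    return np.clip(samples * gain, -1.0, 1.0)     # corrected audio data
```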

Claims (27)

  1. A volume control method, characterized by comprising:
    an electronic device acquires first audio data;
    the electronic device detects that a first volume of the first audio data does not satisfy a first output volume range, and obtains, based on the first volume and the first output volume range, a first volume parameter corresponding to the first audio data; wherein the first volume is an average volume of audio data within a preset duration of the first audio data, and the first output volume range is acquired in advance;
    the electronic device corrects the first audio data based on the first volume parameter to obtain second audio data; wherein an average volume of the second audio data is a second volume, and the second volume is within the first output volume range;
    the electronic device plays the second audio data.
  2. The method according to claim 1, characterized in that the method further comprises:
    when the electronic device plays the second audio data, an adjustment operation is received, wherein the adjustment operation is used to adjust the volume of the second audio data;
    from the beginning to the end of the adjustment operation, the electronic device collects the volume of the second audio data according to a first cycle duration;
    the electronic device obtains a second output volume range based on the collected volume of the second audio data.
  3. The method according to claim 2, characterized in that the electronic device obtaining the second output volume range based on the collected volume of the second audio data comprises:
    obtaining an average volume of the volumes of the second audio data collected from the beginning to the end of the adjustment operation;
    in a case where the adjustment operation is used to instruct to increase the volume of the second audio data, if the average volume of the collected volumes of the second audio data is greater than the minimum value of the first output volume range, the minimum value of the second output volume range is the average volume of the collected volumes of the second audio data, and the maximum value of the second output volume range is the maximum value of the first output volume range; if the average volume of the collected volumes of the second audio data is less than the minimum value of the first output volume range, the second output volume range is equal to the first output volume range;
    or,
    in a case where the adjustment operation is used to instruct to decrease the volume of the second audio data, if the average volume of the collected volumes of the second audio data is less than the maximum value of the first output volume range, the maximum value of the second output volume range is the average volume of the collected volumes of the second audio data, and the minimum value of the second output volume range is the minimum value of the first output volume range; if the average volume of the collected volumes of the second audio data is greater than the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
  4. The method according to claim 2, characterized in that the method further comprises:
    when the electronic device plays the second audio data, the electronic device collects the volume of the second audio data according to a second cycle duration; the second cycle duration is longer than the first cycle duration;
    the electronic device obtains a second output volume range based on the collected volume of the second audio data.
  5. The method according to claim 4, characterized in that:
    if the collected volume of the second audio data is greater than the maximum value of the first output volume range, the minimum value of the second output volume range is the minimum value of the first output volume range, and the maximum value of the second output volume range is the collected volume of the second audio data; or,
    if the collected volume of the second audio data is less than the minimum value of the first output volume range, the maximum value of the second output volume range is the maximum value of the first output volume range, and the minimum value of the second output volume range is the collected volume of the second audio data; or,
    if the collected volume of the second audio data is greater than or equal to the minimum value of the first output volume range and less than or equal to the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
  6. The method according to any one of claims 2 to 5, characterized in that the method further comprises:
    the electronic device acquires third audio data, wherein an average volume of audio data within a preset duration of the third audio data is a third volume;
    the electronic device detects that the third volume does not satisfy the second output volume range, and obtains, based on the third volume and the second output volume range, a second volume parameter corresponding to the third audio data;
    the electronic device corrects the third audio data based on the second volume parameter to obtain fourth audio data; wherein an average volume of the fourth audio data is a fourth volume, and the fourth volume is within the second output volume range;
    the electronic device plays the fourth audio data.
  7. The method according to claim 1, characterized in that the electronic device detecting that the first volume does not satisfy the first output volume range and obtaining, based on the first volume and the first output volume range, the first volume parameter corresponding to the first audio data comprises:
    if the first volume is greater than the maximum value of the first output volume range, the electronic device obtains the first volume parameter based on the first volume and the maximum value of the first output volume range; or,
    if the first volume is less than the minimum value of the first output volume range, the electronic device obtains the first volume parameter based on the first volume and the minimum value of the first output volume range.
  8. The method according to claim 1, characterized in that the electronic device correcting the first audio data based on the first volume parameter to obtain the second audio data comprises:
    the electronic device obtains the second audio data based on the first audio data, the first volume parameter, and an output volume parameter;
    the output volume parameter includes at least one of the following: a track volume parameter, a stream volume parameter, and a master volume;
    the track volume parameter is used to indicate the set volume of the application that plays the second audio data;
    the stream volume parameter is used to indicate the set volume of the audio stream corresponding to the first audio data;
    the master volume is used to indicate the set volume of the electronic device.
  9. The method according to claim 1, characterized in that the method further comprises:
    the electronic device acquires fifth audio data, wherein an average volume of audio data within a preset duration of the fifth audio data is a fifth volume;
    the electronic device detects that the fifth volume does not satisfy the first output volume range, and obtains, based on the fifth volume and the first output volume range, a third volume parameter corresponding to the fifth audio data;
    the electronic device corrects the fifth audio data based on the third volume parameter to obtain sixth audio data; wherein an average volume of the sixth audio data is a sixth volume, and the sixth volume is within the first output volume range;
    the electronic device sends the sixth audio data to another electronic device; the electronic device and the other electronic device exchange data through a wireless connection;
    the electronic device detects that the connection with the other electronic device is disconnected, and the electronic device obtains audio data to be played in the fifth audio data, wherein an average volume of audio data within a preset duration of the audio data to be played is a seventh volume;
    the electronic device detects that the seventh volume does not satisfy the first output volume range, and obtains, based on the seventh volume and the first output volume range, a fourth volume parameter corresponding to the audio data to be played;
    the electronic device corrects the audio data to be played based on the fourth volume parameter to obtain seventh audio data; wherein an average volume of the seventh audio data is an eighth volume, and the eighth volume is within the first output volume range;
    the electronic device plays the seventh audio data.
  10. The method according to claim 1, characterized in that the method further comprises:
    the electronic device acquires eighth audio data, wherein an average volume of audio data within a preset duration of the eighth audio data is a ninth volume; the eighth audio data is different from the first audio data; the ninth volume is different from the first volume;
    the electronic device detects that the ninth volume does not satisfy the first output volume range, and obtains, based on the ninth volume and the first output volume range, a fifth volume parameter corresponding to the eighth audio data; the fifth volume parameter is different from the first volume parameter;
    the electronic device corrects the eighth audio data based on the fifth volume parameter to obtain ninth audio data; wherein an average volume of the ninth audio data is a tenth volume, and the tenth volume is within the first output volume range;
    the electronic device plays the tenth audio data.
  11. The method according to claim 1, characterized in that the electronic device acquiring the first audio data comprises:
    the electronic device acquires the first audio data from a target application; or,
    the electronic device receives the first audio data sent by a second electronic device.
  12. The method according to claim 1, characterized in that the electronic device playing the second audio data comprises:
    the electronic device plays the second audio data through a speaker; or,
    the electronic device plays the second audio data through an earphone connected to the electronic device.
  13. An electronic device, characterized by comprising:
    one or more processors and a memory;
    and one or more computer programs, wherein the one or more computer programs are stored in the memory, and when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    acquiring first audio data;
    detecting that a first volume of the first audio data does not satisfy a first output volume range, and obtaining, based on the first volume and the first output volume range, a first volume parameter corresponding to the first audio data; wherein the first volume is an average volume of audio data within a preset duration of the first audio data, and the first output volume range is acquired in advance;
    correcting the first audio data based on the first volume parameter to obtain second audio data; wherein an average volume of the second audio data is a second volume, and the second volume is within the first output volume range;
    playing the second audio data.
  14. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    when the electronic device plays the second audio data, receiving an adjustment operation, wherein the adjustment operation is used to adjust the volume of the second audio data;
    from the beginning to the end of the adjustment operation, collecting the volume of the second audio data according to a first cycle duration;
    obtaining a second output volume range based on the collected volume of the second audio data.
  15. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    obtaining an average volume of the volumes of the second audio data collected from the beginning to the end of the adjustment operation;
    in a case where the adjustment operation is used to instruct to increase the volume of the second audio data, if the average volume of the collected volumes of the second audio data is greater than the minimum value of the first output volume range, the minimum value of the second output volume range is the average volume of the collected volumes of the second audio data, and the maximum value of the second output volume range is the maximum value of the first output volume range; if the average volume of the collected volumes of the second audio data is less than the minimum value of the first output volume range, the second output volume range is equal to the first output volume range;
    or,
    in a case where the adjustment operation is used to instruct to decrease the volume of the second audio data, if the average volume of the collected volumes of the second audio data is less than the maximum value of the first output volume range, the maximum value of the second output volume range is the average volume of the collected volumes of the second audio data, and the minimum value of the second output volume range is the minimum value of the first output volume range; if the average volume of the collected volumes of the second audio data is greater than the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
  16. The electronic device according to claim 14, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    when the electronic device plays the second audio data, collecting the volume of the second audio data according to a second cycle duration; the second cycle duration is longer than the first cycle duration;
    obtaining a second output volume range based on the collected volume of the second audio data.
  17. The electronic device according to claim 16, characterized in that:
    if the collected volume of the second audio data is greater than the maximum value of the first output volume range, the minimum value of the second output volume range is the minimum value of the first output volume range, and the maximum value of the second output volume range is the collected volume of the second audio data; or,
    if the collected volume of the second audio data is less than the minimum value of the first output volume range, the maximum value of the second output volume range is the maximum value of the first output volume range, and the minimum value of the second output volume range is the collected volume of the second audio data; or,
    if the collected volume of the second audio data is greater than or equal to the minimum value of the first output volume range and less than or equal to the maximum value of the first output volume range, the second output volume range is equal to the first output volume range.
  18. The electronic device according to any one of claims 14 to 17, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    acquiring third audio data, wherein an average volume of audio data within a preset duration of the third audio data is a third volume;
    detecting that the third volume does not satisfy the second output volume range, and obtaining, based on the third volume and the second output volume range, a second volume parameter corresponding to the third audio data;
    correcting the third audio data based on the second volume parameter to obtain fourth audio data; wherein an average volume of the fourth audio data is a fourth volume, and the fourth volume is within the second output volume range;
    playing the fourth audio data.
  19. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    if the first volume is greater than the maximum value of the first output volume range, obtaining the first volume parameter based on the first volume and the maximum value of the first output volume range; or,
    if the first volume is less than the minimum value of the first output volume range, obtaining the first volume parameter based on the first volume and the minimum value of the first output volume range.
  20. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    obtaining the second audio data based on the first audio data, the first volume parameter, and an output volume parameter;
    the output volume parameter includes at least one of the following: a track volume parameter, a stream volume parameter, and a master volume;
    the track volume parameter is used to indicate the set volume of the application that plays the second audio data;
    the stream volume parameter is used to indicate the set volume of the audio stream corresponding to the first audio data;
    the master volume is used to indicate the set volume of the electronic device.
  21. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    acquiring fifth audio data, wherein an average volume of audio data within a preset duration of the fifth audio data is a fifth volume;
    detecting that the fifth volume does not satisfy the first output volume range, and obtaining, based on the fifth volume and the first output volume range, a third volume parameter corresponding to the fifth audio data;
    correcting the fifth audio data based on the third volume parameter to obtain sixth audio data; wherein an average volume of the sixth audio data is a sixth volume, and the sixth volume is within the first output volume range;
    sending the sixth audio data to another electronic device; the electronic device and the other electronic device exchange data through a wireless connection;
    detecting that the connection with the other electronic device is disconnected, and obtaining, by the electronic device, audio data to be played in the fifth audio data, wherein an average volume of audio data within a preset duration of the audio data to be played is a seventh volume;
    detecting that the seventh volume does not satisfy the first output volume range, and obtaining, based on the seventh volume and the first output volume range, a fourth volume parameter corresponding to the audio data to be played;
    correcting the audio data to be played based on the fourth volume parameter to obtain seventh audio data; wherein an average volume of the seventh audio data is an eighth volume, and the eighth volume is within the first output volume range;
    playing the seventh audio data.
  22. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    acquiring eighth audio data, wherein an average volume of audio data within a preset duration of the eighth audio data is a ninth volume; the eighth audio data is different from the first audio data; the ninth volume is different from the first volume;
    detecting that the ninth volume does not satisfy the first output volume range, and obtaining, based on the ninth volume and the first output volume range, a fifth volume parameter corresponding to the eighth audio data; the fifth volume parameter is different from the first volume parameter;
    correcting the eighth audio data based on the fifth volume parameter to obtain ninth audio data; wherein an average volume of the ninth audio data is a tenth volume, and the tenth volume is within the first output volume range;
    playing the tenth audio data.
  23. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    acquiring the first audio data from a target application; or,
    receiving the first audio data sent by a second electronic device.
  24. The electronic device according to claim 13, characterized in that, when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    playing the second audio data through a speaker; or,
    playing the second audio data through an earphone connected to the electronic device.
  25. A computer storage medium, characterized by comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to execute the method according to any one of claims 1-12.
  26. A computer program product, characterized in that, when the computer program product is run on a computer, the computer is caused to execute the method according to any one of claims 1-12.
  27. A chip, characterized by comprising one or more interface circuits and one or more processors, wherein the interface circuits are configured to receive signals from a memory of an electronic device and send the signals to the processors, the signals including computer instructions stored in the memory; and when the processors execute the computer instructions, the electronic device is caused to execute the method according to any one of claims 1-12.
PCT/CN2023/083111 2022-03-28 2023-03-22 Volume control method and electronic device WO2023185589A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210310062.7 2022-03-28
CN202210310062.7A CN116866472A (en) 2022-03-28 2022-03-28 Volume control method and electronic equipment

Publications (1)

Publication Number Publication Date
WO2023185589A1 true WO2023185589A1 (en) 2023-10-05

Family

ID=88199196

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/083111 WO2023185589A1 (en) 2022-03-28 2023-03-22 Volume control method and electronic device

Country Status (2)

Country Link
CN (1) CN116866472A (en)
WO (1) WO2023185589A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170094215A1 (en) * 2015-09-24 2017-03-30 Samantha WESTERN Volume adjusting apparatus and method
CN109996143A (en) * 2019-03-07 2019-07-09 上海蔚来汽车有限公司 Volume adjusting method, device, system and audio-frequence player device and vehicle
CN111258532A (en) * 2020-02-19 2020-06-09 西安闻泰电子科技有限公司 Volume adaptive adjustment method and device, storage medium and electronic equipment
CN113676595A (en) * 2021-07-12 2021-11-19 杭州逗酷软件科技有限公司 Volume adjusting method, terminal device and computer readable storage medium
CN113824835A (en) * 2021-10-25 2021-12-21 Oppo广东移动通信有限公司 Volume control method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116866472A (en) 2023-10-10

Similar Documents

Publication Publication Date Title
WO2020253844A1 (en) Method and device for processing multimedia information, and storage medium
WO2019090726A1 (en) Method for selecting bluetooth device, terminal, and system
US20100048133A1 (en) Audio data flow input/output method and system
WO2020132839A1 (en) Audio data transmission method and device applied to monaural and binaural modes switching of tws earphone
CN113890932A (en) Audio control method and system and electronic equipment
US10425758B2 (en) Apparatus and method for reproducing multi-sound channel contents using DLNA in mobile terminal
CN111770416B (en) Loudspeaker control method and device and electronic equipment
CN109379490B (en) Audio playing method and device, electronic equipment and computer readable medium
US20150117674A1 (en) Dynamic audio input filtering for multi-device systems
WO2020063065A1 (en) Audio transmission method and apparatus, electronic device, and storage medium
WO2024021736A1 (en) Transmission method, apparatus, and system for bluetooth multimedia packet, and device
CN110830970A (en) Audio transmission method, device, equipment and storage medium between Bluetooth equipment
KR20170043319A (en) Electronic device and audio ouputting method thereof
WO2022213689A1 (en) Method and device for voice communicaiton between audio devices
WO2023185589A1 (en) Volume control method and electronic device
US11665271B2 (en) Controlling audio output
CN109155803B (en) Audio data processing method, terminal device and storage medium
WO2022120782A1 (en) Multimedia playback synchronization
CN113395576A (en) Scene switching method, computer equipment and storage medium
CN113271385A (en) Call forwarding method
WO2022002218A1 (en) Audio control method, system, and electronic device
CN113613230B (en) Scanning parameter determination method and electronic equipment
US11689690B2 (en) Method and device for audio and video synchronization
CN116744215B (en) Audio processing method and device
WO2024109399A1 (en) State notification method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23777960

Country of ref document: EP

Kind code of ref document: A1