WO2024032590A1 - An audio playback method and related apparatus


Info

Publication number
WO2024032590A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
electronic device
data
channel
audio
Application number
PCT/CN2023/111689
Other languages
English (en)
French (fr)
Inventor
胡少武
陈丽
常晶
Original Assignee
Huawei Technologies Co., Ltd.
Priority claimed from CN202211415563.8A (CN117596538A)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2024032590A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • the present application relates to the field of terminals, and in particular, to an audio playback method and related devices.
  • the present application provides an audio playback method and related devices, which enables an electronic device to emit sound cooperatively through the sound-emitting units of the first part and the sound-emitting units of the second part.
  • this provides the user with a variety of audio playback effects, so that the user can enjoy different audio playback experiences.
  • this application provides an audio playback method applied to an electronic device.
  • the electronic device includes a first part and a second part.
  • the first part and the second part rotate or unfold around the central axis of the electronic device;
  • the first part includes one or more first sound-emitting units,
  • the second part includes one or more second sound-emitting units;
  • the method includes:
  • the sound effect mode of the electronic device is the first mode, and the electronic device receives the first input for playing the first audio;
  • the electronic device controls one or more first sound-generating units to play the first audio data, and controls one or more second sound-generating units to play the second audio data.
  • the first audio data and the second audio data each include at least part of the content of the audio source data of the first audio;
  • the electronic device receives the second input
  • the electronic device switches the sound effect mode of the electronic device from the first mode to the second mode;
  • the electronic device receives a third input for playing the first audio
  • the electronic device controls one or more first sound-generating units to play third audio data, and controls one or more second sound-generating units to play fourth audio data.
  • both the third audio data and the fourth audio data include at least part of the content of the sound source data of the first audio, and the first audio data is different from the third audio data.
  • in this way, when the electronic device is in different sound effect modes, the same sound source data can be processed differently to obtain different first audio data and third audio data, and the different audio data can be played through the first sound-emitting unit to achieve different playback effects.
  • when the sound-emitting unit of the first part of the electronic device and the sound-emitting unit of the second part jointly produce sound, the electronic device can improve the consistency between the sound and the picture when displaying a video picture through the first part, allowing the user to perceive that the sound source and the picture are in the same position, enhancing user immersion.
  • the channels included in the first audio data are partially/all different from the channels included in the third audio data. In this way, data from different channels can achieve different playback effects.
  • the method further includes: when the first mode is any one of the low-frequency enhancement mode, the dialogue enhancement mode and the surround enhancement mode, at least part of the channels included in the first audio data are different from at least part of the channels included in the second audio data; and/or,
  • when the first mode is the loudness enhancement mode,
  • at least part of the channels of the first audio data are the same as at least part of the channels of the second audio data.
  • the electronic device includes one or more sound effect modes, and the one or more sound effect modes include a first mode and a second mode; before the electronic device receives the second input, the method further includes:
  • the electronic device displays one or more sound effect mode options.
  • the one or more sound effect mode options correspond to the one or more sound effect modes.
  • the one or more sound effect mode options include a first mode option and a second mode option.
  • the first mode option corresponds to the first mode, the second mode option corresponds to the second mode, and the first mode option is marked; wherein the second input is an input for the second mode option;
  • setting the sound effect mode of the electronic device to the second mode specifically includes:
  • the electronic device switches the sound effect mode of the electronic device from the first mode to the second mode, cancels the marking of the first mode option, and marks the second mode option.
  • the electronic device can provide the user with different sound effect mode options, so that the user can adjust the sound effect mode of the electronic device, so that the electronic device plays audio in the specified sound effect mode.
  • the first mode option is a low-frequency enhancement mode option
  • the first mode is a low-frequency enhancement mode
  • the first audio data includes data of a low-frequency channel
  • the second audio data includes data of the left channel and the right channel, data of the left surround channel and the right surround channel, and/or data of the center channel.
  • the electronic device can use the first sound unit to play the data of the low-frequency channel, thereby enhancing the low-frequency vibration effect during audio playback.
  • the first mode option is a dialogue enhancement mode option
  • the first mode is a dialogue enhancement mode
  • the first audio data includes center channel data
  • the second audio data includes the data of the left channel and the data of the right channel, the data of the left surround channel and the data of the right surround channel, and/or the data of the low-frequency channel.
  • the first mode option is the loudness enhancement mode option
  • the first mode is the loudness enhancement mode
  • the first audio data includes the data of the left channel and the data of the right channel,
  • and the second audio data includes the data of the left channel and the data of the right channel.
  • when the electronic device includes one first sound-emitting unit, the electronic device uses that first sound-emitting unit to play the data of the left channel and the data of the right channel of the first audio data; or,
  • the electronic device uses one first sound-emitting unit to play the data of the left channel, and uses another first sound-emitting unit to play the data of the right channel; or,
  • the electronic device uses at least one first sound-emitting unit to play the data of the left channel, uses at least one first sound-emitting unit to play the data of the right channel, and uses at least one first sound-emitting unit to play both the data of the left channel and the data of the right channel;
  • the data of the left channel and the data of the right channel of the second audio data are played.
  • the first sound-generating units can be reasonably used to play the channel data of the first audio data, making full use of the sound-generating unit resources provided by the electronic device.
  • the first mode option is the surround enhancement mode option
  • the first mode is the surround enhancement mode
  • the first audio data includes data of the left surround channel and the data of the right surround channel
  • the second audio data includes the data of the left channel and the data of the right channel, the data of the center channel, and/or the data of the low-frequency channel.
  • in this way, the loudness of the low-volume sounds in the sound source data is enhanced.
  • quiet sounds can be highlighted and the sense of detail can be enhanced. This allows users to hear subtle sounds while playing games and observe the movements of game characters, enhancing the gaming experience; when watching movies, users can clearly hear background sounds, such as wind and insects, which makes the picture more vivid.
  • when the electronic device displays the one or more sound effect mode options, the method further includes: the electronic device displays a slider.
  • when the first mode option is the loudness enhancement mode option, the low-frequency enhancement mode option or the dialogue enhancement mode option: when the value of the slider is a first value, the volume at which the electronic device plays the first audio data is a third volume; when the value of the slider is a second value, the volume at which the electronic device plays the first audio data is a fourth volume; the first value is smaller than the second value, and the third volume is lower than the fourth volume.
  • when the value of the slider is a third value, the distance between the simulated sound source and the user when the electronic device plays the first audio data is a third distance; when the value of the slider is a fourth value, the distance between the simulated sound source and the user is a fourth distance; the third value is smaller than the fourth value, and the third distance is smaller than the fourth distance. A possible mapping of this kind is sketched below.
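  • the following is a minimal, hypothetical sketch of the slider behavior described above: a larger slider value yields a higher playback volume or a larger simulated source distance. The function names, value ranges and linear mapping are illustrative assumptions and are not taken from this application.

```python
# Hypothetical sketch: map a slider value to a playback gain or to a simulated
# source distance. Only the monotonic relationship comes from the text above;
# the [0, 1] range and the linear form are assumptions.

def slider_to_volume(value: float, min_gain: float = 0.2, max_gain: float = 1.0) -> float:
    """Larger slider value -> higher playback gain (loudness/low-frequency/dialogue enhancement case)."""
    value = min(max(value, 0.0), 1.0)
    return min_gain + (max_gain - min_gain) * value

def slider_to_distance(value: float, min_m: float = 0.5, max_m: float = 3.0) -> float:
    """Larger slider value -> larger simulated distance between the sound source and the user."""
    value = min(max(value, 0.0), 1.0)
    return min_m + (max_m - min_m) * value

# first value < second value  ->  third volume < fourth volume
assert slider_to_volume(0.2) < slider_to_volume(0.8)
# third value < fourth value  ->  third distance < fourth distance
assert slider_to_distance(0.2) < slider_to_distance(0.8)
```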
  • the method also includes:
  • the electronic device receives a fourth input for playing the first video
  • in response to the fourth input, the electronic device identifies one or more objects in the video frames of the first video, and identifies the audio data of the one or more objects in the audio file of the first video, the one or more objects including a first object;
  • the electronic device uses the one or more first sound-emitting units and/or the one or more second sound-emitting units closest to the first object to play the audio data of the first object. In this way, the electronic device can use the one or more sound-emitting units closest to the first object to play the sound of the first object, so that the user can hear the position of the first object in the picture, enhancing the immersion of video playback.
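  • a rough, hypothetical sketch of selecting the sound-emitting unit nearest to a recognized object is shown below; the unit positions and the object position are illustrative screen-plane coordinates, not values from this application.

```python
# Hypothetical sketch: pick the sound-emitting unit closest to an object that was
# recognized in the video frame. The coordinates below are made-up placeholders;
# a real device would use its own speaker layout and the recognized object position.
import math

SOUND_UNITS = {
    "first_part_left":   (-0.4, 0.6),
    "first_part_right":  ( 0.4, 0.6),
    "second_part_left":  (-0.4, 0.0),
    "second_part_right": ( 0.4, 0.0),
}

def closest_unit(object_xy):
    """Return the name of the sound-emitting unit nearest to the object's (x, y) position."""
    return min(SOUND_UNITS, key=lambda name: math.dist(SOUND_UNITS[name], object_xy))

# Route the first object's audio data to the nearest unit, e.g. "first_part_right".
print(closest_unit((0.3, 0.5)))
```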
  • the first sound-emitting unit located at the first position is used to play sky sounds; and/or,
  • the first sound-emitting unit located at the second position is used to play ground sounds, and the distance between the first position and the central axis is greater than the distance between the second position and the central axis; and/or,
  • the first sound-emitting unit located in the third position is used to play the data of the left channel
  • the first sound-emitting unit located in the fourth position is used to play the data of the right channel.
  • the third position is located on the left side of the first part, and the fourth position is located on the right side of the first part; and/or,
  • the first sound unit located in the fifth position is used to play the data of the left surround channel
  • the first sound unit located in the sixth position is used to play the data of the right surround channel.
  • the fifth position is located on the left side of the first part,
  • and the sixth position is located on the right side of the first part.
  • the electronic device can play the data of the sound channels in different directions through the sound-emitting units in different positions, reflecting the sense of audio direction.
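  • as a rough illustration of the position-based assignment above, the hypothetical sketch below routes sky sounds to units farther from the central axis, ground sounds to units closer to it, and left/right channel data to units on the left/right side; the threshold and the example units are made-up illustrative values.

```python
# Hypothetical sketch: assign channel data to a sound-emitting unit based on its
# position. The 0.5 threshold and the example units are illustrative only.

def assign_roles(unit):
    """unit: dict with 'side' in {'left', 'right'} and 'distance_to_axis' (arbitrary units)."""
    vertical = "sky sounds" if unit["distance_to_axis"] > 0.5 else "ground sounds"
    horizontal = "left channel" if unit["side"] == "left" else "right channel"
    return vertical, horizontal

units = [
    {"name": "unit_far_left",   "side": "left",  "distance_to_axis": 0.8},
    {"name": "unit_near_right", "side": "right", "distance_to_axis": 0.2},
]
for u in units:
    print(u["name"], assign_roles(u))  # e.g. ('sky sounds', 'left channel')
```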
  • the electronic device is a laptop computer
  • the first part includes a display screen of the electronic device
  • the second part includes a keyboard and/or touch panel of the electronic device.
  • the notebook computer can simultaneously emit sound through the sound-emitting units of the first part and the second part, thereby enhancing the audio playback effect.
  • the first part of the electronic device includes a first shell and a second shell, and the first shell includes a display screen of the electronic device;
  • when the first mode is the surround enhancement mode, the first sound-emitting unit located in the first shell is used to drive the first shell to play the first audio data; or,
  • the first mode is a low-frequency enhancement mode, and the first sound-emitting unit located in the second shell is used to drive the second shell to play the first audio data.
  • the electronic device can drive the first shell or the second shell of the electronic device to produce sound, so that the sound range is wider and the sound effect is better.
  • the electronic device is a folding screen device, the first part includes a first screen, the second part includes a second screen, the folding mode of the electronic device is left and right folding, and the electronic device includes a folded state and an unfolded state;
  • among the one or more first sound-emitting units and the one or more second sound-emitting units, the sound-emitting unit located on a first side of the electronic device is used to play the data of the left channel,
  • and the sound-emitting unit located on a second side of the electronic device is used to play the data of the right channel; or,
  • among the one or more first sound-emitting units and the one or more second sound-emitting units, the sound-emitting unit located on the first side of the electronic device is used to play sky sounds,
  • and the sound-emitting unit located on the second side of the electronic device is used to play ground sounds.
  • the first side is different from the second side;
  • or, the one or more first sound-emitting units are used to play the data of the left channel, and the one or more second sound-emitting units are used to play the data of the right channel;
  • or, the one or more first sound-emitting units are used to play the data of the left channel and the data of the right channel,
  • and the one or more second sound-emitting units are used to play the data of the left surround channel and the data of the right surround channel.
  • the electronic device is a folding screen device, the first part includes a first screen, the second part includes a second screen, the electronic device is folded up and down, and the electronic device includes a folded state and an unfolded state;
  • the one or more first sound-emitting units and the one or more second sound-emitting units play the audio source data;
  • one or more first sound-emitting units are used to play the data of the left channel, and one or more second sound-emitting units are used to play the data of the right channel; or,
  • one or more first sound-emitting units are used to play sky sounds, and one or more second sound-emitting units are used to play ground sounds.
  • the sky sound includes one or more of thunder, the sound of flying objects, and the sound of wind
  • the ground sound includes one or more of footsteps, insects, and rain.
  • electronic devices can enhance the sound of sky objects, as well as the sounds of ground objects, reflecting the sense of direction up and down.
  • when raindrops fall on eaves or a big tree, the sound of the raindrops belongs to the sky sound;
  • when raindrops fall on the ground, a lake, etc., the sound of the raindrops belongs to the ground sound;
  • the rain sound mentioned above refers to the sound of raindrops falling on the ground.
  • the sound source data includes data of channels required for the first audio data in the first mode, and the number of channels in the sound source data other than the channels of the first audio data is the first number;
  • Methods also include:
  • the electronic device upmixes the sound source data, or copies the data of the channels in the sound source data other than the channels of the first audio data, to obtain fifth audio data; wherein the fifth audio data includes the data of the channels required for the first audio data, the number of channels in the fifth audio data other than the channels of the first audio data is the same as the number of the second sound-emitting units, and the second audio data includes the data of the channels other than the channels of the first audio data; or,
  • the electronic device downmixes the sound source data, or superimposes the data of some channels in the sound source data, to obtain sixth audio data; wherein the sixth audio data includes the data of the channels required for the first audio data, the number of channels in the sixth audio data other than the channels of the first audio data is the same as the number of the second sound-emitting units, and the second audio data includes the data of the channels other than the channels of the first audio data.
  • in this way, the electronic device processes the audio source data based on the number of the second sound-emitting units, making full use of the sound-emitting unit resources of the electronic device to play the audio data. A rough sketch of such channel-count adaptation is given below.
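  • the following is a minimal sketch, under an assumed data layout, of the channel-count adaptation described above: channels are mono arrays keyed by name, channels are copied when there are fewer of them than second sound-emitting units, and surplus channels are superimposed when there are more. The function name and the choice of which channels to copy or superimpose are assumptions for illustration only.

```python
# Sketch: adapt the number of remaining channels to the number of second
# sound-emitting units by copying channels ("upmix") or superimposing them
# ("downmix"). Which channels get copied or merged is an arbitrary choice here.
import numpy as np

def adapt_channels(channels, num_units):
    """channels: dict of name -> mono numpy array; returns one signal per unit."""
    names = list(channels)
    if len(names) < num_units:
        # fewer channels than units: repeat (copy) channels until every unit has data
        names = (names * num_units)[:num_units]
        return [channels[n] for n in names]
    if len(names) > num_units:
        # more channels than units: superimpose the surplus channels onto the last unit
        kept = [channels[n] for n in names[:num_units - 1]]
        mixed = sum(channels[n] for n in names[num_units - 1:])
        return kept + [mixed]
    return [channels[n] for n in names]

src = {"left": np.zeros(4), "right": np.zeros(4), "center": np.zeros(4)}
print(len(adapt_channels(src, 2)))  # 3 channels onto 2 units -> 2 signals
print(len(adapt_channels(src, 4)))  # 3 channels onto 4 units -> 4 signals
```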
  • embodiments of the present application provide an electronic device, including a first part and a second part.
  • the first part of the electronic device includes a display screen of the electronic device, and the second part of the electronic device includes a keyboard and/or touch panel of the electronic device; the first part includes one or more first sound-emitting units, the second part includes one or more second sound-emitting units, and the electronic device includes one or more processors; wherein,
  • the one or more processors are configured to obtain, in response to an input for playing a first audio, first audio data and second audio data based on the sound source data of the first audio, where both the first audio data and the second audio data include at least part of the content of the sound source data;
  • One or more first sound units used for playing first audio data
  • One or more second sound units used for playing second audio data.
  • in this way, the electronic device can play the first audio through the first sound-emitting units of the first part and the second sound-emitting units of the second part, producing sound simultaneously from multiple positions, thereby achieving a better audio playback effect and improving the immersion of audio playback.
  • the electronic device is a laptop computer.
  • the notebook computer can play the first audio through the sound-emitting units located near the display screen and the sound-emitting units located near the keyboard, so that the sound beam of the notebook computer faces the direction of the user's ears and the sound is clearer; when the notebook computer plays audio data,
  • the position of the simulated sound source is near the display screen of the notebook computer, and the sound and the picture are highly consistent, so that when the notebook computer plays a video, the sound and the picture are synchronized.
  • the electronic device is configured to implement the audio playback method in any of the possible implementations of the first aspect.
  • embodiments of the present application provide an electronic device, including: one or more processors, one or more first sound-generating units, one or more second sound-generating units, and one or more memories; wherein, One or more memories, one or more first sound-generating units, and one or more second sound-generating units are respectively coupled to one or more processors.
  • the one or more memories are used to store computer program code,
  • and the computer program code includes computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to execute the audio playback method in any possible implementation of the first aspect.
  • embodiments of the present application provide a computer storage medium that includes computer instructions.
  • when the computer instructions are run on a first electronic device, the first electronic device is caused to execute the audio playback method in any of the possible implementations of the first aspect.
  • the present application provides a chip system.
  • the chip system is applied to a first electronic device.
  • the chip system includes one or more processors.
  • the processor is used to call computer instructions to cause the first electronic device to execute the audio playback method in any of the possible implementations of the first aspect.
  • the present application provides a computer program product containing instructions.
  • when the computer program is run on a first electronic device, the first electronic device is caused to perform the audio playback method in any of the possible implementations of the first aspect.
  • Figure 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 2A is a schematic diagram of the position of a sound-generating unit of an electronic device provided by an embodiment of the present application
  • Figure 2B is a schematic diagram of a sound field of an electronic device provided by an embodiment of the present application.
  • Figure 2C is another sound field schematic diagram of the electronic device provided by the embodiment of the present application.
  • Figure 3A is a schematic diagram of the position of the sound unit of another electronic device provided by an embodiment of the present application.
  • Figure 3B is a schematic diagram showing the position of the first side sound unit of an electronic device according to an embodiment of the present application.
  • Figure 3C is a schematic diagram of the sound field of the electronic device provided by the embodiment of the present application.
  • Figure 3D is another sound field schematic diagram of an electronic device provided by an embodiment of the present application.
  • Figure 4 is a schematic flow chart of an audio playback method provided by an embodiment of the present application.
  • Figure 5 is a schematic flow chart of an electronic device processing audio source data provided by an embodiment of the present application.
  • Figures 6A-6E are a set of interface schematic diagrams provided by embodiments of the present application.
  • Figure 7 is a schematic diagram of the first side sound unit of an electronic device provided by an embodiment of the present application.
  • FIGS. 8A-8D are schematic diagrams of a set of electronic devices provided by embodiments of the present application.
  • Figure 9 is a schematic diagram of the first part of the side sound unit of another electronic device provided by an embodiment of the present application.
  • FIGS 10A-10D are schematic diagrams of another set of electronic devices provided by embodiments of the present application.
  • FIGS 11A-11C are schematic diagrams of another set of electronic devices provided by embodiments of the present application.
  • Figures 12A-12C are schematic diagrams of another set of electronic devices provided by embodiments of the present application.
  • Figure 13 is a schematic diagram of the first part of the side sound unit of another electronic device provided by an embodiment of the present application.
  • Figure 14 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of this application, unless otherwise specified, "plurality" means two or more.
  • a connection can be a detachable connection or a non-detachable connection; it can be a direct connection or an indirect connection through an intermediate medium.
  • orientation terms mentioned in the embodiments of this application, such as "top", "bottom", "upper", "lower", "left", "right", "inner", "outer", etc., are for reference only.
  • GUI: graphical user interface.
  • the electronic device is a folding device, including a first part and a second part. There is a central axis between the first part and the second part, and the first part and the second part can rotate (or unfold) around the central axis to change the angle between the first part and the second part.
  • the electronic device can be used in different forms.
  • the electronic device is a folding screen device.
  • the first part of the electronic device includes a first screen of the electronic device, and the second part includes a second screen of the electronic device.
  • the first screen and the second screen may be different parts of the same screen.
  • the electronic device can be folded left and right or up and down. When the electronic device is folded left and right, the first part may be located on the left side of the second part. When the electronic device is folded up and down, the first part may be located on the upper side of the second part.
  • the folding methods of electronic devices can be divided into two categories, one is a folding screen that is folded outwards (referred to as an outward-folding folding screen), and the other is a folding screen that is folded inward (referred to as an inward-folding folding screen).
  • for the outward-folding folding screen, after the screen is folded, the first screen and the second screen face away from each other; after the folding screen is unfolded, the first screen and the second screen form a third screen. In this way, when the outward-folding screen is folded, the user can use the first screen or the second screen alone.
  • the folding screen is unfolded, the first screen and the second screen form a third screen with a larger screen area, which can provide users with a larger screen and improve the user experience.
  • for the inward-folding folding screen, after the screen is folded, the first screen and the second screen face each other;
  • after the foldable screen is unfolded, the first screen and the second screen form a third screen. In this way, after the inward-folding screen is folded, it is convenient for the user to store and carry the device, and after it is unfolded, it is convenient for the user to use.
  • the electronic device is a notebook computer or other similar device, as shown in Figure 1;
  • the first part of the electronic device is part 11 shown in Figure 1,
  • the second part of the electronic device is part 12 shown in Figure 1,
  • and the central axis of the electronic device is the central axis 13 shown in Figure 1.
  • part 11 and part 12 of the electronic device can rotate or unfold around the central axis 13 to change the angle between part 11 and part 12.
  • part 11 includes A shell and B shell.
  • Part 12 includes C shell and D shell.
  • part 11 may be the screen part of the electronic device
  • part 12 may be the keyboard part of the electronic device.
  • the screen part of the electronic device may include a display screen for displaying a user interface.
  • the keyboard portion of the electronic device may include, but is not limited to, a keyboard and/or a trackpad. The user can manipulate the electronic device through the touch panel and/or keyboard, and the electronic device can respond accordingly according to instructions input by the user through the touch panel and/or keyboard.
  • the electronic device part 11 includes an A case and a B case.
  • the A shell is part of the outer shell of the electronic device
  • the B shell is part of the inner shell of the electronic device
  • the display screen of the electronic device is located in the center of the B shell.
  • the keyboard part of the electronic device includes a C shell and a D shell, where the D shell is part of the shell of the electronic device
  • the C shell includes the keyboard and/or touch panel of the electronic device.
  • the B shell and the C shell of the electronic device are adjacent to each other.
  • the A shell and the D shell of the electronic device are adjacent.
  • the electronic device can receive input from the user to flip the screen portion and change the angle between the B shell and the C shell.
  • the angle between the B shell and the C shell is 0 degrees
  • the screen part overlaps with the keyboard part, and the B shell and the C shell are covered, making it easier for the user to store and carry the device.
  • the angle value between the B shell and the C shell is within a specified angle range (for example, 45 degrees to 145 degrees)
  • the user can view the display content of the electronic device, such as a video, on the display screen of the screen portion. Instructions can also be entered into the electronic device through the keyboard portion.
  • the angle value between the B shell and the C shell of the electronic device may be 360 degrees
  • the A shell and the D shell of the electronic device overlap together
  • the user can independently manipulate the screen part of the electronic device; for example, the screen part can include a touch screen, which can be used to receive user input and trigger the screen part to perform operations corresponding to the input.
  • the keyboard part and the screen part are not limited to being connected together as shown in Figure 1.
  • the screen part and the keyboard part may be two separate parts.
  • the screen part includes a connection structure 15, and the keyboard part includes a connection structure 16.
  • the connection structure 15 of the screen part can be coupled with the connection structure 16 of the keyboard part so that the two parts are connected together.
  • coupling may include but is not limited to electrical connection, mechanical coupling, etc.
  • in the embodiments of this application, the words "coupling" and "connection" can be considered equivalent when they specifically indicate an electrical connection.
  • Electrical connection includes connecting wires to each other or indirectly connecting through other devices to realize the intercommunication of electrical signals. This embodiment does not limit the specific electrical connection method.
  • the screen part may include a touch screen, and the touch screen may be used to receive user input and trigger the screen part to perform operations corresponding to the input.
  • the keyboard part can receive the user's input and send the input to the screen part through the connection structure 15 and the connection structure 16 , and the screen part can perform operations corresponding to the input.
  • the second portion of the electronic device is provided with one or more speakers through which the electronic device can play audio.
  • the one or more speakers of the electronic device can push the air to emit sound waves through vibration, and the sound waves can reach the user's ears through a sound outlet of the electronic device (for example, a sound outlet located on the side of the second part of the electronic device, or a front-facing sound outlet on the edge of the second part adjacent to the central axis) or through the gaps of the keyboard of the second part, so that the user can listen to the audio played by the electronic device.
  • all or part of the one or more speakers can be arranged on the inner surface of the C shell, on the inner surface of the D shell, in the gap between the C shell and the D shell, at the connection between the C shell and the D shell (i.e., the side of the second part of the electronic device), etc.; the embodiment of the present application does not limit the positions of the one or more speakers in the second part.
  • the speaker provided in the second part of the electronic device may be called a second part side sound unit.
  • the second side sound unit can be a dynamic speaker, a moving iron speaker, a piezoelectric speaker, a micro-electro-mechanical system (MEMS) speaker, a piezoelectric ceramic speaker, a magnetostrictive speaker, etc.
  • the surface of the C shell of the second part of the electronic device is used as the XOY plane, and the direction perpendicular to the XOY plane and pointing from the D shell to the C shell is the Z axis.
  • the origin O can be located at the left end of the central axis,
  • the Y axis can coincide with the central axis and point to the right,
  • and the X axis is perpendicular to the Y axis.
  • the second part side sound units of the electronic device are disposed on the sides of the second part, wherein the second part side sound units of the electronic device include a sound unit 21 and a sound unit 22; the second part side sound unit 21 is located on the left side of the second part,
  • and the second part side sound unit 22 is located on the right side of the second part.
  • the sound-generating unit 21 and the sound-generating unit 22 are represented by a horn shape, where the horn shape is only a representation of the sound-generating unit.
  • the horn shape does not limit the shape of the sound-generating unit, nor does it limit the specific position of the sound-generating unit.
  • the speakers in the following figures do not limit the shape of the sound-emitting unit, nor do they limit the specific position of the sound-emitting unit.
  • FIG. 2B shows the sound pressure distribution on the YOZ plane of the electronic device
  • FIG. 2C shows the sound pressure distribution on the XOZ plane of the electronic device.
  • the sound pressure at the darker position shown in FIG. 2B and FIG. 2C is stronger, and the sound pressure at the lighter color position shown in FIG. 2B and FIG. 2C is weaker.
  • since the second part side sound units are located in the XOY plane, the closer a position is to the XOY plane, the stronger the sound pressure.
  • the energy of the sound waves is concentrated near the XOY plane, and the beam direction of the sound waves with stronger energy is the same as the Z-axis direction.
  • in addition, the vibration amplitude of the second part side sound unit is large, which can easily cause resonance of structural parts such as the keyboard and produce noise, affecting the audio playback effect.
  • when the electronic device emits sound through the second part side sound units, if the user uses the electronic device to play audio on a non-desktop surface, the audio playback performance will be worse than when playing audio on a desktop, because a non-desktop surface reflects sound waves poorly. Therefore, an electronic device that produces sound only through the second part side sound units has higher requirements on the environment: the desktop on which the electronic device is placed needs to reflect the sound waves, so that the user can also hear the reflected audio energy, the sound is clearer, and the playback effect is better.
  • the second part of the electronic device is provided with one or more speakers, and the one or more speakers provided in the second part may be called a second part side sound unit.
  • the first part of the electronic device is also provided with one or more speakers, and the one or more speakers provided in the first part may be called a first part side sound unit.
  • the electronic device can play audio together with the first side sound unit through the second side sound unit.
  • the description of the side sound unit of the second part of the electronic device can be referred to the above-mentioned embodiment shown in FIG. 2A , and will not be described again here.
  • all or part of the one or more first part side sound units may be disposed on the inner surface of the A shell or the B shell, or at the connection between the A shell and the B shell (the side edge of the screen part).
  • the first part of the side sound unit may be a smaller speaker, such as a piezoelectric ceramic speaker, a magnetostrictive speaker, etc.
  • when the first part side sound unit is a piezoelectric ceramic speaker,
  • the first part side sound unit can be disposed on the back of the display screen of the electronic device, toward the A shell, and the piezoelectric ceramic speaker can transfer its own deformation to the display screen through torque, so that the display screen vibrates and produces sound.
  • the piezoelectric ceramic speaker may include multiple layers of piezoelectric ceramic sheets. When a piezoelectric ceramic piece expands and contracts, it will cause the display screen to bend and deform, causing the entire display screen to form a bending vibration, so that the display screen can push air and produce sound.
  • the first part side sound unit is not limited to a piezoelectric ceramic speaker;
  • the first part side sound unit may also be a moving coil speaker, a moving iron speaker, a piezoelectric speaker, a MEMS speaker, etc.,
  • which is not limited in the embodiments of this application.
  • the surface of the C shell of the second part of the electronic device is used as the XOY plane, and the direction perpendicular to the XOY plane and pointing from the D shell to the C shell is the Z axis.
  • the origin O is located at the left end of the central axis,
  • the Y axis coincides with the central axis and points to the right,
  • and the X axis is perpendicular to the Y axis.
  • the electronic device includes three first side sound units and two second side sound units.
  • the two second part side sound units of the electronic device can be respectively disposed on opposite sides of the keyboard part.
  • the second part side sound unit 31 is located on the left side of the second part,
  • and the second part side sound unit 32 is located on the right side of the keyboard part.
  • the first part side sound units of the electronic device can be arranged in the middle and on both sides of the B shell of the first part, as shown in Figure 3B.
  • the three first part side sound units are all attached to the inside of the B shell,
  • and include a first part side sound unit 33, a first part side sound unit 34 and a first part side sound unit 35.
  • the side sound unit 35 of the first part is located on the vertical central axis of the first part.
  • the first side sound unit 33 is located in the left area of the vertical central axis of the first part, and the first side sound unit 34 is located in the right area of the vertical central axis of the first part.
  • the distance between the first partial side sound emitting unit 33 and the first partial side sound emitting unit 35 is the same as the distance between the first partial side sound emitting unit 34 and the first partial side sound emitting unit 35 .
  • the three first partial side sound generating units and the two second partial side sound generating units are only examples.
  • the number and composition of the sound generating units of the electronic device may not be limited to the composition shown in Figure 3A.
  • the electronic device may include two first part side sound units and two second part side sound units.
  • the two first part side sound units may be respectively located in the left and right areas of the first part, and are symmetrical with respect to the mid-perpendicular plane of the central axis.
  • the distances between the two first part side sound units and the central axis are equal, and their distances from the center point of the first part are equal.
  • the two second part side sound units may be respectively located in the left and right areas of the second part, and are symmetrical with respect to the mid-perpendicular plane of the central axis.
  • the distances between the two second part side sound units and the central axis are equal,
  • and their distances from the center point of the second part are equal.
  • the electronic device may also include a greater or smaller number of first partial side sound units and a greater or smaller number of second partial side sound units, which are not limited in the embodiments of the present application.
  • when a first part side sound unit (or a second part side sound unit) of the electronic device is located on a first side (for example, the left side or the right side) of the first part (or the second part), in order for the user to hear balanced sound, there is generally another sound unit located at a position symmetrical to that sound unit with respect to the mid-perpendicular plane of the central axis.
  • the position of the first partial side sound unit 35 shown in FIG. 3A is only an example and is not limited to the position shown in FIG. 3A .
  • the first partial side sound unit 35 may be located at other positions in the first part.
  • the first part side sound unit 35 may be located near the A shell of the first part.
  • the first part side sound unit 35 may be attached to the inside of the A shell and vibrate the A shell to produce sound.
  • alternatively, the first part side sound unit 35 may be located inside the first part with the speaker facing the A shell, and a sound outlet is provided at a corresponding position of the A shell.
  • the sound wave direction of the sound emitted by the first partial side sound unit 35 is directed from the B shell to the A shell.
  • the first part of the side sound unit 35 can be used to play background sounds in the audio, so that the sound source playing the background sounds is further away from the user, giving the user a better sense of immersion.
  • the first part side sound unit 35 can also be used to play sounds emitted by objects (for example, people, animals, objects, etc.) that are farther away from the user in the video picture, using sound to represent the positional relationship of each object and bringing the user a better movie viewing experience.
  • the first partial side sound unit 35 may be located at the top of the side edge of the first portion, so that the sound wave direction of the sound emitted by the first partial side sound unit 35 is upward.
  • the first side sound unit 35 can be used to play sky sounds in audio.
  • sky sound is the sound emitted by the specified sky object in the audio.
  • the specified sky object can be a flying object (for example, an airplane, a bird, etc.), a lightning object, etc. In this way, the user can hear the height information of the specified sky object in the audio and increase the user's immersion in listening to the audio.
  • the first side sound unit 35 can also be used to play sounds emitted by objects located at the top of the video screen, using sound to represent the positional relationship of each object, thereby providing users with a better viewing experience.
  • when the first part side sound unit directly plays the audio source data, if the signal power of the audio to be played is low, the amplitude of the first part side sound unit is small and the sound quality is poor.
  • the first side sound unit may be connected to the audio power amplifier chip through a sound unit connection line.
  • the sound unit connection line may be the sound unit connection line 36, the sound unit connection line 37, and the sound unit connection line 38 shown in FIG. 3B.
  • the sound unit connection line 36 can be used to connect the first side sound unit 33 and the audio power amplifier chip
  • the sound unit connection line 37 can be used to connect the first side sound unit 34 and the audio power amplifier chip
  • and the sound unit connection line 38 can be used to connect the first part side sound unit 35 and the audio power amplifier chip.
  • FIG. 3B only shows a part of the connection lines of the sound-generating units, and no specific limitations should be placed on the connection lines of the sound-generating units.
  • the audio power amplifier chip can amplify the analog audio signal, and then send the amplified analog audio signal to the first side sound unit, so that the first side sound unit can vibrate and sound based on the amplified analog audio signal. In this way, the power of the amplified analog audio signal is higher, and the higher power analog audio signal is used to drive the first part of the side sound unit, so that the audio can be played with higher sound quality.
  • the second side sound unit can also be connected to the audio power amplifier chip through the sound unit connection line to receive the amplified audio signal sent by the audio power amplifier chip.
  • FIG. 3C shows the sound pressure distribution on the YOZ plane of the electronic device
  • FIG. 3D shows the sound pressure distribution on the XOZ plane of the electronic device.
  • the darker the color at a position shown in FIG. 3C and FIG. 3D, the stronger the sound pressure; the lighter the color, the weaker the sound pressure.
  • the beam direction of the sound waves with stronger energy is along the mid-perpendicular direction between the X-axis and the Z-axis.
  • since the beam direction of the sound waves is toward the mid-perpendicular direction between the X-axis and the Z-axis, which usually points toward the position of the user's ears when using the electronic device, the sound can better reach the vicinity of the user's ears, improving the user's audio listening experience.
  • the electronic device including only the second part side sound units has a maximum sound pressure value of 101.395 decibels in the YOZ plane; as shown in Figure 3C, the electronic device including both the first part side sound units and the second part side sound units has a maximum sound pressure value of 103.513 decibels in the YOZ plane.
  • the maximum sound pressure value in the XOZ plane for the electronic device including only the second side sound unit is 115.276 decibels.
  • the electronic device including the first side sounding unit and the second side sounding unit has a maximum sound pressure value of 118.504 decibels in the XOZ plane.
  • the maximum sound pressure value of the electronic device including the first part of the side sounding unit and the second part of the side sounding unit is greater than that of the electronic device including only the second part of the side sounding unit.
  • the sound beam is directed toward the user's ears. In this way, it can bring a better listening experience to users.
  • in addition, the first part side sound unit takes over part of the audio playback, so that the vibration amplitude of the second part side sound unit is reduced; this makes it less likely to cause resonance of the keyboard and other structural components, reducing the noise generated when the second part side sound unit plays audio.
  • the first side sound unit does not require the desktop to reflect sound waves, and the placement of electronic devices has less impact on the audio playback effect.
  • the first side sound unit when the first side sound unit is only attached to the inside of the B shell, the sound propagation effect in the direction facing the user can be improved and the user's sense of immersion when watching videos can be improved.
  • when the first part side sound unit is only attached inside the A shell, the first part side sound unit can drive the A shell to produce sound.
  • the first side sound unit can be used to play low-frequency audio signals. In this way, since the low-frequency audio signal is not very directional, it can improve the vibration of the low-frequency, enhance the atmosphere of the video playback, optimize the timbre of the audio, and will not cause the B shell to vibrate and affect the use of the screen. At the same time, it can replace part of the low-frequency energy of the speakers on the keyboard side to reduce the impact of noise experience caused by vibration on the keyboard side.
  • the electronic device can receive an input for playing a designated audio, and in response to the input, process the audio data (also called audio source data) of the designated audio based on the first mode to obtain audio data corresponding to multiple sound-emitting units of the electronic device. The multiple sound-emitting units of the electronic device play the corresponding audio data at the same time. In this way, the electronic device can use the first part side sound units and the second part side sound units to collaboratively play the first audio. Based on the joint sound production of the two parts, the surround feeling and immersion of the sound field of the electronic device can be improved, and the audio playback effect can be enhanced.
  • the audio playback method includes the following steps:
  • the sound effect mode of the electronic device is the first mode, and the electronic device receives an input for playing specified audio.
  • the electronic device supports one or more sound effect modes, and the one or more sound effect modes include the first mode.
  • the sound effect mode of the electronic device is the first mode.
  • the first mode may be a sound effect mode set by default on the electronic device, or a sound effect mode selected by the user.
  • sound channels refer to mutually independent audio signals collected at different spatial locations during sound recording.
  • the number of sound channels can be understood as the number of sound sources during sound recording.
  • 5.1 channels include left channel, right channel, left surround channel, right surround channel, center channel and low-frequency channel.
  • the data of the left channel includes sound data in the sound source data that simulates the hearing range of the user's left ear
  • the data of the right channel includes sound data in the sound source data that simulates the hearing range of the user's right ear.
  • the data of the center channel includes sound data in the sound source data within the range where the hearing range of the simulated user's left ear overlaps the hearing range of the right ear.
  • the center channel includes vocal dialogue.
  • the data of the left surround channel may be used to represent the direction of the left ear, and the data of the left surround channel may include audio data in the data of the left channel that is different from the data of the center channel.
  • the data of the right surround channel may be used to represent the direction of the right ear, and the data of the right surround channel may include audio data in the data of the right channel that is different from the data of the center channel.
  • the data of the low-frequency channel includes audio data with a frequency lower than a specified frequency value (for example, 150 Hz) in the sound source data.
  • a specified frequency value for example, 150 Hz
  • the electronic device can use a low-pass filter to filter out audio data with a frequency greater than a specified frequency value in the source data to obtain low-frequency channel data.
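  • as a rough illustration of the low-pass filtering mentioned above, the sketch below keeps only content below a cut-off value (150 Hz, taken from the example in the text) in a mono signal; the filter order and the use of SciPy are assumptions, not part of this application.

```python
# Illustrative sketch: derive low-frequency-channel data by low-pass filtering
# the source signal. The 150 Hz cut-off comes from the example above; the
# 4th-order Butterworth design and SciPy are arbitrary illustrative choices.
import numpy as np
from scipy.signal import butter, sosfilt

def extract_low_frequency(source, sample_rate, cutoff_hz=150.0):
    """Return the part of a mono signal below cutoff_hz."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return sosfilt(sos, source)

sr = 48_000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 1000 * t)  # 60 Hz kept, 1 kHz removed
lfe = extract_low_frequency(signal, sr)
```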
  • the sound effect modes supported by the electronic device may include but are not limited to surround enhancement mode, low frequency enhancement mode, dialogue enhancement mode and/or loudness enhancement mode.
  • in the surround enhancement mode, the electronic device plays audio data including the left surround channel and the right surround channel through the first part side sound unit, thereby enhancing the surround feeling of the audio played by the electronic device.
  • in the low-frequency enhancement mode, the electronic device plays audio data including the low-frequency channel through the first part side sound unit, thereby enhancing the rhythm of the audio played by the electronic device.
  • in the dialogue enhancement mode, the electronic device plays audio data including the center channel through the first part side sound unit, thereby enhancing the human voice of the audio played by the electronic device.
  • in the loudness enhancement mode, the electronic device plays audio data including the left channel and the right channel through the first part side sound unit, thereby enhancing the loudness of the audio played by the electronic device.
  • in some sound effect modes, the audio data played by the first part side sound units of the electronic device (also called the first audio data) includes channels that are different from the channels included in the audio data played by the second part side sound units of the electronic device (also called the second audio data). It can be understood that since the second audio data includes channels different from those of the first audio data, one or more of the amplitude, waveform or frequency of the second audio data and the first audio data are different.
  • in other cases, the channels of the second audio data are the same as the channels of the first audio data,
  • but the amplitude of the second audio data is different from the amplitude of the first audio data.
  • in different sound effect modes, the channels included in the first audio data and the channels included in the second audio data played by the electronic device are shown in Table 1:

  Table 1
  Sound effect mode         | Channels in the first audio data        | Channels in the second audio data
  Surround enhancement      | left surround, right surround           | left, right, center and/or low-frequency
  Dialogue enhancement      | center                                  | left, right, left surround, right surround and/or low-frequency
  Loudness enhancement      | left, right                             | left, right
  Low-frequency enhancement | low-frequency                           | left, right, left surround, right surround and/or center
  • in the surround enhancement mode, the second audio data obtained based on the sound source data includes the data of the left channel, the right channel, the center channel and/or the low-frequency channel,
  • and the first audio data obtained based on the sound source data includes the data of the left surround channel and the right surround channel.
  • in this way, the surround channel data can be played through the first part side sound units, so that the sound field of the electronic device has a better sense of envelopment.
  • in the dialogue enhancement mode, the second audio data obtained based on the sound source data includes the data of the left channel, the right channel, the left surround channel, the right surround channel and/or the low-frequency channel,
  • and the first audio data obtained based on the sound source data includes only the data of the center channel. In this way, the clarity of dialogue can be increased through the first part side sound units.
  • in the loudness enhancement mode, the second audio data obtained based on the sound source data includes the data of the left channel and the right channel,
  • and the first audio data obtained based on the sound source data includes the data of the left channel and the right channel. In this way, the loudness of audio playback can be increased through the first part side sound units.
  • in the low-frequency enhancement mode, the second audio data obtained based on the sound source data includes the data of the left channel, the right channel, the left surround channel, the right surround channel and/or the center channel,
  • and the first audio data obtained based on the sound source data includes the data of the low-frequency channel.
  • in this way, low-frequency rhythm signals in the designated audio, such as drum beats and bass, can be played through the electronic device, so that the user can feel the power brought by heavy bass.
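  • a condensed, programmatic view of the mapping in Table 1 is sketched below; the dictionary form, the key names and the routing function are illustrative assumptions only.

```python
# Sketch: route 5.1-style channel names to the first-part or second-part
# sound-emitting units according to the sound effect mode, mirroring Table 1.
ROUTING = {
    # mode: (channels for first-part side sound units, channels for second-part side sound units)
    "surround_enhancement": (["left_surround", "right_surround"],
                             ["left", "right", "center", "lfe"]),
    "dialogue_enhancement": (["center"],
                             ["left", "right", "left_surround", "right_surround", "lfe"]),
    "loudness_enhancement": (["left", "right"],
                             ["left", "right"]),
    "low_freq_enhancement": (["lfe"],
                             ["left", "right", "left_surround", "right_surround", "center"]),
}

def route(mode):
    first, second = ROUTING[mode]
    return {"first_part_units": first, "second_part_units": second}

print(route("dialogue_enhancement"))
```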
  • the electronic device when the electronic device only includes one first side sound unit, the electronic device supports low-frequency enhancement mode, loudness enhancement mode and/or dialogue enhancement mode. When the electronic device includes 2 or more first side sound units, the electronic device supports low frequency enhancement mode, loudness enhancement mode, surround enhancement mode and/or dialogue enhancement mode.
  • the electronic device only includes one first side sound unit.
  • in the low-frequency enhancement mode, the first part side sound unit can play the data of the low-frequency channel.
  • in the dialogue enhancement mode, the first part side sound unit can play the data of the center channel.
  • in the loudness enhancement mode, the first part side sound unit can play the data of the left channel and the data of the right channel at the same time.
  • the electronic device can superimpose the data of the left channel and the data of the right channel together, and use the first side sound unit to play the superimposed data of the left channel and the data of the right channel.
  • the electronic device may downmix the data of the left channel and the data of the right channel to obtain data including only the mono channel, and use the first part of the side sound unit to play the data of the mono channel.
  • the number of channels included in the audio data after downmixing is smaller than the number of channels included in the audio data before downmixing.
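• As an illustration of the two options just described (superimposing versus downmixing the left and right channels for a single sound-emitting unit), the following sketch uses NumPy; the 0.5 gain in the downmix is an assumed choice to avoid clipping, since the description does not specify one.

```python
import numpy as np

def superimpose(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Superimpose the left and right channel samples for playback on one unit."""
    return left + right

def downmix_to_mono(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Downmix two channels into one mono channel (the 0.5 gain is an assumption)."""
    return 0.5 * (left + right)
```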
• The first-part sound unit of the electronic device may be located in the A shell of the first part, in the B shell of the first part, in the cavity between the A shell and the B shell of the first part, or at the connection between the A shell and the B shell of the first part.
• In some examples, the first-part sound unit of the electronic device is located in the A shell of the first part. In this way, in the low-frequency enhancement mode, the electronic device plays the data of the low-frequency channel through the first-part sound unit, which not only enhances the low frequency but also reduces the noise caused by the vibration of the second part.
• In other examples, the first-part sound unit of the electronic device is located in the B shell of the first part. In this case, when the electronic device plays the data of the center channel through the first-part sound unit, the perceived sound source is located near the display screen, allowing the user to feel that the sound is coming from the screen and enhancing the immersion of video playback.
• When the electronic device includes two first-part sound units: in the low-frequency enhancement mode, the electronic device can use both first-part sound units to play the data of the low-frequency channel; in the dialogue enhancement mode, it can use both first-part sound units to play the data of the center channel.
• In the loudness enhancement mode, the electronic device uses the first-part sound unit located on the left side of the electronic device to play the data of the left channel, and the first-part sound unit located on the right side to play the data of the right channel; in the surround enhancement mode, the electronic device uses the first-part sound unit located on the left side to play the data of the left surround channel, and the first-part sound unit located on the right side to play the data of the right surround channel.
• The first-part sound units of the electronic device may all be located in the A shell of the first part, in the B shell of the first part, in the cavity between the A shell and the B shell of the first part, or at the connection between the A shell and the B shell of the first part.
• In some examples, the two first-part sound units are respectively located on the left and right sides of the first part and are symmetrical with respect to the mid-vertical plane of the central axis.
• In some examples, the first-part sound units of the electronic device are located in the A shell of the first part; in other examples, they are located in the B shell of the first part.
• When one sound-emitting unit is used to play data of multiple channels, the sound-emitting unit plays the data obtained by superimposing the multiple channels, or plays the data obtained by downmixing the multiple channels; this description of using one sound-emitting unit to play data of multiple channels will not be repeated below.
• When the electronic device includes three first-part sound units: in the low-frequency enhancement mode, the electronic device can use all the first-part sound units to play the data of the low-frequency channel; in the dialogue enhancement mode, it can use all the first-part sound units to play the data of the center channel.
• In the loudness enhancement mode, the electronic device can use the first-part sound unit on the left to play the data of the left channel, the first-part sound unit on the right to play the data of the right channel, and the remaining first-part sound unit to play the data of the left channel and the data of the right channel.
• In the surround enhancement mode, the electronic device uses the first-part sound unit on the left to play the data of the left surround channel, the first-part sound unit on the right to play the data of the right surround channel, and the remaining first-part sound unit to play the data of the left surround channel and the data of the right surround channel.
• The first-part sound units of the electronic device may be located in the A shell of the first part, in the B shell of the first part, in the cavity between the A shell and the B shell of the first part, or at the connection between the A shell and the B shell of the first part.
• In some examples, one first-part sound unit of the electronic device is located in the middle of the first part, on the mid-vertical plane of the central axis, and the other two first-part sound units are respectively located on the left and right sides of the first part, symmetrical with respect to the mid-vertical plane of the central axis.
• In some examples, the first-part sound units of the electronic device are located in the A shell of the first part; in other examples, they are located in the B shell of the first part.
• In some examples, one first-part sound unit of the electronic device is located in the A shell, and the other two first-part sound units are located in the B shell.
• In this case, in the low-frequency enhancement mode, the electronic device can play the data of the low-frequency channel only through the first-part sound unit located in the A shell. The electronic device can use the other two first-part sound units, with the one on the left playing the data of the left channel and the one on the right playing the data of the right channel, or with the one on the left playing the data of the left surround channel and the one on the right playing the data of the right surround channel; alternatively, it can use the other two first-part sound units to play the sound source data, or not use these two first-part sound units to play audio data. In this way, in the low-frequency enhancement mode, the sound units in the B shell do not play the data of the low-frequency channel, which weakens the vibration amplitude of the B shell.
• In some examples, one first-part sound unit is located in the A shell of the electronic device and the other two first-part sound units are located in the B shell. More generally, the single first-part sound unit can be located anywhere in the first part, and the other two first-part sound units can be located together in the A shell, the B shell, the cavity between the A shell and the B shell, or at the connection between the A shell and the B shell of the first part, with these two first-part sound units symmetrical with respect to the mid-vertical plane of the central axis.
• When the electronic device includes more first-part sound units, the electronic device can use all the first-part sound units to play the data of the low-frequency channel in the low-frequency enhancement mode, and use all the first-part sound units to play the data of the center channel in the dialogue enhancement mode.
• In the loudness enhancement mode, the electronic device uses one or more first-part sound units on the left to play the data of the left channel, one or more first-part sound units on the right to play the data of the right channel, and the remaining first-part sound units to play the data of the left channel and the data of the right channel; in the surround enhancement mode, the electronic device uses one or more first-part sound units on the left to play the data of the left surround channel, one or more first-part sound units on the right to play the data of the right surround channel, and the remaining first-part sound units to play the data of the left surround channel and the data of the right surround channel.
• In the low-frequency enhancement mode, the second-part sound units can play data of channels other than the data of the low-frequency channel; for example, the second audio data can include data of the left channel and the right channel, the left surround channel and the right surround channel, and/or the center channel.
• In the dialogue enhancement mode, the second-part sound units can play data of channels other than the data of the center channel; for example, the second audio data can include data of the left channel and the right channel, the left surround channel and the right surround channel, and/or the low-frequency channel.
• The second-part sound units can play the data of the left channel and the data of the right channel at the same time: the electronic device can superimpose the data of the left channel and the data of the right channel and use the second-part sound unit to play the superimposed data, or the electronic device can downmix the data of the left channel and the data of the right channel to obtain data including only a mono channel and use the second-part sound unit to play the mono data.
• When one sound-emitting unit is used to play data of multiple channels, the sound-emitting unit plays the data obtained by superimposing the multiple channels, or plays the data obtained by downmixing the multiple channels; this will not be repeated below.
• The second-part sound units may be located in the C shell of the second part of the electronic device, in the D shell of the second part, in the cavity between the C shell and the D shell of the second part, or at the junction of the C shell and the D shell of the second part.
• In the low-frequency enhancement mode, the electronic device can use the second-part sound units to play data of channels other than the data of the low-frequency channel; for example, the second audio data can include data of the left channel, the right channel, the left surround channel and the right surround channel, and/or the center channel.
• It can be understood that when the second audio data includes data of the left channel and the right channel, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left channel and the second-part sound unit located on the right side to play the data of the right channel; when the second audio data includes data of the left surround channel and the right surround channel, the electronic device uses the second-part sound unit located on the left side to play the data of the left surround channel and the second-part sound unit located on the right side to play the data of the right surround channel; when the second audio data includes data of the center channel, the electronic device uses all the second-part sound units to play the data of the center channel.
• In the dialogue enhancement mode, the electronic device can use the second-part sound units to play data of channels other than the data of the center channel; for example, the second audio data can include data of the left channel and the right channel, the left surround channel and the right surround channel, and/or the low-frequency channel.
• It can be understood that when the second audio data includes data of the left channel and the right channel, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left channel and the second-part sound unit located on the right side to play the data of the right channel; when the second audio data includes data of the left surround channel and the right surround channel, the electronic device uses the second-part sound unit located on the left side to play the data of the left surround channel and the second-part sound unit located on the right side to play the data of the right surround channel; when the second audio data includes data of the low-frequency channel, the electronic device uses all the second-part sound units to play the data of the low-frequency channel.
• In the loudness enhancement mode, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left channel, and the second-part sound unit located on the right side to play the data of the right channel.
• In the surround enhancement mode, the electronic device may use the second-part sound units to play data of channels other than the data of the left surround channel and the right surround channel; for example, the second audio data may include data of the left channel and the right channel, the low-frequency channel and/or the center channel.
• It can be understood that when the second audio data includes data of the left channel and the right channel, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left channel and the second-part sound unit located on the right side to play the data of the right channel; when the second audio data includes data of the low-frequency channel, the electronic device uses all the second-part sound units to play the data of the low-frequency channel; when the second audio data includes data of the center channel, the electronic device uses all the second-part sound units to play the data of the center channel.
• The second-part sound units may be located in the C shell of the second part of the electronic device, in the D shell of the second part, in the cavity between the C shell and the D shell of the second part, or at the junction of the C shell and the D shell of the second part.
  • the two second-part side sound-emitting units of the electronic device are respectively located on the left and right sides of the second part, and are symmetrical with respect to the middle vertical plane of the central axis.
• In the low-frequency enhancement mode, the electronic device can use the second-part sound units to play data of channels other than the data of the low-frequency channel; for example, the second audio data can include data of the left channel and the right channel, the left surround channel and the right surround channel, and/or the center channel.
• When the second audio data includes data of the left channel and the right channel, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left channel and the second-part sound unit located on the right side to play the data of the right channel; the electronic device may use the second-part sound unit located in the middle of the electronic device to play the data of the left channel and the data of the right channel, or may not use the second-part sound unit located in the middle to play audio data.
• When the second audio data includes data of the left surround channel and the right surround channel, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left surround channel and the second-part sound unit located on the right side to play the data of the right surround channel; the electronic device may use the second-part sound unit located in the middle of the electronic device to play the data of the left surround channel and the data of the right surround channel, or may not use the second-part sound unit located in the middle to play audio data.
• When the second audio data only includes data of the center channel, the electronic device uses all the second-part sound units to play the data of the center channel, or only uses the second-part sound unit located in the middle of the electronic device to play the data of the center channel.
• When the second audio data includes data of the left channel, the right channel and the center channel, the electronic device can use the second-part sound unit located on the left side of the electronic device to play the data of the left channel, the second-part sound unit located on the right side to play the data of the right channel, and the second-part sound unit located in the middle to play the data of the center channel. Alternatively, the electronic device uses the second-part sound unit located on the left side to play the data of the left channel, the second-part sound unit located on the right side to play the data of the right channel, and the second-part sound unit located in the middle to play the data of the left channel, the data of the right channel and the data of the center channel.
• In other examples, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left surround channel, the second-part sound unit located on the right side to play the data of the right surround channel, and the second-part sound unit located in the middle to play the data of the center channel.
• Alternatively, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left channel and the data of the left surround channel, the second-part sound unit located on the right side to play the data of the right channel and the data of the right surround channel, and the second-part sound unit located in the middle to play the data of the left channel and the data of the right channel.
• Alternatively, the electronic device uses the second-part sound unit located on the left side of the electronic device to play the data of the left channel and the data of the left surround channel, the second-part sound unit located on the right side to play the data of the right channel and the data of the right surround channel, and the second-part sound unit located in the middle to play the data of the center channel.
• In the dialogue enhancement mode, the electronic device can use the second-part sound units to play data of channels other than the data of the center channel; for example, the second audio data can include data of the left channel and the right channel, the left surround channel and the right surround channel, and/or the low-frequency channel.
• When the second audio data includes data of the left channel and the right channel, the electronic device can use the second-part sound unit on the left to play the data of the left channel, the second-part sound unit on the right to play the data of the right channel, and the remaining second-part sound unit to play the data of the left channel and the data of the right channel.
• In the surround enhancement mode, the electronic device uses the second-part sound units to play data of channels other than the data of the left surround channel and the right surround channel; for example, the second audio data may include data of the left channel and the right channel, the low-frequency channel and/or the center channel.
• The second-part sound units may be located in the C shell of the second part of the electronic device, in the D shell of the second part, in the cavity between the C shell and the D shell of the second part, or at the junction of the C shell and the D shell of the second part.
• In some examples, one second-part sound unit of the electronic device is located in the middle of the second part, on the mid-vertical plane of the central axis, and the other two second-part sound units are respectively located on the left and right sides of the second part, symmetrical with respect to the mid-vertical plane of the central axis.
• In the low-frequency enhancement mode, the electronic device can use the second-part sound units to play data of channels other than the data of the low-frequency channel; in the dialogue enhancement mode, the second-part sound units are used to play data of channels other than the data of the center channel.
• In the loudness enhancement mode, the electronic device uses one or more second-part sound units on the left to play the data of the left channel, one or more second-part sound units on the right to play the data of the right channel, and the one or more second-part sound units in the middle to play the data of the left channel and the data of the right channel; in the surround enhancement mode, the electronic device uses the second-part sound units to play data of channels other than the data of the left surround channel and the right surround channel.
  • the channels included in the first audio data can be described in Table 1.
  • the second audio data of the electronic device may be audio source data.
  • the second side sound unit of the electronic device can jointly play the audio source data.
• In each sound effect mode of the electronic device, the channels included in the first audio data may be as described in Table 1.
• The electronic device can play the specified audio based on the composition of the second-part sound units. For example, the electronic device can play the sound source data through the second-part sound units, or the electronic device can play the left channel data obtained based on the sound source data through one part of the second-part sound units and play the right channel data obtained based on the sound source data through the other part of the second-part sound units.
• The electronic device can also play each channel of the 5.1 channels obtained based on the sound source data through different second-part sound units. For example, the second-part sound units are divided into a left channel sound unit, a right channel sound unit, a left surround channel sound unit, a right surround channel sound unit, a center channel sound unit and a low-frequency channel sound unit.
• The audio data including the left channel is played through the left channel sound unit of the second part, the audio data including the right channel is played through the right channel sound unit of the second part, the audio data including the left surround channel is played through the left surround channel sound unit of the second part, the audio data including the right surround channel is played through the right surround channel sound unit of the second part, the audio data including the center channel is played through the center channel sound unit of the second part, and the audio data including the low-frequency channel is played through the low-frequency channel sound unit of the second part, and so on.
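• A minimal sketch of the per-channel routing just described, assuming each 5.1 channel has a dedicated second-part sound unit; the channel names and the `play` interface are hypothetical.

```python
# Hypothetical interface: `units` maps a channel name to an object with a
# play() method representing the dedicated second-part sound unit.
def route_5_1_to_units(audio_by_channel: dict, units: dict) -> None:
    for channel in ("left", "right", "left_surround",
                    "right_surround", "center", "lfe"):
        if channel in audio_by_channel and channel in units:
            units[channel].play(audio_by_channel[channel])
```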
• When the second-part sound units of the electronic device support playing mono audio, two-channel audio and multi-channel audio, the second-part sound units can play the specified audio in the form with the maximum number of channels supported by the second-part sound units, or the second-part sound units can play the specified audio in the form selected by the user.
• In some examples, the sound effect modes supported by the electronic device include an all-in-one enhancement mode. In the all-in-one enhancement mode, the second audio data obtained based on the sound source data includes the left channel, the right channel and the center channel, and the first audio data obtained based on the sound source data includes the left surround channel, the right surround channel and the center channel.
• In some examples, the sound effect modes of the electronic device include an original sound effect mode. In the original sound effect mode, the second-part sound units and the first-part sound units of the electronic device jointly play the audio data of the specified audio.
• In some examples, the electronic device can also provide an intelligent enhancement mode; for example, when the user has not selected a sound effect mode, the electronic device can set the sound effect mode to the intelligent enhancement mode.
• The intelligent enhancement mode is a combination of one or more of the sound effect modes supported by the electronic device. In this way, even when the user does not select a sound effect mode, the electronic device can process the played sound source data in the intelligent enhancement mode, so that the sound-emitting units of the electronic device play sounds together.
• For example, the sound effect modes supported by the electronic device include the dialogue enhancement mode, the surround enhancement mode, the loudness enhancement mode and the low-frequency enhancement mode.
• In some examples, the electronic device can determine the composition of the intelligent enhancement mode based on whether a video is being played. For example, the intelligent enhancement mode may be a combination of the surround enhancement mode and the dialogue enhancement mode; that is to say, the first audio data includes the left surround channel, the right surround channel and the center channel. Alternatively, the intelligent enhancement mode may be a combination of the loudness enhancement mode and the low-frequency enhancement mode; that is, the first audio data includes the left channel, the right channel and the low-frequency channel.
• In other examples, the electronic device can set the intelligent enhancement mode to be a combination of the low-frequency enhancement mode and the loudness enhancement mode.
• Since the intelligent enhancement mode is obtained by the electronic device based on the sound effect modes supported by the electronic device, the one or more sound effect modes determined by the electronic device always include the intelligent enhancement mode.
• When the data of the left surround channel is the same as the data of the left channel, and the data of the right surround channel is the same as the data of the right channel, the electronic device plays the data of the left surround channel and the data of the right surround channel.
• In some examples, the electronic device may display sound effect mode options corresponding to one or more sound effect modes supported by the electronic device; the one or more sound effect mode options include a first mode option, the one or more sound effect modes include the first mode, and the first mode option corresponds to the first mode.
• After the electronic device displays the one or more sound effect mode options including the first mode option, it may receive a second input selecting the first mode option to determine the sound effect mode of the electronic device.
• For example, when the first mode option is the loudness enhancement mode option, the first mode corresponding to the first mode option is the loudness enhancement mode, and in response to the second input selecting this option, the electronic device sets the sound effect mode to the loudness enhancement mode.
• Based on the first mode, the electronic device processes the audio data of the specified audio to obtain the audio data of each sound-emitting unit. The audio data of the specified audio may be called sound source data: when the specified audio is audio, the sound source data is that audio data; when the specified audio is a video, the sound source data is the audio data of the video, or the audio data corresponding to the video. That is, the electronic device can process the sound source data based on the first mode to obtain the audio data of each sound-emitting unit.
• When the sound effect mode of the electronic device is the first mode, the electronic device can parse the audio file of the specified audio and, while parsing the audio file, obtain the channels included in the sound source data. The electronic device can then process the data of each channel included in the sound source data based on the first mode to obtain the audio data required by each sound-emitting unit in the first mode. In this way, no matter which channels the sound source data includes, the electronic device can obtain the audio data required by each of its sound-emitting units when the sound effect mode is the first mode.
• For example, when the sound source data is mono, the electronic device may obtain audio data including 2 channels (a left channel and a right channel) based on the sound source data. The electronic device can obtain two pieces of mono data by directly copying the sound source data, use one piece as the audio data of the left channel and the other piece as the audio data of the right channel, and thus obtain sound source data including the left channel and the right channel, where the audio data of the left channel is the same as the audio data of the right channel.
• Alternatively, after copying the mono sound source data into two pieces, the electronic device can process the two pieces of mono sound source data through a specified algorithm to adjust one or more of the phase difference, amplitude and frequency between the two pieces of sound source data, and thereby obtain sound source data including the left channel and the right channel.
• For example, in the loudness enhancement mode, the electronic device can use audio data including two channels (the left channel and the right channel) obtained based on the sound source data as the first audio data and the second audio data. In the other sound effect modes, the audio data including 2 channels is upmixed to obtain audio data of 3 or more channels; the upmix operation can be used to increase the number of channels included in the audio data.
  • the electronic device can obtain the first audio data including the low-frequency channel, and the second audio data including the left channel and the right channel, the left surround channel and the right surround channel, and/or the center channel.
  • the electronic device can obtain the first audio data including the left surround channel and the right surround channel, and the second audio data including the left channel and the right channel, the low frequency channel and/or the center channel.
  • the electronic device may obtain the first audio data including the center channel, the second audio data including the left channel and the right channel, the left surround channel and the right surround channel, and/or the low frequency channel.
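• The patent does not disclose a specific upmixing algorithm; the sketch below shows one common passive-matrix style way to derive 5.1 channels from a two-channel source, purely as an assumed illustration of what an upmix operation can produce.

```python
import numpy as np

def upmix_stereo_to_5_1(left: np.ndarray, right: np.ndarray, fs: int = 48000) -> dict:
    """Crude passive-matrix style upmix (an assumed illustration, not the patented algorithm)."""
    center = 0.5 * (left + right)          # the sum signal tends to carry dialogue
    left_surround = 0.5 * (left - right)   # the difference signal approximates ambience
    right_surround = 0.5 * (right - left)
    # Very rough low-pass (moving average over ~10 ms) to derive an LFE channel.
    win = max(1, fs // 100)
    lfe = np.convolve(center, np.ones(win) / win, mode="same")
    return {"left": left, "right": right, "center": center,
            "left_surround": left_surround, "right_surround": right_surround,
            "lfe": lfe}
```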
• When the electronic device determines that the sound source data includes two channels (the left channel and the right channel), that is, when the sound source is a two-channel sound source, and the sound effect mode of the electronic device is the loudness enhancement mode, the electronic device can directly obtain the audio data containing the left channel and the right channel. In the other sound effect modes, the electronic device can upmix the audio data including 2 channels (the left channel and the right channel) to obtain audio data of 3 or more channels.
  • the electronic device can obtain the first audio data including the low-frequency channel, and the second audio data including the left channel and the right channel, the left surround channel and the right surround channel, and/or the center channel.
  • the electronic device can obtain the first audio data including the left surround channel and the right surround channel, and the second audio data including the left channel and the right channel, the low frequency channel and/or the center channel.
  • the electronic device may obtain the first audio data including the center channel, the second audio data including the left channel and the right channel, the left surround channel and the right surround channel, and/or the low frequency channel.
• The electronic device may determine that the sound source data includes 3 or more channels (including the left channel and the right channel), that is, that the sound source is a multi-channel sound source. In this case, if the sound effect mode of the electronic device is the loudness enhancement mode, the electronic device can directly obtain the first audio data and the second audio data including the left channel and the right channel; alternatively, the electronic device may downmix the sound source data to obtain sound source data including only the left channel and the right channel, and use the downmixed sound source data as the first audio data and the second audio data. The downmix operation can be used to reduce the number of channels included in the audio data.
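• For the downmix operation mentioned above, a common convention (assumed here, not taken from the patent) is an ITU-style stereo downmix in which the center and surround channels are folded into the left and right channels at reduced gain:

```python
import numpy as np

def downmix_5_1_to_stereo(ch: dict) -> tuple:
    """Fold 5.1 channels into stereo; the -3 dB gains are an assumed convention."""
    g = 1.0 / np.sqrt(2.0)
    left = ch["left"] + g * ch["center"] + g * ch["left_surround"]
    right = ch["right"] + g * ch["center"] + g * ch["right_surround"]
    return left, right  # the LFE channel is commonly omitted in this convention
```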
• In the low-frequency enhancement mode, the electronic device can use the data of the low-frequency channel as the first audio data and the data of the other channels of the sound source data as the second audio data. Alternatively, the electronic device can upmix the sound source data to obtain sound source data including more channels, use the low-frequency channel data of the upmixed sound source data as the first audio data, and use the data of the other channels of the upmixed sound source data as the second audio data. In this way, the second-part sound units of the electronic device can play data of more channels.
• When the sound source data does not include a low-frequency channel, the electronic device can perform an upmixing operation on the sound source data to obtain sound source data including a low-frequency channel; the number of channels of the upmixed sound source data is greater than the number of channels of the sound source data before upmixing. The electronic device may then use the data of the low-frequency channel as the first audio data, and the data of the other channels of the sound source data as the second audio data.
• In the dialogue enhancement mode, the electronic device can use the data of the center channel as the first audio data and the data of the other channels of the sound source data as the second audio data. Alternatively, the electronic device may upmix the sound source data to obtain sound source data including more channels, use the center channel data of the upmixed sound source data as the first audio data, and use the data of the other channels of the upmixed sound source data as the second audio data. In this way, when the number of second-part sound units of the electronic device is larger, the second-part sound units can play data of more channels.
• When the sound source data does not include a center channel, the electronic device can perform an upmixing operation on the sound source data to obtain sound source data including a center channel; the number of channels of the upmixed sound source data is greater than the number of channels of the sound source data before upmixing. The electronic device may then use the data of the center channel as the first audio data, and the data of the other channels of the sound source data as the second audio data.
• In the surround enhancement mode, the electronic device can use the data of the left surround channel and the right surround channel as the first audio data and the data of the other channels of the sound source data as the second audio data. Alternatively, the electronic device can upmix the sound source data to obtain sound source data including more channels, use the data of the left surround channel and the right surround channel of the upmixed sound source data as the first audio data, and use the data of the other channels of the upmixed sound source data as the second audio data. In this way, when the number of second-part sound units of the electronic device is larger, the second-part sound units can play data of more channels.
• When the sound source data does not include a left surround channel and a right surround channel, the electronic device can perform an upmixing operation on the sound source data to obtain sound source data including the left surround channel and the right surround channel; the number of channels of the upmixed sound source data is greater than the number of channels of the sound source data before upmixing. The electronic device may then use the data of the left surround channel and the right surround channel as the first audio data, and the data of the other channels of the sound source data as the second audio data.
• In some cases, the electronic device can use one second-part sound unit to play data of multiple channels, or the data of the other channels of the sound source data can be downmixed to obtain second audio data whose number of channels is less than or equal to the number of second-part sound units. In other examples, the electronic device may use the sound source data as the second audio data.
• The electronic device may determine whether to perform an upmixing or downmixing operation on sound source data including 3 or more channels based on the number of second-part sound units.
• When the number of other channels of the sound source data is less than the number of second-part sound units, the electronic device can perform an upmixing operation on the sound source data, or copy the data of some channels, to obtain sound source data whose number of other channels is the same as the number of second-part sound units, and use the second-part sound units to play the data of these other channels; in this case, multiple second-part sound units may be used to play the data of the same channel. Alternatively, the electronic device can perform an upmixing operation on the sound source data to obtain sound source data whose number of other channels is greater than the number of second-part sound units.
• When the number of other channels of the sound source data is equal to the number of second-part sound units, the electronic device may not perform an upmixing or downmixing operation on the sound source data.
• When the number of other channels of the sound source data is greater than the number of second-part sound units, the electronic device may use one second-part sound unit to play data of multiple channels, or the electronic device may perform a downmixing operation on the data of the other channels of the sound source data, or superimpose the data of some of the channels, to obtain second audio data whose number of channels is less than or equal to the number of second-part sound units.
• In addition, when the sound source data does not include the data of the channels required for the first audio data in the first mode, the electronic device needs to perform an upmixing operation on the sound source data to obtain sound source data including the channels required for the first audio data in the first mode. Afterwards, the electronic device can determine whether to perform a further upmixing or downmixing operation on the sound source data based on the number of second-part sound units.
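• The decision between upmixing, playing as-is, and downmixing or superimposing, based on the relation between the number of other channels and the number of second-part sound units, can be summarized as in the sketch below; the returned labels are illustrative only.

```python
def plan_second_part_mix(num_other_channels: int, num_second_units: int) -> str:
    """Illustrative decision helper; the returned labels are not from the patent."""
    if num_other_channels < num_second_units:
        return "upmix_or_duplicate"    # raise the channel count to match the units
    if num_other_channels == num_second_units:
        return "play_as_is"            # one channel per second-part sound unit
    return "downmix_or_superimpose"    # reduce channels, or let one unit play several
```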
  • the electronic device may determine whether to perform an upmix or downmix operation on sound source data including 3 or more channels based on the number of sound-emitting units of the electronic device.
• The following takes the case where the sound source data is processed into 5.1 channels (left channel, right channel, left surround channel, right surround channel, low-frequency channel and center channel) as an example to explain in detail how the electronic device obtains the audio data of each sound-emitting unit based on the first mode. The process of the electronic device obtaining the audio data of each sound-emitting unit based on the first mode is shown in Figure 5:
  • the electronic device obtains the channel information of the sound source data.
  • the electronic device acquires the audio channels included in the audio source data.
  • the electronic device can obtain which channels the audio source data includes when decoding the audio file according to the decoding format of the audio file of the specified audio. It can be understood that when the electronic device obtains the sound channels included in the sound source data, it can obtain the number of sound channels included in the sound source data.
• The electronic device determines whether the number of channels of the sound source data is less than 2. When the electronic device determines that the number of channels of the sound source data is less than 2, step S504 is executed; when the electronic device determines that the number of channels of the sound source data is greater than or equal to 2, step S503 is executed.
  • the electronic device determines whether the number of channels of the audio source data is greater than 2. When the electronic device determines that the number of channels of the sound source data is less than or equal to 2, step S505 can be performed; when the electronic device determines that the number of channels of the sound source data is greater than 2, step S506 can be performed.
• The electronic device processes the sound source data to obtain two-channel sound source data, where the two-channel sound source data includes the left channel and the right channel. When the number of channels is less than 2, the electronic device determines that the sound source data is mono sound source data, and the electronic device can copy the mono sound source data to obtain two-channel sound source data.
• For example, the electronic device can directly copy the mono sound source data to obtain two pieces of mono sound source data, use one piece as the audio data of the left channel and the other piece as the audio data of the right channel, and thus obtain sound source data including the left channel and the right channel, where the audio data of the left channel is the same as the audio data of the right channel.
• Alternatively, after the electronic device copies and obtains the two pieces of mono sound source data, it can process the two pieces of mono sound source data through a specified algorithm to adjust one or more of the phase difference, amplitude and frequency between the two pieces of sound source data, and thereby obtain sound source data including the left channel and the right channel.
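• A minimal sketch of the mono-to-stereo duplication just described; the gain and delay parameters stand in for the unspecified algorithm that adjusts amplitude and phase difference between the two copies, and their default values are assumptions.

```python
import numpy as np

def mono_to_stereo(mono: np.ndarray, gain_db: float = 0.0, delay_samples: int = 0):
    """Duplicate mono source data into left/right channels (illustrative sketch)."""
    left = mono.copy()
    right = mono.copy() * (10 ** (gain_db / 20.0))   # optional amplitude adjustment
    if delay_samples:
        # Optional phase (delay) adjustment between the two copies.
        right = np.concatenate([np.zeros(delay_samples), right])[: len(mono)]
    return left, right
```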
• Based on the first mode, the electronic device processes the sound source data including the left channel and the right channel to obtain the audio data of each sound-emitting unit.
• When the sound effect mode is the surround enhancement mode, the electronic device can process the two-channel sound source data through relevant algorithms to obtain the first audio data including the left surround channel and the right surround channel. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the left surround channel through the first-part sound unit 33, play the data of the right surround channel through the first-part sound unit 34, play the data of the left channel through the second-part sound unit 31, and play the data of the right channel through the second-part sound unit 32. Since the electronic device also includes the first-part sound unit 35, the electronic device can control the first-part sound unit 35 to play the data of the left surround channel and the right surround channel, or not use the first-part sound unit 35 to play audio data. Alternatively, the electronic device can play the sound source data through the first-part sound unit 35.
• When the sound effect mode is the low-frequency enhancement mode, the electronic device can extract the data of the low-frequency channel from the two-channel sound source data to obtain the first audio data including the low-frequency channel. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the low-frequency channel through the first-part sound unit 33, the first-part sound unit 34 and the first-part sound unit 35, while the second-part sound unit 31 plays the data of the left channel and the second-part sound unit 32 plays the data of the right channel.
• When the sound effect mode is the dialogue enhancement mode, the electronic device can process the two-channel sound source data through relevant algorithms, extract the data of the center channel from the two-channel sound source data, and obtain the first audio data including the center channel. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the center channel through the first-part sound unit 33, the first-part sound unit 34 and the first-part sound unit 35, while the second-part sound unit 31 plays the data of the left channel and the second-part sound unit 32 plays the data of the right channel.
• When the sound effect mode is the loudness enhancement mode, the electronic device can use the two-channel sound source data as both the first audio data and the second audio data. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the left channel through the first-part sound unit 33, play the data of the right channel through the first-part sound unit 34, and have the second-part sound unit 31 play the data of the left channel and the second-part sound unit 32 play the data of the right channel. Since the electronic device also includes the first-part sound unit 35, the electronic device can control the first-part sound unit 35 to play the data of the left channel and the right channel, or not use the first-part sound unit 35 to play audio data.
• In other examples, the electronic device can perform an upmixing operation on the two-channel sound source data to obtain sound source data including 5.1 channels, and then obtain the first audio data and the second audio data from it. The electronic device can use the mono or two-channel data as the second audio data, or the electronic device can process the mono or two-channel data to obtain second audio data of one or more channels.
  • 5.1 channels include left channel, right channel, center channel, left surround channel, right surround channel and low-frequency channel.
• The electronic device can determine whether the number of channels of the sound source data is less than or equal to 5.1. The electronic device may execute step S507 when it determines that the number of channels of the sound source data is less than or equal to 5.1, and may execute step S509 when it determines that the number of channels of the sound source data is greater than 5.1.
• A number of channels less than 5.1 can be understood to mean that the number of channels included in the sound source data is less than the number of channels included in 5.1 channels. For example, when the sound source data has 4 channels, 4.1 channels, 3 channels, 3.1 channels or 2.1 channels, the number of channels is less than 5.1. Here, 4-channel can be understood as the sound source data including the left channel, the right channel, the left surround channel and the right surround channel; 4.1-channel can be understood as the sound source data including the left channel, the right channel, the low-frequency channel, the left surround channel and the right surround channel, and so on.
• A number of channels greater than 5.1 can be understood to mean that the number of channels included in the sound source data is greater than the number of channels included in 5.1 channels. For example, when the sound source data has 7.1 channels or 10.1 channels, the number of channels is greater than 5.1. Here, 7.1-channel can be understood as the sound source data including the left channel, the right channel, the left front surround channel, the right front surround channel, the left rear surround channel, the right rear surround channel, the center channel and the low-frequency channel.
• The electronic device can determine whether the number of channels of the sound source data is equal to 5.1. The electronic device may execute step S510 when it determines that the number of channels of the sound source data is equal to 5.1, and may execute step S508 when it determines that the number of channels of the sound source data is not equal to 5.1.
• When the electronic device determines that the number of channels of the sound source data is less than 5.1, it can upmix the sound source data through a specified upmixing algorithm to obtain 5.1-channel sound source data. When the electronic device determines that the number of channels of the sound source data is greater than 5.1, it can downmix the sound source data through a specified downmixing algorithm to obtain 5.1-channel sound source data.
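• Taken together, steps S502 to S509 amount to bringing the sound source data to a known channel count before per-mode routing. The sketch below is a simplified reading of that flow; the three callables stand in for the unspecified duplication, upmixing and downmixing algorithms.

```python
def prepare_source(source, num_channels, to_stereo, upmix_to_5_1, downmix_to_5_1):
    """Simplified sketch of the channel-count decisions described above."""
    if num_channels < 2:              # mono source: duplicate into left/right (S504)
        return to_stereo(source)      # then processed per mode as two-channel data
    if num_channels == 2:             # two-channel source: processed per mode (S505)
        return source
    if num_channels < 6:              # e.g. 2.1 ... 4.1 sources: upmix to 5.1 (S508)
        return upmix_to_5_1(source)
    if num_channels > 6:              # e.g. 7.1 sources: downmix to 5.1 (S509)
        return downmix_to_5_1(source)
    return source                     # already 5.1; S510 then routes per mode
```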
• Based on the first mode, the electronic device processes the 5.1-channel sound source data to obtain the audio data of each sound-emitting unit.
• When the sound effect mode is the surround enhancement mode, the electronic device can extract the data of the left surround channel and the data of the right surround channel from the 5.1-channel sound source data to obtain the first audio data. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the left surround channel through the first-part sound unit 33, play the data of the right surround channel through the first-part sound unit 34, play the data of the left channel through the second-part sound unit 31, and play the data of the right channel through the second-part sound unit 32. Since the electronic device also includes the first-part sound unit 35, the electronic device can control the first-part sound unit 35 to play the data of the low-frequency channel and/or the center channel, or not use the first-part sound unit 35 to play audio data. In addition, the electronic device can also play the data of the low-frequency channel and/or the center channel through the second-part sound unit 31 and the second-part sound unit 32.
• It should be noted that when the unprocessed sound source data already includes the left surround channel, the right surround channel and the other channels, the electronic device can directly extract the data of the corresponding channels from the sound source data to obtain the second audio data and the first audio data.
• When the sound effect mode is the low-frequency enhancement mode, the electronic device can extract the data of the low-frequency channel from the processed sound source data to obtain the first audio data including the low-frequency channel. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the low-frequency channel through the first-part sound unit 33, the first-part sound unit 34 and the first-part sound unit 35, and the second-part sound unit 31 plays the data of the left channel and the second-part sound unit 32 plays the data of the right channel, and/or the second-part sound unit 31 plays the data of the left surround channel and the second-part sound unit 32 plays the data of the right surround channel, and/or the data of the center channel is played through the second-part sound unit 31 and the second-part sound unit 32.
• It should be noted that when the unprocessed sound source data already includes the low-frequency channel and the other channels, the electronic device can directly extract the data of the corresponding channels from the sound source data to obtain the second audio data and the first audio data.
• When the sound effect mode is the dialogue enhancement mode, the electronic device can extract the data of the center channel from the processed sound source data to obtain the first audio data including the center channel. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the center channel through the first-part sound unit 33, the first-part sound unit 34 and the first-part sound unit 35, and the second-part sound unit 31 plays the data of the left channel and the second-part sound unit 32 plays the data of the right channel, and/or the second-part sound unit 31 plays the data of the left surround channel and the second-part sound unit 32 plays the data of the right surround channel, and/or the data of the low-frequency channel is played through the second-part sound unit 31 and the second-part sound unit 32.
• It should be noted that when the unprocessed sound source data already includes the center channel and the other channels, the electronic device can directly extract the data of the corresponding channels from the sound source data to obtain the second audio data and the first audio data.
• When the sound effect mode is the loudness enhancement mode, the electronic device can process the 5.1-channel sound source data through a related downmixing algorithm and downmix it into sound source data including only the left channel and the right channel. The electronic device may use the processed sound source data as both the second audio data and the first audio data. For example, when the electronic device includes the sound units shown in Figure 3A, the electronic device can play the data of the left channel through the first-part sound unit 33 and the data of the right channel through the first-part sound unit 34, and play the data of the left channel through the second-part sound unit 31 and the data of the right channel through the second-part sound unit 32. The electronic device can control the first-part sound unit 35 to play the data of the left channel and the right channel, or not use the first-part sound unit 35 to play audio data.
• It should be noted that when the unprocessed sound source data already includes the left channel and the right channel, the electronic device can directly extract the data of the corresponding channels from the sound source data to obtain the second audio data and the first audio data.
• In the above sound effect modes, the electronic device can use the sound source data, the 5.1-channel data, or the data of the channels other than the channels of the first audio data as the second audio data; alternatively, the electronic device can process the sound source data or the 5.1-channel data to obtain second audio data including one or more channels.
• In other examples, the electronic device may process the sound source data based on all the supported sound effect modes to obtain the data of all the channels used by all the sound effect modes. The electronic device can then use the first-part sound units and the second-part sound units to play the data of the corresponding channels based on the current sound effect mode. In this way, when the sound effect mode of the electronic device changes, the electronic device does not need to process the sound source data again according to the changed sound effect mode, and can directly play the audio data corresponding to the changed sound effect mode.
• For example, the electronic device supports sound effect modes including the surround enhancement mode, the dialogue enhancement mode, the loudness enhancement mode and the low-frequency enhancement mode. The electronic device processes the sound source data to obtain left channel data, right channel data, left surround channel data, right surround channel data, center channel data and low-frequency channel data. When the electronic device is in the surround enhancement mode, the data of the left surround channel and the data of the right surround channel are played through the first-part sound units; when the electronic device is in the low-frequency enhancement mode, the first-part sound units are used to play the data of the low-frequency channel, and so on.
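• A sketch of the idea of computing all channel data once and then only re-routing it when the sound effect mode changes; the mode names, channel labels and dictionary layout are assumptions for illustration.

```python
# Assumed mapping from sound effect mode to the channels the first-part units play.
MODE_TO_FIRST_PART = {
    "surround_enhancement": ["left_surround", "right_surround"],
    "low_frequency_enhancement": ["lfe"],
    "dialogue_enhancement": ["center"],
    "loudness_enhancement": ["left", "right"],
}

def first_part_channels(cached: dict, mode: str) -> dict:
    """Pick the already-computed channel data for the first-part units.

    `cached` holds the channel data derived once from the sound source data,
    so switching modes does not require reprocessing the source."""
    return {name: cached[name] for name in MODE_TO_FIRST_PART[mode] if name in cached}
```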
• It can be understood that the audio data of each sound-emitting unit includes at least part of the content of the sound source data, where "at least part of the content" covers two situations: a part of the content of the sound source data, and the entire content of the sound source data. A part of the content of the sound source data may be the data of some channels and/or the data of some frequency bands in the sound source data; "some" here includes not only a subset but may also be all of them.
• In some examples, the electronic device can also directly extract from the sound source data the data of the channels required by each sound-emitting unit in the sound effect mode. In other examples, the electronic device can process the sound source data through a specified algorithm to obtain the left channel data, the right channel data, and so on. This is not limited in the embodiments of the present application.
  • Each sound-emitting unit of the electronic device plays its own audio data at the same time.
• after the electronic device obtains the audio data of each sound-emitting unit, it can use all the sound-emitting units to play their respective audio data.
  • the electronic device can use a specified algorithm (for example, the multi-channel algorithm shown in Figure 5) to process the sound source data, and send the audio data of each sounding unit obtained by processing the sound source data to the audio driver.
• the audio driver can convert the audio data in digital signal form into audio data in analog signal form, and then send the audio data in analog signal form to the audio power amplifier chip of each sound-emitting unit.
• the audio power amplifier chip of each sound-emitting unit can amplify the audio data in analog signal form, and each sound-emitting unit plays the amplified audio data in analog signal form.
  • the audio driver can send audio data to the audio power amplifier chip through the integrated circuit built-in audio bus (inter-IC sound, I2S).
  • a sound-generating unit may be composed of one or more speakers.
  • One sound-generating unit can be controlled by one audio driver, or multiple sound-generating units can be controlled by one audio driver, which is not limited in the embodiments of the present application.
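• To make the playback path above concrete, here is a minimal sketch, assuming hypothetical class and method names (they do not correspond to a real audio driver or amplifier API), of digital audio data being converted to analog form, amplified, and handed to a sound-emitting unit:

```python
# Illustrative playback pipeline: the audio driver receives digital audio
# (e.g. over an I2S bus), converts it to analog form, and the power amplifier
# chip amplifies it before the sound-emitting unit plays it.

class AudioPowerAmp:
    def __init__(self, gain):
        self.gain = gain

    def amplify(self, analog_frame):
        # Amplify the analog-form audio data for one sound-emitting unit.
        return [sample * self.gain for sample in analog_frame]

class AudioDriver:
    def __init__(self, amp):
        self.amp = amp

    def to_analog(self, digital_frame):
        # Placeholder DAC step: scale 16-bit integer samples to [-1.0, 1.0].
        return [s / 32768.0 for s in digital_frame]

    def play(self, digital_frame):
        analog = self.to_analog(digital_frame)
        amplified = self.amp.amplify(analog)
        return amplified  # in a real device this would drive the speaker
```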
• the electronic device may receive the input of the user selecting the second sound effect mode, and in response to the input, process the audio data of the specified audio based on the second sound effect mode to obtain the audio data played by each sound-emitting unit of the electronic device, and control the multiple sound-emitting units to play their respective audio data at the same time to realize the playback of the specified audio.
• the electronic device uses a multi-channel algorithm to obtain the audio data of the different sound-emitting units, combines sound-emitting units at different positions (for example, the first partial side sound unit and the second partial side sound unit) to produce sound jointly, and amplifies each unit's audio data through its audio power amplifier chip before transmitting it independently to that unit, which can significantly improve the sound field envelopment and immersion when the electronic device plays audio.
• after the electronic device obtains the first audio data, it can also perform an audio processing operation on the first audio data, and then play the processed first audio data.
  • the audio processing operation may be to adjust the loudness of the first audio data.
  • the electronic device may identify small signal audio in the first audio data based on the amplitude of the first audio data, and increase the loudness of the small signal audio.
• the small-signal audio is an audio signal in the first audio data whose loudness is -35 dB or lower. In this way, since the loudness of small-signal audio is small and not obvious to human ears, increasing the loudness of small-signal audio helps the user hear the small-signal audio more clearly and enhances the user's perception of audio details.
• the small-signal audio may be sounds triggered by environmental changes around the game character in the game scene (for example, the rustling of the game character passing through grass, the game character's footsteps, the sound of a car driving past, etc.).
  • Electronic devices can increase the volume of small-signal audio, enhance game immersion, and improve user gaming experience.
• when the first audio data is audio data provided by a video application, the small-signal audio may be environmental sounds in the video (for example, insects, birds, wind, etc.).
  • the electronic device may be in a loudness enhancement mode, and the electronic device may perform the audio processing operation on the first audio data. It should be noted that the audio data other than the small signal audio in the first audio data remains unchanged.
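• A minimal sketch of the small-signal boost described above, assuming the audio is a sequence of float samples in [-1.0, 1.0], that "small-signal" means roughly -35 dBFS or quieter, and that a simple per-sample gain (rather than a frame-based detector) is acceptable; the threshold and gain values are illustrative:

```python
import math

SMALL_SIGNAL_DB = -35.0  # threshold below which audio counts as small-signal
BOOST_GAIN = 2.0         # illustrative linear gain applied to small signals

def to_db(amplitude):
    return 20.0 * math.log10(max(abs(amplitude), 1e-9))

def boost_small_signals(samples):
    out = []
    for s in samples:
        if to_db(s) <= SMALL_SIGNAL_DB:
            # Boost quiet content; clamp to the valid sample range.
            out.append(max(-1.0, min(1.0, s * BOOST_GAIN)))
        else:
            out.append(s)  # louder audio is left unchanged
    return out
```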
• when playing a video, the electronic device can use a video recognition algorithm to identify the positions of multiple objects (for example, people, animals or objects, etc.) in the video picture, and can extract the sound data of each object from the audio source data of the video.
  • the electronic device can use one or more sound-emitting units closest to the object in the video frame to play the sound of the object. In this way, sound units at different positions can be used to play the sounds of different objects in the video screen, enhancing video immersion.
• based on the position of each object in the video picture and the sound data of each object, the electronic device can obtain first audio data including the sound data of one or more objects in the video picture that are farther away from the display screen, and obtain second audio data including the sound data (that is, the center channel data) of one or more objects that are closer to the display screen.
• the electronic device plays the human voices of objects farther away from the display screen through the first partial side sound unit, and plays the human voices of objects closer to the display screen through the second partial side sound unit, so that the user can perceive human voices at different positions, improving the user's immersion in watching videos.
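• The following is a minimal sketch of this depth-based split, assuming each detected object carries a normalized depth value and its extracted sound data; the threshold and data layout are assumptions, since the embodiment only states that farther objects go to the first audio data and closer objects to the second:

```python
DEPTH_THRESHOLD = 0.5  # normalized depth; larger means farther from the display

def split_by_depth(objects):
    """objects: list of dicts with 'depth' in [0, 1] and 'sound' (samples)."""
    first_audio = []   # played by the first partial side sound units
    second_audio = []  # played by the second partial side sound units
    for obj in objects:
        if obj["depth"] > DEPTH_THRESHOLD:
            first_audio.append(obj["sound"])
        else:
            second_audio.append(obj["sound"])
    return first_audio, second_audio
```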
• based on the position of each object in the video picture and the object's sound data, the electronic device can obtain first audio data including the sound data of one or more objects located in the upper part of the video picture, and obtain second audio data including the sound data of one or more objects located in the lower part of the video picture.
• the electronic device plays the sound of objects in the upper part of the video picture through the first partial side sound unit, and plays the sound of objects in the lower part of the video picture through the second partial side sound unit, so that the user can perceive sounds from different positions in the up-and-down direction, improving the user's immersion in watching the video.
  • the second audio data includes the sound data of at least one object
  • the first audio data includes the sound data of all objects except the object corresponding to the sound data of the second audio data.
  • the electronic device can put the sound data of the object closest to the display screen into the second audio data, and put the sound data of the other two objects into the first audio data.
  • the electronic device can put the sound data of the two objects closest to the display screen into the second audio data, and put the sound data of the other object into the first audio data. This is not limited in the embodiment of the present application.
  • the sound source data includes multiple center channels, and the multiple center channels correspond to objects in the video picture one-to-one.
  • the data of a center channel is the sound data of an object in the video picture.
• the electronic device can identify the position of each object in the video picture, obtain first audio data including the sound data (that is, the center channel data) of one or more objects in the video picture that are farther away from the display screen, and obtain second audio data including the sound data (that is, the center channel data) of one or more objects that are closer to the display screen.
  • the electronic device can determine whether the played video is related to the music scene when playing the audio source data of the video.
  • music scenes can include but are not limited to concert scenes, music video (MV) scenes, singing competition scenes, performance scenes, etc.
  • the electronic device can set the sound effect mode to a low-frequency enhancement mode or a loudness enhancement mode when it is determined that the video is related to a music scene.
• when the electronic device determines that the video is unrelated to a music scene, it can set the sound effect mode to the dialogue enhancement mode or the surround enhancement mode.
• the electronic device may determine whether the video is related to a music scene based on the name of the video. For example, when the name of the video includes, but is not limited to, music-related words such as "singing", "music", "playing", "song", etc., it is determined that the video is related to the music scene.
• the electronic device can use an image recognition algorithm to identify whether there are people in the video picture performing actions such as singing or playing musical instruments, and determine, based on whether a person in the video picture is playing or singing, whether the video is related to a music scene.
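• A rough sketch of the name-based check described above; the keyword list is illustrative (the embodiment only gives examples such as "singing", "music", "playing" and "song"), and the mode names are the ones used elsewhere in this description:

```python
MUSIC_KEYWORDS = ("singing", "music", "playing", "song")

def is_music_scene(video_name):
    name = video_name.lower()
    return any(keyword in name for keyword in MUSIC_KEYWORDS)

def choose_sound_effect_mode(video_name):
    # Music-related videos favor low-frequency or loudness enhancement;
    # other videos favor dialogue or surround enhancement.
    if is_music_scene(video_name):
        return "low_frequency_enhancement"
    return "dialogue_enhancement"
```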
  • the audio source data may be audio data in a video file, and the audio source data may also be audio data in an audio file corresponding to the video file.
  • the sound source data may be stored in the memory of the electronic device, or the sound source data may be obtained by the electronic device from other electronic devices (for example, a server, etc.).
  • one or more top sound units are included above the first portion of the electronic device.
  • the electronic device can play sky sounds through the one or more top sound units.
  • sky sound is the sound emitted by the specified sky object in the audio.
  • the specified sky objects can be objects such as airplanes, birds, lightning, etc.
  • users can hear the height information of specified sky objects in the audio, such as the sound of airplanes flying over the sky, the sound of thunder, etc., increasing the user's sense of immersion in listening to the audio.
  • one or more bottom sound units are included underneath the first portion of the electronic device.
  • the electronic device can play ground sounds through the one or more bottom sound units.
  • the ground sound is the sound emitted by the specified ground object in the audio.
  • the specified ground objects can be objects such as insects, bushes, and objects in contact with the ground. In this way, users can hear the height information of specified ground objects in the audio, such as the sound of insects, footsteps, rain, etc., increasing the user's sense of immersion when listening to the audio.
  • the electronic device displays one or more sound effect mode options, and the one or more sound effect mode options correspond to one or more sound effect modes of the electronic device.
  • one or more sound effect mode options include a first mode option
  • one or more sound effect modes include a first mode.
  • the first mode option corresponds to the first mode.
  • the electronic device may receive a user's input for the first mode option, and in response to the input, set the sound effect mode of the electronic device to the first mode. In this way, the electronic device can receive user input, select different sound effect modes, and when playing audio, process the audio data of the audio based on the sound effect mode of the electronic device.
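• As an illustration of how a selected sound effect mode option could drive later playback, here is a hedged sketch with hypothetical class and function names (the patent does not define such an API):

```python
class SoundEffectController:
    SUPPORTED_MODES = ("loudness_enhancement", "surround_enhancement",
                       "dialogue_enhancement", "low_frequency_enhancement")

    def __init__(self):
        # An assumed default; the text only says the device has an initial mode.
        self.current_mode = "loudness_enhancement"

    def on_mode_option_selected(self, mode):
        # Called when the user selects the corresponding sound effect mode option.
        if mode not in self.SUPPORTED_MODES:
            raise ValueError(f"unsupported sound effect mode: {mode}")
        self.current_mode = mode

    def play(self, source_data, process_fn):
        # process_fn stands in for the per-mode multi-channel processing
        # algorithm that yields the audio data of each sound-emitting unit.
        return process_fn(source_data, self.current_mode)
```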
  • Desktop 601 may include, but is not limited to, icons for one or more applications (eg, music icon 602).
  • a taskbar is also displayed below the icon of the one or more applications.
  • the taskbar may include one or more function controls, and the function controls may be used to trigger the electronic device to display a corresponding function window.
  • the one or more functional controls include functional controls 603 .
  • the electronic device may receive user input for the function control 603, and in response to the input, display the function window 611 as shown in FIG. 6B.
  • the function window 611 may include, but is not limited to, a sound effect mode icon 612, a volume adjustment bar, etc.
  • the sound effect mode icon 612 may be used to trigger the electronic device to display the sound effect mode options corresponding to the sound effect modes supported by the electronic device.
  • the volume adjustment bar can be used to adjust the volume of audio played by electronic devices.
  • the electronic device may receive user input for the sound effect mode icon 612, and in response to the input, display the sound effect mode selection window 621 as shown in FIG. 6C.
  • the sound effect mode selection window 621 includes one or more sound effect mode options.
  • the one or more sound effect mode options may include, but are not limited to, sound effect mode option 622, sound effect mode option 623, sound effect mode option 624, and sound effect mode option 625.
  • the sound effect mode option 622 can be used to trigger the electronic device to set the sound effect mode to the loudness enhancement mode.
  • the sound effect mode option 623 can be used to trigger the electronic device to set the sound effect mode to the surround enhanced mode.
  • the sound mode option 624 may be used to trigger the electronic device to set the sound mode to a dialogue enhancement mode.
  • Sound mode option 625 may be used to trigger the electronic device to set the sound mode to a rhythm enhancement mode.
  • the one or more sound effect mode options may include the name of the corresponding sound effect mode.
  • each sound effect mode please refer to the embodiment shown in FIG. 4 and will not be described again here.
  • the sound effect mode option 622 is selected, and the sound effect mode of the electronic device is the loudness enhancement mode.
  • the electronic device can receive the user's input on the function control 603 and display the sound effect mode selection window 621 as shown in Figure 6C.
• the electronic device includes a first partial side sound unit 33 located on the left side of the display screen, a first partial side sound unit 34 located on the right side of the display screen, a second partial side sound unit 31 located on the left side of the keyboard, and a second partial side sound unit 32 located on the right side of the keyboard; this configuration is taken as an example to illustrate the process of the electronic device playing songs.
  • the electronic device may receive input for the music icon 602 and display the music playing interface 630.
  • the music playback interface 630 may include but is not limited to song names, playback controls, etc. Wherein, the playback control can be used to trigger the electronic device to play the song indicated by the song name.
• after the electronic device receives the input for the playback control, in response to the input, it processes the sound source data (i.e., the audio data of the song) based on the loudness enhancement mode to obtain the audio data of each sound-emitting unit, where the electronic device processes the sound source data based on its current sound effect mode.
• after the electronic device obtains the audio data of each sound-emitting unit, it can use each sound-emitting unit to start playing its respective audio data at the same time, as shown in Figure 6D.
• the electronic device plays the audio data of the left channel through the first partial side sound unit 33 and the second partial side sound unit 31, and plays the audio data of the right channel through the first partial side sound unit 34 and the second partial side sound unit 32. It can be understood that when the electronic device is playing a song, the electronic device cancels the display of the playback control and displays the pause control.
• the pause control can be used to trigger the electronic device to stop playing the audio data.
  • electronic devices can provide users with multiple sound effect modes to achieve different audio playback effects and improve user experience.
  • the sound effect modes supported by the electronic device include but are not limited to dialogue enhancement mode and/or low frequency enhancement mode.
  • the electronic device may only display the sound effect mode options corresponding to the dialogue enhancement mode and/or the low frequency enhancement mode.
  • the electronic device includes at least two first partial side sound units
  • the at least two first partial side sound units include a first partial side sound unit 33 and a first partial side sound unit 34
  • the two first partial side sound units may be located on the left and right sides of the display screen of the electronic device.
  • the sound effect modes supported by the electronic device may include, but are not limited to, loudness enhancement mode, low frequency enhancement mode, surround enhancement mode and/or dialogue enhancement mode.
  • the electronic device can display sound effect mode options corresponding to the sound effect modes supported by the electronic device.
  • the electronic device can play the left channel data obtained from the audio data of the specified audio through the first side sound unit located on the left side of the display screen.
  • the first part of the side sound unit on the right side plays the data of the right channel obtained from the audio data of the specified audio.
  • the electronic device can also display a sliding bar when displaying one or more sound effect mode options.
• the sliding bar can be used to control the electronic device to process the sound source data based on the value of the sliding bar to obtain the first audio data, that is, to adjust the first audio data according to that value.
• the electronic device can receive the user's adjustment of the slide bar to set the strength of the effect of the sound effect mode, so that a suitable effect can be selected and realized.
• the value of the loudness enhancement mode slide bar can be used as the loudness factor: the larger the value of the slide bar (the value of the loudness factor), the higher the volume at which the electronic device plays the first audio data; the smaller the value of the slide bar (the value of the loudness factor), the lower the volume.
• the larger the value of the surround factor, the farther the virtual sound source simulated by the surround channel data is from the user, and the more obvious the surround effect; that is, as the surround factor increases, when the user listens to the electronic device playing the surround channel data, the perceived distance between the user and the sound source also increases.
• the greater the value of the slide bar, the greater the value of the surround factor, and the more obvious the surround effect.
• the smaller the value of the slide bar, the smaller the value of the surround factor, and the less obvious the surround effect.
  • the loudness, surround sound range, vibration intensity, etc. of the first audio data played by the electronic device can be adjusted according to the percentage value.
  • the value of the slider bar is always greater than zero, and the electronic device processes the sound source data according to the sound effect mode and the corresponding value of the slider bar.
• the slide bar of the electronic device is divided into ten equal parts, and the electronic device may set the initial value of the slide bar of each sound effect mode to 50%. If the value of the slide bar is adjusted to be greater than 50%, the influence factor used by the channel processing algorithm of the corresponding sound effect mode is increased; if the value of the slide bar is adjusted to be less than 50%, the influence factor used by the channel processing algorithm of the corresponding sound effect mode is decreased.
• in the surround enhancement mode, the influence factor can be understood as the surround factor in the surround channel processing algorithm.
• when the value of the slide bar is adjusted to be greater than 50%, the value of the surround factor in the surround channel processing algorithm can be increased according to a preset correspondence to enhance the expansion effect of the surround channel; the expansion effect is then stronger than the effect of the sound effect mode when the slide bar value is 50%.
• when the value of the slide bar is adjusted to 10%, the value of the surround factor in the surround channel processing algorithm can be reduced according to the preset correspondence, weakening the expansion effect of the surround channel; the effect is then weaker than that of the sound effect mode when the slide bar value is 50%.
• the first partial side sound unit and the second partial side sound unit of the electronic device playing the audio data together gives a better effect than playing the audio data through a single sound-emitting unit alone, and the user experience is better.
• when the value of the slide bar is the preset value 1 and the first partial side sound unit of the electronic device plays the first audio data, the distance between the simulated sound source and the user is the preset distance 1; when the value of the slide bar is the preset value 2 and the first partial side sound unit of the electronic device plays the first audio data, the distance between the simulated sound source and the user is the preset distance 2, where the preset value 1 is less than the preset value 2 and the preset distance 1 is less than the preset distance 2.
• in the loudness enhancement mode, when the value of the slide bar is 50%, the loudness of the first audio data is 5 dB; when the value of the slide bar is adjusted to less than 50%, for example to 20%, the loudness of the first audio data is adjusted to 2 dB.
• in this case, the influence factor can be regarded as the loudness factor; when the value of the slide bar is 20%, the value of the loudness factor is 0.4.
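• One possible slider-to-factor mapping consistent with the example values above (50% giving 5 dB, 20% giving 2 dB and a loudness factor of 0.4) is sketched below; the linear relation factor = slider / 50 is an assumption, since the embodiment only states that the factor grows and shrinks with the slide bar value:

```python
BASELINE_LOUDNESS_DB = 5.0  # loudness applied when the slide bar is at 50%

def influence_factor(slider_percent):
    """Map a slide bar value (10..100, initial 50) to an influence factor."""
    return slider_percent / 50.0

def loudness_db(slider_percent):
    return influence_factor(slider_percent) * BASELINE_LOUDNESS_DB

# influence_factor(20) == 0.4 and loudness_db(20) == 2.0, matching the
# figures quoted in the embodiment above.
```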
  • the electronic device may display the sound effect mode selection window 631 shown in FIG. 6E.
  • the sound effect mode selection window 631 includes one or more sound effect mode options.
  • the one or more sound effect mode options may include, but are not limited to, sound effect mode option 632, sound effect mode option 633, sound effect mode option 634, and sound effect mode option 635.
  • the sound effect mode option 632 can be used to trigger the electronic device to set the sound effect mode to the loudness enhancement mode.
  • the sound mode option 633 can be used to trigger the electronic device to set the sound mode to surround enhanced mode.
  • the sound effect mode option 634 may be used to trigger the electronic device to set the sound effect mode to a dialogue enhancement mode.
  • Sound mode option 635 may be used to trigger the electronic device to set the sound mode to a rhythm enhancement mode.
  • the one or more sound effect mode options may include the name of the corresponding sound effect mode.
  • Each of the one or more sound effect mode options includes a corresponding slider bar.
  • the sound effect mode option 632 includes a slider bar 641
  • the slider bar 641 includes a minimum value, a maximum value, and a slider 642 .
  • the lowest value may be used to represent the lowest value of the slide bar 641 in the loudness enhancement mode, which is 1 here, that is, 10%.
• the highest value can be used to represent the highest value of the slide bar 641 of the loudness enhancement mode; here it is 10, that is, 100%.
• Slider 642 can be used to change the value of the slide bar 641.
• the vicinity of the slider 642 may also display the value of the slide bar 641 corresponding to the current position of the slider 642; here it is 5, that is, 50%.
  • the slide bars of each sound effect mode please refer to the above embodiments and will not be described again here.
  • the electronic device can reduce the value of the slide bar 641 after receiving the input of dragging the slider 642 to the left. For example, the value of the slide bar 641 is reduced to 20%.
• the electronic device can process the sound source data based on the value of the slide bar to obtain the first audio data.
  • the electronic device can play the first audio data.
• the effect of the surround sound effect in a scene where the value of the slide bar is 50% is stronger than the effect of the surround sound effect in a scene where the value of the slide bar is 20%.
• for example, when the value of the slide bar is 20%, the distance between the virtual sound source simulated by the audio data played by the first partial side sound unit and the user can be 20 centimeters, so that the user feels that the width of the sound source extends 0.2 m to the left and right of the central axis of the electronic device.
• when the value of the slide bar is 50%, the distance between the virtual sound source simulated by the audio data played by the first partial side sound unit and the user can be 50 cm, so that the user feels that the width of the sound source extends 0.5 m to the left and right of the central axis of the electronic device. The farther the virtual sound source is from the user, the more obvious the surround effect. It can be understood that the above correspondence between the value of the slide bar and the distance is only an example; in specific implementations, the distance can be other values, which is not limited in the embodiments of the present application.
  • the first side sound unit of the electronic device is deployed in the B shell.
• when the angle between the B shell and the C shell is in different angle ranges, different listening experiences can be brought to the user.
  • the sound wave direction of the first side sound-emitting unit is directed from the A case to the B case.
• when the angle between the B shell and the C shell of the electronic device is in the angle range 84 shown in FIG. 8D, the A shell and the D shell overlap, and the electronic device can play pictures and sounds through the B shell. In this way, it is convenient for users to hold the electronic device to play audio and video.
  • the electronic device can be regarded as a handheld theater, with the sound and picture more synchronized and the sense of immersion stronger.
  • the electronic device can also be placed on the electronic device stand to facilitate the user to adjust the position of the electronic device.
  • the maximum value of the angle range 81 is less than or equal to the minimum value of the angle range 83 .
  • the minimum value of the angle range 81 is greater than or equal to the maximum value of the angle range 82
  • the minimum value of the angle range 84 is greater than or equal to the maximum value of the angle range 83 .
  • the angle range 82 is 0 degrees to 45 degrees
  • the angle range 81 is 45 degrees to 135 degrees
  • the angle range 83 is 135 degrees to 180 degrees
  • the angle range 84 is 180 degrees to 360 degrees.
• for the description of the sound-generating unit located in the B shell, refer to the description of the first partial side sound-generating unit shown in Figure 3A above, and for the description of the electronic device controlling the sound-generating unit of the B shell to play audio data, refer to the embodiments shown in Figures 4 and 5; details are not described again here.
  • the first side sound unit of the electronic device is deployed in the B shell.
  • the electronic device can detect the angle between the B shell and the C shell of the electronic device, and adjust the loudness of the audio data of the first side sound unit and/or the second side sound unit. In this way, when the angles of the B shell and the C shell are different, the loudness of the audio data of the sound unit can be adjusted to provide the user with a better playback effect.
• the electronic device can directly use each sound-emitting unit to play its respective audio data, wherein the volume of the audio data played by the first partial side sound unit of the electronic device is the specified volume value 80.
• in this case, the sharpness of the sound increases, and the electronic device can reduce the sharpness of the sound by attenuating the high-frequency components of the audio data.
• the electronic device can adjust the loudness of the audio data of each sound-emitting unit based on the angle between the B shell and the C shell, for example, reduce the volume of the first partial side sound unit, or reduce the volume of the first partial side sound unit and increase the volume of the second partial side sound unit.
  • the volume of the audio data played by the first side sound unit of the electronic device is less than the designated volume value 80. In this way, the reverberation effect caused by the local cavity formed between the screen part and the keyboard part reflecting back and forth the sound waves emitted by the first side sound unit can be reduced, making the sound clearer.
• the electronic device can adjust the loudness of the audio data of each sound-emitting unit based on the angle between the B shell and the C shell, for example, increase the volume of the first partial side sound unit, or increase the volume of the first partial side sound unit and decrease the volume of the second partial side sound unit.
  • the volume of the audio data played by the first side sound unit of the electronic device is greater than the designated volume value 80. In this way, since the distance between the first side sounding unit and the user increases, the electronic device can increase the volume of the first side sounding unit so that the user can hear the sound of the first side sounding unit clearly.
• the electronic device can adjust the loudness of the audio data of each sound-emitting unit based on the angle between the B shell and the C shell, for example, increase the volume of the second partial side sound unit, or decrease the volume of the first partial side sound unit and increase the volume of the second partial side sound unit.
  • the volume of the audio data played by the first side sound unit of the electronic device is less than or equal to the designated volume value 80.
• the volume of the second partial side sound unit can be increased, so that the user can hear the sound of the second partial side sound unit more clearly.
  • the minimum value of the angle range 81 is greater than or equal to the maximum value of the angle range 82 .
  • the maximum value of angular range 81 is less than or equal to the minimum value of angular range 83 .
  • the maximum value of angular range 83 is less than or equal to the minimum value of angular range 84 .
  • the angle range 82 is 0 degrees to 45 degrees
  • the angle range 81 is 45 degrees to 135 degrees
  • the angle range 83 is 135 degrees to 180 degrees
  • the angle range 84 is 180 degrees to 360 degrees.
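• The angle-dependent adjustment above can be sketched as follows, using the angle ranges of this embodiment (82: 0-45, 81: 45-135, 83: 135-180, 84: 180-360 degrees) and the reference volume value 80; the fixed adjustment step of 10 is an illustrative assumption, since the text only says the volumes are increased or decreased:

```python
REFERENCE_VOLUME = 80
STEP = 10  # hypothetical adjustment step

def adjust_volumes(angle_deg):
    """Return volumes for the first and second partial side sound units."""
    first, second = REFERENCE_VOLUME, REFERENCE_VOLUME
    if 0 <= angle_deg < 45:        # angle range 82: damp cavity reverberation
        first -= STEP
    elif 45 <= angle_deg < 135:    # angle range 81: play at the reference volume
        pass
    elif 135 <= angle_deg < 180:   # angle range 83: first unit is farther away
        first += STEP
    else:                          # angle range 84: favor the second unit
        second += STEP
    return {"first_partial_side": first, "second_partial_side": second}
```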
• when the first partial side sound unit of the electronic device is deployed on the B shell, the electronic device may only provide the surround enhancement mode, or may set the initial sound effect mode of the electronic device to the surround enhancement mode.
  • the initial sound effect mode is the sound effect mode of the electronic device after the electronic device is turned on. In this way, the electronic device can use the second part of the side sounding unit and the first part of the side sounding unit to play the specified audio at the same time, trying to keep the audio image and the picture at the same height, while improving the surround effect.
• since the vibration amplitude of the first partial side sound unit is relatively large, in order to avoid affecting the display effect of the display screen, the electronic device can set the low-frequency enhancement mode to an unselectable state during video playback. When the low-frequency enhancement mode is set to an unselectable state, the electronic device does not display the low-frequency enhancement mode option corresponding to the low-frequency enhancement mode, or the electronic device displays the low-frequency enhancement mode option in a state in which it cannot be selected.
• when the first partial side sound unit of the electronic device is deployed in the B shell, the electronic device may simply not display the low-frequency enhancement mode option corresponding to the low-frequency enhancement mode.
  • the first side sound unit of the electronic device is deployed in the A shell.
• when the angle between the B shell and the C shell is in different angle ranges, different listening experiences can be brought to the user.
  • the sound wave direction of the first side sound-emitting unit is directed from the B case to the A case.
  • the A shell and the D shell overlap. In this way, it is convenient for the user to hold the electronic device.
  • the A shell vibrates and produces a stronger low-frequency vibration.
  • the maximum value of the angle range 91 is less than or equal to the minimum value of the angle range 93 .
  • the minimum value of the angle range 91 is greater than or equal to the maximum value of the angle range 92
  • the minimum value of the angle range 94 is greater than or equal to the maximum value of the angle range 93 .
  • the angle range 92 is 0 degrees to 45 degrees
  • the angle range 91 is 45 degrees to 135 degrees
  • the angle range 93 is 135 degrees to 190 degrees
  • the angle range 94 is 190 degrees to 360 degrees.
  • the first side sound unit of the electronic device is deployed in the A shell.
  • the electronic device can detect the angle between the B shell and the C shell of the electronic device, and adjust the loudness of the audio data of the first side sound unit and the second side sound unit.
  • the sound wave direction of the first side sound-emitting unit is directed from the B case to the A case.
• the electronic device can directly use each sound-emitting unit to play its respective audio data, wherein the volume of the audio data played by the first partial side sound unit of the electronic device is the specified volume value 90.
• the electronic device can adjust the loudness of the audio data of each sound-emitting unit based on the angle between the B shell and the C shell, for example, increase the volume of the first partial side sound unit, or increase the volume of the first partial side sound unit and decrease the volume of the second partial side sound unit.
  • the volume of the audio data played by the first side sound unit of the electronic device is greater than the designated volume value 90. In this way, since the sound wave direction of the first side sound unit is away from the user, increasing the volume of the first side sound unit can enhance the clarity of the audio that the user hears.
• the electronic device can adjust the loudness of the audio data of each sound-emitting unit based on the angle between the B shell and the C shell, for example, reduce the volume of the first partial side sound unit, or reduce the volume of the first partial side sound unit and increase the volume of the second partial side sound unit.
• the volume of the audio data played by the first partial side sound unit of the electronic device is less than the designated volume value 90. In this way, the desktop reflection between the screen part and the desktop, which enhances high-frequency energy, can be reduced, and the sharpness of the sound can be weakened.
• alternatively, the electronic device can increase the volume of the first partial side sound unit, or increase the volume of the first partial side sound unit and decrease the volume of the second partial side sound unit.
  • the volume of the audio data played by the first side sound unit of the electronic device is greater than the designated volume value 90.
• in this way, the desktop reflection between the screen part and the desktop can be enhanced, the sense of reverberation can be increased, and the volume of the first partial side sound unit that is farther away from the user can be increased, so that the user can hear the sound of the first partial side sound unit and the second partial side sound unit at the same time.
• the electronic device can adjust the loudness of the audio data of each sound-emitting unit based on the angle between the B shell and the C shell, for example, reduce the volume of the first partial side sound unit, or reduce the volume of the first partial side sound unit and increase the volume of the second partial side sound unit.
  • the volume of the audio data played by the first side sound unit of the electronic device is less than the designated volume value 90. In this way, the reverberation effect caused by sound reflection between the A shell and the D shell can be reduced and the clarity of the sound can be enhanced.
  • the electronic device may not play the low-frequency channel data through the first part of the side sound unit.
  • the maximum value of the angle range 91 is less than or equal to the minimum value of the angle range 93 .
  • the minimum value of the angle range 91 is greater than or equal to the maximum value of the angle range 92
  • the minimum value of the angle range 94 is greater than or equal to the maximum value of the angle range 93 .
  • the angle range 92 is 0 degrees to 45 degrees
  • the angle range 91 is 45 degrees to 135 degrees
  • the angle range 93 is 135 degrees to 190 degrees
  • the angle range 94 is 190 degrees to 360 degrees.
• when the first partial side sound unit of the electronic device is deployed in the A shell, the electronic device may only provide the low-frequency enhancement mode, or may set the initial sound effect mode of the electronic device to the low-frequency enhancement mode.
  • the initial sound effect mode is the sound effect mode of the electronic device after the electronic device is turned on.
  • the electronic device can use the first side sound unit to play low-frequency signals and enhance the low-frequency atmosphere.
  • the first part of the side sound unit is responsible for playing low-frequency signals, which can avoid the problem of keyboard noise caused by the second part of the side sound unit playing low-frequency signals.
  • the electronic device includes a plurality of first partial side sound units, some of the first partial side sound units are deployed on the A shell of the electronic device, and the other first partial side sound units are deployed on the B shell of the electronic device.
  • the electronic device can adjust the loudness of the first part of the side sounding unit located in the A case and the loudness of the first part of the side sounding unit of the B case based on the angle between the B case and the C case of the electronic device.
• for the description of adjusting the loudness of the first partial side sound units of the A shell and the B shell, reference may be made to the embodiments shown in FIG. 7 to FIG. 10D, which will not be repeated here.
• when the sound effect mode of the electronic device is the surround enhancement mode, the electronic device can use the first partial side sound unit located in the B shell to play the data of the left surround channel and the right surround channel, and use the first partial side sound unit located in the A shell to play the data of the other channels, or control the first partial side sound unit located in the A shell not to play audio data.
• when the sound effect mode of the electronic device is the low-frequency enhancement mode, the electronic device can use the first partial side sound unit located in the A shell to play the data of the low-frequency channel, and use the first partial side sound unit located in the B shell to play the data of the other channels, or control the first partial side sound unit located in the B shell not to play audio data.
  • the electronic device can adjust the sound channel played by each sound-generating unit based on the angle between the B shell and the C shell of the electronic device.
  • the description of when the first part of the side sound unit of the electronic device is located in the A case or the B case can refer to the embodiment shown in FIG. 7 to FIG. 10D, which will not be described again here.
• when the electronic device is a folding screen device whose folding mode is left-and-right folding, and the electronic device is in the folded state, as shown in Figure 11A, the electronic device includes a sound-emitting unit 1001, a sound-emitting unit 1002, a sound-emitting unit 1003 and a sound-emitting unit 1004. Among them, the sound-emitting unit 1003 and the sound-emitting unit 1004 are located in the second part, and the second part is blocked by the first part, so the sound-emitting unit 1003 and the sound-emitting unit 1004 are not shown in FIG. 11A.
  • the sound generating unit 1001 and the sound generating unit 1002 are located in the first part of the electronic device.
  • the sound unit 1002 and the sound unit 1004 are located at the top of the electronic device
  • the sound unit 1001 and the sound unit 1003 are located at the bottom of the electronic device
  • the sound unit located at the top of the electronic device can be used to play sky sounds
• the sound unit located at the bottom of the electronic device can be used to play ground sounds.
  • all sound units of the electronic device can play the data of the left channel and the right channel, and/or the data of the left surround channel and the right surround channel, and/or the data of the center channel, and/or, low-frequency channel data.
  • the sound unit located at the top of the electronic device can be used to play the sound of an object in the video picture that is closer to the sound unit at the top, and the sound unit located at the bottom of the electronic device can be used to play the video picture. Sounds from objects closer to the bottom sound unit.
  • the sound unit located on the left side of the display screen of the electronic device can be used to play the data of the left channel
  • the sound-generating unit located on the right side of the display screen of the electronic device can be used to play the data of the right channel
  • the sound-generating unit located on the left side of the display screen of the electronic device can be used to play the data of the left surround channel
  • the sound unit on the right side of the display can be used to play data from the right surround channel.
• the electronic device can play the sound of objects in the video picture that are farther away from the display screen through the second partial side sound unit, and play the sound of objects in the video picture that are closer to the display screen through the first partial side sound unit.
  • the description of how each sound-emitting unit of the electronic device in the semi-folded state plays audio may refer to the embodiment shown in FIG. 4 and FIG. 5 , which will not be described again here.
• the electronic device can play the data of the left channel through the sound-emitting units located on the left side, such as the sound-emitting unit 1001 and the sound-emitting unit 1002 shown in FIG. 11C, and play the data of the right channel through the sound-emitting units located on the right side, such as the sound-emitting unit 1003 and the sound-emitting unit 1004 shown in FIG. 11C; and/or, play the data of the left surround channel through the sound-emitting units on the left side and the data of the right surround channel through the sound-emitting units on the right side.
• the sound-emitting units of the electronic device can also play the data of the low-frequency channel or the center channel. It should be noted that when the orientation of the electronic device changes, for example, when the central axis of the electronic device rotates from the up-and-down direction to the left-right direction, the audio data played by each sound-emitting unit may differ from that shown in FIG. 11C.
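• A simplified sketch of recomputing the left/right channel assignment from each unit's position when the device is rotated; the unit names, coordinates and orientation labels are hypothetical and only loosely follow FIG. 11C:

```python
def assign_stereo_channels(units, orientation):
    """units maps a unit name to its (x, y) position on the unfolded device.
    orientation is 'portrait' (central axis up-down) or 'landscape'
    (central axis left-right)."""
    assignment = {}
    for name, (x, y) in units.items():
        # In portrait the left/right split follows x; after rotating the
        # device to landscape it follows y instead.
        on_left = x < 0 if orientation == "portrait" else y < 0
        assignment[name] = "left" if on_left else "right"
    return assignment

units = {"1001": (-1, -1), "1002": (-1, 1), "1003": (1, -1), "1004": (1, 1)}
print(assign_stereo_channels(units, "portrait"))   # 1001/1002 -> left
print(assign_stereo_channels(units, "landscape"))  # 1001/1003 -> left
```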
  • the folding screen device shown in FIGS. 11A to 11C includes a folding screen that is folded outward.
  • the A shell or D shell of the electronic device also includes a display.
  • the description of the audio data played by each sound-emitting unit when the electronic device is in the unfolded state or the half-folded state can refer to the embodiment shown in Figures 11A-11C, and will not be described again here.
  • the B shell or C shell in Figure 11A can be regarded as the A shell or D shell of the electronic device, and then refer to the description of each sound unit playing audio data in Figure 11A.
• when the electronic device is a folding screen device whose folding mode is up-and-down folding, and the electronic device is in the unfolded state, as shown in FIG. 12A, the electronic device includes a sound-emitting unit 1051 and a sound-emitting unit 1052.
  • the sound-generating unit 1051 is located in the first part
  • the sound-generating unit 1052 is located in the second part of the electronic device.
  • the sound unit 1051 is located at the top of the electronic device, and the sound unit 1052 is located at the bottom of the electronic device.
  • the sound unit located at the top of the electronic device can be used to play sky sounds, and the sound unit located at the bottom of the electronic device can be used to play ground sounds.
  • all sound units of the electronic device can play the data of the left channel and the right channel, and/or the data of the left surround channel and the right surround channel, and/or the data of the center channel, and/or, low-frequency channel data.
  • the sound unit located at the top of the electronic device can be used to play the sound of an object in the video picture that is closer to the sound unit at the top, and the sound unit located at the bottom of the electronic device can be used to play the video picture. Sounds from objects closer to the bottom sound unit.
  • the sound unit located on the left side of the display screen of the electronic device can be used to play the data of the left channel
  • the sound-generating unit located on the right side of the display screen of the electronic device can be used to play the data of the right channel
  • the sound-generating unit located on the left side of the display screen of the electronic device can be used to play the data of the left surround channel
  • the sound unit on the right side of the display can be used to play data from the right surround channel.
• the electronic device can play the sound of objects in the video picture that are farther away from the display screen through the second partial side sound unit, and play the sound of objects in the video picture that are closer to the display screen through the first partial side sound unit.
  • the description of how each sound-emitting unit of the electronic device in the semi-folded state plays audio may refer to the embodiment shown in FIG. 4 and FIG. 5 , which will not be described again here.
  • the electronic device can play audio source data through the sound unit. In this way, folding screen devices can achieve different playback effects when playing audio in different unfolded states.
  • the first part of the side sound unit of the electronic device is deployed at the connection between the A case and the B case (which can be understood as being deployed on the side of the screen part).
• the electronic device can enhance the surround feeling in different directions through these first partial side sound units.
• the electronic device includes one or more first partial side sound units, and the one or more first partial side sound units include a first partial side sound unit 1101, a first partial side sound unit 1102, and a first partial side sound unit 1103.
  • the first partial side sound unit 1101 is located on the left side of the screen portion, and the sound beam direction of the first partial side sound unit 1101 is directed from the center of the display screen to the left side of the display screen.
  • the first partial side sound emitting unit 1102 is located on the right side of the screen portion, and the sound wave direction of the first partial side sound emitting unit 1102 is directed from the center of the display screen to the right side of the display screen.
  • the first partial side sound unit 1103 is located on the upper side of the screen portion, and the sound beam direction of the first partial side sound unit 1103 is directed from the center of the display screen to the upper side of the display screen.
• the first partial side sound unit 1103 may be used to play sky sounds.
  • the electronic device when playing a video, can identify a specified object in the video picture, and identify the audio data of the specified object in the audio source data of the video.
  • the electronic device may play the audio data of the specified object using one or more sound units closest to the specified object based on the position of the specified object on the display screen. In this way, the electronic device can play the audio data of the specified object according to the movement trajectory of the specified object in the video screen, achieving the playback effect of the sound changing with the specified object, giving the user an audio-visual experience of the sound emitted by the specified object, and enhancing video immersion. feel.
  • the electronic device can identify all the sounds of the specified object in the sound source data through the sound characteristics of the specified object.
  • the electronic device can also identify the specified object in the video screen in real time, as well as the audio data of the specified object.
  • the designated object is a vehicle, such as an airplane, a train, a car, a ship, etc.
  • the electronic device can sequentially play the audio data of the specified object through the first partial side sound emitting unit 1101, the first partial side sound emitting unit 1103, and the first partial side sound emitting unit 1102.
  • the electronic device can play the sound of the plane through the first side sound unit 1101 when the plane is located in the lower left corner of the video screen.
  • the sound of the aircraft is played through the first side sound unit 1103.
  • the sound of the aircraft is played together through the first partial side sound unit 1102 and the first partial side sound unit 1103 . In this way, the process of the aircraft taking off can be reflected from the sound aspect.
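• The "sound follows the object" behavior can be sketched as follows: at each video frame the object's normalized on-screen position is mapped to the nearest sound-emitting unit, which then plays the object's audio for that frame. The unit positions below (1101 on the left, 1103 at the top, 1102 on the right) mirror the airplane example, but the coordinates, the use of plain Euclidean distance and the trajectory values are assumptions:

```python
import math

UNIT_POSITIONS = {
    "1101": (0.0, 0.5),  # left edge of the screen
    "1103": (0.5, 1.0),  # top edge of the screen
    "1102": (1.0, 0.5),  # right edge of the screen
}

def nearest_unit(obj_pos):
    """obj_pos is the object's normalized (x, y) position in the video frame."""
    return min(UNIT_POSITIONS,
               key=lambda u: math.dist(UNIT_POSITIONS[u], obj_pos))

# An airplane taking off from the lower-left corner towards the upper right:
trajectory = [(0.1, 0.1), (0.3, 0.6), (0.6, 0.9), (0.9, 0.6)]
print([nearest_unit(p) for p in trajectory])  # ['1101', '1101', '1103', '1102']
```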
  • the designated object can also be a certain character, an object, etc. in the video, which is not limited in the embodiments of the present application.
  • the first side sound unit and the second side sound unit of the electronic device can jointly implement the playback operation of the audio data of the specified object.
  • the electronic device can use the second part of the side sound unit to play the audio data of the specified object.
  • the electronic device can use the first side sound unit to play the audio data of the designated object.
  • the electronic device can use the first part of the side sound unit to play the audio data of the specified object.
  • the electronic device can use the second side sound unit to play the audio data of the designated object.
  • the movement of the specified object from far to near on the video screen can be reflected as the volume of the specified object gradually becoming larger.
• by making use of the relative positions of the first partial side sound unit and the second partial side sound unit, the distance relationship of the specified object can be well reflected.
• when playing the audio data of the specified object, the playing of the audio data of the sound-emitting unit obtained by the electronic device based on the first mode and the sound source data can be paused, or the sound-emitting unit can play the audio data of the specified object and the audio data of the sound-emitting unit obtained by the electronic device based on the first mode and the sound source data at the same time.
  • the electronic device includes one or more sound-emitting units, and the one or more sound-emitting units include one or more first partial side sound-emitting units.
  • a part of the one or more first partial side sound units is located in the A shell, a part of the side sound units is located in the B shell, and a part of the first partial side sound units is located at the connection between the A shell and the B shell.
  • the first part of the side sounding unit located in the A shell can be used to play the data of the low-frequency channel
  • the first part of the side sounding unit located in the B shell can be used to play the data of the left surround channel and the right surround channel.
  • the first part of the side sounding unit located at the connection between the A shell and the B shell can be divided into a top sounding unit and a bottom sounding unit.
  • the position of the top sound-emitting unit in the first part is higher than the position of the bottom sound-emitting unit in the first part.
  • the top sound-emitting unit can be used to play the audio data of the specified sky object, and the bottom sound-emitting unit can be used to play the audio data of the specified ground object.
• the electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, a mixed reality (MR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device and/or a smart city device.
  • the electronic device may be a desktop computer, a laptop computer, a handheld computer, a notebook computer, or a tablet computer connected to a keyboard.
  • the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, and a battery 142.
  • the sensor module 180 may include, but is not limited to, a pressure sensor 180A, a fingerprint sensor 180B, a temperature sensor 180C, a touch sensor 180D, an ambient light sensor 180E, etc.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device.
  • the electronic device may include more or less components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
• the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 110 . If the processor 110 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 can be coupled with the audio module 170 through the I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device.
  • the processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, display screen 194, wireless communication module 160, audio module 170, sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specifications. Specifically, it can be a Mini USB interface, a Micro USB interface, or a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device, and can also be used to transmit data between the electronic device and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationships between the modules illustrated in this embodiment are only schematic illustrations and do not constitute structural limitations on the electronic equipment.
  • the electronic device may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. While the charging management module 140 charges the battery 142, it can also provide power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device can be implemented through the antenna, wireless communication module 160, modem processor, baseband processor, etc.
  • Antennas are used to transmit and receive electromagnetic wave signals.
  • Each antenna in an electronic device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves through the antenna, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna for radiation.
  • the electronic device implements display functions through the GPU, display screen 194, and application processor.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise and brightness. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic devices may support one or more video codecs. In this way, electronic devices can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of electronic devices can be realized through NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external non-volatile memory to expand the storage capacity of the electronic device.
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement the data storage function. For example, save music, video and other files in external non-volatile memory.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area can store data created during the use of electronic equipment (such as audio data, phone books, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the electronic device can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • Speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device can listen to music through speaker 170A, or listen to hands-free calls.
  • the speaker 170A can play audio data.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the electronic device answers a call or plays a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
  • Microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C.
  • the electronic device may be provided with at least one microphone 170C. In other embodiments, the electronic device may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device can also be equipped with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • pressure sensor 180A may be disposed on display screen 194 .
  • the fingerprint sensor 180B is used to collect fingerprints.
  • Temperature sensor 180C is used to detect temperature.
  • Touch sensor 180D is also called a "touch device".
  • the touch sensor 180D can be disposed on the display screen 194.
  • the touch sensor 180D and the display screen 194 form a touch screen, which is also called a "touch screen".
  • the touch sensor 180D is used to detect a touch operation acting on or near the touch sensor 180D.
  • the ambient light sensor 180E is used to sense ambient light brightness.
  • buttons 190 include a power button, a keyboard, a touch pad, etc.
  • Key 190 may be a mechanical key. It can also be a touch button.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

This application discloses an audio playback method and a related apparatus. An electronic device includes one or more sound-emitting units, and the one or more sound-emitting units include one or more second-part-side sound-emitting units and one or more first-part-side sound-emitting units. When the sound effect mode of the electronic device is a first mode, the electronic device may receive an input for playing specified audio; in response to the input and based on the first mode, the electronic device processes the audio data of the specified audio (also referred to as sound source data) to obtain the audio data corresponding to the multiple sound-emitting units of the electronic device. The multiple sound-emitting units of the electronic device play their corresponding audio data simultaneously. In this way, the electronic device can play the first audio cooperatively through the first-part-side sound-emitting units and the second-part-side sound-emitting units, which improves the surround and immersion of the sound field of the electronic device and enhances the audio playback effect.

Description

Audio playback method and related apparatus
This application claims priority to the Chinese patent application No. 202210966704.9, filed with the Chinese Patent Office on August 12, 2022 and entitled "A notebook joint sound-emitting method", and to the Chinese patent application No. 202211415563.8, filed with the Chinese Patent Office on November 11, 2022 and entitled "Audio playback method and related apparatus", both of which are incorporated herein by reference in their entireties.
Technical Field
This application relates to the field of terminals, and in particular, to an audio playback method and a related apparatus.
Background
At present, the display screen of some electronic devices is relatively far away from their speakers. As a result, when such an electronic device plays a video, the displayed picture does not match the position the sound comes from, and the user's viewing experience is poor.
Summary
本申请提供了一种音频播放方法及相关装置,实现了电子设备通过第一部分的发声单元以及第二部分的发声单元协同发声,该用户提供了多种音频播放效果,使得用户可以感受到不同的音频播放体验。
第一方面,本申请提供了一种音频播放方法,应用于电子设备,电子设备包括第一部分与第二部分,第一部分与第二部分围绕电子设备的中心轴旋转或展开;第一部分包括一个或多个第一发声单元,第二部分包括一个或多个第二发声单元;方法包括:
电子设备的音效模式为第一模式,电子设备接收到播放第一音频的第一输入;
电子设备响应于第一输入,控制一个或多个第一发声单元播放第一音频数据,控制一个或多个第二发声单元播放第二音频数据,第一音频数据与第二音频数据均至少包括第一音频的音源数据的至少部分内容;
电子设备接收到第二输入;
电子设备响应于第二输入,将电子设备的音效模式从第一模式切换为第二模式;
电子设备接收到播放第一音频的第三输入;
电子设备响应于第三输入,控制一个或多个第一发声单元播放第三音频数据,控制一个或多个第二发声单元播放第四音频数据,第三音频数据与第四音频数据均至少包括第一音频的音源数据的至少部分内容,第一音频数据与第三音频数据不同。
这样,电子设备可以在处于不同的音效模式时,针对相同的音源数据进行不同的处理,得到不同的第一音频数据和第三音频数据,通过第一发声单元播放不同的音频数据,实现不同的播放效果。并且,由于电子设备的第一部分的发声单元与第二部分的发声单元联合发声,使得电子设备可以在通过第一部分显示视频画面时,提高声音和画面的一致性,让用户感知到声音的音源与画面处于同一位置,提升用户沉浸感。
在一种可能的实现方式中,第一音频数据包括的声道与第三音频数据包括的声道部分/全部不同。这样,不同声道的数据可以实现不同的播放效果。
在一种可能的实现方式中,方法还包括:当第一模式为低频增强模式、对白增强模式和环绕增强模式中的任一种时,第一音频数据包括的至少部分声道与第二音频数据包括的至少部分声道不相同;和/或,
当第一模式为响度增强模式时,第一音频数据的至少部分声道与第二音频数据的至少部分声道相同。
在一种可能的实现方式中,电子设备包括一个或多个音效模式,一个或多个音效模式包括第一模式与第二模式;电子设备接收到第二输入之前,方法还包括:
电子设备显示一个或多个音效模式选项,一个或多个音效模式选项与一个或多个音效模式一一对应,一个或多个音效模式选项包括第一模式选项与第二模式选项,第一模式选项与第一模式对应,第二模式选项与第二模式对应,第一模式选项被标记;其中,第二输入为针对第二模式选项的输入;
响应于第二输入,将电子设备的音效模式设置为第二模式,具体包括:
电子设备响应于第二输入,将电子设备的音效模式从第一模式切换为第二模式,并且取消标记第一模式选项,标记第二模式选项。
这样,电子设备可以给用户提供不同的音效模式选项,使得用户可以调整电子设备的音效模式,使得电子设备以指定的音效模式播放音频。
在一种可能的实现方式中,第一模式选项为低频增强模式选项,第一模式为低频增强模式,第一音频数据包括低频声道的数据,第二音频数据包括左声道的数据与右声道的数据、左环绕声道的数据与右环绕声道的数据和/或中置声道的数据。这样,在低频模式下,电子设备可以使用第一发声单元播放低频声道的数据,增强音频播放时的低频震感效果。
在一种可能的实现方式中,第一模式选项为对白增强模式选项,第一模式为对白增强模式,第一音频数据包括中置声道的数据,第二音频数据包括,第二音频数据包括左声道的数据与右声道的数据、左环绕声道的数据与右环绕声道的数据和/或低频声道的数据。这样,电子设备在播放音频时,突显人声,在视频播放时,突出人物台词。
在一种可能的实现方式中,第一模式选项为响度增强模式选项,第一模式为响度增强模式,第一音频数据包括左声道的数据和右声道的数据,第二音频数据包括左声道的数据和右声道的数据。这样,可以提升电子设备播放音频数据的清晰度。
在一种可能的实现方式中,当电子设备包括1个第一发声单元,电子设备使用1个第一发声单元播放第一音频数据的左声道的数据和右声道的数据;或,
当电子设备包括2个第一发声单元,电子设备使用1个第一发声单元播放左声道的数据,使用另1个第一发声单元播放右声道的数据;或,
若电子设备包括3个及以上第一发声单元,电子设备使用至少一个第一发声单元播放左声道的数据,使用至少一个第一发声单元播放右声道的数据,使用至少一个第一发声单元播放第二音频数据的左声道的数据和右声道的数据。这样,可以基于第一发声单元的数量,合理使用第一发声单元进行第一音频数据的声道数据的播放,充分利用电子设备提供的发声单元资源。
在一种可能的实现方式中,第一模式选项为环绕增强模式,第一模式为环绕增强模式,第一音频数据包括左环绕声道的数据和右环绕声道的数据,第二音频数据包括左声道的数据与右声道的数据、中置声道的数据和/或低频声道的数据。这样,可以增加音频播放的环绕感。
在一种可能的实现方式中,电子设备处理音源数据时,加强音源数据的细小音频数据的响度。这样,可以突出细小声音,增强细节感。使得用户在玩游戏可以听到细小声音,观察游戏人物动向,提升游戏体验。用户可以在看电影时,听到明显的背景音,例如,风声、虫鸣等,更加有画面感。
在一种可能的实现方式中,电子设备显示一个或多个音效模式选项时,方法还包括:
电子设备显示滑动条;
若第一模式选项为响度增强模式选项、低频增强模式选项或对白增强模式选项,滑动条的值为第一值时,电子设备播放第一音频数据的音量为第三音量,滑动条的值为第二值时,电子设备播放第一音频数据的音量为第四音量,第一值小于第二值,且第三音量低于第四音量;
若第一模式选项为环绕模式选项,滑动条的值为第三值,电子设备播放第一音频数据时模拟音源与用户的距离为第三距离,滑动条的值为第四值,电子设备播放第一音频数据时模拟音源与用户的距离为第四距离,第三值小于第四值且第三距离小于第四距离。这样,可以通过滑动条设置音效模式的效果,用户可以调整滑动条的值,选择适合的播放效果。
在一种可能的实现方式中,方法还包括:
电子设备接收到播放第一视频的第四输入;
电子设备响应于第四输入,识别第一视频的视频画面中的一个或多个对象,并且识别第一视频的音频文件中一个或多个对象的音频数据,一个或多个对象包括第一对象;
电子设备使用一个或多个第一发声单元和/或一个或多个第二发声单元中与第一对象距离最近的发声单元播放第一对象的音频数据。这样,电子设备可以使用最接近第一对象的一个或多个发声单元播放第一对象的声音,使得用户可以用听觉感受第一对象在画面中的位置,增强视频播放的沉浸感。
在一种可能的实现方式中,位于第一位置的第一发声单元用于播放天空音;和/或,
位于第二位置的第一发声单元用于播放地面音,第一位置与中心轴的距离大于第二位置与中心轴的距离;和/或,
位于第三位置的第一发声单元用于播放左声道的数据,且位于第四位置的第一发声单元用于播放右声道的数据,第三位置位于第一部分的左侧,第四位置位于第一部分的右侧;和/或,
位于第五位置的第一发声单元用于播放左环绕声道的数据,且位于第六位置的第一发声单元用于播放右环绕声道的数据,第三位置位于第一部分的左侧,第四位置位于第一部分的右侧。
这样,电子设备可以通过不同位置的发声单元播放不同方位的声道的数据,体现出音频的方位感。
在一种可能的实现方式中,电子设备为笔记本电脑,第一部分包括电子设备的显示屏;第二部分包括电子设备的键盘和/或触摸板。这样,可以使得笔记本电脑通过第一部分与第二部分的发声单元同时发声,增强音频播放效果。
在一种可能的实现方式中,电子设备的第一部分包括第一壳与第二壳,第一壳包括电子设备的显示屏;第一模式为环绕增强模式,位于第一壳的第一发声单元用于驱动第一壳播放第一音频数据;或者,
第一模式为低频增强模式,位于第二壳的第一发声单元用于驱动第二壳播放第一音频数据。
这样,电子设备可以驱动电子设备的第一壳或第二壳发声,声音范围更加广阔,发声效果更好。
在一种可能的实现方式中,电子设备为折叠屏设备,第一部分包括第一屏,第二部分包括第二屏,电子设备的折叠方式为左右折叠,电子设备包括折叠状态和展开状态;
若电子设备处于折叠状态,一个或多个第一发声单元与一个或多个第二发声单元中位于电子设备的第一侧的发声单元用于播放左声道的数据,一个或多个第一发声单元与一个或多个第二发声单元中位于电子设备的第二侧的发声单元用于播放右声道的数据;或者,一个或多个第一发声单元与一个或多个第二发声单元中位于电子设备的第一侧的发声单元用于播放天空音,一个或多个第一发声单元与一个或多个第二发声单元中位于电子设备的第二侧的发声单元用于播放地面音,第一侧与第二侧不同;
若电子设备处于展开状态,一个或多个第一发声单元用于播放左声道的数据,一个或多个第二发声单元用于播放右声道的数据,或者,一个或多个第一发声单元用于播放左环绕声道的数据,一个或多个第二发声单元用于播放右环绕声道的数据。这样,电子设备可以在处于不同折叠形态时,使用不同位置的发声单元播放不同声道的数据,结合电子设备的折叠形态,实现更丰富的播放效果。
在一种可能的实现方式中,电子设备为折叠屏设备,第一部分包括第一屏,第二部分包括第二屏,电子设备的折叠方式为上下折叠,电子设备包括折叠状态和展开状态;
若电子设备处于折叠状态,一个或多个第一发声单元与一个或多个第二发声单元播放音源数据;
若电子设备处于展开状态,一个或多个第一发声单元用于播放左声道的数据,一个或多个第二发声单元用于播放右声道的数据;或者,一个或多个第一发声单元用于播放天空音,一个或多个第二发声单元用于播放地面音。这样,电子设备可以在处于不同折叠形态时,使用不同位置的发声单元播放不同声道的数据,结合电子设备的折叠形态,实现更丰富的播放效果。
在一种可能的实现方式中,天空音包括雷声、飞行物的声音、风声中的一种或多种,地面音包括脚步声、虫鸣、雨声中的一种或多种。这样,电子设备可以增强播放天空对象的声音,以及地面对象的声音,体现上下的方位感。其中,当雨滴落在屋檐、大树时,雨滴的声音属于天空音,雨滴落在地面、湖面等时,雨滴的声音属于地面音,此处的雨声表示雨滴落在地面的声音。
在一种可能的实现方式中,音源数据包括第一模式下第一音频数据所需的声道的数据,音源数据中除了第一音频数据的声道以外的声道的数量为第一数量;方法还包括:
若第一数量小于第二发声单元的数量,电子设备对音源数据进行上混,或者,复制音源数据中除了第一音频数据以外的声道的数据,得到第五音频数据;其中,第五音频数据包括第一音频数据所需的声道的数据,并且第五音频数据中除了第一音频数据的声道以外的声道的数量与第二发声单元的数量相同,第二音频数据包括除了第一音频数据的声道以外的声道的数据;
若第一数量大于第二发声单元的数量,电子设备对音源数据进行下混,或者,叠加音源数据中部分声道的数据,得到第六音频数据;其中,第六音频数据包括第一音频数据所需的声道的数据,并且第六音频数据中除了第一音频数据的声道以外的声道的数量与第二发声单元的数量相同,第二音频数据包括除了第一音频数据的声道以外的声道的数据。
这样,电子设备基于第二发声单元的数量,处理音源数据,充分利用电子设备的发声单元资源,实现音频数据的播放。
第二方面,本申请实施例提供了一种电子设备,包括第一部分和第二部分,电子设备的第一部分包括电子设备的显示屏,电子设备的第二部分包括电子设备的键盘和/或触摸板;第一部分包括一个或多个第一发声单元,第二部分包括一个或多个第二发声单元,电子设备包括一个或多个处理器;其中,
一个或多个处理器,用于响应于播放第一音频的输入,基于第一音频的音源数据得到第一音频数据与第二音频数据,第一音频数据与第二音频数据均至少包括音源数据的至少部分内容;
一个或多个第一发声单元,用于播放第一音频数据;
一个或多个第二发声单元,用于播放第二音频数据。
这样,电子设备可以通过第一部分的第一发声单元与第二部分的第二发声单元播放第一音频,通过多个位置的发声单元同时发声,实现更好的音频播放效果,提升音频播放的沉浸感。
在一种可能的实现方式中,电子设备为笔记本电脑。这样,笔记本电脑可以通过位于显示屏附近的发声单元和位于键盘附近的发声单元播放第一音频,使得笔记本电脑的声音波束朝向用户双耳的方向,声音更清晰,并且,使得笔记本电脑播放音频数据时模拟得到的音源的位置处于笔记本电脑的显示屏附近,声音和画面高度一致,使得笔记本在播放视频时,声音与画面同步。
在一种可能的实现方式中,电子设备,被配置为用于实现上述第一方面中任一项可能的实现方式中的音频播放方法。
第三方面,本申请实施例提供了一种电子设备,包括:一个或多个处理器、一个或多个第一发声单元、一个或多个第二发声单元和一个或多个存储器;其中,一个或多个存储器、一个或多个第一发声单元、一个或多个第二发声单元分别与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器在执行计算机指令时,使得电子设备执行第一方面任一项可能的实现方式中的音频播放方法。
第四方面,本申请实施例提供了一种计算机存储介质,包括计算机指令,当计算机指令在第一电子设备上运行时,使得第一电子设备执行上述第一方面任一项可能的实现方式中的音频播放方法。
第五方面,本申请提供了一种芯片系统,芯片系统应用于第一电子设备,芯片系统包括一个或多个处理器,处理器用于调用计算机指令以使得第一电子设备执行上述第一方面任一项可能的实现方式中的音频播放方法。
第六方面,本申请提供了一种包含指令的计算机程序产品,当计算机程序在第一电子设备上运行时,使得第一电子设备执行上述第一方面任一项可能的实现方式中的音频播放方法。
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 2A is a schematic diagram of positions of sound-emitting units of an electronic device according to an embodiment of this application;
FIG. 2B is a schematic diagram of a sound field of the electronic device according to an embodiment of this application;
FIG. 2C is a schematic diagram of another sound field of the electronic device according to an embodiment of this application;
FIG. 3A is a schematic diagram of positions of sound-emitting units of another electronic device according to an embodiment of this application;
FIG. 3B is a schematic diagram of positions of first-part-side sound-emitting units of an electronic device according to an embodiment of this application;
FIG. 3C is a schematic diagram of a sound field of the electronic device according to an embodiment of this application;
FIG. 3D is a schematic diagram of another sound field of the electronic device according to an embodiment of this application;
FIG. 4 is a schematic flowchart of an audio playback method according to an embodiment of this application;
FIG. 5 is a schematic flowchart of processing sound source data by an electronic device according to an embodiment of this application;
FIG. 6A to FIG. 6E are schematic diagrams of a group of interfaces according to an embodiment of this application;
FIG. 7 is a schematic diagram of first-part-side sound-emitting units of an electronic device according to an embodiment of this application;
FIG. 8A to FIG. 8D are schematic diagrams of a group of forms of an electronic device according to an embodiment of this application;
FIG. 9 is a schematic diagram of first-part-side sound-emitting units of another electronic device according to an embodiment of this application;
FIG. 10A to FIG. 10D are schematic diagrams of another group of forms of an electronic device according to an embodiment of this application;
FIG. 11A to FIG. 11C are schematic diagrams of another group of forms of an electronic device according to an embodiment of this application;
FIG. 12A to FIG. 12C are schematic diagrams of another group of forms of an electronic device according to an embodiment of this application;
FIG. 13 is a schematic diagram of first-part-side sound-emitting units of yet another electronic device according to an embodiment of this application;
FIG. 14 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of this application.
Detailed Description
下面将结合附图对本申请实施例中的技术方案进行清楚、详尽地描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为暗示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征,在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
在本申请实施例的描述中,需要说明的是,除非另有明确的规定和限定,术语“安装”、“连接”应做广义理解,例如,“连接”可以是可拆卸地连接,也可以是不可拆卸地连接;可以是直接连接,也可以通过中间媒介间接连接。本申请实施例中所提到的方位用语,例如,“顶部”、“底部”、“上”、“下”、“左”、“右”、“内”、“外”等,仅是参考附图的方向,因此,使用的方位用语是为了更好、更清楚地说明及理解本申请实施例,而不是指示或暗指所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本申请实施例的限制。“多个”是指至少两个。
本申请以下实施例中的术语“用户界面(user interface,UI)”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在电子设备上经过解析,渲染,最终呈现为用户可以识别的内容。用户界面常用的表现形式是图形用户界面(graphic user interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在电子设备的显示屏中显示的文本、图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。
接下来介绍本申请实施例提供的一种电子设备。
在一种可能的实现方式中,电子设备为折叠设备,包括第一部分与第二部分。其中,第一部分与第二部分之间存在中心轴,该第一部分与第二部分可以围绕中心轴旋转(或展开),改变第一部分与第二部分之间的夹角的度数。当电子设备的第一部分与第二部分的夹角的度数不同时,可以以不同的形式使用该电子设备。
在一些实施例中,电子设备为折叠屏设备,电子设备的第一部分包括电子设备的第一屏,第二部分包括电子设备的第二屏,其中,第一屏和第二屏可以是同一块屏幕的不同部分。该电子设备的折叠方式可以为左右折叠,也可以为上下折叠。当电子设备的折叠方式为左右折叠时,第一部分可以位于第二部分的左侧。当电子设备的折叠方式为上下折叠时,第一部分可以位于第二部分的上侧。
其中,电子设备的折叠方式可以分为两类,一类为朝外翻折的折叠屏(简称外折折叠屏),另一类为朝内翻折的折叠屏(简称内折折叠屏)。其中,外折折叠屏被折叠后,第一屏和第二屏相背对。外折折叠屏被展开后,第一屏与第二屏组成第三屏。这样,当外折折叠屏被折叠后,用户可以单独使用第一屏或第二屏。当外折折叠屏被展开后,第一屏与第二屏组成屏幕面积更大的第三屏,可以给用户提供更大的屏幕,提升用户体验。
内折折叠屏被折叠后,第一屏和第二屏相对。内折折叠屏被展开时,第一屏与第二屏组成第三屏。这样,内折折叠屏被折叠后,便于用户收纳携带,内折折叠屏被展开后,便于用户使用。
在另一些实施例中,电子设备为笔记本电脑等类似形态的设备,如图1所示,电子设备的第一部分为图1所示的部分11,电子设备的第二部分为图1所示的部分12,电子设备的中心轴为图1所示的中心轴13。其中,电子设备的部分11与部分12可以围绕中心轴13旋转、展开,以改变部分11与部分12围绕中心轴13的角度。其中,部分11包括A壳与B壳。部分12包括C壳与D壳。
具体的,部分11可以为电子设备的屏幕部分,部分12可以为电子设备的键盘部分。其中,电子设备的屏幕部分可以包括用于显示用户界面的显示屏。电子设备的键盘部分可以包括但不限于键盘和/或触控板。用户可以通过触控板和/或键盘操纵电子设备,电子设备可以根据用户通过触控板和/或键盘输入的指令做出相应的响应。
电子设备的部分11包括A壳与B壳。其中,A壳为电子设备的外壳的一部分,B壳为电子设备的内壳的一部分,电子设备的显示屏位于B壳的中心。电子设备的键盘部分包括C壳和D壳,其中,D壳为电子设备的外壳的一部分,C壳包括电子设备的键盘和/或触控板。如图1的正面示意图所示,电子设备的B壳与C壳相邻。如图1的反面示意图所示,电子设备的A壳与D壳相邻。
电子设备可以接收用户翻动屏幕部分的输入,改变B壳与C壳之间的角度。当B壳与C壳之间的角度为0度时,屏幕部分与键盘部分重合,B壳与C壳被遮挡,便于用户收纳与携带。当B壳与C壳之间的角度值处于指定角度范围(例如,45度至145度)内时,用户可以在屏幕部分的显示屏上查看电子设备的显示内容,例如视频。也可以通过键盘部分向电子设备输入指令。在一些示例中,电子设备的B壳与C壳之间的角度值可以为360度,该电子设备的A壳与D壳重叠在一起,用户可以单独操纵电子设备的屏幕部分,例如,屏幕部分可以包括触控屏,触控屏可以用于接收用户的输入,触发屏幕部分执行输入对应的操作。
在一些示例中,不限于通过图1所示的键盘部分与屏幕部分连接在一起,屏幕部分与键盘部分可以为分离的两个部件,屏幕部分包括连接结构15,键盘部分包括连接结构16,屏幕部分的连接结构15可以与键盘部分的连接结构16相耦合,连接在一起。需要说明的是,耦合可以包括但不限于电连接、机械耦合等等。本申请实施例中的耦合与连接二词在具体表示电连接方面可以认为是等同的。电连接则是包括以导线之间相连或通过其他器件间接相连,实现电信号的互通,本实施例不对具体电连接方式做限定。
这样,当屏幕部分与键盘部分分离时,用户可以单独操纵屏幕部分,例如,屏幕部分可以包括触控屏,触控屏可以用于接收用户的输入,触发屏幕部分执行输入对应的操作。当屏幕部分与键盘部分相接时,键盘部分可以接收用户的输入,并将输入通过连接结构15与连接结构16发送给屏幕部分,屏幕部分可以执行输入对应的操作。
接下来以图1所示的电子设备为示例,进行后续实施例的讲解。
在一些实施例中,电子设备的第二部分设置有一个或多个扬声器,电子设备可以通过该一个或多个扬声器播放音频。示例性的,电子设备的一个或多个扬声器可以通过振动推动空气发出声波,声波可以从电子设备的出音口(例如,位于电子设备的第二部分侧面的出音口,位于第二部分与中心轴相邻的边缘的正出音口)或者第二部分的键盘的缝隙输出,到达用户的双耳,使得用户可以收听电子设备播放的音频。
其中,该一个或多个扬声器中的全部或部分扬声器可以设置与C壳的内表面、D壳的内表面、C壳与D壳之间的空隙,C壳与D壳的连接处(即电子设备的第二部分的侧边)等等,本申请实施例对该一个或多个扬声器的在第二部分中的位置不做限定。该设置在电子设备的第二部分的扬声器可以称为第二部分侧发声单元。其中,第二部分侧发声单元可以为动圈扬声器、动铁扬声器、压电扬声器、微机电系统(micro-electro-mechanical system,MEMS)扬声器、压电陶瓷扬声器以及磁致收缩扬声器等等。
示例性的,如图2A所示,以电子设备的第二部分的C壳表面作为XOY平面,以垂直于以XOY界面且从D壳指向C壳的方向为Z轴。其中,原点O点可以位于中心轴的左侧边缘,Y轴可以与中心轴重合且方向向右,X轴垂直于Y轴,且从O点指向C壳上远离中心轴的边缘。在此,电子设备的第二部分侧发声单元设置在第二部分的侧面,其中,电子设备的第二部分侧发声单元包括发声单元21和发声单元22,该第二部分侧发声单元21位于第二部分的左侧,该第二部分侧发声单元22位于第二部分的右侧。在图2A中,发声单元21和发声单元22以喇叭形状表示,其中喇叭形状仅仅是发声单元的示意,喇叭形状不限定发声单元的形状,也不限定发声单元的具体位置。同理,在以下附图中的喇叭不限定发声单元的形状,也不限定发声单元的具体位置。
具体的,图2A所示的电子设备在播放音频时的声场方向如图2B和图2C所示。其中,图2B示出了电子设备的YOZ平面的声压分布情况,图2C示出了电子设备的XOZ平面的声压分布情况。图2B和图2C中示出的颜色越深的位置处的声压越强,示出的颜色越浅的位置处的声压越弱。其中,由于第二部分侧发声单元的位置处于XOY平面,越靠近XOY平面,声压越强。而且,声波的能量集中在XOY平面附近,能量较强的声波的波束方向与Z轴方向相同。
由图2B与图2C示出的声压图像可以看出,电子设备在通过第二部分侧发声单元播放音频时,第二部分侧发声单元发出的声波在键盘部分附近形成音频声像,整体音频声像高度受限,无法充分还原空间立体声场。这样将导致用户听到的声音与看到的屏幕画面不在同一高度,造成音画位置存在偏差,影响音频收听体验。
并且,第二部分侧发声单元的大多数能量集中在键盘部分,当电子设备播放音频的音量较大时,第二部分侧发声单元的振动幅度较大,容易引发键盘等结构件共振,产生杂音,影响音频播放效果。
电子设备通过第二部分侧发声单元发声时,若用户在非桌面上使用电子设备播放音频,由于非桌面对声波的反射能力较弱,导致音频外放性能比用户在桌面上使用电子设备播放音频时的音频外放性能差。因此,电子设备使用第二部分侧发声单元发声对电子设备的环境要求较高,电子设备需要放置的桌面对声波进行反射,使得用户同时听到反射的音频能量,声音更加清晰,播放效果更好。
在本申请实施例中,电子设备的第二部分设置有一个或多个扬声器,该设置在第二部分的一个或多个扬声器可称为第二部分侧发声单元。电子设备的第一部分也设置有一个或多个扬声器,该设置在第一部分的一个或多个扬声器可称为第一部分侧发声单元。电子设备可以通过第二部分侧发声单元与第一部分侧发声单元共同播放音频。
示例性的,电子设备的第二部分侧发声单元的描述可以参见上述图2A所示实施例,在此不再赘述。其中,电子设备的一个或多个中的全部或部分第一部分侧发声单元可以被设置于A壳或B壳的内部表面,或者,第一部分侧发声单元可以被设置于A壳与B壳连接处(屏幕部分的侧边缘)。第一部分侧发声单元可以为体积较小的扬声器,例如,压电陶瓷扬声器、磁致收缩扬声器等。
例如,若第一部分侧发声单元为压电陶瓷扬声器,该第一部分侧发声单元可以设置于电子设备A壳的显示屏的背面,压电陶瓷扬声器可以通过力矩作用将自身形变传递给显示屏,使得显示屏振动发声。其中,压电陶瓷扬声器可以包括多层压电陶瓷片。当一片压电陶瓷片发生膨胀和收缩时,将带动显示屏发生弯曲变形,使得整个显示屏形成弯曲振动,从而显示屏可以推动空气并产生声音。
需要说明的是,第一部分侧发声单元为压电陶瓷扬声器仅为示例,该第一部分侧发声单元也可以为动圈扬声器、动铁扬声器、压电扬声器以及MEMS扬声器等等,本申请实施例对此不作限定。
示例性的,如图3A所示,以电子设备的第二部分的C壳表面作为XOY平面,以垂直于以XOY界面且从D壳指向C壳的方向为Z轴。其中,原点O点位于中心轴左侧,Y轴位于中心轴且方向向右,X轴垂直于Y轴,且从O点指向C壳远离中心轴的边缘。
例如,以电子设备包括3个第一部分侧发声单元,2个第二部分侧发声单元为例。电子设备的2个第二部分侧发声单元可以分别设置在键盘部分的相对两侧,例如,第二部分侧发声单元31位于第二部分的左侧,第二部分侧发声单元32位于键盘部分的右侧。电子设备的第一部分侧发声单元可以设置在第一部分的B壳的中间与两侧,如图3B所示,该3个第一部分侧发声单元都附着在B壳内部,该3个第一部分侧发声单元包括第一部分侧发声单元33、第一部分侧发声单元34以及第一部分侧发声单元35。其中,第一部分侧发声单元35位于第一部分的垂直中轴线。第一部分侧发声单元33位于第一部分的垂直中轴线的左侧区域,第一部分侧发声单元34位于第一部分的垂直中轴线的右侧区域。该第一部分侧发声单元33与第一部分侧发声单元35的距离和第一部分侧发声单元34与第一部分侧发声单元35的距离相同。
需要说明的是,该3个第一部分侧发声单元与2个第二部分侧发声单元仅为示例,该电子设备的发声单元的数量与构成可以不限于图3A所示构成,例如,该电子设备可以包括2个第一部分侧发声单元与2个第二部分侧发声单元。其中,该2个第一部分侧发声单元可以分别位于第一部分的左右区域内,且相对于中心轴的中垂面对称,该2个第一部分侧发声单元与中心轴的距离相等,并且与第一部分中心点的距离相等。同理,该2个第二部分侧发声单元可以分别位于第二部分的左右区域内,且相对于中心轴的中垂面对称,该2个第二部分侧发声单元与中心轴的距离相等,并且与第二部分中心点的距离相等。电子设备也可以包括更多或更少数量的第一部分侧发声单元,更多或更少数量的第二部分侧发声单元,本申请实施例对此不作限定。
可以理解的是,当电子设备的第一部分侧发声单元(第二部分侧发声单元)位于第一部分(第二部分)的第一侧(例如,左侧、右侧)时,为了让用户听到平衡的声音,必然有一个发声单元位于与该发声单元相对于中心轴的中垂面对称的位置。
还需要说明的是,图3A所示的第一部分侧发声单元35的位置仅为示例,不限于图3A所示的位置,第一部分侧发声单元35可以位于第一部分的其他位置。示例性的,该第一部分侧发声单元35可以位于第一部分的A壳附近,例如,第一部分侧发声单元35可以附着于A壳内部,振动A壳发声。再例如,第一部分侧发声单元35可以位于第一部分的内部且喇叭的方向朝向A壳,A壳的对应位置处设置有出声孔。使得第一部分侧发声单元35发出的声音的声波方向从B壳指向A壳。这样,第一部分侧发声单元35可以用于播放音频中的背景声音,让播放背景声音的声源与用户的距离更远,给用户带来更好的沉浸感。第一部分侧发声单元35还可以用于播放视频画面中距离用户更远的对象(例如,人物、动物、物体等)发出的声音,以声音表示各个对象的位置关系,给用户带来更好的观影体验。
又示例性的,该第一部分侧发声单元35可以位于第一部分的侧边缘的顶部,使得第一部分侧发声单元35发出的声音的声波方向朝上。第一部分侧发声单元35可以用于播放音频中的天空音。其中,天空音为音频中指定天空对象发出的声音。例如,指定天空对象可以为飞行物(例如,飞机、飞鸟等)、雷电等对象。这样,可以让用户以听觉感受音频中的指定天空对象的高度信息,增加用户收听音频的沉浸感。第一部分侧发声单元35还可以用于播放视频画面中处于画面上方的对象发出的声音,以声音表示各个对象的位置关系,给用户带来更好的观影体验。
在一些示例中,由于第一部分侧发声单元直接播放音源数据时,若待播放的音频的信号功率较低,使得第一部分侧发声单元振幅较小,音质差。第一部分侧发声单元可以通过发声单元连接线与音频功放芯片相连,例如,该发声单元连接线可以为图3B所示的发声单元连接线36、发声单元连接线37与发声单元连接线38。其中,发声单元连接线36可以用于连接第一部分侧发声单元33与音频功放芯片,发声单元连接线37可以用于连接第一部分侧发声单元34与音频功放芯片,发声单元连接线38可以用于连接第一部分侧发声单元35与音频功放芯片。需要说明的是,图3B仅示出了一部分发声单元连接线,不应对发声单元连接线构成具体限定。其中,音频功放芯片可以放大模拟音频信号,再将放大后的模拟音频信号发送给第一部分侧发声单元,使得第一部分侧发声单元可以基于放大后的模拟音频信号振动发声。这样,放大后的模拟音频信号的功率较高,以较高功率的模拟音频信号推动第一部分侧发声单元,可以以更高音质播放音频。
同理,第二部分侧发声单元也可以通过发声单元连接线与音频功放芯片相连,接收音频功放芯片发送的放大后的音频信号。
具体的,图3A所示的电子设备在播放音频时的声场方向如图3C与图3D所示。其中,图3C示出了电子设备的YOZ平面的声压分布情况,图3D示出了电子设备的XOZ平面的声压分布情况。图3C与图3D中示出的颜色越深的位置处的声压越强,示出的颜色越浅的位置处的声压越弱。其中,由于第二部分侧发声单元与第一部分侧发声单元共同发声,从图3D可以看出,能量较强的声波的波束方向为X轴与Z轴的中垂线方向。这样,由于声波的波束方向朝着X轴与Z轴的中垂线方向,该方向通常朝向用户使用电子设备时,用户的耳朵所处的位置,因此,可以将声音更好地传播到用户双耳附近,提升用户收听音频的体验。
由图3C和图3D示出的声压图像可以看出,电子设备在通过第二部分侧发声单元与第一部分侧发声单元共同播放音频时,相比于仅通过第二部分侧发声单元播放音频,提升了音频声像的在Z轴方向的高度。并且,从XOY平面的声场转变为O-XYZ三维空间的垂直3D声场,可以营造更宽更高的立体音频播放效果,使得用户在观看画面时感觉到音画高度一致,加强用户环绕感音频体验。
在一些示例中,如图2B所示,仅包括第二部分侧发声单元的电子设备在YOZ平面中最大声压值为101.395分贝,如图3C所示,包括第一部分侧发声单元和第二部分侧发声单元的电子设备在YOZ平面中最大声压值为103.513分贝。如图2C所示,仅包括第二部分侧发声单元的电子设备在XOZ平面中最大声压值为115.276分贝。如图3D所示,包括第一部分侧发声单元和第二部分侧发声单元的电子设备在XOZ平面中最大声压值为118.504分贝。由于图2B、图2C、图3C和图3D可以看出,包括第一部分侧发声单元和第二部分侧发声单元的电子设备比仅包括第二部分侧发声单元的电子设备最大声压值更大,声音波束朝向用户双耳方向。这样,可以给用户带来更好的听觉体验。
并且,在同等音频效果状态下,由于相比于第二部分侧发声单元单独播放音频的情形,第一部分侧发声单元承担了一部分播放音频的功能,使得第二部分侧发声单元的振动幅度减小,不容易引发键盘等结构 件共振,减小第二部分侧发声单元播放音频时产生的杂音。同时,第一部分侧发声单元不需要桌面反射声波,电子设备的摆放位置对音频播放效果影响减小。
需要说明的是,当第一部分侧发声单元仅附着在B壳内部时,可以提升正向面对用户方向的声音传播效果,提升用户观看视频时的沉浸感。当第一部分侧发声单元仅附着在A壳内部时,第一部分侧发声单元可以驱动A壳发声。第一部分侧发声单元可以用于播放低频音频信号。这样,由于低频音频信号指向性不强,可以提升低频的震感,增强播放视频的氛围感,优化音频的音色,并且不会造成B壳振动而影响屏幕的使用效果。同时可替换部分键盘侧扬声器低频能量,减轻键盘侧振动带来的杂音体验影响。
基于包括一个或多个发声单元的电子设备,其中,一个或多个发声单元包括一个或多个第二部分侧发声单元以及一个或多个第一部分侧发声单元。本申请实施例提供了一种音频播放方法。电子设备的音效模式为第一模式时,电子设备可以接收到播放指定音频的输入,响应于该输入,基于第一模式,处理电子设备的指定音频的音频数据(又称为音源数据),得到电子设备的多个发声单元对应的音频数据。电子设备的多个发声单元同时播放对应的音频数据。这样,可以实现电子设备使用第一部分侧发声单元以及第二部分侧发声单元协同播放第一音频,基于两个部分联合发声的方式,提高电子设备的声场的环绕感、沉浸感,增强音频播放效果。
接下来介绍本申请实施例提供的一种音频播放方法。
示例性的,如图4所示,该音频播放方法包括以下步骤:
S401.电子设备的音效模式为第一模式,电子设备接收到播放指定音频的输入。
其中,电子设备支持一个或多个音效模式,该一个或多个音效模式包括第一模式。电子设备的音效模式为第一模式。其中,第一模式可以为电子设备默认设置的音效模式,或者,为用户选中的音效模式。
首先,接下来以5.1声道中的多个声道为例,示例性的介绍本申请实施例提供的多种音效模式,以及多个音效模式下各个发声单元播放的音频数据包括的声道数据。在本申请实施例中,声道是指声音在录制时在不同空间位置采集的相互独立的音频信号,声道数可以理解为声音录制时的音源数量。其中,5.1声道包括左声道、右声道、左环绕声道、右环绕声道、中置声道和低频声道。其中,左声道的数据包括音源数据中模拟用户左耳的听觉范围的声音数据,右声道的数据包括音源数据中模拟用户右耳的听觉范围的声音数据。其中,由于左耳的听觉范围与右耳的听觉范围存在交叉,左声道的数据和右声道的数据有一部分相同,该左声道的数据与右声道的数据中相同的部分音频数据称为中置声道的数据。中置声道的数据包括音源数据中模拟用户左耳的听觉范围与右耳的听觉范围重叠的范围内的声音数据,在一些实施例中,中置声道包括人声对白。左环绕声道的数据可以用于体现左耳侧方位,该左环绕声道的数据可以包括左声道的数据中与中置声道的数据不同的音频数据。右环绕声道的数据可以用于体现右耳侧方位,该右环绕声道的数据可以包括右声道的数据中与中置声道的数据不同的音频数据。低频声道的数据包括音源数据中频率低于指定频率值(例如,150Hz)的音频数据。其中,左声道、右声道、左环绕声道、右环绕声道和中置声道的声音频率范围为20Hz-200KHz。在一些示例中,电子设备可以通过低通滤波器,过滤掉音源数据中频率大于指定频率值的音频数据,得到低频声道的数据。
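To make the channel definitions above concrete, the following is a minimal sketch, assuming a plain stereo source, of deriving a center channel (the content shared by left and right), left/right surround channels (each side minus the shared content), and a low-frequency channel obtained by low-pass filtering at the 150 Hz cutoff mentioned in the text. The mid/side-style split, the function name and the filter order are illustrative assumptions, not the algorithm used by the embodiments.

```python
# Illustrative sketch only: derive center, surround and low-frequency signals
# from a stereo pair, following the channel definitions in the text.
import numpy as np
from scipy.signal import butter, sosfilt

def split_channels(left: np.ndarray, right: np.ndarray, sample_rate: int,
                   lfe_cutoff_hz: float = 150.0):
    # Center channel: the content shared by the left and right channels,
    # approximated here as their common in-phase component.
    center = 0.5 * (left + right)

    # Surround channels: what remains of each side after the shared content
    # is removed (a simple approximation of "left/right minus center").
    left_surround = left - center
    right_surround = right - center

    # Low-frequency channel: source content below the cutoff (e.g. 150 Hz),
    # obtained with a low-pass filter as described above.
    sos = butter(4, lfe_cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    lfe = sosfilt(sos, center)

    return {"center": center, "left_surround": left_surround,
            "right_surround": right_surround, "lfe": lfe}
```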
在一种可能的实现方式中,电子设备支持的音效模式可以包括但不限于环绕增强模式、低频增强模式、对白增强模式和/或响度增强模式。其中,环绕增强模式下,电子设备通过第一部分侧发声单元播放包括左环绕声道和右环绕声道的音频数据,增强电子设备播放音频的环绕感。低频增强模式下,电子设备通过第一部分侧发声单元播放包括低频声道的音频数据,增强电子设备播放音频的节奏感。对白增强模式下,电子设备通过第一部分侧发声单元播放包括中置声道的音频数据,增强电子设备播放音频的人声。响度增强模式下,电子设备通过第一部分侧发声单元播放包括左声道和右声道的音频数据,增强电子设备播放音频的响度。
其中,在低频增强模式、对白增强模式或环绕增强模式下,电子设备的第一部分侧发声单元播放的音频数据(又称为第一音频数据)包括的声道与电子设备的第二部分侧发声单元播放的音频数据(又称为第二音频数据)包括的声道不相同。可以理解的是,由于第二音频数据包括的声道与第一音频数据包括的声道不相同,第二音频数据和第一音频数据的振幅、波形或频率中的一项或多项不同。
在响度增强模式下,第二音频数据的声道与第一音频数据的声道相同。可选的,在响度增强模式下,第二音频数据的振幅和第一音频数据的振幅不同。
在一些示例中,各个音效模式下,电子设备播放的第二音频数据包括的声道,以及电子设备播放的第 一音频数据包括的声道如表1所示:
表1
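
  Sound effect mode          | First audio data (first-part-side units) | Second audio data (second-part-side units)
  Surround enhancement       | left surround + right surround           | left + right, center and/or low-frequency
  Dialogue enhancement       | center                                   | left + right, left/right surround and/or low-frequency
  Loudness enhancement       | left + right                             | left + right
  Low-frequency enhancement  | low-frequency                            | left + right, left/right surround and/or center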
其中,如表1所示,环绕增强模式中,基于音源数据得到的第二音频数据包括左声道、右声道、中置声道和/或低频声道的数据。基于音源数据的得到的第一音频数据包括左环绕声道和右环绕声道的数据。这样,在环绕增强模式下,可以通过第一部分侧发声单元播放环绕声道的数据,使得电子设备的声场具有更好的包围感。
对白增强模式中,基于音源数据得到的第二音频数据包括左声道、右声道、左环绕道、右环绕声道和/或低频声道。基于音源数据的得到的第一音频数据仅包括中置声道。这样,可以通过第一部分侧发声单元增加对白的清晰度。
响度增强模式中,基于音源数据得到的第二音频数据包括左声道、右声道。基于音源数据的得到的第一音频数据包括左声道、右声道。这样,可以通过第一部分侧发声单元增加音频播放的响度。
低频增强模式中,基于音源数据得到的第二音频数据包括左声道、右声道、左环绕道、右环绕声道和/或中置声道。基于音源数据的得到的第一音频数据包括低频声道。这样,可以通过电子设备播放低频节奏信号,例如,指定音频中的鼓点、低音(bass)、架子鼓等的声音信号,使得用户可以感受到重低音带来的力量感。
需要说明的是,电子设备仅包括1个第一部分侧发声单元时,电子设备支持低频增强模式、响度增强模式和/或对白增强模式。电子设备包括2个或2个以上的第一部分侧发声单元时,电子设备支持低频增强模式、响度增强模式、环绕增强模式和/或对白增强模式。
其中,若电子设备仅包括1个第一部分侧发声单元。在低频增强模式下,第一部分侧发声单元可以播放低频声道的数据。在对白增强模式下,第一部分侧发声单元可以播放中置声道的数据。在响度增强模式下,第一部分侧发声单元可以同时播放左声道的数据和右声道的数据。具体的,电子设备可以将左声道的数据和右声道的数据叠加在一起,使用第一部分侧发声单元播放该叠加在一起的左声道的数据和右声道的数据。或者,电子设备可以将左声道的数据和右声道的数据进行下混,得到仅包括单声道的数据,并使用第一部分侧发声单元播放该单声道的数据。其中,电子设备针对音频数据进行下混操作后,下混之后的音频数据包括的声道数量小于下混之前的音频数据包括的声道数量。
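As a small illustration of the "superimpose or downmix" option above for a device with a single first-part-side sound-emitting unit, the sketch below averages the left and right channels into one mono buffer; the averaging weights and the clipping guard are assumptions for illustration.

```python
import numpy as np

def downmix_to_mono(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Superimpose the two channels; averaging keeps the result in range so
    # the single sound-emitting unit is not driven into clipping.
    mono = 0.5 * (left + right)
    return np.clip(mono, -1.0, 1.0)
```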
其中,电子设备的第一部分侧发声单元可以位于电子设备的第一部分的A壳上,或者,位于电子设备的第一部分的B壳上,或者,位于电子设备的第一部分的A壳与B壳之间的空腔中,或者,位于电子设备的第一部分的A壳与B壳的连接处。在一些示例中,电子设备的第一部分侧发声单元位于电子设备的第一部分的A壳,这样,电子设备在低频增强模式下,通过第一部分侧发声单元播放低频声道的数据,不仅增强了低频震感,还减弱了因为第二部分抖动而产生的杂音。在另一些示例中,电子设备的第一部分侧发声单元位于电子设备的第一部分的B壳,这样,电子设备在对白增强模式下,通过第一部分侧发声单元播放中置声道的数据,可以在电子设备播放视频时,使得音源位于显示屏附近,让用户感觉声音由画面处传出,增强视频播放的沉浸感。
若电子设备包括2个第一部分侧发声单元。在低频增强模式下,电子设备可以使用所有第一部分侧发声单元播放低频声道的数据。在对白增强模式下,电子设备可以使用所有第一部分侧发声单元播放中置声道的数据。在响度增强模式下,电子设备使用位于电子设备左侧的第一部分侧发声单元播放左声道的数据,位于电子设备右侧的第一部分侧发声单元播放右声道的数据。在环绕增强模式下,电子设备使用位于电子 设备左侧的第一部分侧发声单元播放左环绕声道的数据,位于电子设备右侧的第一部分侧发声单元播放右环绕声道的数据。
其中,电子设备的第一部分侧发声单元可以同时位于电子设备的第一部分的A壳上,或者,位于电子设备的第一部分的B壳上,或者,位于电子设备的第一部分的A壳与B壳之间的空腔中,或者,位于电子设备的第一部分的A壳与B壳的连接处。需要说明的是,该2个第一部分侧发声单元分别位于第一部分的左右两侧,并且相对于中心轴的中垂面对称。
同理,在一些示例中,电子设备的第一部分侧发声单元位于电子设备的第一部分的A壳。在另一些示例中,电子设备的第一部分侧发声单元位于电子设备的第一部分的B壳。
在以下实施例中,使用一个发声单元播放多个声道的数据时,可以理解为发声单元播放该多个声道叠加得到的数据,或者,播放基于该多个声道下混得到的数据。为了减少重复的描述,以下将不再重复解释使用一个发声单元播放多个声道的数据的具体描述。
若电子设备包括3个第一部分侧发声单元。其中,在低频增强模式下,电子设备可以使用所有第一部分侧发声单元播放低频声道的数据。在对白增强模式下,电子设备可以使用所有第一部分侧发声单元播放中置声道的数据。在响度增强模式下,电子设备可以使用左侧的第一部分侧发声单元播放左声道的数据,右侧的第一部分侧发声单元播放右声道的数据,使用另一个第一部分侧发声单元播放左声道的数据和右声道的数据。在环绕增强模式下,电子设备使用左侧的第一部分侧发声单元播放左环绕声道的数据,右侧的扬声器播放右环绕声道的数据,使用另一个扬声器播放左环绕声道的数据和右环绕声道的数据。
其中,电子设备的第一部分侧发声单元可以位于电子设备的第一部分的A壳上,或者,位于电子设备的第一部分的B壳上,或者,位于电子设备的第一部分的A壳与B壳之间的空腔中,或者,位于电子设备的第一部分的A壳与B壳的连接处。需要说明的是,电子设备的1个第一部分侧发声单元位于第一部分的中间,并且处于中心轴的中垂面。电子设备的另外2个第一部分侧发声单元分别位于第一部分的左右两侧,并且相对于中心轴的中垂面对称。
在一些示例中,电子设备的第一部分侧发声单元位于电子设备的第一部分的A壳。在另一些示例中,电子设备的第一部分侧发声单元位于电子设备的第一部分的B壳。
在一些示例中,电子设备的1个第一部分侧发声单元位于电子设备的A壳,电子设备的另外2个第一部分侧发声单元位于电子设备的B壳。电子设备可以在低频增强模式下,仅通过位于电子设备的A壳的第一部分侧发声单元播放低频声道的数据。电子设备可以使用另外2个第一部分侧发声单元中左侧的第一部分侧发声单元播放左声道的数据且使用右侧的第一部分侧发声单元播放右声道的数据,或者,使用左侧的第一部分侧发声单元播放左环绕声道的数据且使用右侧的第一部分侧发声单元播放右环绕声道的数据,或者,使用另外2个第一部分侧发声单元播放音源数据,或者,不使用该2个第一部分侧发声单元播放音频数据。这样,在低频模式下,B壳的发声单元不播放低频声道的数据,减弱B壳的振动幅度。
需要说明的是,该1个第一部分侧发声单元位于电子设备的A壳,另外2个第一部分侧发声单元位于电子设备的B壳仅为示例,该1个第一部分侧发声单元可以位于第一部分的中心轴的中垂面的任意位置,另外2个第一部分侧发声单元同时位于第一部分的A壳、B壳、A壳与B壳之间的空腔或A壳与B壳之间的连接处,并且该2个第一部分侧发声单元相对于中心轴的中垂面对称。
以此类推,当电子设备包括更多的第一部分侧发声单元时,电子设备可以在低频增强模式下,使用所有第一部分侧发声单元播放低频声道的数据。在对白增强模式下,使用所有第一部分侧发声单元播放中置声道的数据。在响度增强模式下,电子设备使用左侧的一个或多个第一部分侧发声单元播放左声道的数据,右侧的一个或多个第一部分侧发声单元播放右声道的数据,使用其他的第一部分侧发声单元播放左声道的数据和右声道的数据。在环绕增强模式下,电子设备使用左侧的一个或多个第一部分侧发声单元播放左环绕声道的数据,右侧的一个或多个第一部分侧发声单元播放右环绕声道的数据,使用其他的第一部分侧发声单元播放左环绕声道的数据和右环绕声道的数据。
还需要说明的是,若电子设备仅包括1个第二部分侧发声单元。在低频增强模式下,第二部分侧发声单元可以播放除了低频声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、左环绕声道与右环绕声道和/或中置声道的数据。在对白增强模式下,第二部分侧发声单元可以播放除了中置声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、左环绕声道与右环绕声道和/或中置声道的数据。在响度增强模式下,第二部分侧发声单元可以同时播放左声道的数据和右声道的数据。具体的,电子设备可以将左声道的数据和右声道的数据叠加在一起,使用第二部分侧发声单元播 放该叠加在一起的左声道的数据和右声道的数据。或者,电子设备可以将左声道的数据和右声道的数据进行下混,得到仅包括单声道的数据,并使用第二部分侧发声单元播放该单声道的数据。
在以下实施例中,使用一个发声单元播放多个声道的数据时,可以理解为发声单元播放该多个声道叠加得到的数据,或者,播放基于该多个声道下混得到的数据。为了减少重复的描述,以下将不再重复解释使用一个发声单元播放多个声道的数据的具体描述。
其中,该第二部分侧发声单元可以位于电子设备的第二部分的C壳上,或者,位于电子设备的第二部分的D壳上,或者,位于电子设备的第二部分的C壳与D壳之间的空腔中,或者,位于电子设备的第二部分的C壳与D壳的连接处。
若电子设备包括2个第二部分侧发声单元。在低频增强模式下,电子设备可以使用第二部分侧发声单元可以播放除了低频声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、左环绕声道与右环绕声道和/或中置声道的数据。可以理解的是,第二音频数据包括左声道与右声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左声道的数据且使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据。第二音频数据包括左环绕声道与右环绕声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左环绕声道的数据且使用位于电子设备右侧的第二部分侧发声单元播放右环绕声道的数据。第二音频数据包括中置声道的数据时,电子设备使用所有第二部分侧发声单元播放中置声道的数据。
在中置增强模式下,电子设备可以使用第二部分侧发声单元可以播放除了中置声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、左环绕声道与右环绕声道和/或低频声道的数据。可以理解的是,第二音频数据包括左声道与右声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左声道的数据且使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据。第二音频数据包括左环绕声道与右环绕声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左环绕声道的数据且使用位于电子设备右侧的第二部分侧发声单元播放右环绕声道的数据。第二音频数据包括低频声道的数据时,电子设备使用所有第二部分侧发声单元播放低频声道的数据。
在响度增强模式下,电子设备使用位于电子设备左侧的第二部分侧发声单元播放左声道的数据,位于电子设备右侧的第二部分侧发声单元播放右声道的数据。
在环绕增强模式下,电子设备可以使用第二部分侧发声单元可以播放除了左环绕声道与右环绕声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、低频声道和/或中置声道的数据。可以理解的是,第二音频数据包括左声道与右声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左声道的数据且使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据。第二音频数据包括低频声道的数据时,电子设备使用所有第二部分侧发声单元播放低频声道的数据。第二音频数据包括中置声道的数据时,电子设备使用所有第二部分侧发声单元播放中置声道的数据。
其中,该第二部分侧发声单元可以位于电子设备的第二部分的C壳上,或者,位于电子设备的第二部分的D壳上,或者,位于电子设备的第二部分的C壳与D壳之间的空腔中,或者,位于电子设备的第二部分的C壳与D壳的连接处。需要说明的是,电子设备的2个第二部分侧发声单元分别位于第二部分的左右两侧,并且相对于中心轴的中垂面对称。
若电子设备包括3个第二部分侧发声单元。其中,在低频增强模式下,电子设备可以使用第二部分侧发声单元播放除了低频声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、左环绕声道与右环绕声道和/或中置声道的数据。
可以理解的是,第二音频数据仅包括左声道与右声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左声道的数据且使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据。电子设备可以使用位于电子设备中间的第二部分侧发声单元播放左声道的数据与右声道的数据,或者,不使用位于电子设备中间的第二部分侧发声单元播放音频数据。
第二音频数据仅包括左环绕声道与右环绕声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左环绕声道的数据且使用位于电子设备右侧的第二部分侧发声单元播放右环绕声道的数据。电子设备可以使用位于电子设备中间的第二部分侧发声单元播放左环绕声道的数据与右环绕声道的数据,或者,不使用位于电子设备中间的第二部分侧发声单元播放音频数据。
第二音频数据仅包括中置声道的数据时,电子设备使用所有第二部分侧发声单元播放中置声道的数据,或者,仅使用位于电子设备中间的第二部分侧发声单元播放中置声道的数据。
第二音频数据包括左声道的数据、右声道的数据以及中置声道的数据时,电子设备可以使用位于电子 设备的左侧的第二部分侧发声单元播放左声道的数据,使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据,使用位于电子设备中间的第二部分侧发声单元播放中置声道的数据。或者,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左声道的数据,使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据,使用位于电子设备中间的第二部分侧发声单元播放左声道的数据、右声道的数据以及中置声道的数据。
第二音频数据包括左环绕声道的数据、右环绕声道的数据以及中置声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左环绕声道的数据,使用位于电子设备右侧的第二部分侧发声单元播放右环绕声道的数据,使用位于电子设备中间的第二部分侧发声单元播放中置声道的数据。
第二音频数据包括左声道的数据、右声道的数据、左环绕声道的数据以及右环绕声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左声道的数据以及左环绕声道的数据,使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据以及右环绕声道的数据,使用位于电子设备中间的第二部分侧发声单元播放左声道的数据以及右声道的数据。
第二音频数据包括左声道的数据、右声道的数据、左环绕声道的数据、右环绕声道的数据以及中置声道的数据时,电子设备使用位于电子设备的左侧的第二部分侧发声单元播放左声道的数据以及左环绕声道的数据,使用位于电子设备右侧的第二部分侧发声单元播放右声道的数据以及右环绕声道的数据,使用位于电子设备中间的第二部分侧发声单元播放中置声道的数据。
同理,在对白增强模式下,电子设备可以使用第二部分侧发声单元播放除了中置声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、左环绕声道与右环绕声道和/或低频声道的数据。在响度增强模式下,电子设备可以使用左侧的第二部分侧发声单元播放左声道的数据,右侧的第二部分侧发声单元播放右声道的数据,使用另一个第二部分侧发声单元播放左声道的数据和右声道的数据。在环绕增强模式下,电子设备使用第二部分侧发声单元播放除了左环绕声道和右环绕声道的数据以外的声道的数据,例如,该第二音频数据可以包括左声道与右声道、低频声道和/或中置声道的数据。
其中,该第二部分侧发声单元可以位于电子设备的第二部分的C壳上,或者,位于电子设备的第二部分的D壳上,或者,位于电子设备的第二部分的C壳与D壳之间的空腔中,或者,位于电子设备的第二部分的C壳与D壳的连接处。需要说明的是,电子设备的1个第二部分侧发声单元位于第二部分的中间,并且处于中心轴的中垂面。电子设备的另外2个第二部分侧发声单元分别位于第二部分的左右两侧,并且相对于中心轴的中垂面对称。
以此类推,当电子设备包括更多的第二部分侧发声单元时,电子设备可以在低频增强模式下,使用第二部分侧发声单元播放除了低频声道的数据以外的声道的数据。在对白增强模式下,使用第二部分侧发声单元播放除了中置声道的数据以外的声道的数据。在响度增强模式下,电子设备使用左侧的一个或多个第二部分侧发声单元播放左声道的数据,右侧的一个或多个第二部分侧发声单元播放右声道的数据,使用中间的一个或多个第二部分侧发声单元播放左声道的数据和右声道的数据。在环绕增强模式下,电子设备使用第二部分侧发声单元播放除了左环绕声道与右环绕声道的数据以外的声道的数据。
在一些实施例中,电子设备在各个音效模式下,第一音频数据包括的声道可以参见表1的描述。电子设备的第二音频数据可以为音源数据。电子设备的第二部分侧发声单元可以共同播放音源数据。
在另一些实施例中,电子设备可以电子设备在各个音效模式下,第一音频数据包括的声道可以参见表1的描述。电子设备可以基于第二部分侧发声单元的构成,播放指定音频。
例如,当电子设备的第二部分侧发声单元支持播放单声道音频时,电子设备可以通过第二部分侧发声单元播放音源数据。当电子设备的第二部分侧发声单元支持播放双声道音频时,电子设备可以通过第二部分侧发声单元的一部分第二部分侧发声单元播放基于音源数据得到的左声道的数据,另一部分第二部分侧发声单元播放基于音源数据得到的右声道的数据。当电子设备的第二部分侧发声单元支持播放5.1声道音频时,电子设备可以通过第二部分侧发声单元的不同第二部分侧发声单元分别播放基于音源数据得到的5.1声道的各个声道的数据,例如,第二部分侧发声单元分为左声道发声单元、右声道发声单元、左环绕声道发声单元、右环绕声道发声单元、中置声道发声单元和低频声道发声单元。通过第二部分的左声道发声单元播放包括左声道的音频数据,通过第二部分的右声道发声单元播放包括右声道的音频数据,通过第二部分的左环绕声道发声单元播放包括左环绕声道的音频数据,通过第二部分的右环绕声道发声单元播放包括右环绕声道的音频数据,通过第二部分的中置声道发声单元播放包括中置声道的音频数据,通过第二部分的低频声道发声单元播放包括低频声道的音频数据。以此类推。
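The per-channel routing described above, where a 5.1-capable second part has dedicated left, right, left-surround, right-surround, center and low-frequency units, can be sketched as a simple lookup table; the unit names below are hypothetical and exist only to illustrate the mapping.

```python
# Hypothetical routing table: which second-part-side unit plays which channel
# when the keyboard side supports full 5.1 playback (illustrative only).
CHANNEL_TO_UNIT = {
    "left": "part2_left_unit",
    "right": "part2_right_unit",
    "left_surround": "part2_left_surround_unit",
    "right_surround": "part2_right_surround_unit",
    "center": "part2_center_unit",
    "lfe": "part2_lfe_unit",
}

def route_second_part_audio(channel_buffers: dict) -> dict:
    # Return a mapping from sound-emitting unit to the buffer it should play.
    return {CHANNEL_TO_UNIT[name]: buf
            for name, buf in channel_buffers.items()
            if name in CHANNEL_TO_UNIT}
```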
需要说明的是,电子设备的第二部分侧发声单元支持播放单声道音频、双声道音频和多声道音频时,第二部分侧发声单元可以以第二部分侧发声单元支持的最多声道数量的音频的形式,播放指定音频,或者,第二部分侧发声单元可以以用户选中的音频形式,播放指定音频。
在一些实施例中,电子设备支持的音效模式包括全能增强模式,在全能增强模式中,基于音源数据得到的第二音频数据包括左声道、右声道以及中置声道。基于音源数据的得到的第一音频数据包括左环绕声道、右环绕声道和中置声道。这样,在全能增强模式下,可以通过第一部分侧发声单元播放的音频数据实现声场更好的包围感,并且,由于全能增强模式中,第一音频数据和第二音频数据都包括中置声道,中置声道包括音频数据中的人声,可以用于增加对白的清晰度。
在一些实施例中,电子设备的各个音效模式包括原声音效模式。在原声音效模式下,电子设备的第二部分侧发声单元和第一部分侧发声单元共同播放指定音频的音频数据。
在一种可能的实现方式中,电子设备还可以提供智能增强模式,当电子设备未接收到选择任一音效模式的输入时,电子设备可以将音效模式设置为智能增强模式。智能增强模式为电子设备和电子设备支持的一个或多个音效模式中的某一个或多个的组合。这样,电子设备可以在用户未选择音效模式时,电子设备也可以以智能增强模式处理播放的音源数据,使得电子设备和电子设备共同播放声音。
具体的,当电子设备和电子设备支持的音效模式包括对白增强模式、环绕增强模式、响度增强模式、低频增强模式时。电子设备可以基于是否播放视频,确定出智能增强模式。其中,当电子设备同时播放视频和音频时,智能增强模式可以为环绕增强模式和对白增强模式的组合,也就是说,第一音频数据包括左环绕声道、右环绕声道和中置声道。当电子设备仅播放音频时,智能增强模式可以为响度增强模式和低频增强模式的组合,也就是说,第一音频数据包括左环绕声道、右环绕声道和低频声道。
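A minimal sketch of the smart-enhancement selection above: when video is playing, the combined mode follows surround plus dialogue enhancement, and when only audio is playing it follows loudness plus low-frequency enhancement. The mode identifiers are assumptions for illustration.

```python
def smart_enhancement_modes(playing_video: bool):
    # Which basic modes the smart enhancement mode combines, per the text.
    if playing_video:
        return ("surround_enhance", "dialogue_enhance")
    return ("loudness_enhance", "bass_enhance")
```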
当电子设备和电子设备支持的音效模式为对白增强模式、低频增强模式或响度增强模式时。电子设备可以设置智能增强模式可以为低频增强模式和响度增强模式的组合。
需要说明的是,由于智能增强模式为电子设备基于电子设备和电子设备支持的音效模式得到的,电子设备确定出的一个或多个音效模式始终包括智能增强模式。
在一些实施例中,左环绕声道的数据与左声道的数据相同,右环绕声道的数据与右声道的数据相同,电子设备在播放左环绕声道的数据与右环绕声道的数据时,可以调整播放左环绕声道的数据与右环绕声道的数据的响度,例如,在逐步增大播放左环绕声道的数据的响度时,逐步减小播放右环绕声道的数据的响度,再例如,在逐步增大播放右环绕声道的数据的响度时,逐步减小播放左环绕声道的数据的响度。这样,可以给用户带来一种左声道数据与右声道数据模拟的音源在用户面前环绕移动的感觉。
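The gradual loudness trade-off between the two surround channels described above can be sketched as a linear crossfade over a buffer; the linear ramp is an assumption used only to illustrate the idea of the simulated source moving from one side to the other.

```python
import numpy as np

def surround_crossfade(left_surround: np.ndarray, right_surround: np.ndarray):
    # Gradually raise the left-surround loudness while lowering the
    # right-surround loudness across the buffer; swapping the two ramps
    # gives the reverse movement described in the text.
    n = len(left_surround)
    ramp_up = np.linspace(0.0, 1.0, n)
    ramp_down = 1.0 - ramp_up
    return left_surround * ramp_up, right_surround * ramp_down
```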
在一些实施例中,电子设备可以显示电子设备支持的一个或多个音效模式对应的音效模式选项,一个或多个音效模式选项包括第一模式选项,一个或多个音效模式包括第一模式,第一模式选项和第一模式相对应。电子设备显示包括第一模式选项的一个或多个音效模式选项时,可以接收选中第一模式选项的第二输入,确定出电子设备的音效模式。例如,第一模式选项为响度增强模式选项,该第一模式选项对应的第一模式为响度增强模式。当该响度增强模式选项被选中时,电子设备将音效模式设置为响度增强模式。
S402.电子设备基于第一模式,处理指定音频的音频数据,得到各个发声单元的音频数据。
在本申请实施例中,指定音频的音频数据可以称为音源数据。其中,当电子设备仅播放音频时,音源数据为音频的数据。当电子设备播放视频时,音源数据为视频的音频数据,或者,为视频对应的音频的数据。
电子设备可以基于第一模式,处理音源数据,得到各个发声单元各自的音频数据。
在一种可能的实现方式中,电子设备的音效模式为第一模式。电子设备在播放指定音频时,电子设备可以解析指定音频的音频文件。电子设备可以在解析音频文件时,得到音源数据包括的声道。电子设备可以基于第一模式,处理音源数据包括的各个声道的数据,得到第一模式下各个发声单元所需的音频数据。这样,电子设备可以在音效模式为第一模式时,无论音源数据包括哪些声道,都可以得到电子设备的各个发声单元所需的音频数据。
在一些实施例中,电子设备可以在确定出音源数据包括1个声道时,即,音源为单声道音源时,电子设备可以基于音源数据得到包括2个声道(左声道和右声道)的音频数据。例如,电子设备可以通过直接拷贝音源数据,得到两段单声道的数据,并将其中一段单声道数据作为左声道的音频数据,另一段作为右 声道的音频数据,得到包括左声道和右声道的音源数据,其中,该左声道的音频数据与右声道的音频数据相同。或者,电子设备可以拷贝得到两段单声道音源数据后,再针对该两段单声道音源数据通过指定算法进行处理,调整该两段音源数据的相位差、幅值和频率中的一项或多项,得到包括左声道和右声道的音源数据。
之后,若电子设备的音效模式为响度增强模式,电子设备可以将基于音源数据得到包括2个声道(左声道和右声道)的音频数据作为第一音频数据和第二音频数据。
若电子设备的音效模式为低频增强模式、环绕增强模式或对白增强模式,将包括2个声道(左声道和右声道)的音频数据进行上混处理,得到包括3个或3个以上的声道的音频数据。其中,上混操作可以用于增加音频数据包括的声道的数量。低频增强模式下,电子设备可以得到包括低频声道的第一音频数据,包括左声道与右声道、左环绕声道与右环绕声道和/或中置声道的第二音频数据。环绕增强模式下,电子设备可以得到包括左环绕声道和右环绕声道的第一音频数据,包括左声道与右声道、低频声道和/或中置声道的第二音频数据。对白增强模式下,电子设备可以得到包括中置声道的第一音频数据,包括左声道与右声道、左环绕声道与右环绕声道和/或低频声道的第二音频数据。
电子设备可以在确定出音源数据包括2个声道(左声道和右声道)时,即,音源为双声道音源时,若电子设备的音效模式为响度增强模式,电子设备可以直接得到包括左声道和右声道的音频数据。
若电子设备的音效模式为低频增强模式、环绕增强模式或对白增强模式,电子设备可以将包括2个声道(左声道和右声道)的音频数据进行上混处理,得到包括3个或3个以上的声道的音频数据。低频增强模式下,电子设备可以得到包括低频声道的第一音频数据,包括左声道与右声道、左环绕声道与右环绕声道和/或中置声道的第二音频数据。环绕增强模式下,电子设备可以得到包括左环绕声道和右环绕声道的第一音频数据,包括左声道与右声道、低频声道和/或中置声道的第二音频数据。对白增强模式下,电子设备可以得到包括中置声道的第一音频数据,包括左声道与右声道、左环绕声道与右环绕声道和/或低频声道的第二音频数据。
电子设备可以在确定出音源数据包括3个或3个以上的声道(该3个或3个以上的声道包括左声道和右声道)时,即,音源为多声道音源时,若电子设备的音效模式为响度增强模式,电子设备可以直接得到包括左声道和右声道的第一音频数据和第二音频数据。或者,电子设备可以将音源数据下混得到仅包括左声道和右声道的音源数据,并将该下混后的音源数据作为第一音频数据和第二音频数据。其中,下混操作可以用于减少音频数据包括的声道的数量。
若电子设备的音效模式为低频增强模式,若音源数据包括低频声道的数据,电子设备可以将低频声道的数据作为第一音频数据,将音源数据的其他声道的数据作为第二音频数据。
或者,电子设备可以将音源数据上混得到包括更多声道的音源数据,将上混后的音源数据的低频声道的数据作为第一音频数据,将上混后的音源数据的其他声道的数据作为第二音频数据。这样,当电子设备的第二部分侧发声单元的数量较多时,电子设备的第二部分侧发声单元可以播放更多声道的数据。
若音源数据不包括低频声道的数据,电子设备可以对音源数据进行上混操作,得到包括低频声道的音源数据,该上混后的音源数据的声道数量大于上混前的音源数据的声道数量。电子设备可以将低频声道的数据作为第一音频数据,将音源数据的其他声道的数据作为第二音频数据。
同理,若电子设备的音效模式为对白增强模式,若音源数据包括中置声道的数据,电子设备可以将中置声道的数据作为第一音频数据,将音源数据的其他声道的数据作为第二音频数据。
或者,电子设备可以将音源数据上混得到包括更多声道的音源数据,将上混后的音源数据的中置声道的数据作为第一音频数据,将上混后的音源数据的其他声道的数据作为第二音频数据。这样,当电子设备的第二部分侧发声单元的数量较多时,电子设备的第二部分侧发声单元可以播放更多声道的数据。
若音源数据不包括中置声道的数据,电子设备可以对音源数据进行上混操作,得到包括中置声道的音源数据,该上混后的音源数据的声道数量大于上混前的音源数据的声道数量。电子设备可以将中置声道的数据作为第一音频数据,将音源数据的其他声道的数据作为第二音频数据。
若电子设备的音效模式为对白增强模式,若音源数据包括左环绕声道和右环绕声道的数据,电子设备可以将左环绕声道和右环绕声道的数据作为第一音频数据,将音源数据的其他声道的数据作为第二音频数据。
或者,电子设备可以将音源数据上混得到包括更多声道的音源数据,将上混后的音源数据的左环绕声道和右环绕声道的数据作为第一音频数据,将上混后的音源数据的其他声道的数据作为第二音频数据。这样,当电子设备的第二部分侧发声单元的数量较多时,电子设备的第二部分侧发声单元可以播放更多声道 的数据。
若音源数据不包括左环绕声道和右环绕声道的数据,电子设备可以对音源数据进行上混操作,得到包括左环绕声道和右环绕声道的音源数据,该上混后的音源数据的声道数量大于上混前的音源数据的声道数量。电子设备可以将左环绕声道和右环绕声道的数据作为第一音频数据,将音源数据的其他声道的数据作为第二音频数据。
需要说明的是,当电子设备的第二部分侧发声单元的数量小于音源数据的其他声道的数量时,电子设备可以使用一个第二部分侧发声单元播放多个声道的数据,或者,将音源数据的其他声道的数据进行下混操作,得到声道数量小于或等于第二部分侧发声单元的数量的第二音频数据。
在另一些实施例中,电子设备可以将音源数据作为第二音频数据。
在一些示例中,电子设备可以基于第二部分侧发声单元的数量,确定出是否对包括3个或3个以上声道的音源数据进行上混或下混操作。
具体的,音源数据包括第一模式下第一音频数据所需的声道的数据时,电子设备可以在音源数据的其他声道的数量小于第二部分侧发声单元的数量时,对音源数据进行上混操作,或者,复制部分声道的数据,得到包括的其他声道的数量与第二部分侧发声单元的数量相同的音源数据,并且使用第二部分侧发声单元播放该其他声道的数据。或者,使用多个第二部分侧发声单元播放同一个声道的数据,或者,电子设备可以对音源数据进行上混操作,得到包括的其他声道的数量大于第二部分侧发声单元的数量的音源数据。
电子设备可以在音源数据的其他声道的数量等于第二部分侧发声单元的数量时,不对音源数据进行上混或下混操作。
电子设备可以在音源数据的其他声道的数量大于第二部分侧发声单元的数量时,电子设备可以使用一个第二部分侧发声单元播放多个声道的数据,或者,将音源数据的其他声道的数据进行下混操作,或者叠加部分声道的数据,得到声道数量小于或等于第二部分侧发声单元的数量的第二音频数据。
需要说明的是,由于当音源数据不包括第一模式下第一音频数据所需的声道的数据时,电子设备必然对音源数据进行上混操作,得到包括第一模式下第一音频数据所需的声道的音源数据。之后,电子设备可以再基于第二部分侧发声单元的数量,确定出是否对音源数据进行进一步的上混或下混操作。
还需要说明的是,在此以音源数据的其他声道的数量作为判断标准仅为示例,也可以以音源数据的所有声道的数量作为判断标准,本申请实施例对此不作限定。
最后需要说明的是,电子设备在各个音效模式下播放音频数据的描述可以参见上述表1所示实施例,在此不再赘述。
在另一些示例中,电子设备可以基于电子设备的发声单元的数量,确定出是否对包括3个或3个以上声道的音源数据进行上混或下混操作。
在一些示例中,以在低频增强模式、环绕增强模式和对白增强模式中,将音源数据上混得到包括5.1声道(左声道、右声道、左环绕声道、右环绕声道、低频声道和中置声道)的音源数据为例,详细解释电子设备基于第一模式得到各个发声单元的音频数据。示例性的,电子设备基于第一模式,得到各个发声单元的音频数据的流程如图5所示:
S501.电子设备获取音源数据的声道信息。
电子设备获取音源数据的包括的声道。电子设备可以在按照指定音频的音频文件的解码格式解码该音频文件时,获取到音源数据包括哪些声道。可以理解的是,当电子设备获取到音源数据包括的声道时,可以获取到音源数据包括的声道的数量。
S502.音源的声道数量是否小于2。
电子设备判断音源数据的声道数量是否小于2,当电子设备判定出音源数据的声道数量小于2时,执行步骤S504;当电子设备判定出音源数据的声道数量大于或等于2时,执行步骤S503。
S503.音源数据的声道数量是否大于2。
电子设备判断音源数据的声道数量是否大于2。当电子设备判定出音源数据的声道数量小于或等于2时,可以执行步骤S505;当电子设备判定出音源数据的声道数量大于2时,可以执行步骤S506。
S504.电子设备处理音源数据,得到包括的双声道音源数据,双声道包括左声道和右声道。
电子设备判定出音源数据为单声道音源数据。电子设备可以复制单声道音源数据,得到双声道音源数 据。
在一些示例中,电子设备可以直接拷贝单声道音源数据,得到两段单声道音源数据,并将其中一段单声道音源数据作为左声道的音频数据,另一段作为右声道的音频数据,得到包括左声道和右声道的音源数据,其中,该左声道的音频数据与右声道的音频数据相同。或者,电子设备拷贝得到两段单声道音源数据后,可以针对该两段单声道音源数据通过指定算法进行处理,调整该两段音源数据的相位差、幅值和频率中的一项或多项,得到包括左声道和右声道的音源数据。
S505.电子设备基于第一模式,处理包括左声道和右声道的音源数据,得到各个发声单元的音频数据。
若第一模式为环绕增强模式,电子设备可以将双声道音源数据,经过相关算法处理,得到包括左环绕声道和右环绕声道的第一音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33播放左环绕声道的数据,通过第一部分侧发声单元34播放右环绕声道的数据,通过第二部分侧发声单元31播放左声道的数据,通过第二部分侧发声单元32播放右声道的数据。其中,由于电子设备包括第一部分侧发声单元35,电子设备可以控制该第一部分侧发声单元35播放左环绕声道与右环绕声道的数据,或者,不使用该第一部分侧发声单元35播放音频数据。
可选的,电子设备可以通过该第一部分侧发声单元35播放音源数据。
若第一模式为低频增强模式,电子设备可以提取双声道音源数据中的低频声道的数据,得到包括低频声道的第一音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33、第一部分侧发声单元34以及第一部分侧发声单元35播放低频声道的数据,通过第二部分侧发声单元31播放左声道的数据,通过第二部分侧发声单元32播放右声道的数据。
若第一模式为对白增强模式,电子设备可以通过相关算法处理双声道音源数据,提取双声道音源数据中的中置声道的数据,得到包括中置声道的第一音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33、第一部分侧发声单元34以及第一部分侧发声单元35播放中置声道的数据,通过第二部分侧发声单元31播放左声道的数据,通过第二部分侧发声单元32播放右声道的数据。
若第一模式为响度增强模式,电子设备可以将双声道音源数据作为第一音频数据和第二音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33播放左声道的数据,通过第一部分侧发声单元34播放右声道的数据,通过第二部分侧发声单元31播放左声道的数据,通过第二部分侧发声单元32播放右声道的数据。其中,由于电子设备包括第一部分侧发声单元35,电子设备可以控制该第一部分侧发声单元35播放左声道与右声道的数据,或者,不使用该第一部分侧发声单元35播放音频数据。
可选的,若第一模式为低频增强模式、对白增强模式或环绕增强模式,电子设备可以将双声道音源数据进行上混操作,得到包括5.1声道的音源数据,再得到第一音频数据和第二音频数据,具体的,可以参见步骤S508和步骤S510。
需要说明的是,在任意音频模式下,电子设备可以将单声道数据或双声道数据作为第二音频数据,或者,电子设备可以处理单声道数据或双声道数据,得到包括一个或多个声道的第二音频数据。
S506.音源数据的声道数量是否小于等于5.1,其中,5.1声道包括左声道、右声道、中置声道、左环绕声道、右环绕声道与低频声道。
电子设备可以判断音源数据的声道数量是否小于5.1。电子设备可以在判定出音源数据的声道数量小于等于5.1时,执行步骤S507;电子设备可以在判定出音源数据的声道数量大于5.1时,执行步骤S509。
其中,声道数量小于5.1可以理解为音源数据包括的声道的数量小于5.1声道包括的声道的数量。例如,音源数据的声道为4声道、4.1声道、3声道、3.1声道或2.1声道时,声道数量小于5.1。其中,4声道可以理解为音源数据包括左声道、右声道、左环绕声道和右环绕声道,4.1声道可以理解为音源数据包括左声道、右声道、低频声道、左环绕声道和右环绕声道,以此类推。
同理,声道数量大于5.1可以理解为音源数据包括的声道的数量大于5.1声道包括的声道的数量。例如,音源数据的声道为7.1声道、10.1声道时,声道数量大于5.1。其中,7.1声道可以理解为音源数据包括左声道、右声道、左前环绕声道、右前环绕声道、左后环绕声道、右后环绕声道、中置声道和低频声道。
S507.音源数据的声道数量是否等于5.1。
电子设备可以判断音源数据的声道数量是否等于5.1。电子设备可以在判定出音源数据的声道数量等于5.1时,执行步骤S510;电子设备可以在判定出音源数据的声道数量是否不等于5.1时,执行步骤S508。
S508.将音源数据上混得到包括5.1声道的音源数据。
电子设备判定出音源数据的声道数量小于5.1,可以经过指定上混算法,将音源数据上混得到5.1声道音源数据。
S509.将音源数据下混得到包括5.1声道的音源数据。
电子设备判定出音源数据的声道数量大于5.1,可以经过指定下混算法,将音源数据下混得到5.1声道音源数据。
S510.电子设备基于第一模式,处理5.1声道音源数据,得到各个发声单元的音频数据。
若第一模式为环绕增强模式,电子设备可以提取5.1声道音源数据中的左环绕声道的数据和右环绕声道的数据,得到第一音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33播放左环绕声道的数据,通过第一部分侧发声单元34播放右环绕声道的数据,通过第二部分侧发声单元31播放左声道的数据,通过第二部分侧发声单元32播放右声道的数据。其中,由于电子设备包括第一部分侧发声单元35,电子设备可以控制该第一部分侧发声单元35播放低频声道和/或中置声道的数据,或者,不使用该第一部分侧发声单元35播放音频数据。可选的,电子设备还可以通过第二部分侧发声单元31和第二部分侧发声单元32播放低频声道和/或中置声道的数据。可选的,若未处理的音源数据包括左环绕声道、右环绕声道和其他声道,电子设备可以直接提取音源数据的对应声道的数据得到第二音频数据和第一音频数据。
若第一模式为低频增强模式,电子设备可以提取处理后的音源数据中的低频声道的数据,得到包括低频声道的第一音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33、第一部分侧发声单元34以及第一部分侧发声单元35播放低频声道的数据,通过第二部分侧发声单元31播放左声道的数据且通过第二部分侧发声单元32播放右声道的数据,和/或通过第二部分侧发声单元31播放左环绕声道的数据且通过第二部分侧发声单元32播放右环绕声道的数据,和/或通过第二部分侧发声单元31以及第二部分侧发声单元32播放中置声道的数据。可选的,若未处理的音源数据包括低频声道和其他声道,电子设备可以直接提取音源数据的对应声道的数据得到第二音频数据和第一音频数据。
若第一模式为对白增强模式,电子设备还可以提取处理后的音源数据中的中置声道的数据,得到包括中置声道的第一音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33、第一部分侧发声单元34以及第一部分侧发声单元35播放中置声道的数据,通过第二部分侧发声单元31播放左声道的数据且通过第二部分侧发声单元32播放右声道的数据,和/或通过第二部分侧发声单元31播放左环绕声道的数据且通过第二部分侧发声单元32播放右环绕声道的数据,和/或通过第二部分侧发声单元31以及第二部分侧发声单元32播放低频声道的数据。可选的,若未处理的音源数据包括中置声道和其他声道,第二部分侧发声单元可以直接提取音源数据的对应声道的数据得到第二音频数据和第一音频数据。
若第一模式为响度增强模式,电子设备可以将5.1声道音源数据经过相关下混算法处理,下混得到仅包括左声道和右声道的音源数据。电子设备可以将处理后的音源数据作为第二音频数据和第一音频数据。示例性的,当电子设备包括图3A所示的发声单元时,电子设备可以通过第一部分侧发声单元33播放左声道的数据,通过第一部分侧发声单元34播放右声道的数据。通过第二部分侧发声单元31播放左声道的数据且通过第二部分侧发声单元32播放右声道的数据。其中,由于电子设备包括第一部分侧发声单元35,电子设备可以控制该第一部分侧发声单元35播放左声道与右声道的数据,或者,不使用该第一部分侧发声单元35播放音频数据。可选的,若未处理的音源数据包括左声道和右声道,第二部分侧发声单元可以直接提取音源数据的对应声道的数据得到第二音频数据和第一音频数据。
可选的,在任意音频模式下,电子设备可以将音源数据、5.1声道数据或5.1声道数据中除了第一音频数据的声道以外的声道的数据作为第二音频数据,或者,电子设备可以处理音源数据或5.1声道数据,得到包括一个或多个声道的第二音频数据。
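Steps S501 to S510 describe a decision flow driven by the channel count of the source: a mono source is duplicated to stereo, a source lacking the channels needed by the current mode is upmixed, and the resulting channels are then split between the first-part-side and second-part-side units. The sketch below mirrors that control flow in simplified form; the naive upmix, the channel names and the mode-to-channel mapping are illustrative assumptions rather than the actual multichannel algorithm.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Channels that the first-part-side units play in each sound effect mode,
# following the mode descriptions in the text (illustrative mapping).
FIRST_PART_CHANNELS = {
    "surround_enhance": ("left_surround", "right_surround"),
    "dialogue_enhance": ("center",),
    "bass_enhance": ("lfe",),
    "loudness_enhance": ("left", "right"),
}

def naive_upmix_to_5_1(left, right, sample_rate, lfe_cutoff_hz=150.0):
    # Very rough stand-in for the upmix algorithm referenced in S508.
    center = 0.5 * (left + right)
    sos = butter(4, lfe_cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return {
        "left": left, "right": right, "center": center,
        "left_surround": left - center, "right_surround": right - center,
        "lfe": sosfilt(sos, center),
    }

def prepare_playback(source: dict, sample_rate: int, mode: str):
    """source maps channel names to numpy buffers, e.g. {"mono": ...}."""
    # S502/S504: duplicate a mono source into a left and a right channel.
    if set(source) == {"mono"}:
        source = {"left": source["mono"].copy(), "right": source["mono"].copy()}

    # S506-S509: make sure the channels needed by the current mode exist;
    # here a stereo source is simply upmixed to a 5.1-style layout.
    needed = FIRST_PART_CHANNELS[mode]
    if any(name not in source for name in needed):
        source = naive_upmix_to_5_1(source["left"], source["right"], sample_rate)

    # S510: the first-part-side units get the mode's channels, and the
    # second-part-side units get the remaining channels (or left/right
    # again in the loudness-enhancement mode).
    first_part = {name: source[name] for name in needed}
    if mode == "loudness_enhance":
        second_part = {"left": source["left"], "right": source["right"]}
    else:
        second_part = {name: buf for name, buf in source.items() if name not in needed}
    return first_part, second_part
```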
在一些实施例中,电子设备可以将基于支持的所有音效模式,处理音源数据,得到所有音效模式的所有声道的数据。电子设备可以基于当前的音效模式,使用第一部分侧发声单元和第二部分侧发声单元播放对应的声道的数据。这样,当电子设备的音效模式发生改变时,电子设备不需要按照修改后的音效模式处理音源数据,电子设备可以直接播放改变后的音效模式对应的音频数据。
例如,若电子设备支持的音效模式包括环绕增强模式、对白增强模式、响度增强模式和低频增强模式。电子设备处理音源数据得到左声道的数据、右声道的数据、左环绕声道的数据、右环绕声道的数据、中置 声道的数据和低频声道的数据。电子设备处于环绕增强模式时,通过第一部分侧发声单元播放左环绕声道的数据、右环绕声道的数据。电子设备处于低频增强模式时,使用第一部分侧发声单元播放低频声道的数据。
需要说明的是,各个发声单元的音频数据均至少包括音源数据的至少部分内容。其中,音源数据的至少部分内容包括音源数据的一部分内容与音源数据的全部内容这两种情形。其中,音源数据的一部分内容可以为音源数据中部分声道的数据和/或部分频段的数据。可以理解的是,这里的部分不仅包括部分,也可以包括全部。
需要说明的是,不限于图5所示的实施例,电子设备也可以直接从音源数据提取得到音效模式下,各个发声单元所需的声道的数据。例如,在响度增强模式下,电子设备可以将音源数据经过指定算法处理,得到左声道的数据以及右声道的数据等,本申请实施例对此不作限定。
S403.电子设备的各个发声单元同时播放各自的音频数据。
电子设备得到各个发声单元的音频数据后,可以使用所有的发声单元播放各自的音频数据。
具体的,电子设备可以采用指定算法(例如,图5所示的多声道算法)处理音源数据,并将处理音源数据得到的各个发声单元的音频数据发送至音频驱动,音频驱动可以将数字信号模式的音频数据转换为模拟信号模式的音频数据,再将模拟信号模式的音频数据发送至各个发声单元的音频功放芯片,各个发声单元的音频功放芯片可以放大模拟信号模式的音频数据,并通过各个发声单元播放该放大后的模拟信号模式的音频数据。在一些示例中,音频驱动可以通过集成电路内置音频总线(inter-IC sound,I2S)将音频数据发送给音频功放芯片。
需要说明的是,一个发声单元可以由一个或多个扬声器构成。一个发声单元可以由一个音频驱动控制,或者,多个发声单元可以由一个音频驱动控制,本申请实施例对此不作限定。
需要说明的是,在电子设备播放指定音频的过程中,电子设备可以接收到用户选中第二音效模式的输入,响应于该输入,基于第二音效模式处理指定音频的音频数据,得到电子设备的各个发声单元播放的音频数据,并控制该多个发声单元同时播放各自的音频数据,实现指定音频的播放操作。
这样,电子设备采用多声道算法处理得到不同发声单元的音频数据,结合不同位置的发声单元(例如,第一部分侧发声单元、第二部分侧发声单元)联合发声组合方式,通过音频功放芯片放大后独立传输到各个发声单元,可显著提升电子设备播放音频的声场包围感和沉浸感。
在一些实施例中,电子设备得到第一音频数据后,还可以针对第一音频数据进行音频处理操作,再播放该处理后的第一音频数据。例如,音频处理操作可以为调整第一音频数据的响度。
在一些示例中,电子设备可以基于第一音频数据的振幅,识别第一音频数据中的小信号音频,提高小信号音频的响度。其中,小信号音频为第一音频数据中,响度处于-35dB及以下范围内的音频信号。这样,由于小信号音频的响度较小,人耳感知不明显,提高小信号音频的响度,有利于用户更加清晰地收听小信号音频,加强用户对音频细节的感知。例如,若第一音频数据为游戏应用的音频数据,小信号音频可以为游戏角色引发游戏场景中环境变化的声音(例如,游戏角色经过草丛发出的窸窣声、游戏角色的脚步声、汽车驶过的声音等等)。电子设备可以提高小信号音频的音量,加强游戏的沉浸感,提高用户游戏体验。再例如,若第一音频数据为视频应用提供的音频数据,该小信号音频可以为视频中的环境音(例如,虫鸣、鸟鸣、风声等等)。在本申请的优选方案中,电子设备可以在响度增强模式下,电子设备执行该针对第一音频数据的音频处理操作。需要说明的是,第一音频数据中小信号音频以外的音频数据不变。
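A minimal sketch of the small-signal boost described above, assuming frame-wise RMS loudness measured in dB relative to full scale and a fixed gain applied to frames at or below the -35 dB threshold; the frame length and gain value are assumptions chosen for illustration.

```python
import numpy as np

def boost_small_signals(samples: np.ndarray, threshold_db: float = -35.0,
                        gain_db: float = 6.0, frame_len: int = 1024) -> np.ndarray:
    # Raise the loudness of quiet frames (at or below the threshold) while
    # leaving louder content untouched, as described in the text.
    out = samples.copy()
    gain = 10.0 ** (gain_db / 20.0)
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
        level_db = 20.0 * np.log10(rms)
        if level_db <= threshold_db:
            out[start:start + frame_len] = np.clip(frame * gain, -1.0, 1.0)
    return out
```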
在一些实施例中,电子设备可以在播放视频时,通过视频识别算法,识别视频画面中多个对象(例如,人物、动物或物品等)的位置,并且可以从视频的音源数据中提取该位置的对象的声音数据。电子设备可以使用视频画面中距离该对象最近的一个或多个发声单元播放该对象的声音。这样,可以使用不同位置的发声单元播放视频画面中不同对象的声音,增强视频沉浸感。
在一些示例中,电子设备可以基于对象在视频画面中的位置与对象的声音数据,得到包括在视频画面中离显示屏较远的一个或多个对象的声音数据的第一音频数据,得到包括在视频画面中离显示屏较近的一个或多个对象的人声数据(即中置声道的数据)的第二音频数据。这样,电子设备通过第一部分侧发声单元播放离显示屏更远的对象的人声,通过第二部分侧发声单元播放离显示屏更近的对象的人声,使得用户可以感受到不同位置的人声,提高用户看视频的沉浸感。同理,电子设备可以基于对象在视频画面中的位置与对象的声音数据,得到包括处于视频画面上方的一个或多个对象的声音数据的第一音频数据,得到包括处于视频画面中下方的一个或多个对象的声音数据的第二音频数据。这样,电子设备通过第一部分侧发 声单元播放视频上方的对象的声音,通过第二部分侧发声单元播放离视频下方的对象的声音,使得用户可以感受到不同位置的上下方位的差距,提高用户看视频的沉浸感。
需要说明的是,第二音频数据包括至少一个对象的声音数据,第一音频数据包括除了第二音频数据的声音数据对应的对象以外的所有对象的声音数据。以视频画面包括3个对象为例,电子设备可以将离显示屏最近的1个对象的声音数据放入第二音频数据,将另外2个对象的声音数据放入第一音频数据。或者,电子设备可以将离显示屏最近的2个对象的声音数据放入第二音频数据,将另外1个对象的声音数据放入第一音频数据,本申请实施例对此不作限定。
在一些示例中,音源数据包括多个中置声道,该多个中置声道与视频画面中的对象一一对应。其中,一个中置声道的数据为视频画面中一个对象的声音数据。电子设备可以识别视频画面中各个对象的位置,得到包括在视频画面中离显示屏较远的一个或多个对象的声音数据(即中置声道的数据)的第一音频数据,得到包括在视频画面中离显示屏较近的一个或多个对象的声音数据(即中置声道的数据)的第二音频数据。
在一些示例中,电子设备可以在播放视频的音源数据时,判断播放的视频与音乐场景是否相关。其中,音乐场景可以包括但不限于演唱会场景、音乐短片(music video,MV)场景、歌唱比赛场景、演奏场景等等。电子设备可以在判断出视频与音乐场景相关时,将音效模式设置为低频增强模式或响度增强模式。电子设备可以在判断出视频与音乐场景无关时,将智能增强模式设置为对白增强模式或环绕增强模式。
在一些示例中,电子设备可以基于视频的名称判断该视频与音乐场景是否相关。例如,当视频的名称包括但不限于包括“唱”、“音乐”、“演奏”、“歌”等与音乐相关的字词时,判定出该视频与音乐场景相关。在另一些示例中,电子设备可以通过图像识别算法,识别是视频画面中是否有人物发生唱歌、演奏乐器等动作,电子设备可以在识别出视频画面中的人物在演奏、唱歌时,判定出视频与音乐场景相关。
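The title-keyword check described above can be sketched as a simple lookup; the keyword list, mode identifiers and fallback choice below are assumptions for illustration.

```python
# Hypothetical keyword check: decide whether a video is music-related and pick
# a sound effect mode accordingly (identifiers and keyword list are illustrative).
MUSIC_KEYWORDS = ("唱", "音乐", "演奏", "歌", "concert", "music", "song")

def pick_mode_for_video(video_title: str) -> str:
    if any(keyword in video_title for keyword in MUSIC_KEYWORDS):
        # Music-related video: favour bass (or loudness) enhancement.
        return "bass_enhance"
    # Otherwise favour dialogue (or surround) enhancement.
    return "dialogue_enhance"
```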
需要说明的是,音源数据可以为视频文件中的音频数据,音源数据也可以为视频文件对应的音频文件中的音频数据。
还需要说明的是,音源数据可以存储在电子设备的存储器中,或者,音源数据可以为电子设备从其他电子设备(例如,服务器等)处获取的。
在一些实施例中,电子设备的第一部分的上方包括一个或多个顶部发声单元。电子设备可以通过该一个或多个顶部发声单元播放天空音。其中,天空音为音频中指定天空对象发出的声音。例如,指定天空对象可以为飞机、飞鸟、雷电等对象。这样,可以让用户以听觉感受音频中的指定天空对象的高度信息,例如,飞机飞过天空的声音、打雷的声音等等,增加用户收听音频的沉浸感。
在一些实施例中,电子设备的第一部分的下方包括一个或多个底部发声单元。电子设备可以通过该一个或多个底部发声单元播放地面音。其中,地面音为音频中指定地面对象发出的声音。例如,指定地面对象可以为昆虫、灌木丛、和地面接触的对象等对象。这样,可以让用户以听觉感受音频中的指定地面对象的高度信息,例如,虫鸣、脚步声、雨声等,增加用户收听音频的沉浸感。
在一种可能的实现方式中,电子设备显示有一个或多个音效模式选项,一个或多个音效模式选项与电子设备的一个或多个音效模式一一对应。其中,一个或多个音效模式选项包括第一模式选项,一个或多个音效模式包括第一模式。该第一模式选项与第一模式相对应。电子设备可以接收用户针对第一模式选项的输入,响应于该输入,将电子设备的音效模式设置为第一模式。这样,电子设备可以接收用户的输入,选择不同的音效模式,并且在播放音频时,基于电子设备的音效模式处理该音频的音频数据。
示例性的,如图6A所示,电子设备显示桌面601。桌面601可以包括但不限于一个或多个应用的图标(例如,音乐图标602)。该一个或多个应用的图标的下方还显示有任务栏。任务栏中可以包括一个或多个功能控件,功能控件可以用于触发电子设备显示对应的功能窗口。该一个或多个功能控件包括功能控件603。
电子设备可以接收到用户针对功能控件603的输入,响应于该输入,显示如图6B所示的功能窗口611。
如图6B所示,功能窗口611可以包括但不限于音效模式图标612、音量调节条等。其中,音效模式图标612可以用于触发电子设备显示电子设备支持的音效模式对应的音效模式选项。音量调节条可以用于调节电子设备播放音频的音量。
电子设备可以接收到用户针对音效模式图标612的输入,响应于该输入,显示如图6C所示的音效模式选择窗口621。
如图6C所示,该音效模式选择窗口621包括一个或多个音效模式选项。该一个或多个音效模式选项可以包括但不限于音效模式选项622、音效模式选项623、音效模式选项624和音效模式选项625。
其中,音效模式选项622可以用于触发电子设备将音效模式设置为响度增强模式。音效模式选项623可以用于触发电子设备将音效模式设置为环绕增强模式。音效模式选项624可以用于触发电子设备将音效模式设置为对白增强模式。音效模式选项625可以用于触发电子设备将音效模式设置为节奏增强模式。
其中,该一个或多个音效模式选项可以包括对应的音效模式的名称。其中,各个音效模式的描述可以参见图4所示实施例,在此不再赘述。在此,音效模式选项622处于选中状态,电子设备的音效模式为响度增强模式。
可选的,电子设备可以接收到用户针对功能控件603的输入,显示如图6C所示的音效模式选择窗口621。
在此,以电子设备包括位于显示屏左侧的第一部分侧发声单元33、位于显示屏右侧的第一部分侧发声单元34,位于键盘左侧的第二部分侧发声单元31以及位于键盘左侧的第二部分侧发声单元32为例,说明电子设备播放歌曲的流程。
示例性的,电子设备可以接收到针对音乐图标602的输入,显示音乐播放界面630。其中,音乐播放界面630可以包括但不限于歌曲名称,播放控件等。其中,播放控件可以用于触发电子设备播放歌曲名称指示的歌曲。
电子设备接收到针对播放控件的输入后,响应于该输入,基于响度增强模式处理音源数据(即,歌曲的音频数据),得到各个发声单元的音频数据,其中,电子设备基于音效模式处理音源数据的描述可以参见图4所示实施例,在此不再赘述。
电子设备得到各个发声单元的音频数据后,可以使用各个发声单元同时开始播放各自的音频数据,如图6D所示。电子设备通过第一部分侧发声单元33与第二部分侧发声单元31播放左声道的音频数据,第一部分侧发声单元B与第二部分侧发声单元B播放右声道的音频数据。可以理解的是,电子设备正在播放歌曲,电子设备取消显示播放控件,并显示暂停控件,暂停控件可以用于触发电子设备和电子设备停止播放音频数据。
这样,电子设备可以给用户提供多种音效模式,实现不同的音频播放效果,提升用户体验。
在一些实施例中,若电子设备仅包括一个第一部分侧发声单元,电子设备支持的音效模式包括但不限于对白增强模式和/或低频增强模式。电子设备可以只显示对白增强模式和/或低频增强模式对应的音效模式选项。
在另一些实施例中,若电子设备包括至少两个第一部分侧发声单元,至少两个第一部分侧发声单元包括第一部分侧发声单元33与第一部分侧发声单元34,该两个第一部分侧发声单元可以分别位于电子设备的显示屏的左右两侧。电子设备支持的音效模式可以包括但不限于响度增强模式、低频增强模式、环绕增强模式和/或对白增强模式。电子设备可以显示电子设备支持的音效模式对应的音效模式选项。
在一些示例中,电子设备的音效模式为环绕增强模式时,电子设备可以通过位于显示屏左侧的第一部分侧发声单元播放由指定音频的音频数据得到的左声道的数据,通过位于显示屏右侧的第一部分侧发声单元播放由指定音频的音频数据得到的右声道的数据。
在一种可能的实现方式中,电子设备在显示一个或多个音效模式选项时,还可以显示滑动条,该滑动条可以用于控制电子设备处理音源数据得到第一音频数据时,基于滑动条的数值调整第一音频数据。这样,电子设备可以接收用户对滑动条的调整,设置音效模式的实现效果,选择适合的实现效果的音效模式。
例如,在响度增强模式中,可以通过改变滑动条的数值,改变多声道处理算法中的响度因子的值,以改变电子设备的第一音频数据的振幅,以调整电子设备播放第一音频数据的音量。响度增强模式的滑动条的数值可以成为响度因子。其中,滑动条的数值(响度因子的值)越大,电子设备播放第一音频数据的音量越高,滑动条的数值(响度因子的值)越小,电子设备播放第一音频数据的音量越低。
在环绕模式中,可以通过改变滑动条的数值,改变多声道处理算法中的环绕因子的值,以调整第一部分侧发声单元的环绕音效的播放效果。其中,环绕因子的值越大,环绕声道的数据模拟的虚拟音源的位置距离用户越远,环绕效果越明显。即,随着环绕因子的增大,用户收听电子设备播放环绕声道的数据时,感知到自身与音源的距离也随之增大。其中,滑动条的数值越大,环绕因子的值越大,环绕效果越明显。 滑动条的数值越小,环绕因子的值越小,环绕效果越不明显。
在对白增强模式中,可以通过改变滑动条的数值,改变多声道处理算法中的人声因子的值,以改变电子设备的第一音频数据的振幅,调整电子设备播放第一音频数据的音量。其中,滑动条的数值(人声因子的值)越大,电子设备播放第一音频数据的音量越高,人声越突出。滑动条的数值(人声因子的值)越小,电子设备播放第一音频数据的音量越低,人声越不明显。
在低频增强模式中,可以通过改变滑动条的数值,改变多声道处理算法中的低频因子的值,以改变电子设备的第一音频数据的振幅,以调整电子设备播放第一音频数据的音量。其中,滑动条的数值(低频因子的值)越大,电子设备播放第一音频数据的音量越高,低频越明显。滑动条的数值(低频因子的值)越小,电子设备播放第一音频数据的音量越低,低频越不明显。
这样,可以根据百分比数值调整电子设备播放的第一音频数据的响度,环绕声范围,震感强度等。
需要说明的是,滑动条的值始终大于零,电子设备按照音效模式及对应的滑动条的值,处理音源数据。
在一些示例中,电子设备的滑动条被划分为十等分,电子设备可以将各个音效模式的滑动条的初始值设置50%。如果滑动条的值被调整为大于50%,则对应音效模式下处理的声道方式根据影响因子增加,如果滑动条的值被调整为小于50%,则对应音效模式下处理的声道方式根据影响因子减小。
例如,在环绕声场模式中,影响因子可以理解为环绕声道处理算法中的环绕因子。当滑动条的值为50%时,当滑动条的值被调整到100%时,环绕声道处理算法中的环绕因子的值可以按照预先设置的对应关系增大,加强环绕声道的扩展效果,该扩展效果强于滑动条的值为50%的音效模式的效果。当滑动条的值被调整到10%时,环绕声道处理算法中的环绕因子的值可以按照预先设置的对应关系减小,减弱环绕声道的扩展效果,该扩展效果弱于滑动条的值为50%的音效模式的效果。
这样,当滑动条的值为当前音效模式的最小值时,即10%。相对于未开启协同发声模式时,即使环绕音效的环绕效果弱于滑动条的值为50%的音效模式的效果,电子设备和电子设备共同播放音频数据,比单个电子设备单独播放音频数据带给用户的体验更好。例如,当滑动条的值为预设值1,电子设备的第一部分侧发声单元播放第一音频数据时模拟音源与用户的距离为预设距离1,滑动条的值为预设值2,电子设备的第一部分侧发声单元播放第一音频数据时模拟音源与用户的距离为预设距离2,预设值1小于预设值2且预设距离1小于预设距离2。
再例如,在响度增强模式中,当滑动条的值为50%时,第一音频数据的响度为5dB。当滑动条的值被调整为小于50%时,例如被调整为20%,第一音频数据的响度被调整为2dB,在此,影响因子可以看作响度因子,当滑动条的值为20%,响度因子的值为0.4。同理,对白增强模式、低频增强模式的描述可以参见响度增强模式的描述,在此不再赘述。
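Using the example values above (a 50% slider giving 5 dB, a 20% slider giving 2 dB, and a loudness factor of 0.4 at 20%), a linear mapping from the slider position to the loudness parameters can be sketched as follows; the linear relation is an assumption that merely reproduces the quoted example values, not the actual multichannel algorithm.

```python
def loudness_params_from_slider(slider_percent: float):
    # Linear mapping consistent with the example values in the text:
    # 50% -> 5 dB, 20% -> 2 dB, and a loudness factor of 0.4 at 20%.
    fraction = slider_percent / 100.0
    gain_db = 10.0 * fraction          # 0.5 -> 5 dB, 0.2 -> 2 dB
    loudness_factor = fraction / 0.5   # 0.2 -> 0.4, 0.5 -> 1.0
    return gain_db, loudness_factor
```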
示例性的,电子设备可以在接收到用户针对图6A所示的功能控件603或者图6B所示的音效模式图标612的输入,显示如图6E所示的音效模式选择窗口631。
如图6E所示,该音效模式选择窗口631包括一个或多个音效模式选项。该一个或多个音效模式选项可以包括但不限于音效模式选项632、音效模式选项633、音效模式选项634和音效模式选项635。
其中,音效模式选项632可以用于触发电子设备将音效模式设置为响度增强模式。音效模式选项633可以用于触发电子设备将音效模式设置为环绕增强模式。音效模式选项634可以用于触发电子设备将音效模式设置为对白增强模式。音效模式选项635可以用于触发电子设备将音效模式设置为节奏增强模式。其中,该一个或多个音效模式选项可以包括对应的音效模式的名称。
该一个或多个音效模式选项都包括对应的滑动条,如图6E所示,音效模式选项632包括滑动条641,滑动条641包括最低数值,最高数值和滑块642。其中,最低数值可以用于表示响度增强模式的滑动条641的最低数值,在此为1,即10%。最高数值可以用于表示响度增强模式的滑动条641的最高数值,在此为10,即100%。滑块642可以用于改变滑动条641的数值。滑块642的附近还可以显示滑块642处于滑动条641的某个位置时,滑动条641的数值,在此为5,即50%。其中,各个音效模式的滑动条的描述可以参见上述实施例,在此不再赘述。
电子设备可以在接收到向左拖动滑块642的输入后,降低滑动条641的值。例如,该滑动条641的值被降低为20%,电子设备可以基于该滑动条的数值处理音源数据,得到第一音频数据,并播放第一音频数据。
可以理解的是,滑动条的值为50%的场景中环绕音效的效果比滑动条的值为20%的场景中环绕音效的效果强。例如,图6D所示场景中,模拟的虚拟音源和用户的距离可以为50厘米,使得用户感觉音源的声音的宽度在电子设备中轴线向左右延伸各0.5m的距离。在滑动条的值由50%减小到20%后,第一部分侧发声单元播放的音频数据模拟的虚拟音源和用户的距离可以为20厘米,使得用户感觉音源的声音的宽度在电子设备中轴线向左右延伸各0.2m的距离。虚拟音源距离用户越远,环绕效果越明显。可以理解的是,上述滑动条的值和距离的数值的对应关系仅为示例,具体实现中该距离可以为其他数值,本申请实施例对此不作限定。
在一些应用场景中,电子设备的第一部分侧发声单元部署在B壳。当B壳与C壳之间的角度处于不同的角度范围时,可以给用户带来不同的听觉体验。
示例性的,如图7所示,当电子设备的第一部分侧发声单元部署在B壳时,第一部分侧发声单元的声波方向由A壳指向B壳。
当电子设备的B壳与C壳之间的角度α处于图8A所示的角度范围81时,由于第一部分侧发声单元发出的声波朝向用户的耳朵的方向,声音更加清晰。
当电子设备的B壳与C壳之间的角度α处于图8B所示的角度范围82时,由于B壳与C壳之间形成局部腔体可以来回反射第一部分侧发声单元发出的声波,增强声音的混响感,声音更饱满。
当电子设备的B壳与C壳之间的角度α处于图8C所示的角度范围83时,B壳距离桌面更近,桌面反射的声波更多,使得声音能量更强,声音响度更大。
当电子设备的B壳与C壳之间的角度α处于图8D所示的角度范围84时,A壳与D壳重合,电子设备可以通过B壳播放画面和声音。这样,便于用户手持电子设备播放音频和视频。当电子设备播放视频时,电子设备可以看作手持影院,声音画面更加同步,沉浸感更强。电子设备还可以放置于电子设备支架上,便于用户调整电子设备的位置。
其中,角度范围81的最大值小于或等于角度范围83的最小值。角度范围81的最小值大于或等于角度范围82的最大值,角度范围84的最小值大于或等于角度范围83的最大值。例如,角度范围82为0度至45度,角度范围81为45度至135度,角度范围83为135度至180度,角度范围84为180度至360度。
其中,位于B壳的发声单元的描述可以参见上述图3A所示的第一部分侧发声单元的描述,并且电子设备控制B壳的发声单元播放音频数据的描述可以参见图4与图5所示实施例,在此不再赘述。
在一种可能的实现方式中,电子设备的第一部分侧发声单元部署在B壳。电子设备可以检测电子设备的B壳与C壳之间的角度,调整第一部分侧发声单元和/或第二部分侧发声单元的音频数据的响度。这样,可以在B壳与C壳角度不同时,通过调整发声单元的音频数据的响度,给用户提供更好的播放效果。
示例性的,如图7所示,当电子设备的第一部分侧发声单元部署在B壳时,第一部分侧发声单元的声波方向由A壳指向B壳。当电子设备的B壳与C壳之间的角度α处于图8A所示的角度范围81时,电子设备可以直接使用发声单元播放各自的音频数据,其中,电子设备的第一部分侧发声单元播放音频数据的音量为指定音量数值80。在一些示例中,由于B壳发出的声音直达用户,声波方向朝向用户,声音尖锐度提高,电子设备可以通过降低音频数据中高频成分的能量,减弱声音的尖锐度。
当电子设备的B壳与C壳之间的角度α处于图8B所示的角度范围82时,电子设备可以基于角度α,调整各个发声单元的音频数据的响度,例如,减小第一部分侧发声单元的音量,或者,减小第一部分侧发声单元的音量并且增大第二部分侧发声单元的音量。在此,电子设备的第一部分侧发声单元播放音频数据的音量小于指定音量数值80。这样,可以减小屏幕部分与键盘部分之间形成局部腔体来回反射第一部分侧发声单元发出的声波带来的混响效果,使声音更清晰。
当电子设备的B壳与C壳之间的角度α处于图8C所示的角度范围83时,电子设备可以基于角度α,调整各个发声单元的音频数据的响度,例如,增大第一部分侧发声单元的音量,或者,增大第一部分侧发声单元的音量并且减小第二部分侧发声单元的音量。在此,电子设备的第一部分侧发声单元播放音频数据的音量大于指定音量数值80。这样,由于第一部分侧发声单元与用户之间的距离增大,电子设备可以提高第一部分侧发声单元的音量,使得用户能听清第一部分侧发声单元的声音。
当电子设备的B壳与C壳之间的角度α处于图8D所示的角度范围84时,电子设备可以基于角度α,调整各个发声单元的音频数据的响度,例如,增大第二部分侧发声单元的音量,或者,减小第一部分侧发声单元的音量并且增大第二部分侧发声单元的音量。在此,电子设备的第一部分侧发声单元播放音频数据的音量小于或等于指定音量数值80。这样,由于第二部分侧发声单元位于B壳的背面,可以增大第二部分侧发声单元的音量,使得用户能够听到更加清晰的第二部分侧发声单元的声音。
其中,角度范围81的最小值大于或等于角度范围82的最大值。角度范围81的最大值小于或等于角度范围83的最小值。角度范围83的最大值小于或等于角度范围84的最小值。例如,角度范围82为0度至45度,角度范围81为45度至135度,角度范围83为135度至180度,角度范围84为180度至360度。
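下面用一段Python伪代码示意上述基于角度范围调整各发声单元音量的逻辑(角度范围取自上述示例,具体的音量增减量为假设值,并非本申请限定的实现):

```python
def adjust_volume_by_angle(alpha: float, base_volume: float = 80.0) -> tuple:
    """根据B壳与C壳之间的角度α(单位: 度), 返回(第一部分侧音量, 第二部分侧音量)。"""
    if 0 <= alpha < 45:          # 角度范围82: 局部腔体混响较强, 减小第一部分侧音量
        return base_volume - 10, base_volume
    elif 45 <= alpha < 135:      # 角度范围81: 声波直达用户, 直接使用指定音量
        return base_volume, base_volume
    elif 135 <= alpha < 180:     # 角度范围83: 第一部分侧发声单元距用户更远, 增大其音量
        return base_volume + 10, base_volume
    else:                        # 角度范围84(180~360度): 减小第一部分侧音量, 增大第二部分侧音量
        return base_volume - 10, base_volume + 10
```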
在一些实施例中,当电子设备的第一部分侧发声单元部署在B壳时,电子设备可以仅提供环绕增强模式,或者,将电子设备的初始音效模式设置为环绕增强模式。其中,初始音效模式为电子设备开机后,电子设备的音效模式。这样,电子设备可以使用第二部分侧发声单元与第一部分侧发声单元同时播放指定音频,尽量让音频声像与画面处于同一高度,同时提升环绕效果。
在另一些实施例中,由于低频增强模式下第一部分侧发声单元的振动幅度较大,为了避免影响显示屏的显示效果,电子设备可以在播放视频的过程中,将低频增强模式设置为不可选中状态。当低频增强模式被设置为不可选中状态时,电子设备不会显示低频增强模式对应的低频增强模式选项,或者,电子设备显示不可被选中的低频增强模式选项。
在另一些实施例中,电子设备的第一部分侧发声单元部署在B壳,电子设备可以直接不显示低频增强模式对应的低频增强模式选项。
在一些应用场景中,电子设备的第一部分侧发声单元部署在A壳。当B壳与C壳之间的角度处于不同的角度范围时,可以给用户带来不同的听觉体验。
示例性的,如图9所示,当电子设备的第一部分侧发声单元部署在A壳时,第一部分侧发声单元的声波方向由B壳指向A壳。
当电子设备的B壳与C壳之间的角度α处于图10A所示的角度范围91时,由于第一部分侧发声单元发出的声波朝向背对用户的方向,声音更加开阔。
当电子设备的B壳与C壳之间的角度α处于图10B所示的角度范围92时,A壳距离用户更近,声音更清晰。
当电子设备的B壳与C壳之间的角度α处于图10C所示的角度范围93时,A壳距离桌面更近,桌面反射的声波更多,使得声音能量更强,声音响度更大。
当电子设备的B壳与C壳之间的角度α处于图10D所示的角度范围94时,A壳与D壳重合。这样,便于用户手持电子设备。当电子设备播放音频时,A壳振动发声带来更强的低频震感。
其中,角度范围91的最大值小于或等于角度范围93的最小值。角度范围91的最小值大于或等于角度范围92的最大值,角度范围94的最小值大于或等于角度范围93的最大值。例如,角度范围92为0度至45度,角度范围91为45度至135度,角度范围93为135度至190度,角度范围94为190度至360度。
其中,电子设备控制A壳的发声单元播放音频数据的描述可以参见图4与图5所示实施例,在此不再赘述。
在另一种可能的实现方式中,电子设备的第一部分侧发声单元部署在A壳。电子设备可以检测电子设备的B壳与C壳之间的角度,调整第一部分侧发声单元与第二部分侧发声单元的音频数据的响度。
示例性的,如图9所示,当电子设备的第一部分侧发声单元部署在A壳时,第一部分侧发声单元的声波方向由B壳指向A壳。
当电子设备的B壳与C壳之间的角度α处于图10A所示的角度范围91时,电子设备可以直接使用发声单元播放各自的音频数据,其中,电子设备的第一部分侧发声单元播放音频数据的音量为指定音量数值90。
当电子设备的B壳与C壳之间的角度α处于图10B所示的角度范围92时,电子设备可以基于角度α,调整各个发声单元的音频数据的响度,例如,增大第一部分侧发声单元的音量,或者,增大第一部分侧发声单元的音量并且减小第二部分侧发声单元的音量。在此,电子设备的第一部分侧发声单元播放音频数据的音量大于指定音量数值90。这样,由于第一部分侧发声单元的声波方向背离用户,增加第一部分侧发声单元的音量可以增强用户收听音频的清晰度。
当电子设备的B壳与C壳之间的角度α处于图10C所示的角度范围93时,电子设备可以基于角度α,调整各个发声单元的音频数据的响度,例如,减小第一部分侧发声单元的音量,或者,减小第一部分侧发声单元的音量并且增大第二部分侧发声单元的音量。在此,电子设备的第一部分侧发声单元播放音频数据的音量小于指定音量数值90。这样,可以减弱屏幕部分与桌面之间的桌面反射造成的高频能量增强,降低声音的尖锐度。再例如,增大第一部分侧发声单元的音量,或者,增大第一部分侧发声单元的音量并且减小第二部分侧发声单元的音量。在此,电子设备的第一部分侧发声单元播放音频数据的音量大于指定音量数值90。这样,可以利用屏幕部分与桌面之间的桌面反射,增加混响感,并提升距离用户更远的第一部分侧发声单元的音量,使得用户能同时收听第一部分侧发声单元与第二部分侧发声单元的声音。也就是说,可以避免出现由于距离用户更近的第二部分侧发声单元音量过大,导致用户听不清第一部分侧发声单元的声音的情形。
当电子设备的B壳与C壳之间的角度α处于图10D所示的角度范围94时,电子设备可以基于角度α,调整各个发声单元的音频数据的响度,例如,减小第一部分侧发声单元的音量,或者,减小第一部分侧发声单元的音量并且增大第二部分侧发声单元的音量。在此,电子设备的第一部分侧发声单元播放音频数据的音量小于指定音量数值90。这样,可以减轻A壳与D壳之间声音反射造成的混响效果,增强声音的清晰度。
可选的,由于A壳与D壳重合,为了减弱A壳震动导致电子设备震动,电子设备可以不通过第一部分侧发声单元播放低频声道的数据。
其中,角度范围91的最大值小于或等于角度范围93的最小值。角度范围91的最小值大于或等于角度范围92的最大值,角度范围94的最小值大于或等于角度范围93的最大值。例如,角度范围92为0度至45度,角度范围91为45度至135度,角度范围93为135度至190度,角度范围94为190度至360度。
在一些实施例中,当电子设备的第一部分侧发声单元部署在A壳时,电子设备可以仅提供低频增强模式,或者,将电子设备的初始音效模式设置为低频增强模式。其中,初始音效模式为电子设备开机后,电子设备的音效模式。这样,电子设备可以使用第一部分侧发声单元播放低频信号,增强低频氛围感。并且,第一部分侧发声单元负责播放低频信号,可以避免第二部分侧发声单元播放低频信号造成键盘杂音的问题。
在一种可能的实现方式中,电子设备包括多个第一部分侧发声单元,部分第一部分侧发声单元部署在电子设备的A壳,另一部分第一部分侧发声单元部署在电子设备的B壳。其中,电子设备在B壳与C壳之间的角度不同时,可以实现不同的播放效果,具体的,可以参见图7至图10D所示实施例的描述,在此不再赘述。同理,电子设备基于B壳与C壳之间的角度,调整位于A壳的第一部分侧发声单元的响度以及位于B壳的第一部分侧发声单元的响度的描述,也可以参见图7至图10D所示实施例的描述,在此不再赘述。
在一些实施例中,电子设备的音效模式为环绕增强模式时,电子设备可以使用位于B壳的第一部分侧发声单元播放左环绕声道与右环绕声道的数据,使用位于A壳的第一部分侧发声单元播放其他声道的数据,或者,控制位于A壳的第一部分侧发声单元不播放音频数据。
在一些实施例中,电子设备的音效模式为低频增强模式时,电子设备可以使用位于A壳的第一部分侧发声单元播放低频声道的数据,使用位于B壳的第一部分侧发声单元播放其他声道的数据,或者,控制位于B壳的第一部分侧发声单元不播放音频数据。
在一种可能的实现方式中,电子设备可以基于电子设备的B壳与C壳之间的角度,调整各个发声单元播放的声道。
示例性的,当电子设备为折叠屏设备时,电子设备的第一部分侧发声单元位于A壳或B壳时的描述可以参见图7至图10D所示实施例,在此不再赘述。
又示例性的,当电子设备为折叠屏设备,且电子设备的折叠方式为左右折叠时,若电子设备处于折叠状态,如图11A所示,电子设备包括发声单元1001、发声单元1002、发声单元1003和发声单元1004。其中,发声单元1003与发声单元1004位于第二部分,第二部分被第一部分遮挡,因此图11A中未示出发声单元1003与发声单元1004。发声单元1001和发声单元1002位于电子设备的第一部分。
其中,发声单元1002和发声单元1004位于电子设备的顶部,发声单元1001和发声单元1003位于电子设备的底部,位于电子设备顶部的发声单元可以用于播放天空音,位于电子设备底部的发声单元可以用于播放地面音。需要说明的是,电子设备的所有发声单元可以播放左声道和右声道的数据,和/或,左环绕声道和右环绕声道的数据,和/或,中置声道的数据,和/或,低频声道的数据。
在一些示例中,电子设备播放视频时,位于电子设备顶部的发声单元可以用于播放视频画面中距离顶部的发声单元更近的对象的声音,位于电子设备底部的发声单元可以用于播放视频画面中距离底部的发声单元更近的对象的声音。
当电子设备处于横屏状态(即,电子设备顶部的发声单元旋转至图11A所示的左侧或右侧)时,位于电子设备显示画面左侧的发声单元可以用于播放左声道的数据,并且位于电子设备显示画面右侧的发声单元可以用于播放右声道的数据,和/或位于电子设备显示画面左侧的发声单元可以用于播放左环绕声道的数据,并且位于电子设备显示画面右侧的发声单元可以用于播放右环绕声道的数据。
若电子设备处于半折叠状态,如图11B所示,电子设备可以通过第二部分侧发声单元播放视频画面中距离显示屏较远的对象的声音,通过第一部分侧发声单元播放视频画面中距离显示屏较近的对象的声音。在一些示例中,处于半折叠状态的电子设备的各个发声单元播放音频的描述可以参见图4和图5所示实施例,在此不再赘述。
若电子设备处于展开状态,如图11C所示,电子设备可以通过位于左侧的发声单元,例如图11C所示的发声单元1002与发声单元1001播放左声道的数据,电子设备可以通过位于右侧的发声单元,例如图11C所示的发声单元1003与发声单元1004播放右声道的数据,和/或,通过位于左侧的发声单元播放左环绕声道的数据,通过位于右侧的发声单元播放右环绕声道的数据。可选的,电子设备的发声单元还可以播放低频声道或中置声道的数据。需要说明的是,当电子设备的角度变化时,例如,电子设备的中心轴从图11C所示的上下方向旋转至左右方向,电子设备位于左侧的发声单元与位于右侧的发声单元与图11C所示的不同。
需要说明的是,图11A-图11C示出的折叠屏设备包括朝外翻折的折叠屏,当折叠屏设备包括朝内翻折的折叠屏时,电子设备的A壳或D壳也包括显示屏,电子设备处于展开状态、半折叠状态时各个发声单元播放音频数据的描述可以参见图11A-图11C所示实施例,在此不再赘述。当电子设备处于折叠状态时,可以将图11A中的B壳或C壳看做电子设备的A壳或D壳,再参考图11A中各个发声单元播放音频数据的描述。
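下面给出一段Python示意代码,说明左右折叠的折叠屏设备如何按折叠状态为各发声单元分配播放内容(发声单元标识沿用图11A-图11C的示例,分配策略为对上述描述的一种简化假设,并非本申请限定的实现):

```python
def assign_channels(fold_state: str) -> dict:
    """按折叠状态返回 发声单元标识 -> 播放内容 的映射。"""
    if fold_state == "folded":        # 折叠态: 顶部单元播放天空音, 底部单元播放地面音
        return {"1002": "sky", "1004": "sky", "1001": "ground", "1003": "ground"}
    elif fold_state == "half_folded": # 半折叠态: 第一部分播放距显示屏较近对象的声音, 第二部分播放较远对象的声音
        return {"1001": "near", "1002": "near", "1003": "far", "1004": "far"}
    else:                             # 展开态: 左侧单元播放左声道, 右侧单元播放右声道
        return {"1001": "left", "1002": "left", "1003": "right", "1004": "right"}
```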
再示例性的,当电子设备为折叠屏设备,且电子设备的折叠方式为上下折叠时,若电子设备处于展开状态,如图12A所示,电子设备包括发声单元1051、发声单元1052。其中,发声单元1051位于第一部分,发声单元1052位于电子设备的第二部分。
其中,发声单元1051位于电子设备的顶部,发声单元1052位于电子设备的底部,位于电子设备顶部的发声单元可以用于播放天空音,位于电子设备底部的发声单元可以用于播放地面音。需要说明的是,电子设备的所有发声单元可以播放左声道和右声道的数据,和/或,左环绕声道和右环绕声道的数据,和/或,中置声道的数据,和/或,低频声道的数据。
在一些示例中,电子设备播放视频时,位于电子设备顶部的发声单元可以用于播放视频画面中距离顶部的发声单元更近的对象的声音,位于电子设备底部的发声单元可以用于播放视频画面中距离底部的发声单元更近的对象的声音。
当电子设备处于横屏状态(即,电子设备顶部的发声单元旋转至图12A所示的左侧或右侧)时,位于电子设备显示画面左侧的发声单元可以用于播放左声道的数据,并且位于电子设备显示画面右侧的发声单元可以用于播放右声道的数据,和/或位于电子设备显示画面左侧的发声单元可以用于播放左环绕声道的数据,并且位于电子设备显示画面右侧的发声单元可以用于播放右环绕声道的数据。
若电子设备处于半折叠状态,如图12B所示,电子设备可以通过第二部分侧发声单元播放视频画面中距离显示屏较远的对象的声音,通过第一部分侧发声单元播放视频画面中距离显示屏较近的对象的声音。在一些示例中,处于半折叠状态的电子设备的各个发声单元播放音频的描述可以参见图4和图5所示实施例,在此不再赘述。
若电子设备处于折叠状态,如图12C所示,电子设备可以通过发声单元播放音源数据。这样,折叠屏设备在不同展开状态下播放音频可以实现不同的播放效果。
在一种可能的实现方式中,电子设备的第一部分侧发声单元部署在A壳与B壳的连接处(可以理解为部署在屏幕部分的侧边)。这样,电子设备可以通过该些第一部分侧发声单元,提升不同方位的环绕感。
示例性的,如图13所示,电子设备包括一个或多个第一部分侧发声单元,该一个或多个第一部分侧发声单元包括第一部分侧发声单元1101、第一部分侧发声单元1102以及第一部分侧发声单元1103。其中,第一部分侧发声单元1101位于屏幕部分的左侧边,第一部分侧发声单元1101的声波的波束方向由显示屏中心指向显示屏的左侧。第一部分侧发声单元1102位于屏幕部分的右侧边,第一部分侧发声单元1102的声波的波束方向由显示屏中心指向显示屏的右侧。第一部分侧发声单元1103位于屏幕部分的上侧边,第一部分侧发声单元1103的声波的波束方向由显示屏中心指向显示屏的上侧。在一些示例中,第一部分侧发声单元1103可以用于播放天空音。具体的,可以参见上述实施例,在此不再赘述。
在一种可能的实现方式中,电子设备可以在播放视频时,识别视频画面中的指定对象,并且识别视频的音源数据中该指定对象的音频数据。电子设备可以基于指定对象在显示屏上的位置,使用距离指定对象最近的一个或多个发声单元播放该指定对象的音频数据。这样,电子设备可以按照指定对象在视频画面中的运动轨迹,播放指定对象的音频数据,实现声音随着指定对象变动的播放效果,给用户带来由指定对象发出声音的视听体验,增强视频沉浸感。
其中,电子设备可以通过指定对象的声音特征识别出音源数据中该指定对象的所有声音。电子设备还可以实时识别视频画面的指定对象,以及指定对象的音频数据。
在一些示例中,指定对象为交通工具,例如,飞机、火车、汽车、轮船等。电子设备可以在指定对象从视频画面的左侧移动至视频画面的右侧时,依次通过第一部分侧发声单元1101、第一部分侧发声单元1103以及第一部分侧发声单元1102播放指定对象的音频数据。例如,当视频中的飞机起飞时,飞机从视频画面的左下角移动至视频画面的右上角,电子设备可以在飞机位于视频画面左下角时,通过第一部分侧发声单元1101播放飞机的声音,在飞机位于视频画面中部时,通过第一部分侧发声单元1103播放飞机的声音,在飞机位于视频画面右上角时,通过第一部分侧发声单元1102与第一部分侧发声单元1103共同播放飞机的声音。这样,可以从声音方面体现飞机起飞的过程。
需要说明的是,不限于交通工具,指定对象也可以为视频中的某个角色、某个物体等等,本申请实施例对此不作限定。
可选的,电子设备的第一部分侧发声单元与第二部分侧发声单元可以共同实现指定对象的音频数据的播放操作。例如,当指定对象在视频画面中的运动轨迹由下至上时,当指定对象在最下方时,电子设备可以使用第二部分侧发声单元播放指定对象的音频数据。当指定对象向上移动后,电子设备可以使用第一部分侧发声单元播放指定对象的音频数据。再例如,当指定对象在视频画面中的运动轨迹由远至近时,当指定对象在最远处时,电子设备可以使用第一部分侧发声单元播放指定对象的音频数据。当指定对象向近处移动后,电子设备可以使用第二部分侧发声单元播放指定对象的音频数据。在一些示例中,指定物体由远及近移动在视频画面上可以体现为指定对象的体积逐渐变大。这样,当指定对象向上、向下移动时,由于第一部分侧发声单元位于第二部分侧发声单元的上方,通过第一部分侧发声单元与第二部分侧发声单元的相对位置可以更好地体现出指定物体在上下方向的移动。当指定对象向远处、向近处移动时,由于第一部分侧发声单元相比第二部分侧发声单元距离用户更远,通过第一部分侧发声单元与第二部分侧发声单元的相对位置可以更好地体现出指定物体的远近关系。
需要说明的是,当发声单元在播放指定对象的音频数据时,可以暂停播放电子设备基于第一模式与音源数据得到的该发声单元的音频数据,或者,发声单元可以同时播放指定对象的音频数据以及电子设备基于第一模式与音源数据得到的该发声单元的音频数据。
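下面给出一段Python示意代码,说明如何根据指定对象在画面中的位置选择距其最近的发声单元(函数名与坐标表示均为假设,仅用于说明选取逻辑):

```python
def pick_nearest_units(obj_pos, unit_positions, k=1):
    """obj_pos: 指定对象在画面中的坐标(x, y);
    unit_positions: 发声单元标识 -> 该单元在画面坐标系中的位置(x, y);
    返回距离指定对象最近的k个发声单元的标识。"""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    ranked = sorted(unit_positions.items(), key=lambda kv: dist(obj_pos, kv[1]))
    return [uid for uid, _ in ranked[:k]]

# 例如, 飞机从画面左下角移动到右上角时, 每帧更新obj_pos,
# 播放飞机声音的发声单元即可依次从1101切换到1103, 再到1102与1103。
```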
在一种可能的实现方式中,电子设备包括一个或多个发声单元,该一个或多个发声单元包括一个或多个第一部分侧发声单元。该一个或多个第一部分侧发声单元中,一部分第一部分侧发声单元位于A壳,一部分第一部分侧发声单元位于B壳,一部分第一部分侧发声单元位于A壳与B壳的连接处。其中,位于A壳的第一部分侧发声单元可以用于播放低频声道的数据,位于B壳的第一部分侧发声单元可以用于播放左环绕声道与右环绕声道的数据。位于A壳与B壳的连接处的第一部分侧发声单元可以分为顶部发声单元和底部发声单元。其中,顶部发声单元在第一部分的位置高于底部发声单元在第一部分的位置,顶部发声单元可以用于播放指定天空对象的音频数据,底部发声单元可以用于播放指定地面对象的音频数据。
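上述按发声单元所在位置分配声道的方式,可以示意为一份预先配置(配置项与键名均为假设,仅用于说明):

```python
SPEAKER_LAYOUT = {
    "A壳":        ["lfe"],                        # 低频声道的数据
    "B壳":        ["surround_l", "surround_r"],   # 左环绕声道与右环绕声道的数据
    "连接处-顶部": ["sky_object"],                  # 指定天空对象的音频数据
    "连接处-底部": ["ground_object"],               # 指定地面对象的音频数据
}
```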
接下来介绍本申请实施例提供的电子设备的硬件结构图。
电子设备可以是手机、平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、混合现实(mixed reality,MR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备、智能家居设备和/或智慧城市设备,本申请实施例对该电子设备的具体类型不作特殊限制。
可选地,在本申请一些实施例中,电子设备可以为桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、与键盘相接的平板电脑。
如图14所示,电子设备可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193以及显示屏194等。其中传感器模块180可以包括但不限于压力传感器180A,指纹传感器180B,温度传感器180C,触摸传感器180D,环境光传感器180E等。
可以理解的是,本实施例示意的结构并不构成对电子设备的具体限定。在本申请另一些实施例中,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type-C接口等。USB接口130可以用于连接充电器为电子设备充电,也可以用于电子设备与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备的结构限定。在本申请另一些实施例中,电子设备也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备的无线通信功能可以通过天线,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线用于发射和接收电磁波信号。电子设备中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线转为电磁波辐射出去。
电子设备通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备可以支持一种或多种视频编解码器。这样,电子设备可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部的非易失性存储器,实现扩展电子设备的存储能力。外部的非易失性存储器通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部的非易失性存储器中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备可以通过扬声器170A收听音乐,或收听免提通话。在本申请实施例中,扬声器170A可以播放音频数据。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备可以设置至少一个麦克风170C。在另一些实施例中,电子设备可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。指纹传感器180B用于采集指纹。温度传感器180C用于检测温度。触摸传感器180D,也称“触控器件”。触摸传感器180D可以设置于显示屏194,由触摸传感器180D与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180D用于检测作用于其上或附近的触摸操作。环境光传感器180E用于感知环境光亮度。
按键190包括开机键,键盘、触控板等。按键190可以是机械按键。也可以是触摸式按键。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (21)

  1. 一种音频播放方法,其特征在于,应用于电子设备,所述电子设备包括第一部分与第二部分,所述第一部分与所述第二部分围绕所述电子设备的中心轴旋转或展开;所述第一部分包括一个或多个第一发声单元,所述第二部分包括一个或多个第二发声单元;所述方法包括:
    所述电子设备的音效模式为第一模式,所述电子设备接收到播放第一音频的第一输入;
    所述电子设备响应于所述第一输入,控制所述一个或多个第一发声单元播放第一音频数据,控制所述一个或多个第二发声单元播放第二音频数据,所述第一音频数据与所述第二音频数据均至少包括所述第一音频的音源数据的至少部分内容;
    所述电子设备接收到第二输入;
    所述电子设备响应于所述第二输入,将所述电子设备的音效模式从所述第一模式切换为第二模式;
    所述电子设备接收到播放所述第一音频的第三输入;
    所述电子设备响应于所述第三输入,控制所述一个或多个第一发声单元播放第三音频数据,控制所述一个或多个第二发声单元播放第四音频数据,所述第三音频数据与所述第四音频数据均至少包括所述第一音频的音源数据的至少部分内容,所述第一音频数据与所述第三音频数据不同。
  2. 根据权利要求1所述的方法,其特征在于,所述第一音频数据包括的声道与所述第三音频数据包括的声道部分/全部不同。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    当所述第一模式为低频增强模式、对白增强模式和环绕增强模式中的任一种时,所述第一音频数据包括的至少部分声道与所述第二音频数据包括的至少部分声道不相同;和/或,
    当所述第一模式为响度增强模式时,所述第一音频数据的至少部分声道与所述第二音频数据的至少部分声道相同。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述电子设备包括一个或多个音效模式,所述一个或多个音效模式包括所述第一模式与所述第二模式;所述电子设备接收到第二输入之前,所述方法还包括:
    所述电子设备显示一个或多个音效模式选项,所述一个或多个音效模式选项与所述一个或多个音效模式一一对应,所述一个或多个音效模式选项包括第一模式选项与第二模式选项,所述第一模式选项与所述第一模式对应,所述第二模式选项与所述第二模式对应,所述第一模式选项被标记;其中,所述第二输入为针对所述第二模式选项的输入;
    所述响应于所述第二输入,将所述电子设备的音效模式设置为所述第二模式,具体包括:
    所述电子设备响应于所述第二输入,将所述电子设备的音效模式从所述第一模式切换为所述第二模式,并且取消标记所述第一模式选项,标记所述第二模式选项。
  5. 根据权利要求4所述的方法,其特征在于,所述第一模式选项为低频增强模式选项,所述第一模式为低频增强模式,所述第一音频数据包括低频声道的数据,所述第二音频数据包括左声道的数据与右声道的数据、左环绕声道的数据与右环绕声道的数据和/或中置声道的数据。
  6. 根据权利要求4所述的方法,其特征在于,所述第一模式选项为对白增强模式选项,所述第一模式为对白增强模式,所述第一音频数据包括中置声道的数据,所述第二音频数据包括左声道的数据与右声道的数据、左环绕声道的数据与右环绕声道的数据和/或低频声道的数据。
  7. 根据权利要求4所述的方法,其特征在于,所述第一模式选项为响度增强模式选项,所述第一模式为响度增强模式,所述第一音频数据包括左声道的数据和右声道的数据,所述第二音频数据包括左声道的数据和右声道的数据。
  8. 根据权利要求7所述的方法,其特征在于,当所述电子设备包括1个第一发声单元,所述电子设备使用所述1个第一发声单元播放所述第一音频数据的左声道的数据和右声道的数据;或,
    当所述电子设备包括2个第一发声单元,所述电子设备使用1个第一发声单元播放所述左声道的数据,使用另1个第一发声单元播放所述右声道的数据;或,
    若所述电子设备包括3个及以上第一发声单元,所述电子设备使用至少一个第一发声单元播放所述左声道的数据,使用至少一个第一发声单元播放所述右声道的数据,使用至少一个第一发声单元播放所述第二音频数据的所述左声道的数据和所述右声道的数据。
  9. 根据权利要求4所述的方法,其特征在于,所述第一模式选项为环绕增强模式选项,所述第一模式为环绕增强模式,所述第一音频数据包括左环绕声道的数据和右环绕声道的数据,所述第二音频数据包括左声道的数据与右声道的数据、中置声道的数据和/或低频声道的数据。
  10. 根据权利要求4-9中任一项所述的方法,其特征在于,所述电子设备显示一个或多个音效模式选项时,所述方法还包括:
    所述电子设备显示滑动条;
    若所述第一模式选项为响度增强模式选项、低频增强模式选项或对白增强模式选项,所述滑动条的值为第一值时,所述电子设备播放所述第一音频数据的音量为第三音量,所述滑动条的值为第二值时,所述电子设备播放所述第一音频数据的音量为第四音量,所述第一值小于所述第二值,且所述第三音量低于所述第四音量;
    若所述第一模式选项为环绕增强模式选项,所述滑动条的值为第三值,所述电子设备播放所述第一音频数据时模拟音源与用户的距离为第三距离,所述滑动条的值为第四值,所述电子设备播放所述第一音频数据时模拟音源与用户的距离为第四距离,所述第三值小于所述第四值且所述第三距离小于所述第四距离。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,所述方法还包括:
    所述电子设备接收到播放第一视频的第四输入;
    所述电子设备响应于所述第四输入,识别所述第一视频的视频画面中的一个或多个对象,并且识别所述第一视频的音频文件中所述一个或多个对象的音频数据,所述一个或多个对象包括第一对象;
    所述电子设备使用所述一个或多个第一发声单元和/或所述一个或多个第二发声单元中与所述第一对象距离最近的发声单元播放所述第一对象的音频数据。
  12. 根据权利要求1-11中任一项所述的方法,其特征在于,位于第一位置的所述第一发声单元用于播放天空音;和/或,
    位于第二位置的所述第一发声单元用于播放地面音,所述第一位置与所述中心轴的距离大于所述第二位置与所述中心轴的距离;和/或,
    位于第三位置的第一发声单元用于播放左声道的数据,且位于所述第四位置的第一发声单元用于播放右声道的数据,所述第三位置位于所述第一部分的左侧,所述第四位置位于所述第一部分的右侧;和/或,
    位于第五位置的第一发声单元用于播放左环绕声道的数据,且位于所述第六位置的第一发声单元用于播放右环绕声道的数据,所述第五位置位于所述第一部分的左侧,所述第六位置位于所述第一部分的右侧。
  13. 根据权利要求1-12中任一项所述的方法,其特征在于,所述电子设备为笔记本电脑,所述第一部分包括所述电子设备的显示屏;所述第二部分包括所述电子设备的键盘和/或触摸板。
  14. 根据权利要求13所述的方法,其特征在于,所述电子设备的第一部分包括第一壳与第二壳,所述第一壳包括所述电子设备的显示屏;所述第一模式为环绕增强模式,位于所述第一壳的第一发声单元用于驱动所述第一壳播放所述第一音频数据;或者,
    所述第一模式为低频增强模式,位于所述第二壳的第一发声单元用于驱动所述第二壳播放所述第一音频数据。
  15. 根据权利要求1-12中任一项所述的方法,其特征在于,所述电子设备为折叠屏设备,所述第一部分包括第一屏,所述第二部分包括第二屏,所述电子设备的折叠方式为左右折叠,所述电子设备包括折叠状态和展开状态;
    若所述电子设备处于折叠状态,所述一个或多个第一发声单元与所述一个或多个第二发声单元中位于电子设备的第一侧的发声单元用于播放左声道的数据,所述一个或多个第一发声单元与所述一个或多个第二发声单元中位于电子设备的第二侧的发声单元用于播放右声道的数据;或者,所述一个或多个第一发声单元与所述一个或多个第二发声单元中位于电子设备的第一侧的发声单元用于播放天空音,所述一个或多个第一发声单元与所述一个或多个第二发声单元中位于电子设备的第二侧的发声单元用于播放地面音,所述第一侧与所述第二侧不同;
    若所述电子设备处于展开状态,所述一个或多个第一发声单元用于播放左声道的数据,所述一个或多个第二发声单元用于播放右声道的数据,或者,所述一个或多个第一发声单元用于播放左环绕声道的数据,所述一个或多个第二发声单元用于播放右环绕声道的数据。
  16. 根据权利要求1-12中任一项所述的方法,其特征在于,所述电子设备为折叠屏设备,所述第一部分包括第一屏,所述第二部分包括第二屏,所述电子设备的折叠方式为上下折叠,所述电子设备包括折叠状态和展开状态;
    若所述电子设备处于折叠状态,所述一个或多个第一发声单元与所述一个或多个第二发声单元播放所述音源数据;
    若所述电子设备处于展开状态,所述一个或多个第一发声单元用于播放左声道的数据,所述一个或多个第二发声单元用于播放右声道的数据;或者,所述一个或多个第一发声单元用于播放天空音,所述一个或多个第二发声单元用于播放地面音。
  17. 根据权利要求11-16中任一项所述的方法,其特征在于,所述天空音包括雷声、飞行物的声音、风声中的一种或多种,所述地面音包括脚步声、虫鸣、雨声中的一种或多种。
  18. 根据权利要求1-17中任一项所述的方法,其特征在于,所述音源数据包括所述第一模式下所述第一音频数据所需的声道的数据,所述音源数据中除了第一音频数据的声道以外的声道的数量为第一数量;所述方法还包括:
    若所述第一数量小于所述第二发声单元的数量,所述电子设备对所述音源数据进行上混,或者,复制所述音源数据中除了第一音频数据以外的声道的数据,得到第五音频数据;其中,所述第五音频数据包括所述第一音频数据所需的声道的数据,并且所述第五音频数据中除了第一音频数据的声道以外的声道的数量与所述第二发声单元的数量相同,所述第二音频数据包括所述第五音频数据中除了所述第一音频数据的声道以外的声道的数据;
    若所述第一数量大于所述第二发声单元的数量,所述电子设备对所述音源数据进行下混,或者,叠加音源数据中部分声道的数据,得到第六音频数据;其中,所述第六音频数据包括所述第一音频数据所需的声道的数据,并且所述第六音频数据中除了第一音频数据的声道以外的声道的数量与所述第二发声单元的数量相同,所述第二音频数据包括所述第六音频数据中除了所述第一音频数据的声道以外的声道的数据。
  19. 一种电子设备,其特征在于,包括:一个或多个处理器、一个或多个第一发声单元、一个或多个第二发声单元和一个或多个存储器;其中,所述一个或多个存储器、所述一个或多个第一发声单元、所述一个或多个第二发声单元分别与所述一个或多个处理器耦合,所述一个或多个存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述一个或多个处理器在执行所述计算机指令时,使得所述电子设备执行如权利要求1-18中任一项所述的方法。
  20. 一种计算机可读存储介质,其特征在于,包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行如权利要求1-18中任一项所述的方法。
  21. 一种芯片系统,其特征在于,所述芯片系统应用于电子设备,芯片系统包括一个或多个处理器,所述一个或多个处理器用于调用计算机指令,使得电子设备执行如权利要求1-18中任一项所述的方法。
PCT/CN2023/111689 2022-08-12 2023-08-08 一种音频播放方法及相关装置 WO2024032590A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210966704 2022-08-12
CN202210966704.9 2022-08-12
CN202211415563.8A CN117596538A (zh) 2022-08-12 2022-11-11 一种音频播放方法及相关装置
CN202211415563.8 2022-11-11

Publications (1)

Publication Number Publication Date
WO2024032590A1 true WO2024032590A1 (zh) 2024-02-15

Family

ID=89850887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/111689 WO2024032590A1 (zh) 2022-08-12 2023-08-08 一种音频播放方法及相关装置

Country Status (1)

Country Link
WO (1) WO2024032590A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050054719A (ko) * 2003-12-05 2005-06-10 엘지전자 주식회사 휴대용 컴퓨터의 스피커시스템 및 이에 사용되는스피커유니트
CN108551636A (zh) * 2018-05-29 2018-09-18 维沃移动通信有限公司 一种扬声器控制方法及移动终端
CN110580141A (zh) * 2019-08-07 2019-12-17 上海摩软通讯技术有限公司 移动终端及音效控制方法
WO2020231202A1 (ko) * 2019-05-15 2020-11-19 삼성전자 주식회사 복수 개의 스피커들을 포함하는 전자 장치 및 그 제어 방법
CN113206905A (zh) * 2021-05-17 2021-08-03 维沃移动通信有限公司 一种电子设备、扬声器声道配置方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23851811; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2023851811; Country of ref document: EP; Effective date: 20240524)