CN117499850A - Audio data playing method and electronic equipment - Google Patents

Audio data playing method and electronic equipment

Info

Publication number
CN117499850A
Authority
CN
China
Prior art keywords
audio data
spatial audio
spatial
audio
application
Prior art date
Legal status
Pending
Application number
CN202311798813.5A
Other languages
Chinese (zh)
Inventor
丁利娜 (Ding Lina)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311798813.5A
Publication of CN117499850A
Legal status: Pending


Abstract

The application discloses an audio data playing method and an electronic device, wherein the method includes: playing first audio data in response to a play operation on the first audio data at a play interface of a first application; while the first audio data is playing, in response to the first audio data being a media stream and the electronic device supporting a spatial audio function, performing spatial audio effect processing on the first audio data based on attribute features of the first audio data to obtain first spatial audio data; and playing the first spatial audio data. Through the method and the device, spatial audio data with a better effect can be generated in combination with a third-party application, the spatial audio data can be effectively played for the user, music playing quality is improved, and an immersive spatial audio experience is brought to the user.

Description

Audio data playing method and electronic equipment
Technical Field
Embodiments of the present application relate to the field of computers, and in particular to an audio data playing method and an electronic device.
Background
Spatial audio technology is an extension of stereo surround sound; it has more sound-decoding and channel-technology support than stereo surround sound and can more easily present the layering, stereo perception, and depth of sound. Spatial audio technology is mainly applied to channel-encoded audio data: compared with ordinary audio data, the effect of channel-encoded audio data after spatial audio effect processing is more pronounced, bringing the user an immersive spatial audio experience.
At present, with the development of intelligent devices, more and more electronic devices possess spatial audio technology, but these electronic devices lack channel-encoded audio resources and therefore cannot play spatial audio data with a better effect for users.
Disclosure of Invention
Embodiments of the present application provide an audio data playing method and an electronic device which, based on the method described in this application, can be combined with a third-party application to generate spatial audio data with a better effect, so that the spatial audio data can be effectively played for the user and music playing quality is improved.
In a first aspect, the present application provides an audio data playing method, including: playing first audio data in response to a play operation on the first audio data at a play interface of a first application; while the first audio data is playing, in response to the first audio data being a media stream and the electronic device supporting a spatial audio function, performing spatial audio effect processing on the first audio data based on attribute features of the first audio data to obtain first spatial audio data; and playing the first spatial audio data.
Based on the method described in the first aspect, when playing audio data the electronic device can generate spatial audio data with a better effect by combining its own spatial audio technology with the channel-encoded audio resources of a third-party application, thereby realizing the combination between the electronic device and the third-party application, effectively playing spatial audio data for the user, and improving music playing quality.
In one possible implementation manner, performing spatial audio effect processing on the first audio data based on the attribute features of the first audio data to obtain the first spatial audio data includes: if the attribute features of the first audio data do not include a first feature, performing spatial audio effect processing on the first audio data to obtain the first spatial audio data; the first feature indicates that the first audio data has already undergone spatial audio effect processing, or the first feature indicates that the first audio data is not allowed to undergo spatial audio effect processing, or the first feature indicates that the processing delay of the first audio data is below a preset threshold. Based on this manner, the effectiveness of spatial audio effect processing can be improved.
In one possible implementation manner, if the attribute features of the first audio data do not include the first feature, performing spatial audio effect processing on the first audio data to obtain the first spatial audio data includes: if the attribute features of the first audio data do not include the first feature, performing spatial audio effect processing on the first audio data on the condition that the spatial audio function is turned on, to obtain the first spatial audio data. Based on this manner, the effectiveness of spatial audio effect processing can be improved and power consumption can be saved.
In one possible implementation manner, performing spatial audio effect processing on the first audio data to obtain the first spatial audio data includes: in response to the first audio data being multi-channel audio data, performing spatial audio effect processing on the first audio data to obtain the first spatial audio data. Based on this manner, the effectiveness and clarity of spatial audio effect processing can be improved, improving user experience.
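For orientation only, the following Java-style pseudocode summarizes the order of the checks described in the first aspect and the foregoing implementation manners; every helper name here is hypothetical and does not correspond to a real framework API:

    // Hypothetical sketch of the gating logic described above; none of these
    // helpers are real Android APIs.
    void onPlayOperation(AudioData first) {
        play(first);                               // play the first audio data
        if (!isMediaStream(first)) return;         // only media streams qualify
        if (!deviceSupportsSpatialAudio()) return; // device must support spatial audio
        if (hasFirstFeature(first)) return;        // already spatialized, not allowed, or low latency
        if (!spatialAudioFunctionOn()) return;     // the spatial audio function must be turned on
        if (!isMultiChannel(first)) return;        // mono gains no perceptible benefit
        play(applySpatialAudioEffect(first));      // play the first spatial audio data
    }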
In one possible implementation, the play interface includes a spatial audio function switch; the method further includes: displaying a first prompt box in response to the spatial audio function switch being turned on, the first prompt box being used to recommend spatial audio data areas to the user; and displaying a second prompt box in response to the spatial audio function switch not being turned on, the second prompt box being used to recommend that the user turn on the spatial audio function. Based on this manner, spatial audio data can be intelligently recommended to the user, improving user experience.
In one possible implementation, before the first prompt box is displayed in response to the spatial audio function switch being turned on, the method further includes: storing address information of the spatial audio data areas corresponding to one or more applications on the condition that the electronic device supports the spatial audio function and the one or more applications are registered and licensed by the electronic device; the one or more applications include the first application. Based on this manner, a third-party application can call the corresponding AudioKit interface when it starts, or at another suitable time, so that the address information of its spatial audio data area is stored in the electronic device in advance, thereby realizing the combination with the third-party application.
In one possible implementation, the first prompt box includes options for the spatial audio data areas corresponding to the one or more applications; the method further includes: in response to an operation triggering the option of a first spatial audio data area, displaying a recommendation interface corresponding to the first spatial audio data area based on the address information of the first spatial audio data area; the first spatial audio data area is one of the spatial audio data areas corresponding to the one or more applications. Based on this manner, more spatial audio data can be intelligently recommended to the user, improving user experience.
In one possible implementation manner, the first spatial audio data area is the spatial audio data area corresponding to the first application, and the recommendation interface of the first spatial audio data area includes an option for the first spatial audio data and an option for second spatial audio data, the second spatial audio data being associated with the first spatial audio data. Based on this manner, other spatial audio data related to the current audio data can be intelligently recommended to the user, improving user experience.
In a second aspect, the present application provides an audio data playing apparatus; the apparatus may be an electronic device, an apparatus in an electronic device, or an apparatus that can be used in cooperation with an electronic device. The audio data playing apparatus may also be a chip system, and the audio data playing apparatus may perform the method performed by the electronic device in the first aspect. The functions of the audio data playing apparatus can be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units corresponding to the functions described above; the units may be software and/or hardware. For the operations performed by the audio data playing apparatus and their beneficial effects, reference may be made to the method and beneficial effects described in the first aspect; details are not repeated here.
In a third aspect, the present application provides an audio data playback apparatus comprising a processor, wherein the method described in the first aspect is performed when the processor invokes a computer program in a memory.
In a fourth aspect, the present application provides an audio data playback device comprising a processor and a memory, the processor and the memory being coupled; the processor is configured to implement the method as described in the first aspect.
In a fifth aspect, the present application provides an audio data playback device comprising a processor, a memory, and a transceiver, the processor and the memory being coupled; the transceiver is for receiving and transmitting data and the processor is for implementing the method as described in the first aspect.
In a sixth aspect, the present application provides an electronic device comprising one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, the one or more memories being configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the audio data playback method in any of the possible implementations of the first aspect described above.
In a seventh aspect, the present application provides a chip comprising a processor and an interface, the processor and the interface being coupled; the interface is for receiving or outputting signals and the processor is for executing code instructions to cause the method of the first aspect to be performed.
In an eighth aspect, the present application provides an audio data playback system, the audio data playback system including an electronic device; wherein the electronic device is configured to perform the method according to the first aspect.
In a ninth aspect, the present application provides an audio data playback apparatus comprising functions or units for performing the method of any one of the implementations of the first aspect.
In a tenth aspect, the present application provides a computer readable storage medium having stored therein a computer program comprising program instructions which, when run on an audio data playback apparatus, cause the audio data playback apparatus to perform the audio data playback method in any one of the possible implementations of the first aspect.
In an eleventh aspect, the present application provides a computer program product for, when run on a computer, causing the computer to perform the audio data playback method in any one of the possible implementation manners of the first aspect.
Drawings
Fig. 1 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 2 is a software structural block diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a flowchart of an audio data playing method according to an embodiment of the present application;
fig. 4A is a schematic flow chart of spatial audio data area preset according to an embodiment of the present application;
fig. 4B is a schematic flow chart of spatial audio data recommendation according to an embodiment of the present application;
fig. 5 is a flowchart of another audio data playing method according to an embodiment of the present application;
fig. 6A is a schematic diagram of a playback interface of a first application according to an embodiment of the present application;
fig. 6B is a schematic diagram showing a second prompt box when the spatial audio function is not turned on according to the embodiment of the present application;
fig. 6C is a schematic diagram showing a first prompt box when the spatial audio function is turned on according to the embodiment of the present application;
fig. 6D is a schematic diagram showing a third prompt box when third-party application verification fails according to the embodiment of the present application;
fig. 7A is a schematic diagram of a spatial audio data area a corresponding to a first application according to an embodiment of the present application;
fig. 7B is a schematic diagram of a spatial audio data area B corresponding to a second application according to an embodiment of the present application;
fig. 7C is a schematic diagram of a recommendation interface corresponding to a spatial audio data area a according to an embodiment of the present application;
fig. 7D is a schematic view of a floating window according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an audio data playing device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association relation between associated objects and indicates that three relations may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
The term "User Interface (UI)" in the following embodiments of the present application is a media interface for interaction and information exchange between an application program or an operating system and a user, which enables conversion between an internal form of information and an acceptable form of the user. The user interface is a source code written in a specific computer language such as java, extensible markup language (extensible markup language, XML) and the like, and the interface source code is analyzed and rendered on the electronic equipment to finally be presented as content which can be identified by a user. A commonly used presentation form of the user interface is a graphical user interface (graphic user interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be a time, date, text, icon, button, menu, tab, text box, dialog box, status bar, navigation bar, widget, etc. visual interface element displayed in the display of the electronic device.
In order to facilitate understanding of the solutions provided by the embodiments of the present application, the following description describes related concepts related to the embodiments of the present application:
1. Spatial audio technology
Spatial audio technology is an extension of stereo surround sound: by manipulating the sound produced by sound-emitting devices such as stereo speakers, surround-sound speakers, speaker arrays, or headphones, it lets a listener perceive sound as being emitted from a virtual position in three-dimensional space. Compared with stereo surround sound, it has more sound-decoding and channel-technology support, can more easily present the layering, stereo perception, and depth of sound, and can bring users a more immersive, spatial audio content experience.
In addition, spatial audio technology is mainly applied to audio data supporting 5.1, 7.1, and other channel encodings; compared with ordinary audio data, the effect of channel-encoded audio data after spatial audio effect processing is more pronounced, bringing the user an immersive spatial audio experience.
2. Third-party applications (also referred to as three-party applications)
A third-party application is software developed by an organization or individual other than the original software developer to make up for functional inadequacies of some software or application; it can be understood that third-party programs and services interact with the manufacturer indirectly through that program.
In the embodiments of the present application, taking a third-party music application as an example, the third-party music application has rich channel-encoded audio resources, for example audio resources supporting 5.1, 7.1, and other channel encodings. At present, with the development of intelligent devices, more and more electronic devices possess spatial audio technology, but these electronic devices lack channel-encoded audio resources and therefore cannot play spatial audio data with a better effect for users. Therefore, the electronic device and the third-party application can be combined: the third-party application provides rich channel-encoded audio resources, so that audio resources processed by spatial audio technology (i.e., spatial audio resources) are recommended to the user in a timely manner, bringing the user an immersive spatial audio experience.
In order to combine with third-party applications and generate spatial audio data with a better effect, effectively play spatial audio data for the user, and improve music playing quality, the present application provides an audio data playing method and an electronic device. In a specific implementation, the audio data playing method may be performed by the electronic device 100. The electronic device 100 may be, but is not limited to, a mobile phone, a tablet computer, or a wearable electronic device with a wireless communication function (such as a smart watch). The electronic device 100 is configured with a display screen and may have preset applications (APPs) installed, such as a three-party music APP, without limitation. For example, a user may play music through the three-party music APP on the display screen.
The hardware configuration of the electronic device 100 is described below. Referring to fig. 1, fig. 1 is a schematic hardware structure of an electronic device 100 according to an embodiment of the disclosure.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency. The processor 110 invokes instructions or data stored in the memory to cause the electronic device 100 to perform the audio data playing method performed by the electronic device in the method embodiments described below.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger.
The power management module 141 is used to connect the battery 142 and the charge management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. In other embodiments, the power management module 141 may be disposed in the processor 110.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wi-Fi network), bluetooth (BT), BLE broadcast, global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied on the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like. The ISP is used to process data fed back by the camera 193. The camera 193 is used to capture still images or video. The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function) required for at least one function of the operating system, and the like. The storage data area may store data created during use of the electronic device 100 (e.g., audio data), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a nonvolatile memory such as a flash memory device or the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn", is used to convert an audio electrical signal into a sound signal. The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. The microphone 170C, also referred to as a "mic", is used to convert a sound signal into an electrical signal. The earphone interface 170D is used to connect a wired earphone. The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal.
In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). A distance sensor 180F for measuring a distance. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector. The ambient light sensor 180L is used to sense ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The temperature sensor 180J is for detecting temperature. The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The bone conduction sensor 180M may acquire a vibration signal. The keys 190 include a power-on key, a volume key, etc. The motor 191 may generate a vibration cue. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card.
In addition, an operating system is run on the components. Such as iOS, android, etc. The operating system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated. It should be noted that, although the embodiment of the present application is described by taking an Android system as an example, the basic principle is also applicable to electronic devices of other operating systems.
Fig. 2 is a software structure block diagram of the electronic device 100 according to an embodiment of the present application. The software structure adopts a layered architecture, which divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In the embodiments of the present application, the operating system (for example, an Android system running on the AP) may be divided, from top to bottom, into six layers: an application layer (APP), an application framework layer (FWK), the Android runtime (Android Runtime) and system library, a hardware abstraction layer (HAL), a kernel layer, and a hardware layer.
The application layer may include a series of application packages. As shown in fig. 2, the application packages may include applications such as camera, gallery, calendar, phone, maps, navigation, WLAN, Bluetooth, music, video, and short messages. The music application here may include an audio service software development kit (SDK) (which may be referred to as AudioKit), and the music application may be a third-party music application. AudioKit can be regarded as an integration call interface for applications, used to determine whether a device provides the audio open service. The application layer may also include a system UI (SystemUI) for displaying interfaces of the electronic device, such as the play interface.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 2, the application framework layer may include an audio framework, a system service, a camera service, etc., which is not limited in any way by the embodiments of the present application.
The audio framework is responsible for the collection and output of audio data, control of audio streams, management of audio devices, and the like, and may specifically include an audio open service (AudioEngine), a playback data module (AudioTrack), an audio service module (AudioService), a play monitoring module (PlaybackActivityMonitor), an audio play configuration module (AudioPlaybackConfiguration), an audio attribute module (AudioAttributes), an audio service extension module (HwAudioServiceEx), a spatial audio processing module (Spatial), and the like. Wherein:
The AudioEngine is used to bind applications on the device and establish connections with them; to provide an application with the corresponding audio open service list and bind the corresponding audio service for it; and to store the address information of the spatial audio data area corresponding to the application, among other things.
The AudioTrack is responsible for outputting playback data and belongs to the Android native framework API classes; for example, it plays the first audio data.
The AudioService is the core process module of the audio framework, providing services responsible for managing and controlling audio devices and audio output. It provides a unified interface so that applications can access and control audio devices, including microphones, speakers, headphones, and the like; it also provides advanced functions such as audio mixing, audio enhancement, and audio format conversion. It can automatically detect and configure audio devices to ensure they work properly, and it can manage audio device drivers and firmware updates to keep them up to date.
The PlaybackActivityMonitor is used to monitor the audio playing service; when audio data (an audio stream) is played on the device, notification is given by means of a callback.
The AudioPlaybackConfiguration is used to provide configuration parameters of audio data (audio stream).
The AudioAttributes module is used to parse the configuration parameters of audio data (an audio stream) to provide the attribute features of the audio data, such as a set of attribute flag bits.
The HwAudioServiceEx is used to trigger and execute the recommendation process, which recommends that the user turn on the spatial audio function or recommends more spatial audio data areas.
The Spatial module is used to trigger spatial audio effect processing on audio data when the user has turned on the spatial audio function.
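As a concrete reference point, the public Android SDK exposes a callback path similar to the framework-internal PlaybackActivityMonitor/AudioPlaybackConfiguration chain described above. The following sketch uses only public APIs (API 26+), not the patent's internal modules, to observe active playback and read its attributes:

    import android.content.Context;
    import android.media.AudioAttributes;
    import android.media.AudioManager;
    import android.media.AudioPlaybackConfiguration;
    import java.util.List;

    public final class PlaybackObserver {
        // Register for playback-activity callbacks; 'context' is assumed to be
        // a valid application Context.
        public static void observe(Context context) {
            AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            am.registerAudioPlaybackCallback(new AudioManager.AudioPlaybackCallback() {
                @Override
                public void onPlaybackConfigChanged(List<AudioPlaybackConfiguration> configs) {
                    for (AudioPlaybackConfiguration config : configs) {
                        AudioAttributes attrs = config.getAudioAttributes();
                        if (attrs.getUsage() == AudioAttributes.USAGE_MEDIA) {
                            // a media stream is active: candidate for spatial audio processing
                        }
                    }
                }
            }, null); // null handler: callbacks run on the main looper
        }
    }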
The system service is responsible for scheduling the start-up flow of application programs, the creation and management of processes, the creation and management of windows, and the like, and may specifically include a verification module (KitAssistant SDK), a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and the like. Wherein:
The KitAssistant SDK is used to verify an application and, when the application is verified successfully (i.e., its registration is licensed), instructs the AudioEngine to acquire and store the address information of the spatial audio data area corresponding to the application.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager (NotificationManager) enables an application to display notification information (e.g., a prompt box) in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction; for example, it is used to notify that a download is complete, give message alerts, and so on. The notification manager may also present notifications in the form of charts or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light blinks.
The camera service is the core process module of the camera framework; it mainly provides API interface functions to the application layer and calls down to the camera hardware abstraction layer by way of HIDL (hardware interface definition language).
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functional interfaces that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The hardware abstraction layer is an interface layer between the operating system kernel and the hardware circuitry, which aims at abstracting the hardware. It hides the hardware interface details of the specific platform and can provide a virtual hardware platform for the operating system. The hardware abstraction layer is used for packaging the Linux kernel driver, providing an interface upwards and shielding the implementation details of low-level hardware. As shown in fig. 2, the hardware abstraction layer may include Wi-Fi HAL, audio (audio) HAL, camera HAL (camera HAL), and the like.
The kernel layer lies between hardware and software and is the core of the operating system; it is the first layer of software expansion based on the hardware, provides the most basic functions of the operating system, and is the foundation on which the operating system works, being responsible for managing the system's processes, memory, device drivers, files, and network system, and determining the performance and stability of the system. The kernel layer may include a display driver, an audio driver, a camera driver, a sensor driver, and the like. The camera driver is the driver layer of the camera device and is mainly responsible for interaction with the hardware.
The hardware layer comprises a display, a camera, a sensor and the like.
Based on the above software structure, an embodiment of the present application provides a schematic flow of an audio data playing method. As shown in fig. 3, the audio data playing method includes spatial audio data area presetting and spatial audio data recommendation. The process of spatial audio data area presetting includes steps S301 to S311 in fig. 4A, and the process of spatial audio data recommendation includes steps S401 to S418 in fig. 4B. Wherein:
1. Spatial audio data area presetting
A third-party application can preset the address information of its corresponding spatial audio data area into the electronic device through the open capability of AudioKit. Specifically, AudioKit provides an SDK interface for the third-party application to integrate and call; the third-party application can call the corresponding AudioKit interface when it starts, or at another suitable time, to pre-store the address information of its spatial audio data area in the electronic device. There may be one or more third-party applications; taking the first application as an example, the first application may actively execute the following steps S301 to S311 when started or at another suitable time.
S301, the first application calls the isDeviceSupport function through the audio service SDK (AudioKit) to determine whether the electronic device supports the audio open service.
The first application may be a third-party music application. The first application may have rich channel-encoded audio data, for example audio data supporting 5.1, 7.1, and other channel encodings.
S302, the audio service SDK informs the first application that the electronic device is provided with the audio open service.
For example, the audio service SDK replies "true" to the first application for indicating that the electronic device is provided with an audio open service.
S303, the first application calls a constructor to construct an object toward the audio service SDK; after the audio service SDK calls the init function for initialization, it calls the bindService function to inform the audio open service (AudioEngine) that the first application is to be bound on the electronic device; after the audio open service binds the first application on the electronic device, this is fed back to the first application through the audio service SDK.
For example, the audio open service may call the onServiceConnected function to inform the audio service SDK that the first application was successfully bound on the electronic device, and the audio service SDK further feeds back to the first application that it was successfully bound on the electronic device.
S304, the first application calls a getSupportedServices function to acquire an audio open service list from the audio open service through the audio service SDK; the audio open service feeds back the audio open service list to the first application through the audio service SDK.
For example, the audio open service feeds back the audio open service list (serviceList) supported by the electronic device to the audio service SDK, which further feeds the list back to the first application.
S305, the first application calls the createService function to construct an object toward the audio service SDK; the audio service SDK calls the bindService function to inform the audio open service that an audio service supporting spatial audio data storage is to be bound for the first application; after the audio open service binds the audio service supporting spatial audio data storage for the first application, this is fed back to the first application through the audio service SDK.
For example, the audio open service may call the onServiceConnected function to notify the audio service SDK that the corresponding audio service was successfully bound for the first application, and the audio service SDK further feeds back to the first application that the audio service supporting spatial audio data storage was successfully bound.
S306, the first application calls the addSpatialZoneAddress function to inform, through the audio service SDK, the audio open service to store the address information of the spatial audio data area corresponding to the first application.
S307, the audio open service calls the checkPermission function toward the verification module (KitAssistant SDK) to confirm whether the first application's registration is permitted.
S308, the verification module informs the audio open service that the first application's registration is permitted (i.e., verification succeeds).
For example, the verification module replies "true" to the audio open service indicating that the first application is licensed for registration (i.e., successfully verified).
Optionally, when the verification module notifies the audio open service that verification of the first application fails, the method further includes: a notification manager (NotificationManager) displays a third prompt box for prompting the user that the first application cannot use the audio open service.
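A minimal sketch of such a prompt using the public Android NotificationManager API (API 26+) is shown below; the channel identifier, title, and wording are illustrative assumptions, not taken from the patent:

    import android.app.Notification;
    import android.app.NotificationChannel;
    import android.app.NotificationManager;
    import android.content.Context;

    // Post a "third prompt box"-style notification when verification fails.
    // Channel id, title, and text are illustrative assumptions.
    void showVerificationFailedPrompt(Context context) {
        NotificationManager nm =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        nm.createNotificationChannel(new NotificationChannel(
                "audio_open_service", "Audio open service",
                NotificationManager.IMPORTANCE_DEFAULT));
        Notification prompt = new Notification.Builder(context, "audio_open_service")
                .setSmallIcon(android.R.drawable.ic_dialog_alert)
                .setContentTitle("Spatial audio")
                .setContentText("This application cannot use the audio open service.")
                .build();
        nm.notify(1, prompt);
    }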
S309, the audio open service calls the saveSpatialAudioZoneAddress function to store the address information of the spatial audio data area corresponding to the first application.
S310, after the audio open service stores the address information of the spatial audio data area corresponding to the first application, a return value is fed back to the first application through the audio service SDK.
The return value may be referred to as an operation result, and may indicate that the audio open service has completed storing address information of the spatial audio data area corresponding to the first application.
S311, the first application calls the destroy function to inform the audio service SDK to destroy the object, and the audio service SDK calls the unbindService function to inform the audio open service to release the audio service bound for the first application.
Similarly, other applications may also store the address information of their corresponding spatial audio data areas with the audio open service in the manner of steps S301 to S311, so the audio open service may hold the address information of the spatial audio data areas corresponding to one or more applications.
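Taken together, steps S301 to S311 amount to the call sequence sketched below in Java-style pseudocode; the AudioKit class, its method signatures, and the two constants are reconstructed from the step names above for illustration only and are not a published SDK:

    // Illustrative reconstruction of S301-S311; types, constants, and
    // signatures are inferred from the step descriptions, not a real SDK.
    void presetSpatialAudioZone(Context context) {
        AudioKit kit = new AudioKit(context);                 // S303: construct and bind
        if (!kit.isDeviceSupport()) {                         // S301/S302: audio open service present?
            return;
        }
        List<Integer> services = kit.getSupportedServices();  // S304: query the open-service list
        SpatialAudioService svc =
                kit.createService(SERVICE_SPATIAL_AUDIO);     // S305: bind the spatial-audio storage service
        svc.addSpatialZoneAddress(SPATIAL_ZONE_URL);          // S306-S310: verify registration, then store the address
        kit.destroy();                                        // S311: unbind and release
    }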
2. Spatial audio data recommendation
S401, the audio service module (AudioService) calls the registerPlaybackCallback function to register monitoring of the playing service with the play monitoring module (PlaybackActivityMonitor).
S402, the electronic device responds to the user's play operation on the first audio data at the play interface of the first application, and the first application plays the first audio data through the playback data module (AudioTrack).
S403, when the play monitoring module detects that the first audio data (an audio stream) is being played on the electronic device, it calls a callback function to inform the audio service module that the first audio data is currently playing.
It can be understood that the audio service registers monitoring of the playing service with the play monitoring module in the initial state, and when the first audio data (i.e., an audio stream) is played on the device, the audio service receives the corresponding callback from the play monitoring module. After receiving the callback, the audio service further analyzes the features of the first audio data. The first audio data may be audio data in the first application supporting 5.1, 7.1, and other channel encodings.
S404, the audio service module analyzes the first audio data and judges whether the first audio data is a media stream.
S405, if the first audio data is a media stream, the audio service module further determines whether the electronic device supports a spatial audio function.
Optionally, if the first audio data is not a media stream, this indicates that the first audio data is not audio data meeting the recommendation characteristics, and the subsequent process is terminated to save power consumption.
S406, if the electronic device supports the spatial audio function, the audio service module calls the config.getAudioAttributes function to obtain the configuration parameters of the first audio data from the audio play configuration module (AudioPlaybackConfiguration).
Specifically, the audio playing configuration module queries the configuration parameters of the first audio data and feeds the configuration parameters back to the audio service module.
Optionally, if the electronic device does not support the spatial audio function, the subsequent process is terminated to save power consumption.
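For reference, steps S404 and S405 have rough public-API analogues; the patent's own checks run inside the framework's AudioService, so the sketch below is written under that assumption using only public classes (Spatializer requires Android 12L, API 32):

    import android.media.AudioAttributes;
    import android.media.AudioManager;
    import android.media.AudioPlaybackConfiguration;
    import android.media.Spatializer;

    // S404 analogue: a playback configuration counts as a media stream when
    // its attributes carry USAGE_MEDIA.
    static boolean isMediaStream(AudioPlaybackConfiguration config) {
        return config.getAudioAttributes().getUsage() == AudioAttributes.USAGE_MEDIA;
    }

    // S405 analogue: the device offers spatialization if the Spatializer's
    // immersive level is not NONE.
    static boolean deviceSupportsSpatialAudio(AudioManager am) {
        Spatializer spatializer = am.getSpatializer();
        return spatializer.getImmersiveAudioLevel()
                != Spatializer.SPATIALIZER_IMMERSIVE_LEVEL_NONE;
    }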
S407, the audio service module calls the audioAttributes.getAllFlags function to acquire the attribute flag bit set of the first audio data from the audio attribute module (AudioAttributes).
Specifically, the audio attribute module analyzes the configuration parameters of the first audio data, determines an attribute mark bit set of the first audio data, and feeds back the attribute mark bit set to the audio service module.
It will be appreciated that if the audio service module determines that the first audio data is a media stream and the electronic device supports a spatial audio function, it is necessary to further determine an attribute characteristic (i.e., a set of attribute flag bits) of the first audio data.
S408, the audio service module judges, based on the attribute flag bit set of the first audio data, whether the first audio data can undergo spatial audio effect processing.
Specifically, if the attribute flag bit set of the first audio data does not include the first feature, it is determined that the first audio data can undergo spatial audio effect processing.
The first feature indicates that the first audio data has already undergone spatial audio effect processing, or that the first audio data is not allowed to undergo spatial audio effect processing, or that the processing delay of the first audio data is below a preset threshold.
It can be understood that, assuming the first feature indicates that the first audio data has already undergone spatial audio effect processing, then if the attribute flag bit set of the first audio data includes the first feature (i.e., carries a FLAG_CONTENT_SPATIAL_GENERATED flag), no further spatial audio effect processing is required; otherwise, repeating the spatial audio effect processing may impair sound quality. Thus, when the attribute flag bit set of the first audio data does not include the first feature, it is determined that the first audio data can undergo spatial audio effect processing.
Assuming that the first feature indicates that the first audio data is not allowed to be subjected to spatial audio sound effect processing, if the attribute mark bit set of the first audio data includes the first feature (i.e., carries the FLAG_NEVER_SPATIALIZE flag), no further spatial audio sound effect processing may be performed. Thus, when the attribute mark bit set of the first audio data does not include the first feature, it is determined that the first audio data is capable of spatial audio sound effect processing.
It is assumed that the first feature indicates that the processing delay of the first audio data is below a preset threshold. The preset threshold may be 10 ms, or another small value (for example, a value less than 20 ms), which is not limited herein. The processing delay of the first audio data being lower than the preset threshold indicates that the first audio data needs to be played through a low-latency channel, that is, the processing delay requirement of the first audio data is low latency. If the attribute mark bit set of the first audio data includes the first feature (i.e., carries the FLAG_LOW_LATENCY flag), the initiator of the media-stream playback has a strict latency requirement, and no further spatial audio sound effect processing should be performed, since it would increase the processing duration of the first audio data, contradict the low-latency requirement, reduce the real-time performance of the first audio data, and degrade the user experience. Thus, when the attribute mark bit set of the first audio data does not include the first feature, it is determined that the first audio data is capable of spatial audio sound effect processing.
In general, when the attribute mark bit set of the first audio data does not include the first feature, it may be determined that the first audio data is capable of spatial audio sound effect processing. In this way, whether the current audio data meets the target requirement can be judged more efficiently, and audio data that does not meet the recommendation criteria can be screened out as early as possible, thereby improving the effectiveness of spatial audio sound effect processing.
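As a hedged illustration, the three "first feature" conditions map onto information that the public AudioAttributes getters expose (the patent's getAllFlags bit set is framework-internal); a sketch of the S408 eligibility test, assuming Android API level 32 or later:

```java
import android.media.AudioAttributes;

public final class SpatializationChecks {
    // Step S408: the stream may be spatialized only if none of the three
    // "first feature" conditions holds.
    static boolean canSpatialize(AudioAttributes attrs) {
        // FLAG_CONTENT_SPATIALIZED: the content is already spatialized.
        boolean alreadySpatialized = attrs.isContentSpatialized();
        // FLAG_NEVER_SPATIALIZE: the producer forbids spatialization.
        boolean neverSpatialize = attrs.getSpatializationBehavior()
                == AudioAttributes.SPATIALIZATION_BEHAVIOR_NEVER;
        // FLAG_LOW_LATENCY: the stream runs on a low-latency path.
        boolean lowLatency =
                (attrs.getFlags() & AudioAttributes.FLAG_LOW_LATENCY) != 0;
        return !alreadySpatialized && !neverSpatialize && !lowLatency;
    }
}
```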
S409, if the first audio data can perform spatial audio effect processing, the audio service module further calls the getChannelMask function to obtain channel information of the first audio data from the audio play configuration module.
Specifically, the audio playing configuration module determines channel information of the first audio data and feeds the channel information back to the audio service module.
S410, the audio service module judges whether the first audio data is multi-channel audio data or not based on the channel information of the first audio data.
S411, if the first audio data is multi-channel audio data, the audio service module detects whether the spatial audio function is turned on.
It is understood that after the electronic device determines that the first audio data is capable of spatial audio sound effect processing, the channel information of the first audio data is further analyzed. If the first audio data is mono audio data, the user cannot clearly perceive the spatial audio effect, which easily degrades the user experience. Therefore, spatial audio sound effect processing of the first audio data is meaningful only when the first audio data is multi-channel audio data, so that the user can clearly perceive the immersive spatial audio experience; in that case, the first audio data is subjected to spatial audio sound effect processing to obtain the first spatial audio data. Mono means that there is only one independent input or output signal in the audio system; multi-channel means that there are at least two independent input or output signals in the audio system, which can be used for immersive sound effects or wide stereo surround sound. In this way, the music playing quality can be effectively improved, and an immersive spatial audio experience can be clearly brought to the user. In addition, before the first audio data is subjected to spatial audio sound effect processing, whether the spatial audio function is turned on is judged, and different recommendation manners are provided for different situations.
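For illustration, the multi-channel test of S410 can be sketched against a stream's AudioFormat using public accessors (the patent's getChannelMask call on the playback configuration is framework-internal):

```java
import android.media.AudioFormat;

public final class ChannelChecks {
    // Step S410: spatialization is only worthwhile for streams with at least
    // two channels; getChannelCount() derives the count from the channel mask
    // (for example CHANNEL_OUT_5POINT1 yields 6, CHANNEL_OUT_MONO yields 1).
    static boolean isMultiChannel(AudioFormat format) {
        return format.getChannelCount() >= 2;
    }
}
```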
S412, if the spatial audio function is not started, the audio service module indicates to an audio service extension module (HwAudioServiceEx) that the spatial audio function is not started.
S413, the audio service extension module calls a notify function to indicate to a notification manager (NotificationManager) that the spatial audio function is not turned on.
S414, the notification manager displays a second prompt box, and the second prompt box is used for recommending the user to start the spatial audio function.
It can be understood that, when the spatial audio function is not turned on, the second prompt box recommends that the user turn on the spatial audio function (i.e., recommends the spatial audio function switch), prompting the user that the current audio would have a better experience effect with spatial audio.
S415, the notification manager, in response to an operation of turning on the spatial audio function by the user, calls a setEnabled function to notify a spatial audio processing module (Spatializer) that the spatial audio function is turned on.
S416, the spatial audio processing module performs spatial audio effect processing on the first audio data to obtain first spatial audio data, and plays the first spatial audio data.
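The public counterpart of the spatial audio processing module is android.media.Spatializer; a minimal sketch of the S411/S416 checks follows, noting that setEnabled is a system API, so a regular application can only observe the state and prompt the user, as the notification flow here does:

```java
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.Spatializer;

public final class SpatializerChecks {
    // Steps S411/S416: is the spatializer present, switched on, and willing to
    // virtualize this particular stream?
    static boolean spatialAudioActiveFor(AudioManager audioManager,
                                         AudioAttributes attrs,
                                         AudioFormat format) {
        Spatializer spatializer = audioManager.getSpatializer();
        return spatializer.isAvailable()      // device and current output support it
                && spatializer.isEnabled()    // the user-facing switch is on
                && spatializer.canBeSpatialized(attrs, format);
    }
}
```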
S417, the notification manager displays a first prompt box, where the first prompt box is used for recommending spatial audio data areas to the user.
The first prompt box comprises options of the spatial audio data areas corresponding to one or more applications.
It can be understood that after the spatial audio function is turned on, spatial audio sound effect processing is performed on the first audio data to obtain the first spatial audio data, and the first spatial audio data is played. At this point, the user can hear the first audio data after spatial audio sound effect processing, which brings an immersive spatial audio experience to the user.
Furthermore, spatial audio data areas can be recommended to the user through the first prompt box. A spatial audio data area can recommend to the user the audio in the album to which the first audio data belongs, as well as jump links to the dedicated spatial audio resource areas of the first application or other applications; after clicking, the user can experience the immersion brought by spatial audio.
S418, the notification manager, in response to an operation of the user triggering the option of the first spatial audio data area, jumps to display a recommendation interface corresponding to the first spatial audio data area in the first application based on the address information of the first spatial audio data area stored in the audio open service.
The one or more applications include the first application, and the first spatial audio data area is one of the spatial audio data areas corresponding to the one or more applications. Taking the first spatial audio data area as an example, when the user clicks the spatial audio data area corresponding to the first application in the first prompt box, the electronic device jumps to display a recommendation interface corresponding to that spatial audio data area in the first application. In this way, more spatial audio data can be intelligently recommended to the user, improving the user experience.
Similarly, when the user clicks the spatial audio data area corresponding to the second application in the first prompt box, the display jumps from the first application to the second application to show the recommendation interface corresponding to that spatial audio data area.
Based on the above manner, spatial audio data is generated by combining the spatial audio technology of the electronic device with the audio resources of the third-party application, so that the electronic device and the third-party application work together, spatial audio data can be played for the user, and the music playing quality is improved.
Based on the foregoing, with the electronic device as the execution body, another audio data playing method provided in the embodiment of the present application is described in detail below. As shown in fig. 5, the audio data playing method includes the following steps S501 to S503. The execution body of the method shown in fig. 5 may be the above-mentioned electronic device. Alternatively, the execution body of the method shown in fig. 5 may be a chip in the electronic device, which is not limited in the embodiment of the present application. Fig. 5 takes the electronic device as the execution body of the method as an example.
S501, the electronic device plays the first audio data in response to a play operation on the first audio data at a play interface of the first application.
In the embodiment of the present application, the first application may be a third-party music application. The first application may have rich channel-encoded audio data, for example, audio data supporting channel coding such as 5.1 or 7.1. As shown in fig. 6A, the playing interface of the first application includes a play button, a previous-song button, and a next-song button. The user may play music A by clicking the play button in the playing interface of the first application, and the electronic device starts playing music A, where music A is regarded as the first audio data.
S502, in the case of playing the first audio data, the electronic device, in response to the first audio data being a media stream and the electronic device supporting a spatial audio function, performs spatial audio sound effect processing on the first audio data based on the attribute characteristics of the first audio data to obtain the first spatial audio data.
S503, the electronic device plays the first spatial audio data.
In the embodiment of the application, under the condition of playing the first audio data, the electronic device determines whether the first audio data is a media stream, and whether the electronic device supports a spatial audio function. The first audio data may be audio data of channel coding in the first application, for example, audio data supporting channel coding of 5.1, 7.1, etc. If the first audio data is a media stream and the electronic device supports a spatial audio function, spatial audio effect processing of the first audio data is further achieved according to the attribute characteristics of the first audio data, so that the first spatial audio data is obtained. After the electronic equipment generates the first spatial audio data, the first spatial audio data can be played for the user, so that the music playing quality is improved, and immersive spatial audio experience is brought to the user.
In one possible implementation manner, when the electronic device performs spatial audio effect processing on the first audio data based on the attribute feature of the first audio data to obtain the first spatial audio data, a specific implementation manner may be:
if the attribute characteristics of the first audio data do not include the first characteristics, the electronic equipment performs spatial audio sound effect processing on the first audio data to obtain first spatial audio data; the first characteristic indicates that the first audio data has been spatially audio sound processed, or the first characteristic indicates that the first audio data is not allowed to spatially audio sound processed, or the first characteristic indicates that a processing delay of the first audio data is below a preset threshold.
It will be appreciated that the first feature herein may indicate: (1) the first audio data has been subjected to spatial audio sound effect processing, which may be represented by the FLAG_CONTENT_SPATIALIZED flag; (2) the first audio data is not allowed to be subjected to spatial audio sound effect processing, which may be represented by the FLAG_NEVER_SPATIALIZE flag; (3) the processing delay of the first audio data is lower than a preset threshold, which may be represented by the FLAG_LOW_LATENCY flag. Of course, the first feature may also indicate other properties, which is not limited herein.
Assuming that the first feature indicates that the first audio data has been subjected to spatial audio sound effect processing, if the attribute mark bit set of the first audio data includes the first feature (i.e., carries the FLAG_CONTENT_SPATIALIZED flag), no further spatial audio sound effect processing may be performed, since repeating the spatial audio sound effect processing may impair sound quality. Thus, when the attribute mark bit set of the first audio data does not include the first feature, it is determined that the first audio data is capable of spatial audio sound effect processing.
Assuming that the first feature indicates that spatial audio sound effect processing is not allowed for the first audio data, if the attribute mark bit set of the first audio data includes the first feature (i.e., carries the FLAG_NEVER_SPATIALIZE flag), no further spatial audio sound effect processing may be performed. Thus, when the attribute mark bit set of the first audio data does not include the first feature, it is determined that the first audio data is capable of spatial audio sound effect processing.
It is assumed that the first feature indicates that the processing delay of the first audio data is below a preset threshold. The preset threshold may be 10 ms, or another small value (for example, a value less than 20 ms), which is not limited herein. The processing delay of the first audio data being lower than the preset threshold indicates that the first audio data needs to be played through a low-latency channel, that is, the processing delay requirement of the first audio data is low latency. If the attribute mark bit set of the first audio data includes the first feature (i.e., carries the FLAG_LOW_LATENCY flag), the initiator of the media-stream playback has a strict latency requirement, and no further spatial audio sound effect processing should be performed, since it would increase the processing duration of the first audio data, contradict the low-latency requirement, reduce the real-time performance of the first audio data, and degrade the user experience. Thus, when the attribute mark bit set of the first audio data does not include the first feature, it is determined that the first audio data is capable of spatial audio sound effect processing.
In general, when the attribute mark bit set of the first audio data does not include the first feature, it may be determined that the first audio data is capable of spatial audio sound effect processing. In this way, whether the current audio data meets the target requirement can be judged more efficiently, and audio data that does not meet the recommendation criteria can be screened out as early as possible, thereby improving the effectiveness of spatial audio sound effect processing.
The specific implementation process in the electronic device may refer to the steps S401 to S408, that is:
an audio service module (AudioService) calls the registerPlaybackCallback function to register monitoring of the playback service with a playing monitoring module (PlaybackActivityMonitor); the electronic device responds to the playing operation of the user on the first audio data at the playing interface of the first application, and the first application plays the first audio data through a playback data module (AudioTrack); when the playing monitoring module monitors that the first audio data (an audio stream) is played on the electronic device, it calls a callback function to notify the audio service module that the first audio data is currently being played; the audio service module analyzes the first audio data and judges whether the first audio data is a media stream; if the first audio data is a media stream, the audio service module further judges whether the electronic device supports the spatial audio function; if the electronic device supports the spatial audio function, the audio service module calls the config.getAudioAttributes function to acquire configuration parameters of the first audio data from an audio play configuration module (AudioPlaybackConfiguration); the audio service module calls the audioAttributes.getAllFlags function to acquire the attribute mark bit set of the first audio data from an audio attribute module (AudioAttributes); the audio service module judges whether the first audio data can be subjected to spatial audio sound effect processing based on the attribute mark bit set of the first audio data.
Optionally, when the electronic device performs spatial audio sound effect processing on the first audio data to obtain the first spatial audio data, a specific implementation manner may be: and the electronic equipment responds to the first audio data to be multi-channel audio data, and performs spatial audio effect processing on the first audio data to obtain first spatial audio data.
It is understood that after the electronic device determines that the first audio data is capable of spatial audio sound effect processing, the channel information of the first audio data is further analyzed. If the first audio data is mono audio data, the user cannot clearly perceive the immersive spatial audio experience, which easily degrades the user experience. Therefore, spatial audio sound effect processing of the first audio data is meaningful only when the first audio data is multi-channel audio data, so that the user can clearly perceive the immersive spatial audio experience; in that case, the first audio data is subjected to spatial audio sound effect processing to obtain the first spatial audio data, after which the recommendation flow is entered. Mono means that there is only one independent input or output signal in the audio system; multi-channel means that there are at least two independent input or output signals in the audio system, which can be used for immersive sound effects or wide stereo surround sound. In this way, the music playing quality, as well as the effectiveness and clarity of spatial audio sound effect processing, can be effectively improved, improving the user experience.
Further optionally, if the attribute feature of the first audio data does not include the first feature, the electronic device performs spatial audio sound effect processing on the first audio data to obtain first spatial audio data, and a specific implementation manner may be:
if the attribute features of the first audio data do not include the first feature, the electronic device performs spatial audio sound effect processing on the first audio data under the condition that the spatial audio function is started to obtain the first spatial audio data.
It can be understood that before the electronic device performs spatial audio sound effect processing on the first audio data, it further determines whether the spatial audio function is turned on, and different recommendation manners are provided for different situations. That is, when the electronic device determines that the first audio data is a media stream, the electronic device supports the spatial audio function, and the first audio data is multi-channel audio data, it determines whether the spatial audio function is turned on, and performs spatial audio sound effect processing on the first audio data only when the spatial audio function is turned on, so as to obtain the first spatial audio data. In this way, if the spatial audio function is not turned on, spatial audio sound effect processing need not be executed, which improves the effectiveness of spatial audio sound effect processing and saves power consumption.
The specific implementation process in the electronic device may refer to the steps S409 to S411, that is:
if the first audio data can be subjected to spatial audio effect processing, the audio service module further calls the getChannelMask function to acquire channel information of the first audio data from the audio play configuration module; the audio service module judges whether the first audio data is multi-channel audio data based on the channel information of the first audio data; if the first audio data is multi-channel audio data, the audio service module detects whether the spatial audio function is turned on.
In one possible implementation, the playback interface includes a spatial audio function switch. The following describes in detail the recommended manner corresponding to the two cases where the spatial audio function switch is turned on (i.e., the spatial audio function is turned on) and the spatial audio function switch is turned off (i.e., the spatial audio function is turned off).
Case one: the electronic device displays a second prompt box in response to the spatial audio function switch not being turned on; the second prompt box is used for recommending that the user turn on the spatial audio function.
It can be understood that, when the spatial audio function is not turned on, the second prompt box recommends that the user turn on the spatial audio function (i.e., recommends the spatial audio function switch), prompting the user that the current audio would have a better experience effect with spatial audio.
As shown in fig. 6B, a spatial audio function switch is displayed on the playback interface of the first application, and the user may turn on or off the spatial audio function through the spatial audio function switch. At this time, the spatial audio function switch in fig. 6B is in an off state (i.e., the spatial audio function switch is not turned on), and then the electronic device further needs to display a second prompt box, where the user is prompted in the second prompt box: "the effect of turning on the spatial audio function is better".
The specific implementation process in the electronic device may refer to the steps S412 to S414, that is:
if the spatial audio function is not turned on, the audio service module indicates to an audio service extension module (HwAudioServiceEx) that the spatial audio function is not turned on; the audio service extension module calls a notify function to indicate to a notification manager (NotificationManager) that the spatial audio function is not turned on; the notification manager displays a second prompt box for recommending that the user turn on the spatial audio function.
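A minimal sketch of this S412 to S414 notification path using the public NotificationManager API; the channel id and wording here are illustrative assumptions, not taken from the patent:

```java
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;

public final class SpatialAudioPrompts {
    // Steps S412 to S414: with the spatial audio switch off, post a
    // notification recommending that the user turn it on.
    static void recommendEnablingSpatialAudio(Context context) {
        NotificationManager nm =
                context.getSystemService(NotificationManager.class);
        NotificationChannel channel = new NotificationChannel(
                "spatial_audio_tips", "Spatial audio tips",
                NotificationManager.IMPORTANCE_DEFAULT);
        nm.createNotificationChannel(channel);

        Notification prompt = new Notification.Builder(context, "spatial_audio_tips")
                .setSmallIcon(android.R.drawable.ic_media_play) // placeholder icon
                .setContentTitle("Spatial audio")
                .setContentText("The effect of turning on the spatial audio function is better")
                .build();
        nm.notify(/* id= */ 1001, prompt);
    }
}
```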
Case two: the electronic device displays a first prompt box in response to the spatial audio function switch being turned on; the first prompt box is used for recommending spatial audio data areas to the user.
It can be understood that after the spatial audio function is turned on, spatial audio sound effect processing is performed on the first audio data to obtain the first spatial audio data, and the first spatial audio data is played. At this point, the user can hear the first audio data after spatial audio sound effect processing, which brings an immersive spatial audio experience to the user. Meanwhile, the electronic device can display a first prompt box for recommending spatial audio data areas to the user; a spatial audio data area can recommend to the user the audio in the album corresponding to the first audio data, jump links to the dedicated spatial audio resource areas of the first application or other applications, and the like; after clicking, the user can experience the immersion brought by more spatial audio. In this way, spatial audio data can be intelligently recommended to the user, improving the user experience.
As shown in fig. 6C, the user turns on the spatial audio function through the spatial audio function switch in the playing interface of the first application. After detecting that the spatial audio function switch is turned on, the electronic device performs spatial audio sound effect processing on music A (i.e., the first audio data) to obtain the processed music A (i.e., the first spatial audio data) and plays it; the user can then hear music A after spatial audio sound effect processing, which brings an immersive spatial audio experience. Meanwhile, a first prompt box is displayed in the playing interface, prompting the user: "experience more spatial audio data areas"; the first prompt box includes three option buttons for spatial audio data areas, namely an option button for spatial audio data area A, an option button for spatial audio data area B, and an option button for spatial audio data area C.
The specific implementation process in the electronic device may refer to the steps S415 to S417, that is:
the notification manager, in response to a user operation of turning on the spatial audio function, calls a setEnabled function to notify a spatial audio processing module (Spatializer) that the spatial audio function is turned on; the spatial audio processing module performs spatial audio sound effect processing on the first audio data to obtain the first spatial audio data, and plays the first spatial audio data; the notification manager displays a first prompt box for recommending spatial audio data areas to the user.
Optionally, before the electronic device displays the first prompt box in response to the spatial audio function switch being turned on, the method further includes: storing address information of the spatial audio data areas corresponding to one or more applications when the electronic device supports the spatial audio function and the one or more applications have registered permission with the electronic device; the one or more applications include the first application.
Further optionally, a specific implementation of storing the address information of the spatial audio data areas corresponding to the one or more applications, when the electronic device supports the spatial audio function and the one or more applications have registered permission with the electronic device, may be: when the electronic device supports the spatial audio function, acquiring a storage service corresponding to the spatial audio function; and in response to the one or more applications registering permission with the electronic device, storing the address information of the spatial audio data areas corresponding to the one or more applications based on the storage service corresponding to the spatial audio function.
It may be understood that, before the first prompt box is displayed in response to the spatial audio function switch being turned on, the third-party application may actively preset the address information of its corresponding spatial audio data area into the electronic device at startup or at another suitable time by accessing the open capability of the AudioKit. Specifically, the AudioKit provides an SDK interface for the third-party application to integrate and call; the third-party application can call the corresponding AudioKit interface at startup or at another suitable time to pre-store the address information corresponding to its spatial audio data area into the electronic device, thereby realizing the combination with the third-party application. There may be one or more third-party applications; the third-party applications here may be regarded as the one or more applications described above.
Taking the first application as an example of a third-party application, the first application may actively execute steps S301 to S311 as described in fig. 4A at startup or at another suitable time, so that the address information of the spatial audio data area corresponding to the first application is pre-stored in the electronic device, that is:
the first application calls the isDeviceSupport function to determine, through an audio service SDK (AudioKit), whether the electronic device supports the audio open service; the audio service SDK notifies the first application that the electronic device provides the audio open service; the first application calls a construction function to construct an object in the audio service SDK; after the audio service SDK calls the init function for initialization, it calls the bindService function to notify the audio open service (Audio Engine) that the first application is bound on the electronic device; after the audio open service binds the first application on the electronic device, the audio open service feeds back to the first application through the audio service SDK; the first application calls the getSupportedServices function to acquire the audio open service list from the audio open service through the audio service SDK; the audio open service feeds back the audio open service list to the first application through the audio service SDK; the first application calls the createService function to construct an object in the audio service SDK; the audio service SDK calls the bindService function to notify the audio open service that an audio service supporting spatial audio data storage (i.e., a storage service) is bound on the first application; after the audio open service binds the audio service supporting spatial audio data storage on the first application, the audio open service feeds back to the first application through the audio service SDK; the first application calls the addSpatialAudioZoneAddress function to notify the audio open service, through the audio service SDK, of the address information of the spatial audio data area corresponding to the first application; the audio open service calls the checkPermission function to confirm with a verification module (KitAssistant SDK) whether the first application has registered permission; the verification module notifies the audio open service that the first application has registered permission (i.e., verification succeeds); the audio open service calls the saveSpatialAudioZoneAddress function to store the address information of the spatial audio data area corresponding to the first application; after the audio open service stores the address information of the spatial audio data area corresponding to the first application, the return value is fed back to the first application through the audio service SDK; the first application calls the destroy function to notify the audio service SDK to destroy the object, and the audio service SDK calls the unbindService function to notify the audio open service to release the audio service bound by the first application.
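The flow above can be summarized in code form. The method names (isDeviceSupport, init, getSupportedServices, createService, addSpatialAudioZoneAddress, destroy) follow the patent's own description; the class names (AudioKitClient, SpatialStorageService) and the zone address URI are hypothetical, since this SDK surface is not a published public API, so treat this as an illustrative sketch rather than a working integration:

```java
import android.content.Context;

// Illustrative sketch only: class names and the zone URI are hypothetical;
// the method names follow the patent text (steps S301 to S311).
public final class SpatialZoneRegistrar {
    void register(Context context) {
        AudioKitClient kit = new AudioKitClient(context);  // construct the SDK object
        if (!kit.isDeviceSupport()) {
            return;                                        // no audio open service on this device
        }
        kit.init();                                        // binds the audio open service (Audio Engine)
        if (!kit.getSupportedServices().contains(AudioKitClient.SPATIAL_AUDIO_STORAGE)) {
            return;                                        // storage service not offered
        }
        SpatialStorageService storage =
                kit.createService(AudioKitClient.SPATIAL_AUDIO_STORAGE);
        // The audio open service runs checkPermission against the verification
        // module before persisting this address.
        storage.addSpatialAudioZoneAddress("myapp://spatial/zone-a");
        kit.destroy();                                     // unbind the bound audio service
    }
}
```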
Further optionally, the method further comprises: the electronic device, in response to a registration failure of a third application, displays a third prompt box; the third prompt box is used for prompting the user that the third application cannot use the spatial audio function. In this way, the user can be effectively informed of the registration failure of the third application and reminded that the third application may be risky, improving the security of the electronic device.
As shown in fig. 6D, if the third application is started and the verification module of the electronic device fails to verify the third application (i.e., registration fails), this indicates that the third application may be risky, and the user needs to be prompted through the third prompt box: "the third application cannot normally use the spatial audio function".
Further optionally, the first prompt box includes options of the spatial audio data areas corresponding to the one or more applications; the method further comprises: the electronic device, in response to an operation of triggering the option of the first spatial audio data area, displays a recommendation interface corresponding to the first spatial audio data area based on the address information of the first spatial audio data area; the first spatial audio data area is one of the spatial audio data areas corresponding to the one or more applications.
It can be understood that, when the user clicks the option of a certain spatial audio data area (i.e., the first spatial audio data area) in the first prompt box, the electronic device jumps to the recommendation interface corresponding to the first spatial audio data area according to the previously stored address information. If the first spatial audio data area is the spatial audio data area corresponding to the first application, the display jumps to the recommendation interface corresponding to the first spatial audio data area within the first application; if the first spatial audio data area is the spatial audio data area corresponding to a second application (i.e., another application), the display jumps from the playing interface of the first application to the recommendation interface corresponding to the first spatial audio data area in the second application. In this way, more spatial audio data can be intelligently recommended to the user, improving the user experience, while also driving traffic to other audio data or applications.
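Assuming the stored address information is a deep link, the jump of S418 could be sketched with a standard ACTION_VIEW intent (the URI scheme here is an illustrative assumption):

```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public final class ZoneNavigation {
    // Step S418: treat the stored address as a deep link, so tapping the option
    // opens the recommendation interface in whichever app owns the link.
    static void openSpatialAudioZone(Context context, String storedAddress) {
        Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(storedAddress));
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); // may jump into another app
        context.startActivity(intent);
    }
}
```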
As shown in fig. 7A, the first prompt box includes an option button for spatial audio data area A, an option button for spatial audio data area B, and an option button for spatial audio data area C. Spatial audio data area A is the spatial audio data area corresponding to the first application, spatial audio data area B is the spatial audio data area corresponding to the second application, and spatial audio data area C is the spatial audio data area corresponding to a third application. The user clicks the option button for spatial audio data area A in the first prompt box (i.e., spatial audio data area A is the first spatial audio data area), and according to the previously stored address information of spatial audio data area A, the recommendation interface corresponding to spatial audio data area A is displayed in the first application; this recommendation interface includes music A-singer 1, music B-singer 2, music C-singer 3, music D-singer 4, music E-singer 5, and music F-singer 6.
As further shown in fig. 7B, the first prompt box includes an option button for spatial audio data area A, an option button for spatial audio data area B, and an option button for spatial audio data area C. Spatial audio data area A is the spatial audio data area corresponding to the first application, spatial audio data area B is the spatial audio data area corresponding to the second application, and spatial audio data area C is the spatial audio data area corresponding to a third application. The user clicks the option button for spatial audio data area B in the first prompt box (i.e., spatial audio data area B is the first spatial audio data area), and according to the previously stored address information of spatial audio data area B, the display jumps from the first application to the second application to show the recommendation interface corresponding to spatial audio data area B; this recommendation interface includes music 1-singer a, music 2-singer b, music 3-singer c, music 4-singer d, music 5-singer e, and music 6-singer f.
Further optionally, the first spatial audio data area is the spatial audio data area corresponding to the first application, and the recommendation interface of the first spatial audio data area includes an option for the first spatial audio data and an option for the second spatial audio data; the second spatial audio data is associated with the first spatial audio data.
It will be understood that the audio data in the same spatial audio data area may be related: for example, audio data of the same type, audio data from the same album, audio data sung by the same singer, recently played audio data, or the most popular audio data, which is not limited herein. In this way, other spatial audio data related to the current audio data can be intelligently recommended to the user, improving the user experience.
Illustratively, as shown in (a) of fig. 7C, the first application corresponds to spatial audio data area A, and the recommendation interface corresponding to spatial audio data area A includes music A-singer 1, music B-singer 2, music C-singer 3, music D-singer 4, music E-singer 5, and music F-singer 6. The music recommended in spatial audio data area A is of the same type. For example, assuming that "music A-singer 1" is the option for the first spatial audio data and "music B-singer 2" is the option for the second spatial audio data, music A and music B here are associated as the same type of music, that is, the second spatial audio data is associated with the first spatial audio data.
Illustratively, as shown in (b) of fig. 7C, the first application corresponds to spatial audio data area A, and the recommendation interface corresponding to spatial audio data area A includes music A-singer 1, music B-singer 1, music C-singer 1, music D-singer 1, music E-singer 1, and music F-singer 1. The music recommended in spatial audio data area A is sung by the same singer in the same album. For example, assuming that "music A-singer 1" is the option for the first spatial audio data and "music B-singer 1" is the option for the second spatial audio data, music A and music B here are associated as music sung by the same singer in the same album, that is, the second spatial audio data is associated with the first spatial audio data.
Optionally, the method further comprises: when the spatial audio function is turned on, the electronic device, in response to an operation of exiting the playing interface, plays the first spatial audio data in the background and displays a floating window icon; the electronic device, in response to an operation of triggering the floating window icon, displays a floating window interface; the floating window interface includes a spatial audio function switch and a plurality of spatial audio data areas, and the applications corresponding to the plurality of spatial audio data areas are the same or different. In this way, the convenience of playing spatial audio data can be improved, improving the user experience.
As shown in fig. 7D, when the spatial audio function is turned on, music A after spatial audio sound effect processing (i.e., the first spatial audio data) is played in the first application. After the user exits the playing interface of the first application and returns to the desktop, the electronic device can continue to play the first spatial audio data in the background, and a floating window icon is displayed on the desktop. The user may click the floating window icon at any time to enter the floating window interface. The floating window interface includes a player of the first application, a spatial audio function switch, and option buttons for a plurality of spatial audio data areas. The user can conveniently switch songs, pause songs, play songs, and so on in the floating window interface through the player of the first application; the user can also directly control the on or off of the spatial audio function through the spatial audio function switch in the floating window interface. In addition, the option buttons for the plurality of spatial audio data areas include an option button for spatial audio data area A, an option button for spatial audio data area B, and an option button for spatial audio data area C, and the user can switch among the spatial audio data areas by clicking these option buttons.
Therefore, based on the manner described in fig. 5, when playing audio data, spatial audio data with a better effect can be generated by combining the spatial audio technology of the electronic device with the channel-encoded audio resources of the third-party application, so that the electronic device and the third-party application work together, spatial audio data can be effectively played for the user, and the music playing quality is improved.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an audio data playing device 800 according to an embodiment of the present application. The audio data playing device shown in fig. 8 may be an electronic device, or may be a device in an electronic device, or may be a device that can be used in a matching manner with an electronic device. The audio data playback apparatus shown in fig. 8 may include a playback unit 801 and a processing unit 802. Wherein:
a playing unit 801 for playing the first audio data in response to a playing operation of the first audio data at a playing interface of the first application;
a processing unit 802, configured to respond to the first audio data being a media stream and the electronic device supporting a spatial audio function, and perform spatial audio effect processing on the first audio data based on attribute features of the first audio data to obtain first spatial audio data;
The playing unit 801 is further configured to play the first spatial audio data.
In a possible implementation manner, the processing unit 802 is specifically configured to, when performing spatial audio sound effect processing on the first audio data based on the attribute feature of the first audio data to obtain first spatial audio data: if the attribute characteristics of the first audio data do not include the first characteristics, performing spatial audio sound effect processing on the first audio data to obtain first spatial audio data; the first characteristic indicates that the first audio data has been spatially audio sound processed, or the first characteristic indicates that the first audio data is not allowed to spatially audio sound processed, or the first characteristic indicates that a processing delay of the first audio data is below a preset threshold.
In one possible implementation manner, the processing unit 802 is specifically configured to, when performing spatial audio sound effect processing on the first audio data if the attribute feature of the first audio data does not include the first feature, obtain the first spatial audio data: if the attribute features of the first audio data do not include the first feature, performing spatial audio sound effect processing on the first audio data under the condition that the spatial audio function is started to obtain the first spatial audio data.
In one possible implementation manner, the processing unit 802 is specifically configured to, when performing spatial audio sound effect processing on the first audio data to obtain first spatial audio data: and responding to the first audio data as multi-channel audio data, and performing spatial audio effect processing on the first audio data to obtain first spatial audio data.
In one possible implementation, the playback interface includes a spatial audio function switch; the device further comprises a display unit for: displaying a first prompt box in response to the spatial audio function switch being turned on; the first prompt box is used for recommending a special area of the user space audio data; displaying a second prompt box in response to the spatial audio function switch not being turned on; the second prompt box is used for recommending the user to start the spatial audio function.
In one possible implementation, the apparatus further includes a storage unit configured to, before the first prompt box is displayed in response to the spatial audio function switch being turned on: store address information of the spatial audio data areas corresponding to one or more applications when the electronic device supports the spatial audio function and the one or more applications have registered permission with the electronic device; the one or more applications include the first application.
In one possible implementation, the first prompt box includes options of the spatial audio data areas corresponding to the one or more applications; the display unit is further configured to: in response to an operation of triggering the option of the first spatial audio data area, display a recommendation interface corresponding to the first spatial audio data area based on the address information of the first spatial audio data area; the first spatial audio data area is one of the spatial audio data areas corresponding to the one or more applications.
In one possible implementation, the first spatial audio data area is the spatial audio data area corresponding to the first application, and the recommendation interface of the first spatial audio data area includes an option for the first spatial audio data and an option for the second spatial audio data; the second spatial audio data is associated with the first spatial audio data.
For the case where the audio data playback apparatus may be a chip or a chip system, reference may be made to the schematic structural diagram of the chip shown in fig. 9. The chip 900 shown in fig. 9 includes a processor 901 and an interface 902. Optionally, a memory 903 may also be included. Wherein the number of processors 901 may be one or more, and the number of interfaces 902 may be a plurality.
For the case where the chip is used to implement the electronic device in the embodiments of the present application:
the interface 902 is configured to receive or output a signal;
the processor 901 is configured to perform data processing operations of an electronic device.
It can be understood that, in some scenarios, some optional features in the embodiments of the present application may be implemented independently of other features, such as the schemes on which they are currently based, so as to solve corresponding technical problems and achieve corresponding effects, or, in some scenarios, may be combined with other features as required. Accordingly, the audio data playing apparatus provided in the embodiments of the present application may also implement these features or functions accordingly, which will not be described herein.
It should be appreciated that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in a processor or instructions in software form. The processor may be a general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
It will be appreciated that the memory in embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The application also provides an audio data playing system, which comprises the electronic equipment; the electronic device is configured to execute the method executed by the electronic device in any of the method embodiments described above.
The present application also provides a computer readable storage medium having stored therein a computer program comprising program instructions for implementing the functions of any of the method embodiments described above when the program instructions are run on an electronic device.
The present application also provides a computer program product which, when run on a computer, causes the computer to carry out the functions of any of the method embodiments described above.
In the above embodiments, the implementation may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (digital video disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method of playing audio data, the method comprising:
playing the first audio data in response to a playing operation of the first audio data on a playing interface of the first application;
in the case of playing the first audio data, in response to the first audio data being a media stream and the electronic device supporting a spatial audio function, performing spatial audio effect processing on the first audio data based on attribute characteristics of the first audio data to obtain first spatial audio data;
and playing the first spatial audio data.
2. The method of claim 1, wherein the performing spatial audio sound effect processing on the first audio data based on the attribute features of the first audio data to obtain first spatial audio data comprises:
If the attribute characteristics of the first audio data do not include the first characteristics, performing spatial audio sound effect processing on the first audio data to obtain first spatial audio data;
the first characteristic indicates that the first audio data has been subjected to spatial audio sound effect processing, or the first characteristic indicates that the first audio data is not allowed to be subjected to spatial audio sound effect processing, or the first characteristic indicates that the processing delay of the first audio data is lower than a preset threshold.
3. The method according to claim 2, wherein if the attribute features of the first audio data do not include the first feature, performing spatial audio sound effect processing on the first audio data to obtain first spatial audio data, including:
if the attribute features of the first audio data do not include the first features, under the condition that the spatial audio function is started, performing spatial audio sound effect processing on the first audio data to obtain first spatial audio data.
4. A method according to claim 2 or 3, wherein said performing spatial audio sound effect processing on said first audio data to obtain first spatial audio data comprises:
And responding to the first audio data as multi-channel audio data, and performing spatial audio effect processing on the first audio data to obtain first spatial audio data.
5. The method of claim 1, wherein the playback interface comprises a spatial audio function switch; the method further comprises the steps of:
displaying a first prompt box in response to the spatial audio function switch being turned on; the first prompt box is used for recommending a spatial audio data area to the user;
displaying a second prompt box in response to the spatial audio function switch not being turned on; the second prompt box is used for recommending the user to start the spatial audio function.
6. The method of claim 5, wherein the method further comprises, in response to the spatial audio function switch being turned on, prior to displaying a first prompt box:
storing address information of a spatial audio data area corresponding to one or more applications when the electronic device supports the spatial audio function and the one or more applications have registered permission with the electronic device; the one or more applications include the first application.
7. The method of claim 6, wherein the first prompt box includes options for spatial audio data areas corresponding to the one or more applications; the method further comprises the steps of:
Responding to the operation of triggering the option of the first spatial audio data area, and displaying a recommendation interface corresponding to the first spatial audio data area based on the address information of the first spatial audio data area; the first spatial audio data area is one of the spatial audio data areas corresponding to the one or more applications.
8. The method of claim 7, wherein the first spatial audio data area is the spatial audio data area corresponding to the first application, and the recommendation interface corresponding to the first spatial audio data area includes an option for the first spatial audio data and an option for the second spatial audio data; the second spatial audio data is associated with the first spatial audio data.
9. An electronic device, comprising one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method according to any one of claims 1-8.
10. An audio data playing system, comprising an electronic device, wherein the electronic device is configured to perform the method according to any one of claims 1-8.
11. A chip, comprising a processor and an interface coupled to each other, wherein the interface is configured to receive or output signals, and the processor is configured to execute code instructions to cause the method according to any one of claims 1-8 to be performed.
12. A computer storage medium storing a computer program, the computer program comprising program instructions that, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1-8.
13. A computer program product which, when run on a computer, causes the computer to perform the method according to any one of claims 1-8.
CN202311798813.5A 2023-12-26 2023-12-26 Audio data playing method and electronic equipment Pending CN117499850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311798813.5A CN117499850A (en) 2023-12-26 2023-12-26 Audio data playing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117499850A (en) 2024-02-02

Family

ID=89680284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311798813.5A Pending CN117499850A (en) 2023-12-26 2023-12-26 Audio data playing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117499850A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130154930A1 (en) * 2011-12-19 2013-06-20 Qualcomm Incorporated Gesture controlled audio user interface
CN109287140A (en) * 2017-05-16 2019-01-29 苹果公司 Method and interface for home media control
CN112219411A (en) * 2018-03-29 2021-01-12 诺基亚技术有限公司 Spatial sound rendering
US20210210104A1 (en) * 2018-05-31 2021-07-08 Nokia Technologies Oy Spatial Audio Parameter Merging
CN114631141A (en) * 2019-10-30 2022-06-14 杜比实验室特许公司 Multi-channel audio encoding and decoding using directional metadata
CN115706883A (en) * 2021-08-06 2023-02-17 北京小米移动软件有限公司 Audio signal processing method and device
WO2023029967A1 (en) * 2021-08-31 2023-03-09 华为技术有限公司 Audio playback method, and electronic device
CN116208907A (en) * 2023-02-17 2023-06-02 深圳市倍思科技有限公司 Spatial audio processing device, apparatus, method and headphone
CN116578214A (en) * 2020-07-20 2023-08-11 苹果公司 System, method, and graphical user interface for selecting an audio output mode of a wearable audio output device
US20230300532A1 (en) * 2020-07-28 2023-09-21 Sonical Sound Solutions Fully customizable ear worn devices and associated development platform

Similar Documents

Publication Publication Date Title
KR102575123B1 (en) Application display method and electronic device
CN113778574B (en) Card sharing method, electronic equipment and communication system
WO2022089207A1 (en) Cross-device application interaction method, electronic device, and server
CN115297405A (en) Audio output method and terminal equipment
CN113806105A (en) Message processing method and device, electronic equipment and readable storage medium
CN113986092B (en) Message display method and device
US20230259250A1 (en) Control method and apparatus, and electronic device
WO2023130921A1 (en) Method for page layout adapted to multiple devices, and electronic device
CN114915618A (en) Upgrade package downloading method and device
EP4354270A1 (en) Service recommendation method and electronic device
WO2023088459A1 (en) Device collaboration method and related apparatus
CN117499850A (en) Audio data playing method and electronic equipment
WO2022052928A1 (en) Application access method and related apparatus
CN115643339A (en) Method for adjusting volume, electronic device and computer readable storage medium
CN115438354A (en) User privacy protection method and device
CN115203716A (en) Permission synchronization method, related device and system
CN115130132A (en) Access control method for accurately revoking authority, related device and system
CN114006969B (en) Window starting method and electronic equipment
CN116828102B (en) Recording method, recording device and storage medium
WO2024078412A1 (en) Cross-screen sharing method, graphical interface, and related apparatus
EP4177777A1 (en) Flexibly authorized access control method, and related apparatus and system
WO2023131051A1 (en) Content sharing method and electronic device
WO2024067170A1 (en) Device management method and electronic device
CN114911438A (en) Task switching system and method, electronic device included in task switching system, and readable medium
CN116560536A (en) Application component setting method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination