WO2024093922A1 - Audio control method, storage medium, program product and electronic device - Google Patents

Audio control method, storage medium, program product and electronic device

Info

Publication number
WO2024093922A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
electronic device
focus
application
stack
Prior art date
Application number
PCT/CN2023/127789
Other languages
English (en)
French (fr)
Inventor
朱超超
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024093922A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output

Definitions

  • the present application relates to the field of computer technology, and in particular to an audio control method, a storage medium, a program product, and an electronic device.
  • the existing audio output processing method is: when device A outputs the audio of application 1, the user starts application 2 on device A to play the audio of application 2, then device A turns off the audio of application 1 or reduces the audio volume to output the audio of application 2.
  • the present application provides an audio control method, a storage medium, a program product, and an electronic device, which separately manage focus information through a first focus stack and a second focus stack, and separately manage audio applications that play audio through different electronic devices by separately managing the focus information, thereby achieving mutual independence of the audio played through the second electronic device and the audio played through the first electronic device, so as to achieve a distributed audio experience.
  • the present application provides an audio control method, which is applied to a first electronic device, on which a first audio application is installed, and the first electronic device includes a first focus stack, wherein the first focus stack is used to store focus information corresponding to the audio application that plays audio through the first electronic device, and the audio control method includes: creating a second focus stack, wherein the second focus stack is used to store focus information corresponding to the audio application that plays audio through the second electronic device; in response to a first instruction, placing the first focus information of the first audio application at the top of the second focus stack, and notifying the first audio application to obtain audio focus, so that when the content of the first audio application is transferred to the second electronic device, the audio of the first audio application is played through the second electronic device.
  • the audio control method of the present application creates and maintains at least two focus stacks.
  • the first focus stack and the second focus stack are both used to store the focus information of the audio application applying for audio focus.
  • the difference between the first focus stack and the second focus stack is that the audio of the audio application corresponding to the focus information stored in the first focus stack is played through the first electronic device, and the audio of the audio application corresponding to the focus information stored in the second focus stack is played through the second electronic device.
  • the focus information is managed separately through the first focus stack and the second focus stack, and the audio applications that play audio through different electronic devices are managed separately by separately managing the focus information, thereby achieving the independence of the audio played through the second electronic device and the audio played through the first electronic device, so as to achieve a distributed audio experience.
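The dual-stack bookkeeping described above can be sketched as a minimal model. This is an illustrative sketch in plain Python, not the patented or Android implementation; all class and method names (`FocusStack`, `FocusManager`, `grant_local`, `grant_remote`) are assumptions introduced for illustration:

```python
class FocusStack:
    """Orders focus information; the top entry's application holds audio focus."""
    def __init__(self):
        self.entries = []  # list of focus-info dicts; last item = top of stack

    def push_top(self, info):
        # Re-pushing existing focus information moves it to the top.
        if info in self.entries:
            self.entries.remove(info)
        self.entries.append(info)

    def top(self):
        return self.entries[-1] if self.entries else None


class FocusManager:
    """Keeps one stack per playback device, mirroring the two-stack scheme."""
    def __init__(self):
        self.first_stack = FocusStack()   # audio played through the first device
        self.second_stack = FocusStack()  # audio played through the second device

    def grant_local(self, info):
        self.first_stack.push_top(info)

    def grant_remote(self, info):
        self.second_stack.push_top(info)


mgr = FocusManager()
mgr.grant_remote({"app": "music"})  # first instruction: app plays on device B
mgr.grant_local({"app": "video"})   # local playback is unaffected
```

Because each device has its own stack, granting focus to a local application never disturbs the top of the remote stack, which is the independence property the text describes.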
  • the audio control method further includes: obtaining first information, wherein the first information is used to indicate that the audio of the second audio application is played through the second electronic device; in response to obtaining the first information, obtaining second focus information of the second audio application, placing the second focus information at the top of the second focus stack, and notifying the first audio application of the loss of audio focus.
  • before responding to the obtaining of the first information, the audio control method further includes: obtaining the first information from the second electronic device through the distributed fusion perception platform service.
  • the audio control method further includes: obtaining second information, wherein the second information is used to indicate that the second audio application has lost audio focus; in response to obtaining the second information, placing the first focus information at the top of the second focus stack, and notifying the first audio application to obtain the audio focus.
  • before responding to the obtaining of the second information, the audio control method further includes: obtaining the second information from the second electronic device through the distributed fusion perception platform service.
  • the first electronic device transfers the first audio application to the second electronic device for playback; when the first audio application has not finished playing and the second audio application applies for playback, the second audio application can be played normally on the second electronic device; and when the second audio application loses the audio focus (for example, after its playback ends), the first audio application can continue to be played on the second electronic device.
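The preempt-and-restore behaviour above can be illustrated with a toy model of the second focus stack. This is illustrative Python under assumed names (`RemoteFocusStack`, `push_top`), not the patented implementation; the notification list stands in for the gain/loss callbacks:

```python
class RemoteFocusStack:
    """Toy model of the second focus stack: the top entry holds focus on device B."""
    def __init__(self):
        self.entries = []
        self.notifications = []  # (app, event) pairs recording gain/loss callbacks

    def push_top(self, info):
        previous = self.entries[-1] if self.entries else None
        if info in self.entries:
            self.entries.remove(info)
        self.entries.append(info)
        if previous and previous is not info:
            self.notifications.append((previous["app"], "loss"))
        self.notifications.append((info["app"], "gain"))


stack = RemoteFocusStack()
app1 = {"app": "music"}
app2 = {"app": "navigation"}

stack.push_top(app1)  # first instruction: app1 gains focus on device B
stack.push_top(app2)  # first information: app2 plays on device B, app1 loses focus
stack.push_top(app1)  # second information: app2 lost focus, app1 regains it
```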
  • placing the first focus information of the first audio application at the top of the second focus stack includes: obtaining a first operation; in response to the first operation, determining the first focus stack corresponding to the first operation from among the first focus stack and the second focus stack, and placing the first focus information of the first audio application at the top of the first focus stack so as to play the audio of the first audio application through the first electronic device; obtaining a second operation, wherein the second operation is used to instruct that the first audio application that is playing audio be transferred to the second electronic device; and in response to the first instruction corresponding to the second operation, determining the second focus stack corresponding to the first instruction from among the first focus stack and the second focus stack, and moving the first focus information of the first audio application from the first focus stack to the top of the second focus stack.
  • the first electronic device can transfer the first audio application that is playing audio to the second electronic device for playback.
  • before responding to the first instruction, the method further includes: in response to a third operation, transferring the first audio application to the second electronic device; and obtaining the first instruction upon detecting that the first audio application transferred to the second electronic device applies to play audio.
  • the first electronic device can transfer the first audio application that is not playing audio to the second electronic device for playback.
  • the audio control method further includes: obtaining a second instruction, wherein the second instruction is used to instruct to migrate the first audio application back to the first electronic device; in response to the second instruction, moving the first focus information from the second focus stack to the top of the first focus stack, and notifying the first audio application to obtain the audio focus.
  • the first electronic device can migrate the first audio application migrated to the second electronic device back, and the migrated first audio application plays audio on the first electronic device.
  • creating the second focus stack includes: obtaining a device identification of the second electronic device; and creating the second focus stack according to the device identification.
  • when there are N device identifications, creating the second focus stack according to the device identification includes: creating N second focus stacks according to the N device identifications, respectively.
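Creating one second focus stack per device identification can be sketched as follows. This is an illustrative Python fragment; the function name `create_remote_stacks` and the example device identifiers are assumptions:

```python
def create_remote_stacks(device_ids):
    """One second focus stack per connected second electronic device."""
    return {device_id: [] for device_id in device_ids}  # each list acts as a stack


# Hypothetical device identifications for three second electronic devices.
stacks = create_remote_stacks(["tv-01", "speaker-02", "tablet-03"])
```

Keying the stacks by device identification lets the first electronic device route each application's focus information to the stack of the device that actually plays its audio.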
  • the present application provides an audio control method, which is applied to a first electronic device and a second electronic device, wherein the first electronic device and the second electronic device are communicatively connected, a first audio application is installed on the first electronic device, and the second electronic device includes a first local focus stack, and the first local focus stack is used to store focus information of an audio application that plays audio through the second electronic device.
  • the method includes: the first electronic device obtains a first instruction; the first electronic device responds to the first instruction, notifies the first audio application to obtain audio focus, and transmits the first focus information of the first audio application to the second electronic device; the second electronic device creates first simulated focus information based on the first focus information, and places the first simulated focus information at the top of the first local focus stack, so that when the content of the first audio application is transferred to the second electronic device, the audio of the first audio application is played through the second electronic device.
  • the present application stores the focus information of the audio application in the local focus stack of the electronic device that plays the audio application.
  • the focus information of the first audio application applying for the audio focus is stored in the first local focus stack of the electronic device B, and the first local focus stack is used to maintain the focus information corresponding to the first audio application that has been transferred.
  • the focus information of the second audio application applying for the audio focus is stored in the second local focus stack of the electronic device A, and the second local focus stack is used to maintain the focus information corresponding to the second audio application that has been transferred.
  • the audio applications that play audio through different electronic devices can be managed separately, thereby achieving the independence of the audio played through the second electronic device and the audio played through the first electronic device, so as to achieve a distributed audio experience.
  • the second electronic device includes a second audio application
  • the audio control method further includes: the second electronic device obtains a fourth operation, wherein the fourth operation indicates that the audio of the second audio application is to be played through the second electronic device; in response to the fourth operation, the second electronic device places the second focus information of the second audio application at the top of the first local focus stack and notifies the second audio application that it has obtained the audio focus; in response to the second audio application obtaining the audio focus, the first electronic device notifies the first audio application that it has lost the audio focus.
  • the first electronic device includes a second local focus stack
  • the audio control method further includes: in response to a migration instruction, the first electronic device obtains the first simulated focus information from the second electronic device; the first electronic device places the first simulated focus information at the top of the second local focus stack and notifies the first audio application that it has obtained the audio focus, and the first electronic device plays the audio of the first audio application.
  • the first electronic device can migrate the first audio application migrated to the second electronic device back, and the migrated first audio application plays audio on the first electronic device.
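The second aspect's flow — transmitting focus information and creating simulated focus information on the receiving device's local stack — can be modelled roughly as below. This is an illustrative Python sketch; the inter-device message passing is reduced to a function call, and names such as `Device` and `transfer_focus` are assumptions:

```python
class Device:
    def __init__(self, name):
        self.name = name
        self.local_stack = []  # local focus stack; last element = top
        self.events = []       # recorded gain/loss notifications

    def notify(self, app, event):
        self.events.append((app, event))


def transfer_focus(sender, receiver, focus_info):
    """The sender notifies its application of focus gain and ships the focus
    information to the receiver, which wraps it as simulated focus information
    and places it at the top of its local focus stack."""
    sender.notify(focus_info["app"], "gain")
    simulated = {"simulated": True, **focus_info}
    receiver.local_stack.append(simulated)
    return simulated


device_a = Device("first electronic device")
device_b = Device("second electronic device")
sim = transfer_focus(device_a, device_b, {"app": "music"})
```

Migrating the application back would run the same step in the opposite direction: device B hands the simulated focus information to device A, which places it on its own local stack.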
  • the present application provides a computer storage medium, including computer instructions, which, when executed on an electronic device, enables the electronic device to execute any one of the audio control methods described above.
  • the present application provides a computer program product, which, when executed on a computer, enables the computer to execute any one of the audio control methods described above.
  • the present application provides an electronic device, which includes a processor and a memory, the memory is used to store instructions, and the processor is used to call the instructions in the memory, so that the electronic device executes any audio control method as described above.
  • audio applications that play audio through different electronic devices can be managed separately, thereby making the audio played through the second electronic device independent from the audio played through the first electronic device, so as to achieve a distributed audio experience.
  • FIG. 1 is a schematic diagram of the structure of an audio system provided in an embodiment of the present application.
  • FIG. 2 is a block diagram of the software structure of the electronic device provided in an embodiment of the present application.
  • FIG. 3 is a flow chart of an audio control method provided in Embodiment 1 of the present application.
  • FIGS. 4A to 4G are schematic diagrams of an application scenario corresponding to Embodiment 1.
  • FIG. 5 is a schematic diagram of the operation flow of the first electronic device corresponding to FIGS. 4A to 4G.
  • FIG. 6 is a flow chart of an audio control method provided in Embodiment 2 of the present application.
  • FIGS. 7A to 7D are schematic diagrams of an application scenario corresponding to Embodiment 2.
  • FIG. 8 is a schematic diagram of the operation flow of the first electronic device corresponding to FIGS. 7A to 7D.
  • FIG. 9 is a flow chart of an audio control method provided in Embodiment 3 of the present application.
  • FIGS. 10A to 10C are schematic diagrams of an application scenario corresponding to Embodiment 3.
  • FIG. 11 is a schematic diagram of the operation flow of the first electronic device corresponding to FIGS. 10A to 10C.
  • FIG. 12 is a flow chart of an audio control method provided in Embodiment 4 of the present application.
  • FIGS. 13A to 13B are schematic diagrams of an application scenario corresponding to Embodiment 4.
  • FIG. 14 is a schematic diagram of the operation flow of the first electronic device corresponding to FIGS. 13A to 13B.
  • FIG. 15 is a flow chart of an audio control method provided in Embodiment 5 of the present application.
  • FIG. 16 is a flow chart of an audio control method provided in Embodiment 6 of the present application.
  • FIG. 17 is a flow chart of an audio control method provided in Embodiment 7 of the present application.
  • FIG. 18 is a flow chart of an audio control method provided in Embodiment 8 of the present application.
  • FIG. 19 is a flow chart of an audio control method provided in Embodiment 9 of the present application.
  • FIG. 20 is a flow chart of an audio control method provided in Embodiment 10 of the present application.
  • FIG. 21 is a flow chart of an audio control method provided in Embodiment 11 of the present application.
  • FIG. 22 is a schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application.
  • the term “plurality” refers to two or more.
  • the terms “first”, “second”, etc. are only used for the purpose of distinguishing descriptions, and cannot be understood as indicating or implying relative importance, nor can they be understood as indicating or implying an order.
  • the existing audio output processing method manages audio output based on the audio focus preemption mechanism.
  • an audio focus preemption mechanism is set in the design of the Android Open Source Project (AOSP).
  • each audio application needs to apply for audio focus to play audio, and the audio application that obtains the audio focus has the permission to play audio.
  • the focus information of all audio applications applying for audio focus on the same device is placed in the same audio focus stack, which makes it impossible to achieve a distributed audio experience.
  • Audio application 1 on device A successfully applies for audio focus
  • focus information 1 of audio application 1 applying for audio focus is added to the top of the audio focus stack, and audio application 1 is notified to obtain audio focus.
  • Audio application 1 has the permission to play audio.
  • Device A transfers audio application 1 to device B, and plays the audio of audio application 1 through device B.
  • the user starts audio application 2 on device A to play audio, and audio application 2 applies for audio focus.
  • focus information 2 of audio application 2 applying for audio focus is added to the top of the audio focus stack, and the focus information of audio application 1 is no longer at the top of the stack.
  • audio application 2 is notified to obtain audio focus, and audio applications corresponding to other focus information in the audio focus stack (such as audio application 1) are notified that they have lost audio focus. Audio application 1 will stop playing or pause playing, or lower the volume due to the loss of audio focus.
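The single-stack behaviour described in this example — and why it breaks the distributed case — can be reproduced with a toy model. This is an illustrative Python sketch of the AOSP-style preemption mechanism described above, not the actual framework code:

```python
stack = []  # single AOSP-style audio focus stack; last element = top


def request_focus(stack, info):
    """Whoever reaches the top of the stack holds audio focus;
    every other application on the stack is notified of the loss."""
    losers = [entry["app"] for entry in stack]
    stack.append(info)
    return losers


request_focus(stack, {"app": "audio application 1"})  # playing via device B
losers = request_focus(stack, {"app": "audio application 2"})  # started on device A
# audio application 1 is notified of focus loss even though its audio plays
# on another device -- the motivation for keeping separate stacks per device.
```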
  • the present application provides an audio control method that can be applied to device interconnection scenarios to achieve a distributed audio experience.
  • the basic principle of the present application is: in a scenario where multiple electronic devices are interconnected, for multiple audio applications on the same electronic device, when the multiple audio applications play audio through different electronic devices, the audio playback of the multiple audio applications is managed separately.
  • the audio playback of first audio application 1 and first audio application 2 is managed separately; for example, focus information A, with which first audio application 1 applies for audio focus, and focus information B, with which first audio application 2 applies for audio focus, are managed separately, avoiding storing both focus information A and focus information B in the same management container (such as a single audio focus stack) used to maintain the order of audio playback; focus information A and focus information B are then managed separately, so that the audio playback of first audio applications 1 and 2 is managed separately.
  • the first electronic device includes a first focus stack and a second focus stack
  • the first focus stack and the second focus stack are both used to store focus information of an audio application applying for audio focus.
  • the difference between the first focus stack and the second focus stack is that the audio of the audio application corresponding to the focus information stored in the first focus stack is played through the first electronic device, and the audio of the audio application corresponding to the focus information stored in the second focus stack is played through the second electronic device.
  • the focus information of the audio application applying for audio focus is stored in the first focus stack.
  • the focus information of the audio application applying for audio focus is stored in the second focus stack.
  • the focus information of the first audio application applying for audio focus is stored in the first local focus stack of electronic device B, and the first local focus stack maintains the focus information corresponding to the first audio application that has been transferred.
  • the focus information of the second audio application applying for audio focus is stored in the second local focus stack of electronic device A, and the second local focus stack maintains the focus information corresponding to the second audio application that has been transferred.
  • the audio applications that play audio through different electronic devices can be managed separately, thereby achieving the independence of the audio played through the second electronic device and the audio played through the first electronic device, so as to achieve a distributed audio experience.
  • FIG. 1 exemplarily introduces an audio system 100 provided in an embodiment of the present application.
  • the audio system 100 includes a first electronic device 101 and a second electronic device 102.
  • a communication connection is established between the first electronic device 101 and the second electronic device 102.
  • the first electronic device 101 and the second electronic device 102 can realize information transmission between the first electronic device 101 and the second electronic device 102 by means of the established communication connection.
  • the information transmitted between the first electronic device 101 and the second electronic device 102 includes but is not limited to application content, application-related parameters (such as focus information of an audio application applying for audio focus), video data, audio data, and control instructions.
  • the first electronic device 101 and the second electronic device 102 may communicate with each other by wire or by wireless.
  • a wired connection may be established between the first electronic device 101 and the second electronic device 102 using a Universal Serial Bus (USB).
  • a wireless connection may be established between the first electronic device 101 and the second electronic device 102 using a global system for mobile communications (GSM), a general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), Bluetooth, wireless fidelity (Wi-Fi), near field communication (NFC), voice over Internet protocol (VoIP), and a communication protocol supporting a network slicing architecture. This application does not specifically limit this.
  • the first electronic device 101 is a device that outputs the content of the audio application.
  • the second electronic device 102 is a device that receives the transmitted content of the audio application.
  • the first electronic device 101 transfers the content of the audio application installed thereon to the second electronic device 102.
  • the content of the audio application can be transferred between multiple electronic devices through application transfer technology.
  • the content (such as pictures, text or audio, etc.) of the application currently running on one or more electronic devices is transmitted to another or more electronic devices so that they can run the content of the application; for example, the audio of the first audio application running on the first electronic device 101 is transmitted to the second electronic device 102, so that the second electronic device 102 can play the audio of the first audio application.
  • the application transfer technology may include application screen projection technology, application handoff (relay) technology, application distribution technology, etc.
  • the application screen projection technology is to project the content of an application running on an electronic device onto the display screen or display medium of another electronic device for display.
  • Application handoff (relay) technology is a technology that stores, transfers or shares the content of an application running on one electronic device to another electronic device.
  • Application distribution technology is a technology that runs the back end of an application on one electronic device (for example, processing the display business logic of the user interface (UI)) and runs the front end of the application (for example, the UI itself) on another electronic device, where the front end needs to access the back end of the application in real time.
  • the electronic device can stream the content of the audio application installed thereon, and can also receive the content of the audio application streamed from other electronic devices in the audio system 100.
  • the electronic device streams the content of the audio application installed thereon, the electronic device is the first electronic device 101.
  • the electronic device receives the content of the audio application streamed from other electronic devices, the electronic device is the second electronic device 102.
  • the first electronic device 101 can be a mobile phone, a speaker, a tablet, a television (also known as a smart TV, a smart screen or a large-screen device), a laptop computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, a vehicle-mounted device, a virtual reality device, and other electronic devices with audio input and output functions, and the embodiments of the present application do not impose any restrictions on this.
  • the second electronic device 102 can also be an electronic device with audio input and output functions such as a mobile phone, tablet, laptop computer, television, speaker or car-mounted equipment.
  • the embodiments of the present application do not impose any restrictions on this.
  • the number and type of the first electronic device 101 and the number and type of the second electronic device 102 in the audio system 100 shown in FIG1 are only examples, and the present application does not specifically limit this.
  • the number of the second electronic devices in the audio system 100 of the present application is N, where N is an integer greater than or equal to 1.
  • the software system of the first electronic device 101 or the second electronic device 102 can adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture or a cloud architecture.
  • the following embodiments of the present application take the Android system with a layered architecture as an example to illustrate the software structure of the electronic device (the first electronic device 101 or the second electronic device 102).
  • for electronic devices running other operating systems, the embodiments of the present application can also be implemented.
  • FIG. 2 shows the software structure of the electronic device provided in an embodiment of the present application.
  • the layered architecture divides the software into several layers, each with a clear role and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into five layers, which are, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, the hardware abstraction layer (HAL), and the kernel layer.
  • the application layer can include a series of application packages.
  • applications such as calling, navigation, browser, camera, calendar, map, Bluetooth, game, music, video, etc. can be installed in the application layer.
  • the applications installed in the application layer include audio applications.
  • audio applications are applications with audio functions that can provide audio content to users.
  • Audio applications can be audio applications that come with the device, such as Huawei Music, or audio applications released by third parties.
  • the audio application can be a music app, a camera app, a video app, a map app, or a recording app.
  • the music app can play music.
  • the camera app can output the system preset shutter sound when taking pictures.
  • the video app can output audio corresponding to the video screen while playing the video.
  • the map app can output navigation voice after turning on the navigation function.
  • the recording app can play pre-recorded audio. This application does not specifically limit the specific type of audio application.
  • before the audio application starts playing audio, it needs to send a request for audio focus to the audio framework.
  • if the audio application successfully applies for the audio focus, it obtains the audio focus and thereby the permission to play audio; otherwise, it does not obtain the permission to play audio.
  • the audio application calls the function requestAudioFocus() to request audio focus from the audio framework, such as requesting audio focus from the audio manager (not shown) in the audio framework.
  • the audio application requests audio focus from the audio framework, it needs to provide the audio framework with relevant information (i.e., focus information) for requesting audio focus.
  • the audio framework constructs a corresponding focus request object and saves all focus information in the focus request object.
  • the focus information of an audio application or the focus information corresponding to an audio application refers to the focus information when the audio application applies for audio focus.
  • the focus information includes the identifier of the audio application, which can be formed by the package name information of the application, the audio manager information held by the application, and the monitoring object information of the application.
  • the audio manager information held by the application and the application's monitoring object information are distinguished by their memory addresses.
  • the focus information also includes the audio focus application type.
  • the audio focus application type has the following optional values:
  • AUDIOFOCUS_GAIN indicates that a permanent audio focus is requested and that the previous audio application holding the audio focus is expected to stop playing. For example, an audio application applies for AUDIOFOCUS_GAIN when it needs to play music.
  • AUDIOFOCUS_GAIN_TRANSIENT indicates that a short-lived audio focus is requested and will be released soon; the audio application currently holding the audio focus is expected to pause playback. For example, an audio application requests AUDIOFOCUS_GAIN_TRANSIENT when it needs to play a reminder sound.
  • AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK indicates that a short-lived audio focus is requested and will be released soon; the previous audio application holding the focus is expected to lower its playback volume (but may keep playing), so the audio is mixed during playback. For example, a map app requests AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK when it wants to output a navigation broadcast.
  • AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE indicates that a short-lived audio focus is requested and that the system is expected not to play any sudden sounds (such as notifications or reminders) in the meantime. For example, a recording app requests AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE when recording.
  • the focus information also includes a sound source type, and the sound source includes at least one of music, video, call, and voice.
  • the operating system can use the focus information to distinguish whether the sound source is from the same audio application.
  • the focus information includes a user code of the audio playback process and also includes a user code of the audio application.
  • When the above application is successful, the audio framework returns the AUDIOFOCUS_REQUEST_GRANTED constant to the audio application; receiving this constant indicates that the audio application has obtained the audio focus. When the application fails, the audio framework returns the AUDIOFOCUS_REQUEST_FAILED constant; receiving this constant indicates that the audio application has failed to obtain the audio focus.
  • OnAudioFocusChangeListener is an audio focus listener, through which you can know whether the audio application has obtained or lost focus.
  • the audio focus listener monitors the state of the audio focus.
  • the audio focus listener calls the onAudioFocusChange(int focusChange) function according to the change of the current audio focus.
  • the focusChange parameter mainly has four possible values, including:
  • AUDIOFOCUS_LOSS_TRANSIENT: indicates that the audio focus is temporarily lost but will be returned soon. In this case, the audio application stops audio playback but retains its playback resources, because the focus may be returned shortly.
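The listener behavior described above can be sketched in plain Java. The constant values below are the ones documented for Android's AudioManager; the returned action strings are illustrative, not a real API:

```java
public class FocusChangeHandler {
    // Values matching the documented AudioManager focus-change constants.
    public static final int AUDIOFOCUS_GAIN = 1;
    public static final int AUDIOFOCUS_LOSS = -1;
    public static final int AUDIOFOCUS_LOSS_TRANSIENT = -2;
    public static final int AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK = -3;

    // Maps a focusChange value to the action an audio application typically takes.
    public static String onAudioFocusChange(int focusChange) {
        switch (focusChange) {
            case AUDIOFOCUS_GAIN:
                return "resume playback at full volume";
            case AUDIOFOCUS_LOSS:
                return "stop playback and release playback resources";
            case AUDIOFOCUS_LOSS_TRANSIENT:
                // Focus is expected back soon: stop output but keep resources.
                return "pause playback, keep playback resources";
            case AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
                return "lower playback volume and keep playing";
            default:
                return "no action";
        }
    }

    public static void main(String[] args) {
        System.out.println(onAudioFocusChange(AUDIOFOCUS_LOSS_TRANSIENT));
    }
}
```

For a transient loss, the sketch keeps the playback resources, matching the behavior described for AUDIOFOCUS_LOSS_TRANSIENT above.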
  • the audio framework can execute the audio control method provided in the embodiment of the present application to manage the focus process requested by the audio application.
  • the audio framework in the embodiment of the present application also creates and maintains one or more focus stacks, such as the second focus stack.
  • the first electronic device is provided with an audio framework for the music APP to implement the above-mentioned audio functions.
  • the music APP applies to play audio on the first electronic device
  • the music APP applies for audio focus from the audio framework.
  • the focus information when the music APP applies for audio focus is placed on the top of the first focus stack, the music APP is notified that it has obtained the audio focus, and the music APP plays its audio through the first electronic device.
  • the music app applies for audio focus from the audio framework.
  • the focus information when the music app applies for audio focus is placed on the top of the second focus stack, and the music app is notified that it has obtained the audio focus and plays audio.
  • the second electronic device obtains the audio content of the music app transmitted by the first electronic device, the second electronic device plays the audio of the music app.
  • a hardware abstraction layer can also be included between the application framework layer and the kernel layer of the Android system.
  • the HAL layer is responsible for interacting with various hardware devices of the electronic device.
  • the HAL layer hides the implementation details of each hardware device.
  • it can provide the Android system with interfaces for calling various hardware devices.
  • HAL provides HALs corresponding to different mobile phone hardware devices, such as Audio HAL, Camera HAL, Wi-Fi HAL, etc.
  • Audio HAL can also be used as a part of the above audio architecture.
  • the audio architecture can directly call Audio HAL and send the processed audio data to Audio HAL, which then sends the audio data to the corresponding audio output device (such as speakers, headphones, etc.) for playback.
  • Audio HAL can be further divided into Primary HAL, A2dp HAL, etc.
  • Audio Flinger can call Primary HAL to output audio data to the speaker of the electronic device, or Audio Flinger can call A2dp HAL to output audio data to a Bluetooth headset connected to the electronic device.
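The routing choice between Primary HAL and A2dp HAL can be sketched as follows (plain Java; the device names and the `selectHal` helper are hypothetical placeholders, not the real AudioFlinger interface):

```java
public class AudioHalRouter {
    // Picks the Audio HAL module for a given output device.
    // Device names here are illustrative placeholders.
    public static String selectHal(String outputDevice) {
        if (outputDevice.equals("BLUETOOTH_A2DP")) {
            return "A2dp HAL";   // e.g., a Bluetooth headset connected to the device
        }
        return "Primary HAL";    // e.g., the device's built-in speaker
    }

    public static void main(String[] args) {
        System.out.println(selectHal("SPEAKER"));
        System.out.println(selectHal("BLUETOOTH_A2DP"));
    }
}
```

The processed audio data is then handed to the selected HAL, which forwards it to the corresponding audio output device.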
  • the application framework layer may also include a window manager, a content provider, a view system, a notification manager, etc., and the embodiments of the present application do not impose any restrictions on this.
  • the Android runtime includes the core library and the virtual machine.
  • the Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function that needs to be called by the Java language, and the other part is the Android core library.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules, such as surface manager, media library, 3D graphics processing library (such as OpenGL ES), 2D graphics engine (such as SGL), etc.
  • functional modules such as surface manager, media library, 3D graphics processing library (such as OpenGL ES), 2D graphics engine (such as SGL), etc.
  • the surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of multiple commonly used audio and video formats, as well as static image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing, etc.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is located below the HAL and is a layer between hardware and software.
  • the kernel layer includes at least a display driver, a Near Field Communication (NFC) driver, an audio driver, a sensor driver, a Bluetooth driver, etc., and the embodiments of the present application do not impose any restrictions on this.
  • An audio control method provided in an embodiment of the present application is introduced below, which can be applied to the audio system 100 shown in Figure 1 and executed by the first electronic device 101 shown in Figure 1, on which a first audio application is installed.
  • FIG. 3 exemplarily introduces an audio control method provided in an embodiment of the present application.
  • Step S301 The first electronic device creates a first focus stack and a second focus stack, wherein the first focus stack is used to store focus information corresponding to an audio application that plays audio through the first electronic device, and the second focus stack is used to store focus information corresponding to an audio application that plays audio through the second electronic device.
  • the first electronic device only maintains one audio focus stack, and the focus information corresponding to the audio application that applies for audio focus on the first electronic device is stored in the only audio focus stack. If the focus information is at the top of the audio focus stack, the audio application corresponding to the focus information obtains the audio focus, and the audio application that obtains the audio focus has the permission to play audio.
  • the audio application whose focus information is at the top of the stack has the permission to play audio, and the audio corresponding to the audio application that loses the audio focus is affected.
  • the first electronic device of the present application creates and maintains at least two focus stacks, and the at least two focus stacks are used to store the focus information of the audio application applying for audio focus when the audio application wants to play audio.
  • the first focus stack stores the focus information corresponding to the audio application that plays audio through the first electronic device
  • the second focus stack stores the focus information corresponding to the audio application that plays audio through the second electronic device.
  • the second audio application installed on the second electronic device needs to play the audio through the second electronic device, and the focus information of the second audio application is stored in the second focus stack.
  • the first focus stack stores the focus information of the first audio application installed on the first electronic device.
  • the second focus stack can store the focus information of the first audio application installed on the first electronic device.
  • the focus information of the first audio application installed on the second electronic device may also be stored.
  • the focus stack (such as the first focus stack and the second focus stack) in the embodiment of the present application can be implemented not only as a stack, but also as an array, a queue or a map, etc., and the present application does not make specific limitations on this.
  • the second focus stack and the first focus stack can ensure that only one audio application corresponding to the focus information obtains the audio focus at a time based on the preset maintenance order. For example, when the focus information is at the top of the focus stack, the audio application corresponding to the focus information obtains the audio focus.
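The rule that only the top of a focus stack holds the audio focus can be modeled with a plain Java stack. This is a minimal sketch in which a bare `appId` string stands in for the full focus information / focus request object:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FocusStack {
    // Top of the deque = current audio focus holder.
    private final Deque<String> stack = new ArrayDeque<>();

    // Placing an app's focus information on top grants it the audio focus;
    // returns the previous top so the caller can notify it of focus loss.
    public String requestFocus(String appId) {
        String previousHolder = stack.peek();
        stack.remove(appId);   // avoid duplicate entries for the same app
        stack.push(appId);
        return previousHolder;
    }

    // Only the entry at the top of the stack holds the audio focus.
    public String focusHolder() {
        return stack.peek();
    }

    public static void main(String[] args) {
        FocusStack stack = new FocusStack();
        stack.requestFocus("music");
        String loser = stack.requestFocus("recorder");
        System.out.println(loser + " loses focus; current holder: " + stack.focusHolder());
    }
}
```

Because each request pushes onto the top, the stack naturally enforces that exactly one application per stack holds the focus at any time.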
  • multiple first audio applications may be installed on the first electronic device, and different first audio applications may correspond to different focus information.
  • the first focus information B1 corresponding to the first audio application A1 is different from the first focus information B2 corresponding to the first audio application A2, at least the application identifiers are different.
  • the first electronic device can create a second focus stack when searching for the second electronic device.
  • the first electronic device can also create a second focus stack in response to the user transferring the first audio application that is not playing audio or is playing audio to the second electronic device, then step S301 can be executed simultaneously with step S305.
  • the first electronic device can pre-create the second focus stack before leaving the factory. Then, when the first electronic device is interconnected with the second electronic device, the first electronic device can directly respond to transferring the first audio application to the second electronic device for playback, and store the focus information when the first audio application applies for audio focus in the created second focus stack. This application does not specifically limit the timing of the first electronic device creating the second focus stack.
  • the second focus stack may not be associated with the second electronic device. If the first electronic device creates two or more focus stacks in addition to the first focus stack, such as in the following embodiment, the first electronic device creates a second focus stack and a third focus stack, then the second focus stack and the third focus stack are associated with the corresponding second electronic device.
  • the second focus stack and the first focus stack are maintained on the first electronic device, and the audio output device on the first electronic device only plays the audio of the audio application that obtains the audio focus in the first focus stack.
  • Step S302 The first electronic device obtains a first operation, where the first operation is used to instruct to play audio of a first audio application through the first electronic device.
  • the first operation may be operations such as clicking, touching, long pressing, and voice.
  • the first operation may be clicking a first audio application playback control.
  • Step S303 In response to the first operation, the first electronic device determines a first focus stack corresponding to the first operation from the first focus stack and the second focus stack, and places the first focus information of the first audio application on the top of the first focus stack to play the audio of the first audio application through the first electronic device.
  • the first focus information of the first audio application is placed at the top of the first focus stack, the first audio application obtains the audio focus, has the authority to play audio, and the first audio application plays the audio through the audio output device on the first electronic device.
  • the first operation indicates that audio is played through the first electronic device, and the first electronic device can determine that the focus stack corresponding to the first operation is the first focus stack.
  • step S303 is that the first electronic device responds to the first operation, and places the first focus information of the first audio application at the top of the first focus stack, so as to play the audio of the first audio application through the first electronic device.
  • Step S304 the first electronic device obtains a second operation, where the second operation is used to instruct to transfer the first audio application that is playing audio to the second electronic device.
  • the second operation indicates transferring the first audio application currently playing audio on the first electronic device to the second electronic device, that is, transferring the first audio application to the second electronic device, and keeping the audio playing of the first audio application, and playing the audio through the second electronic device.
  • the second operation may be operations such as clicking, touching, long pressing, and voice.
  • the second operation may be that the user starts a streaming function on the interface of a first audio application that is playing audio.
  • Step S305 In response to the first instruction corresponding to the second operation, the first electronic device determines a second focus stack corresponding to the first instruction from the first focus stack and the second focus stack, and moves the first focus information of the first audio application from the first focus stack to the top of the second focus stack.
  • the first instruction is used to instruct the playing of the audio of the first audio application transferred to the second electronic device.
  • the first electronic device moves the first focus information from the first focus stack to the top of the second focus stack, and the first audio application corresponding to the first focus information in the focus stack obtains the audio focus, notifies the first audio application to obtain the audio focus, and the first audio application has the authority to play audio and can play audio.
  • the second electronic device receives the content of the first audio application (including at least audio data and may also include interface data, etc.), the second electronic device plays the audio of the first audio application.
  • step S305 can be: in response to the first instruction corresponding to the second operation, the first electronic device creates a second focus stack, and moves the first focus information of the first audio application from the first focus stack to the top of the second focus stack.
  • the first electronic device as a mobile phone and the first audio application as a music application.
  • a main interface 40 is displayed on the mobile phone, and the main interface 40 includes a music application 400 and a recording application 401, and the user clicks on the music application 400.
  • the mobile phone displays an interface 402 of the music application 400 as shown in FIG. 4B .
  • the interface 402 includes a control 403 of music 1.
  • the mobile phone displays an interface 404 of music 1 as shown in FIG. 4C .
  • the interface 404 of music 1 includes a play control 405, which is in the not-playing state. The user clicks the play control 405 (i.e., the first operation).
  • In response to the user clicking the play control 405, the mobile phone displays the interface 404 shown in FIG. 4D, in which the play control 405 changes to the playing state, and the mobile phone plays the audio of music 1.
  • the mobile phone displays a display list 407 and a streaming option 408 as shown in FIG. 4E. The user clicks the streaming option 408, where the streaming option 408 corresponds to the streaming function.
  • In response to the user clicking the streaming option 408, the mobile phone starts the streaming function, which can stream the content of the music application 400 to the second electronic device through application streaming technology.
  • the device that can currently receive the content of the music application 400 is searched, that is, the device that can establish an interconnection with the mobile phone is searched.
  • the mobile phone searches for devices that can currently receive the streamed audio application (a tablet in this example), and in response to the user clicking the streaming option 408, displays the available-device list 409 and the tablet option 410 as shown in FIG. 4F.
  • the user clicks the tablet option 410 (i.e., the second operation).
  • the mobile phone transfers the content of the music application 400 (such as the audio of Music 1 and the interface of Music 1) to the tablet, and the main interface is displayed on the mobile phone.
  • the interface displayed on the tablet is similar to the interface 404 of the music application 400, and the tablet plays the audio of Music 1.
  • the mobile phone does not display the interface 404 of the music application 400; the mobile phone can display the main interface 40, and the mobile phone does not play the audio of Music 1.
  • Step S500 The first audio application obtains a first operation.
  • the first operation is that the user clicks the play control 405 .
  • Step S501 In response to a first operation, a first audio application applies for an audio focus from an audio framework of a first electronic device.
  • Step S502 The audio framework places the first focus information of the first audio application on the top of the first focus stack.
  • the audio framework places the first focus information of the first audio application at the top of the first focus stack, and notifies the first audio application to obtain the audio focus, and the first audio application obtains the permission to play audio.
  • the music application 400 obtains the permission to play audio
  • the audio output device on the mobile phone plays the audio of Music 1.
  • Step S503 the first audio application obtains a second operation.
  • the second operation is that the user clicks on the tablet option 410 .
  • Step S504 The first audio application performs application migration in response to the second operation.
  • the mobile phone starts the streaming function in response to the user clicking the tablet option 410.
  • the first audio application performs application migration, the mobile phone migrates the music application 400 to the tablet, and informs the audio framework of the information of migrating the music application 400 to the tablet.
  • Step S505 The audio framework extracts first focus information of the first audio application.
  • In response to the information that the music application 400 has been migrated to the tablet, the audio framework extracts the first focus information corresponding to the music application 400 from the first focus stack.
  • Step S506 The audio framework places the first focus information at the top of the second focus stack.
  • Based on the migration of the music application 400 to the tablet, the audio framework extracts the first focus information of the first audio application from the first focus stack and places it on the top of the second focus stack.
  • After the audio framework extracts the first focus information from the first focus stack, it can also recreate corresponding first focus information based on the extracted information, push the recreated focus information onto the stack, and place it on the top of the second focus stack. For example, after extracting the first focus information from the first focus stack, the audio framework creates a new focus request object that stores the content of the extracted focus information, and places the new focus request object on the top of the second focus stack.
  • the music application 400 obtains permission to play audio, and the tablet receives the audio of Music 1 transmitted from the mobile phone, and can play the audio of Music 1 through the audio output device on the tablet.
  • Step S507 The audio framework deletes the first focus information in the first focus stack.
  • the audio framework adds the first focus information to the second focus stack
  • the first focus information in the first focus stack is also deleted, that is, the first focus information is popped out of the stack.
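Steps S505 to S507 — extract the focus information from the first stack, place it on top of the second stack, and delete it from the first stack — can be sketched as follows (plain Java; the `DualFocusStacks` class and its method names are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DualFocusStacks {
    // First stack: apps playing on the first device;
    // second stack: apps playing on the second device.
    final Deque<String> firstStack = new ArrayDeque<>();
    final Deque<String> secondStack = new ArrayDeque<>();

    // Migrates an application's focus information from the first stack
    // to the top of the second stack (extract + push + delete).
    public void migrateToSecondDevice(String appFocusInfo) {
        if (firstStack.remove(appFocusInfo)) { // S505/S507: extract and pop from first stack
            secondStack.push(appFocusInfo);    // S506: place on top of the second stack
        }
    }

    public static void main(String[] args) {
        DualFocusStacks stacks = new DualFocusStacks();
        stacks.firstStack.push("music");
        stacks.migrateToSecondDevice("music");
        System.out.println(stacks.firstStack.contains("music")); // focus info removed
        System.out.println(stacks.secondStack.peek());           // now on top of second stack
    }
}
```

After the move, the migrated application is the top of the second stack and therefore holds the audio focus for playback through the second device.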
  • a first audio application is installed on the first electronic device, and the first electronic device can transfer the first audio application that is playing audio to the second electronic device.
  • the first focus information corresponding to the first audio application played by the second electronic device is stored in the second focus stack.
  • the first focus stack and the second focus stack are independent of each other, and the audio played by the first electronic device and the audio played by the second electronic device do not affect each other.
  • Embodiment 2 is based on embodiment 1, and the user continues to start other first audio applications on the first electronic device and plays audio.
  • FIG. 6 exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S601 The first electronic device obtains a play operation, where the play operation is used to instruct to play audio of another first audio application through the first electronic device.
  • the play operation in step S601 may be a click, touch, long press, voice operation, or the like.
  • the play operation may be clicking the play control of another first audio application.
  • the difference between the first operation and the play operation is that the first operation indicates playing the audio of the first audio application on the first electronic device, while the play operation indicates playing the audio of another first audio application on the first electronic device.
  • Step S602 In response to the play operation, the first electronic device places the first focus information of the other first audio application at the top of the first focus stack, and notifies the other first audio application to obtain the audio focus to play the audio of the other first audio application on the first electronic device.
  • the first focus information of the other first audio application is placed at the top of the first focus stack, then the other first audio application obtains the audio focus and has the permission to play audio, and the other first audio application plays the audio through the audio output device on the first electronic device.
  • the first electronic device may determine that the focus stack corresponding to the play operation is the first focus stack.
  • the mobile phone can display the main interface 40.
  • the user continues to click the recording application 401 on the main interface 40.
  • the mobile phone displays the recording interface 700 as shown in FIG. 7B, and the recording interface 700 includes multiple recordings.
  • the mobile phone displays the interface 702 of recording 1 as shown in FIG. 7C, and the mobile phone plays the audio of recording 1.
  • the interface corresponding to the recording application 401 (interface 702 as shown in FIG. 7C) is displayed on the mobile phone, and the audio of recording 1 is played.
  • the tablet still displays the interface of the music application 400 (interface 404 as shown in FIG. 4D), and plays the audio of music 1.
  • Please refer to FIG. 7B, FIG. 7C, FIG. 7D and FIG. 8 for an exemplary description of the operation process of the first electronic device.
  • the first focus information of the first audio application is stored in the second focus stack, and the first focus information is placed on the top of the stack. At this time, the tablet plays the audio of Music 1.
  • Step S800 another first audio application obtains a play operation.
  • the play operation is that the user clicks option 701 of recording 1.
  • Step S801 another first audio application applies for audio focus from the audio framework of the first electronic device in response to a play operation.
  • Step S802 The audio framework places the first focus information of another first audio application on the top of the first focus stack.
  • the audio framework places the first focus information of the other first audio application on the top of the first focus stack, and notifies the other first audio application to obtain the audio focus, and the other first audio application obtains the permission to play audio.
  • the recording application 401 obtains the permission to play audio
  • the audio output device on the mobile phone plays the audio of recording 1.
  • the first focus information of the first audio application is stored in the second focus stack
  • the first focus information of another first audio application is stored in the first focus stack.
  • the two focus information are independent of each other and do not affect each other, so the two first audio applications can obtain audio focus at the same time.
  • the second electronic device can play the audio of the first audio application normally and is not affected by the other first audio application.
  • the audio of another first audio application can also be played normally on the first electronic device without being affected by the first audio application.
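Because each focus stack has its own top, one application per stack can hold the audio focus at the same time. A minimal sketch under the same simplified model (app names stand in for focus information):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IndependentFocus {
    static final Deque<String> firstStack = new ArrayDeque<>();  // audio played on the first device
    static final Deque<String> secondStack = new ArrayDeque<>(); // audio played on the second device

    // Returns the current focus holders as {firstDeviceHolder, secondDeviceHolder}.
    static String[] currentHolders() {
        return new String[] { firstStack.peek(), secondStack.peek() };
    }

    public static void main(String[] args) {
        secondStack.push("music");   // music app streamed to the tablet (embodiment 1)
        firstStack.push("recorder"); // recording app then started on the phone (embodiment 2)
        String[] holders = currentHolders();
        // Both hold audio focus at the same time; neither stack affects the other.
        System.out.println(holders[0] + " on the phone, " + holders[1] + " on the tablet");
    }
}
```

Starting the recording application only changes the top of the first stack, so the music application streamed to the second device keeps its focus and continues playing.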
  • Embodiment 3 is based on embodiment 2, and the user transfers the other first audio application to the second electronic device.
  • FIG. 9 exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S901 The first electronic device obtains a second operation, where the second operation is used to instruct to transfer another first audio application that is playing audio to a second electronic device.
  • the content of the second operation in step S901 may refer to embodiment 1 and will not be described in detail here.
  • the second operation of both the first embodiment and the third embodiment is to instruct to transfer the first audio application that is playing audio to the second electronic device.
  • the difference is that the first audio application transferred in the first embodiment is different from that in the third embodiment.
  • Step S902 the first electronic device responds to the first instruction corresponding to the second operation, determines the second focus stack corresponding to the first instruction from the first focus stack and the second focus stack, and places the first focus information corresponding to the other first audio application on the top of the second focus stack, so that when the audio of the other first audio application is transferred to the second electronic device, the audio of the other first audio application is played through the second electronic device.
  • the first instruction corresponding to the second operation in step S902 is used to instruct to play the audio of another first audio application transferred to the second electronic device.
  • the content of the first instruction in step S902 can refer to the first instruction in embodiment 1, and will not be repeated here.
  • the first instructions in both the first and third embodiments instruct to play the audio of the first audio application transferred to the second electronic device.
  • the difference is that the first audio application of the audio played in the first and third embodiments is different.
  • the first focus information of the other first audio application is placed at the top of the second focus stack, and the other first audio application obtains the audio focus and has the authority to play audio.
  • the audio framework notifies the first audio application originally at the top of the second focus stack that it has lost the audio focus.
  • the other first audio application plays audio, and the audio of the other first audio application is played through the audio output device on the second electronic device.
  • the first electronic device may determine that the focus stack corresponding to the first instruction is the second focus stack.
  • the recording interface 702 also includes more options 703, and the user clicks on more options 703.
  • the mobile phone displays an available device list 704 and a tablet option 705 as shown in FIG10B .
  • the user clicks on the tablet option 705.
  • the mobile phone transfers the content of the recording application 401 to the tablet.
  • the tablet displays the interface corresponding to the recording application 401 (such as the interface 702 shown in FIG7C or FIG10A ), and plays the audio of the recording application 401.
  • Please refer to FIG. 10B, FIG. 10C and FIG. 11 for an exemplary description of the operation process of the first electronic device.
  • the second focus stack stores the first focus information of the first audio application
  • the first focus stack stores the first focus information of another first audio application.
  • Step S110 another first audio application obtains a second operation.
  • the second operation is that the user clicks on the tablet option 705 .
  • Step S111 another first audio application performs application migration in response to the second operation.
  • the mobile phone starts the streaming function in response to the user clicking the tablet option 705.
  • Another first audio application performs application migration, the mobile phone migrates the recording application 401 to the tablet, and informs the audio framework of the information of migrating the recording application 401 to the tablet.
  • Step S112 The audio framework extracts first focus information of another first audio application.
  • In response to the information of migrating the recording application 401 to the tablet, the audio framework extracts the first focus information corresponding to the recording application 401 from the first focus stack.
  • Step S113 The audio framework places the first focus information of another first audio application on the top of the second focus stack.
  • After the audio framework extracts the first focus information of another first audio application from the first focus stack, it places that first focus information on the top of the second focus stack. Then, the first focus information of the first audio application is no longer at the top of the second focus stack.
  • Step S114 The audio framework notifies the first audio application that the audio focus has been lost.
• Based on the first focus information of the first audio application no longer being at the top of the second focus stack, the audio framework notifies the first audio application that it has lost the audio focus. As shown in FIG. 10C, the music application 400 loses the audio focus and stops playing, in accordance with the audio focus application type AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE applied for by the recording application 401.
  • Step S115 the audio framework deletes the first focus information of the other first audio application.
• When the audio framework adds the first focus information of another first audio application to the second focus stack, that first focus information is also deleted from the first focus stack; that is, the first focus information of the other first audio application is popped out of the first focus stack.
  • Step S115 may be executed before step S113 or S114, or may be executed simultaneously with step S113 or S114.
  • the first focus information of another first audio application is at the top of the stack, and the other first audio application can play audio.
  • the second electronic device receives the audio of the other first audio application, and the second electronic device plays the audio of the other first audio application.
• the first electronic device also notifies the other audio applications (such as the first audio application) in the second focus stack that they have lost the audio focus, and the first audio application adjusts its playback according to the audio focus application type of the other first audio application; for example, its audio can be paused or stopped, or its volume can be lowered.
• If the audio of the first audio application is paused or stopped, the second electronic device plays only the audio of the other first audio application.
• If the volume of the audio of the first audio application is lowered, the second electronic device plays the audio of the first audio application and the other first audio application simultaneously.
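Steps S110–S115 above can be sketched as a minimal model of the two focus stacks. This is an illustrative assumption, not the patent's actual implementation; all class, field, and application names are invented for the sketch.

```python
class FocusStacks:
    def __init__(self):
        self.first_stack = []    # local playback; the last element is the stack top
        self.second_stack = []   # playback routed to the second electronic device
        self.notifications = []  # (app, event) pairs delivered to applications

    def migrate_to_second_device(self, app):
        # S112: extract the app's focus info from the first focus stack
        entry = next(e for e in self.first_stack if e["app"] == app)
        # S115: pop it out of the first focus stack
        self.first_stack.remove(entry)
        # S114: the previous top of the second focus stack loses audio focus
        if self.second_stack:
            self.notifications.append((self.second_stack[-1]["app"], "AUDIOFOCUS_LOSS"))
        # S113: place the migrated entry on top of the second focus stack
        self.second_stack.append(entry)

stacks = FocusStacks()
stacks.second_stack.append({"app": "music", "type": "AUDIOFOCUS_GAIN"})
stacks.first_stack.append({"app": "recorder", "type": "AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE"})
stacks.migrate_to_second_device("recorder")
print(stacks.second_stack[-1]["app"])  # the recorder now holds focus on the second device
print(stacks.notifications)            # the music app was notified of focus loss
```

Because the recording application applied for AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE, the notified music application would stop playback, matching the behavior described above.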
  • the difference between the fourth embodiment and the third embodiment is that the fourth embodiment further includes another second electronic device, and after the user opens another first audio application, the another first audio application is transferred to the other second electronic device.
  • FIG. 12 exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S121 The first electronic device obtains a device identification of another second electronic device.
  • the first electronic device may obtain the device identification of one or more second electronic devices through a sensor.
  • the above-mentioned sensor may include an ultra-wideband (UWB) sensor, an NFC sensor, a laser sensor and/or a visible light sensor, etc.
  • the above-mentioned device identification may include an Internet Protocol (IP) address, a media access control (MAC) address, a UWB tag, an NFC tag, etc., and the present application does not make specific limitations on this.
  • the following description is made by taking the first electronic device as a mobile phone and the second electronic device as a notebook computer as an example.
  • both a mobile phone and a laptop are equipped with UWB sensors and each has a UWB tag (ie, device identification)
  • both a mobile phone and a laptop are equipped with NFC sensors and each has an NFC tag (i.e., device identification)
• the first electronic device can obtain the device identifications of all second electronic devices under the same user account, or of all second electronic devices connected to the first electronic device on the same network.
• The first electronic device can obtain the device identification of the second electronic device by accessing a remote or cloud server or another electronic device, by accessing its local internal memory, or by accessing its external memory interface; this is not specifically limited.
  • Step S122 The first electronic device creates a third focus stack according to the device identification of the other second electronic device.
  • the first electronic device obtains the device identification of the second electronic device and creates a second focus stack according to the device identification of the second electronic device.
  • the first electronic device obtains the device identification of another second electronic device and creates a third focus stack according to the device identification of another second electronic device.
  • the second focus stack is associated with a device identifier of a second electronic device
  • the third focus stack is associated with a device identifier of another second electronic device.
  • the stack name of the second focus stack can be named after the device identifier of the second electronic device
  • the stack name of the third focus stack can be named after the device identifier of another second electronic device
  • the stack name of the second focus stack is different from the stack name of the third focus stack.
  • the first electronic device can create the second focus stack and the third focus stack at the same time.
  • the first electronic device can also create the second focus stack and the third focus stack at different times.
  • the first electronic device can create the second focus stack in response to streaming the first audio application to the second electronic device, and then search for the other second electronic device when the other first audio application is subsequently opened or the other first audio application starts the streaming function, and create the third focus stack according to the device identification of the other second electronic device that is searched.
  • This application does not specifically limit the timing and number of focus stacks created by the first electronic device in addition to the first focus stack.
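The per-device stack creation of steps S121–S122 can be sketched as follows; the device identifiers and the dictionary-based bookkeeping are assumptions for illustration only.

```python
focus_stacks = {"local": []}  # the first focus stack, for local playback

def create_stack_for_device(device_id):
    # Each discovered second electronic device gets its own focus stack,
    # named after (keyed by) its device identifier. Stacks may be created
    # together or lazily, one per streaming target.
    focus_stacks.setdefault(device_id, [])

create_stack_for_device("tablet-mac-aa:bb")   # the second focus stack
create_stack_for_device("laptop-mac-cc:dd")   # the third focus stack
print(sorted(focus_stacks))  # three independent stacks with distinct names
```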
  • Step S123 the first electronic device obtains a second operation, where the second operation is used to instruct to transfer another first audio application that is playing audio to another second electronic device.
  • the content of the second operation in step S123 may refer to the above-mentioned embodiment 3, and will not be described in detail here.
  • the second operation of both the fourth embodiment and the third embodiment is to instruct to transfer another first audio application that is playing audio to a second electronic device.
  • the difference is that the second electronic devices in the third embodiment and the fourth embodiment are different.
  • Step S124 the first electronic device responds to the first instruction corresponding to the second operation, determines a third focus stack corresponding to the first instruction from the first focus stack, the second focus stack and the third focus stack, and places the first focus information of another first audio application on the top of the third focus stack, so that when the audio of the other first audio application is transferred to the other second electronic device, the audio of the other first audio application is played through the other second electronic device.
  • the first instruction corresponding to the second operation in step S124 is used to instruct to play the audio of another first audio application transferred to another second electronic device.
  • the content of the first instruction in step S124 can refer to the first instruction in embodiment 3, and will not be repeated here.
  • the first instructions in both the fourth embodiment and the third embodiment instruct to play the audio of another first audio application transferred to the second electronic device.
  • the difference is that the second electronic device that plays the audio in the fourth embodiment is different from that in the third embodiment.
  • the focus stack corresponding to the first instruction is the third focus stack associated with the other second electronic device.
  • the mobile phone displays an optional device list 704, which includes a tablet option 705 and a laptop option 706.
  • the mobile phone transfers the content of the recording application 401 to the laptop.
• the tablet displays the interface of the music application 400 (interface 404 as shown in FIG. 4D ) and plays the audio of the music application 400.
• the laptop displays the interface corresponding to the recording application 401 (interface 702 as shown in FIG. 7C ) and plays the audio of the recording application 401, while the mobile phone can display the main interface.
• If the first electronic device opens another new first audio application (such as a telephone application) and plays it on the first electronic device, the audio of the telephone application is played on the first electronic device; the audio of the music application played on the second electronic device is not affected, and the audio of the recording application played on the other second electronic device is not affected.
• Please refer to FIG. 13A, FIG. 13B and FIG. 14 for an exemplary description of the operation process of the first electronic device.
  • the second focus stack stores the first focus information of the first audio application
  • the first focus stack stores the first focus information of another first audio application.
  • Step S141 another first audio application obtains a second operation.
  • the second operation is that the user clicks on the laptop option 706 .
  • Step S142 another first audio application performs application migration.
  • the mobile phone starts the streaming function in response to the user clicking the laptop option 706.
• When another first audio application performs application migration, the mobile phone migrates the recording application 401 to the laptop and informs the audio framework that the recording application 401 has been migrated to the laptop.
  • Step S143 the audio framework extracts first focus information of another first audio application.
• In response to the information that the recording application 401 is migrated to the laptop computer, the audio framework extracts the first focus information corresponding to the recording application 401 from the first focus stack.
  • Step S144 The audio framework places the first focus information of another first audio application on the top of the third focus stack.
• After the audio framework extracts the first focus information of another first audio application from the first focus stack, it places that first focus information on the top of the third focus stack. As shown in FIG. 13B, the recording application 401 obtains the audio focus, and when the notebook computer receives the audio of the recording application 401, it plays that audio.
  • Step S145 The audio framework deletes the first focus information in the first focus stack.
• When the audio framework adds the first focus information of another first audio application to the third focus stack, that first focus information is also deleted from the first focus stack; that is, the first focus information of the other first audio application is popped out of the first focus stack.
  • Step S145 may be executed before step S144 or simultaneously with step S144.
  • the second electronic device can play the audio of the first audio application normally, and is not affected by another first audio application.
  • the audio of another first audio application can also be played normally on another second electronic device, and is not affected by the first audio application.
  • the audio of the new first audio application can also be played normally on the first electronic device, so that the first electronic device can transfer the content of multiple different first audio applications to different second electronic devices, and the audio played by the first electronic device and the multiple different second electronic devices do not affect each other.
  • the difference between the fifth embodiment and the first embodiment is that the first electronic device transfers the first audio application that is not playing audio to the second electronic device.
• Please refer to FIG. 15, which exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S151 The first electronic device obtains a third operation, where the third operation indicates transferring the first audio application to the second electronic device.
  • the third operation may be click, touch, long press, voice, etc.
  • the third operation is that the user starts the streaming function on the interface of the first audio application.
• The difference between the streaming function started by the third operation and that started by the second operation is that the streaming triggered by the third operation does not include the audio of the audio application, because the application is not yet playing audio.
  • Step S152 The first electronic device transfers the first audio application stream to the second electronic device in response to the third operation.
  • Step S153 When it is detected that the first audio application transferred to the second electronic device applies to play audio, the first electronic device obtains a first instruction.
  • the user clicks the play control of the first audio application of the second electronic device, and the second electronic device transmits the information of the first audio application requesting to play to the first electronic device.
• The second electronic device can transmit the information "play the audio of the first audio application" to the first electronic device through reverse control. The first audio application transferred to the second electronic device obtains this information and applies to the audio framework of the first electronic device to play the audio; when the first electronic device detects this application, it obtains the first instruction.
  • a communication connection is established between the first electronic device and the second electronic device.
• Through this communication connection, the first electronic device obtains the information that "the first audio application transferred to the second electronic device applies to play audio".
  • a communication connection can be established between the first electronic device and the second electronic device through the distributed mobile sensing development platform (DMSDP) service.
• the first electronic device can obtain the information that "the first audio application transferred to the second electronic device applies to play audio" from the second electronic device through the DMSDP service.
  • the first audio application then applies to the audio framework of the first electronic device to play audio, and the first electronic device obtains the first instruction.
  • Step S154 the first electronic device responds to the first instruction, determines the second focus stack corresponding to the first instruction from the first focus stack and the second focus stack, places the first focus information of the first audio application on the top of the second focus stack, and notifies the first audio application to obtain the audio focus, so that when the content of the first audio application is transferred to the second electronic device, the audio of the first audio application is played through the second electronic device.
  • the first electronic device can transfer the first audio application that is not playing audio to the second electronic device.
  • the first electronic device places the first focus information of the first audio application in the second focus stack, and the second electronic device plays the audio of the first audio application.
  • the first focus stack and the second focus stack are independent of each other, and the audio played by the first electronic device and the audio played by the second electronic device do not affect each other.
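The flow of steps S151–S154 can be sketched as follows; the function and variable names are assumptions, and the reverse-control transport is reduced to a direct function call for illustration.

```python
second_stack = []   # focus stack associated with the second electronic device
granted = []        # applications notified that they obtained audio focus

def on_remote_play_request(app, focus_type):
    # S154: a play request arriving from the second device (via reverse
    # control) puts the app's focus info on top of the second focus stack,
    # and the app is notified that it obtained the audio focus.
    second_stack.append({"app": app, "type": focus_type})
    granted.append(app)

# S153: the user taps play on the streamed (not yet playing) application
on_remote_play_request("music", "AUDIOFOCUS_GAIN")
print(second_stack[-1]["app"], granted)
```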
  • the first electronic device transfers the first audio application to the second electronic device.
  • the user starts the second audio application installed on the second electronic device and plays the audio of the second audio application.
• Please refer to FIG. 16, which exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S161 The first electronic device obtains first information, where the first information is used to instruct the second electronic device to play audio of a second audio application.
  • a communication connection is established between a first electronic device and a second electronic device, and the second electronic device can transmit the first information to the first electronic device in response to a user's operation of starting and playing a second audio application.
• For example, upon detecting the user starting and playing the second audio application, the second electronic device can transmit the first information to the first electronic device through the DMSDP service, and the first electronic device obtains the first information from the second electronic device through the DMSDP service.
• That is, when the DMSDP service deployed on the second electronic device detects that the user starts and plays the second audio application, the first information is transmitted to the first electronic device.
  • Step S162 In response to obtaining the first information, the first electronic device obtains second focus information of the second audio application, places the second focus information on the top of the second focus stack, and notifies the first audio application of the loss of audio focus.
  • the first electronic device in response to obtaining the first information, obtains second focus information corresponding to the second audio application indicated by the first information, determines the second electronic device corresponding to the first information, and then determines the corresponding focus stack (a focus stack other than the first focus stack) based on the determined second electronic device, and adds the second focus information to the determined focus stack.
  • the first electronic device obtains the second focus information when the second audio application applies for audio focus based on the communication connection with the second electronic device.
• For example, the second focus information generated when the second audio application applies for audio focus is obtained through the DMSDP service.
• Placing the second focus information at the top of the second focus stack can proceed as follows: after the first electronic device obtains the second focus information, it creates a new focus request object that stores the content of the second focus information, and places the new focus request object at the top of the second focus stack.
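The focus-request-object construction described above can be sketched as follows; the FocusRequest class and the handler name are illustrative assumptions.

```python
class FocusRequest:
    def __init__(self, app, focus_type):
        self.app = app
        self.focus_type = focus_type

def on_first_information(stacks, device_id, app, focus_type):
    # Determine the focus stack associated with the reporting device
    # (always a stack other than the first focus stack).
    stack = stacks[device_id]
    lost = stack[-1].app if stack else None  # previous top loses audio focus
    # Wrap the received second focus information in a new focus request
    # object and place it on top of the determined stack.
    stack.append(FocusRequest(app, focus_type))
    return lost

stacks = {"tablet": [FocusRequest("music", "AUDIOFOCUS_GAIN")]}
loser = on_first_information(stacks, "tablet", "video", "AUDIOFOCUS_GAIN")
print(loser, stacks["tablet"][-1].app)
```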
  • the first audio application is transferred to the second electronic device, and another first audio application is transferred to another second electronic device.
  • the second audio application A is opened on the second electronic device and the audio is played
  • the second focus information A1 of the second audio application A is added to the second focus stack
  • the first focus information and the second focus information A1 of the first audio application are stored in the second focus stack.
  • the second audio application B is opened on another second electronic device and the audio is played
  • the second focus information B1 of the second audio application B is added to the third focus stack, and the first focus information and the second focus information B1 of another first audio application are stored in the third focus stack.
  • the first electronic device notifies the first audio application of the loss of audio focus in response to the second audio application obtaining the audio focus.
  • the first audio application can obtain the audio focus application type of the second audio application, and then pause, stop, or reduce the volume according to the audio focus application type.
  • the first electronic device transfers the content (Music 1) of the music application 400 to the second electronic device, and the second electronic device plays the music.
  • the user opens a second audio application (such as a video application) installed on the second electronic device, and based on the audio focus application type of the video application being AUDIOFOCUS_GAIN, the music application 400 pauses playing audio, and the second electronic device plays the audio of the video application.
• the user opens a second audio application installed on the second electronic device, which is a map application; based on the audio focus application type of the map application being AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK, the music application reduces its playing volume (but can still play), and at this time the second electronic device mixes the output, playing the navigation audio of the map application and the audio of the music application at the same time.
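The reactions tied to the winner's audio focus application type can be sketched as a simple mapping; the returned action names are assumptions, while the constants are the Android-style focus types cited in the text.

```python
def reaction_to_focus_loss(winner_type):
    # How the losing application reacts depends on the type of focus the
    # winning application applied for.
    if winner_type == "AUDIOFOCUS_GAIN":
        return "pause"  # e.g. a video application takes over playback
    if winner_type == "AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK":
        return "duck"   # lower the volume but keep playing (mixed output)
    if winner_type == "AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE":
        return "stop"   # e.g. a recording application takes over
    return "pause"      # conservative default for unknown types

print(reaction_to_focus_loss("AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK"))
```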
  • Embodiment 7 is based on embodiment 6, and the second audio application opened by the user loses the audio focus.
• Please refer to FIG. 17, which exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S171 The first electronic device obtains second information, where the second information is used to indicate that the second audio application has lost audio focus.
  • a communication connection is established between the first electronic device and the second electronic device, and the second electronic device can transmit second information to the first electronic device in response to the second audio application losing audio focus.
• For example, the second electronic device may transmit the second information to the first electronic device through the DMSDP service, and the first electronic device may obtain the second information from the second electronic device through the DMSDP service.
  • the second information is transmitted to the first electronic device.
• Situations in which the second audio application loses the audio focus include, but are not limited to, the following: another second audio application on the second electronic device applies to play audio through the second electronic device and successfully applies for the audio focus; the second audio application applied for a short-term audio focus and its audio playback is completed; or the second audio application is closed.
  • the first electronic device may stop streaming the content of the first audio application to the second electronic device.
  • Step S172 In response to obtaining the second information, the first electronic device places the first focus information of the first audio application on the top of the second focus stack, and notifies the first audio application to obtain the audio focus.
  • the first electronic device transfers the first audio application to the second electronic device for playback, and the first focus information of the first audio application is placed at the top of the second focus stack.
  • the second audio application on the second electronic device applies to play audio, and the second focus information of the second audio application is placed at the top of the second focus stack, and the first focus information is located below the second focus information.
  • the first electronic device moves the second focus information out of the second focus stack, and the first focus information of the first audio application is placed at the top of the second focus stack.
  • the first electronic device transfers the content of the music application to the second electronic device, and the second electronic device plays the audio of the music application.
  • the user opens the map application installed on the second electronic device, and the second focus information of the map application is at the top of the second focus stack, and the first focus information of the music application is below the second focus information.
  • the second electronic device plays the audio of the map application, and the map application applies for a short audio focus.
  • the map application releases the audio focus, that is, the map application loses the audio focus, and the second focus information at the top of the second focus stack is removed, and the first focus information of the music application is at the top of the stack.
  • the music application obtains the audio focus, and when the second electronic device receives the audio of the music application, the second electronic device can continue to play the audio of the music application.
  • a first electronic device streams multiple first audio applications to multiple second electronic devices.
  • the first electronic device obtains second information
  • the first electronic device determines a focus stack corresponding to the second information from multiple focus stacks in response to obtaining the second information, and removes second focus information corresponding to the second information from the determined focus stack.
  • the first audio application is transferred to the second electronic device, and another first audio application is transferred to another second electronic device.
  • the first electronic device determines from the second focus stack and the third focus stack that the focus stack corresponding to the second information is the second focus stack according to the second information, determines the second focus information of the second audio application corresponding to the second information, moves the second focus information in the second focus stack out, and then places the first focus information of the first audio application at the top of the second focus stack, notifying the first audio application to obtain the audio focus, and the second electronic device can play the audio of the first audio application.
  • the first electronic device determines from the second focus stack and the third focus stack that the focus stack corresponding to the second information is the third focus stack, determines the second focus information of the other second audio application corresponding to the second information, moves the second focus information of the other second audio application in the third focus stack out, places the first focus information of the other first audio application at the top of the third focus stack, notifies the other first audio application to obtain the audio focus, and then the other second electronic device can play the audio of the other first audio application.
• The second electronic device receives the audio of the transferred first audio application. If a second audio application on the second electronic device also requests to play, the second electronic device can decide which audio to play according to a preset strategy; for example, it can be set to play the audio corresponding to the most recent event. If the second electronic device first receives the audio of the transferred first audio application and the second audio application is opened later, the second electronic device gives priority to playing the audio of the second audio application. Correspondingly, if the audio of the second audio application is being played and the second electronic device then receives the audio of the transferred first audio application, the second electronic device gives priority to playing the audio of the first audio application.
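Steps S171–S172 can be sketched as follows: removing the losing application's entry restores the entry below it to the top of that device's stack, while other device stacks are untouched. All identifiers here are illustrative assumptions.

```python
def on_focus_lost(stacks, device_id, app):
    # S172: remove the losing application's focus entry from the stack of
    # the device that reported the loss; the entry now on top regains focus.
    stack = stacks[device_id]
    stack[:] = [e for e in stack if e["app"] != app]
    return stack[-1]["app"] if stack else None

stacks = {
    "tablet": [{"app": "music"}, {"app": "map"}],  # map currently holds focus
    "laptop": [{"app": "recorder"}],
}
regained = on_focus_lost(stacks, "tablet", "map")
print(regained)          # the streamed music application regains focus
print(stacks["laptop"])  # the laptop's stack is unaffected
```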
  • the first electronic device has migrated the first audio application to the second electronic device.
  • the user migrates the first audio application back to the first electronic device.
• Please refer to FIG. 18, which exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S181 The first electronic device obtains a second instruction, where the second instruction is used to instruct to migrate the first audio application back to the first electronic device.
  • the first electronic device detects that the user clicks on the icon of the first audio application, and the first electronic device obtains the second instruction.
  • the user performs a migration operation on the first audio application on the second electronic device, and the second electronic device transmits the second instruction to the first electronic device.
• For example, the DMSDP service deployed on the second electronic device transmits the second instruction to the first electronic device when it detects that the user performs a migration operation on the first audio application on the second electronic device.
  • Step S182 In response to the second instruction, the first electronic device moves the first focus information of the first audio application from the second focus stack to the top of the first focus stack, and notifies the first audio application to obtain the audio focus.
  • the first electronic device responds to the second instruction, determines the focus stack corresponding to the second instruction, then extracts the first focus information of the first audio application from the determined focus stack, and places the extracted first focus information in the first focus stack.
  • the first electronic device migrates the first audio application to the second electronic device
  • the first focus information of the first audio application is stored in the second focus stack
  • the first electronic device determines the second focus stack in response to the second instruction
  • the first electronic device moves the first focus information from the second focus stack to the first focus stack.
  • the first electronic device migrates the first audio application to another second electronic device
  • the first focus information of the first audio application is stored in the third focus stack
  • the first electronic device determines the third focus stack in response to the second instruction
  • the first electronic device moves the first focus information from the third focus stack to the first focus stack.
  • the first focus information in the first focus stack is placed at the top of the stack
  • the first audio application corresponding to the first focus information obtains permission to play audio, and the first electronic device can play the audio of the first audio application.
  • the first audio application can be migrated back to the first electronic device to play the audio.
  • the focus information of the multiple first audio applications is maintained by the multiple focus stacks on the first electronic device, respectively, to achieve separate management of the audio playback of the multiple first audio applications.
  • The following describes how the second electronic device manages the transferred first audio application, so as to achieve distributed audio management.
  • the second electronic device may maintain only one first local focus stack (similar to the first focus stack), and the first electronic device may maintain only one second local focus stack (that is, the first focus stack).
  • the first electronic device and the second electronic device both include a focus stack, and the first local focus stack of the second electronic device is used to store focus information of an audio application that plays audio through the second electronic device (including a second audio application installed on the second electronic device and a first audio application transferred from the first electronic device to the second electronic device).
  • the second local focus stack of the first electronic device is used to store focus information of an audio application that applies to play audio through the first electronic device.
  • the focus stack can ensure that only one audio application corresponding to the focus information obtains the audio focus at a time based on a preset maintenance order.
  • the focus stack in the embodiment of the present application (such as the first local focus stack and the second local focus stack) can be implemented not only as a stack, but also as an array, a queue or a map, etc., and the present application does not make specific limitations on this.
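The stack-like maintenance described above can be sketched as follows. This is a minimal illustration in Python (the class name, method names, and app-name strings are assumptions for the example, not part of the application), showing that whichever entry sits at the top of the structure is treated as the current audio-focus holder:

```python
class FocusStack:
    """Minimal sketch of a focus stack: only the entry at the top
    of the stack is considered the current audio-focus holder."""

    def __init__(self):
        # Backed by a list here; as noted above, an array, queue, or map
        # could serve the same purpose.
        self._entries = []

    def push(self, focus_info):
        # Placing focus info on top implicitly makes its app the focus holder.
        self._entries.append(focus_info)

    def remove(self, focus_info):
        # Removing an entry may promote the one below it to focus holder.
        self._entries.remove(focus_info)

    def top(self):
        return self._entries[-1] if self._entries else None


# Hypothetical usage: plain strings stand in for real focus-information records.
stack = FocusStack()
stack.push("music_app")
stack.push("navigation_app")
assert stack.top() == "navigation_app"  # only one app holds focus at a time
stack.remove("navigation_app")
assert stack.top() == "music_app"       # the previous holder regains focus
```

The key property is that focus ownership is positional: no separate "current holder" variable is needed, because the top of the stack is the holder by definition.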
  • Please refer to FIG. 19, which exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S191 The first electronic device obtains a first instruction.
  • The situations in which the first electronic device obtains the first instruction include, but are not limited to, the following: the first electronic device detects that the user transfers the first audio application that is playing audio, and the first electronic device obtains the first instruction; alternatively, after the first electronic device transfers a first audio application that has not yet played audio to the second electronic device, the second electronic device detects that the transferred first audio application requests to play audio, the second electronic device transmits the information that the transferred first audio application requests to play audio to the first electronic device, and the first electronic device obtains the first instruction.
  • the relevant content of the first instruction can refer to the above embodiment and will not be repeated here.
  • Step S192 The first electronic device notifies the first audio application to obtain the audio focus in response to the first instruction, and transmits the first focus information of the first audio application to the second electronic device.
  • the first electronic device responds to the first instruction and notifies the first audio application to obtain the audio focus, and the first audio application obtains the permission to play the audio.
  • the first electronic device can transmit the content of the first audio application (such as audio, interface, first focus information when applying for audio focus, etc.) to the second electronic device based on the communication connection between it and the second electronic device.
  • Step S193 The second electronic device creates first simulated focus information based on the first focus information, and places the first simulated focus information at the top of the first local focus stack, so that when the content of the first audio application is transferred to the second electronic device, the audio of the first audio application is played through the second electronic device.
  • the second electronic device simulates the first audio application according to the first focus information to apply for the first simulated focus information to the audio framework of the second electronic device, and adds the first simulated focus information to the first local focus stack of the second electronic device.
  • The first simulated focus information is similar to the first focus information; at least the audio focus request type in the first focus information and that in the first simulated focus information are consistent.
  • the second electronic device places the first simulated focus information at the top of the first local focus stack, notifies the audio application corresponding to other focus information in the first local focus stack to lose the audio focus, and the audio application that loses the audio focus pauses, stops, or lowers the volume according to the corresponding audio focus application type in the first simulated focus information.
  • the second electronic device can inform the first electronic device of the information that the first audio application obtains the audio focus according to its communication connection with the first electronic device. Then the first audio application has the authority to play audio and can play audio. When the second electronic device receives the audio of the first audio application, the second electronic device plays the audio of the first audio application.
  • the first electronic device transmits the first focus information corresponding to the first audio application indicated by the first instruction to the second electronic device in response to the first instruction.
  • The second electronic device can create first simulated focus information based on the first focus information and add it to the first local focus stack of the second electronic device; in this way, the second electronic device indirectly manages the first audio application transferred to it.
  • the first focus information corresponding to the first audio application that applies to play audio on the first electronic device will be added to the second local focus stack of the first electronic device.
  • the first local focus stack and the second local focus stack are independent of each other, so the first audio application that plays audio through the second electronic device will not affect other first audio applications that play audio through the first electronic device.
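The mutual independence of the two local focus stacks can be illustrated with a small sketch (device and app names are hypothetical): pushing new focus information onto one device's stack changes that device's focus holder without touching the other's.

```python
# Sketch: two independent per-device focus stacks, so focus changes on one
# device do not disturb the focus holder on the other. Names are illustrative.
device_a_stack = ["app_playing_on_A"]       # second local focus stack (first device)
device_b_stack = ["app_transferred_to_B"]   # first local focus stack (second device)

# A new app starts playing on device B:
device_b_stack.append("new_app_on_B")

# Device B's focus holder changed; device A's did not.
assert device_b_stack[-1] == "new_app_on_B"
assert device_a_stack[-1] == "app_playing_on_A"
```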
  • the difference from the ninth embodiment is that after the first electronic device transfers the first audio application, a new second audio application is opened on the second electronic device.
  • Please refer to FIG. 20, which exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S201 The second electronic device obtains a fourth operation, where the fourth operation indicates playing audio of a second audio application through the second electronic device.
  • the second electronic device installs the second audio application, and the second electronic device can obtain a fourth operation when monitoring the user's operation of starting and playing the second audio application.
  • the fourth operation can be an operation such as clicking, touching, long pressing, and voice, for example, the fourth operation can be clicking the second audio application play control.
  • Step S202 In response to the fourth operation, the second electronic device places the second focus information of the second audio application on the top of the first local focus stack, and notifies the second audio application to obtain the audio focus.
  • the second electronic device stores the first simulated focus information in the first local focus stack of the second electronic device.
  • the second electronic device plays the audio of the first audio application.
  • the second electronic device adds the second focus information of the second audio application to the first local focus stack; the second focus information of the second audio application is at the top of the stack, and the first simulated focus information is below it.
  • the second electronic device notifies the second audio application to obtain the audio focus.
  • Step S203 the first electronic device obtains first information.
  • the first information is used to instruct the second electronic device to play the audio of the second audio application.
  • the content of the first information obtained by the first electronic device may refer to the above-mentioned embodiment 6.
  • After the second electronic device places the second focus information of the second audio application at the top of the first local focus stack, the second audio application is notified that it has obtained the audio focus.
  • the second electronic device notifies the first electronic device that the second audio application obtains the audio focus based on the communication connection with the first electronic device, and the first electronic device obtains the first information.
  • Step S204 In response to obtaining the first information, the first electronic device notifies the first audio application that the audio focus has been lost.
  • the second electronic device plays the audio of the second audio application.
  • the first audio application responds to the loss of audio focus by obtaining the audio focus request type of the second audio application, and then pausing playback, stopping playback, or reducing the volume according to the audio focus request type.
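The pause/stop/duck reaction keyed on the focus request type might look like the following sketch. The type names mirror Android's audio-focus gain constants, but the dispatch function itself is an illustrative assumption, not code from the application:

```python
# Sketch of how an app that loses audio focus might react, keyed on the
# audio focus request type of the app that took the focus.
def on_focus_lost(requester_focus_type: str) -> str:
    if requester_focus_type == "GAIN":                     # permanent takeover
        return "stop"
    if requester_focus_type == "GAIN_TRANSIENT":           # temporary takeover
        return "pause"
    if requester_focus_type == "GAIN_TRANSIENT_MAY_DUCK":  # may keep playing quietly
        return "duck"   # i.e., lower the volume
    return "pause"      # conservative default for unknown types

assert on_focus_lost("GAIN") == "stop"
assert on_focus_lost("GAIN_TRANSIENT_MAY_DUCK") == "duck"
```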
  • the difference from the ninth and tenth embodiments is that the first electronic device has migrated the first audio application to the second electronic device, and in the eleventh embodiment the user migrates the first audio application back to the first electronic device.
  • Please refer to FIG. 21, which exemplarily introduces another audio control method provided in an embodiment of the present application.
  • Step S211 the first electronic device obtains first simulated focus information from the second electronic device in response to the migration instruction.
  • the migration instruction instructs to migrate the first audio application back to the first electronic device.
  • the first electronic device detects that the user clicks on the icon of the first audio application and obtains a migration instruction (corresponding to the fourth instruction of the above embodiment).
  • the user performs a migration operation on the first audio application on the second electronic device (such as clicking to migrate the first audio application to the first electronic device), and the second electronic device transmits the migration instruction to the first electronic device.
  • The first electronic device obtains the first simulated focus information from the corresponding second electronic device according to the first audio application indicated in the migration instruction to be migrated back.
  • the first audio application is transferred to the second electronic device for playing, and another first audio application is transferred to another second electronic device for playing.
  • the migration instruction received by the first electronic device indicates migrating back the first audio application on the second electronic device
  • the first simulated focus information of the first audio application is obtained from the second electronic device.
  • the migration instruction indicates migrating back another first audio application on another second electronic device
  • the first simulated focus information of the other first audio application is obtained from the other second electronic device.
  • Step S212 The first electronic device places the first simulated focus information at the top of the second local focus stack, and notifies the first audio application to obtain the audio focus, and the first electronic device plays the audio of the first audio application.
  • The first electronic device obtains the first simulated focus information, and then creates new first simulated focus information based on the obtained information.
  • The new first simulated focus information is added to the second local focus stack of the first electronic device.
  • The first electronic device notifies the first audio application corresponding to the new first simulated focus information that it has obtained the audio focus; the first audio application then has the permission to play audio, and the first electronic device plays the audio of the first audio application.
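The migrate-back step described above (moving the simulated focus information from the second device's stack onto the first device's local stack, after which the app regains focus) can be sketched as follows; all names are illustrative:

```python
# Sketch of the migrate-back step: the first device copies the simulated
# focus information out of the second device's stack into its own local
# stack, and the corresponding app becomes the focus holder again.
def migrate_back(remote_stack: list, local_stack: list, entry: str) -> None:
    remote_stack.remove(entry)   # drop the simulated entry on the remote device
    local_stack.append(entry)    # new top of the local stack -> app regains focus


second_device_stack = ["simulated:first_audio_app"]  # first local focus stack
first_device_stack = []                              # second local focus stack

migrate_back(second_device_stack, first_device_stack, "simulated:first_audio_app")

assert first_device_stack[-1] == "simulated:first_audio_app"
assert second_device_stack == []
```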
  • the first electronic device in Embodiments 9 to 11 may also maintain two or more focus stacks, and this application does not make any specific limitation on this.
  • The embodiments of the present application can also be applied to the management of other types of focus, such as window focus.
  • the window focus corresponding to the first application A is stored in the first bucket.
  • the window focus corresponding to the first application B can be stored in the second bucket.
  • the first bucket is used to store the window focus of the first application transferred to the second electronic device.
  • the second bucket is used to store the window focus of the first application on the first electronic device.
  • the window focuses corresponding to the N first applications can be stored in N first buckets respectively, and the N first buckets correspond to the second electronic devices to which they are transferred.
  • the focus of these multiple applications can be better managed.
  • FIG. 22 exemplarily introduces the structure of an electronic device 220 provided in the present application.
  • the electronic device 220 may be the first electronic device or the second electronic device mentioned above.
  • the electronic device 220 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, etc.
  • The structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 220.
  • the electronic device 220 may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated into one or more processors.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • the wireless communication function of the electronic device 220 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the mobile communication module 150 may provide solutions for wireless communications including 2G/3G/4G/5G, etc., applied to the electronic device 220.
  • at least some functional modules of the mobile communication module 150 may be provided in the processor 110.
  • At least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.
  • the wireless communication module 160 can provide wireless communication solutions for application in the electronic device 220, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared technology (IR), etc.
  • antenna 1 of electronic device 220 is coupled to mobile communication module 150, and antenna 2 is coupled to wireless communication module 160, so that electronic device 220 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 220 implements the display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light-emitting diode or an active matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), Miniled, MicroLed, Micro-oLed, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 220 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 220 can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the electronic device 220 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 220.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos can be stored in the external memory card.
  • the internal memory 121 can be used to store computer executable program codes, and the executable program codes include instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 220 by running the instructions stored in the internal memory 121.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the data storage area may store data created during the use of the electronic device 220 (such as audio data, a phone book, etc.), etc.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the electronic device 220 can implement audio functions such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be arranged in the processor 110, or some functional modules of the audio module 170 can be arranged in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 220 can listen to music or listen to a hands-free call through the speaker 170A.
  • The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the electronic device 220 receives a call or voice message, the voice can be heard by placing the receiver 170B close to the ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 220 can be provided with at least one microphone 170C. In other embodiments, the electronic device 220 can be provided with two microphones 170C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the electronic device 220 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify the sound source, realize directional recording function, etc.
  • the earphone interface 170D is used to connect a wired earphone.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the sensor module 180 may include a pressure sensor, a gyro sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the electronic device 220 may also include a charging management module, a power management module, a battery, buttons, indicators, and one or more SIM card interfaces, etc., and the embodiments of the present application do not impose any restrictions on this.
  • the embodiment of the present application also provides a computer program product.
  • the computer program product When the computer program product is run on a computer, the computer is enabled to execute the above-mentioned related steps to implement the audio control method in the above-mentioned method embodiments.
  • An embodiment of the present application further provides a computer storage medium, including computer instructions.
  • the computer instructions When the computer instructions are executed on an electronic device, the electronic device executes the audio control method of the above embodiment.
  • the electronic device, computer storage medium, computer program product or chip system provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding methods provided above and will not be repeated here.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules or units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another device, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component shown as a unit may be one physical unit or multiple physical units, that is, it may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • The technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of the present application.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), disk or optical disk and other media that can store program code.


Abstract

The present application discloses an audio control method, a storage medium, a program product, and an electronic device, which separately manage audio applications that play audio through different electronic devices by separately managing focus information, so as to achieve a distributed audio experience. The audio control method is applied to a first electronic device on which a first audio application is installed. The first electronic device includes a first focus stack, which is used to store the focus information corresponding to audio applications that play audio through the first electronic device. The audio control method includes: creating a second focus stack, which is used to store the focus information corresponding to audio applications that play audio through a second electronic device; and, in response to a first instruction, placing first focus information of the first audio application at the top of the second focus stack and notifying the first audio application that it has obtained the audio focus, so that when the content of the first audio application is transferred to the second electronic device, the audio of the first audio application is played through the second electronic device.

Description

Audio control method, storage medium, program product, and electronic device
This application claims priority to the Chinese patent application No. 202211358862.2, filed with the China National Intellectual Property Administration on November 1, 2022 and entitled "Audio control method, storage medium, program product, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to an audio control method, a storage medium, a program product, and an electronic device.
Background Art
The existing audio output processing method is as follows: when device A is outputting the audio of application 1 and the user starts application 2 on device A to play the audio of application 2, device A turns off the audio of application 1 or lowers its volume in order to output the audio of application 2.
In a device-interconnection scenario, based on the existing audio output processing method, when device A transfers application 1 to device B and plays the audio of application 1 through device B, if the user starts application 2 on device A to play the audio of application 2, the audio of application 1 on device B will be affected: the audio of application 1 on device B is turned off or its volume is lowered.
Summary of the Invention
The present application provides an audio control method, a storage medium, a program product, and an electronic device, which separately manage focus information through a first focus stack and a second focus stack and, by separately managing the focus information, separately manage audio applications that play audio through different electronic devices, so that the audio played through the second electronic device and the audio played through the first electronic device are independent of each other, thereby achieving a distributed audio experience.
In a first aspect, the present application provides an audio control method applied to a first electronic device on which a first audio application is installed. The first electronic device includes a first focus stack, where the first focus stack is used to store the focus information corresponding to audio applications that play audio through the first electronic device. The audio control method includes: creating a second focus stack, where the second focus stack is used to store the focus information corresponding to audio applications that play audio through a second electronic device; and, in response to a first instruction, placing first focus information of the first audio application at the top of the second focus stack and notifying the first audio application that it has obtained the audio focus, so that when the content of the first audio application is transferred to the second electronic device, the audio of the first audio application is played through the second electronic device.
Compared with the existing audio output processing method, which maintains only one focus stack, the audio control method of the present application creates and maintains at least two focus stacks. Taking the case where the first electronic device maintains a first focus stack and a second focus stack as an example, both stacks are used to store the focus information with which audio applications apply for the audio focus. The difference between them is that the audio of the applications whose focus information is stored in the first focus stack is played through the first electronic device, while the audio of the applications whose focus information is stored in the second focus stack is played through the second electronic device. The first focus stack and the second focus stack thus manage the focus information separately; by separately managing the focus information, the audio applications that play audio through different electronic devices are managed separately, so that the audio played through the second electronic device and the audio played through the first electronic device are independent of each other, achieving a distributed audio experience.
In a possible implementation, the audio control method further includes: obtaining first information, where the first information is used to indicate that the audio of a second audio application is to be played through the second electronic device; and, in response to obtaining the first information, obtaining second focus information of the second audio application, placing the second focus information at the top of the second focus stack, and notifying the first audio application that it has lost the audio focus. Before responding to obtaining the first information, the method further includes: obtaining the first information from the second electronic device through a distributed fusion perception platform service. With this technical solution, after the first electronic device transfers the first audio application to the second electronic device for playback, the second audio application can play normally on the second electronic device.
In a possible implementation, the audio control method further includes: obtaining second information, where the second information is used to indicate that the second audio application has lost the audio focus; and, in response to obtaining the second information, placing the first focus information at the top of the second focus stack and notifying the first audio application that it has obtained the audio focus. Before responding to obtaining the second information, the method further includes: obtaining the second information from the second electronic device through the distributed fusion perception platform service. With this technical solution, after the first electronic device transfers the first audio application to the second electronic device for playback, if the second audio application applies for playback while the first audio application has not finished playing, the second audio application can play normally on the second electronic device, and when the second audio application loses the audio focus (for example, after it finishes playing), the second electronic device can continue to play the first audio application.
In a possible implementation, in response to the first instruction, placing the first focus information of the first audio application at the top of the second focus stack includes: obtaining a first operation; in response to the first operation, determining, from the first focus stack and the second focus stack, the first focus stack corresponding to the first operation, and placing the first focus information of the first audio application at the top of the first focus stack, so as to play the audio of the first audio application through the first electronic device; obtaining a second operation, where the second operation is used to indicate that the first audio application that is playing audio is to be transferred to the second electronic device; and, in response to the first instruction corresponding to the second operation, determining, from the first focus stack and the second focus stack, the second focus stack corresponding to the first instruction, and moving the first focus information of the first audio application from the first focus stack to the top of the second focus stack. With this technical solution, the first electronic device can transfer a first audio application that is playing audio to the second electronic device for playback.
In a possible implementation, before responding to the first instruction, the method further includes: in response to a third operation, transferring the first audio application to the second electronic device; and, when it is detected that the first audio application transferred to the second electronic device applies to play audio, obtaining the first instruction. With this technical solution, the first electronic device can transfer a first audio application that has not yet played audio to the second electronic device for playback.
In a possible implementation, the audio control method further includes: obtaining a second instruction, where the second instruction is used to indicate that the first audio application is to be migrated back to the first electronic device; and, in response to the second instruction, moving the first focus information from the second focus stack to the top of the first focus stack and notifying the first audio application that it has obtained the audio focus. With this technical solution, the first electronic device can migrate back the first audio application that was migrated to the second electronic device, and the migrated-back first audio application plays audio on the first electronic device.
In a possible implementation, creating the second focus stack includes: obtaining a device identifier of the second electronic device; and creating the second focus stack according to the device identifier. With this technical solution, a corresponding focus stack (a focus stack other than the first focus stack) can be created for the second electronic device interconnected with the first electronic device.
In a possible implementation, when N device identifiers are obtained, where N is an integer greater than or equal to 2, creating the second focus stack according to the device identifier includes: creating N second focus stacks according to the N device identifiers, respectively. With this technical solution, when two or more second electronic devices are interconnected with the first electronic device, a corresponding focus stack (a focus stack other than the first focus stack) can be created for each of them.
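The per-device-identifier stack creation just described can be sketched as a map from device identifier to focus stack; the identifiers and container choice below are assumptions for illustration:

```python
# Sketch: one extra focus stack is created for each interconnected device,
# keyed by its device identifier. The "local" key stands in for the first
# focus stack of the first device itself; all names are illustrative.
focus_stacks = {"local": []}

def create_stack_for(device_id: str) -> None:
    # setdefault makes creation idempotent: an existing stack is kept.
    focus_stacks.setdefault(device_id, [])

# N = 2 interconnected second devices, each gets its own second focus stack.
for dev in ["tablet-01", "tv-02"]:
    create_stack_for(dev)

assert set(focus_stacks) == {"local", "tablet-01", "tv-02"}
```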
In a second aspect, the present application provides an audio control method applied to a first electronic device and a second electronic device that are communicatively connected. A first audio application is installed on the first electronic device, and the second electronic device includes a first local focus stack used to store the focus information of audio applications that play audio through the second electronic device. The method includes: the first electronic device obtains a first instruction; in response to the first instruction, the first electronic device notifies the first audio application that it has obtained the audio focus and transmits first focus information of the first audio application to the second electronic device; the second electronic device creates first simulated focus information based on the first focus information and places the first simulated focus information at the top of the first local focus stack, so that when the content of the first audio application is transferred to the second electronic device, the audio of the first audio application is played through the second electronic device.
Compared with the existing audio output processing method, in which the focus information of all audio applications on an electronic device is stored in the focus stack maintained by that device, the present application stores the focus information of an audio application in the local focus stack of the electronic device that plays that application. When it is desired to play, through electronic device A, the audio of a first audio application on electronic device B, the focus information with which the first audio application applies for the audio focus is stored in the first local focus stack of electronic device B, and the first local focus stack maintains the focus information corresponding to the transferred first audio application. Correspondingly, when it is desired to play, through electronic device A, the audio of a second audio application on electronic device B, the focus information with which the second audio application applies for the audio focus is stored in the second local focus stack of electronic device A, and the second local focus stack maintains the focus information corresponding to the transferred second audio application. In this way, by managing the focus information separately, audio applications that play audio through different electronic devices can be managed separately, so that the audio played through the second electronic device and the audio played through the first electronic device are independent of each other, so as to achieve a distributed audio experience.
In a possible implementation, the second electronic device includes a second audio application, and the audio control method further includes: the second electronic device obtains a fourth operation, where the fourth operation indicates that the audio of the second audio application is to be played through the second electronic device; in response to the fourth operation, the second electronic device places second focus information of the second audio application at the top of the first local focus stack and notifies the second audio application that it has obtained the audio focus; and, in response to the second audio application obtaining the audio focus, the first electronic device notifies the first audio application that it has lost the audio focus. With this technical solution, after the first electronic device transfers the first audio application to the second electronic device for playback, the second audio application can play normally on the second electronic device.
In a possible implementation, the first electronic device includes a second local focus stack, and the audio control method further includes: in response to a migrate-back instruction, the first electronic device obtains the first simulated focus information from the second electronic device; the first electronic device places the first simulated focus information at the top of the second local focus stack and notifies the first audio application that it has obtained the audio focus, and the first electronic device plays the audio of the first audio application. With this technical solution, the first electronic device can migrate back the first audio application that was migrated to the second electronic device, and the migrated-back first audio application plays audio on the first electronic device.
第三方面,本申请提供了一种计算机存储介质,包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行如上述实现方式中任一项音频控制方法。
第四方面,本申请提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行如上述实现方式中任一项音频控制方法。
第五方面,本申请提供了一种电子设备,电子设备包括处理器和存储器,存储器用于存储指令,处理器用于调用存储器中的指令,使得所述电子设备执行如上述实现方式中任一项音频控制方法。
上述第二方面、第三方面、第四方面和第五方面所获得的技术效果与第一方面中对应的技术手段获得的技术效果近似,在这里不再赘述。
本申请提供的技术方案带来的有益效果至少包括:
通过分开管理焦点信息,可以分开管理通过不同电子设备播放音频的音频应用,进而实现通过第二电子设备播放的音频与通过第一电子设备播放的音频相互独立,以便实现分布式音频体验。
附图说明
图1为本申请实施例提供的一种音频系统结构示意图。
图2为本申请实施例提供的电子设备的软件结构框图。
图3为本申请实施例一提供的一种音频控制方法流程示意图。
图4A至图4G为对应实施例一的一种应用场景示意图。
图5为对应图4A至图4G第一电子设备的操作流程示意图。
图6为本申请实施例二提供的一种音频控制方法流程示意图。
图7A至图7D为对应实施例二的一种应用场景示意图。
图8为对应图7A至图7D第一电子设备的操作流程示意图。
图9为本申请实施例三提供的一种音频控制方法流程示意图。
图10A至图10C为对应实施例三的一种应用场景示意图。
图11为对应图10A至图10C第一电子设备的操作流程示意图。
图12为本申请实施例四提供的一种音频控制方法流程示意图。
图13A至图13B为对应实施例四的一种应用场景示意图。
图14为对应图13A至图13B第一电子设备的操作流程示意图。
图15为本申请实施例五提供的一种音频控制方法流程示意图。
图16为本申请实施例六提供的一种音频控制方法流程示意图。
图17为本申请实施例七提供的一种音频控制方法流程示意图。
图18为本申请实施例八提供的一种音频控制方法流程示意图。
图19为本申请实施例九提供的一种音频控制方法流程示意图。
图20为本申请实施例十提供的一种音频控制方法流程示意图。
图21为本申请实施例十一提供的一种音频控制方法流程示意图。
图22为本申请实施例提供的一种电子设备硬件结构示意图。
具体实施方式
本申请中所涉及的多个,是指两个或两个以上。另外,需要理解的是,在本申请的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
本申请中所涉及的多个,是指两个或两个以上。在本申请实施例中,“示例性”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性”或者“例如”等词旨在以具体方式呈现相关概念。
在设备互联场景中,用户希望实现分布式音频体验,如设备A将应用1的音频传输至设备B,在设备A上播放应用2的音频与在设备B上播放应用1的音频可以互不影响。基于现有的音频输出处理方式无法实现分布式音频体验。发明人在实施本申请时发现,现有的音频输出处理方式是根据音频焦点(Audio Focus)抢占机制管理音频输出。例如在安卓开放源代码项目(Android Open Source Project,AOSP)的设计中设置有音频焦点抢占机制,各个音频应用要播放音频就需要申请音频焦点,获得音频焦点的音频应用具有播放音频的权限。但同一设备上所有音频应用申请音频焦点的焦点信息均放在同一音频焦点栈中,导致无法实现分布式音频体验。
具体地,当设备A上的音频应用1成功申请到音频焦点时,在音频焦点栈的栈顶添加音频应用1申请音频焦点的焦点信息1,通知音频应用1获得音频焦点,音频应用1具有播放音频权限。设备A将音频应用1流转至设备B,通过设备B播放音频应用1的音频。此时用户启动设备A上的音频应用2以播放音频,音频应用2申请音频焦点。当音频应用2成功申请到音频焦点时,在音频焦点栈的栈顶添加音频应用2申请音频焦点的焦点信息2,音频应用1的焦点信息不再处于栈顶位置。然后通知音频应用2获得音频焦点,并通知音频焦点栈中其他焦点信息所对应的音频应用(如音频应用1)其丢失音频焦点,音频应用1会因为丢失音频焦点而停止播放或暂停播放,或降低音量。
鉴于此,本申请提供一种音频控制方法,可以适用于设备互联场景,实现分布式音频体验。本申请的基本原理:在多个电子设备互联场景中,对于同一电子设备上的多个音频应用,当该多个音频应用通过不同电子设备播放音频时,分开管理该多个音频应用的音频播放。以第一电子设备上安装第一音频应用1、2为例,当第一音频应用1的音频通过第一电子设备播放,第二音频应用2的音频通过第二电子设备播放时,分开管理第一音频应用1与第一音频应用2的音频播放,如将第一音频应用1申请音频焦点的焦点信息A与第一音频应用2申请音频焦点的焦点信息B分开管理,避免将焦点信息A和焦点信息B均存放至同一个用于维护音频播放秩序的管理桶(如音频焦点栈),进而通过分开管理焦点信息A和焦点信息B来实现分开管理第一音频应用1、2的音频播放。
例如,第一电子设备包括第一焦点栈和第二焦点栈,第一焦点栈和第二焦点栈均用于存放音频应用申请音频焦点的焦点信息。第一焦点栈与第二焦点栈的区别在于,第一焦点栈中存放的焦点信息所对应的音频应用的音频是通过第一电子设备播放,第二焦点栈中存放的焦点信息所对应的音频应用的音频是通过第二电子设备播放。换句话说,当希望通过第一电子设备播放音频应用的音频时,则将该音频应用申请音频焦点的焦点信息存放至第一焦点栈。当希望通过第二电子设备播放音频应用的音频时,则将该音频应用申请音频焦点的焦点信息存放至第二焦点栈。
又例如,当希望通过电子设备A播放电子设备B上的第一音频应用的音频时,将该第一音频应用申请音频焦点的焦点信息存放至电子设备B的第一本地焦点栈,由第一本地焦点栈来维护流转过来的第一音频应用所对应的焦点信息。相应地,当希望通过电子设备A播放电子设备B上的第二音频应用的音频时,将该第二音频应用申请音频焦点的焦点信息存放至电子设备A的第二本地焦点栈,由第二本地焦点栈来维护流转过来的第二音频应用所对应的焦点信息。
如此,通过分开管理焦点信息,可以分开管理通过不同电子设备播放音频的音频应用,进而实现通过第二电子设备播放的音频与通过第一电子设备播放的音频相互独立,以便实现分布式音频体验。
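上述“分开管理焦点信息”的基本原理,可以用如下一段简化的Python示意代码帮助理解(并非本申请的实际实现,类名、方法名均为说明用途而假设;仅体现“两个焦点栈相互独立、各自栈顶应用分别获得音频焦点”这一要点):

```python
# 简化示意:第一电子设备同时维护第一焦点栈与第二焦点栈,
# 两个栈相互独立,各自栈顶的音频应用分别获得音频焦点。
class FocusManager:
    def __init__(self):
        self.first_stack = []   # 第一焦点栈:通过第一电子设备播放音频的应用
        self.second_stack = []  # 第二焦点栈:通过第二电子设备播放音频的应用

    def request_focus(self, app, play_on_remote=False):
        # 根据播放设备确定对应的焦点栈,并将焦点信息置于栈顶
        stack = self.second_stack if play_on_remote else self.first_stack
        stack.append(app)
        return app

    def focus_owner(self, remote=False):
        # 栈顶焦点信息所对应的应用即为持有音频焦点的应用
        stack = self.second_stack if remote else self.first_stack
        return stack[-1] if stack else None


fm = FocusManager()
fm.request_focus("音乐APP", play_on_remote=True)   # 音乐流转至第二电子设备播放
fm.request_focus("录音APP", play_on_remote=False)  # 录音在第一电子设备播放
# 两个应用可同时持有各自栈的音频焦点,互不影响
```

可见,两个焦点栈互不干扰,流转出去的应用与本机播放的应用可以同时“处于栈顶”,这正是分布式音频体验得以实现的关键。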
请参阅图1,示例性介绍本申请实施例提供的一种音频系统100。
如图1所示,音频系统100包括第一电子设备101和第二电子设备102。第一电子设备101与第二电子设备102之间建立通信连接。第一电子设备101和第二电子设备102可以借助建立的通信连接,实现第一电子设备101与第二电子设备102之间的信息传输。其中,第一电子设备101与第二电子设备102之间传输的信息包括但不限于应用的内容、应用相关参数(例如音频应用申请音频焦点的焦点信息)、视频数据、音频数据和控制指令等。
第一电子设备101与第二电子设备102之间可通过有线方式通信,也可以通过无线方式通信。示例性地,第一电子设备101与第二电子设备102之间可以使用通用串行总线(Universal Serial Bus,USB)建立有线连接。又例如,第一电子设备101与第二电子设备102之间可以通过全球移动通讯系统(global system for mobile communications,GSM)、通用分组无线服务(general packet radio service,GPRS)、码分多址接入(code division multiple access,CDMA)、宽带码分多址(wideband code division multiple access,WCDMA)、长期演进(Long Term Evolution,LTE)、蓝牙、无线保真(wireless fidelity,Wi‐Fi)、近场通信(Near Field Communication,NFC)、基于互联网协议的语音通话(voice over Internet protocol,VoIP)、支持网络切片架构的通信协议建立无线连接。本申请对此不作具体限定。
在本申请实施例中,第一电子设备101为输出音频应用的内容的设备。第二电子设备102为接收传输过来的音频应用的内容的设备。如第一电子设备101将安装于其上的音频应用的内容流转至第二电子设备102。
在本申请实施例中,可以通过应用流转技术实现音频应用的内容在多个电子设备之间流转。应用流转技术是将一个或多个电子设备上当前运行的应用的内容(如画面、文字或音频等)传输到另一个或多个电子设备,以使其运行该应用的内容;例如,将第一电子设备101上运行的第一音频应用的音频传输至第二电子设备102,以使第二电子设备102可以播放第一音频应用的音频。此外,该应用流转技术可以包括应用投屏技术、应用接力(handoff)技术、应用分布技术等。其中,应用投屏技术是将一个电子设备上运行的应用的内容投射到另一电子设备的显示屏或显示介质上进行显示。应用接力技术是一种将一个电子设备上运行的应用的内容存储、传递或者共享给另一个电子设备的技术。应用分布技术是一种在一个电子设备上运行某个应用的后端(如为用户接口UI界面的展示业务逻辑功能做处理),而在另一个电子设备上运行该应用的前端(如用户接口UI界面),并且需要实时访问该应用的后端的技术。
对于音频系统100中的某一电子设备,该电子设备可以将安装于其上的音频应用的内容流转出去,同时也可以接收音频系统100中其他电子设备流转过来的音频应用的内容。当电子设备将安装于其上的音频应用的内容流转出去时,则该电子设备为第一电子设备101。当电子设备接收其他电子设备流转过来的音频应用的内容时,则该电子设备为第二电子设备102。
示例性地,第一电子设备101具体可以为手机、音箱、平板、电视(也可称为智能电视、智慧屏或大屏设备)、笔记本电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、手持计算机、上网本、个人数字助理(Personal Digital Assistant,PDA)、可穿戴电子设备、车载设备、虚拟现实设备等具有音频输入输出功能的电子设备,本申请实施例对此不做任何限制。
示例性地,第二电子设备102除了可以是蓝牙耳机、有线耳机等传统的音频输出设备外,还可以是手机、平板、笔记本电脑、电视、音箱或车载设备等具有音频输入输出功能的电子设备,本申请实施例对此不做任何限制。
需要说明的是,图1所示的音频系统100中第一电子设备101的数量和类型以及第二电子设备102的数量和类型仅为示例,本申请对此不作具体限定。本申请音频系统100的第二电子设备的数量为N,N为大于或等于1的整数。
上述第一电子设备101或第二电子设备102的软件系统可以采用分层架构、事件驱动架构、微核架构、微服务架构或云架构。下面本申请实施例以分层架构的安卓(Android)系统为例,示例性说明电子设备(第一电子设备101或第二电子设备102)的软件结构。当然,在其他操作系统中,只要各个功能模块实现的功能和本申请的实施例类似,也可以实现本申请实施例。
请参阅图2,本申请实施例提供的电子设备的软件结构。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为五层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,硬件抽象层(hardware abstraction layer,HAL)以及内核层。应用程序层可以包括一系列应用程序包。
如图2所示,应用程序层中可以安装通话,导航,浏览器,相机,日历,地图,蓝牙,游戏、音乐,视频等应用程序(application,APP)。
应用程序层中安装的应用包括音频应用。其中,音频应用为具有音频功能,可为用户提供音频内容的应用程序。音频应用既可以是设备出厂自带的音频应用,例如华为音乐,也可以是第三方发布的音频应用。
示例性地,音频应用可以为音乐APP、相机APP、视频APP、地图APP或录音APP等。音乐APP可以播放音乐。相机APP拍照时可以输出系统预设的快门声音。视频APP播放视频的同时可输出与视频画面对应的音频。地图APP开启导航功能后可输出导航语音。录音APP可以播放预先录制的音频,本申请对音频应用的具体类型不作具体限定。
在本申请实施例中,音频应用在开始播放音频前,需要向音频框架发送申请音频焦点的请求,当音频应用成功申请到音频焦点时,音频应用获得音频焦点,则音频应用获得播放音频的权限,反之未获得播放音频的权限。
在一些实施例中,音频应用调用函数requestAudioFocus()向音频框架申请音频焦点,如向音频框架中的音频管理器(图未示)申请音频焦点。音频应用向音频框架申请音频焦点时,需要向音频框架提供申请音频焦点的相关信息(即焦点信息)。音频框架接收到音频应用申请音频焦点的请求后,构建相应的焦点请求对象,并将焦点信息都保存在焦点请求对象中。
本文中音频应用的焦点信息或音频应用所对应的焦点信息均指音频应用申请音频焦点时的焦点信息。
在本申请实施例中,焦点信息包括音频应用的标识,音频应用的标识可以通过应用的包名信息、应用所持有的音频管理器信息和应用的监听对象信息构成。其中,包名信息从音频应用中获取,音频应用所持有的音频管理器信息和应用的监听对象信息通过内存地址区分。
在本申请实施例中,焦点信息还包括音频焦点申请类型。音频焦点申请类型可选值有以下四个:
(1)AUDIOFOCUS_GAIN:此参数表示希望申请一个永久的音频焦点,并且希望上一个持有音频焦点的音频应用停止播放;例如在音频应用需要播放音乐时申请AUDIOFOCUS_GAIN。
(2)AUDIOFOCUS_GAIN_TRANSIENT:表示申请一个短暂的音频焦点,并且马上就会被释放,此时希望上一个持有音频焦点的音频应用暂停播放。例如音频应用需要播放一个提醒声音时申请AUDIOFOCUS_GAIN_TRANSIENT。
(3)AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK:表示申请一个短暂的音频焦点,并且马上就会被释放,希望上一个持有焦点的音频应用减小其播放声音(但仍可以播放),此时会混音播放。例如地图APP要输出导航播报时申请AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK。
(4)AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE:表示申请一个短暂的音频焦点,并且会希望系统不要播放任何突然的声音(例如通知,提醒等),例如录音APP要录音时申请AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE。
在一些实施例中,焦点信息还包括音源类型,音源包括音乐、视频、通话、语音中至少一种。操作系统在后续处理过程中能够通过焦点信息区分音源是否来自同一音频应用。
在一些实施例中,焦点信息包括音频播放进程的用户编码,也包括音频应用的用户编码。
当上述申请成功时,音频框架向音频应用返回AUDIOFOCUS_REQUEST_GRANTED常量。当音频应用接收到AUDIOFOCUS_REQUEST_GRANTED常量时,则表示音频应用获得音频焦点。当上述申请失败时,音频框架向音频应用返回AUDIOFOCUS_REQUEST_FAILED常量。当音频应用接收到AUDIOFOCUS_REQUEST_FAILED常量时,则表示音频应用获取音频焦点失败。
当音频应用成功申请到音频焦点时,将该音频应用对应的焦点请求对象加入到音频焦点栈的栈顶,表示该音频应用获得音频焦点,并向音频框架注册回调函数OnAudioFocusChangeListener,以便于音频应用能及时接收到来自音频框架的音频焦点状态改变。OnAudioFocusChangeListener为音频焦点监听器,通过音频焦点监听器可以知道音频应用获取到焦点或者失去焦点。
通过音频焦点监听器监听音频焦点的状态,音频焦点监听器会根据当前音频焦点的变化,调用onAudioFocusChange(int focusChange)函数,其中,focusChange主要有以下四种参数:
1.AUDIOFOCUS_GAIN:表示已经获得音频焦点,音频应用可以调用音频输出设备播放音频。
2.AUDIOFOCUS_LOSS:表示已经失去音频焦点很长时间了,请结束相关音频播放工作并做好收尾工作。
3.AUDIOFOCUS_LOSS_TRANSIENT:表示临时失去了音频焦点,但是在不久就会再返回来。此时,音频应用终止音频播放,但是保留播放资源,因为可能不久就会返回来。
4.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:表示已经临时失去了音频焦点,但是可以与新的使用者共同使用音频焦点。
音频应用播放完毕,音频应用可以调用abandonAudioFocus()函数释放音频焦点。
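上文所述“申请焦点—监听焦点变化—释放焦点”的生命周期,可用如下简化的Python示意代码串联起来(常量名沿用AOSP中的命名,抢占与回调逻辑为便于说明而假设的简化模型,并非安卓真实实现):

```python
# 简化示意:音频应用申请/释放音频焦点的生命周期。
AUDIOFOCUS_REQUEST_GRANTED = 1
AUDIOFOCUS_GAIN = 1
AUDIOFOCUS_LOSS = -1

class AudioFramework:
    def __init__(self):
        self.stack = []  # 栈顶元素对应持有音频焦点的应用

    def request_audio_focus(self, app, listener):
        if self.stack:
            _, old_listener = self.stack[-1]
            old_listener(AUDIOFOCUS_LOSS)          # 通知原栈顶应用失去焦点
        self.stack.append((app, listener))
        listener(AUDIOFOCUS_GAIN)                  # 通知新应用获得焦点
        return AUDIOFOCUS_REQUEST_GRANTED

    def abandon_audio_focus(self, app):
        # 应用播放完毕后释放焦点,其下方的应用重新回到栈顶
        self.stack = [e for e in self.stack if e[0] != app]
        if self.stack:
            self.stack[-1][1](AUDIOFOCUS_GAIN)     # 原应用重新获得焦点


events = []
fw = AudioFramework()
fw.request_audio_focus("音乐APP", lambda c: events.append(("音乐APP", c)))
fw.request_audio_focus("地图APP", lambda c: events.append(("地图APP", c)))
fw.abandon_audio_focus("地图APP")  # 地图播报完毕,音乐重新获得焦点
```

该示意说明:在单一焦点栈的机制下,新应用申请焦点必然导致原栈顶应用收到失去焦点的回调,这也是上文所述现有方式无法实现分布式音频体验的原因。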
在本申请实施例中,音频框架可以执行本申请实施例提供的音频控制方法,管理音频应用所申请的焦点进程。
本申请实施例中的音频框架在创建并维护第一焦点栈之外,还创建并维护一个或一个以上的焦点栈,如第二焦点栈。
如图2所示,以第一电子设备上安装的第一音频应用为音乐APP举例,第一电子设备中设置有为音乐APP实现上述音频功能的音频架构。当音乐APP申请在第一电子设备上播放音频时,音乐APP向音频框架申请音频焦点。当音乐APP成功申请到音频焦点时,将音乐APP申请音频焦点时的焦点信息置于第一焦点栈的栈顶,通知音乐APP获得音频焦点,音乐APP播放音频并通过第一电子设备播放音频。
当音乐APP申请在第二电子设备上播放音频时,音乐APP向音频框架申请音频焦点。当音乐APP成功申请到音频焦点时,将音乐APP申请音频焦点时的焦点信息置于第二焦点栈的栈顶,通知音乐APP获得音频焦点,音乐APP播放音频。当第二电子设备获得第一电子设备传输的音乐APP的音频内容时,第二电子设备播放音乐APP的音频。
如图2所示,在Android系统的应用程序框架层和内核层之间还可以包括硬件抽象层(hardware abstraction layer,HAL)。HAL层负责与电子设备的各个硬件设备进行交互,HAL层一方面隐藏了各个硬件设备的实现细节,另一方面可向Android系统提供调用各个硬件设备的接口。HAL中提供了与不同手机硬件设备对应的HAL,例如,Audio HAL、Camera HAL、Wi‐Fi HAL等。
其中,Audio HAL也可以作为上述音频架构中的一部分。音频架构可直接调用Audio HAL,将处理后的音频数据发送给Audio HAL,由Audio HAL将该音频数据发送给对应的音频输出设备(例如扬声器、耳机等)进行播放。
其中,Audio HAL又可以进一步划分为Primary HAL、A2dp HAL等。示例性地,Audio Flinger可调用Primary HAL将音频数据输出至电子设备的扬声器(speaker),或者,Audio Flinger可调用A2dp HAL将音频数据输出至与电子设备相连的蓝牙耳机。
另外,应用程序框架层还可以包括窗口管理器,内容提供器,视图系统,通知管理器等,本申请实施例对此不做任何限制。
如图2所示,安卓运行时(Android runtime)包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
其中,表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。
媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。
内核层位于HAL之下,是硬件和软件之间的层。内核层至少包含显示驱动,近场通信(Near Field Communication,NFC)驱动,音频驱动,传感器驱动,蓝牙驱动等,本申请实施例对此不做任何限制。
下面介绍本申请实施例提供的一种音频控制方法,可以应用于图1所示的音频系统100,由图1所示的第一电子设备101执行,第一电子设备101上安装第一音频应用。
实施例一
请参阅图3,示例性介绍本申请实施例提供的一种音频控制方法。
步骤S301,第一电子设备创建第一焦点栈和第二焦点栈,其中第一焦点栈用于存放通过第一电子设备播放音频的音频应用所对应的焦点信息,第二焦点栈用于存放通过第二电子设备播放音频的音频应用所对应的焦点信息。
现有的音频输出管理方式中,第一电子设备仅维护一个音频焦点栈,第一电子设备上申请音频焦点的音频应用所对应的焦点信息均存放在该唯一的音频焦点栈上。焦点信息位于音频焦点栈的栈顶,则该焦点信息所对应的音频应用获得音频焦点,获得音频焦点的音频应用具有播放音频权限。而栈顶位置只有一个,导致至少存在如下问题:无法实现第一电子设备上的多个音频应用(如申请在第一电子设备上播放音频的音频应用与流转至第二电子设备上播放音频的音频应用)均获得音频焦点。焦点信息处于栈顶的音频应用具有播放音频的权限,而失去音频焦点的音频应用所对应的音频受到影响。
为此,本申请的第一电子设备创建并维护至少两个焦点栈,该至少两个焦点栈均用于存放音频应用要播放音频时,音频应用申请音频焦点的焦点信息。区别在于,第一焦点栈存放的是通过第一电子设备播放音频的音频应用所对应的焦点信息,而第二焦点栈存放的是通过第二电子设备播放音频的音频应用所对应的焦点信息。换句话说,在第一电子设备与第二电子设备互联实现分布式音频体验时,对于第一电子设备上所安装的第一音频应用,当该第一音频应用的音频需要通过第一电子设备播放时,将该第一音频应用的焦点信息存放至第一焦点栈。当该第一音频应用的音频需要通过第二电子设备播放时,将该第一音频应用的焦点信息存放至第二焦点栈。
在一些实施例中,当第一电子设备将音频应用流转至第二电子设备上播放时,第二电子设备上所安装的第二音频应用需要通过第二电子设备播放音频,则将该第二音频应用的焦点信息存放至第二焦点栈中。则第一焦点栈中存放的是第一电子设备上所安装的第一音频应用的焦点信息。第二焦点栈中可以存放第一电子设备上所安装的第一音频应用的焦点信息,也可以存放第二电子设备上所安装的第二音频应用的焦点信息。
本申请实施例中的焦点栈(如第一焦点栈、第二焦点栈)不仅可以实现为堆栈,还可以实现为数组、队列或map等,本申请对此不作具体限定。其中,第二焦点栈、第一焦点栈可以基于预设的维护秩序确保一次仅一个焦点信息对应的音频应用获得音频焦点。例如,当焦点信息处于焦点栈的栈顶,该焦点信息所对应的音频应用获得音频焦点。
在本申请实施例中,第一电子设备上可以安装多个第一音频应用,不同第一音频应用所对应的焦点信息不同。例如,第一音频应用A1所对应的第一焦点信息B1和第一音频应用A2所对应的第一焦点信息B2不同,至少应用标识不同。
需要说明的是,第一电子设备可以在搜索到第二电子设备时创建第二焦点栈。第一电子设备也可以响应于用户将未播放音频或正在播放音频的第一音频应用流转至第二电子设备,创建第二焦点栈,则步骤S301可以和步骤S305同时执行。在一些实施例中,第一电子设备可以在出厂时预先创建第二焦点栈。则在第一电子设备与第二电子设备互联时,第一电子设备可以直接响应于将第一音频应用流转至第二电子设备播放,将该第一音频应用申请音频焦点时的焦点信息存放在创建好的第二焦点栈,本申请对第一电子设备创建第二焦点栈的时机不作具体限定。
第一电子设备仅创建一个第二焦点栈时,则第二焦点栈可以不与第二电子设备关联。若第一电子设备在创建第一焦点栈之外,还创建了两个或两个以上焦点栈,如下述实施例,第一电子设备创建了第二焦点栈和第三焦点栈,则第二焦点栈和第三焦点栈与对应的第二电子设备关联。
在本申请实施例中,第一电子设备上维护第二焦点栈与第一焦点栈,第一电子设备上的音频输出设备仅播放第一焦点栈中获得音频焦点的音频应用的音频。
步骤S302,第一电子设备获取第一操作,其中第一操作用于指示通过第一电子设备播放第一音频应用的音频。
示例性地,第一操作可以为点击、触控、长按、语音等操作,例如第一操作可以为点击第一音频应用播放控件。
步骤S303,第一电子设备响应于第一操作,从第一焦点栈和第二焦点栈中确定出与第一操作对应的第一焦点栈,并将第一音频应用的第一焦点信息置于第一焦点栈的栈顶,以通过第一电子设备播放第一音频应用的音频。
在本申请实施例中,第一音频应用的第一焦点信息置于第一焦点栈的栈顶,第一音频应用获得音频焦点,具有播放音频的权限,第一音频应用播放音频,通过第一电子设备上的音频输出设备播放该音频。
第一操作指示通过第一电子设备播放音频,第一电子设备可以确定与第一操作对应的焦点栈为第一焦点栈。当第一电子设备响应于第一操作时,第一电子设备尚未创建第二焦点栈,则步骤S303为第一电子设备响应于第一操作,将第一音频应用的第一焦点信息置于第一焦点栈的栈顶,以通过第一电子设备播放第一音频应用的音频。
步骤S304,第一电子设备获取第二操作,其中第二操作用于指示将正在播放音频的第一音频应用流转至第二电子设备。
第二操作指示将第一电子设备上正在播放音频的第一音频应用流转至第二电子设备,也即将第一音频应用流转至第二电子设备,且保持第一音频应用的音频播放,通过第二电子设备播放该音频。
示例性地,第二操作可以为点击、触控、长按、语音等操作,例如,第二操作可以为用户在正在播放音频的第一音频应用的界面上启动流转功能。
步骤S305,第一电子设备响应于第二操作对应的第一指令,从第一焦点栈和第二焦点栈中确定出与第一指令对应的第二焦点栈,并将第一音频应用的第一焦点信息由第一焦点栈移至第二焦点栈的栈顶。
其中,第一指令用于指示播放流转至第二电子设备的第一音频应用的音频。
在本申请实施例中,第一电子设备将第一焦点信息由第一焦点栈移至第二焦点栈的栈顶,处于焦点栈的第一焦点信息所对应的第一音频应用获得音频焦点,通知该第一音频应用获得音频焦点,第一音频应用具有播放音频的权限,可播放音频。当第二电子设备接收到第一音频应用的内容(至少包括音频数据,还可以包括界面数据等)时,第二电子设备播放第一音频应用的音频。
基于第一指令指示播放该音频的电子设备为第二电子设备,第一电子设备可以确定与第一指令对应的焦点栈为第二焦点栈。当第一电子设备获得第二操作时,第一电子设备尚未创建第二焦点栈,则步骤S305可以为:第一电子设备响应于第二操作对应的第一指令,创建第二焦点栈,并将第一音频应用的第一焦点信息由第一焦点栈移至第二焦点栈的栈顶。
示例性地,以第一电子设备为手机,第一音频应用为音乐应用为例。
请参阅图4A,手机上显示主界面40,主界面40上包括音乐应用400和录音应用401,用户点击音乐应用400。手机响应于用户点击音乐应用400,显示如图4B所示的音乐应用400的界面402。界面402上包括音乐1的控件403。用户点击音乐1的控件403。手机响应于用户点击音乐1的控件403,显示如图4C所示的音乐1的界面404。音乐1的界面404上包括播放控件405,播放控件405的状态为未播放状态。用户点击播放控件405(即第一操作)。手机响应于用户点击播放控件405,显示如图4D所示的界面404,图4D所示的界面404中的播放控件405的状态更改为播放状态,且同时手机播放音乐1的音频。在用户播放音乐1的音频后,如图4D所示用户点击界面404上的更多控件406。手机响应于用户点击更多控件406,显示如图4E所示的显示列表407和流转选项408。用户点击流转选项408,其中流转选项408对应流转功能。手机响应于用户点击流转选项408启动流转功能,可以通过应用流转技术实现将音乐应用400的内容流转至第二电子设备。在将音乐应用400的内容流转出去之前,搜索当前可以接收该音乐应用400内容的设备,也即搜索可与手机建立互联的设备。当前手机搜索到可接收流转音频应用的设备包括平板,则手机响应于用户点击流转选项408,显示如图4F所示的可用的设备列表409以及平板选项410。用户点击平板选项410(即第二操作),也即用户指示将音乐应用400的内容流转至平板。手机响应于用户点击平板选项410,如图4G所示,手机将音乐应用400的内容(如音乐1的音频以及音乐1的界面)流转至平板,平板上显示的界面与音乐应用400的界面404相似,且平板播放音乐1的音频。手机不再显示音乐应用400的界面404,而是显示主界面40,且手机不播放音乐1的音频。
请一并参阅4C、4D、4F、4G以及图5,示例性介绍第一电子设备的操作过程。
步骤S500,第一音频应用获取第一操作。
如图4C所示,第一操作为用户点击播放控件405。
步骤S501,第一音频应用响应于第一操作,向第一电子设备的音频框架申请音频焦点。
步骤S502,音频框架将第一音频应用的第一焦点信息置于第一焦点栈的栈顶。
当第一音频应用成功申请到音频焦点时,音频框架将第一音频应用的第一焦点信息置于第一焦点栈的栈顶,并通知第一音频应用获得音频焦点,第一音频应用获得播放音频的权限。如图4D所示,音乐应用400获得播放音频的权限,手机上的音频输出设备播放音乐1的音频。
步骤S503,第一音频应用获取第二操作。
如图4F所示,第二操作为用户点击平板选项410。
步骤S504,第一音频应用响应于第二操作,进行应用迁移。
如图4F所示,手机响应于用户点击平板选项410,启动流转功能。第一音频应用进行应用迁移,手机将音乐应用400迁移至平板,告知音频框架将音乐应用400迁移至平板的信息。
步骤S505,音频框架提取出第一音频应用的第一焦点信息。
音频框架响应于音乐应用400迁移至平板的信息,音频框架从第一焦点栈中提取出音乐应用400所对应的第一焦点信息。
步骤S506,音频框架将第一焦点信息置于第二焦点栈的栈顶。
基于音乐应用400迁移至平板,音频框架从第一焦点栈中提取出第一音频应用的第一焦点信息后,将该第一焦点信息置于第二焦点栈的栈顶。
在一些实施例中,音频框架从第一焦点栈中提取出第一焦点信息后,可以根据提取出的第一焦点信息再创建对应的第一焦点信息,并将再创建的第一焦点信息入栈,置于第二焦点栈的栈顶。例如,音频框架从第一焦点栈中提取出第一焦点信息后,根据该提取出的第一焦点信息创建新的焦点请求对象,该新的焦点请求对象保存着提取出的第一焦点信息的内容,将该新的焦点请求对象置于第二焦点栈的栈顶。
如图4G所示,音乐应用400获得播放音频的权限,平板接收到手机传输过来的音乐1的音频,可以通过平板上的音频输出设备播放音乐1的音频。
步骤S507,音频框架将第一焦点栈中的第一焦点信息删除。
在音频框架将第一焦点信息加入至第二焦点栈时,也将第一焦点栈中的第一焦点信息删除,即第一焦点信息出栈。
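步骤S505至S507所述“提取—再创建并入栈—删除”的流转过程,可用如下Python片段示意(“焦点信息”以dict模拟,字段名为便于说明而假设,并非本申请限定的数据结构):

```python
# 简化示意:应用流转时,音频框架将焦点信息从第一焦点栈提取出来,
# 根据其内容再创建焦点信息并压入第二焦点栈的栈顶,
# 同时将第一焦点栈中的原焦点信息删除(出栈)。
def transfer_focus(first_stack, second_stack, app_id):
    info = next(i for i in first_stack if i["app"] == app_id)  # S505 提取
    new_info = dict(info)                   # 根据提取出的焦点信息再创建
    second_stack.append(new_info)           # S506 置于第二焦点栈的栈顶
    first_stack.remove(info)                # S507 从第一焦点栈删除
    return new_info


first = [{"app": "音乐APP", "type": "AUDIOFOCUS_GAIN"}]
second = []
transfer_focus(first, second, "音乐APP")
```

流转完成后,音乐APP的焦点信息仅存在于第二焦点栈中,与第一焦点栈后续的入栈、出栈互不影响。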
在本申请实施例中,第一电子设备上安装第一音频应用,第一电子设备可以将正在播放音频的第一音频应用流转至第二电子设备,由第二电子设备播放该第一音频应用的音频。基于该通过第二电子设备播放音频的第一音频应用所对应的第一焦点信息存放至第二焦点栈,第一焦点栈与第二焦点栈相互独立,第一电子设备播放的音频与第二电子设备播放的音频互不影响。
实施例二
实施例二是在实施例一的基础上,用户继续启动第一电子设备上的其他第一音频应用并播放音频。
请参阅图6,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S601,第一电子设备获取播放操作,其中播放操作用于指示通过第一电子设备播放另一第一音频应用的音频。
步骤S601的播放操作可以为点击、触控、长按、语音等操作,例如播放操作可以为点击另一第一音频应用播放控件。
在本申请实施例中,第一操作与播放操作的区别在于,第一操作是指示播放第一电子设备上的第一音频应用的音频,而播放操作是指示播放第一电子设备上另一第一音频应用的音频。
步骤S602,第一电子设备响应于播放操作,将该另一第一音频应用的第一焦点信息置于第一焦点栈的栈顶,并通知该另一第一音频应用获得音频焦点,以在第一电子设备上播放该另一第一音频应用的音频。
在本申请实施例中,该另一第一音频应用的第一焦点信息置于第一焦点栈的栈顶,则该另一第一音频应用获得音频焦点,具有播放音频的权限,该另一第一音频应用播放音频,通过第一电子设备上的音频输出设备播放该音频。
基于播放操作指示通过第一电子设备播放音频,第一电子设备可以确定与播放操作对应的焦点栈为第一焦点栈。
如上述示例,以该另一第一音频应用实现为录音应用为例。如图4G所示,手机将音乐应用400的内容流转至平板后,手机可以显示主界面40。如图7A所示,用户继续点击主界面40上的录音应用401。手机响应于用户点击录音应用401,显示如图7B所示的录音界面700,录音界面700上包括多条录音。用户点击录音1的选项701(即播放操作)。手机响应于用户点击录音1的选项701,显示如图7C所示的录音1的界面702,同时手机播放录音1的音频。如图7D所示,手机上显示录音应用401对应的界面(如图7C所示的界面702),并播放录音1的音频。平板上依然显示音乐应用400的界面(如图4D所示的界面404),并播放音乐1的音频。
请一并参阅图7B、7C、7D以及图8,示例性介绍第一电子设备的操作过程。
在实施实施例一后,第二焦点栈中存放第一音频应用的第一焦点信息,且该第一焦点信息置于栈顶,此时平板播放音乐1的音频。
步骤S800,另一第一音频应用获得播放操作。
如图7B所示,播放操作为用户点击录音1的选项701。
步骤S801,另一第一音频应用响应于播放操作,向第一电子设备的音频框架申请音频焦点。
步骤S802,音频框架将另一第一音频应用的第一焦点信息置于第一焦点栈的栈顶。
当该另一第一音频应用成功申请到音频焦点时,音频框架将该另一第一音频应用的第一焦点信息置于第一焦点栈的栈顶,并通知该另一第一音频应用获得音频焦点,该另一第一音频应用获得播放音频的权限。如图7C所示,录音应用401获得播放音频的权限,手机上的音频输出设备播放录音1的音频。
在本申请实施例中,第一音频应用的第一焦点信息存放在第二焦点栈中,另一第一音频应用的第一焦点信息存放在第一焦点栈中,该两个焦点信息相互独立,互不影响,则该两个第一音频应用可以同时获得音频焦点。第二电子设备可以正常播放第一音频应用的音频,且不受另一第一音频应用的影响。同时,第一电子设备上也可以正常播放另一第一音频应用的音频,且不受第一音频应用的影响。
实施例三
实施例三是在实施例二的基础上,用户将该另一第一音频应用流转至第二电子设备。
请参阅图9,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S901,第一电子设备获取第二操作,其中第二操作用于指示将正在播放音频的另一第一音频应用流转至第二电子设备。
步骤S901第二操作的内容可以参考实施例一,在此不再赘述。
实施例一与实施例三的第二操作均是指示将正在播放音频的第一音频应用流转至第二电子设备,区别在于实施例一与实施例三中所流转的第一音频应用不同。
步骤S902,第一电子设备响应于第二操作对应的第一指令,从第一焦点栈与第二焦点栈中确定出与第一指令对应的第二焦点栈,将该另一第一音频应用所对应的第一焦点信息置于第二焦点栈的栈顶,以当该另一第一音频应用的音频流转至第二电子设备时,通过第二电子设备播放该另一第一音频应用的音频。
步骤S902中第二操作对应的第一指令用于指示播放流转至第二电子设备的另一第一音频应用的音频。步骤S902第一指令的内容可以参考实施例一的第一指令,在此不再赘述。
实施例一与实施例三的第一指令均是指示播放流转至第二电子设备的第一音频应用的音频,区别在于实施例一与实施例三中所播放音频的第一音频应用不同。
在本申请实施例中,该另一第一音频应用的第一焦点信息置于第二焦点栈的栈顶,则该另一第一音频应用获得音频焦点,具有播放音频的权限。音频框架通知第二焦点栈中原处于栈顶的第一音频应用失去音频焦点。该另一第一音频应用播放音频,通过第二电子设备上的音频输出设备播放该另一第一音频应用的音频。
基于第一指令指示通过第二电子设备播放音频,第一电子设备可以确定与第一指令对应的焦点栈为第二焦点栈。
如图10A所示,在录音界面702上还包括更多选项703,用户点击更多选项703。手机响应于用户点击更多选项703,显示如图10B所示的可用的设备列表704和平板选项705。用户点击平板选项705。手机响应于用户点击平板选项705,将录音应用401的内容流转至平板。如图10C所示,平板上显示录音应用401对应的界面(如图7C或图10A所示的界面702),并播放录音应用401的音频。
请一并参阅图10B、10C以及图11,示例性介绍第一电子设备的操作过程。
在实施实施例二后,第二焦点栈上存放着第一音频应用的第一焦点信息,第一焦点栈上存放着另一第一音频应用的第一焦点信息。
步骤S110,另一第一音频应用获取第二操作。
如图10B所示,第二操作为用户点击平板选项705。
步骤S111,另一第一音频应用响应于第二操作,进行应用迁移。
如图10B所示,手机响应于用户点击平板选项705,启动流转功能。另一第一音频应用进行应用迁移,手机将录音应用401迁移至平板,并告知音频框架将录音应用401迁移至平板的信息。
步骤S112,音频框架提取出另一第一音频应用的第一焦点信息。
音频框架响应于将录音应用401迁移至平板的信息,音频框架从第一焦点栈中提取出录音应用401所对应的第一焦点信息。
步骤S113,音频框架将另一第一音频应用的第一焦点信息置于第二焦点栈的栈顶。
音频框架从第一焦点栈中提取出另一第一音频应用的第一焦点信息后,将该另一第一音频应用的第一焦点信息置于第二焦点栈的栈顶,则第一音频应用的第一焦点信息不再处于第二焦点栈中的栈顶位置。
步骤S114,音频框架通知第一音频应用失去音频焦点。
基于第一音频应用的第一焦点信息不再处于第二焦点栈的栈顶位置,音频框架通知该第一音频应用失去音频焦点。如图10C所示,音乐应用400失去音频焦点,基于录音应用401申请的音频焦点申请类型为AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE,音乐应用400停止播放。
步骤S115:音频框架删除该另一第一音频应用的第一焦点信息。
在音频框架将另一第一音频应用的第一焦点信息加入至第二焦点栈时,也将第一焦点栈中另一第一音频应用的第一焦点信息删除,即另一第一音频应用的第一焦点信息出栈。
其中步骤S115可以在步骤S113或S114之前执行,也可以与步骤S113或S114同时执行。
在本申请实施例中,另一第一音频应用的第一焦点信息处于栈顶位置,该另一第一音频应用可播放音频。第二电子设备接收到该另一第一音频应用的音频,第二电子设备播放该另一第一音频应用的音频。第一电子设备还通知第二焦点栈中的其他音频应用(如第一音频应用)失去音频焦点,则第一音频应用根据另一第一音频应用的音频焦点申请类型而调整,如第一音频应用的音频可暂停播放或停止播放或降低音量。当暂停播放或停止播放第一音频应用的音频时,第二电子设备仅播放另一第一音频应用的音频。当降低第一音频应用音频的音量时,第二电子设备同时播放第一音频应用和另一第一音频应用的音频。
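失去音频焦点的应用根据新焦点持有者的音频焦点申请类型决定停止、暂停还是降低音量,这一逻辑可用如下Python片段示意(类型取值沿用上文所述常量名,具体映射为简化假设):

```python
# 简化示意:根据新焦点持有者的音频焦点申请类型,
# 确定失去焦点的应用应采取的播放调整方式。
def on_focus_loss(new_holder_type):
    if new_holder_type == "AUDIOFOCUS_GAIN":
        return "stop"   # 对方申请永久焦点:停止播放
    if new_holder_type == "AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK":
        return "duck"   # 对方允许压低音量:降低音量后混音播放
    return "pause"      # 其余短暂焦点:暂停播放,保留播放资源
```

例如地图APP以AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK申请焦点时,音乐APP仅降低音量,两者混音播放。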
实施例四
实施例四与实施例三的区别在于,实施例四还包括另一第二电子设备,用户打开另一第一音频应用后,将该另一第一音频应用流转至另一第二电子设备。
请参阅图12,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S121,第一电子设备获取另一第二电子设备的设备标识。
在本申请实施例中,第一电子设备可以通过传感器获取一个或一个以上第二电子设备的设备标识。上述传感器可以包括超宽带(Ultra Wide Band,UWB)传感器、NFC传感器、激光传感器和/或可见光传感器等,上述设备标识可以包括互联网协议(Internet Protocol,IP)地址、媒体接入控制(media access control,MAC)地址、UWB标签、NFC标签等,本申请对此不作具体限定。
下面以第一电子设备为手机,第二电子设备为笔记本电脑为例进行举例说明。
例如,如果手机和笔记本电脑上都安装有UWB传感器,并且各自具有UWB标签(即设备标识),当用户移动手机以使得手机与笔记本电脑之间的距离较近时,手机上的UWB传感器将获取到笔记本电脑的UWB标签。
又例如,如果手机和笔记本电脑上都安装有NFC传感器,并且各自具有NFC标签(即设备标识),当用户将手机通过碰一碰或靠一靠等方式触碰笔记本电脑时,手机上的NFC传感器将获取到笔记本电脑的NFC标签。
在一些实施例中,第一电子设备可以用于获取同一个用户账号下所有第二电子设备的设备标识,或同一个网络下与第一电子设备所连接的所有第二电子设备的设备标识。其中,第一电子设备可以访问远端或云端的服务器或其他电子设备以获取第二电子设备的设备标识,可以访问本端的内部存储器以获取上述设备标识,也可以访问外部存储器接口以获取上述设备标识,对此不作具体限制。
步骤S122,第一电子设备根据该另一第二电子设备的设备标识创建第三焦点栈。
在本申请实施例中,第一电子设备获得第二电子设备的设备标识,根据第二电子设备的设备标识创建第二焦点栈。第一电子设备获得另一第二电子设备的设备标识,根据另一第二电子设备的设备标识创建第三焦点栈。
在一些实施例中,第二焦点栈与第二电子设备的设备标识关联,第三焦点栈与另一第二电子设备的设备标识关联。可以以第二电子设备的设备标识命名第二焦点栈的栈名,以另一第二电子设备的设备标识命名第三焦点栈的栈名,第二焦点栈的栈名与第三焦点栈的栈名不同。
第一电子设备同时获得第二电子设备和另一第二电子设备的设备标识,则可以同时创建第二焦点栈和第三焦点栈。第一电子设备也可以在不同时机分别创建第二焦点栈和第三焦点栈,如上述实施例二,第一电子设备可以在响应于将第一音频应用流转至第二电子设备时创建第二焦点栈,然后在后续打开另一第一音频应用时或另一第一音频应用启动流转功能时,搜索到另一第二电子设备,根据搜索到的另一第二电子设备的设备标识创建第三焦点栈,本申请对第一电子设备创建除第一焦点栈之外的焦点栈的时机以及数量不作具体限定。
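根据设备标识创建并索引除第一焦点栈之外的焦点栈(N个设备标识对应N个焦点栈),可用如下Python片段示意(以dict模拟“以设备标识命名的焦点栈”,键名与设备标识均为假设的示例值):

```python
# 简化示意:第一电子设备按第二电子设备的设备标识
# 分别创建对应的焦点栈,栈名(键)互不相同。
class MultiDeviceFocus:
    def __init__(self):
        self.stacks = {"local": []}  # 第一焦点栈

    def create_stack(self, device_id):
        # 以设备标识命名焦点栈;已存在则直接复用
        self.stacks.setdefault(device_id, [])
        return self.stacks[device_id]


m = MultiDeviceFocus()
for dev in ["平板-MAC-01", "笔记本-MAC-02"]:  # N=2个设备标识
    m.create_stack(dev)
```

如此,每个互联的第二电子设备都有与其设备标识关联的焦点栈,流转至不同设备的应用由不同的栈分别维护。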
步骤S123,第一电子设备获取第二操作,其中第二操作用于指示将正在播放音频的另一第一音频应用流转至另一第二电子设备。
步骤S123第二操作的内容可以参考上述实施例三,在此不再赘述。
实施例四与实施例三的第二操作均是指示将正在播放音频的另一第一音频应用流转至第二电子设备,区别在于实施例三与实施例四中所流转的第二电子设备不同。
步骤S124,第一电子设备响应于第二操作对应的第一指令,从第一焦点栈、第二焦点栈以及第三焦点栈中确定出与第一指令对应的第三焦点栈,将另一第一音频应用的第一焦点信息置于第三焦点栈的栈顶,以当该另一第一音频应用的音频流转至该另一第二电子设备时,通过该另一第二电子设备播放该另一第一音频应用的音频。
步骤S124中第二操作对应的第一指令用于指示播放流转至另一第二电子设备的另一第一音频应用的音频。步骤S124第一指令的内容可以参考实施例三的第一指令,在此不再赘述。
实施例四与实施例三的第一指令均是指示播放流转至第二电子设备的另一第一音频应用的音频,区别在于实施例四与实施例三中所播放音频的第二电子设备不同。
基于第一指令指示的是通过另一第二电子设备播放音频,则从第一焦点栈、第二焦点栈以及第三焦点栈中确定出与第一指令对应的焦点栈为与该另一第二电子设备关联的第三焦点栈。
如上述示例,如图10A所示,用户点击了更多选项703,第一电子设备还搜索到另一第二电子设备,如笔记本电脑。如图13A所示,手机显示可选设备列表704,可选设备列表704包括平板选项705和笔记本电脑选项706。用户点击笔记本电脑选项706。手机响应于用户点击笔记本电脑选项706,将录音应用401的内容流转至笔记本电脑。如图13B所示,平板上显示音乐应用400的界面(如图4D所示的界面404),并播放音乐应用400的音频。笔记本电脑上显示录音应用401对应的界面(如图7C所示的界面702),并播放录音应用401的音频,而手机可以显示主界面。
在一些实施例中,在步骤S124之后,第一电子设备打开另一新的第一音频应用(如电话应用)且在第一电子设备上播放,则第一电子设备上播放该电话应用的音频,第二电子设备上播放音乐应用的音频不受影响,另一第二电子设备上播放的录音应用的音频不受影响。
请一并参阅图13A、13B以及图14,示例性介绍第一电子设备的操作过程。
在实施实施例二后,第二焦点栈上存放着第一音频应用的第一焦点信息,第一焦点栈上存放着另一第一音频应用的第一焦点信息。
步骤S141,另一第一音频应用获取第二操作。
如图13A所示,第二操作为用户点击笔记本电脑选项706。
步骤S142,另一第一音频应用进行应用迁移。
如图13A所示,手机响应于用户点击笔记本电脑选项706,启动流转功能。另一第一音频应用进行应用迁移,手机将录音应用401迁移至笔记本电脑,并告知音频框架将录音应用401迁移至笔记本电脑的信息。
步骤S143,音频框架提取出另一第一音频应用的第一焦点信息。
音频框架响应于将录音应用401迁移至笔记本电脑的信息,音频框架从第一焦点栈中提取出录音应用401对应的第一焦点信息。
步骤S144,音频框架将另一第一音频应用的第一焦点信息置于第三焦点栈的栈顶。
音频框架从第一焦点栈中提取出另一第一音频应用的第一焦点信息后,将该另一第一音频应用的第一焦点信息置于第三焦点栈的栈顶。如图13B所示,该录音应用401获得音频焦点,且在笔记本电脑接收录音应用401的音频时,笔记本电脑播放该录音应用401的音频。
步骤S145,音频框架将第一焦点栈中另一第一音频应用的第一焦点信息删除。
在音频框架将另一第一音频应用的第一焦点信息加入至第三焦点栈时,也将第一焦点栈中另一第一音频应用的第一焦点信息删除,即另一第一音频应用的第一焦点信息出栈。
步骤S145可以在步骤S144之前执行,也可以与步骤S144同时执行。
在本申请实施例中,第二电子设备可以正常播放第一音频应用的音频,且不受另一第一音频应用的影响。同时,另一第二电子设备上也可以正常播放另一第一音频应用的音频,且不受第一音频应用的影响。第一电子设备上也可以正常播放新的第一音频应用的音频,实现第一电子设备可以将多个不同的第一音频应用的内容流转至不同的第二电子设备,且第一电子设备与该多个不同的第二电子设备所播放的音频互不影响。
实施例五
实施例五与实施例一的区别在于,第一电子设备将未播放音频的第一音频应用流转至第二电子设备。
请参阅图15,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S151,第一电子设备获取第三操作,其中第三操作指示将第一音频应用流转至第二电子设备。
示例性地,第三操作可以为点击、触控、长按、语音等操作。第三操作为用户在第一音频应用的界面上启动流转功能,第三操作与第二操作所启动的流转功能区别在于,第三操作中流转过去的内容未包括音频应用的音频。
步骤S152,第一电子设备响应于第三操作,将第一音频应用流转至第二电子设备。
步骤S153,当检测到流转至第二电子设备的第一音频应用申请播放音频时,第一电子设备获取第一指令。
在本申请实施例中,用户将第一音频应用流转至第二电子设备后,用户点击第二电子设备的第一音频应用的播放控件,第二电子设备将第一音频应用申请播放的信息传输至第一电子设备。例如,第二电子设备可以通过反向控制,将“播放第一音频应用的音频”的信息传输给第一电子设备,则该流转至第二电子设备的第一音频应用获得“播放第一音频应用的音频”的信息,该第一音频应用向第一电子设备的音频框架申请播放音频,第一电子设备检测到该第一音频应用申请播放音频,第一电子设备获得第一指令。
在一种可能的实现方式中,第一电子设备和第二电子设备之间建立通信连接,第一电子设备可以根据与第二电子设备的通信连接获得“流转至第二电子设备的第一音频应用申请播放音频”的信息。例如可以通过分布式融合感知平台(distribute mobile sensing development platform,DMSDP)服务在第一电子设备和第二电子设备之间建立通信连接。在用户点击第二电子设备的第一音频应用的播放控件时,第一电子设备可以通过分布式融合感知平台服务从第二电子设备处获得“流转至第二电子设备的第一音频应用申请播放音频”的信息,则该第一音频应用向第一电子设备的音频框架申请播放音频,第一电子设备得到第一指令。
步骤S154,第一电子设备响应于第一指令,从第一焦点栈和第二焦点栈中确定出与第一指令对应的第二焦点栈,将第一音频应用的第一焦点信息置于第二焦点栈的栈顶,并通知第一音频应用获得音频焦点,以当第一音频应用的内容流转至第二电子设备时,通过第二电子设备播放第一音频应用的音频。
在本申请实施例中,第一电子设备可以将未播放音频的第一音频应用流转至第二电子设备,在用户操作第二电子设备播放该第一音频应用的音频时,第一电子设备将该第一音频应用的第一焦点信息置于第二焦点栈,由第二电子设备播放该第一音频应用的音频。基于该通过第二电子设备播放音频的第一音频应用所对应的第一焦点信息存放至第二焦点栈,第一焦点栈与第二焦点栈相互独立,第一电子设备播放的音频与第二电子设备播放的音频互不影响。
实施例六
如上述实施例,第一电子设备将第一音频应用流转至第二电子设备,在实施例六中第二电子设备播放第一音频应用的音频后,用户启动第二电子设备上所安装的第二音频应用并播放第二音频应用的音频。
请参阅图16,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S161,第一电子设备获取第一信息,其中第一信息用于指示通过第二电子设备播放第二音频应用的音频。
在本申请实施例中,第一电子设备与第二电子设备之间建立通信连接,第二电子设备可以响应于用户启动并播放第二音频应用的操作,向第一电子设备传输第一信息。
在一些实施例中,第二电子设备可以在监测到用户启动并播放第二音频应用的操作时,通过分布式融合感知平台服务将第一信息传输至第一电子设备。第一电子设备通过分布式融合感知平台服务从第二电子设备中获取第一信息。
在另一些实施例中,布局在第二电子设备上的分布式融合感知平台服务监测到用户启动并播放第二音频应用的操作时,传输第一信息至第一电子设备。
步骤S162,第一电子设备响应于获取到第一信息,获取第二音频应用的第二焦点信息,将第二焦点信息置于第二焦点栈的栈顶,并通知第一音频应用失去音频焦点。
在本申请实施例中,第一电子设备响应于获取到第一信息,获取第一信息所指示的第二音频应用所对应的第二焦点信息,确定该第一信息对应的第二电子设备,然后根据所确定的第二电子设备确定对应的焦点栈(除了第一焦点栈之外的焦点栈),在所确定的焦点栈中加入该第二焦点信息。
在一些实施例中,第一电子设备基于与第二电子设备的通信连接,获取第二音频应用申请音频焦点时的第二焦点信息。又或者,通过分布式融合感知平台服务获得第二音频应用申请音频焦点时的第二焦点信息。
在一些实施例中,将第二焦点信息置于第二焦点栈的栈顶可以为:第一电子设备获得第二焦点信息后,可以根据该获得的第二焦点信息创建新的焦点请求对象,该新的焦点请求对象保存着该第二焦点信息的内容,将该新的焦点请求对象置于第二焦点栈的栈顶。
如上述实施例四,将第一音频应用流转至第二电子设备,将另一第一音频应用流转至另一第二电子设备。当在第二电子设备上打开第二音频应用A并播放音频时,则在第二焦点栈中加入该第二音频应用A的第二焦点信息A1,第二焦点栈中存放着第一音频应用的第一焦点信息和第二焦点信息A1。当在另一第二电子设备上打开第二音频应用B并播放音频,则在第三焦点栈中加入该第二音频应用B的第二焦点信息B1,则第三焦点栈中存放着另一第一音频应用的第一焦点信息和第二焦点信息B1。
在本申请实施例中,第一电子设备响应于第二音频应用获得音频焦点,通知第一音频应用失去音频焦点。第一音频应用响应于失去音频焦点,可以获得第二音频应用的音频焦点申请类型,然后根据音频焦点申请类型暂停播放、停止播放或降低音量。
如上述示例,第一电子设备将音乐应用400的内容(音乐1)流转至第二电子设备,第二电子设备播放音乐1的音频。用户打开安装在第二电子设备上的第二音频应用(如视频应用),基于视频应用的音频焦点申请类型为AUDIOFOCUS_GAIN,音乐应用400暂停播放音频,第二电子设备播放视频应用的音频。又例如,用户打开的安装在第二电子设备上的第二音频应用为地图应用,基于地图应用的音频焦点申请类型为AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK,音乐应用减小其播放声音(但仍可以播放),此时第二电子设备会混音播放,同时播放地图应用的导航音频以及音乐应用的音频。
实施例七
实施例七是在实施例六的基础上,用户所打开的第二音频应用失去音频焦点。
请参阅图17,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S171,第一电子设备获取第二信息,其中第二信息用于指示第二音频应用失去音频焦点。
在本申请实施例中,第一电子设备与第二电子设备之间建立通信连接,第二电子设备可以响应于第二音频应用失去音频焦点,向第一电子设备传输第二信息。
在一些实施例中,第二电子设备可以在监测到第二音频应用失去音频焦点,通过分布式融合感知平台服务将第二信息传输至第一电子设备,第一电子设备通过分布式融合感知平台服务从第二电子设备中获取第二信息。
在另一些实施例中,布局在第二电子设备上的分布式融合感知平台服务监测到第二音频应用失去音频焦点时,传输第二信息至第一电子设备。
在本申请实施例中,第二音频应用失去音频焦点的情形包括但不限于如下:第二电子设备上的其他第二音频应用申请通过第二电子设备播放音频,且该其他第二音频应用成功申请到音频焦点;第二音频应用申请的是一个短暂的音频焦点,且音频播放完毕;第二音频应用关闭。
在第二电子设备关机或第二电子设备与第一电子设备断开连接时,第一电子设备可以停止将第一音频应用的内容流转至第二电子设备。
步骤S172,第一电子设备响应于获取到第二信息,将第一音频应用的第一焦点信息置于第二焦点栈的栈顶,并通知第一音频应用获得音频焦点。
在本申请实施例中,第一电子设备将第一音频应用流转至第二电子设备播放,第一音频应用的第一焦点信息置于第二焦点栈的栈顶。第二电子设备上第二音频应用申请播放音频,该第二音频应用的第二焦点信息置于第二焦点栈的栈顶,第一焦点信息位于第二焦点信息下方。第一电子设备响应于获取到第二信息,将第二焦点信息从第二焦点栈中移出,则第一音频应用的第一焦点信息置于第二焦点栈的栈顶。
示例性地,第一电子设备将音乐应用的内容流转至第二电子设备,第二电子设备播放音乐应用的音频。用户打开安装在第二电子设备上的地图应用,地图应用的第二焦点信息处于第二焦点栈的栈顶,音乐应用的第一焦点信息位于第二焦点信息下方。第二电子设备播放地图应用的音频,地图应用申请一个短暂的音频焦点,在地图应用播放完导航语音后,地图应用释放音频焦点,也即地图应用失去音频焦点,第二焦点栈中处于栈顶的第二焦点信息被移出,音乐应用的第一焦点信息处于栈顶位置。音乐应用获得音频焦点,当第二电子设备接收到音乐应用的音频时,第二电子设备可以继续播放音乐应用的音频。
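上例中“短暂焦点释放后,栈中其下方的焦点信息重新回到栈顶”的过程,可用如下Python片段示意(以dict模拟焦点信息,字段为假设):

```python
# 简化示意:地图APP申请短暂焦点并在导航播报完毕后释放,
# 第二焦点栈中位于其下方的音乐APP的第一焦点信息重新处于栈顶。
second_stack = [{"app": "音乐APP"}]           # 音乐APP已流转并持有焦点
second_stack.append({"app": "地图APP"})       # 地图APP获得焦点,音乐位于其下方
assert second_stack[-1]["app"] == "地图APP"

second_stack.pop()  # 第一电子设备获取第二信息,将第二焦点信息从栈中移出
# 此时音乐APP的第一焦点信息重新处于栈顶,第二电子设备可继续播放音乐
```

出栈后无需音乐APP重新申请焦点,其焦点信息天然回到栈顶位置,这正是以栈维护焦点秩序的好处。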
在一些实施例中,第一电子设备将多个第一音频应用流转至多个第二电子设备,当第一电子设备获得第二信息时,第一电子设备响应于获取到第二信息,从多个焦点栈中确定出与第二信息对应的焦点栈,并将所确定出的焦点栈中与第二信息对应的第二焦点信息移出。
如上述实施例四情形,将第一音频应用流转至第二电子设备,将另一第一音频应用流转至另一第二电子设备。当第二电子设备上的第二音频应用失去音频焦点时,第一电子设备响应于获得到第二信息,根据第二信息从第二焦点栈和第三焦点栈中确定出与第二信息对应的焦点栈为第二焦点栈,确定第二信息对应的第二音频应用的第二焦点信息,将第二焦点栈中的该第二焦点信息移出,则第一音频应用的第一焦点信息置于第二焦点栈的栈顶,通知该第一音频应用获得音频焦点,第二电子设备可以播放该第一音频应用的音频。
当另一第二电子设备上的另一第二音频应用失去音频焦点时,第一电子设备响应于获得到第二信息,根据第二信息从第二焦点栈和第三焦点栈中确定出与第二信息对应的焦点栈为第三焦点栈,确定第二信息对应的另一第二音频应用的第二焦点信息,将第三焦点栈中的该另一第二音频应用的第二焦点信息移出,则另一第一音频应用的第一焦点信息置于第三焦点栈的栈顶,通知该另一第一音频应用获得音频焦点,则另一第二电子设备可以播放该另一第一音频应用的音频。
上述实施例一至七中,第二电子设备接收到流转过来的第一音频应用的音频,若此时第二电子设备中也有第二音频应用申请播放,第二电子设备可以根据预设策略确定音频的播放。例如可以设置播放最新事件对应的音频,第二电子设备先接收到流转过来的第一音频应用的音频,后续第二电子设备打开第二音频应用,则第二电子设备优先播放第二音频应用的音频。相应地,第二电子设备上正在播放第二音频应用的音频,此时第二电子设备接收到流转过来的第一音频应用的音频,则第二电子设备优先播放第一音频应用的音频。
实施例八
如上述实施例,第一电子设备已将第一音频应用迁移至第二电子设备,实施例八中用户将第一音频应用迁回至第一电子设备。
请参阅图18,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S181,第一电子设备获取第二指令,其中第二指令用于指示将第一音频应用迁回至第一电子设备。
在本申请实施例中,在将第一音频应用迁移至第二电子设备后,第一电子设备检测到用户点击第一音频应用的图标,则第一电子设备获得第二指令。或者,用户在第二电子设备上对第一音频应用进行迁回操作,第二电子设备向第一电子设备传输第二指令。或者,布局在第二电子设备上的分布式融合感知平台服务,监测到用户在第二电子设备上对第一音频应用进行迁回操作时,传输第二指令至第一电子设备。
步骤S182,第一电子设备响应于第二指令,将第一音频应用的第一焦点信息由第二焦点栈移至第一焦点栈的栈顶,并通知第一音频应用获得音频焦点。
在本申请实施例中,第一电子设备响应于第二指令,确定第二指令对应的焦点栈,然后从确定的焦点栈中将第一音频应用的第一焦点信息提取出来,并将提取出的第一焦点信息置于第一焦点栈。
示例性地,当第一电子设备将第一音频应用迁移至第二电子设备时,该第一音频应用的第一焦点信息存放在第二焦点栈,第一电子设备响应于第二指令确定第二焦点栈,第一电子设备将第一焦点信息由第二焦点栈移至第一焦点栈。当第一电子设备将第一音频应用迁移至另一第二电子设备时,则该第一音频应用的第一焦点信息存放在第三焦点栈,第一电子设备响应于第二指令确定第三焦点栈,第一电子设备将第一焦点信息由第三焦点栈移至第一焦点栈。当第一焦点栈中的第一焦点信息置于栈顶,则该第一焦点信息对应的第一音频应用获取播放音频的权限,第一电子设备可以播放第一音频应用的音频。
在本申请实施例中,第一电子设备将第一音频应用迁回至第二电子设备上播放音频后,又可以将第一音频应用迁回至第一电子设备上播放音频。
如上述实施例,对于第一电子设备上的多个第一音频应用,当该多个第一音频应用通过不同电子设备(如第一电子设备、第二电子设备或另一第二电子设备)播放音频时,由第一电子设备上的多个焦点栈分别维护该多个第一音频应用的焦点信息,实现分开管理该多个第一音频应用的音频播放。下面介绍由第二电子设备管理流转过来的第一音频应用,以此实现分布式音频管理。
实施例九
实施例九与上述实施例一至八的区别在于,第二电子设备可以仅维护一个第一本地焦点栈(类似上述第一焦点栈),第一电子设备也可以仅维护一个第二本地焦点栈(也即上述第一焦点栈)。
第一电子设备与第二电子设备均包括焦点栈,第二电子设备的第一本地焦点栈用于存放通过第二电子设备播放音频的音频应用(包括安装在第二电子设备上的第二音频应用以及由第一电子设备流转至第二电子设备的第一音频应用)的焦点信息。第一电子设备的第二本地焦点栈用于存放申请通过第一电子设备播放音频的音频应用的焦点信息。
其中焦点栈可以基于预设的维护秩序确保一次仅一个焦点信息对应的音频应用获得音频焦点。本申请实施例中的焦点栈(如第一本地焦点栈、第二本地焦点栈)不仅可以实现为堆栈,还可以实现为数组、队列或map等,本申请对此不作具体限定。
请参阅图19,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S191,第一电子设备获取第一指令。
第一电子设备获得第一指令包括但不限于如下情形:第一电子设备检测到用户将正在播放音频的第一音频应用流转至第二电子设备,第一电子设备获得第一指令。又或者第一电子设备已将未播放音频的第一音频应用流转至第二电子设备,第二电子设备检测到流转过来的第一音频应用申请播放音频,第二电子设备将流转过来的第一音频应用申请播放音频的信息传输至第一电子设备,第一电子设备获得第一指令。其中第一指令的相关内容可以参考上述实施例,在此不再赘述。
步骤S192,第一电子设备响应于第一指令,通知第一音频应用获得音频焦点,并将第一音频应用的第一焦点信息传输至第二电子设备。
在本申请实施例中,第一电子设备响应于第一指令,通知第一音频应用获得音频焦点,第一音频应用获得播放音频的权限。第一电子设备可以基于其与第二电子设备之间的通信连接传输第一音频应用的内容(如音频、界面、申请音频焦点时的第一焦点信息等)至第二电子设备。
步骤S193,第二电子设备根据第一焦点信息创建第一模拟焦点信息,并将第一模拟焦点信息置于第一本地焦点栈的栈顶,以当第一音频应用的内容流转至第二电子设备时,通过第二电子设备播放第一音频应用的音频。
在本申请实施例中,第二电子设备根据第一焦点信息模拟第一音频应用向第二电子设备的音频框架申请第一模拟焦点信息,并将第一模拟焦点信息加入第二电子设备的第一本地焦点栈。第一焦点信息和第一模拟焦点信息相似,至少第一焦点信息和第一模拟焦点中的音频焦点申请类型一致。第二电子设备将第一模拟焦点信息置于第一本地焦点栈的栈顶,通知第一本地焦点栈中的其他焦点信息所对应的音频应用失去音频焦点,则失去音频焦点的音频应用根据第一模拟焦点信息中对应的音频焦点申请类型暂停播放、停止播放或降低音量。同时,第二电子设备可以根据其与第一电子设备的通信连接将第一音频应用获得音频焦点的信息告知第一电子设备。则该第一音频应用具有播放音频的权限,可播放音频。当第二电子设备接收到第一音频应用的音频时,第二电子设备播放该第一音频应用的音频。
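第二电子设备根据传输过来的第一焦点信息创建第一模拟焦点信息并压入第一本地焦点栈,这一过程可用如下Python片段示意(字段名为便于说明而假设,仅保证与原焦点信息中的音频焦点申请类型一致):

```python
# 简化示意(实施例九):第二电子设备根据第一电子设备传来的
# 第一焦点信息创建"模拟焦点信息",置于第一本地焦点栈的栈顶,
# 由第二电子设备间接管理流转过来的第一音频应用。
def create_simulated_focus(received_info):
    return {
        "app": received_info["app"],
        "type": received_info["type"],  # 音频焦点申请类型保持一致
        "simulated": True,              # 标记为模拟焦点信息
    }


local_stack = [{"app": "视频APP", "type": "AUDIOFOCUS_GAIN", "simulated": False}]
sim = create_simulated_focus({"app": "音乐APP", "type": "AUDIOFOCUS_GAIN"})
local_stack.append(sim)  # 第一模拟焦点信息置于第一本地焦点栈栈顶
```

入栈后,第一本地焦点栈中原栈顶应用(如视频APP)被通知失去焦点,流转过来的音乐APP则经由模拟焦点信息纳入第二电子设备的本地焦点管理。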
在本申请实施例中,第一电子设备响应于第一指令,将第一指令指示的第一音频应用所对应的第一焦点信息传输至第二电子设备,第二电子设备可以根据该第一焦点信息创建第一模拟焦点信息,并将第一模拟焦点信息加入第二电子设备的第一本地焦点栈,由第二电子设备间接管理该流转至第二电子设备的第一音频应用。申请在第一电子设备上播放音频的第一音频应用对应的第一焦点信息会加入至第一电子设备的第二本地焦点栈,第一本地焦点栈与第二本地焦点栈相互独立,则通过第二电子设备播放音频的第一音频应用与通过第一电子设备播放音频的其他第一音频应用互不影响。
实施例十
与实施例九的区别在于,当第一电子设备将第一音频应用流转过去后,在第二电子设备上打开新的第二音频应用。
请参阅图20,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S201,第二电子设备获取第四操作,其中第四操作指示通过第二电子设备播放第二音频应用的音频。
第二电子设备上安装第二音频应用,第二电子设备可以在监测到用户启动并播放第二音频应用的操作时,获得第四操作。第四操作可以为点击、触控、长按、语音等操作,例如第四操作可以为点击第二音频应用播放控件。
步骤S202,第二电子设备响应于第四操作,将第二音频应用的第二焦点信息置于第一本地焦点栈的栈顶,通知第二音频应用获得音频焦点。
如实施例九,第二电子设备将第一模拟焦点信息存放至第二电子设备的第一本地焦点栈中,当第一模拟焦点信息处于栈顶位置,则第二电子设备播放第一音频应用的音频。在步骤S202中,第二电子设备将第二音频应用的第二焦点信息加入至第一本地焦点栈中,第二音频应用的第二焦点信息处于栈顶位置,第一模拟焦点信息位于第二音频应用的第二焦点信息下方。第二电子设备通知第二音频应用获得音频焦点。
步骤S203,第一电子设备获取第一信息。
其中第一信息用于指示通过第二电子设备播放第二音频应用的音频。
第一电子设备获得第一信息的内容可以参考上述实施例六。
在第二电子设备将第二音频应用的第二焦点信息置于第一本地焦点栈的栈顶时,通知第二音频应用获得音频焦点。同时,第二电子设备基于与第一电子设备的通信连接,通知第一电子设备第二音频应用获得音频焦点,则第一电子设备获得第一信息。
步骤S204,第一电子设备响应于获取到第一信息,通知第一音频应用失去音频焦点。
在本申请实施例中,当第二音频应用获得音频焦点时,第二电子设备播放第二音频应用的音频。
当第一电子设备通知第一音频应用其失去音频焦点,第一音频应用响应于失去音频焦点,第一音频应用可以获得第二音频应用的音频焦点申请类型,然后根据该音频焦点申请类型暂停播放、停止播放或降低音量。
实施例十一
与实施例九、十的区别在于,第一电子设备已将第一音频应用迁移至第二电子设备,实施例十一中用户将第一音频应用迁回至第一电子设备。
请参阅图21,示例性介绍本申请实施例提供的另一种音频控制方法。
步骤S211,第一电子设备响应于迁回指令,从第二电子设备中获取第一模拟焦点信息。
其中迁回指令指示将第一音频应用迁回至第一电子设备。
在本申请实施例中,在将第一音频应用迁移至第二电子设备后,第一电子设备检测到用户点击第一音频应用的图标,获得迁回指令(对应上述实施例八的第二指令)。或者,用户在第二电子设备上对第一音频应用进行迁回操作(如点击将第一音频应用迁移至第一电子设备),第二电子设备向第一电子设备传输迁回指令。
第一电子设备根据迁回指令中指示迁回的第一音频应用,从对应的第二电子设备中获取第一模拟焦点信息。在上述实施例四情形中,第一音频应用流转至第二电子设备上播放,另一第一音频应用流转至另一第二电子设备上播放。当第一电子设备所接收到的迁回指令指示迁回第二电子设备上的第一音频应用时,从第二电子设备中获取该第一音频应用的第一模拟焦点信息。当迁回指令指示迁回另一第二电子设备上的另一第一音频应用时,从另一第二电子设备中获取该另一第一音频应用的第一模拟焦点信息。
步骤S212,第一电子设备将第一模拟焦点信息置于第二本地焦点栈的栈顶,并通知第一音频应用获得音频焦点,第一电子设备播放第一音频应用的音频。
如上述步骤S211第一电子设备获得第一模拟焦点信息,然后根据获得的第一模拟焦点信息创建新的第一模拟焦点信息。将新的第一模拟焦点信息加入第一电子设备的第二本地焦点栈。当新的第一模拟焦点信息位于第二本地焦点栈的栈顶时,第一电子设备通知该新的第一模拟焦点信息对应的第一音频应用获得音频焦点,第一音频应用具有播放音频的权限,第一电子设备播放第一音频应用的音频。
需要说明的是,实施例九至十一中的第一电子设备也可以维护两个或两个以上的焦点栈,本申请对此不作具体限定。
本申请的实施例也可以应用至焦点的管理,如窗口焦点的管理。例如在第一电子设备将安装于其上的第一应用A流转至第二电子设备后,第一应用A对应的窗口焦点存放至第一桶中。相应地,若用户打开安装于第一电子设备上的第一应用B,可以将第一应用B对应的窗口焦点存放至第二桶中。其中第一桶用于存放流转至第二电子设备的第一应用的窗口焦点。第二桶用于存放在第一电子设备上的第一应用的窗口焦点。通过设置第一桶和第二桶,可以使得两个应用均获得焦点,且可以很好地管理焦点。
在一些实施例中,第一电子设备将N个第一应用分别流转至N个第二电子设备,则可以将该N个第一应用对应的窗口焦点分别存放至N个第一桶中,该N个第一桶分别与流转的第二电子设备对应,在流转至第二电子设备的应用包括多个时,可以较好地管理这多个应用的焦点。
请参阅图22,示例性介绍本申请提供的电子设备220结构,该电子设备220可以为上述第一电子设备或第二电子设备。
电子设备220可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180等。
本发明实施例示意的结构并不构成对电子设备220的具体限定。在本申请另一些实施例中,电子设备220可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural‐network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
电子设备220的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。
移动通信模块150可以提供应用在电子设备220上的包括2G/3G/4G/5G等无线通信的解决方案。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。
在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备220上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi‐Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。
在一些实施例中,电子设备220的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备220可以通过无线通信技术与网络以及其他设备通信。
电子设备220通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light‐emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active‐matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light‐emitting diode,FLED),Miniled,MicroLed,Micro‐oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备220可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备220可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal‐oxide‐semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备220可以包括1个或N个摄像头193,N为大于1的正整数。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备220的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备220的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备220使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
The electronic device 220 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 may further be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 220 can play music or conduct a hands-free call through the speaker 170A.
The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 220 answers a call or plays a voice message, the voice can be heard by holding the receiver 170B close to the ear.
The microphone 170C, also called a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into it. The electronic device 220 may be provided with at least one microphone 170C. In other embodiments, the electronic device 220 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 220 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and the like.
The headset jack 170D is used to connect a wired headset. The headset jack 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
Of course, the electronic device 220 may further include a charging management module, a power management module, a battery, buttons, an indicator, one or more SIM card interfaces, and the like, which is not limited in any way in the embodiments of the present application.
An embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to perform the above related steps, so as to implement the audio control method in the above method embodiments.
An embodiment of the present application further provides a computer storage medium including computer instructions which, when run on an electronic device, cause the electronic device to perform the audio control method of the above embodiments.
The electronic device, computer storage medium, computer program product, and chip system provided in the embodiments of the present application are all used to perform the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
From the description of the above implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional modules is merely used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or some of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into modules or units is merely a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement within the technical scope disclosed in the present application shall fall within the protection scope of the present application.

Claims (14)

  1. An audio control method, applied to a first electronic device, wherein a first audio application is installed on the first electronic device, and the first electronic device comprises a first focus stack, the first focus stack being used to store focus information corresponding to audio applications that play audio through the first electronic device, the method comprising:
    creating a second focus stack, wherein the second focus stack is used to store focus information corresponding to audio applications that play audio through a second electronic device;
    in response to a first instruction, placing first focus information of the first audio application on the top of the second focus stack, and notifying the first audio application that it has obtained audio focus, so that when content of the first audio application flows to the second electronic device, the audio of the first audio application is played through the second electronic device.
  2. The audio control method according to claim 1, wherein a second audio application is installed on the second electronic device, and the method further comprises:
    obtaining first information, wherein the first information is used to indicate that the audio of the second audio application is to be played through the second electronic device;
    in response to obtaining the first information, obtaining second focus information of the second audio application, placing the second focus information on the top of the second focus stack, and notifying the first audio application that it has lost the audio focus.
  3. The audio control method according to claim 2, further comprising:
    obtaining second information, wherein the second information is used to indicate that the second audio application has lost the audio focus;
    in response to obtaining the second information, placing the first focus information on the top of the second focus stack, and notifying the first audio application that it has obtained the audio focus.
  4. The audio control method according to any one of claims 1 to 3, wherein the placing, in response to the first instruction, the first focus information of the first audio application on the top of the second focus stack comprises:
    obtaining a first operation;
    in response to the first operation, determining, from the first focus stack and the second focus stack, the first focus stack as corresponding to the first operation, and placing the first focus information of the first audio application on the top of the first focus stack, so as to play the audio of the first audio application through the first electronic device;
    obtaining a second operation, wherein the second operation is used to indicate that the first audio application, which is playing audio, is to flow to the second electronic device;
    in response to the first instruction corresponding to the second operation, determining, from the first focus stack and the second focus stack, the second focus stack as corresponding to the first instruction, and moving the first focus information of the first audio application from the first focus stack to the top of the second focus stack.
  5. The audio control method according to any one of claims 1 to 3, further comprising, before the responding to the first instruction:
    in response to a third operation, causing the first audio application to flow to the second electronic device;
    when it is detected that the first audio application that has flowed to the second electronic device requests to play audio, obtaining the first instruction.
  6. The audio control method according to any one of claims 1 to 5, further comprising:
    obtaining a second instruction, wherein the second instruction is used to indicate that the first audio application is to be migrated back to the first electronic device;
    in response to the second instruction, moving the first focus information from the second focus stack to the top of the first focus stack, and notifying the first audio application that it has obtained the audio focus.
  7. The audio control method according to any one of claims 1 to 6, wherein the creating the second focus stack comprises:
    obtaining a device identifier of the second electronic device;
    creating the second focus stack according to the device identifier.
  8. The audio control method according to claim 7, wherein, when N device identifiers are obtained, N being an integer greater than or equal to 2, the creating the second focus stack according to the device identifier comprises:
    creating N second focus stacks respectively according to the N device identifiers.
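For illustration only, the focus-stack behavior recited in claims 1 to 8 can be sketched as follows. This is a minimal sketch, not the claimed implementation: all class, method, and field names (`FocusManager`, `grant_focus`, `transfer_to_device`, the dictionary-based focus entries) are hypothetical assumptions introduced for readability.

```python
class FocusManager:
    """Hypothetical sketch of the dual focus-stack scheme in claims 1-8."""

    def __init__(self):
        self.first_stack = []    # first focus stack: audio played on this device
        self.second_stacks = {}  # device identifier -> second focus stack

    def create_second_stack(self, device_id):
        # Claims 7 and 8: one second focus stack is created per device identifier.
        self.second_stacks.setdefault(device_id, [])

    def grant_focus(self, stack, app):
        # The entry on top of a stack holds audio focus; the previous
        # top entry, if any, is notified that it has lost focus.
        if stack:
            stack[-1]["state"] = "lost"
        entry = {"app": app, "state": "granted"}
        stack.append(entry)
        return entry

    def transfer_to_device(self, device_id, app):
        # Claim 4: when playback flows to the second device, the app's focus
        # entry moves from the first stack to the top of the second stack.
        self.create_second_stack(device_id)
        self.first_stack = [e for e in self.first_stack if e["app"] != app]
        return self.grant_focus(self.second_stacks[device_id], app)


mgr = FocusManager()
mgr.grant_focus(mgr.first_stack, "music")         # music plays on the first device
entry = mgr.transfer_to_device("dev-2", "music")  # playback flows to the second device
print(entry["state"])        # granted
print(len(mgr.first_stack))  # 0
```

Because the two stacks are managed separately, an app gaining focus on the second device never displaces entries in the first stack, which is the claimed independence of the two devices' audio.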
  9. An audio control method, applied to a first electronic device and a second electronic device, wherein the first electronic device and the second electronic device are communicatively connected, a first audio application is installed on the first electronic device, the second electronic device comprises a first local focus stack, and the first local focus stack is used to store focus information of audio applications that play audio through the second electronic device, the method comprising:
    obtaining, by the first electronic device, a first instruction;
    in response to the first instruction, notifying, by the first electronic device, the first audio application that it has obtained audio focus, and transmitting first focus information of the first audio application to the second electronic device;
    creating, by the second electronic device, first simulated focus information according to the first focus information, and placing the first simulated focus information on the top of the first local focus stack, so that when content of the first audio application flows to the second electronic device, the audio of the first audio application is played through the second electronic device.
  10. The audio control method according to claim 9, wherein the second electronic device comprises a second audio application, and the method further comprises:
    obtaining, by the second electronic device, a fourth operation, wherein the fourth operation indicates that the audio of the second audio application is to be played through the second electronic device;
    in response to the fourth operation, placing, by the second electronic device, second focus information of the second audio application on the top of the first local focus stack, and notifying the second audio application that it has obtained the audio focus;
    in response to the second audio application obtaining the audio focus, notifying, by the first electronic device, the first audio application that it has lost the audio focus.
  11. The audio control method according to claim 9 or 10, wherein the first electronic device comprises a second local focus stack, and the method further comprises:
    in response to a migrate-back instruction, obtaining, by the first electronic device, the first simulated focus information from the second electronic device;
    placing, by the first electronic device, the first simulated focus information on the top of the second local focus stack, and notifying the first audio application that it has obtained the audio focus, whereupon the first electronic device plays the audio of the first audio application.
  12. A computer storage medium, comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the audio control method according to any one of claims 1 to 8.
  13. A computer program product which, when run on a computer, causes the computer to perform the audio control method according to any one of claims 1 to 8.
  14. An electronic device, comprising a processor and a memory, wherein the memory is used to store instructions, and the processor is used to call the instructions in the memory, so that the electronic device performs the audio control method according to any one of claims 1 to 8.
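Claims 9 to 11 recite a two-device variant in which the second device mirrors the first device's focus entry as "simulated focus information" in its own local stack. The following is a hypothetical sketch for illustration only; the `Device` class, `push` method, and dictionary fields are illustrative assumptions, not the claimed implementation.

```python
class Device:
    """Hypothetical model of one device's local focus stack (claims 9-11)."""

    def __init__(self, name):
        self.name = name
        self.stack = []  # local focus stack: the top entry holds audio focus

    def push(self, entry):
        if self.stack:
            self.stack[-1]["state"] = "lost"  # previous holder is notified
        entry["state"] = "granted"
        self.stack.append(entry)


first, second = Device("first"), Device("second")

# Claim 9: the first device transmits the first app's focus information to the
# second device, which creates a simulated entry on its first local focus stack.
focus_info = {"app": "first-audio-app", "origin": first.name}
simulated = dict(focus_info, simulated=True)
second.push(simulated)

# Claim 10: a second app on the second device takes focus, so the simulated
# entry - and therefore the first app - is notified that focus is lost.
second.push({"app": "second-audio-app", "origin": second.name})
print(simulated["state"])  # lost

# Claim 11: on migration back, the first device retrieves the simulated entry
# and places it on its own local stack, restoring focus to the first app.
second.stack.remove(simulated)
first.push(simulated)
print(simulated["state"])  # granted
```

The simulated entry lets the second device arbitrate focus among its local apps and the remote app with a single stack, while the first device only needs to relay gain/loss notifications to the first audio application.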
PCT/CN2023/127789 2022-11-01 2023-10-30 Audio control method, storage medium, program product, and electronic device WO2024093922A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211358862.2A CN117992007A (zh) 2022-11-01 2022-11-01 Audio control method, storage medium, program product, and electronic device
CN202211358862.2 2022-11-01

Publications (1)

Publication Number Publication Date
WO2024093922A1 (zh)

Family

ID=90889553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/127789 WO2024093922A1 (zh) 2022-11-01 2023-10-30 Audio control method, storage medium, program product, and electronic device

Country Status (2)

Country Link
CN (1) CN117992007A (zh)
WO (1) WO2024093922A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090273712A1 (en) * 2008-05-01 2009-11-05 Elliott Landy System and method for real-time synchronization of a video resource and different audio resources
US20160249356A1 (en) * 2015-02-23 2016-08-25 Intricon Corporation Enabling Concurrent Proprietary Audio And Bluetooth Low Energy Using Enhanced LE Link Layer For Hearing Device
CN107391063A (zh) * 2017-06-30 2017-11-24 Qingdao Hisense Mobile Communication Technology Co., Ltd. Information display method and apparatus, and computer-readable storage medium
CN107493375A (zh) * 2017-06-30 2017-12-19 Beijing Chaozhuo Technology Co., Ltd. Extended screen projection method and screen projection system for a mobile terminal
CN107967130A (zh) * 2016-10-20 2018-04-27 Shenzhen Lianyou Technology Co., Ltd. Method and apparatus for switching the audio channel of an in-vehicle head unit
CN111324327A (zh) * 2020-02-20 2020-06-23 Huawei Technologies Co., Ltd. Screen projection method and terminal device
CN112162783A (zh) * 2020-09-27 2021-01-01 Gree Electric Appliances, Inc. of Zhuhai Keep-alive processing method and system for a music playback application, storage medium, and electronic device
WO2022179405A1 (zh) * 2021-02-26 2022-09-01 Huawei Technologies Co., Ltd. Screen projection display method and electronic device


Also Published As

Publication number Publication date
CN117992007A (zh) 2024-05-07

Similar Documents

Publication Publication Date Title
US11818420B2 (en) Cross-device content projection method and electronic device
WO2020238871A1 (zh) Screen projection method and system, and related apparatus
WO2020244495A1 (zh) Screen projection display method and electronic device
WO2022100305A1 (zh) Cross-device picture display method and apparatus, and electronic device
JP2022549157A (ja) Data transmission method and related apparatus
CN111866950B (zh) Data transmission method in MEC and communication apparatus
WO2021233079A1 (zh) Cross-device content projection method and electronic device
WO2020077512A1 (zh) Voice call method, electronic device, and system
WO2021121052A1 (zh) Multi-screen collaboration method and system, and electronic device
WO2022100304A1 (zh) Cross-device application content flow method and apparatus, and electronic device
CN113630910B (zh) Method for using a cellular communication function, and related apparatus and system
WO2020156230A1 (zh) Method for presenting a video on an incoming call, and electronic device
WO2021017894A1 (zh) Method for using a remote SIM module, and electronic device
WO2022179405A1 (zh) Screen projection display method and electronic device
WO2022143883A1 (zh) Photographing method and system, and electronic device
US20230403458A1 (en) Camera Invocation Method and System, and Electronic Device
KR102491006B1 (ko) Data transmission method and electronic device
WO2022222691A1 (zh) Call processing method and related device
US20230273872A1 (en) Application debuging method and electronic device
WO2023020012A1 (zh) Data communication method between devices, device, storage medium, and program product
WO2024093922A1 (zh) Audio control method, storage medium, program product, and electronic device
US20220311700A1 (en) Method for multiplexing http channels and terminal
WO2021218544A1 (zh) System and method for providing wireless internet access, and electronic device
WO2024104122A1 (zh) Sharing method, electronic device, and computer storage medium
WO2024067170A1 (zh) Device management method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23884852

Country of ref document: EP

Kind code of ref document: A1