WO2022222780A1 - Audio output method, media file recording method, and electronic device - Google Patents

Audio output method, media file recording method, and electronic device

Info

Publication number
WO2022222780A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
electronic device
applications
application
target
Prior art date
Application number
PCT/CN2022/086067
Other languages
English (en)
French (fr)
Inventor
王傲飞
于飞
范亚军
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP22790889.4A (published as EP4310664A1)
Publication of WO2022222780A1
Priority to US18/492,185 (published as US20240045651A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44594Unloading
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C7/00Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/16Storage of analogue signals in digital stores using an arrangement comprising analogue/digital [A/D] converters, digital memories and digital/analogue [D/A] converters 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/12Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01Aspects of volume control, not necessarily automatic, in sound systems

Definitions

  • the present application relates to the field of electronic technology, and in particular, to an audio output method, a media file recording method, an electronic device, and a computer-readable storage medium.
  • With the popularization of audio applications (such as music applications, navigation applications, and conference applications) on electronic devices (e.g., mobile phones), it is common for an audio application to request audio output; the electronic device then responds to the request and plays the audio corresponding to the application. In this way, when multiple applications request audio output at the same time, the sounds of the multiple applications are mixed together, interfere with each other, and degrade the user experience.
  • Some embodiments of the present application provide an audio output method, a method for recording a media file, an electronic device, and a computer-readable storage medium.
  • The present application is described below from various aspects; the embodiments and beneficial effects of the following aspects may be referred to one another.
  • Embodiments of the present application provide an audio output method for an electronic device, the method comprising: receiving audio output requests from M audio applications on the electronic device (for example, the operating system of the electronic device receives the audio output requests of the M audio applications); selecting N target applications from the M audio applications (for example, the operating system of the electronic device selects the N target applications from the M audio applications); and outputting the audio data of the N target applications, where M is greater than N.
  • That is, the embodiments of the present application limit the maximum number of applications that output audio at the same time (such applications are herein referred to as "target applications"). When the number M of applications that request audio output (herein referred to as "candidate applications") is greater than the threshold N, the electronic device screens the candidate applications to limit the number of target applications to N.
  • In this way, the embodiments of the present application can alleviate the mutual audio interference among multiple applications that occurs in the prior art.
  • In the audio focus mechanism of the prior art, the application that sends the audio output request last is determined as the target application for outputting audio; in the embodiments of the present application, by contrast, the N target applications can be screened from the M candidate applications according to any filtering condition (for example, a filtering condition specified by the user). Therefore, compared with the audio focus mechanism of the prior art, the embodiments of the present application can determine the target applications for outputting audio in a more flexible manner, thereby improving the user experience.
  • N is a positive integer of 2 or more.
  • That is, multiple target applications (as long as their number does not exceed N, where N ≥ 2) are allowed to output audio at the same time, so the user's need to listen to multiple audio streams simultaneously can be satisfied.
  • In some embodiments, selecting the N target applications from the M audio applications includes: selecting the N target applications from the M audio applications based on the current working scene of the electronic device; or selecting the N target applications from the M audio applications based on preset application priority information; or selecting the N target applications from the M audio applications based on the user's selection operation on the M audio applications.
  • Determining the target applications according to the current working scene of the electronic device, the application priorities, or a real-time specification by the user can better meet the needs of the user and improve the user experience.
  • In some embodiments, selecting the N target applications from the M audio applications based on the current working scene of the electronic device includes: determining the current working scene of the electronic device; determining a scene feature application from the M audio applications according to the current working scene, wherein the scene feature application is an application required by the current working scene; and determining the N target applications based on the determination result of the scene feature application, wherein the N target applications at least include the scene feature application.
  • In this embodiment, the N target applications include at least the applications required by the current working scene (i.e., the scene feature applications), so that the audio output by the electronic device better matches the current working scene, which helps improve the user experience.
  • In some embodiments, determining the N target applications based on the determination result of the scene feature application includes: determining the priority of each of the M audio applications based on the determination result, wherein the priority of the scene feature application is higher than that of the other applications among the M audio applications; and, according to the priority ordering of the M audio applications, determining the N audio applications with the highest priority as the N target applications, so that the N target applications include at least the scene feature application.
  • In some embodiments, determining the current working scene of the electronic device includes: determining the current working scene according to other electronic devices communicatively connected to the electronic device; or determining the current working scene according to an application currently running on the electronic device; or determining the current working scene according to measurement data of a specific sensor, where the specific sensor is used to measure the displacement, velocity, and/or acceleration of the electronic device; or determining the current working scene according to a scene-specifying operation of the user.
  • the current working scene of the electronic device includes a vehicle scene, a home scene, a conference scene, a sports scene, or a high-speed rail travel scene.
  • In some embodiments, the electronic device includes a plurality of pieces of volume control information corresponding to the N target applications, and each of the N target applications corresponds to one piece of the volume control information; outputting the audio data of the N target applications includes: determining the volume of each target application according to the volume control information of that target application, and outputting the audio data of the target application at that volume.
  • the electronic device includes N pieces of volume control information, and the N target applications are in one-to-one correspondence with the N pieces of volume control information.
  • the volume of each target application can be independently controlled.
  • the electronic device includes a plurality of volume control information corresponding to the N target applications, wherein each volume control information of the plurality of volume control information can be determined based on user input.
  • the user can adjust the volume of each application as required, so as to improve the user experience.
  • In some embodiments, outputting the audio data of the N target applications includes: playing the audio data of the N target applications through multiple audio playback devices, where the audio playback devices include the electronic device itself and/or devices other than the electronic device.
  • In some embodiments, playing the audio data of the N target applications through multiple audio playback devices includes: determining the audio playback device corresponding to each of the N target applications, and playing the audio data of the N target applications based on the determination results; wherein determining the audio playback device corresponding to each target application includes: determining the audio playback device corresponding to the target application based on preset device priority information, or determining it based on the number of times the target application has been played on each audio playback device.
  • In some embodiments, the M audio applications are applications other than the system phone application.
  • N is determined by the electronic device according to the number of audio playback devices currently communicatively connected to the electronic device.
  • An embodiment of the present application provides a method for recording a media file on an electronic device, including: when the electronic device outputs audio data of multiple audio applications, receiving a first input, where the first input is used to select one or more target applications from the multiple audio applications; and recording a first media file, wherein recording the first media file includes recording the audio data of the one or more target applications to generate the first media file.
  • That is, when the electronic device outputs the audio of multiple audio applications (referred to as "candidate applications"), the electronic device records only the audio of the selected applications (referred to as "target applications") and does not record the audio of the other candidate applications, so as to meet the diverse needs of users.
  • the number of target applications is less than the number of audio applications currently outputting audio.
  • In some embodiments, when the electronic device outputs the audio data of the multiple audio applications, the electronic device also outputs video data of a first video application; and recording the first media file includes recording the audio data of the one or more target applications and the video data of the first video application to generate the first media file.
  • In this way, when recording a video, the user can select both the video data source and the audio data source, so as to meet the diverse needs of the user.
  • Embodiments of the present application provide an electronic device, including: a memory for storing instructions to be executed by one or more processors of the electronic device; and a processor which, when executing the instructions in the memory, causes the electronic device to execute the audio output method provided by any embodiment of the first aspect of the present application, or the media file recording method provided by any embodiment of the second aspect of the present application.
  • An embodiment of the present application provides a computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to execute the audio output method provided by any embodiment of the first aspect of the present application, or the media file recording method provided by any embodiment of the second aspect of the present application.
  • FIG. 1 shows an exemplary application scenario 1 of the audio output method provided by the embodiment of the present application
  • FIG. 2 shows an exemplary structural diagram of the focus queue under the audio focus mechanism
  • FIG. 3 shows an exemplary scenario in which a candidate application on an electronic device issues an audio output request in an embodiment of the present application
  • FIG. 4a shows an exemplary flowchart 1 of the method for selecting a target application provided by an embodiment of the present application
  • FIG. 4b shows an exemplary interface diagram for specifying a current work scene provided by an embodiment of the present application
  • FIG. 5a shows an exemplary interface diagram for specifying key applications provided by an embodiment of the present application
  • FIG. 5b shows an exemplary flowchart 2 of the target application selection method provided by the embodiment of the present application
  • FIG. 6a shows an exemplary interface diagram 1 for specifying a target application provided by an embodiment of the present application
  • FIG. 6b shows an exemplary interface diagram 2 for specifying a target application provided by an embodiment of the present application
  • FIG. 7a shows a schematic diagram 1 of a focus queue provided by an embodiment of the present application.
  • FIG. 7b shows a second schematic diagram of a focus queue provided by an embodiment of the present application.
  • FIG. 7c shows a schematic diagram 3 of a focus queue provided by an embodiment of the present application.
  • FIG. 8 shows an exemplary flowchart of an audio output method provided by an embodiment of the present application.
  • FIG. 9 shows an exemplary flowchart 1 of an electronic device outputting audio of a target application provided by an embodiment of the present application
  • FIG. 10a shows a schematic diagram 1 of a volume adjustment interface provided by an embodiment of the present application
  • FIG. 10b shows a second schematic diagram of a volume adjustment interface provided by an embodiment of the present application.
  • FIG. 11 shows an exemplary application scenario 2 of the audio output method provided by the embodiment of the present application.
  • Fig. 12a shows an exemplary flow chart 2 of an electronic device outputting audio of a target application provided by an embodiment of the present application
  • Fig. 12b shows an interface diagram 1 for specifying a priority playback device provided by an embodiment of the present application
  • FIG. 12c shows interface diagram 2 for specifying a priority playback device provided by an embodiment of the present application
  • FIG. 13 shows an exemplary application scenario 1 of the method for recording a media file provided by an embodiment of the present application
  • FIG. 14 shows an exemplary flowchart of a method for recording a media file provided by an embodiment of the present application
  • FIG. 15 shows an exemplary interface diagram for selecting an audio source of a media file provided by an embodiment of the present application
  • FIG. 16 shows an exemplary application scenario 2 of the method for recording a media file provided by an embodiment of the present application
  • FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 18 shows a block diagram of a control device provided by an embodiment of the present application.
  • FIG. 19 shows a schematic structural diagram of a system on chip (System on Chip, SoC) provided by an embodiment of the present application.
  • FIG. 1 shows an application scenario of the embodiment of the present application.
  • a plurality of audio applications are installed on the electronic device 100 (specifically, a mobile phone), and each audio application can output audio data of different contents (“audio” for short).
  • For example, Kuwo Music™ can output the audio of music programs, ZOOM™ can output conference voice, Dragonfly FM™ can output the audio of radio programs, Baidu Maps™ can output navigation voice, and so on.
  • Generally, when an audio application requests audio output, the electronic device responds to the request and plays the audio corresponding to the application. For example, when the user turns on Kuwo Music and Baidu Maps, the electronic device plays songs and navigation voice; if the user then answers a call on the electronic device, the electronic device plays the call voice at the same time; and if a timer set on the electronic device expires, the electronic device additionally plays the timer's prompt sound.
  • In this case, the audio contents of many applications are mixed together and interfere with each other, resulting in a poor user experience.
  • the operating systems of some electronic devices provide an audio focus mechanism.
  • Under the audio focus mechanism, only one application (usually the application that made the most recent request) can output audio at a time. Specifically, the operating system assigns the focus to the application that made the most recent request and outputs that application's audio; the other applications lose focus and suspend their audio output.
  • FIG. 2 shows an exemplary focus queue under this mechanism: the application currently outputting audio is shown as the alarm clock in the figure, and the next application in the focus queue, adjacent to the current application, is shown as ZOOM.
  • Under the audio focus mechanism, only the application that sends its request last can output audio, so the flexibility of audio output is low. When the user expects to listen to the audio of more than one application at the same time, the audio focus mechanism cannot meet the user's needs.
  • the embodiments of the present application provide an audio output method.
  • When the electronic device 100 receives audio output requests from M audio applications, the electronic device 100 selects N target applications from the M audio applications, outputs the audio corresponding to the N target applications, and stops outputting the audio of the audio applications other than the target applications, where M is greater than N.
  • That is, the embodiments of the present application limit the maximum number of applications that output audio at the same time (such applications are herein referred to as "target applications", and the maximum number is referred to as the "threshold N"). When the number M of applications requesting audio output (herein referred to as "candidate applications") is greater than the threshold N, the electronic device 100 screens the candidate applications to limit the number of target applications to N.
  • In this way, the embodiments of the present application can alleviate the mutual audio interference among multiple applications that occurs in the prior art.
  • N target applications may be screened from M candidate applications according to any screening condition (for example, a screening condition specified by a user). Therefore, compared with the audio focus mechanism of the prior art, the embodiments of the present application can determine the target application for outputting audio in a more flexible manner, thereby improving user experience.
  • In some embodiments, N is a positive integer of 2 or more. That is, the embodiments of the present application allow multiple target applications (as long as their number does not exceed N) to output audio simultaneously, so the user's need to listen to multiple audio streams at the same time can be satisfied.
  • The electronic device can be a mobile phone, a laptop, a tablet, a large-screen device, a wearable device (e.g., a watch, smart glasses, or a helmet), a desktop computer, an Augmented Reality (AR)/Virtual Reality (VR) device, a Personal Digital Assistant (PDA), and so on.
  • The audio application can be a system application such as a phone application, a timer application, a smart voice application, or a browser application (e.g., Safari™), or a third-party application such as a music application (e.g., Kuwo Music™), a video application (e.g., iQIYI™), a conference application (e.g., ZOOM™), a gaming application (e.g., PUBG™, Tank Wars™), a payment application (e.g., Alipay™), a short video application (e.g., Douyin™), a social application (e.g., Sina Weibo™), a navigation application (e.g., Baidu Maps™), an e-book application (e.g., Seven Cat Novels™), or a radio application (e.g., Dragonfly FM™), as long as it can output audio; the possibilities are not elaborated one by one.
  • FIG. 3 shows an exemplary scenario in which three candidate applications on an electronic device issue audio output requests to the electronic device.
  • In this example, the three candidate applications are ZOOM, Kuwo Music, and the alarm clock. Among them, ZOOM issues its audio output request earliest, and the alarm clock issues its audio output request latest.
  • Example 1 The electronic device 100 selects 2 target applications from 3 candidate applications based on the current work scenario.
  • this example includes the following steps:
  • S11 The electronic device 100 determines its current working scene.
  • the electronic device 100 determines the current working scene according to the device information of an external communication device (other electronic device that establishes a communication connection with the electronic device 100 ). For example, when the electronic device 100 is connected to a car audio, the electronic device 100 determines that its current working scene is a car scene; when the electronic device 100 is connected to a home gateway, the electronic device 100 determines that its current scene is a home scene, etc.;
  • In some embodiments, the electronic device 100 determines the current work scene according to an application running on the electronic device 100. For example, when the electronic device 100 runs a PPT application or a conference application (e.g., ZOOM), the electronic device 100 determines that the current work scene is a conference scene; when the electronic device 100 runs a fitness application (e.g., KEEP), the electronic device 100 determines that the current work scene is a sports scene; and so on.
  • the electronic device 100 determines the current working scene according to measurement data of a specific sensor, wherein the specific sensor is used to measure displacement, velocity and/or acceleration data of the electronic device 100 .
  • the specific sensor is a gyro sensor, an acceleration sensor, a GPS sensor, and the like.
  • For example, when the measurement data of the specific sensor indicates that the user is moving, the electronic device 100 determines that the current work scene is a sports scene (e.g., walking, running, etc.); when the current movement of the electronic device 100, determined according to the measurement data of the GPS sensor, matches high-speed rail travel, the electronic device 100 determines that the current scene is a high-speed rail travel scene.
  • In other embodiments, the electronic device 100 may also determine the current work scene according to other conditions; for example, the electronic device collects image data of the current environment through a camera, or collects sound data of the current environment through a microphone, and determines the current work scene through an AI algorithm, which will not be described in detail.
  • the electronic device 100 automatically perceives the current work scene according to the preset conditions, but the present application is not limited to this.
  • the electronic device 100 may determine the current work scene based on the user's scene specification operation.
  • Figure 4b shows an example of the user's scene specification operation.
  • the user can enter the interface 101 shown in Fig. 4b by selecting the "App Audio Management" option of the system setting application.
  • the interface 101 includes a scene selection list.
  • the scene designation operation is an operation performed by the user on the interface 101, but the present application is not limited to this.
  • the scene specifying operation may be other operations of the user, for example, sending a voice instruction to the electronic device 100 and the like.
  • S12 The electronic device 100 determines a scene feature application from the three candidate applications according to the current work scene, where the scene feature application is an application required by the current work scene.
  • the electronic device 100 stores a "scene-application” relationship table.
  • Table 1 gives an example of a "scene-application" relationship table.
  • the "scene” column is the work scene
  • the "application” column is the application required by each work scene (ie, the scene feature application).
  • the electronic device 100 may determine the scene feature application by querying the "scene-application” relationship table. For example, after determining that the current work scene is a high-speed rail travel scene, the electronic device 100 determines the "alarm clock" application as the scene feature application.
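  • To make the table lookup concrete, below is a minimal sketch of querying a "scene-application" relationship table like Table 1. The class name, scene identifiers, and table contents are illustrative assumptions, not values taken from the patent.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of the "scene-application" lookup described above.
// Scene identifiers and table contents are illustrative only.
public class SceneAppTable {
    private final Map<String, List<String>> table = Map.of(
            "CONFERENCE", List.of("ZOOM"),
            "HIGH_SPEED_RAIL", List.of("AlarmClock"),
            "VEHICLE", List.of("BaiduMaps", "KuwoMusic"));

    /** Returns the scene feature applications required by the given scene. */
    public List<String> sceneFeatureApps(String currentScene) {
        return table.getOrDefault(currentScene, List.of());
    }
}
```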
  • Table 1 may be generated by the electronic device according to the application tags of each audio application.
  • the application tag can be defined in a property file of the application (for example, the "AndroidManifest.xml” file of the android application).
  • Application tags are used to characterize the use of the application, for example, Baidu Maps has the tag “navigation”, and Dragonfly FM and Kuwo Music have the tag "entertainment”.
  • the electronic device determines the scene feature application corresponding to each scene according to the application tag. For example, the electronic device determines all applications with application labels of "navigation" and "entertainment” as scene feature applications of the vehicle scene.
  • the corresponding relationship between the scene and the application label may be preset when the electronic device leaves the factory, or may be specified by the user through the system setting application.
  • Table 1 can also be determined by an AI algorithm.
  • the audio applications running on the electronic device in each scenario are counted, and the AI model is trained through the statistical results.
  • After training, the electronic device may calculate the usage probability of each application in a preset scene according to the AI model, and determine the applications whose usage probability exceeds a set threshold (e.g., 70%) as the scene feature applications of the preset scene.
  • S13 The electronic device 100 determines the priority of each candidate application based on the determination result of the scene feature application, where the priority of the scene feature application is higher than the priorities of the other candidate applications.
  • For example, in the conference scene, the electronic device 100 determines the scene feature application to be the ZOOM application according to Table 1, and therefore determines ZOOM as the application with the highest priority among the three candidate applications. After that, the electronic device 100 determines the priorities of the remaining applications according to the order in which their audio output requests were issued: the application that made its request later has a higher priority, so the alarm clock has a higher priority than Kuwo Music. Finally, the priorities of the three candidate applications, from high to low, are: ZOOM, alarm clock, Kuwo Music.
  • In the high-speed rail travel scene, the electronic device 100 determines the scene feature application to be the alarm clock application according to Table 1, and therefore determines the alarm clock as the application with the highest priority among the three candidate applications. After that, the electronic device 100 determines the priorities of the remaining applications according to the order in which their audio output requests were issued: the application that made its request later has a higher priority, so Kuwo Music has a higher priority than ZOOM. Finally, the priorities of the three candidate applications, from high to low, are: alarm clock, Kuwo Music, ZOOM.
  • In some embodiments, when there are multiple scene feature applications, the electronic device 100 may determine the priority of each scene feature application according to the order in which the audio output requests were sent; specifically, the scene feature application that sent its audio output request later has a higher priority.
  • For example, suppose the current working scene of the electronic device is a high-speed rail travel scene, and the scene feature applications determined by the electronic device include the alarm clock application (used for reminding the time) and Seven Cat Novels (used for relaxation during travel). Of the two, the application that sent its audio output request later is the alarm clock application; therefore, the electronic device determines that the priority of the alarm clock is higher than that of Seven Cat Novels.
  • S14 The electronic device 100 determines two target applications according to the priorities of the three candidate applications.
  • Specifically, the electronic device 100 determines the two applications with the highest priority as the two target applications according to the priority ordering of the three candidate applications. Since the priority of the scene feature application is higher than the priorities of the other candidate applications, the two target applications include at least the scene feature application determined in step S12. Specifically, for the conference scenario described in step S13, the electronic device 100 determines ZOOM and the alarm clock as the two target applications; for the high-speed rail travel scenario described in step S13, the electronic device 100 determines the alarm clock and Kuwo Music as the two target applications.
  • In the above manner, the electronic device 100 selects N target applications from the M candidate applications according to the current work scenario, and the N target applications include at least the scene feature applications required by the current work scenario, so that the audio output by the electronic device 100 better matches the current work scene, which helps improve the user experience.
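  • As a rough illustration of steps S12-S14 (scene feature applications rank highest, ties broken by request recency, then the top N are kept), one possible implementation is sketched below. The CandidateApp type and the use of request timestamps are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Illustrative sketch: rank candidates, then keep the N with highest priority.
class CandidateApp {
    final String name;
    final long requestTime;  // when the audio output request was received

    CandidateApp(String name, long requestTime) {
        this.name = name;
        this.requestTime = requestTime;
    }
}

public class TargetSelector {
    /** Scene feature apps come first; later requests rank higher within a group. */
    static List<CandidateApp> selectTargets(List<CandidateApp> candidates,
                                            Set<String> sceneFeatureApps, int n) {
        List<CandidateApp> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator
                .comparing((CandidateApp a) -> sceneFeatureApps.contains(a.name) ? 0 : 1)
                .thenComparing(a -> -a.requestTime));
        return sorted.subList(0, Math.min(n, sorted.size()));
    }
}
```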
  • Example 2 The electronic device 100 selects 2 target applications from 3 candidate applications based on preset application priority information.
  • In some cases, certain applications may require a higher priority, so that the user can listen to the audio content output by these applications in time.
  • Such applications are, for example, call applications (e.g., the system phone application), notification applications (e.g., timer applications, schedule reminder applications), payment applications (e.g., Alipay), human-computer interaction applications (e.g., smart voice applications), and so on.
  • the application priority information is attribute information added to the application.
  • For example, when the attribute information of an application is Priority, the application is a key application; when the attribute information of an application is another value or is vacant, the application is a non-key application.
  • the user may set the attribute information of the application through the interface 102 shown in FIG. 5a.
  • the electronic device 100 may also set the attribute information of a specific application (eg, a system phone application) as Priority in its factory settings.
  • the method for the electronic device 100 to select 2 target applications from 3 candidate applications includes the following steps:
  • S21 The electronic device 100 determines a key application from three candidate applications according to preset application priority information.
  • Specifically, the electronic device 100 determines the key applications according to the attribute information of each candidate application: when the attribute information of an application is Priority, the electronic device 100 determines the application as a key application. In this example, the electronic device 100 determines the alarm clock as the key application.
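  • A minimal sketch of step S21 follows, assuming the attribute information is exposed as a simple name-to-value map (the patent leaves the storage format of the attribute information open):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of step S21: applications whose attribute information equals
// "Priority" are treated as key applications.
public class KeyAppFilter {
    static List<String> keyApps(Map<String, String> appAttributes) {
        List<String> keys = new ArrayList<>();
        for (Map.Entry<String, String> e : appAttributes.entrySet()) {
            if ("Priority".equals(e.getValue())) {
                keys.add(e.getKey());
            }
        }
        return keys;
    }
}
```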
  • S22 The electronic device 100 determines the priority of each of the three candidate applications based on the determination result of the key application, wherein the priority of the key application is higher than the priorities of the other candidate applications.
  • the electronic device 100 determines the alarm clock as the application with the highest priority among the three candidate applications. After that, the electronic device 100 determines the priorities of the remaining applications according to the order in which the audio output requests are issued. Specifically, an application that issues a request late has a higher priority, and therefore, Kuwo Music has a higher priority than ZOOM. Finally, the priorities of the three candidate applications are sorted from high to low: Alarm Clock, Kuwo Music, and ZOOM.
  • In some embodiments, when there are multiple key applications, the electronic device 100 may determine the priority of each key application according to the order in which the audio output requests were sent; specifically, the key application that sent its audio output request later has a higher priority. For example, in a scenario where Application 1, Application 2, and Application 3 send audio output requests in sequence and all three are key applications, the electronic device 100 determines that their priority order, from high to low, is Application 3, Application 2, Application 1.
  • S23 The electronic device 100 determines two target applications according to the priorities of the three candidate applications.
  • Specifically, the electronic device 100 determines the two applications with the highest priority as the two target applications. Therefore, in this example, the electronic device 100 determines the alarm clock and Kuwo Music as the two target applications.
  • the electronic device 100 selects 2 target applications from 3 candidate applications based on preset priority information, so the audio of key applications can be output preferentially, which is beneficial to improve user experience.
  • Example 3 The electronic device 100 selects 2 target applications from the 3 candidate applications based on the user's selection operation on the 3 candidate applications.
  • the user's selection operation on the three candidate applications is a selection operation performed by the user on the application selection interface of the electronic device 100 .
  • the electronic device 100 displays an application selection interface, through which the user can select an application (ie, a target application) that is expected to output audio.
  • Figure 6a shows an example of an application selection interface.
  • the user can select an application that is expected to output audio by checking the selection box corresponding to the candidate application. For example, in FIG. 6a, the user has checked the selection boxes corresponding to ZOOM and Kuwo Music. Therefore, the electronic device 100 determines ZOOM and Kuwo Music as 2 target applications based on the user's selection.
  • In some embodiments, when the number of applications selected by the user is less than N, the electronic device 100 selects, from the remaining applications, the applications that requested audio output latest, to form the N target applications together with the applications selected by the user. Specifically, referring to FIG. 6b, when the user only checks the ZOOM application, the electronic device 100 selects, from Kuwo Music and the alarm clock application, the application that requested audio output latest (specifically, the alarm clock application), and determines the ZOOM application and the alarm clock application as the 2 target applications.
  • In the above example, the user's selection operation is an operation performed by the user on the application selection interface, but the present application is not limited to this.
  • the user's selection operation may be other operations of the user, for example, sending a voice instruction to the electronic device.
  • the threshold N may be other values greater than 2 (eg, 3, 6, etc.), and M may be other values greater than N (eg, 5, 8, etc.).
  • the embodiment of the present application uses the idea of the audio focus mechanism to control the audio output process.
  • The difference is that the existing electronic device only allows one application to acquire the focus at a time (that is, the number of focuses is 1), while in this embodiment at most N applications can acquire the focus at the same time (that is, the number of focuses can be up to N).
  • Specifically, after receiving audio output requests from audio applications, the operating system determines whether the number M of candidate applications currently sending requests exceeds the threshold N. If the threshold N is not exceeded, the operating system assigns the focus to all of the M candidate applications to output the audio of the M candidate applications; if the number M exceeds the threshold N, the operating system selects N target applications from the M candidate applications and assigns the focus to the N target applications to output their audio.
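  • The following sketch illustrates the extended focus idea, with up to N applications holding focus at once. For brevity it ranks candidates purely by request recency; in the embodiments above, the N focus holders would instead be chosen by the screening rules already described. This is an illustration of the concept, not Android's actual AudioManager API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch: the first N entries of the queue hold focus and may
// output audio; the remaining entries wait.
public class MultiFocusManager {
    private final int n;  // threshold N
    private final Deque<String> focusQueue = new ArrayDeque<>();  // newest first

    public MultiFocusManager(int n) {
        this.n = n;
    }

    /** Called when an application sends an audio output request. */
    public synchronized void onAudioRequest(String app) {
        focusQueue.remove(app);    // a repeated request moves the app to the head
        focusQueue.addFirst(app);
    }

    public synchronized boolean hasFocus(String app) {
        int position = 0;
        for (String queued : focusQueue) {
            if (queued.equals(app)) {
                return position < n;
            }
            position++;
        }
        return false;
    }
}
```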
  • The electronic device can play ("output") the audio of the target application through its own audio playback device (for example, a speaker), and can also play the audio of the target application through other audio playback devices (for example, a Bluetooth speaker) that are communicatively connected to the electronic device.
  • the electronic device may set the threshold N as required.
  • the electronic device 100 may also determine the threshold N according to user input.
  • FIG. 7a to FIG. 7c show examples of the focus queues (i.e., audio output queues) of the electronic device 100 at different times.
  • the audio output queue at time T3 is shown in FIG. 7c , and the electronic device 100 outputs the audio of ZOOM and the alarm clock.
  • S110 The electronic device 100 receives an audio output request sent by the audio application.
  • Here, the alarm clock application is used as an example of the audio application: when the alarm goes off, it sends an audio output request to the operating system.
  • For example, the alarm clock application sends an audio output request to the operating system by calling an audio player provided by the system (e.g., MediaPlayer, AudioTrack, etc. of the Android system).
  • the electronic device 100 receives the audio output request.
  • The audio output request sent by the audio application may include the application identifier of the application (for example, the application name), the file information of the audio file requested to be output (for example, the name, address, and compression format of the audio file), and so on.
  • an alarm clock application is used as an example of an audio application, but the present application is not limited to this.
  • the audio application may be other applications, such as Douyin, browser, etc.
  • the triggering condition for each audio application to send the audio output request to the operating system may be determined according to its own function. For example, for short video applications such as Douyin, after the user opens the video playback interface, the application sends an audio output request to the operating system; for music playback applications such as Kuwo Music, after the user clicks the "Start Playing" button, the application sends an audio output request to the operating system.
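  • On Android, the application side of such a request can look roughly like the sketch below, which asks AudioManager for audio focus and then plays through MediaPlayer (the classic pre-API-26 focus call; R.raw.alarm is a placeholder resource, and error handling is omitted):

```java
import android.content.Context;
import android.media.AudioManager;
import android.media.MediaPlayer;

// Rough sketch of an app-side audio output request on Android.
public class AlarmPlayer {
    public void playAlarm(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        int result = am.requestAudioFocus(
                focusChange -> { /* pause or duck when focus is lost */ },
                AudioManager.STREAM_ALARM,
                AudioManager.AUDIOFOCUS_GAIN);
        if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
            // The operating system receives the request and routes the audio.
            MediaPlayer player = MediaPlayer.create(context, R.raw.alarm);
            player.start();
        }
    }
}
```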
  • S120 The electronic device 100 determines that the number M of candidate applications is greater than the threshold N.
  • In some embodiments, the operating system stores a list of candidate applications. When the operating system receives a new request, it determines whether the request comes from a new application (that is, an application that is not on the application list). If so, the operating system adds 1 to the current number M and adds the application identifier of the new application to the application list; if not, the operating system keeps the current number M and the application list unchanged.
  • When the operating system determines that the audio output of a certain candidate application has ended, the operating system decrements the number of applications M by 1 and deletes the application identifier of that candidate application from the application list.
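  • A minimal sketch of this bookkeeping (step S120) might look as follows; the string appId stands in for whatever application identifier the request carries:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of step S120: track the candidate application list and the count M.
public class CandidateList {
    private final Set<String> candidates = new LinkedHashSet<>();

    /** Returns the candidate count M after handling a new request. */
    public synchronized int onAudioRequest(String appId) {
        candidates.add(appId);  // no-op if the app is already on the list
        return candidates.size();
    }

    /** Returns the candidate count M after an app's audio output ends. */
    public synchronized int onAudioEnded(String appId) {
        candidates.remove(appId);
        return candidates.size();
    }

    public synchronized boolean exceedsThreshold(int n) {
        return candidates.size() > n;
    }
}
```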
  • S130 The electronic device 100 selects N target applications from the M candidate applications.
  • For the method for the operating system to select N target applications from the M candidate applications, reference may be made to the descriptions of Example 1 (steps S11-S14), Example 2 (steps S21-S23), and Example 3 above, which are not repeated here.
  • Here, the target applications determined in the conference scenario in step S14 are used as an example; that is, in this embodiment, the target applications are ZOOM and the alarm clock.
  • S140 The electronic device 100 updates the focus queue according to the selection result of the target application.
  • Referring to FIG. 7c, after determining ZOOM and the alarm clock as the target applications in step S130, the electronic device 100 keeps the first focus assigned to ZOOM and assigns the second focus to the alarm clock (before time T3, the second focus was assigned to Kuwo Music). In addition, the operating system places Kuwo Music after the alarm clock in the second focus queue.
  • S150 The electronic device 100 outputs the audio of the N target applications. Referring to FIG. 9, the process of outputting audio by the electronic device 100 specifically includes:
  • S151 The electronic device 100 determines the volume of each target application according to the volume control information of each target application.
  • the electronic device 100 includes a plurality of volume control information, and each audio application corresponds to different volume control information.
  • the user may set the volume control information of the audio application through the method shown in FIG. 10a and FIG. 10b. That is, in this embodiment, the volume control information of each audio application may be determined based on user input.
  • Figure 10a shows one way in which the user sets volume control information.
  • the interface 101 includes a volume control bar corresponding to each audio application, and each position point of the volume control bar can be mapped to a volume adjustment coefficient (as volume control information).
  • the volume control bar also includes a volume control ball, and the user can set the volume corresponding to the audio application by dragging the volume control ball.
  • ZOOM corresponds to the volume control bar 103 .
  • the volume adjustment coefficient of ZOOM can be set to 0.4.
  • Figure 10b shows another way for the user to set the volume control information.
  • the user can adjust the volume of the target application more quickly.
  • the electronic device 100 displays the interface 104 as shown in the figure.
  • the predetermined manner can be understood as a shortcut for calling up the interface 104 .
  • the predetermined manner is, for example, pressing a predetermined button (for example, a volume key), touching the screen with a predetermined gesture (for example, a four-finger swipe gesture), and shaking the electronic device 100 in a predetermined direction (a direction perpendicular to the plane where the electronic device 100 is located) etc., which are not limited in this application.
  • the interface 104 includes a volume control bar corresponding to the target application, and the user can adjust the volume of each target application in real time by dragging the volume control ball on the volume control bar. In this way, when the electronic device 100 is outputting the audio of the target application, the user can quickly adjust the volume of each target application in the manner shown in FIG. 10b.
  • In this step, the operating system determines the volume of each target application according to its volume control information and outputs the audio of the target application at that volume. Still taking ZOOM shown in FIG. 10a as an example, the audio data (for example, decoded PCM audio data) requested to be output by ZOOM is denoted as Data_1. When the volume control ball of ZOOM is dragged to the position shown in FIG. 10a, the operating system takes 0.4 × Data_1 as the output audio of ZOOM.
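  • For decoded 16-bit PCM data, applying a volume adjustment coefficient such as 0.4 amounts to a per-sample multiplication, as in the following sketch (the sample format and the clamping against overflow are assumptions for illustration):

```java
// Sketch of step S151: scale decoded 16-bit PCM samples by the volume
// adjustment coefficient mapped from the app's volume control bar.
public class VolumeScaler {
    static short[] applyVolume(short[] pcm, float coefficient) {
        short[] out = new short[pcm.length];
        for (int i = 0; i < pcm.length; i++) {
            int scaled = Math.round(pcm[i] * coefficient);
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
        }
        return out;
    }
}
```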
  • each audio application corresponds to an independent volume control information, so that the volume of each audio application can be controlled independently.
  • the present application is not limited to this.
  • multiple audio applications may share the same audio control information.
  • For example, the electronic device 100 categorizes the audio applications into multiple application groups according to the application tags of each application (e.g., Dragonfly FM and Seven Cat Novels are categorized into an entertainment application group according to the "entertainment" tag), and each application group corresponds to one volume control bar. In this way, the user can adjust the volume of all applications in an application group by operating a single volume control bar.
  • S152 The electronic device 100 mixes the audio of the N target applications. Specifically, the operating system superimposes the output audio data of the N target applications to generate the audio data Data_Final finally output by the electronic device 100.
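  • Superimposing the scaled streams can be sketched as a per-sample sum with clamping, again assuming 16-bit PCM:

```java
// Sketch of step S152: superimpose the volume-scaled streams of the N target
// applications into Data_Final, clamping to avoid 16-bit overflow.
public class Mixer {
    static short[] mix(short[][] streams, int frameLength) {
        short[] dataFinal = new short[frameLength];
        for (int i = 0; i < frameLength; i++) {
            int sum = 0;
            for (short[] stream : streams) {
                if (i < stream.length) {
                    sum += stream[i];
                }
            }
            dataFinal[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return dataFinal;
    }
}
```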
  • S153 The electronic device 100 plays the mixed audio.
  • the operating system sends the audio data Data_Final to an audio output device (eg, a speaker) of the electronic device 100, so as to play the audio of the two target applications through the audio output device.
  • this embodiment can limit the number of applications that output audio (ie, target applications) to N.
  • By limiting the number of target applications, this embodiment alleviates the mutual interference of the audio of multiple applications in the prior art; in addition, this embodiment allows multiple applications to output audio at the same time (that is, N ≥ 2), so it can satisfy the user's need to listen to the audio of multiple applications simultaneously.
  • the volume control information of each application can be set independently, so that the user can adjust the volume of each application as required, so as to improve user experience. For example, when the user does not want to hear the voice in ZOOM, but it is inconvenient to exit the ZOOM application, the user can turn up the volume of Kuwo Music and turn down the volume of ZOOM. In this way, the sound of Kuwo Music can overshadow the sound of ZOOM to meet user needs.
  • In some embodiments, the electronic device 100 treats the system phone application as a special case among the audio applications and does not include it in the candidate applications (i.e., the candidate applications include only applications other than the system phone application).
  • In this case, the system phone application is not limited by the threshold N, and the electronic device 100 can output the audio of the system phone application and the N target applications at the same time. Equivalently, when the user answers a call through the system phone application, the electronic device 100 can output the audio of N+1 applications.
  • This embodiment improves step S150 of the first embodiment.
  • In the first embodiment, the audio of the N target applications is played through the same audio playback device (i.e., the electronic device 100 itself); in this embodiment, the audio of the N target applications is played through different audio playback devices.
  • FIG. 11 shows an exemplary application scenario of this embodiment.
  • As shown, the electronic device 100 (specifically, a mobile phone) communicates with the laptop 120 (device name "Laptop") and the speaker 130 (device name "AI speaker") through the gateway 110, and communicates with the Bluetooth headset 140 (device name "FreeBuds") through Bluetooth.
  • In this embodiment, N = 3.
  • the three target applications that are outputting audio on the electronic device 100 are ZOOM, Kuwo Music and Alarm Clock.
  • Among them, the audio of ZOOM is played through the laptop 120, the audio of Kuwo Music is played through the speaker 130, and the audio of the alarm clock is played through the electronic device 100.
  • the electronic device 100 (specifically, a mobile phone), a notebook computer 120 , a speaker 130 and a Bluetooth headset 140 are used as examples of audio playback devices, but the present application is not limited thereto.
  • the audio playback device may be a large screen, tablet, car audio, smart watch and other devices, as long as it can play audio.
  • the communication mode between the electronic device 100 and other audio playback devices may be WiFi, Bluetooth, wired communication, or the like.
  • the electronic device 100 includes a device information list, and the device information list is used to record device information of other audio playback devices (herein referred to as “standby devices”) other than the electronic device 100 .
  • the backup device is, for example, an audio playback device that has established a communication connection with the electronic device 100 .
  • When an audio playback device establishes a communication connection with the electronic device 100, the electronic device 100 adds the device information of that audio playback device to the device information list.
  • the device information of the backup device may include device name, device type, device number (assigned by the electronic device 100 ), and the like.
  • In some embodiments, the device information list also includes the status information of each standby device. For example, when the communication state between a standby device and the electronic device 100 changes from disconnected to connected, the electronic device 100 updates the device's status information to "online"; when the communication state between the standby device and the electronic device 100 changes from connected to disconnected, the electronic device 100 updates the device's status information to "offline". It can be understood that when the status information of a device is "online", the device is an available device, and the electronic device 100 can play the audio of a target application through it.
  • Table 2 shows a list of device information corresponding to the scenario shown in FIG. 11 .
| Device number | Device name | Device type | Status information |
| --- | --- | --- | --- |
| 001 | FreeBuds | Bluetooth earphone | online |
| 002 | AI speaker | speaker | online |
| 003 | Car speaker | car audio | offline |
| 004 | TV | big screen | offline |
| 005 | Laptop | laptop | online |
| 006 | Watch | smart watch | offline |
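  • Maintaining such a device information list can be sketched as follows; the status values mirror the "online"/"offline" states in Table 2, and the class shape is an assumption for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the Table 2 style device information list: the phone records each
// standby device and flips its status as connections come and go.
public class DeviceRegistry {
    enum Status { ONLINE, OFFLINE }

    private final Map<String, Status> devices = new LinkedHashMap<>();

    public void onConnected(String deviceName) {
        devices.put(deviceName, Status.ONLINE);
    }

    public void onDisconnected(String deviceName) {
        devices.put(deviceName, Status.OFFLINE);
    }

    /** A target app's audio may be routed to a device only while it is online. */
    public boolean isAvailable(String deviceName) {
        return devices.get(deviceName) == Status.ONLINE;
    }
}
```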
  • The maintenance of the device information list in Table 2 can be understood as the data preparation process of this embodiment. The following describes the process of outputting audio by the electronic device 100 in this embodiment with reference to the scenario shown in FIG. 11.
  • the process for the electronic device 100 to output the audios of the three target applications includes the following steps:
  • S210 The electronic device 100 determines the volume of each target application according to the volume control information of each target application.
  • This step is substantially the same as step S151 in the first embodiment, so reference may be made to the description of step S151, and details are not repeated here.
  • Next, the electronic device 100 determines the audio playback device corresponding to each target application (i.e., the device for playing the audio of that target application).
  • Example 1 The electronic device 100 determines the audio playback device corresponding to the target application according to preset device priority information.
  • the device priority information in the electronic device 100 is an “application-device” relationship table stored in the electronic device 100 .
  • Table 3 gives an example of the "application-device” relationship table.
  • By querying the "application-device" relationship table, the electronic device 100 can determine the priority playback device of each target application (i.e., the audio playback device corresponding to the target application). In Table 3, the audio playback device located in the same row as an audio application is the priority playback device of that audio application; for example, "AI speaker" is the priority playback device of Kuwo Music.
| Audio application | Priority playback device |
| --- | --- |
| Alarm clock | native (the electronic device 100 itself) |
| Kuwo Music | AI speaker |
| Baidu Maps | Car speaker |
| Browser | TV |
| ZOOM | Laptop |
| KEEP | Watch |
  • Figure 12b illustrates an exemplary method of setting device priority information.
  • the application audio management interface 101 of the electronic device 100 includes several audio applications, and a drop-down box is provided on the right side of each audio application.
  • the drop-down list shows the device name of each backup device (eg, the backup devices in Table 2), and
  • the user can set the device priority information by selecting the device in the drop-down list.
  • the user sets "AI speaker (ie, speaker 130)" as the priority playback device of Kuwo Music through the interface shown in Figure 12b.
  • When the speaker 130 is online, the electronic device 100 determines the speaker 130 as the audio playback device corresponding to the target application and plays the audio of Kuwo Music through the speaker 130; if the speaker 130 is offline, the electronic device 100 plays the audio of Kuwo Music through its own audio playback device (eg, a speaker).
  • In other embodiments, each audio application may correspond to multiple priority playback devices. For example, the Kuwo application corresponds to two priority playback devices, which are a first priority playback device (eg, the speaker 130) and a second priority playback device (eg, a smart watch). When the electronic device 100 outputs the audio of the Kuwo application, if the first priority playback device is online, the audio of Kuwo Music is played through the first priority playback device; if the first priority playback device is offline, the audio of Kuwo Music is played through the second priority playback device; if both the first priority playback device and the second priority playback device are offline, the audio of the Kuwo application is played through the electronic device 100 itself.
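  • A minimal sketch of this priority-with-fallback rule follows: each priority playback device is tried in order, with a fallback to the electronic device itself ("self") when all of them are offline. The PlaybackRouter name and the "self" sentinel are assumptions of this example.

```java
// Illustrative sketch of the priority-with-fallback rule of Example 1.
import java.util.List;
import java.util.Set;

class PlaybackRouter {
    static String selectDevice(List<String> priorityDevices, Set<String> onlineDevices) {
        for (String name : priorityDevices) {
            if (onlineDevices.contains(name)) {
                return name; // first priority device that is currently online
            }
        }
        return "self"; // all priority devices offline: play on the device itself
    }
}

// Example: selectDevice(List.of("AI speaker", "Watch"), Set.of("FreeBuds"))
// returns "self", because both priority devices are offline.
```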
  • Example 2 The electronic device 100 determines the audio playback device corresponding to the target application according to the number of times the target application is played on the audio playback device.
  • the electronic device 100 stores the number of times the target application has been played on each audio playback device. Still taking Kuwo Music as an example, according to the records stored in the electronic device 100, Kuwo Music has been played 30 times on the "Watch", 24 times on the "AI speaker" (that is, the speaker 130), and 10 times on "FreeBuds"; there is no record of playback on other devices.
  • the operating system selects, from the currently online devices according to Table 2, the device that has played Kuwo Music the most times as the audio playback device corresponding to Kuwo Music.
  • Among the currently online devices, the device that has played Kuwo Music the most times is the speaker 130 (the "Watch" has a higher play count but is offline). Therefore, the electronic device 100 plays the audio of Kuwo Music through the speaker 130.
  • the number of times the target application is played on the audio playback device can reflect user preferences.
  • the audio playback device corresponding to the target application is determined according to the number of times the target application is played on the audio playback device, so it can be more in line with user preferences and improve user experience.
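  • A sketch of this play-count rule is given below, assuming play counts are kept per device name; the PlayCountRouter name is an assumption of this example.

```java
// Illustrative sketch of Example 2: among currently online devices, pick the
// one with the highest play count for the target application.
import java.util.Map;
import java.util.Set;

class PlayCountRouter {
    static String selectDevice(Map<String, Integer> playCounts, Set<String> online) {
        String best = "self"; // fall back to the electronic device itself
        int bestCount = -1;
        for (Map.Entry<String, Integer> e : playCounts.entrySet()) {
            if (online.contains(e.getKey()) && e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }
}

// With the counts from the text ({"Watch": 30, "AI speaker": 24, "FreeBuds": 10})
// and "Watch" offline, the selected device is "AI speaker" (the speaker 130).
```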
  • Example 3 The electronic device 100 determines the audio playback device corresponding to the target application according to the real-time input of the user.
  • FIG. 12c shows how the user specifies the audio playback device in real time.
  • the interface 105 shown in Fig. 12c is a further improvement of the interface 104 shown in Fig. 10b.
  • Fig. 12c adds a device selection option on the basis of Fig. 10b. That is to say, in addition to the volume control bar corresponding to the target application, FIG. 12c also includes an audio playback device selection list corresponding to the target application.
  • the device selection list includes the currently online audio playback devices, for example, the online devices determined according to Table 2; the device selection list may also include devices newly discovered by the electronic device 100, such as the device "Glasses" in FIG. 12c, which helps the user choose an appropriate audio playback device in a new environment.
  • the user can select the audio playback device of the target application through the operation interface 105. For example, after the user clicks "AI speaker", the electronic device 100 determines "AI speaker” as the audio playback device of Kuwo Music.
  • the user can call up the interface shown in FIG. 12c through the same shortcut as in FIG. 10b, for example, pressing the volume key, touching the screen with a specific gesture, and so on. That is, this example can provide a shortcut for the user to select an audio output device.
  • S230 The electronic device 100 sends the audio of each target application to the corresponding audio playback device, so as to play the audios of the three target applications through multiple audio playback devices.
  • In the scenario shown in FIG. 11, the electronic device 100 determines the speaker 130 as the audio playback device of Kuwo Music, the notebook computer 120 as the audio playback device of ZOOM, and the electronic device 100 itself as the audio playback device of the alarm clock.
  • Accordingly, the electronic device 100 sends the audio of Kuwo Music to the speaker 130, so as to play the audio of Kuwo Music through the speaker 130; sends the audio of ZOOM to the notebook computer 120, so as to play the voice of ZOOM through the notebook computer 120; and sends the audio of the alarm clock to its own speaker, so as to play the alarm sound through that speaker.
  • In this way, the audio of each target application is output through a different audio playback device, which can not only further avoid mutual interference between the audios of different applications, but also play the audio of each target application through the device desired by the user, improving the user experience.
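  • The dispatch in step S230 might look like the following sketch, where the app-to-device mapping comes from step S220; the AudioDispatcher name and the sendTo transport stub are assumptions of this example.

```java
// Illustrative dispatch for step S230: each target application's audio frame
// is sent to the playback device chosen in S220.
import java.util.Map;

class AudioDispatcher {
    void dispatch(Map<String, String> appToDevice, Map<String, short[]> appFrames) {
        for (Map.Entry<String, String> e : appToDevice.entrySet()) {
            short[] frame = appFrames.get(e.getKey());
            if (frame != null) {
                sendTo(e.getValue(), frame);
            }
        }
    }

    void sendTo(String device, short[] pcmFrame) {
        // Assumption: "self" plays through the local speaker; any other name
        // is transported to the external device (e.g., over Bluetooth/Wi-Fi).
    }
}
```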
  • the device for playing audio includes the electronic device 100 itself, but the application is not limited thereto.
  • the device for playing audio may not include the electronic device 100 itself, but only include a plurality of external devices (the external devices are other audio playing devices other than the electronic device 100 ).
  • the audios of the three target applications are respectively played through three audio playback devices, and each audio playback device plays the audio of one target application.
  • each audio playback device may play audios of multiple (eg, 2, 3) target applications.
  • For example, the car audio can play the audio of Baidu Maps, Dragonfly FM, and phone calls at the same time.
  • When an external device plays the audio of multiple applications, the electronic device 100 can complete the mixing of the multiple applications' audio locally and send the mixed audio to the external device; alternatively, it can send the audio of each application to the external device independently, and the external device completes the mixing of the multiple applications' audio.
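  • A minimal sketch of the local-mixing case, assuming both streams are 16-bit PCM at the same sample rate; resampling, buffering, and transport are deliberately omitted, and the Mixer name is an assumption.

```java
// Illustrative sketch: superimpose two applications' 16-bit PCM frames with
// clipping before sending the mixed frame to one external device.
class Mixer {
    static short[] mix(short[] a, short[] b) {
        int n = Math.min(a.length, b.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            int s = a[i] + b[i]; // superimpose the two streams
            if (s > Short.MAX_VALUE) s = Short.MAX_VALUE; // clip
            if (s < Short.MIN_VALUE) s = Short.MIN_VALUE;
            out[i] = (short) s;
        }
        return out;
    }
}
```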
  • FIG. 13 shows an exemplary application scenario of this embodiment.
  • the electronic device 100 is outputting the audio of Kuwo Music and ZOOM.
  • the electronic device 100 also runs an audio recording application (in this embodiment, a voice recorder application).
  • the audio recorder can record the audio that the electronic device 100 is outputting.
  • In the prior art, the voice recorder records all the audio that the electronic device 100 is outputting, that is, the superimposed audio of all the applications.
  • the user only wants to record the audio of a specific application, for example, only wants to record the audio of ZOOM.
  • the existing technology cannot meet this requirement of users.
  • this embodiment provides a method for recording an audio file (as a media file): when the electronic device 100 outputs the audio of multiple audio applications (referred to as "candidate applications"), the electronic device 100 records only the audio of the selected applications (referred to as "target applications") without recording the audio of the other candidate applications, so as to meet the diverse needs of users.
  • the audio recording method of this embodiment includes the following steps:
  • S310 The electronic device 100 outputs audios of multiple candidate applications.
  • In FIG. 13, the electronic device 100 outputs the audios of two candidate applications (specifically, Kuwo Music and ZOOM). In other embodiments, the electronic device 100 may output the audios of another number (eg, 4) of candidate applications.
  • The candidate applications can also be applications other than Kuwo Music and ZOOM, such as iQIYI or Baidu Maps, as long as they can output audio.
  • the electronic device 100 outputs the audios of the multiple candidate applications, which may include: the electronic device 100 plays the audios of the candidate applications through its own audio playback device (eg, a speaker), and/or the electronic device 100 plays the audios of the candidate applications through other audio playback devices (eg, a Bluetooth headset or a smart watch).
  • S320 The electronic device 100 receives a first input, where the first input is used to select one or more target applications from multiple candidate applications.
  • the first input is a screen input from a user.
  • Figure 15 shows an example of screen input. Specifically, after the user clicks the application icon of the recorder, the user can enter the interface 106 of the recorder application shown in FIG. 15 .
  • the interface 106 includes a "Start Recording" button, and checkboxes corresponding to each candidate application. The user can select the target application for which they want to record audio through the checkbox.
  • In FIG. 15(a), after the user checks the checkbox of ZOOM, the electronic device 100 determines ZOOM as the target application. It can be understood that, in the example given in FIG. 15(a), the number of target applications is less than the number of candidate applications.
  • In FIG. 15(b), the electronic device 100 determines both Kuwo Music and ZOOM as target applications.
  • In other embodiments, the number of target applications may be another number, for example, 4; for another example, in other embodiments, the user selects the target application by means of a voice command (that is, the first input is the user's voice input).
  • S330 The electronic device 100 records the audio of the one or more target applications to generate an audio file A (as an example of a first media file).
  • In FIG. 15(a), the user selects ZOOM as the target application. After recording starts, the recorder application starts to obtain the audio stream data of ZOOM (that is, starts to record the audio of ZOOM), and forms audio file A from ZOOM's audio stream data. That is to say, in the example given in FIG. 15(a), the audio data in audio file A is the audio data of ZOOM (denoted as Record_Data_1).
  • In FIG. 15(b), the user selects both Kuwo Music and ZOOM as target applications. After recording starts, the recorder application obtains the audio stream data in which the audio of ZOOM and Kuwo Music are mixed (that is, records the audio of Kuwo Music and ZOOM), and forms audio file A from the mixed audio stream data. That is to say, in the example given in FIG. 15(b), the audio data in audio file A is the superimposed audio data of Kuwo Music (whose audio data is denoted as Record_Data_2) and ZOOM (whose audio data is denoted as Record_Data_1), specifically Record_Data_1 + Record_Data_2.
  • It can be seen that when the electronic device 100 outputs the audios of multiple candidate applications at the same time, the electronic device 100 may record only the audio of the target application, so as to meet the diverse needs of the user. For example, referring to FIG. 15(a), when the electronic device 100 is outputting the audio of Kuwo Music and ZOOM, if the user wishes to record only the audio of ZOOM but not the audio of Kuwo Music, the user can select the target application through the interface shown in FIG. 15(a); in this way, after the electronic device 100 completes the recording, audio file A includes only the audio of ZOOM and does not include the audio of Kuwo Music.
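  • A minimal sketch of this selective recording follows, assuming the frames of each playing application can be observed individually; SelectiveRecorder and FileWriterLike are assumed names. When several target applications are selected, their frames would be mixed (as with Record_Data_1 + Record_Data_2 above) before being written.

```java
// Illustrative sketch of step S330: only frames from the selected target
// applications reach the file writer; other candidate applications keep
// playing but are not recorded.
import java.util.Set;

interface FileWriterLike {
    void write(short[] pcmFrame);
}

class SelectiveRecorder {
    private final Set<String> targetApps; // e.g. {"ZOOM"} or {"ZOOM", "Kuwo Music"}

    SelectiveRecorder(Set<String> targetApps) {
        this.targetApps = targetApps;
    }

    // Called once per frame for every application the device is outputting.
    void onAppFrame(String app, short[] pcmFrame, FileWriterLike audioFileA) {
        if (targetApps.contains(app)) {
            audioFileA.write(pcmFrame); // only target applications are recorded
        }
    }
}
```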
  • the electronic device 100 also outputs the video data of the first video application while outputting the audio data of the multiple candidate applications.
  • the first video application may be one of the multiple candidate applications, or may be another application other than the multiple candidate applications.
  • For example, the electronic device 100 is running Kuwo Music and ZOOM, where Kuwo Music and ZOOM are examples of the multiple candidate applications (applications that are outputting audio on the electronic device 100), and at the same time ZOOM is an example of the first video application (ZOOM's video data is real-time call image data). That is, in this example, the first video application is one of the multiple candidate applications.
  • the electronic device 100 also runs a video recorder application, and FIG. 16 shows the main interface 107 of the video recorder application.
  • the main interface 107 of the video recorder application includes two video data source options (referred to as "video options"), which are the screen image and the video application currently running on the electronic device 100 (specifically, "ZOOM").
  • the interface 107 also includes radio boxes corresponding to the two video options, and the user can select one of the video options as the video data source of the video recorder application through the radio boxes.
  • the video data source of the video recorder application is ZOOM.
  • the main interface 107 of the video recorder application also includes two audio data source options (“audio options” for short), which are two audio applications currently running on the electronic device 100 (ie candidate applications, specifically Kuwo Music and ZOOM).
  • the interface 107 also includes checkboxes corresponding to the two audio options, through which the user can select one or more candidate applications as the audio data source of the video recorder application (the selected candidate applications are the target applications).
  • the audio data source of the video recorder application is Kuwo Music. That is, in FIG. 16 , the target application is Kuwo Music.
  • After recording starts, the video recorder application starts to obtain the video stream data of ZOOM (that is, starts to record the video of ZOOM) and simultaneously obtains the audio stream data of Kuwo Music (that is, starts to record the audio of Kuwo Music);
  • the video stream data of ZOOM and the audio stream data of Kuwo Music are synthesized into a video file B (as an example of the first media file). That is to say, in the example given in FIG. 16, the video data in video file B is the video data of ZOOM, and the audio data in video file B is the audio data of Kuwo Music.
  • It can be understood that if the user instead selects both Kuwo Music and ZOOM as audio data sources, the audio data in the video file B recorded by the video recorder application is the superimposed audio data of Kuwo Music and ZOOM.
  • It can be seen that, when recording a video, the user can select the source of the video data and the source of the audio data, so that the diverse needs of the user can be met. For example, through the embodiment given in FIG. 16, the user can use the audio of Kuwo Music as the background audio of the ZOOM call image, thereby adding interest.
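  • As one possible realization of this step on an Android-style platform, the sketch below uses Android's MediaMuxer to write the selected video track and the selected audio track into one MP4 file (video file B); how the encoded buffers are obtained from each application is outside this sketch, and VideoFileBWriter is an assumed name.

```java
// Hedged sketch: mux the selected video track (e.g., ZOOM's encoded video)
// with the selected audio track (e.g., Kuwo Music's encoded audio).
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;
import java.nio.ByteBuffer;

class VideoFileBWriter {
    private final MediaMuxer muxer;
    private final int videoTrack;
    private final int audioTrack;

    VideoFileBWriter(String path, MediaFormat videoFormat, MediaFormat audioFormat)
            throws IOException {
        muxer = new MediaMuxer(path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        videoTrack = muxer.addTrack(videoFormat); // e.g., ZOOM's video format
        audioTrack = muxer.addTrack(audioFormat); // e.g., Kuwo Music's audio format
        muxer.start();
    }

    void writeVideoSample(ByteBuffer buffer, MediaCodec.BufferInfo info) {
        muxer.writeSampleData(videoTrack, buffer, info);
    }

    void writeAudioSample(ByteBuffer buffer, MediaCodec.BufferInfo info) {
        muxer.writeSampleData(audioTrack, buffer, info);
    }

    void finish() {
        muxer.stop();
        muxer.release();
    }
}
```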
  • the scenario shown in FIG. 16 is only an exemplary application scenario of the technical solution of the present application, and those skilled in the art can make other modifications.
  • For example, the audio applications may be applications other than Kuwo Music and ZOOM, the first video application may be an application other than the audio applications, and the like.
  • FIG. 17 shows a schematic structural diagram of the electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) connector 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, and/or a subscriber identity module (SIM) interface.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the USB connector 130 is a connector conforming to the USB standard specification, which can be used to connect the electronic device 100 and peripheral devices, and specifically can be a standard USB connector (such as a Type C connector), a Mini USB connector, a Micro USB connector, and the like.
  • the USB connector 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the connector can also be used to connect other electronic devices, such as AR devices, etc.
  • the processor 110 may support a Universal Serial Bus, and the standard specifications of the Universal Serial Bus may be USB1.x, USB2.0, USB3.x, and USB4.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB connector 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • When taking a photo, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, and the optical signal is converted into an electrical signal; the camera photosensitive element transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • the camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • the internal memory 121 may be used to store computer-executable program code, and the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the instructions stored in the internal memory 121 may include instructions that, when executed by at least one of the processors, cause the electronic device 100 to implement the audio output method and/or the media file recording method provided by the embodiments of this application.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • The speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • When answering a call or listening to a voice message, the receiver 170B can be placed close to the human ear to listen to the voice.
  • the microphone 170C, also called a "mic" or a "mike", is used to convert sound signals into electrical signals.
  • When making a call or sending a voice message, the user can input a sound signal into the microphone 170C by speaking close to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D may be the USB connector 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • a touch operation acts on the display screen 194
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
  • In some embodiments, the angular velocity of the electronic device 100 about three axes (ie, the x, y, and z axes) may be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • The distance sensor 180F is used to measure distance. The electronic device 100 can measure distance through infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
  • the electronic device 100 when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 caused by the low temperature.
  • the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • The touch sensor 180K is also called a "touch device".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 180M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
  • the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • Touch operations in different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which can be used to indicate the charging status and battery level changes, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 employs an eSIM, ie: an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • Electronic device 400 may include one or more processors 401 coupled to controller hub 403 .
  • In one embodiment, the controller hub 403 communicates with the processor 401 via a multidrop bus such as a front side bus (FSB), a point-to-point interface such as a QuickPath Interconnect (QPI), or a similar connection 406.
  • Processor 401 executes instructions that control general types of data processing operations.
  • the controller hub 403 includes, but is not limited to, a Graphics & Memory Controller Hub (GMCH) (not shown) and an Input/Output Hub (IOH) (which may be on a separate chip) (not shown), where the GMCH includes the memory and graphics controller and is coupled to the IOH.
  • Electronic device 400 may also include a coprocessor 402 and memory 404 coupled to controller hub 403 .
  • In some embodiments, one or both of the memory and the GMCH may be integrated within the processor (as described in this application); in this case, the memory 404 and the coprocessor 402 are coupled directly to the processor 401, and the controller hub 403 and the IOH are in a single chip.
  • the memory 404 may be, for example, Dynamic Random Access Memory (DRAM), Phase Change Memory (PCM), or a combination of the two.
  • Memory 404 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions.
  • the computer-readable storage medium stores instructions, in particular temporary and permanent copies of the instructions.
  • the instructions stored in the memory 404 may include instructions that, when executed by at least one of the processors, cause the electronic device to implement the methods shown in Figures 4a, 5b, 8, 9, 12a, 14.
  • the coprocessor 402 is a special-purpose processor, such as a high-throughput many integrated core (MIC) processor, a network or communications processor, a compression engine, a graphics processor, a general-purpose computing on graphics processing units (GPGPU) processor, an embedded processor, or the like.
  • Optional properties of the coprocessor 402 are shown in FIG. 18 with dashed lines.
  • the electronic device 400 may further include a network interface controller (NIC) 406.
  • the network interface 406 may include a transceiver for providing a radio interface for the electronic device 400 to communicate with any other suitable devices (eg, front-end modules, antennas, etc.).
  • network interface 406 may be integrated with other components of electronic device 400 .
  • the network interface 406 can implement the functions of the communication unit in the above-mentioned embodiments.
  • the electronic device 400 may further include an input/output (I/O) device 405 .
  • The I/O device 405 may include: a user interface designed to enable a user to interact with the electronic device 400; a peripheral component interface designed to enable peripheral components to also interact with the electronic device 400; and/or sensors designed to determine environmental conditions and/or location information associated with the electronic device 400.
  • Figure 18 is exemplary only. That is, although FIG. 18 shows that the electronic device 400 includes multiple devices such as the processor 401, the controller hub 403, and the memory 404, in practical applications, a device using the methods of this application may include only some of the devices of the electronic device 400, for example, only the processor 401 and the network interface 406. The properties of optional devices in FIG. 18 are shown in dashed lines.
  • The SoC 500 includes: an interconnect unit 550 coupled to a processor 510; a system agent unit 580; a bus controller unit 590; an integrated memory controller unit 540; a set of one or more coprocessors 520, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 530; and a direct memory access (DMA) unit 560.
  • The coprocessor 520 includes a special-purpose processor, such as a network or communications processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.
  • Static random access memory unit 530 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions.
  • the computer-readable storage medium stores instructions, in particular temporary and permanent copies of the instructions.
  • the SoCs shown in FIG. 19 may be provided in electronic devices, respectively.
  • the static random access memory unit 530 stores instructions, and the instructions may include instructions that, when executed by at least one of the processors, cause the electronic device to implement the methods shown in FIG. 4a, FIG. 5b, FIG. 8, FIG. 9, FIG. 12a, and FIG. 14.
  • Program code may be applied to input instructions to perform the functions described herein and to generate output information.
  • the output information can be applied to one or more output devices in a known manner.
  • a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
  • the program code may be implemented in a high-level procedural language or an object-oriented programming language to communicate with the processing system.
  • the program code may also be implemented in assembly or machine language, if desired.
  • the mechanisms described herein are not limited to the scope of any particular programming language. In either case, the language may be a compiled language or an interpreted language.
  • One or more aspects of at least one embodiment may be implemented by representative instructions stored on a computer-readable medium that represent various logic within the processor; when read by a machine, the instructions cause the machine to fabricate logic to perform the techniques described herein. Such representations, known as intelligent property (IP) cores, may be stored on a tangible computer-readable medium and supplied to customers or production facilities to be loaded into the fabrication machines that actually make the logic or processor.
  • an instruction converter may be used to convert instructions from a source instruction set to a target instruction set.
  • an instruction translator may transform (eg, using static binary translation or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core.
  • Instruction translators can be implemented in software, hardware, firmware, or a combination thereof.
  • the instruction translator may be on-processor, off-processor, or partially on-processor and partially off-processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • Otolaryngology (AREA)
  • Acoustics & Sound (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An audio output method, a media file recording method, an electronic device, and a computer-readable storage medium. The audio output method is used in an electronic device and includes: receiving audio output requests from M audio applications on the electronic device; selecting N target applications from the M audio applications, and outputting audio data of the N target applications, where M is greater than N. By limiting the number of target applications, this application can alleviate the mutual interference between the audios of multiple applications in the prior art; in addition, this application allows multiple target applications (as long as the number of target applications does not exceed N) to output audio at the same time, and can therefore satisfy a user's need to listen to multiple audios simultaneously.

Description

Audio output method, media file recording method, and electronic device

This application claims priority to Chinese Patent Application No. 202110430850.5, filed with the China National Intellectual Property Administration on April 21, 2021 and entitled "Audio Output Method, Media File Recording Method, and Electronic Device", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of electronic technologies, and in particular, to an audio output method, a media file recording method, an electronic device, and a computer-readable storage medium.

Background

Various audio applications, such as music applications, navigation applications, and conferencing applications, can be installed on an electronic device (for example, a mobile phone). In some electronic devices, as long as an application generates an audio playback request, the electronic device responds to the request and plays the audio of that application. As a result, when multiple applications request audio output at the same time, the sounds of the multiple applications are mixed together and interfere with each other, degrading the user experience.

Other electronic devices adopt an audio focus mechanism. Under the audio focus mechanism, the application that most recently sent an audio output request outputs audio, so audio output is not very flexible.

Summary

Some embodiments of this application provide an audio output method, a media file recording method, an electronic device, and a computer-readable storage medium. This application is described below from multiple aspects; the implementations and beneficial effects of the following aspects may be cross-referenced.

According to a first aspect, an embodiment of this application provides an audio output method for an electronic device. The method includes: receiving audio output requests from M audio applications on the electronic device (for example, the operating system of the electronic device receives the audio output requests from the M audio applications on the electronic device); selecting N target applications from the M audio applications (for example, the operating system of the electronic device selects the N target applications from the M audio applications), and outputting audio data of the N target applications, where M is greater than N.

In effect, the embodiments of this application limit the maximum number of applications that output audio at the same time (referred to herein as "target applications"). When the number M of applications requesting audio output (referred to herein as "candidate applications") is greater than the threshold N, the electronic device screens the candidate applications to limit the number of target applications to N. By limiting the number of target applications, the embodiments of this application can alleviate the mutual interference between the audios of multiple applications in the prior art.

In addition, in the audio focus mechanism of the prior art, the application that most recently sent an audio output request is determined as the target application that outputs audio; in the embodiments of this application, the N target applications can be selected from the M candidate applications according to arbitrary screening conditions (for example, screening conditions specified by the user). Therefore, compared with the audio focus mechanism of the prior art, the embodiments of this application can determine the target applications that output audio in a more flexible manner, thereby improving the user experience.

In some embodiments, N is a positive integer of 2 or more.

According to the embodiments of this application, multiple target applications (as long as the number of target applications does not exceed N, where N ≥ 2) are allowed to output audio at the same time, which can satisfy a user's need to listen to multiple audios simultaneously.

In some embodiments, selecting the N target applications from the M audio applications includes: selecting the N target applications from the M audio applications based on the current working scenario of the electronic device; or selecting the N target applications from the M audio applications based on preset application priority information; or selecting the N target applications from the M audio applications based on the user's selection operation on the M audio applications.

According to the embodiments of this application, determining the target applications according to the current working scenario of the electronic device, the application priorities, or the user's real-time designation can better match the user's needs and improve the user experience.

In some embodiments, selecting the N target applications from the M audio applications based on the current working scenario of the electronic device includes: determining the current working scenario of the electronic device; determining, according to the current working scenario, a scenario-characteristic application from the M audio applications, where the scenario-characteristic application is an application required by the current working scenario; and determining the N target applications based on the determination result of the scenario-characteristic application, where the N target applications include at least the scenario-characteristic application.

According to the embodiments of this application, the N target applications include at least the application required by the current working scenario (that is, the scenario-characteristic application), so that the audio output by the electronic device better matches the current working scenario, which helps improve the user experience.

In some embodiments, determining the N target applications based on the determination result of the scenario-characteristic application includes: determining the priority of each of the M audio applications based on the determination result of the scenario-characteristic application, where the priority of the scenario-characteristic application is higher than the priorities of the other applications among the M audio applications; and determining, according to the priority ranking of the M audio applications, the N audio applications with the highest priorities as the N target applications, so that the N target applications include at least the scenario-characteristic application.
在一些实施方式中,确定电子设备的当前工作场景,包括:根据电子设备通信连接的其他电子设备确定当前工作场景;或者,根据电子设备当前运行的应用确定当前工作场景;或者,根据电子设备上的特定传感器的测量数据确定当前工作场景,特定传感器用于测量电子设备的位移、速度和/或加速度数据;或者,根据用户的场景指定操作,确定当前工作场景。
在一些实施方式中,电子设备的当前工作场景包括车载场景、居家场景、会议场景、运动场景或高铁出行场景。
In some embodiments, the electronic device holds multiple pieces of volume control information corresponding to the N target applications, each target application corresponding to one of them. Outputting the audio data of the N target applications includes: determining a target application's volume from its volume control information, and outputting its audio data at that volume.
In some embodiments, the electronic device holds N pieces of volume control information in one-to-one correspondence with the N target applications, so each target application's volume can be controlled independently.
In some embodiments, the electronic device includes multiple pieces of volume control information corresponding to the N target applications, where each piece can be determined based on user input.
According to these embodiments, the user can adjust each application's volume as needed, improving the user experience.
In some embodiments, outputting the audio data of the N target applications includes playing it through multiple audio playback devices, which include the electronic device itself and/or devices other than the electronic device.
In some embodiments, playing through multiple audio playback devices includes: determining the playback device corresponding to each target application and playing the targets' audio data based on that determination, where determining the device includes: determining it based on preset device priority information, or based on the number of times the target application has been played on the playback devices.
In some embodiments, the M audio applications are applications other than the system phone application.
In some embodiments, N is determined by the electronic device according to the number of audio playback devices currently communicatively connected to it.
According to a second aspect, an embodiment of this application provides a media file recording method for an electronic device, including: while the electronic device outputs audio data of multiple audio applications, receiving a first input for selecting one or more target applications from them; and recording a first media file, which includes recording the audio data of the one or more target applications to generate the first media file.
According to this embodiment, while the device outputs the audio of multiple audio applications ("candidate applications"), it records only the audio of the selected applications ("target applications") and not that of the other candidates, meeting diverse user needs.
In some embodiments, the number of target applications is smaller than the number of audio applications currently outputting audio.
In some embodiments, while outputting the audio data of the multiple audio applications, the electronic device outputs video data of a first video application; recording the first media file then includes recording the audio data of the one or more target applications together with the video data of the first video application to generate the first media file.
According to this embodiment, when recording a video the user can choose both the video source and the audio source, meeting diverse user needs.
According to a third aspect, an embodiment of this application provides an electronic device, including: a memory storing instructions to be executed by one or more processors of the device; and a processor which, when executing the instructions in the memory, causes the device to perform the audio output method of any embodiment of the first aspect or the media file recording method of any embodiment of the second aspect.
According to a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to perform the audio output method of any embodiment of the first aspect or the media file recording method of any embodiment of the second aspect.
Brief Description of the Drawings
FIG. 1 shows a first exemplary application scenario of the audio output method according to an embodiment of this application;
FIG. 2 shows an exemplary structure of a focus queue under the audio focus mechanism;
FIG. 3 shows an exemplary situation in which candidate applications on an electronic device issue audio output requests according to an embodiment of this application;
FIG. 4a shows a first exemplary flowchart of a target-application selection method according to an embodiment of this application;
FIG. 4b shows an exemplary interface for designating the current working scenario according to an embodiment of this application;
FIG. 5a shows an exemplary interface for designating key applications according to an embodiment of this application;
FIG. 5b shows a second exemplary flowchart of a target-application selection method according to an embodiment of this application;
FIG. 6a shows a first exemplary interface for designating target applications according to an embodiment of this application;
FIG. 6b shows a second exemplary interface for designating target applications according to an embodiment of this application;
FIG. 7a shows a first schematic diagram of a focus queue according to an embodiment of this application;
FIG. 7b shows a second schematic diagram of a focus queue according to an embodiment of this application;
FIG. 7c shows a third schematic diagram of a focus queue according to an embodiment of this application;
FIG. 8 shows an exemplary flowchart of the audio output method according to an embodiment of this application;
FIG. 9 shows a first exemplary flowchart in which the electronic device outputs the audio of target applications according to an embodiment of this application;
FIG. 10a shows a first schematic diagram of a volume adjustment interface according to an embodiment of this application;
FIG. 10b shows a second schematic diagram of a volume adjustment interface according to an embodiment of this application;
FIG. 11 shows a second exemplary application scenario of the audio output method according to an embodiment of this application;
FIG. 12a shows a second exemplary flowchart in which the electronic device outputs the audio of target applications according to an embodiment of this application;
FIG. 12b shows a first interface for designating a preferred playback device according to an embodiment of this application;
FIG. 12c shows a second interface for designating a preferred playback device according to an embodiment of this application;
FIG. 13 shows a first exemplary application scenario of the media file recording method according to an embodiment of this application;
FIG. 14 shows an exemplary flowchart of the media file recording method according to an embodiment of this application;
FIG. 15 shows an exemplary interface for selecting the audio source of a media file according to an embodiment of this application;
FIG. 16 shows a second exemplary application scenario of the media file recording method according to an embodiment of this application;
FIG. 17 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 18 shows a block diagram of a control device according to an embodiment of this application;
FIG. 19 shows a schematic structural diagram of a system on chip (System on Chip, SoC) according to an embodiment of this application.
Detailed Description of Embodiments
Specific embodiments of this application are described in detail below with reference to the accompanying drawings.
FIG. 1 shows one application scenario of an embodiment of this application. In FIG. 1, several audio applications are installed on the electronic device 100 (here, a mobile phone), and each can output audio data of different content ("audio" for short). For example, Kuwo Music™ can output the audio of music programs, ZOOM™ can output conference speech, Dragonfly FM™ can output the audio of radio programs, and Baidu Maps™ can output navigation speech.
On some electronic devices, whenever an application issues an audio output request ("request" for short), the device responds and plays that application's audio. For example, after the user opens Kuwo Music and Baidu Maps, the device plays songs and navigation speech; if the user then answers an incoming call on the device, it simultaneously plays the call speech; and if a timer set on the device then expires, it simultaneously plays the timer's alert tone. In such a situation the audio content of many applications mixes together and interferes, producing a poor user experience.
To solve the problem of mixed playback by multiple applications, the operating systems of some electronic devices (for example, Android) provide an audio focus mechanism. Referring to FIG. 2, under this mechanism only one application, usually the one that requested last, may output audio at a time. Specifically, the operating system assigns the focus to the application that requested last and outputs its audio, while the other applications lose the focus and pause their output. After the currently outputting application (the alarm clock in the figure) releases the focus, the next application adjacent to it in the focus queue (ZOOM in the figure) acquires it. Because the mechanism stipulates that one electronic device may output only one application's audio at a time, it solves the mixed-playback problem; however, only the application that sent the last request can output audio, so output is inflexible, and when the user needs the audio content of several applications at once (for example, wants to listen to music and navigation speech simultaneously), the mechanism cannot meet that need. The sketch below illustrates how an application takes part in this single-focus scheme.
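For reference, the single-focus behavior just described can be exercised through Android's long-standing AudioManager interface. The following is a minimal sketch: the pre-API-26 form of requestAudioFocus is used for brevity, and the class name and listener body are illustrative rather than part of this application.

```java
import android.content.Context;
import android.media.AudioManager;
import android.media.MediaPlayer;

class FocusDemo {
    // Classic single-focus request: at any moment only the app holding
    // the (single) focus plays, matching the queue behavior of FIG. 2.
    static void playWithFocus(Context context, MediaPlayer player) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        AudioManager.OnAudioFocusChangeListener listener = focusChange -> {
            if (focusChange == AudioManager.AUDIOFOCUS_LOSS) {
                player.pause();   // a later requester took the focus
            } else if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
                player.start();   // the focus came back
            }
        };
        int result = am.requestAudioFocus(listener,
                AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
        if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
            player.start();       // this app is now the only one playing
        }
    }
}
```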
To this end, embodiments of this application provide an audio output method. When the electronic device 100 receives audio output requests from M audio applications, it selects N target applications from them, outputs the audio corresponding to the N targets, and stops outputting the audio of the other audio applications, where M is greater than N.
In effect, the embodiments cap the maximum number of applications that output audio at the same moment ("target applications" herein; the maximum is called the "threshold N" herein). When the number M of applications requesting audio output ("candidate applications" herein) exceeds the threshold N, the electronic device 100 filters the candidates to limit the number of targets to N. By limiting the number of targets, the embodiments mitigate the mutual interference between the audio of multiple applications in the prior art.
In addition, the embodiments can select the N targets from the M candidates according to any filtering criterion (for example, one specified by the user). Compared with the prior-art audio focus mechanism, the targets that output audio can therefore be determined more flexibly, improving the user experience.
In some embodiments of this application, N is a positive integer greater than or equal to 2; that is, multiple targets (any number up to N) are allowed to output audio simultaneously, satisfying a user's need to listen to several audio streams at once.
The embodiments do not limit the type of the electronic device, which may be a mobile phone, a laptop, a tablet, a large-screen device, a wearable device (for example, a watch, smart glasses, or a helmet), a desktop computer, an augmented reality (AR) or virtual reality (VR) device, a personal digital assistant (PDA), and so on.
Nor do the embodiments limit the audio applications. An audio application may be a system application such as a phone application, a timer application, a smart voice application, or a browser (for example, Safari™), or a third-party application such as a music application (for example, Kuwo Music™), a video application (for example, iQIYI™), a conferencing application (for example, ZOOM™), a game application (for example, PUBG™ or Tank Battle™), a payment application (for example, Alipay™), a short-video application (for example, TikTok™), a social application (for example, Sina Weibo™), a navigation application (for example, Baidu Maps™), an e-book application (for example, Qimao Novels™), or a radio application (for example, Dragonfly FM™). Any application able to output audio qualifies; the examples are not enumerated exhaustively.
The following gives examples in which the electronic device 100 selects N target applications from M candidate applications, with M = 3 and N = 2. FIG. 3 shows an exemplary situation in which three candidate applications on the electronic device issue audio output requests. Referring to FIG. 3, the three candidates are ZOOM, Kuwo Music, and Alarm Clock; ZOOM issued its request earliest and Alarm Clock latest.
(1) Example one: the electronic device 100 selects the two target applications from the three candidates based on the current working scenario.
Referring to FIG. 4a, this example includes the following steps:
S11: The electronic device 100 determines its current working scenario.
In some embodiments, the electronic device 100 determines the current working scenario from the device information of externally connected communication devices (other electronic devices that have established a communication connection with it). For example, when connected to a car audio system it determines the scenario to be the in-vehicle scenario, and when connected to a home gateway it determines the scenario to be the home scenario.
In some embodiments, the electronic device 100 determines the current working scenario from the applications running on it. For example, when a slideshow application or a conferencing application (for example, ZOOM) is running it determines the scenario to be the meeting scenario, and when a fitness application (for example, KEEP) is running it determines the scenario to be the sports scenario.
In some embodiments, the electronic device 100 determines the current working scenario from the measurement data of specific sensors that measure its displacement, speed, and/or acceleration, such as a gyroscope sensor, an acceleration sensor, or a GPS sensor. Specifically, when the acceleration data matches the signature of a sports scenario, the device determines the scenario to be a sports scenario (for example, walking or running); when the GPS data shows the current speed falls within a set interval (for example, 180 km/h to 300 km/h), the device determines the scenario to be the high-speed-rail travel scenario.
The electronic device 100 may also determine the current working scenario from other conditions, for example by collecting image data of the surroundings with a camera or sound data with a microphone and applying an AI algorithm; the possibilities are not enumerated here.
In the embodiments above the electronic device 100 senses the current working scenario automatically from preset conditions, but this application is not limited thereto. In other embodiments the device may determine the scenario based on a scenario-designation operation by the user. FIG. 4b gives one example: selecting the "Application audio management" option of the system settings application opens the interface 101 shown in FIG. 4b, which contains a scenario selection list; when the user selects "Meeting scenario" in the list, the electronic device 100 determines the current working scenario to be the meeting scenario. In this example the designation operation acts on the interface 101, but this application is not limited thereto; in other embodiments it may be another operation, such as sending a voice command to the device.
S12: The electronic device 100 determines, from the three candidate applications and according to the current working scenario, the scenario-characteristic applications, that is, the applications required by the current working scenario.
As an example, the electronic device 100 stores a "scenario-application" relation table. Table 1 gives one example, in which the "Scenario" column lists working scenarios and the other column lists the applications each scenario requires (the scenario-characteristic applications). After determining the current working scenario, the device can look up this table to determine the scenario-characteristic applications; for example, having determined that the current scenario is high-speed-rail travel, it designates the Alarm Clock application as a scenario-characteristic application.
Table 1: "Scenario-application" relation table
Scenario | Scenario-characteristic applications
In-vehicle scenario | Baidu Maps; Dragonfly FM; Kuwo Music
Home scenario | Smart Life; Midea Home
Meeting scenario | ZOOM; Tencent Meeting
Sports scenario | KEEP
High-speed-rail travel scenario | Alarm Clock; Qimao Novels
Table 1 may be generated by the electronic device from the application labels of the audio applications. An application label, which may be defined in the application's property file (for example, the "AndroidManifest.xml" file of an Android application), characterizes the application's purpose: Baidu Maps carries the label "navigation", while Dragonfly FM and Kuwo Music carry the label "entertainment". When building Table 1, the device uses these labels to decide which applications characterize each scenario; for example, it designates every application labeled "navigation" or "entertainment" as a scenario-characteristic application of the in-vehicle scenario. The correspondence between scenarios and labels may be preset at the factory or designated by the user through the system settings application.
Table 1 may also be determined by an AI algorithm. During use, the electronic device collects statistics on the audio applications running in each scenario and trains an AI model on the results; it can then use the model to compute each application's usage probability in a given scenario and designate applications whose probability exceeds a set threshold (for example, 70%) as that scenario's characteristic applications.
S13: Based on the determined scenario-characteristic applications, the electronic device 100 determines the priority of each of the M candidate applications, with the scenario-characteristic applications ranked above the other candidates.
Consider the situation of FIG. 3. When the electronic device 100 works in the meeting scenario, it determines from Table 1 that the scenario-characteristic application is ZOOM and therefore gives ZOOM the highest priority among the three candidates. It then orders the remaining applications by the time of their audio output requests, a later request meaning a higher priority, so Alarm Clock ranks above Kuwo Music. The final ranking from high to low is: ZOOM, Alarm Clock, Kuwo Music.
When the electronic device 100 works in the high-speed-rail travel scenario, it determines from Table 1 that the scenario-characteristic application is Alarm Clock and gives it the highest priority. It then orders the remaining applications by the time of their requests, so Kuwo Music ranks above ZOOM, and the final ranking from high to low is: Alarm Clock, Kuwo Music, ZOOM.
Note that when there are several scenario-characteristic applications, the electronic device 100 may order them by the time of their audio output requests, a later request meaning a higher priority. For example, in the high-speed-rail scenario the device may determine both Alarm Clock (for reminding the time) and Qimao Novels (for relaxing during the journey) to be scenario-characteristic applications; of the two, Alarm Clock issued its request later, so the device ranks it above Qimao Novels.
S14: The electronic device 100 determines the two target applications from the priorities of the three candidates.
Specifically, it designates the two highest-priority candidates as the two targets. Because scenario-characteristic applications outrank the other candidates, the targets include at least the scenario-characteristic applications determined in step S12: for the meeting scenario of step S13 the targets are ZOOM and Alarm Clock, and for the high-speed-rail scenario they are Alarm Clock and Kuwo Music.
In this example the electronic device 100 selects the N targets from the M candidates according to the current working scenario, and the targets include at least the scenario-characteristic applications that scenario requires, so the audio the device outputs matches the scenario better, which helps improve the user experience. A code sketch of this selection logic follows.
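As a rough sketch only, steps S12 to S14 can be condensed into a few lines. The class and field names below are illustrative, and the comparator simply encodes "scenario-characteristic first, then most recent request":

```java
import java.util.*;

/** A candidate application together with the time of its audio output request. */
class Candidate {
    final String name;
    final long requestTime;   // a later request means a higher priority among peers
    Candidate(String name, long requestTime) { this.name = name; this.requestTime = requestTime; }
}

class TargetSelector {
    /** Pick N targets: scenario-characteristic apps first, then most recent requesters. */
    static List<Candidate> selectTargets(List<Candidate> candidates,
                                         Set<String> sceneFeatureApps, int n) {
        List<Candidate> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator
                .comparing((Candidate c) -> sceneFeatureApps.contains(c.name) ? 0 : 1)
                .thenComparing(c -> -c.requestTime));          // newer request ranks higher
        return sorted.subList(0, Math.min(n, sorted.size()));
    }
}
```

With the FIG. 3 candidates and the meeting scenario (feature set {ZOOM}), this ordering yields ZOOM, Alarm Clock, Kuwo Music, matching step S13.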
(2) Example two: the electronic device 100 selects the two target applications from the three candidates based on preset application priority information.
Some key applications matter enough to the user that they may need a higher priority, so that the user hears their audio in time. Typical examples are call applications (for example, the system phone application), notification applications (for example, timer or schedule-reminder applications), payment applications (for example, Alipay), and human-machine interaction applications (for example, a smart voice assistant). For such applications the electronic device 100 presets corresponding priority information, by which it can identify them as high-priority applications.
In this embodiment, the application priority information is property information attached to the application. When an application's property is Priority, the application is high-priority; when the property has another value or is absent, the application is low-priority. Illustratively, the user may set this property through the interface 102 shown in FIG. 5a: after ticking the checkbox to the right of "Alipay", the property of "Alipay" is set to Priority. The electronic device 100 may also set the property of particular applications (for example, the system phone application) to Priority in its factory settings.
Referring to FIG. 5b, in this example the method by which the electronic device 100 selects the two targets from the three candidates includes the following steps:
S21: The electronic device 100 determines the key applications among the three candidates according to the preset application priority information.
Specifically, the device inspects each candidate's property information and designates any application whose property is Priority as a key application. Combining FIG. 3 and FIG. 5a, in this example the device designates Alarm Clock as the key application.
S22: Based on the determined key applications, the electronic device 100 determines the priority of each of the three candidates, with key applications ranked above the other candidates.
Because Alarm Clock is the key application, the device gives it the highest priority among the three candidates. It then orders the remaining applications by the time of their requests, a later request meaning a higher priority, so Kuwo Music ranks above ZOOM. The final ranking from high to low is: Alarm Clock, Kuwo Music, ZOOM.
Note that the electronic device 100 may order multiple key applications by the time of their audio output requests, a later request meaning a higher priority. For example, if applications one, two, and three issue requests in that order and all three are key applications, the device ranks them, from high to low, as application three, application two, application one.
S23: The electronic device 100 determines the two target applications from the priorities of the three candidates.
As in step S14, the device designates the two highest-priority applications as the targets; in this example they are Alarm Clock and Kuwo Music.
In this example the electronic device 100 selects the two targets from the three candidates based on preset priority information, and can therefore output the audio of key applications preferentially, which helps improve the user experience.
(3) Example three: the electronic device 100 selects the two target applications from the three candidates based on the user's selection operation on them.
In this example, the selection operation is one the user performs on an application selection interface of the electronic device 100. When the number of candidates exceeds the threshold N, the device displays this interface, through which the user can choose the applications whose audio should be output (the target applications).
FIG. 6a shows one such interface. By ticking the checkboxes of candidate applications, the user chooses the applications expected to output audio: in FIG. 6a the user ticks the boxes for ZOOM and Kuwo Music, so the device designates ZOOM and Kuwo Music as the two targets based on the user's selection.
In another case the user ticks fewer applications than the threshold N. The electronic device 100 then tops up the selection with the remaining applications that requested audio output latest. Specifically, referring to FIG. 6b, when the user ticks only ZOOM, the device picks, between Kuwo Music and Alarm Clock, the application that requested output latest (namely Alarm Clock) and takes ZOOM and Alarm Clock as the two targets.
In this example the selection operation acts on the application selection interface, but this application is not limited thereto; in other embodiments it may be another operation, such as sending a voice command to the device.
Examples one to three above illustrate embodiments of this application, and a person skilled in the art may make other variations: for instance, the threshold N may be another value of 2 or more (for example, 3 or 6), and M may be another value greater than N (for example, 5 or 8).
The selection of N target applications from M candidates has been described above; the following describes the specific process of the audio output method provided by the embodiments of this application.
The embodiments of this application borrow the idea of the audio focus mechanism to control the audio output process. Unlike the prior art, in which only one application may hold focus at a time (the number of focuses is 1), in these embodiments up to N applications may hold focus simultaneously (the number of focuses may be as large as N).
Specifically, when candidate applications send audio output requests to the operating system, the operating system checks whether the number M of currently requesting candidates exceeds the threshold N. If not, it assigns focus to all M candidates and outputs their audio; if M exceeds N, it selects N target applications from the M candidates, assigns focus to those N, and outputs their audio.
In the embodiments of this application, the electronic device may play ("output") a target application's audio through its own audio playback apparatus (for example, its speaker) or through other audio playback devices communicatively connected to it (for example, a Bluetooth speaker).
In addition, the electronic device may set the threshold N as needed. Illustratively, when the electronic device 100 is not connected to any other audio playback device it sets N to a small value (for example, N = 2); when it is connected to other playback devices it sets N larger, for example N = 6 when four other playback devices are connected. In other examples the device may also determine N from user input. The sketch below puts these rules together.
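A minimal sketch of the operating-system side follows. It assumes the threshold rule "N = 2 plus the number of connected playback devices", which is consistent with the N = 2 and N = 6 examples above but is our assumption, not a rule fixed by this application; the concrete selection policy is left open to any of the three examples.

```java
import java.util.*;

/** Sketch of the OS-side handler: at most N applications hold a focus at a time. */
class MultiFocusManager {
    private final List<String> candidates = new ArrayList<>();      // in request order
    private final Set<String> focusHolders = new LinkedHashSet<>(); // apps currently outputting

    /** Assumed rule: N grows with the number of connected playback devices. */
    int threshold(int connectedPlaybackDevices) {
        return 2 + connectedPlaybackDevices;   // 0 devices -> N = 2, 4 devices -> N = 6
    }

    void onAudioRequest(String app, int n) {
        if (!candidates.contains(app)) candidates.add(app);
        if (candidates.size() <= n) {
            focusHolders.add(app);             // M <= N: every requester gets a focus
        } else {
            focusHolders.clear();              // M > N: re-select the N targets
            focusHolders.addAll(selectTargets(candidates, n));
        }
    }

    List<String> selectTargets(List<String> apps, int n) {
        // Placeholder policy: keep the N most recent requesters; any policy from
        // examples one to three (scenario, preset priority, user choice) fits here.
        return new ArrayList<>(apps.subList(Math.max(0, apps.size() - n), apps.size()));
    }
}
```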
Specific embodiments of this application are described below.
[Embodiment One]
The audio output method of this embodiment is again described with the situation of FIG. 3. Referring to FIG. 3, the ZOOM, Kuwo Music, and Alarm Clock applications on the electronic device 100 send audio output requests to the operating system at times T1, T2, and T3, respectively. In this embodiment N = 2; in other embodiments N may be another integer, for example 1, 4, or 7.
For ease of understanding, the focus queues (that is, audio output queues) at different times are given first. At time T1, referring to FIG. 7a, after ZOOM sends its request the operating system determines that the number of requesting candidates is M = 1, which does not exceed the threshold N (N = 2); it therefore assigns the first focus to ZOOM and outputs ZOOM's audio through focus queue one.
At time T2, referring to FIG. 7b, after Kuwo Music sends its request the operating system determines that M = 2, which still does not exceed the threshold N (N = 2); it therefore assigns the second focus to Kuwo Music and outputs its audio through focus queue two.
At time T3, after Alarm Clock sends its request the operating system determines that M = 3, which exceeds the threshold N (N = 2). It therefore re-adjusts the focus queues to output the audio of the two target applications. In this embodiment the audio output queues at time T3 are as shown in FIG. 7c, and the electronic device 100 outputs the audio of ZOOM and Alarm Clock.
The specific steps of the audio output method of this embodiment are described below with reference to FIG. 8. To highlight the differences from the prior art, the steps are described for the audio output process after time T3.
S110: The electronic device 100 receives an audio output request sent by an audio application.
Alarm Clock serves as the example audio application in this embodiment. When its set time arrives, it sends an audio output request to the operating system, for example by calling an audio player the system provides (such as MediaPlayer or AudioTrack on Android). Once the application has sent the request, the electronic device 100 receives it.
The request sent by an audio application may include the application's identifier (for example, its name) and file information of the audio file to be output (for example, the file's name, address, and compression format).
Alarm Clock is only an example; in other embodiments the audio application may be another application, such as TikTok or a browser. Each audio application's trigger for sending a request depends on its own function: a short-video application such as TikTok sends the request when the user opens the playback interface; a music player such as Kuwo Music sends it when the user taps the "start playing" button; a notification application such as a schedule reminder sends it when its set condition is met (for example, the set time arrives or the set event occurs); and an application that streams audio online (for example, a browser) sends it after the audio data finishes buffering. Applications not described here may follow prior-art practice. A sketch of such a trigger follows.
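Illustratively, a request trigger of this kind might look as follows on Android. MediaPlayer and its methods are real APIs; the resource name is hypothetical and not from this application:

```java
import android.content.Context;
import android.media.MediaPlayer;

class AlarmSound {
    // Starting playback is what delivers the audio output request to the OS;
    // stopping later corresponds to the "output finished" notification of step S120.
    static MediaPlayer ring(Context context) {
        // R.raw.alarm_tone is a hypothetical sound resource bundled with the app
        MediaPlayer player = MediaPlayer.create(context, R.raw.alarm_tone);
        player.setOnCompletionListener(MediaPlayer::release);
        player.start();   // issues the audio output request
        return player;    // the caller may later invoke player.stop()
    }
}
```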
S120: The electronic device 100 determines that the number M of candidate applications is greater than the threshold N.
The operating system stores a list of candidate applications. On receiving a new request it checks whether the request comes from a new application (one not on the list); if so, it increments the current count M by one and adds the new application's identifier to the list, and if not, it leaves M and the list unchanged.
In this embodiment, before time T3 the list contains Kuwo Music and ZOOM and the candidate count is M = 2. When the operating system receives the request from Alarm Clock at T3 it judges, from the list, that Alarm Clock is a new application, updates the count to M = 3, and adds Alarm Clock's identifier (for example, its name) to the list.
After updating M (M = 3), the operating system compares it with the threshold N (N = 2) and determines that the number of currently requesting candidates exceeds the threshold.
In addition, during audio output, when the operating system judges that a candidate application's audio output has ended, it decrements M by one and removes that candidate's identifier from the list. How the operating system makes this judgment may follow the prior art, and this embodiment gives only examples: the operating system judges that an application has finished when it receives the application's end-of-output command (for example, the application calls the MediaPlayer.stop() method provided by Android), or when it reads the end mark of the application's audio file. This bookkeeping is sketched below.
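The candidate-list bookkeeping of step S120 amounts to a small registry; the sketch below is illustrative only, with names of our choosing:

```java
import java.util.LinkedHashSet;
import java.util.Set;

/** Keeps the candidate list of step S120 up to date. */
class CandidateRegistry {
    private final Set<String> names = new LinkedHashSet<>();

    /** A request arrived; a new requester makes M grow by one. Returns the current M. */
    int onRequest(String appId) {
        names.add(appId);          // no-op if the app is already on the list
        return names.size();
    }

    /** Called when a stop command or end-of-file mark is observed for the app. */
    int onPlaybackFinished(String appId) {
        names.remove(appId);
        return names.size();
    }
}
```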
S130: The electronic device 100 selects the N target applications from the M candidates.
How the operating system does so may follow the descriptions of example one (steps S11 to S14), example two (steps S21 to S23), and example three above and is not repeated here.
This embodiment proceeds with the targets determined for the meeting scenario in step S14, that is, ZOOM and Alarm Clock.
S140: The electronic device 100 updates the focus queues according to the selection result.
As described in step S130 and shown in FIG. 7c, having designated ZOOM and Alarm Clock as the targets, the electronic device 100 keeps the first focus assigned to ZOOM and assigns the second focus to Alarm Clock (before T3 the second focus was assigned to Kuwo Music); the operating system places Kuwo Music after Alarm Clock in focus queue two.
S150: The electronic device 100 outputs the audio of the N target applications. Referring to FIG. 9, this process specifically includes:
S151: The electronic device 100 determines each target application's volume from its volume control information.
In this embodiment the electronic device 100 holds multiple pieces of volume control information, each audio application corresponding to a different one. Illustratively, the user may set an audio application's volume control information by the methods shown in FIG. 10a and FIG. 10b; that is, in this embodiment each application's volume control information can be determined based on user input.
FIG. 10a shows one way the user sets volume control information. After the user selects the "Application audio management" option of the system settings application, the interface 101 of FIG. 10a opens. It contains a volume slider for each audio application, each position on a slider mapping to a volume adjustment coefficient (serving as the volume control information), and each slider carries a control ball the user drags to set that application's volume. Taking ZOOM as an example, ZOOM corresponds to slider 103; dragging its control ball to the position shown in FIG. 10a sets ZOOM's volume coefficient to 0.4.
FIG. 10b shows another, quicker way to adjust the targets' volume. When the user operates the electronic device 100 in a predetermined manner, the device displays the interface 104 shown in the figure. The predetermined manner can be understood as a shortcut for calling up the interface 104, for example pressing a predetermined button (such as the volume key), touching the screen with a predetermined gesture (such as a four-finger downward swipe), or shaking the device in a predetermined direction (perpendicular to the plane of the device); this application imposes no limitation.
The interface 104 contains the volume sliders of the target applications, and by dragging their control balls the user adjusts each target's volume in real time, so that while the device is outputting the targets' audio the user can adjust their volumes quickly in the manner of FIG. 10b.
When outputting each target's audio, the operating system determines the target's volume from its volume control information and outputs the audio at that volume. Taking the ZOOM of FIG. 10a again, and writing the audio data ZOOM requests to output (for example, decoded PCM data) as Data_1: with ZOOM's control ball at the position of FIG. 10a, the operating system takes 0.4 × Data_1 as ZOOM's output audio.
In this embodiment each audio application corresponds to an independent piece of volume control information, so each application's volume can be controlled independently, but this application is not limited thereto. In other embodiments several audio applications may share one piece of volume control information: for example, the device groups audio applications into application groups by their labels (grouping Dragonfly FM and Qimao Novels into an entertainment group by the "entertainment" label), each group corresponding to one slider, so that by operating a single slider the user adjusts the volume of every application in the group.
S152: The electronic device 100 mixes the audio of the N target applications. Specifically, the operating system superimposes the output audio data of the N targets to generate the audio data Data_Final that the device finally outputs.
Referring to FIG. 10a, ZOOM's audio data is Data_1 with volume coefficient 0.4, and Alarm Clock's audio data is Data_2 with volume coefficient 1.6. After mixing, the device's final output is Data_Final = 0.4 × Data_1 + 1.6 × Data_2.
S153: The electronic device 100 plays the mixed audio. Specifically, the operating system sends the audio data Data_Final to the device's audio output apparatus (for example, the speaker), which plays the audio of the two targets. The mixing of step S152 can be written out as a short routine, shown below.
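Written as code, the mixing of step S152 might look like the following sketch. The clamping step is our addition to keep the weighted sum inside the 16-bit PCM range and is not prescribed by this embodiment:

```java
class Mixer {
    /** Weight each target's PCM samples and sum them, e.g. 0.4*Data_1 + 1.6*Data_2. */
    static short[] mix(short[][] tracks, float[] gains) {
        short[] out = new short[tracks[0].length];
        for (int i = 0; i < out.length; i++) {
            float sum = 0f;
            for (int t = 0; t < tracks.length; t++) {
                sum += gains[t] * tracks[t][i];   // per-app volume coefficient
            }
            // clamp so that loud mixes do not wrap around in 16-bit PCM
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }
}
```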
In summary, when the number M of candidates exceeds the threshold N, this embodiment limits the number of applications outputting audio (the targets) to N. By limiting that number it mitigates the mutual interference between the audio of multiple applications in the prior art; in addition, because it allows several applications to output audio simultaneously (N ≥ 2), it satisfies a user's need to listen to the audio of several applications at once.
Furthermore, in this embodiment each application's volume control information can be set independently, so the user can adjust each application's volume as needed to improve the experience. For example, when the user does not want to hear the speech in ZOOM but cannot conveniently quit the ZOOM application, the user can raise Kuwo Music's volume and lower ZOOM's so that the music masks the ZOOM speech, meeting the user's need.
This embodiment illustrates the technical solution of this application, and a person skilled in the art may make other variations. For example, in other embodiments the electronic device 100 treats the system phone application as a special case among audio applications and excludes it from the candidates (the candidates then include only applications other than the system phone application). In such an embodiment the system phone application is not constrained by the threshold N, and the device may output the audio of the system phone application and the N targets simultaneously: in effect, when the user answers a call through the system phone application, the device may output the audio of N + 1 applications.
[Embodiment Two]
This embodiment refines step S150 of embodiment one. In step S150 of embodiment one, the audio of the N targets is played through a single playback device (the electronic device 100 itself); in this embodiment it is played through different playback devices.
FIG. 11 shows an exemplary application scenario of this embodiment. In FIG. 11, the electronic device 100 (a mobile phone) is communicatively connected through the gateway 110 to the laptop 120 (device name "Laptop") and the speaker 130 (device name "AI speaker"), and through Bluetooth to the earphones 140 (device name "FreeBuds").
The situation of FIG. 3 is again used, except that in this embodiment N = 3. The three targets currently outputting audio on the device are ZOOM, Kuwo Music, and Alarm Clock: Kuwo Music's audio plays through the speaker 130, ZOOM's through the laptop 120, and Alarm Clock's through the electronic device 100.
In the scenario of FIG. 11 the electronic device 100 (the phone), the laptop 120, the speaker 130, and the earphones 140 serve as examples of audio playback devices, but this application is not limited thereto. In other embodiments the playback device may be a large screen, a tablet, a car audio system, a smartwatch, or any other device able to play audio, and the device 100 may communicate with the other playback devices over Wi-Fi, Bluetooth, wired links, and so on.
In this embodiment the electronic device 100 keeps a device information list recording the device information of audio playback devices other than itself ("backup devices" herein), for example playback devices that have previously established a communication connection with it. Illustratively, the first time the device 100 establishes a communication connection with a playback device, it adds that device's information to the list; a backup device's information may include the device name, the device type, and a device number (assigned by the device 100).
The list also records each backup device's status information. For example, when a backup device's communication state with the device 100 changes from disconnected to connected, the device 100 updates that device's status to "online"; when the state changes from connected to disconnected, it updates the status to "offline". It can be understood that a device whose status is "online" is an available device through which the device 100 can play a target's audio.
Table 2 shows the device information list corresponding to the scenario of FIG. 11.
Table 2: Device information list
No. | Device name | Device type | Status
001 | FreeBuds | Bluetooth earphones | Online
002 | AI speaker | Speaker | Online
003 | Car speaker | Car audio system | Offline
004 | TV | Large screen | Offline
005 | Laptop | Laptop | Online
006 | Watch | Smartwatch | Offline
Table 2 can be understood as the data preparation of this embodiment. The process by which the electronic device 100 outputs audio in this embodiment is described below with the scenario of FIG. 11.
Referring to FIG. 12a, in this embodiment the process of outputting the three targets' audio includes the following steps:
S210: The electronic device 100 determines each target's volume from its volume control information.
This step is substantially the same as step S151 of embodiment one, whose description applies and is not repeated.
S220: The electronic device 100 determines the audio playback device corresponding to each target (the device used to play the target's audio). Several concrete examples follow.
Example one: the electronic device 100 determines a target's playback device from preset device priority information.
Illustratively, the device priority information is an "application-device" relation table stored in the electronic device 100. Table 3 gives one example. From Table 3 the device can determine each target's preferred playback device (the playback device corresponding to the target): specifically, the playback device in the same row as an audio application is that application's preferred playback device, so "AI speaker" is Kuwo Music's preferred playback device.
Table 3: "Application-device" relation table
Audio application | Preferred playback device
Alarm Clock | This device
Kuwo Music | AI speaker
Baidu Maps | Car speaker
Browser | TV
ZOOM | Laptop
KEEP | Watch
FIG. 12b shows an exemplary way to set the device priority information. Referring to FIG. 12b, the application audio management interface 101 of the electronic device 100 lists several audio applications, each with a drop-down box on its right whose list contains the device names of the backup devices (for example, those in Table 2). By choosing a device in the drop-down list, the user sets the device priority information.
Taking Kuwo Music as an example, through the interface of FIG. 12b the user sets "AI speaker" (the speaker 130) as Kuwo Music's preferred playback device. Then, when the electronic device 100 outputs Kuwo Music's audio, if the speaker 130 is online the device designates it as the target's playback device and plays Kuwo Music's audio through it; if the speaker 130 is offline, the device plays the audio through its own playback apparatus (for example, its speaker).
In other implementations, each audio application corresponds to several preferred playback devices. For example, Kuwo Music corresponds to two preferred devices, a first preferred device (for example, the speaker 130) and a second preferred device (for example, a smartwatch). When outputting Kuwo Music's audio, the electronic device 100 plays it through the first preferred device if that device is online, through the second preferred device if the first is offline, and through itself if both are offline. A sketch of this fallback chain follows.
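The fallback chain just described can be summarized in a short sketch; the device names and the "local" fallback value are illustrative:

```java
import java.util.List;
import java.util.Map;

class DeviceResolver {
    /** Preset device priority with fallback (example one of step S220). */
    static String resolveDevice(List<String> preferredDevices,      // ordered, e.g. ["AI speaker", "Watch"]
                                Map<String, Boolean> onlineState) { // from the device information list
        for (String device : preferredDevices) {
            if (Boolean.TRUE.equals(onlineState.get(device))) {
                return device;          // the first preferred device that is online wins
            }
        }
        return "local";                 // all preferred devices offline: play on the phone itself
    }
}
```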
Example two: the electronic device 100 determines a target's playback device from the number of times the target has been played on the playback devices.
The electronic device 100 stores how many times each target has been played on each playback device. Staying with Kuwo Music: according to the records stored on the device, Kuwo Music has been played 30 times on "Watch", 24 times on "AI speaker" (the speaker 130), and 10 times on "FreeBuds", with no record of playback on other devices.
When the electronic device 100 outputs Kuwo Music's audio, the operating system uses Table 2 to choose, among the currently online devices, the one on which Kuwo Music has been played most often as Kuwo Music's playback device. In this embodiment that device is the speaker 130, so the device 100 plays Kuwo Music's audio through the speaker 130.
Because users usually choose their favorite devices to play audio, a target's play counts on the playback devices reflect user preference. In this example the playback device is determined from those counts, so the choice fits user preference better and improves the user experience.
Example three: the electronic device 100 determines a target's playback device from the user's real-time input.
FIG. 12c shows how the user designates a playback device in real time. The interface 105 of FIG. 12c further refines the interface 104 of FIG. 10b by adding device selection options: besides the volume slider of each target it contains a playback-device selection list for each target. The list contains the currently online playback devices (for example, those determined from Table 2) and may also contain devices the electronic device 100 has newly discovered, such as the device "Glasses" in FIG. 12c, which helps the user pick a suitable playback device in a new environment. By operating the interface 105 the user chooses a target's playback device; for example, after the user taps "AI speaker", the device designates "AI speaker" as Kuwo Music's playback device.
In addition, the user can call up the interface of FIG. 12c with the same shortcuts as FIG. 10b, for example pressing the volume key or touching the screen with a specific gesture. In other words, this example offers the user a quick way to choose the audio output device.
S230: The electronic device 100 sends each target's audio to the corresponding playback device, so that the three targets' audio is played through multiple playback devices.
Referring to FIG. 11, in this embodiment the electronic device 100, following the examples in step S220, designates the speaker 130 as Kuwo Music's playback device, the laptop 120 as ZOOM's playback device, and itself as Alarm Clock's playback device.
The device therefore sends Kuwo Music's audio to the speaker 130 to be played there, sends ZOOM's audio to the laptop 120 to play ZOOM's speech there, and sends Alarm Clock's audio to its own speaker to play the alarm tone.
In this embodiment the targets' audio is output through different playback devices, which not only further prevents the audio of different applications from interfering with one another but also plays each target's audio through the device the user desires, improving the user experience.
It should be noted that this embodiment illustrates the technical solution of this application, and a person skilled in the art may make other variations.
For example, in this embodiment the devices used for playback include the electronic device 100 itself, but this application is not limited thereto; in other embodiments they may exclude the device 100 and comprise only several external devices (playback devices other than the device 100).
For another example, in this embodiment the three targets' audio plays through three playback devices, one target per device, but this application is not limited thereto: in other situations one playback device may play the audio of several targets (for example, two or three). In an in-vehicle scenario, the car audio system may simultaneously play the audio of Baidu Maps, Dragonfly FM, and the phone. When an external device plays several applications' audio, the electronic device 100 may mix them locally and send the mixed audio to the external device, or send each application's audio independently and let the external device do the mixing.
[Embodiment Three]
This embodiment provides a media file recording method. FIG. 13 shows one exemplary application scenario: the electronic device 100 is outputting the audio of Kuwo Music and ZOOM while also running an audio recording application (in this embodiment, the sound recorder application), which can record the audio the device is outputting.
In the prior art, the recorder records all the audio the electronic device 100 is outputting (the superimposition of the individual audio streams); for example, while the device outputs the audio of Kuwo Music and ZOOM, the recorder records their superimposed audio. Sometimes, however, the user wants to record only a particular application's audio, for example only ZOOM's, and the prior art cannot meet that need.
This embodiment therefore provides a method of recording an audio file (as the media file): while the electronic device 100 outputs the audio of multiple audio applications ("candidate applications"), it records only the audio of the selected applications ("target applications") and not that of the other candidates, meeting diverse user needs.
The technical solution of this embodiment is described below with reference to FIG. 14. The audio recording method includes the following steps:
S310: The electronic device 100 outputs the audio of multiple candidate applications.
In this embodiment the device outputs the audio of two candidates (Kuwo Music and ZOOM). In other embodiments it may output the audio of another number of candidates (for example, four), and the candidates may be applications other than Kuwo Music and ZOOM, such as iQIYI or Baidu Maps, as long as they can output audio.
Outputting the candidates' audio may include: the electronic device 100 playing it through its own playback apparatus (for example, its speaker), and/or the device playing it through other playback devices (for example, Bluetooth earphones or a smartwatch).
S320: The electronic device 100 receives a first input for selecting one or more target applications from the multiple candidates.
In this embodiment the first input is a screen input from the user. FIG. 15 gives an example: after tapping the recorder's application icon, the user enters the recorder interface 106 of FIG. 15, which contains a "Start recording" button and a checkbox for each candidate, through which the user chooses the targets whose audio should be recorded.
Referring to FIG. 15(a), after the user ticks ZOOM's checkbox, the device designates ZOOM as the target; it can be understood that in the example of FIG. 15(a) the number of targets is smaller than the number of candidates.
Referring to FIG. 15(b), after the user ticks the checkboxes of both Kuwo Music and ZOOM, the device designates both Kuwo Music and ZOOM as targets.
The above illustrates ways of selecting targets, and a person skilled in the art may make other variations: in other embodiments the number of targets may differ (for example, four), or the user may select targets by voice command (the first input then being the user's voice input).
S330: The electronic device 100 records the audio of the one or more targets to generate audio file A (as an example of the first media file).
In FIG. 15(a) the user selects ZOOM as the target. After the user taps the "Start recording" button on the interface 106, the recorder application begins acquiring ZOOM's audio stream data (that is, begins recording ZOOM's audio) and forms audio file A from it. In other words, in the example of FIG. 15(a) the audio data in file A is ZOOM's audio data (written as Record_Data_1).
In FIG. 15(b) the user selects both Kuwo Music and ZOOM as targets. After the user taps "Start recording", the recorder begins acquiring the stream obtained by mixing ZOOM's and Kuwo Music's audio (that is, begins recording both) and forms audio file A from the mixed stream. In the example of FIG. 15(b) the audio data in file A is therefore the superimposed audio of Kuwo Music (audio data Record_Data_2) and ZOOM (audio data Record_Data_1), namely Record_Data_1 + Record_Data_2.
In this embodiment, while the electronic device 100 simultaneously outputs the audio of multiple candidates, it can record only the targets' audio, meeting diverse user needs. For example, referring to FIG. 15(a), while the device outputs the audio of Kuwo Music and ZOOM, a user who wants to record only ZOOM's audio and not Kuwo Music's can select the target as shown in FIG. 15(a); the finished audio file A then contains only ZOOM's audio and none of Kuwo Music's. One possible realization of such per-application capture on a stock platform is sketched below.
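For orientation only: on stock Android (API 29 and later), a comparable per-application recording effect can be approximated with the playback-capture API, filtering by the selected application's UID. This is not the mechanism this embodiment prescribes; the sketch assumes a MediaProjection has already been granted and the target application's UID is known:

```java
import android.media.AudioFormat;
import android.media.AudioPlaybackCaptureConfiguration;
import android.media.AudioRecord;
import android.media.projection.MediaProjection;

class PerAppCapture {
    static AudioRecord start(MediaProjection projection, int targetAppUid) {
        AudioPlaybackCaptureConfiguration config =
                new AudioPlaybackCaptureConfiguration.Builder(projection)
                        .addMatchingUid(targetAppUid)       // capture only the selected app
                        .build();
        AudioRecord recorder = new AudioRecord.Builder()
                .setAudioFormat(new AudioFormat.Builder()
                        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                        .setSampleRate(44100)
                        .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
                        .build())
                .setAudioPlaybackCaptureConfig(config)
                .build();
        recorder.startRecording();                          // read() then yields the PCM stream
        return recorder;
    }
}
```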
It should be noted that this embodiment illustrates the technical solution of this application, and a person skilled in the art may make other variations.
For example, in another embodiment the electronic device 100 outputs the video data of a first video application while outputting the audio data of the multiple candidates. The first video application may be one of the candidates or an application other than the candidates.
That embodiment is described below with the scenario of FIG. 16. In FIG. 16, the electronic device 100 runs Kuwo Music and ZOOM, where Kuwo Music and ZOOM serve as the multiple candidates (the applications currently outputting audio on the device), and ZOOM also serves as the first video application (its video data being real-time call image data). That is, in this example the first video application is one of the candidates. The device also runs a video recorder application, whose main interface 107 is shown in FIG. 16.
Referring to FIG. 16, the main interface 107 of the video recorder contains two video data source options ("video options"): the screen image and the video application the device is currently running (here "ZOOM"). The interface 107 also contains radio buttons corresponding to the two video options, with which the user selects one of them as the recorder's video data source; per FIG. 16, the recorder's video source is ZOOM.
The interface 107 further contains two audio data source options ("audio options"), namely the two audio applications currently running (the candidates, Kuwo Music and ZOOM), along with checkboxes by which the user selects one or more candidates as the recorder's audio data source (the selected candidates being the targets). Per FIG. 16, the recorder's audio source, and thus the target, is Kuwo Music.
After the user taps the "Start recording video" button, the recorder begins acquiring ZOOM's video stream (that is, recording ZOOM's video) and synchronously acquiring Kuwo Music's audio stream (that is, recording Kuwo Music's audio), and synthesizes the two streams into video file B (as an example of the first media file). In the example of FIG. 16, the video data in file B is ZOOM's video data and the audio data in file B is Kuwo Music's audio data.
It can be understood that in another example, if the user ticks both Kuwo Music and ZOOM among the audio options of the interface 107, the audio data in the recorded video file B is the superimposed audio of the two.
With the embodiment of FIG. 16, the user can choose both the video data source and the audio data source when recording a video, meeting diverse user needs; for example, the user can use Kuwo Music's audio as background audio for the ZOOM call images, adding interest.
It should be noted that the scenario of FIG. 16 merely illustrates an exemplary application of the technical solution of this application, and a person skilled in the art may make other variations: the audio applications may be applications other than Kuwo Music and ZOOM, the first video application may be an application other than the audio applications, and so on.
FIG. 17 is a schematic structural diagram of the electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) connector 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, an optical proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent components or may be integrated into one or more processors.
The controller may generate operation control signals based on instruction operation codes and timing signals, completing the control of instruction fetching and execution.
A memory may further be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache that holds instructions or data the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, and a subscriber identity module (SIM) interface.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple groups of I2C buses and may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface so that the two communicate over the I2C bus, implementing the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple groups of I2S buses and may be coupled to the audio module 170 through an I2S bus to implement communication between them. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface, implementing the function of answering calls through a Bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface; the audio module 170 may likewise transmit audio signals to the wireless communication module 160 through the PCM interface to answer calls through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be bidirectional and converts the data to be transmitted between serial and parallel communication. In some embodiments, the UART interface is usually used to connect the processor 110 and the wireless communication module 160: for example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface, implementing the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 110 with peripheral components such as the display 194 and the camera 193, and includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to implement the shooting function of the electronic device 100, and with the display 194 through the DSI interface to implement its display function.
The GPIO interface may be configured by software, as a control signal or as a data signal. In some embodiments, it may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on. The GPIO interface may also be configured as an I2C, I2S, UART, or MIPI interface.
It can be understood that the interface connections between the modules illustrated in this embodiment are merely illustrative and do not limit the structure of the electronic device 100; in other embodiments of this application, the device may use interface connection manners different from those above, or a combination of several.
The USB connector 130 is a connector conforming to the USB standard and may be used to connect the electronic device 100 and peripheral devices; specifically, it may be a standard USB connector (for example, a Type-C connector), a Mini USB connector, a Micro USB connector, or the like. The USB connector 130 may be used to connect a charger to charge the device, to transfer data between the device and peripherals, or to connect headphones and play audio through them; it may also connect other electronic devices, such as AR devices. In some embodiments, the processor 110 may support the Universal Serial Bus, whose standard specifications may be USB 1.x, USB 2.0, USB 3.x, or USB 4.
The charging management module 140 receives charging input from a charger, which may be wireless or wired. In some wired-charging embodiments, the module 140 may receive the charging input of a wired charger through the USB connector 130; in some wireless-charging embodiments, it may receive wireless charging input through a wireless charging coil of the device. While charging the battery 142, the module 140 may also supply power to the device through the power management module 141.
The power management module 141 connects the battery 142, the charging management module 140, and the processor 110. It receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. It may also monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some other embodiments, the module 141 may be provided in the processor 110; in still others, the modules 141 and 140 may be provided in the same component.
The wireless communication function of the electronic device 100 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 transmit and receive electromagnetic wave signals. Each antenna in the device may cover one or more communication bands, and different antennas may be multiplexed to improve utilization; for example, the antenna 1 may be multiplexed as the diversity antenna of the wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 may provide wireless communication solutions applied on the device, including 2G/3G/4G/5G. It may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. It may receive electromagnetic waves through the antenna 1, filter and amplify them, and pass them to the modem processor for demodulation; it may also amplify signals modulated by the modem processor and radiate them as electromagnetic waves through the antenna 1. In some embodiments, at least some functional modules of the module 150 may be provided in the processor 110, or in the same component as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator modulates a low-frequency baseband signal to be sent into a medium- or high-frequency signal; the demodulator demodulates a received electromagnetic wave signal into a low-frequency baseband signal and passes it to the baseband processor for processing, after which it is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B) or displays an image or video through the display 194. In some embodiments, the modem processor may be an independent component; in others, it may be independent of the processor 110 and provided in the same component as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide wireless communication solutions applied on the device, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). It may be one or more components integrating at least one communication processing module. It receives electromagnetic waves via the antenna 2, frequency-modulates and filters the signals, and sends the processed signals to the processor 110; it may also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and radiate them as electromagnetic waves through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and the antenna 2 to the wireless communication module 160, so that the device can communicate with networks and other devices through wireless communication technologies, which may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, BT, GNSS, WLAN, NFC, FM, and/or IR. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The electronic device 100 implements the display function through the GPU, the display 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display 194 and the application processor, and performs mathematical and geometric computation for graphics rendering. The processor 110 may include one or more GPUs executing program instructions to generate or change display information.
The display 194 displays images, videos, and the like, and includes a display panel. The panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the device may include 1 or N displays 194, N being a positive integer greater than 1.
The electronic device 100 implements the shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP processes data fed back by the camera 193. For example, when taking a photo the shutter opens, light passes through the lens onto the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP, which converts it into an image visible to the naked eye. The ISP can also optimize the image's noise, brightness, and skin tone through algorithms, and optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 captures static images or video. An object projects an optical image through the lens onto the photosensitive element, which may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal output to the DSP for processing; the DSP converts it into a standard image signal in RGB, YUV, or a similar format. In some embodiments, the device may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor processes digital signals; besides digital image signals it can process other digital signals. For example, when the device selects a frequency point, the DSP performs a Fourier transform or the like on the frequency point energy.
The video codec compresses or decompresses digital video. The device may support one or more video codecs, so that it can play or record video in several encoding formats, for example moving picture experts group (MPEG) 1, MPEG 2, MPEG 3, and MPEG 4.
The NPU is a neural-network (NN) computing processor that processes input information rapidly by drawing on the structure of biological neural networks, for example the transfer pattern between neurons of the human brain, and can also learn continuously. Applications such as intelligent cognition of the device, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may connect an external memory card, such as a Micro SD card, to extend the storage capability of the device. The external memory card communicates with the processor 110 through the interface 120 to implement the data storage function, for example saving music and video files on the card.
The internal memory 121 may store computer-executable program code, which includes instructions, and may include a program storage area and a data storage area. The program storage area may store the operating system and the applications required by at least one function (for example, a sound playback function or an image playback function); the data storage area may store data created during use of the device (for example, audio data and a phone book). In addition, the memory 121 may include high-speed random access memory and may also include nonvolatile memory, for example at least one magnetic disk storage component, a flash component, or a universal flash storage (UFS). By running the instructions stored in the memory 121 and/or the instructions stored in a memory provided in the processor, the processor 110 performs the various functional applications and data processing of the device. The instructions stored in the memory 121 may include instructions that, when executed by at least one of the processors, cause the electronic device 100 to implement the audio output method and/or the media file recording method provided by the embodiments of this application.
The electronic device 100 may implement audio functions, for example music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 converts digital audio information into an analog audio signal output and converts analog audio input into a digital audio signal; it may also encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some of its functional modules may be provided there.
The speaker 170A, also called a "loudspeaker", converts an audio electrical signal into a sound signal; the device can play music or hands-free calls through the speaker 170A.
The receiver 170B, also called an "earpiece", converts an audio electrical signal into a sound signal; when the device answers a call or a voice message, the user can listen by holding the receiver 170B close to the ear.
The microphone 170C, also called a "mic" or "mouthpiece", converts a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 170C to input the sound signal. The device may be provided with at least one microphone 170C; in other embodiments it may be provided with two, additionally implementing noise reduction, or with three, four, or more, further identifying the sound source and implementing directional recording, among other functions.
The headset jack 170D connects wired headphones and may be the USB connector 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A senses pressure signals and can convert them into electrical signals. In some embodiments, it may be provided on the display 194. There are many kinds of pressure sensors, such as resistive, inductive, and capacitive; a capacitive pressure sensor may include at least two parallel plates bearing conductive material, and when force acts on the sensor the capacitance between the electrodes changes, from which the device determines the intensity of the pressure. When a touch operation acts on the display 194, the device detects its intensity through the pressure sensor 180A and may also compute the touch position from the sensor's detection signal. In some embodiments, touch operations at the same position but of different intensities may correspond to different operation instructions: for example, a touch on the Messages application icon below a first pressure threshold executes an instruction to view a message, while a touch at or above the threshold executes an instruction to create a new message.
The gyroscope sensor 180B may determine the motion posture of the device; in some embodiments, the angular velocities of the device around three axes (x, y, and z) are determined through it. The gyroscope sensor 180B may be used for image stabilization: illustratively, when the shutter is pressed it detects the shake angle of the device, computes the distance the lens module must compensate, and lets the lens counteract the shake through reverse motion. It may also be used in navigation and somatic game scenarios.
The barometric pressure sensor 180C measures air pressure; in some embodiments, the device computes altitude from its readings to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The device may use it to detect the opening and closing of a flip leather case; in some embodiments, when the device is a clamshell, it may detect the opening and closing of the flip cover through the sensor 180D and, based on the detected state of the case or cover, set features such as automatic unlocking upon opening.
The acceleration sensor 180E detects the magnitude of the device's acceleration in all directions (generally three axes); when the device is still, it detects the magnitude and direction of gravity. It may also identify the device's posture and is applied in landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F measures distance, by infrared or laser; in some embodiments, in a shooting scene, the device may use it to measure distance and achieve fast focusing.
The optical proximity sensor 180G may include, for example, a light-emitting diode (LED), which may be infrared, and a photodetector such as a photodiode. The device emits infrared light outward through the LED and uses the photodiode to detect reflected infrared light from nearby objects: sufficient reflected light indicates an object near the device, while insufficient reflected light indicates none. The device can use this sensor to detect that the user holds the device against the ear during a call and automatically turn off the screen to save power; the sensor is also used for automatic unlocking and screen locking in leather-case mode and pocket mode.
The ambient light sensor 180L senses ambient brightness. The device can adaptively adjust the brightness of the display 194 accordingly, automatically adjust white balance when taking photos, and cooperate with the proximity sensor 180G to detect whether the device is in a pocket, preventing accidental touches.
The fingerprint sensor 180H collects fingerprints; with the collected fingerprint characteristics the device can implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 180J detects temperature. In some embodiments, the device executes a temperature handling policy using its readings: for example, when the reported temperature exceeds a threshold, the device lowers the performance of a processor near the sensor to reduce power consumption and implement thermal protection; when the temperature is below another threshold, the device heats the battery 142 to avoid an abnormal shutdown caused by low temperature; and when it is below yet another threshold, the device boosts the output voltage of the battery 142 to avoid such a shutdown.
The touch sensor 180K is also called a "touch device". It may be provided on the display 194, and together they form the touchscreen. The sensor detects touch operations on or near it and may pass them to the application processor to determine the type of the touch event; visual output related to the operation may be provided through the display 194. In other embodiments, the sensor may be provided on the surface of the device at a position different from the display 194.
The bone conduction sensor 180M can acquire vibration signals. In some embodiments, it may acquire the vibration signal of the vibrating bone of the human vocal part, and it may also contact the human pulse and receive the blood-pressure beating signal. In some embodiments, it may be provided in a headset, forming a bone-conduction headset. The audio module 170 can parse out a speech signal from the vibration signal acquired by the sensor to implement a speech function, and the application processor can parse heart-rate information from the blood-pressure beating signal to implement heart-rate detection.
The button 190 includes a power button, volume buttons, and the like, and may be a mechanical button or a touch-sensitive one. The device can receive button input and generate key signal input related to its user settings and function control.
The motor 191 can generate vibration prompts, for incoming calls or for touch feedback. For example, touch operations acting on different applications (such as photographing or audio playback) may correspond to different vibration feedback effects, as may touch operations on different areas of the display 194 and different application scenarios (for example, time reminders, receiving messages, alarms, and games); the touch vibration feedback effects can also be customized.
The indicator 192 may be an indicator light used to indicate the charging state and battery change, or to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 connects a SIM card, which can be inserted into or removed from the interface 195 to contact or separate from the device. The device may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The interface 195 may support Nano SIM, Micro SIM, and SIM cards; several cards, of the same or different types, may be inserted into the same interface at once, and the interface is also compatible with different card types and with external memory cards. The device interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the device uses an eSIM, that is, an embedded SIM card, which may be embedded in the device and cannot be separated from it.
Referring now to FIG. 18, a block diagram of an electronic device 400 according to an embodiment of this application is shown. The device 400 may include one or more processors 401 coupled to a controller hub 403. For at least one embodiment, the controller hub 403 communicates with the processors 401 via a multi-drop bus such as a front side bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 406. The processors 401 execute instructions controlling general types of data processing operations. In one embodiment, the controller hub 403 includes, but is not limited to, a graphics and memory controller hub (GMCH) (not shown) and an input/output hub (IOH) (which may be on a separate chip) (not shown), where the GMCH includes the memory and graphics controllers and is coupled to the IOH.
The device 400 may also include a coprocessor 402 and a memory 404 coupled to the controller hub 403. Alternatively, one or both of the memory and the GMCH may be integrated into the processor (as described in this application), with the memory 404 and the coprocessor 402 coupled directly to the processor 401 and to the controller hub 403, the controller hub 403 being in a single chip with the IOH.
The memory 404 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two, and may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. The computer-readable storage medium stores instructions, specifically temporary and permanent copies of the instructions.
The instructions stored in the memory 404 may include instructions that, when executed by at least one of the processors, cause the electronic device to implement the methods shown in FIG. 4a, FIG. 5b, FIG. 8, FIG. 9, FIG. 12a, and FIG. 14.
In one embodiment, the coprocessor 402 is a special-purpose processor, such as a high-throughput many integrated core (MIC) processor, a network or communication processor, a compression engine, a graphics processor, a general-purpose computing on graphics processing units (GPGPU) unit, or an embedded processor. The optional nature of the coprocessor 402 is indicated by dashed lines in FIG. 18.
In one embodiment, the device 400 may further include a network interface controller (NIC) 406. The network interface 406 may include a transceiver providing a radio interface for the device 400 to communicate with any other suitable device (such as a front-end module or an antenna). In various embodiments, it may be integrated with other components of the device 400, and it can implement the functions of the communication unit in the embodiments above.
The device 400 may further include an input/output (I/O) device 405. The I/O 405 may include: a user interface designed to enable the user to interact with the device 400; a peripheral component interface designed to enable peripheral components to interact with the device 400 as well; and/or sensors designed to determine environmental conditions and/or location information related to the device 400.
It is worth noting that FIG. 18 is merely exemplary: although it shows the device 400 as including multiple components such as the processor 401, the controller hub 403, and the memory 404, in practical applications a device using the methods of this application may include only some of those components, for example only the processor 401 and the network interface 406. The nature of the optional components in FIG. 18 is shown with dashed lines.
Referring now to FIG. 19, a block diagram of a system on chip (SoC) 500 according to an embodiment of this application is shown. In FIG. 19, similar components bear the same reference numerals, and dashed boxes are optional features of more advanced SoCs. The SoC 500 includes: an interconnect unit 550 coupled to the processor 510; a system agent unit 580; a bus controller unit 590; an integrated memory controller unit 540; a set of one or more coprocessors 520, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 530; and a direct memory access (DMA) unit 560. In one embodiment, the coprocessor 520 includes a special-purpose processor, such as a network or communication processor, a compression engine, a general-purpose computing on graphics processing units (GPGPU) unit, a high-throughput MIC processor, or an embedded processor.
The SRAM unit 530 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. The computer-readable storage medium stores instructions, specifically temporary and permanent copies of the instructions.
The SoC shown in FIG. 19 may be provided in an electronic device. When it is, the SRAM unit 530 stores instructions that may include instructions which, when executed by at least one of the processors, cause the electronic device to implement the methods shown in FIG. 4a, FIG. 5b, FIG. 8, FIG. 9, FIG. 12a, and FIG. 14.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist: A and/or B may mean A alone, both A and B, or B alone.
The method embodiments of this application may be implemented in software, hardware, firmware, and the like.
Program code may be applied to input instructions to perform the functions described herein and to generate output information, which may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with the processing system, and may also be implemented in assembly or machine language if desired. Indeed, the mechanisms described herein are not limited in scope to any particular programming language; in either case, the language may be compiled or interpreted.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a computer-readable storage medium that represent various logic in a processor and that, when read by a machine, cause the machine to fabricate logic for performing the techniques described herein. These representations, known as "intellectual property (IP) cores", may be stored on a tangible computer-readable storage medium and supplied to various customers or production facilities to be loaded into the fabrication machines that actually manufacture the logic or processor.
In some cases, an instruction converter may be used to convert instructions from a source instruction set to a target instruction set. For example, the instruction converter may transform (for example, using static binary translation or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof, and may be on the processor, off the processor, or partly on and partly off the processor.

Claims (19)

  1. An audio output method for an electronic device, characterized in that the method comprises:
    receiving audio output requests from M audio applications on the electronic device;
    selecting N target applications from the M audio applications, and outputting audio data of the N target applications, wherein M is greater than N.
  2. The method according to claim 1, characterized in that N is a positive integer greater than or equal to 2.
  3. The method according to claim 1 or 2, characterized in that the selecting N target applications from the M audio applications comprises:
    selecting the N target applications from the M audio applications based on a current working scenario of the electronic device; or
    selecting the N target applications from the M audio applications based on preset application priority information; or
    selecting the N target applications from the M audio applications based on a user's selection operation on the M audio applications.
  4. The method according to claim 3, characterized in that the selecting the N target applications from the M audio applications based on the current working scenario of the electronic device comprises:
    determining the current working scenario of the electronic device;
    determining, from the M audio applications and according to the current working scenario, a scenario-characteristic application, wherein the scenario-characteristic application is an application required by the current working scenario;
    determining the N target applications based on the determined scenario-characteristic application, wherein the N target applications comprise at least the scenario-characteristic application.
  5. The method according to claim 4, characterized in that the determining the N target applications based on the determined scenario-characteristic application comprises:
    determining, based on the determined scenario-characteristic application, a priority of each of the M audio applications, wherein the priority of the scenario-characteristic application is higher than the priorities of the other applications among the M audio applications;
    determining, according to the priority ranking of the M audio applications, the N highest-priority audio applications as the N target applications, so that the N target applications comprise at least the scenario-characteristic application.
  6. The method according to claim 4, characterized in that the determining the current working scenario of the electronic device comprises:
    determining the current working scenario according to other electronic devices communicatively connected to the electronic device; or
    determining the current working scenario according to the applications currently running on the electronic device; or
    determining the current working scenario according to measurement data of a specific sensor on the electronic device, wherein the specific sensor is configured to measure displacement, speed, and/or acceleration data of the electronic device; or
    determining the current working scenario according to a scenario-designation operation of a user.
  7. The method according to any one of claims 3 to 6, characterized in that the current working scenario of the electronic device comprises an in-vehicle scenario, a home scenario, a meeting scenario, a sports scenario, or a high-speed-rail travel scenario.
  8. The method according to any one of claims 1 to 7, characterized in that the electronic device comprises multiple pieces of volume control information corresponding to the N target applications, each of the N target applications corresponding to one of the multiple pieces of volume control information; and
    the outputting audio data of the N target applications comprises:
    determining a volume of the target application according to the volume control information corresponding to the target application;
    outputting the audio data of the target application at the volume.
  9. The method according to claim 8, characterized in that the electronic device comprises N pieces of volume control information, the N target applications being in one-to-one correspondence with the N pieces of volume control information.
  10. The method according to claim 8, characterized in that the electronic device comprises multiple pieces of volume control information corresponding to the N target applications, wherein each of the multiple pieces of volume control information can be determined based on user input.
  11. The method according to any one of claims 1 to 10, characterized in that the outputting audio data of the N target applications comprises:
    playing the audio data of the N target applications through multiple audio playback devices, the audio playback devices comprising the electronic device and/or devices other than the electronic device.
  12. The method according to claim 11, characterized in that the playing the audio data of the N target applications through multiple audio playback devices comprises:
    determining the audio playback device corresponding to each of the N target applications, and playing the audio data of the N target applications based on the determination result of the audio playback devices;
    wherein the determining the audio playback device corresponding to each of the N target applications comprises:
    determining the audio playback device corresponding to the target application based on preset device priority information; or
    determining the audio playback device corresponding to the target application based on the number of times the target application has been played on each of the audio playback devices.
  13. The method according to any one of claims 1 to 12, characterized in that the M audio applications are applications other than a system phone application.
  14. The method according to any one of claims 1 to 13, characterized in that N is determined by the electronic device according to the number of audio playback devices currently communicatively connected to the electronic device.
  15. A media file recording method for an electronic device, characterized by comprising:
    receiving, while the electronic device outputs audio data of multiple audio applications, a first input used to select one or more target applications from the multiple audio applications;
    recording a first media file, wherein the recording a first media file comprises: recording audio data of the one or more target applications to generate the first media file.
  16. The method according to claim 15, characterized in that the number of the target applications is smaller than the number of the audio applications currently outputting audio.
  17. The method according to claim 15, characterized in that, while the electronic device outputs the audio data of the multiple audio applications, the electronic device outputs video data of a first video application; and
    the recording a first media file comprises:
    recording the audio data of the one or more target applications, and recording the video data of the first video application, to generate the first media file.
  18. An electronic device, comprising:
    a memory configured to store instructions to be executed by one or more processors of the electronic device;
    a processor which, when executing the instructions in the memory, causes the electronic device to perform the audio output method according to any one of claims 1 to 14, or to perform the media file recording method according to any one of claims 15 to 17.
  19. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions which, when executed on a computer, cause the computer to perform the audio output method according to any one of claims 1 to 14, or to perform the media file recording method according to any one of claims 15 to 17.
PCT/CN2022/086067 2021-04-21 2022-04-11 Audio output method, media file recording method, and electronic device WO2022222780A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22790889.4A EP4310664A1 (en) 2021-04-21 2022-04-11 Audio output method, media file recording method, and electronic device
US18/492,185 US20240045651A1 (en) 2021-04-21 2023-10-23 Audio Output Method, Media File Recording Method, and Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110430850.5A CN115309360A (zh) 2021-04-21 2021-04-21 Audio output method, media file recording method, and electronic device
CN202110430850.5 2021-04-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/492,185 Continuation US20240045651A1 (en) 2021-04-21 2023-10-23 Audio Output Method, Media File Recording Method, and Electronic Device

Publications (1)

Publication Number Publication Date
WO2022222780A1 (zh) 2022-10-27

Family

ID=83723698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/086067 WO2022222780A1 (zh) Audio output method, media file recording method, and electronic device

Country Status (4)

Country Link
US (1) US20240045651A1 (zh)
EP (1) EP4310664A1 (zh)
CN (1) CN115309360A (zh)
WO (1) WO2022222780A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106898372A (zh) * 2015-12-17 2017-06-27 杰发科技(合肥)有限公司 Recording method and recording system for an in-vehicle device
CN107770760A (zh) * 2017-10-18 2018-03-06 维沃移动通信有限公司 Method for identifying the type of a Bluetooth device, and mobile terminal
CN109445740A (zh) * 2018-09-30 2019-03-08 Oppo广东移动通信有限公司 Audio playback method and apparatus, electronic device, and storage medium
CN111580781A (zh) * 2020-05-27 2020-08-25 重庆蓝岸通讯技术有限公司 Audio output method for a mobile terminal, and mobile terminal
CN111858277A (zh) * 2020-07-07 2020-10-30 广州三星通信技术研究有限公司 Screen recording method and screen recording apparatus for an electronic terminal

Also Published As

Publication number Publication date
EP4310664A1 (en) 2024-01-24
US20240045651A1 (en) 2024-02-08
CN115309360A (zh) 2022-11-08

Legal Events

Code | Title | Description
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22790889; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 2022790889; Country of ref document: EP
ENP | Entry into the national phase | Ref document number: 2022790889; Country of ref document: EP; Effective date: 20231016
NENP | Non-entry into the national phase | Ref country code: DE