CN116668582A - Audio file sharing method and electronic equipment - Google Patents
- Publication number
- CN116668582A (application CN202310961126.4A)
- Authority
- CN
- China
- Prior art keywords
- audio
- audio data
- sharing
- service
- identifier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72406—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by software upgrading or downloading
Abstract
The application provides an audio file sharing method and an electronic device, relating to the field of terminal technologies. The audio file sharing method comprises the following steps: when the electronic device is in a call state and receives an audio sharing operation input by a user, a target output device acquires first audio data to be shared, where the target output device comprises a virtual output device connected to a microphone; the virtual output device transmits the first audio data to the microphone; and a communication module of the electronic device transmits the first audio data received by the microphone, together with second audio data captured by the microphone, to the peer device, so that the peer device plays the first audio data. With this method, the electronic device can share an audio file with the peer device during a call; the sound quality of the shared audio file is improved, the usage restrictions on audio sharing are reduced, and the scenarios in which audio sharing is applicable are increased.
Description
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method for sharing audio files and an electronic device.
Background
With the continuous development of mobile terminal technology, more and more users use mobile terminals, which provide an increasing number of functions such as voice calls and video calls.
However, in the existing approach, when the mobile terminal is in a call state and receives a music sharing operation from the user, the local device turns on its speaker, the speaker plays the music to be shared, and the local microphone picks up that music and transmits it to the peer device. This sharing mode severely degrades the sound quality of the audio file, and it only works when the audio file is played through the speaker, which limits the scenarios in which audio files can be shared. In other usage scenarios the mobile terminal does not support sharing the audio file at all, which degrades the user's experience of sharing audio files.
Disclosure of Invention
In order to solve the above technical problems, the application provides an audio file sharing method and an electronic device, so that the electronic device can share an audio file with the peer device during a call; the sound quality of the shared audio file is improved, the usage restrictions on audio sharing are reduced, and the scenarios in which audio sharing is applicable are increased.
In a first aspect, the present application provides a method for sharing an audio file, including: when the electronic device is in a call state and receives an audio sharing operation input by a user, a target output device acquires first audio data to be shared, where the target output device comprises a virtual output device connected to a microphone; the virtual output device transmits the first audio data to the microphone; and a communication module of the electronic device transmits the first audio data received by the microphone and second audio data captured by the microphone to the peer device, so that the peer device plays the first audio data.
Thus, if the user inputs an audio sharing operation while the electronic device is in a call state, the target output device acquires the first audio data to be shared. The target output device comprises a virtual output device, which is a virtualized module without the capability of actually playing audio. The virtual output device is connected to the microphone and transmits the first audio data to it, so the microphone can obtain both the first audio data and the second audio data. The first audio data obtained by the microphone is transmitted to the peer device through the local communication module. Because the microphone does not capture audio played by a speaker, the first audio data never travels through the air, which avoids the severe loss of sound quality that air transmission causes. Moreover, since no speaker is needed to share the audio data, this sharing mode is applicable to different scenarios.
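The data path above can be sketched in a few lines. This is an illustrative model only, not real Android APIs: the class names (`VirtualOutputDevice`, `Microphone`, `CommunicationModule`) and the list-of-frames representation are assumptions made to show how the first audio data reaches the uplink without ever being played through a speaker.

```python
class VirtualOutputDevice:
    """A virtualized output: it does not play audio, it forwards it."""
    def __init__(self):
        self.sink = None  # the microphone it is wired to

    def connect(self, microphone):
        self.sink = microphone

    def write(self, first_audio):
        # "Playing" on this device means injecting into the microphone path.
        if self.sink is not None:
            self.sink.inject(first_audio)

class Microphone:
    def __init__(self):
        self.injected = []   # first audio data, received from the virtual device
        self.captured = []   # second audio data, the user's voice from the air

    def inject(self, frames):
        self.injected.extend(frames)

    def capture(self, frames):
        self.captured.extend(frames)

    def read(self):
        # The uplink mixes the injected share audio with the captured voice.
        return self.injected + self.captured

class CommunicationModule:
    def send_to_peer(self, microphone):
        return microphone.read()

mic = Microphone()
virtual_out = VirtualOutputDevice()
virtual_out.connect(mic)

virtual_out.write(["song_frame_1", "song_frame_2"])  # first audio data
mic.capture(["voice_frame_1"])                       # second audio data

uplink = CommunicationModule().send_to_peer(mic)
```

The key point the sketch captures is that the song frames reach the uplink bit-exact, since they never cross the air gap between speaker and microphone.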
According to the first aspect, when the electronic device is in a call state and receives an audio sharing operation input by the user, the target output device obtains the first audio data to be shared as follows: the AudioFlinger service acquires the first audio data and an operation identifier corresponding to the audio sharing operation, where the operation identifier indicates the target output module to be used for this sharing operation; the AudioFlinger service adds the first audio data to the corresponding target output module according to the operation identifier; the AudioFlinger service requests first indication information from the audio policy service; the audio policy service determines the first indication information according to the audio sharing operation and transmits it to the AudioFlinger service; and the AudioFlinger service controls the target output module to transmit the first audio data to the virtual output device according to the first indication information.
In this way, different audio sharing operations correspond to different output modules. The sharing management module determines the operation identifier of the current sharing, so that the AudioFlinger service can add the first audio data to the output module corresponding to that operation, and first audio data obtained under different operations is held by different output modules. Meanwhile, the AudioFlinger service controls the target output module, which corresponds to the target output device, to transmit the first audio data according to the first indication information, which avoids the problem of the target output device failing to receive the first audio data.
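The identifier-driven dispatch described above can be sketched as a lookup table from operation identifier to output module. This is a hypothetical model, not the real AudioFlinger interface; the identifier values `"id_1"`/`"id_2"` and the module names are assumptions for illustration.

```python
SINGLE_ENDED = "id_1"   # share to the peer device only
DOUBLE_ENDED = "id_2"   # share to the peer and also play locally

class OutputModule:
    def __init__(self, name):
        self.name = name
        self.buffer = []

    def add(self, audio):
        self.buffer.append(audio)

class AudioFlingerSketch:
    def __init__(self):
        # One output module per operation identifier, set up in advance.
        self.modules = {
            SINGLE_ENDED: OutputModule("first_output"),
            DOUBLE_ENDED: OutputModule("second_output"),
        }

    def add_audio(self, op_id, first_audio):
        # The operation identifier alone selects the target output module.
        module = self.modules[op_id]
        module.add(first_audio)
        return module

af = AudioFlingerSketch()
target = af.add_audio(SINGLE_ENDED, "first_audio_data")
```

Because the identifier travels bound to the audio data, the mixer never has to guess which module a given buffer belongs to.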
According to the first aspect, the AudioFlinger service obtains the first audio data and the operation identifier corresponding to the audio sharing operation as follows: the sharing management module determines the operation identifier in response to the audio sharing operation input by the user; the audio track (AudioTrack) acquires the first audio data; the AudioTrack obtains the operation identifier from the sharing management module; and the AudioTrack binds the operation identifier to the first audio data and transmits both to the AudioFlinger service. In this way, the AudioTrack records the operation identifier corresponding to the audio sharing operation, which indicates the output module to be used, and binding the identifier to the first audio data before handing it to the AudioFlinger service lets the service quickly add the first audio data to the matching output module, avoiding the situation where the receiving output module cannot forward the first audio data to the correct target output device.
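The binding step can be sketched as follows. Again this is an assumed model, not the real Android `AudioTrack` API: the `SharingManager` and `AudioTrackSketch` names and the dict "packet" format are inventions for illustration.

```python
class SharingManager:
    """Holds the identifier of the user's most recent sharing operation."""
    def __init__(self):
        self.current_op_id = None

    def on_share_operation(self, op_id):
        self.current_op_id = op_id

class AudioTrackSketch:
    def __init__(self, manager):
        self.manager = manager

    def deliver(self, first_audio):
        # Fetch the identifier from the sharing manager and bind it to the
        # audio data before handing the pair downstream to the mixer.
        op_id = self.manager.current_op_id
        return {"op_id": op_id, "data": first_audio}

mgr = SharingManager()
mgr.on_share_operation("id_1")          # user chose single-ended sharing
packet = AudioTrackSketch(mgr).deliver("first_audio_data")
```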
According to the first aspect, the audio policy service determines the first indication information according to the audio sharing operation as follows: the audio policy service obtains, from the sharing management module, the operation identifier bound to the first audio data; then, using the identifier and a pre-stored first correspondence between operation identifiers and target output devices, it determines the target output device of the first audio data. Because the first correspondence is stored in advance, the audio policy service can quickly determine the target output device corresponding to the first audio data.
According to the first aspect, if the AudioFlinger service detects that the operation identifier is the first identifier, it adds the first audio data to the first output module according to the indication of the first identifier, the first output module being matched with the virtual output device. In this way, the first identifier indicates the output module for the current sharing operation, and because the first output module is matched with the virtual output device, it can transmit the first audio data to the virtual output device. This achieves the goal of single-ended sharing: the local end does not play the first audio data, while the peer end does.
According to the first aspect, the AudioFlinger service adds the first audio data to the corresponding target output module according to the operation identifier as follows: if the AudioFlinger service detects that the operation identifier is the second identifier, it adds the first audio data to the second output module, which is matched with both the virtual output device and the target playback device, the target playback device being the device the electronic equipment currently uses to play audio. Because the second output module is matched with both devices, the local end can share the audio with the peer device while also playing it locally. When the first audio data is added to the second output module during double-ended sharing (the local end shares the audio with the peer device and also plays it), the module can transmit the first audio data to both the virtual output device and the target playback device, avoiding the problem of the first audio data failing to reach the correct output device.
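The double-ended fan-out can be sketched as one output module with two matched sinks. All names here (`Sink`, `SecondOutputModule`, the `"headset"` device) are hypothetical; the point is only that a single write reaches both the peer path and the local playback path.

```python
class Sink:
    def __init__(self, name):
        self.name = name
        self.received = []

    def write(self, audio):
        self.received.append(audio)

class SecondOutputModule:
    """Matched with both the virtual output device and the playback device."""
    def __init__(self, virtual_device, playback_device):
        self.sinks = [virtual_device, playback_device]

    def add(self, first_audio):
        for sink in self.sinks:   # fan out to every matched device
            sink.write(first_audio)

virtual_device = Sink("virtual_output")   # forwards toward the microphone/peer
playback_device = Sink("headset")         # the current local playback device
module = SecondOutputModule(virtual_device, playback_device)
module.add("first_audio_data")
```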
According to the first aspect, the audio policy service determines the target output device of the first audio data, from the operation identifier and the stored first correspondence between operation identifiers and target output devices, as follows: the audio policy service obtains, from the sharing management module, the operation identifier bound to the first audio data; if the identifier is the first identifier, the service determines, from the first identifier and the first correspondence, that the indication information indicates that the target output device corresponding to the first audio data includes the virtual output device. Thus, for single-ended sharing, the audio policy service can quickly determine that the target output device includes the virtual output device.
According to the first aspect, the audio policy service determines the target output device of the first audio data, from the operation identifier and the stored first correspondence, as follows: the audio policy service obtains, from the sharing management module, the operation identifier bound to the first audio data; if the identifier is the second identifier, the service determines, from the second identifier and the first correspondence, that the first indication information indicates that the target output device corresponding to the first audio data includes both the virtual output device and the target playback device, the target playback device being the device the electronic equipment currently uses to play audio. In this way, for a double-ended sharing operation, the target output device is determined to include both the virtual output device and the target playback device, ensuring that the first audio data can be played on the local device and the peer device at the same time.
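The pre-stored first correspondence of the two aspects above amounts to a table from operation identifier to target output devices, resolved in one lookup. The identifier values and device names are assumptions for illustration, not values from the patent.

```python
# Hypothetical first correspondence between operation identifiers and targets.
FIRST_CORRESPONDENCE = {
    "id_1": ["virtual_output"],                   # single-ended sharing
    "id_2": ["virtual_output", "target_player"],  # double-ended sharing
}

def resolve_targets(op_id):
    """Audio-policy-style resolution: one dictionary lookup, no inference."""
    return FIRST_CORRESPONDENCE[op_id]

single = resolve_targets("id_1")
double = resolve_targets("id_2")
```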
According to the first aspect, the method further comprises: the AudioFlinger service controls the target output module, according to the first indication information, to transmit the first audio data to the playback device of the electronic equipment, so that the target playback device plays the first audio data. In this way, the AudioFlinger service also delivers the first audio data to the local target playback device, so the local device can play the first audio data at the same time.
According to the first aspect, before the AudioFlinger service controls the target output module to transmit the first audio data to the target output device according to the first indication information, the method further includes: the audio policy service instructs the audio kernel to open the virtual output device according to the first indication information; and the audio kernel connects the virtual output device to the microphone. Thus, when the audio policy service determines that the target output device includes the virtual output device, it instructs the audio kernel to open it, ensuring that the virtual output device can receive the audio data output by the target output module and avoiding the output failure that would occur if the virtual output device were not opened.
According to the first aspect, the audio kernel connects the virtual output device to the microphone by closing a first switch, which controls the connection and disconnection between the virtual output device and the microphone; after the communication module of the electronic device transmits the first audio data received by the microphone and the second audio data captured by the microphone to the peer device, the method further includes: the audio kernel opens the first switch. In this way, the connection between the microphone and the virtual output device is governed by the first switch. Under normal conditions (when no audio is being shared) the virtual output device is disconnected from the microphone, so output audio data cannot contaminate the microphone's captured data; during audio sharing, the first switch is closed so that the first audio data reaches the microphone.
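The first-switch lifecycle can be sketched as a small state machine. This is illustrative Python, not kernel code; the helper `mic_uplink` is an invented name standing in for the microphone mixing step.

```python
class FirstSwitch:
    """Controls the wire between the virtual output device and the mic."""
    def __init__(self):
        self.closed = False  # open by default: no sharing in progress

    def close(self):
        self.closed = True

    def open(self):
        self.closed = False

def mic_uplink(switch, injected, captured):
    # Injected share audio reaches the uplink only while the switch is closed;
    # the captured voice always does.
    return (injected if switch.closed else []) + captured

switch = FirstSwitch()
before = mic_uplink(switch, ["song"], ["voice"])   # before sharing starts
switch.close()
during = mic_uplink(switch, ["song"], ["voice"])   # while sharing
switch.open()
after = mic_uplink(switch, ["song"], ["voice"])    # after transmission ends
```

Keeping the switch open outside of sharing is what guarantees that ordinary output audio can never leak into the capture path.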
According to the first aspect, the audio track AudioTrack obtains the first audio data as follows: if the sharing management module receives a second selected operation, it detects whether the electronic device is currently in a screen recording state, where the second selected operation instructs the electronic device both to transmit the audio data to be shared to the peer device and to play it locally. If the sharing management module detects that the electronic device is recording the screen, it instructs the AudioTrack, according to the second selected operation, to acquire both the first audio data and third audio data, the third audio data being identical to the first audio data. The AudioTrack obtains the operation identifier from the sharing management module as follows: the sharing management module binds a first identifier to the first audio data, the first identifier indicating that the target output module for this sharing is the first output module. After the AudioTrack acquires the third audio data, the method further comprises: the sharing management module binds a screen recording identifier to the third audio data; the AudioTrack transmits the screen recording identifier and the third audio data to the AudioFlinger service; the AudioFlinger service adds the third audio data to a third output module according to the screen recording identifier, the third output module being used to transmit the audio data generated during screen recording to the storage device of the electronic equipment to generate a screen recording file; the audio policy service generates second indication information according to the screen recording identifier and transmits it to the AudioFlinger service; and the AudioFlinger service controls the third output module, according to the second indication information, to transmit the third audio data to the target playback device of the electronic equipment, the target playback device being the device the electronic equipment currently uses to play audio. Therefore, when the electronic device is in a call state and a screen recording state and the local device shares the audio file in double-ended mode, the sharing management module instructs the AudioTrack to acquire both the first audio data and the third audio data, which are identical; in other words, the audio data to be shared is rendered twice. The AudioTrack binds the screen recording identifier to the third audio data so that the AudioFlinger service can add it to the third output module (the duplicate Output), ensuring that the audio data generated during screen recording is stored there. The duplicate Output can transmit the stored third audio data to the target playback device to realize local playback, while the first audio data, added to the target output module, is still shared normally with the peer device.
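The screen-recording branch can be sketched as two identical copies of the track routed by two different identifiers. All names here (`RECORDING_ID`, `"duplicate_output"`, the packet tuples) are assumptions for illustration, not the patent's actual identifiers.

```python
RECORDING_ID = "rec"

class Output:
    def __init__(self, name):
        self.name = name
        self.buffer = []

    def add(self, audio):
        self.buffer.append(audio)

def route(packets, outputs):
    # Each (identifier, audio) packet lands in the output that identifier names.
    for op_id, audio in packets:
        outputs[op_id].add(audio)

outputs = {
    "id_1": Output("first_output"),            # sharing path to the peer
    RECORDING_ID: Output("duplicate_output"),  # recording + local playback path
}

first_audio = "song_frames"
third_audio = first_audio  # identical copy rendered for the recording path

route([("id_1", first_audio), (RECORDING_ID, third_audio)], outputs)
```

Rendering the track twice is what lets one copy feed the peer while the other feeds both the local speaker and the screen-recording file, without either path interfering with the other.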
According to the first aspect, the sharing management module determines the operation identifier of the audio sharing operation, in response to the audio sharing operation input by the user, as follows: in response to a first selected operation on a first sharing mode in the call interface, the sharing management module obtains the first identifier corresponding to that operation, the first sharing mode instructing the electronic device to share the audio data with the peer device; in response to a second selected operation on a second sharing mode in the call interface, the sharing management module obtains the second identifier corresponding to that operation, the second sharing mode instructing the electronic device both to transmit the audio data to be shared to the peer device and to play it locally. The audio sharing operation comprises the first selected operation and the second selected operation. Thus, by choosing between the first and second sharing modes, the user selects the type of audio sharing.
According to the first aspect, the audio track acquires the first audio data as follows: in response to the user's first selected operation on the first sharing mode or second selected operation on the second sharing mode, a first display area is shown on the call interface, containing audio icons of the audio files available for sharing; in response to a third selected operation on an audio icon in the first display area, the AudioTrack acquires the first audio data selected by the user. Through the first display area the user can quickly select the audio data to share without leaving the call interface, which improves the user experience.
According to the first aspect, before the user's first selected operation on the first sharing mode or second selected operation on the second sharing mode, the method further comprises: in response to a fourth selected operation on a sharing control in the call interface, displaying the first sharing mode and the second sharing mode on the call interface. Displaying the sharing modes only when the user has a sharing need reduces the area they occupy on the call interface and the disturbance to the user.
According to the first aspect, the method further comprises: in response to the user's first selected operation on the first sharing mode or second selected operation on the second sharing mode in the call interface, the sharing management module detects whether the microphone of the electronic device is on; if the microphone is on, the sharing management module proceeds to display the first display area on the call interface; if the microphone is off, the sharing management module outputs prompt information asking the user to turn on the microphone, and does not display the first display area. In this way, the electronic device prompts the user to enable the microphone in time, avoiding audio sharing failures caused by a disabled microphone.
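The microphone pre-check can be sketched as a single guard. The function name, return shape, and prompt text are all hypothetical; only the gating logic reflects the aspect above.

```python
def handle_share_selection(mic_on):
    """Show the audio picker only when the microphone is on; otherwise prompt."""
    if mic_on:
        return {"show_picker": True, "prompt": None}
    return {
        "show_picker": False,
        "prompt": "Please turn on the microphone to share audio.",
    }

ok = handle_share_selection(mic_on=True)
blocked = handle_share_selection(mic_on=False)
```

Failing fast here matters because, in this design, the shared audio travels through the microphone path: with the microphone off, sharing would silently fail after the user had already picked a song.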
In a second aspect, the present application provides an electronic device comprising: one or more processors; a memory; and one or more computer programs stored in the memory which, when executed by the one or more processors, cause the electronic device to perform the audio file sharing method of the first aspect or any implementation thereof.
Any implementation of the second aspect corresponds to an implementation of the first aspect. For the technical effects of the second aspect and any of its implementations, refer to the corresponding implementation of the first aspect; they are not repeated here.
In a third aspect, the present application provides a computer readable medium storing a computer program, where the computer program when executed on an electronic device causes the electronic device to perform the method for sharing an audio file corresponding to any implementation manner of the first aspect and the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram illustrating a cell phone sharing music;
FIG. 2 is a schematic diagram illustrating sharing audio with a cell phone in a text chat state;
FIG. 3a is a schematic diagram of an exemplary electronic device;
FIG. 3b is a schematic diagram of a software architecture of an exemplary electronic device;
FIG. 4a is a flow chart illustrating a method of audio file sharing of the present application;
FIG. 4b is a flow chart illustrating a target output device acquiring first audio data to be shared;
FIG. 5 is a schematic diagram of an interface for an exemplary user-input audio sharing operation;
FIG. 6 is an exemplary interaction diagram between internal modules of a cell phone during single-ended audio sharing;
FIG. 7 is a schematic structural diagram of the internal modules of a cell phone;
FIG. 8 is an exemplary interaction diagram between internal modules of a cell phone during double-ended audio sharing;
FIG. 9 is a schematic diagram of another internal module structure of the cell phone;
FIG. 10 is a schematic diagram of another internal module structure of the cell phone;
FIG. 11 is an interaction diagram between internal modules when the cell phone is in a screen recording state and performs double-ended audio sharing;
FIG. 12 is a schematic diagram of the hardware connection between the virtual output module and the microphone in this example.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone.
The terms first and second and the like in the description and in the claims of embodiments of the application, are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
Before the technical scheme of the embodiment of the application is described, an application scene of the embodiment of the application is described with reference to the attached drawings.
User A uses mobile phone A to talk with user B. When user A wants to share the song "QHC" with user B's mobile phone B, phone A can share it in the manner of fig. 1. As shown in fig. 1, the audio playing module in phone A plays the audio data to be shared through the speaker 101 of phone A; the microphone 102 of phone A picks up the audio played by the speaker 101 and passes the captured audio data to the call application, which transmits it to the call application of phone B. This way of sharing audio data requires a speaker or an external player to play the audio, and when the microphone captures the audio played by the speaker, the audio travels through the air, so the sound quality of the audio data received by phone B's call application is degraded. In addition, if phone A plays the audio through earphones, this approach cannot share the audio with phone B's call application at all. The sharing mode is therefore limited and inconvenient for users.
Current voice call applications do not support audio sharing during a call. A schematic diagram of a voice call application is shown in fig. 2. As shown at 2a in fig. 2, in the text chat interface 201, the user may click the music sharing control 202 in the call application, and the call application jumps to the interface 203 in response to the user's click operation. The user may search in the search box of the interface 203 for the music to be shared (the song "ZXX" shown at 2b in fig. 2), the interface 203 displays the search results, and the call application transmits the music "ZXX" to the call application of the peer device in response to the user clicking the send button 204.
However, sending an audio file during an ongoing call is not supported in such call applications.
The present application provides a method for sharing audio files, executed by an electronic device, where the electronic device may be a mobile phone, a tablet computer, a vehicle-mounted device, or the like. When the electronic device is in a call state, in response to an audio sharing operation input by the user, a target output device obtains first audio data to be shared, where the target output device includes a virtual output device connected to the microphone. The virtual output device transmits the first audio data to the microphone, and the microphone transmits the first audio data together with the collected second audio data to the peer device, so that the peer device plays the first audio data. In this example, because the first audio data is output through the virtual output device, and the virtual output device is connected to the microphone, the microphone can obtain the first audio data to be shared without any acoustic pickup.
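The end-to-end path just described can be illustrated with a minimal simulation. All names below are hypothetical; this is only a sketch of the data flow, not the claimed implementation:

```python
def share_audio(first_audio_data, second_audio_data):
    """Simulate the sharing path: the virtual output device receives the
    to-be-shared (first) audio data instead of a physical speaker, forwards
    it to the microphone path, and the communication module then carries
    both paths to the peer device."""
    # Virtual output device: cannot play audio, it only forwards samples.
    forwarded = list(first_audio_data)
    # The microphone now carries two paths: the forwarded first audio data
    # and the second audio data it captured itself from the environment.
    mic_paths = {"first": forwarded, "second": list(second_audio_data)}
    # The communication module transmits both paths to the peer device.
    return mic_paths
```

Because the first audio data never travels through the air, the samples reaching the peer are bit-exact rather than degraded by acoustic pickup.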
Fig. 3a is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. It should be understood that the electronic device 100 shown in fig. 3a is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have different configurations of components. The various components shown in fig. 3a may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
Fig. 3b is a block diagram of the software architecture of the electronic device 100 according to an embodiment of the application.
The layered architecture of the electronic device 100 divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into three layers: from top to bottom, an application layer, an application framework layer, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 3b, the application package may include an audio sharing application, a chat application, a WLAN, a short message, a camera, a gallery, a call, a map, navigation, bluetooth, etc. application.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3b, the application framework layer may include an audio system, a view system, a resource manager, a window manager, etc., and may also include other modules such as a content provider, a telephony manager, a notification manager, etc.
The audio system includes an audio flinger (AudioFlinger) service and an audio policy (AudioPolicy) service. The AudioPolicy service is used to determine the policy for routing audio streams to devices. The AudioFlinger service executes the audio policy determined by AudioPolicy and controls the streaming of audio data.
The window manager is used for managing window programs. It can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100, such as the management of call states (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
It will be appreciated that the layers, and the components contained in each layer, in the software structure shown in fig. 3b do not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer layers than shown, and each layer may include more or fewer components, which is not limited in this application.
The process of audio file sharing in the present application is specifically described below. For ease of understanding, some of the terms used in this description are introduced first.
(1) Audio track (AudioTrack): located at the application layer, used for the output of audio data.
(2) Audio policy service (AudioPolicy): located at the application framework layer, used to determine an audio policy, which indicates the device to which the audio stream is routed.
(3) AudioFlinger: located at the application framework layer, used to implement the audio policy determined by AudioPolicy and to control the streaming of audio data.
Fig. 4a is a flowchart of a method for sharing audio files according to an embodiment of the present application, including:
step 401: when the electronic device is in a call state and receives an audio sharing operation input by the user, the target output device obtains the first audio data to be shared, where the target output device includes a virtual output device connected to the microphone.
In this example, the electronic device is a mobile phone, and an audio sharing application is installed in the mobile phone, where the audio sharing application may be a system application.
In one embodiment, the target output device obtaining the first audio data to be shared includes the following steps, as shown in fig. 4 b:
Step 4011: the AudioFlinger service obtains the first audio data and the operation identifier corresponding to the audio sharing operation, where the operation identifier indicates the target output module to be used for this audio sharing operation.
Step 4012: the AudioFlinger service adds the first audio data to the corresponding target output module according to the indication of the operation identifier.
Step 4013: the audio policy service determines first indication information according to the audio sharing operation and transmits the first indication information to the AudioFlinger service.
Step 4014: the AudioFlinger service controls the target output module to transmit the first audio data to the target output device according to the first indication information, where the target output device includes a virtual output device connected to the microphone.
The specific process of step 4011 is as follows:
When the mobile phone is running, the audio sharing application is on by default. Optionally, the user may turn off the audio sharing application in the settings interface. The audio sharing application includes a sharing management module, which is configured to record an operation identifier (hereinafter also referred to as the sharing state) of each sharing operation. The audio sharing operation is either a first selection operation in which the user selects single-ended sharing, or a second selection operation in which the user selects double-ended sharing. Single-ended sharing instructs the electronic device to transmit the first audio data to be shared to the peer device only; double-ended sharing instructs the electronic device to transmit the first audio data to the peer device and, at the same time, to play the first audio data on the device that is currently playing audio at the local end.
When the mobile phone is in a call state, the user can input an audio sharing operation, and the sharing management module records the sharing state of this audio sharing in response. Optionally, the sharing state indicates the target output module corresponding to the audio sharing operation: for example, a first identifier (e.g. "Output-1") indicates that the target output module is the first output module, and a second identifier (e.g. "Output-2") indicates that the target output module is the second output module. Optionally, other identifiers may be used to represent the target output modules, for example the character "A" for the first output module and the character "B" for the second output module. Optionally, the first output module transmits the audio data to the virtual output device only, while the second output module transmits the audio data to both the virtual output device and the target playing device, where the target playing device is the device on which the electronic device is currently playing audio; for example, if the mobile phone is playing music through a Bluetooth speaker, the target playing device is the Bluetooth speaker.
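As an illustration of the identifier scheme described above, the mapping from sharing type to operation identifier can be sketched as follows. The function and constant names are hypothetical; only the identifiers Output-0/Output-1/Output-2 come from the text:

```python
# Sharing types described in the text (names are illustrative).
SINGLE_ENDED = "single"   # transmit first audio data to the peer device only
DOUBLE_ENDED = "double"   # transmit to the peer and also play locally

def operation_identifier(sharing_type):
    """Map a sharing operation to the identifier of its target output module."""
    if sharing_type == SINGLE_ENDED:
        return "Output-1"   # first output module: virtual output device only
    if sharing_type == DOUBLE_ENDED:
        return "Output-2"   # second output module: virtual device + playback device
    return "Output-0"       # no sharing operation in progress
```

A character scheme ("A"/"B") would work identically; all that matters is that AudioFlinger and AudioPolicy agree on the mapping.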
After the user inputs the audio sharing operation, the mobile phone may prompt the user to select the audio data to be shared; for example, the mobile phone may jump to an audio data interface so that the user can select the audio data to be shared. The audio track (AudioTrack) in the audio sharing application obtains the audio data selected by the user as the first audio data in response to the user's selection operation.
Optionally, the AudioTrack may obtain the operation identifier of the current sharing state from the sharing management module, and the AudioTrack then transmits the obtained first audio data together with the corresponding operation identifier to the AudioFlinger service.
The specific process of step 4012 is as follows:
the AudioFlinger service obtains the first audio data and the corresponding operation identifier, and adds the first audio data to the corresponding target output module according to that identifier. For example, when the AudioFlinger service detects that the operation identifier is "Output-1", it determines that the target output module of the first audio data is the first output module (whose identifier is Output-1) and adds the first audio data to the first output module (i.e., to Output-1). When the AudioFlinger service detects that the operation identifier is the second identifier (e.g. "Output-2"), it determines that the target output module is the second output module (whose identifier in the AudioFlinger service is Output-2).
It should be noted that a plurality of output modules (i.e. Outputs) are provided in the application framework layer, and different output modules carry the audio data output by different audio threads. In this example, the application framework layer includes a first output module (Output-1 as shown in fig. 7), a second output module (Output-2 as shown in fig. 7), and a third output module (Output-0 as shown in fig. 7).
After the AudioFlinger service determines the target output module of the first audio data, it may send a request for the playing device to AudioPolicy. After AudioPolicy receives the request, step 4013 may be performed.
The specific process of step 4013 is as follows:
for example, AudioPolicy (i.e. the audio policy service) may obtain the operation identifier (i.e. the sharing state) of the current audio sharing from the sharing management module, and determine the target output device of the first audio data according to that identifier. Optionally, AudioPolicy generates first indication information, which indicates the target output device of the first audio data. When AudioPolicy detects that the operation identifier is the first identifier, it determines that the target output device of the first audio data includes the virtual output device. Optionally, the virtual output device is a virtualized module configured to receive audio data from Output-1 or Output-2 and output it to the microphone. The virtual output device itself has no ability to play audio data.
When AudioPolicy detects that the operation identifier is the second identifier, it determines that the target output device of the first audio data includes both the virtual output device and a target playing device, where the target playing device is the device on which the mobile phone is currently playing audio. Optionally, the playing devices of the mobile phone include: a wired earphone, a Bluetooth earphone, the speaker of the mobile phone, and an external playing device connected to the mobile phone (such as a Bluetooth speaker).
Optionally, when AudioPolicy determines that the target output device further includes a target playing device, one of the available playing devices may be selected to play the first audio data according to a pre-stored device priority rule. Optionally, the device priority rule includes: if AudioPolicy detects that the mobile phone is connected to both a wired earphone and a Bluetooth earphone, the wired earphone plays the audio data; if AudioPolicy detects that the mobile phone is connected to a Bluetooth device but not to a wired earphone, the Bluetooth device plays the audio data; if AudioPolicy detects neither a wired earphone nor a Bluetooth device, the speaker of the mobile phone plays the audio data.
For example, if AudioPolicy detects that the mobile phone is connected to a Bluetooth earphone, AudioPolicy determines, according to the pre-stored device priority rule, that the target output device includes the Bluetooth earphone.
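The device priority rule above amounts to an ordered check; a sketch, with hypothetical names:

```python
def pick_playback_device(wired_headset_connected, bluetooth_connected):
    """Device priority rule from the text: a wired earphone takes priority,
    then a Bluetooth device, and otherwise the phone's own speaker."""
    if wired_headset_connected:
        return "wired_headset"
    if bluetooth_connected:
        return "bluetooth"
    return "speaker"
```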
AudioPolicy transmits the generated indication information to the AudioFlinger service to instruct it to control the target output module to transmit the first audio data to the target output device.
After AudioPolicy determines the target output device, the information of the target output device is transmitted to the AudioFlinger service, and the AudioFlinger service performs step 4014.
The specific process of step 4014 is as follows:
for example, upon receiving the indication information, the AudioFlinger service may instruct the audio kernel to open the virtual output device and to connect the virtual output device with the microphone. If the target output device includes the virtual output device, the AudioFlinger service transmits the first audio data to the virtual output device.
If the target output device includes both the virtual output device and a playing device of the mobile phone, the AudioFlinger service transmits the first audio data to the virtual output device and to the target playing device of the mobile phone, so that the playing device of the mobile phone plays it.
Step 402: the virtual output device transmits the first audio data to the microphone.
Step 403: and the communication module of the electronic equipment transmits the first audio data received by the microphone and the second audio data acquired by the microphone to the opposite-end equipment so as to enable the opposite-end equipment to play the first audio data.
The microphone thus carries two paths of audio data: the first audio data, and the second audio data collected by the microphone itself. The microphone transmits the received audio data to the communication module at the local end, the communication module at the local end transmits the first audio data and the second audio data to the communication module of the peer device, and the audio playing application of the peer device receives the first audio data transmitted by the mobile phone and plays it.
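The text states only that both paths are transmitted to the peer; how they are combined on the uplink is not specified. One common assumption is sample-wise 16-bit PCM mixing with clipping, sketched here under that assumption:

```python
def mix_uplink(first_audio, second_audio, lo=-32768, hi=32767):
    """Mix the shared (first) audio with the microphone-captured (second)
    audio sample by sample. The mixing strategy is an assumption made for
    illustration; the text only says both paths reach the peer device."""
    # Pad the shorter stream with silence so the two paths align.
    n = max(len(first_audio), len(second_audio))
    a = list(first_audio) + [0] * (n - len(first_audio))
    b = list(second_audio) + [0] * (n - len(second_audio))
    # Sum sample-wise and clip to the 16-bit PCM range.
    return [min(hi, max(lo, x + y)) for x, y in zip(a, b)]
```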
Optionally, if the target output device includes a target playing device, the target playing device of the mobile phone plays the first audio data.
In this example, the mobile phone transmits the first audio data to be shared to the virtual output device, and the virtual output device is connected to the microphone, so the microphone does not need to obtain the audio data to be shared by picking up sound from the environment. This preserves the sound quality of the first audio data obtained by the microphone and improves the quality of the shared audio data. At the same time, the usage limitations of audio sharing are reduced, such as the requirement that the first audio data must be played through a speaker.
Fig. 5 is a schematic diagram of an interface for an exemplary audio sharing operation input by the user. The process of the user inputting the audio sharing operation and the process of the AudioTrack obtaining the audio data are specifically described below with reference to fig. 5:
As shown at 5a in fig. 5, mobile phone A of user A is in a voice call state, and the sharing control 502 is displayed in the call interface 501. Optionally, the sharing control 502 may be a hover ball as shown at 5a in fig. 5; the sharing control 502 may also be a touch button, and this example does not limit the style or type of the sharing control. The call interface 501 shows that the microphone 503 is on. When the user wants to share a song with mobile phone B of user B, the user can click the sharing control 502 in the call interface 501. The audio sharing application displays a first sharing style 505 and a second sharing style 504 in response to the user's click operation, as shown at 5b in fig. 5.
Optionally, the first sharing style may be a control. The sharing management module in the audio sharing application responds to the user's first or second selection operation, records the sharing state of the sharing operation, and generates the corresponding operation identifier (for single-ended sharing, the first identifier, e.g. "Output-1"). In this example, taking the first sharing style 505 as an example, the user performs a selection operation (such as a click, touch, or long press) on the first sharing style 505, and the sharing management module records the first identifier. At the same time, a first display area 506 is displayed in the call interface 501, where the first display area includes audio icons of the audio files to be shared, and each audio icon is linked to the storage location of its audio file. As shown at 5c in fig. 5, the user clicks the audio icon of "01.mp3", i.e. the user selects the audio data named "01.mp3", and the AudioTrack in the audio sharing application obtains the audio data of "01.mp3".
Optionally, in response to the user clicking the first sharing style 505 or the second sharing style 504, the sharing management module may detect whether the microphone of the mobile phone is currently on, and if so, record the audio sharing operation (i.e. the sharing management module records the operation identifier). In this example, taking the sharing management module responding to the user clicking the first sharing style 505 as an example, if the sharing management module detects that the microphone is on, the first identifier is recorded. If the sharing management module detects that the microphone of the mobile phone is currently off, prompt information may be displayed and the operation of displaying the first display area is not performed. As shown at 5d in fig. 5, the call interface 501 shows that the microphone 503 is off; when the user clicks the first sharing style 505, a prompt 507 (shown at 5e in fig. 5) is displayed on the call interface, prompting the user to turn on the microphone before sharing audio data.
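The microphone check described above can be sketched as follows (function and field names are hypothetical):

```python
def handle_sharing_click(mic_on, identifier):
    """Record the operation identifier only when the microphone is on;
    otherwise show a prompt and skip displaying the first display area."""
    if not mic_on:
        return {"recorded": None,
                "prompt": "turn on the microphone before sharing audio"}
    return {"recorded": identifier, "prompt": None}
```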
Fig. 6 is an interaction diagram between the internal modules of the mobile phone during single-ended audio sharing. Fig. 7 is a schematic structural diagram of the internal modules of the mobile phone. The process of single-ended audio sharing on the mobile phone is specifically described below with reference to fig. 6 and 7.
Step 601: the second application turns on the microphone.
Illustratively, the second application may be an application with a call function, such as a chat application or a call application. When the second application is opened, the microphone is turned on by default. Alternatively, when the second application is opened, the microphone may be off by default and then turned on in response to the user's operation of turning on the microphone.
Illustratively, the audio sharing application (i.e. the first application) may be a system application. Optionally, when the mobile phone detects that the second application is running, the audio sharing application is started. Optionally, the audio sharing application may be on by default, and the mobile phone may also turn off the audio sharing application in response to the user's operation of turning it off. In this example, when the mobile phone detects that the second application is running, the audio sharing application is started.
When the audio sharing application is running, a sharing control is displayed on the current running interface. Optionally, the sharing control may be a hover ball, a floating window, or the like. As shown at 5a in fig. 5, the sharing control is the hover ball 502. Optionally, if the mobile phone jumps to another interface while in a call state, the sharing control is displayed on the new current interface.
Step 602: the user inputs a single-ended sharing operation.
For example, the user may click the sharing control, and the audio sharing application displays the first sharing style and the second sharing style in response. The first sharing style and the second sharing style may be controls, such as click controls. The sharing management module executes step 603 in response to the first selection operation on the first sharing style or the second selection operation on the second sharing style, that is, the sharing management module records the sharing state of the current audio sharing. At the same time, in response to the user's first or second selection operation, a first display area is displayed on the current interface; the first display area displays icons of audio files so that the user can determine the first audio data. The size of the first display area may be smaller than the size of the call interface.
For example, as shown at 5b and 5c in fig. 5, the user clicks the first sharing style 505, and the first display area 506 is displayed in the current call interface 501. When the user selects "01.mp3", the AudioTrack in the audio sharing application obtains the audio data of "01.mp3" (i.e. the first audio data). Optionally, as shown at 5c in fig. 5, the first display area 506 may include a back control, such as the left arrow in the first display area 506; the user clicks the back control, and the first display area may display other file lists (e.g. recorded video files in the gallery) from which the user selects the audio data to be shared.
Step 603: the sharing management module records the sharing state of the audio sharing.
The sharing management module may record the sharing state of the audio sharing (i.e. the operation identifier) by storing the operation identifier. The operation identifier represents the target output module matched with the current audio sharing operation; it can also be understood that the sharing management module allocates a corresponding target output module for the audio sharing operation according to the received audio sharing operation and records the information of that target output module. In this example, the identifier of the target output module is used as the operation identifier. For example, the operation identifier "Output-1" stored in the sharing management module indicates that the target output module matched with the single-ended sharing operation is Output-1; the operation identifier "Output-2" indicates that the target output module matched with the double-ended sharing operation is Output-2; and the operation identifier "Output-0" indicates that the mobile phone is not currently sharing audio data, and the matched target output module is Output-0.
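The sharing management module's record/query behavior, used both by the AudioTrack (steps 604/605) and by AudioPolicy (steps 609/610), can be sketched as a small state holder. The class and method names are hypothetical:

```python
class SharingManager:
    """Minimal sketch of the sharing management module: it records the
    operation identifier of the current sharing operation and returns it
    to any module that queries the sharing state."""

    def __init__(self):
        self._state = "Output-0"   # no sharing in progress by default

    def record(self, identifier):
        # Called when the user inputs a sharing operation (step 603).
        self._state = identifier

    def sharing_state(self):
        # Queried by AudioTrack and AudioPolicy (steps 605 and 610).
        return self._state
```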
In this example, as shown at 5b in fig. 5, the user clicks the first sharing style 505, and the sharing management module records the current operation identifier as "Output-1".
Step 604: the first application (namely the audio sharing application) requests to acquire the sharing state of the audio sharing from the sharing management module.
For example, the AudioTrack in the audio sharing application obtains the first audio data, and the AudioTrack may request to obtain the sharing state of the audio sharing from the sharing management module.
Step 605: and the sharing management module returns the sharing state of the audio sharing to the first application.
The sharing management module returns the recorded operation identifier of the current audio sharing to the AudioTrack.
Step 606: the first application sends the first audio data and the sharing status to the AudioFlinger service.
For example, the AudioTrack in the first application may bind the first audio data with the operation identifier of the audio sharing and send them to the AudioFlinger service in the application framework layer.
Step 607: the AudioFlinger service adds the first audio data to Output-1.
Illustratively, the AudioFlinger service obtains the operation identifier bound to the first audio data, detects that the operation identifier is the first identifier (e.g. "Output-1"), and determines to add the first audio data to Output-1.
The AudioFlinger service stores in advance a first correspondence between operation identifiers and output modules. For example, the first correspondence includes: the correspondence between the first identifier and the first output module, the correspondence between the second identifier and the second output module, and the correspondence between the third identifier and the third output module (Output-0 shown in fig. 7).
It should be noted that each output module is configured to transmit audio data to a matched output device; that is, each output module needs a corresponding output device to carry the audio data.
Step 608: the AudioFlinger service sends a request to acquire a playback device to AudioPolicy.
For example, AudioPolicy may be used to determine the audio policy, i.e. to indicate the target output device that will carry this audio data. As shown in fig. 7, AudioPolicy may store a device routing rule and a device priority rule. According to the device routing rule, AudioPolicy can determine whether the target output device includes the virtual output device. If AudioPolicy determines that the target output device also includes a target playing device, the target playing device is determined according to the device priority rule.
After the AudioPolicy receives the request for AudioFlinger service, step 609 may be performed.
Step 609: audioPolicy requests the sharing management module to acquire the sharing state of the sharing.
Step 610: and the sharing management module returns the sharing state of the sharing to the AudioPolicy.
The sharing management module returns the sharing state of the current sharing to AudioPolicy; for example, the sharing management module transmits the operation identifier to AudioPolicy.
Step 611: AudioPolicy determines that the target output device is the virtual output device according to the sharing state of the current sharing.
AudioPolicy detects, according to the device routing rule, whether the operation identifier of the current sharing is the first identifier; if it is the first identifier, AudioPolicy determines that the target output device includes the virtual output device. AudioPolicy generates indication information indicating that the target output device is the virtual output device, and the indication information is transmitted to the AudioFlinger service as shown by the dotted line in fig. 7.
It should be noted that AudioPolicy may store in advance the correspondence between operation identifiers and target output devices; for example, the first identifier corresponds to the virtual output device, and the second identifier corresponds to the virtual output device plus the target playing device.
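The correspondence noted above can be sketched as a lookup table. The table and function names are hypothetical; the identifier-to-device pairs follow the text:

```python
# Device routing rule: operation identifier -> target output devices.
ROUTES = {
    "Output-1": ["virtual_output"],                     # single-ended sharing
    "Output-2": ["virtual_output", "target_playback"],  # double-ended sharing
}

def target_output_devices(identifier):
    """Return the target output devices for an operation identifier;
    no-sharing or unknown identifiers route to no sharing devices."""
    return ROUTES.get(identifier, [])
```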
Step 612: the AudioPolicy sends information indicating that the target output device is a virtual output device to an AudioFlinger service.
Step 613: the AudioFlinger service instructs AudioPolicy to open the virtual output device.
Step 614: after receiving the indication, audioPolicy instructs an audio kernel (AudioKernel) to turn on the virtual output device.
According to the indication information, the AudioPolicy instructs the audio kernel to open the virtual output device. The virtual output device is a virtualization module and cannot itself play audio data.
Step 615: the audio kernel controls the virtual output device to connect with the microphone.
The indication information is, for example, used to indicate that the audio data is to be transmitted to the microphone via the virtual output device. The audio kernel controls the virtual output device to connect with the microphone according to the indication information. For ease of understanding, fig. 12 shows a hardware configuration of audio input and audio output in the present application.
As shown in fig. 12, the audio input/output module (i.e., 1201 in fig. 12) is connected to an audio front end (Audio Front End, "AFE") module, the AFE module is connected to an audio device module (Audio Device Module, "ADM"), and the ADM is connected to an audio stream management module (Audio Stream Manager, "ASM"). The AFE module includes a receive port (Rx port) and a transmit port (Tx port). The ADM includes an audio common post-processing (Audio Common Post Processing, "COPP") module, an audio receive matrix (Audio Rx Matrix), an audio common pre-processing (Audio Common Pre Processing, "COPreP") module, and an audio transmit matrix (Audio Tx Matrix). The ASM module includes: an audio stream post-processing (Audio Stream Post Processing, "POPP") module, an audio decoding module, an audio record pre-processing (Audio Record Pre Processing, "POPreP") module, and an audio encoding module.
In this example, as shown in fig. 12, a virtual output device is disposed in the AFE module. The virtual output device has a corresponding first switch S1: the virtual output device is connected to a first end of the first switch S1, and a second end of the first switch S1 is connected to the microphone. The first switch S1 is used to control the connection and disconnection between the virtual output device and the microphone. When the virtual output device is in the off state, the first switch S1 is open (i.e. the virtual output device is not connected to the microphone). When the audio sharing application is started, the first switch S1 defaults to the open state. The audio kernel controls the first switch S1 to close according to the indication information, thereby connecting the virtual output device and the microphone.
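The behavior of switch S1 can be modeled as a small state machine. The sketch below is illustrative only: the class and method names are invented, and the model captures just the open-by-default / closed-by-kernel behavior described above.

```python
class VirtualOutputPath:
    """Illustrative model of the virtual output device and switch S1:
    S1 is open (mic disconnected) by default and closed by the audio
    kernel when the indication information arrives."""

    def __init__(self):
        # On audio sharing application start, S1 defaults to the open state.
        self.s1_closed = False

    def kernel_close_s1(self):
        # The audio kernel closes S1 per the indication information,
        # connecting the virtual output device to the microphone.
        self.s1_closed = True

    def transmit(self, audio_frame):
        """Forward a frame to the microphone; fails while S1 is open."""
        if not self.s1_closed:
            raise RuntimeError("S1 open: virtual device not connected to mic")
        return ("microphone", audio_frame)
```

Only after the kernel closes S1 does audio written to the virtual output device reach the microphone, matching steps 614-615.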
Step 616: the AudioFlinger service control Output-1 transmits the first audio data to the virtual Output device.
Illustratively, the AudioFlinger service transmits the first audio data to the virtual output device: as shown by the black solid line in fig. 7, Output-1 (i.e., the first output module) transmits the first audio data to the virtual output device.
Step 617: the virtual output device sends the first audio data and the collected second audio data to the opposite terminal device.
For example, since the microphone is in the on state, it may collect the second audio data in the surrounding environment in real time while also receiving the first audio data transmitted by the virtual output device; as shown by the black solid arrow in fig. 7, the first audio data is transmitted from the virtual output device to the microphone.
The microphone transmits the first audio data and the second audio data to the local call application. The local call application transmits the first audio data and the second audio data to the communication module of the opposite terminal device through the communication module of the local device. The communication module of the opposite terminal device transmits the acquired first audio data and second audio data to the call application of that device, and the opposite terminal device plays the received first audio data and second audio data.
The transmission process of the first audio data and the second audio data in the local device is specifically described below with reference to fig. 12.
As shown in fig. 12, audio data (such as the first audio data) acquired by the local device is input to the audio decoding module in the ASM module through the P1 interface. The audio decoding module decodes the input audio data, and the decoded audio data is input to the POPP module through the input interface P2 of the POPP module. The audio stream processed by the POPP module is input into the audio receive matrix through the input interface P3 of the audio receive matrix, and the audio receive matrix inputs the audio data into the COPP module through the input interface P4 of the COPP module. The COPP module transmits the processed audio data to the AFE module. The receive port in the AFE module transmits the first audio data to the virtual output device (the virtual output device is not shown in fig. 12), and the virtual output device transmits the first audio data to the transmit port through the input interface i1 of the transmit port, as indicated by the dotted arrow characterizing the first audio data input from the virtual output device. In fig. 12, the second audio data collected by the microphone is likewise transmitted to the transmit port through the i1 interface. The transmit port transmits the received first audio data and second audio data to the COPreP module through the i2 interface of the audio common pre-processing (Audio Common Pre Processing, "COPreP") module. The COPreP module transmits the processed audio data to the audio transmit matrix (Audio Tx Matrix) through the i3 interface (i.e., the input interface of the audio transmit matrix).
The audio transmit matrix transmits the processed audio data to the audio record pre-processing (Audio Record Pre Processing, "POPreP") module through the i4 interface (the input interface of the POPreP module); the POPreP module transmits the pre-processed audio data to the audio encoding module through the i5 interface (the input interface of the audio encoding module). The audio encoding module transmits the encoded audio data to the call application of the local terminal through the output interface O2. The call application of the local terminal transmits the audio data to the communication module of the opposite terminal device through the communication module of the local terminal; the communication module of the opposite terminal device transmits the acquired first audio data and second audio data to the call application of that device, and the opposite terminal device plays the received first audio data and second audio data.
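The receive and transmit paths traced above can be summarized as two ordered stage lists. The sketch below only records the ordering of stages taken from the description of fig. 12 (the interface names in parentheses follow the text); it does not model any real signal processing, and the function name is an invention for illustration.

```python
# Ordered stages on the receive (playback) path and the transmit
# (capture) path of the local device, per the description of fig. 12.
RX_PATH = ["decoder(P1)", "POPP(P2)", "Rx-matrix(P3)", "COPP(P4)",
           "AFE-Rx-port", "virtual-output-device"]
TX_PATH = ["Tx-port(i1)", "COPreP(i2)", "Tx-matrix(i3)", "POPreP(i4)",
           "encoder(i5)", "call-app(O2)"]

def route_of(stream):
    """Return the ordered stages a stream traverses on the local device."""
    if stream == "first-audio":   # shared file: full Rx path, then Tx path
        return RX_PATH + TX_PATH
    if stream == "second-audio":  # microphone capture: enters directly at i1
        return TX_PATH
    raise ValueError(stream)
```

The first audio data traverses the whole receive path before joining the transmit path at i1, while the second audio data (captured by the microphone) enters the transmit path directly, which is why both streams reach the call application together.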
Fig. 8 is an interaction diagram between internal modules of a mobile phone when double-ended sharing audio is exemplarily shown. Fig. 9 is a schematic structural diagram of the internal module of the mobile phone. The process of sharing audio data by both terminals of the mobile phone is specifically described below with reference to fig. 8 and 9.
Step 801: the second application turns on the microphone.
This step is substantially the same as step 601, and reference may be made to the description related to step 601, which will not be repeated here.
Step 802: the user inputs the operation of double-ended sharing.
This step is substantially the same as step 602 and reference is made to the relevant description in step 602. In this example, as shown in fig. 5b and 5c, the user clicks on the second sharing style 504, and the sharing management module records the sharing status of the current audio sharing (i.e. step 803). Meanwhile, a first display area 506 is displayed in the conversation interface 501, and when the user selects "01.mp3", the AudioTrack in the audio sharing application acquires the audio data (i.e., the first audio data) of "01.mp3".
Step 803: the sharing management module records the sharing state of the audio sharing.
In this example, the sharing management module stores the second identifier as an operation identifier (i.e. a sharing state) of the current audio sharing.
Step 804: the first application (namely the audio sharing application) requests to acquire the sharing state of the audio sharing from the sharing management module.
This step is substantially the same as step 604, and reference may be made to the description of step 604, which will not be repeated here.
Step 805: and the sharing management module returns the sharing state of the audio sharing to the first application.
The sharing management module returns the recorded operation identifier of the current audio sharing to the AudioTrack.
Step 806: the first application sends the first audio data and the sharing status to the AudioFlinger service.
This step is substantially the same as step 606, and reference may be made to the description of step 606, which will not be repeated here.
Step 807: the AudioFlinger service adds the first audio data to Output-2.
Step 808: the AudioFlinger service sends a request to acquire a playback device to AudioPolicy.
Step 809: audioPolicy requests the sharing management module to acquire the sharing state of the sharing.
Step 810: and the sharing management module returns the sharing state of the sharing to the AudioPolicy.
Steps 807 to 810 are substantially the same as steps 607 to 610, and reference may be made to the description of steps 607 to 610, which will not be repeated here.
Step 811: the AudioPolicy determines, according to the sharing state of the current sharing, that the target output device is the virtual output device and the earphone.
In an exemplary embodiment, the AudioPolicy detects that the operation identifier is the second identifier, and determines that the target output device includes the virtual output device and a playing device of the mobile phone. The AudioPolicy may select one of the playing devices of the local device to play the first audio data according to the stored device priority rule. Optionally, the device priority rule includes: if it is detected that the mobile phone is connected with a wired earphone, determining to play the audio data with the wired earphone; if it is detected that the mobile phone is connected with a Bluetooth playing device and not connected with a wired earphone, determining to play the audio data with the Bluetooth playing device; and if it is detected that the mobile phone is connected with neither a Bluetooth playing device nor a wired earphone, determining to play the audio data with the speaker of the mobile phone.
In this example, according to the device priority rule, when detecting that the mobile phone is connected with the bluetooth headset and is not connected with the wired headset, the AudioPolicy determines that the playing device used by the local terminal to play the first audio data is the bluetooth headset connected with the mobile phone.
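The device priority rule above amounts to a simple ordered check. The following is a minimal sketch of that rule; the function name and device strings are illustrative assumptions, not the system's actual identifiers.

```python
def pick_playing_device(wired_connected: bool, bluetooth_connected: bool) -> str:
    """Device priority rule from the description:
    wired earphone > Bluetooth playing device > speaker."""
    if wired_connected:
        return "wired-earphone"
    if bluetooth_connected:
        return "bluetooth-device"
    return "speaker"
```

With a Bluetooth headset connected and no wired earphone, the rule selects the Bluetooth device, matching the example in this step.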
The AudioPolicy determines indication information indicating that the target output device includes the virtual output device and the Bluetooth headset connected with the mobile phone.
Step 812: the AudioPolicy sends information indicating that the target output device is a virtual output device and a bluetooth headset to an AudioFlinger service.
Step 813: the AudioFlinger service instructs AudioPolicy to open the virtual output device.
Step 814: after receiving the indication, audioPolicy instructs an audio kernel (AudioKernel) to turn on the virtual output device.
For example, the AudioPolicy determines that the virtual output device needs to be used, and may instruct the audio kernel to turn on the virtual output device.
Step 815: the audio kernel controls the virtual output device to connect with the microphone.
This step is similar to step 615, and specific procedures can be referred to in step 615, and will not be described here.
Step 816: the AudioFlinger service control Output-2 transmits the first audio data to the virtual Output device and the headphones.
The AudioFlinger service transmits the first audio data to the virtual output device and the Bluetooth headset: as shown by the black solid line in fig. 8, Output-2 (i.e., the second output module) transmits the first audio data to the virtual output device and the Bluetooth headset.
Step 817: the virtual output device sends the first audio data and the collected second audio data to the opposite terminal device.
This step is similar to step 617. Specific procedures can be found in step 617, and will not be described here.
Step 818: the earphone outputs the first audio data.
Illustratively, the Bluetooth headset at the audio core layer plays the first audio data. It should be noted that, as shown in fig. 12, the audio data output by the receive port o1 may also be transmitted to the Bluetooth headset through a digital-to-analog converter (DAC) module, and the Bluetooth headset plays the processed audio.
In this example, when the user selects double-ended sharing of audio data, the mobile phone may transmit the first audio data to be shared to the opposite-end device through the microphone while the playing device of the local end also plays the first audio data, so that the local end user and the opposite-end user can both hear the same audio data.
Fig. 10 is a schematic flow diagram of audio data when the mobile phone does not share audio. The following describes the transmission process of audio data between the internal modules of the mobile phone when the user does not select audio sharing in detail with reference to fig. 10.
The mobile phone is in a call state. In the audio sharing application, in response to the operation of playing audio input by the user, the sharing management module (i.e. the sound sharing state management module in fig. 10) records the operation identifier (e.g. the operation identifier is the identifier "Output-0" of the fourth output module). In addition, the audio sharing application determines that the audio data to be played (hereinafter referred to as fourth audio data) is acquired by AudioTrack-2. The AudioTrack-2 requests the operation identifier from the sharing management module (i.e., the black arrow pointing from AudioTrack-2 to the sound sharing state management module), and the sharing management module transmits the operation identifier to AudioTrack-2 (not shown in fig. 10). The AudioTrack-2 binds the operation identifier and the fourth audio data, and transmits them to the AudioFlinger service. The AudioFlinger service detects that the operation identifier bound with the fourth audio data is a third identifier (e.g., the third identifier is "Output-0"), and determines that the output module corresponding to the fourth audio data is Output-0. The AudioFlinger service adds the fourth audio data to Output-0 (i.e., the fourth output module). It should be noted that, in fig. 10, the black arrow pointing from the sound sharing status management module to Output-0 merely represents the transmission of the fourth audio data from AudioTrack-2 to Output-0; the audio data is not actually transmitted by the AudioTrack through the sound sharing status management module.
The AudioFlinger service sends a request for acquiring information of the target output device to AudioPolicy. The AudioPolicy may acquire the operation identifier bound with the fourth audio data from the sound sharing state management module, and determine, according to the operation identifier, to use the device priority rule to determine the target output device. According to the device priority rule and the playing device currently used by the device, the AudioPolicy determines that the target output device of the fourth audio data is a Bluetooth headset. The AudioPolicy sends the determined indication information (i.e., the indication information indicating that the target output device is a Bluetooth headset) to the AudioFlinger service (e.g., the dashed arrow pointing to the AudioFlinger service in fig. 10). The AudioFlinger service controls Output-0 to transmit the fourth audio data to the Bluetooth headset according to the first indication information (as shown in fig. 10 by the black arrow pointing from Output-0 to the Bluetooth headset). The fourth audio data is played by the Bluetooth headset.
Fig. 11 is an interaction diagram between internal modules of the mobile phone for double-ended audio sharing when the mobile phone is in a screen recording state and in a call state.
Step 1101: the second application turns on the microphone.
Step 1102: the user inputs the operation of double-ended sharing.
Steps 1101 to 1102 are substantially the same as steps 801 to 802, and reference may be made to the description related to steps 801 to 802, which will not be repeated here.
Step 1103: the sharing management module records the sharing state of the audio sharing.
For example, when the sharing management module receives the double-ended sharing operation (i.e., the second selected operation), it detects whether the mobile phone is currently in the screen recording state, if it detects that the mobile phone is currently in the screen recording state, step 1104 is executed, and at the same time, the sharing management module may determine that the operation identifier of the current audio sharing operation is "Output-1".
Step 1104: the sharing management module detects that the audio sharing application is in a screen recording state and indicates the audio sharing application to play the audio data to be shared twice.
If it is detected that the mobile phone is in the screen recording state, the sharing management module instructs the audio sharing application to play the audio data to be shared twice.
The audio sharing application may obtain first audio data, copy the first audio data, and obtain third audio data. The third audio data is identical to the first audio data.
The audio sharing application allocates corresponding AudioTracks to process the first audio data and the third audio data. Optionally, the audio sharing application may instruct the AudioTracks to process the first audio data and the third audio data simultaneously. The AudioTracks that play the first audio data and the third audio data may be the same or different. For example, AudioTrack-1 processes the first audio data and AudioTrack-2 processes the third audio data; alternatively, AudioTrack-1 processes both the first audio data and the third audio data. In this example, AudioTrack-1 processes the first audio data and AudioTrack-2 processes the third audio data.
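The duplication step above can be sketched as follows. The function name and the track labels are illustrative assumptions; the sketch only shows that the third audio data is an identical copy of the first, with each copy handed to its own AudioTrack.

```python
def prepare_double_play(first_audio: bytes) -> dict:
    """Sketch: when screen recording is active during double-ended
    sharing, the audio sharing application copies the first audio data
    to obtain the third audio data and hands each to an AudioTrack."""
    third_audio = bytes(first_audio)  # identical copy of the first audio data
    return {
        "AudioTrack-1": first_audio,  # sharing path (steps 1114-1127)
        "AudioTrack-2": third_audio,  # screen-recording path (steps 1105-1113)
    }
```

Both tracks then proceed independently: one stream is bound to the sharing operation identifier, the other to the screen recording identifier.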
It should be noted that, when the sharing management module detects that the mobile phone is in a screen recording state, it can generate a screen recording identifier.
Step 1105: the audio sharing application requests to acquire the screen recording identification from the sharing management module.
Illustratively, audioTrack for processing the third audio data requests a screen capture identification from the audio sharing application.
Step 1106: and the sharing management module returns a screen recording identifier to the audio sharing application.
Illustratively, the sharing management module returns a screen record identifier to the requesting AudioTrack.
Step 1107: the first application sends third audio data and a screen recording identifier to the AudioFlinger service.
Illustratively, the AudioTrack (e.g., audioTrack-2) that obtains the screen recording identifier binds the third audio data and the screen recording identifier, and transmits the third audio data and the screen recording identifier to the AudioFlinger service.
Step 1108: the AudioFlinger service adds third audio data to the duplex Output.
The AudioFlinger service obtains the screen recording identifier bound with the third audio data, and determines that the output module corresponding to the third audio data is the duplicate Output (i.e., the third output module) according to the screen recording identifier and the stored correspondence between the screen recording identifier and the output module. The AudioFlinger service adds the third audio data to the duplicate Output. Optionally, the duplicate Output is used to store the audio data generated during screen recording, so as to subsequently generate a screen recording file.
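The routing decision in steps 1107-1108 can be sketched as a dispatch on the identifier bound to the audio data. The identifier values and module names below are assumptions for illustration, not the system's real constants.

```python
def select_output_module(bound_identifier: str) -> str:
    """Sketch of the AudioFlinger-side dispatch: audio bound to the
    screen recording identifier goes to the duplicate Output, while
    audio bound to a sharing operation identifier goes to the matching
    Output module; anything else falls back to the no-sharing output."""
    table = {
        "screen-record-id": "duplicate-Output",  # third audio data
        "first-identifier": "Output-1",          # single-ended sharing
        "second-identifier": "Output-2",         # double-ended sharing
    }
    return table.get(bound_identifier, "Output-0")  # default: no sharing
```

This is why the first audio data and the third audio data, although identical in content, end up in different output modules: they carry different bound identifiers.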
Step 1109: the AudioFlinger service sends a request for acquiring the target playing device to the AudioPolicy.
The duplicate Output has a playing function, so the AudioFlinger service sends a request for acquiring the target playing device to the AudioPolicy.
Step 1110: the AudioPolicy returns information of the target playing device to the AudioFlinger service.
According to the request, the AudioPolicy can determine the device currently used for playing audio data (i.e. the target playing device) according to the stored device priority rule, and generate second indication information. The AudioPolicy returns the second indication information to the AudioFlinger service.
Step 1111: the AudioFlinger service control duplicate Output transmits the third audio data to the headphones.
Illustratively, the AudioFlinger service receives the second indication information. Because the duplicate Output has a playing function, the AudioFlinger service controls the duplicate Output to send the third audio data to the target playing device, so that the target playing device plays the third audio data. For example, the target playing device is a Bluetooth headset.
Step 1112: the earphone outputs the third audio data.
Step 1113: the AudioFlinger service control duplicate Output sends the third audio data to the storage module.
Because double-ended sharing is performed during screen recording, the local terminal device needs to play the audio data at the same time. The duplicate Output has a data redirection function: it can transmit the stored audio data to the storage module so as to implement the screen recording function.
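The two roles of the duplicate Output (playback in steps 1111-1112 and redirection to storage in step 1113) can be modeled as a tee. The class below is an illustrative sketch under that assumption, not the framework's actual implementation.

```python
class DuplicateOutput:
    """Sketch of the duplicate Output: every written frame is both
    played on the target playing device and redirected to the storage
    module for the screen recording file."""

    def __init__(self):
        self.playback = []  # stands in for the target playing device
        self.storage = []   # stands in for the storage module

    def write(self, frame):
        self.playback.append(frame)  # steps 1111-1112: play the third audio
        self.storage.append(frame)   # step 1113: save for the recording file
```

Every frame written to the duplicate Output therefore appears identically on the playback side and in storage.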
Step 1114: the first application (namely the audio sharing application) requests to acquire the sharing state of the audio sharing from the sharing management module.
Illustratively, the AudioTrack-1 requests to obtain the sharing status of the audio sharing from the sharing management module, and the process of obtaining is substantially the same as that of step 804, and reference may be made to the related description in step 804.
Step 1115: and the sharing management module returns the sharing state of the audio sharing to the first application.
Step 1116: the first application sends the first audio data and the sharing status to the AudioFlinger service.
Step 1117: the AudioFlinger service adds the first audio data to Output-1.
It should be noted that steps 1105 to 1108 and steps 1114 to 1117 are performed synchronously; there is no required sequence between them.
Step 1118: the AudioFlinger service sends a request to acquire a playback device to AudioPolicy.
Step 1119: audioPolicy requests the sharing management module to acquire the sharing state of the sharing.
Step 1120: and the sharing management module returns the sharing state of the sharing to the AudioPolicy.
Step 1121: the AudioPolicy determines, according to the sharing state of the current sharing, that the target output device is the virtual output device.
In this example, the AudioPolicy receives the operation identifier "Output-1", and determines that the target Output device includes a virtual Output device according to a pre-stored correspondence.
Step 1122: the AudioPolicy sends information indicating that the target output device is a virtual output device to an AudioFlinger service.
Step 1123: the AudioFlinger service instructs AudioPolicy to open the virtual output device.
Step 1124: after receiving the indication, audioPolicy instructs an audio kernel (AudioKernel) to turn on the virtual output device.
Step 1125: the audio kernel controls the virtual output device to connect with the microphone.
Step 1126: the AudioFlinger controls Output-1 to transmit the first audio data to the virtual Output device.
Step 1127: the virtual output device sends the first audio data and the collected second audio data to the opposite terminal device.
Steps 1114 through 1127 are substantially the same as steps 604 through 617, and the description thereof may refer to steps 604 through 617, which are not repeated herein.
In this example, when the sharing management module detects that the mobile phone is in a screen recording state and the audio sharing operation input by the user is double-ended sharing, it determines that the audio data to be shared is played twice; that is, the same first audio data and third audio data are played simultaneously by the AudioTracks. By means of the identifiers bound to the first audio data and the third audio data, the third audio data is added to the duplicate Output as usual, and the first audio data is added to the first output module or the second output module.
It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. The present application can be implemented in hardware or a combination of hardware and computer software, in conjunction with the example algorithm steps described in connection with the embodiments disclosed herein. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The present embodiment also provides a computer storage medium, where computer instructions are stored, and when the computer instructions are executed on an electronic device, the electronic device is caused to execute the related method steps to implement the method for sharing an audio file in the foregoing embodiment. The storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The present embodiment also provides a computer program product, which when run on a computer, causes the computer to perform the above-mentioned related steps to implement the method for sharing audio files in the above-mentioned embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are used to execute the corresponding methods provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding methods provided above, and will not be described herein.
Any of the various embodiments of the application, as well as any features within the same embodiment, may be freely combined. Any combination of the above is within the scope of the application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Claims (18)
1. A method for sharing audio files, comprising:
under the condition that the electronic equipment is in a call state and receives audio sharing operation input by a user, a target output device acquires first audio data to be shared, wherein the target output device comprises a virtual output device, and the virtual output device is connected with a microphone;
the virtual output device transmitting the first audio data to the microphone;
and the communication module of the electronic equipment transmits the first audio data received by the microphone and the second audio data acquired by the microphone to opposite-end equipment so that the opposite-end equipment can play the first audio data.
2. The method of claim 1, wherein, in a case where the electronic device is in a call state and receives an audio sharing operation input by a user, the target output device obtains first audio data to be shared, including:
The Audio Flinger service obtains the first Audio data and the operation identifier corresponding to the Audio sharing operation, wherein the operation identifier is used for indicating a target output module adopted in the current Audio sharing;
the Audio Flinger service adds the first Audio data to the corresponding target output module according to the indication of the operation identifier;
the Audio Flinger service requests to acquire first indication information from an audio policy service;
the audio policy service determines the first indication information according to the Audio sharing operation and transmits the first indication information to the Audio Flinger service;
and the Audio Flinger service controls the target output module to transmit the first Audio data to the virtual output device according to the first indication information.
3. The method of claim 2, wherein the Audio Flinger service obtaining the first Audio data and the operation identifier corresponding to the Audio sharing operation includes:
the sharing management module responds to the audio sharing operation input by the user, and determines an operation identifier of the audio sharing operation;
the audio track AudioTrack acquires the first audio data;
the AudioTrack acquires the operation identifier from the sharing management module;
And the AudioTrack binds the operation identifier and the first Audio data, and transmits the operation identifier and the first Audio data to the Audio Flinger service.
4. The method of claim 3, wherein the audio policy service determines the first indication information based on the audio sharing operation, comprising:
the audio policy service acquires the operation identifier bound with the first audio data from the sharing management module;
and the audio policy service determines the target output device of the first audio data according to the operation identifier and the first corresponding relation between the stored operation identifier and the target output device.
5. The method of claim 3, wherein the Audio Flinger service adds the first Audio data to a corresponding target output module according to the indication of the operation identifier, comprising:
and if the Audio Flinger service detects that the operation identifier is a first identifier, adding the first Audio data to a first output module according to the indication of the first identifier, wherein the first output module is matched with the virtual output equipment.
6. The method of claim 3, wherein the Audio Flinger service adds the first Audio data to a corresponding target output module according to the indication of the operation identifier, comprising:
and if the Audio Flinger service detects that the operation identifier is a second identifier, adding the first Audio data to a second output module, wherein the second output module is matched with virtual output equipment and target playing equipment, and the target playing equipment is equipment for playing the Audio data currently by the electronic equipment.
7. The method of claim 4, wherein the audio policy service determining the target output device of the first audio data based on the operation identifier and a first correspondence between the stored operation identifier and the target output device comprises:
the audio policy service acquires an operation identifier bound with the first audio data from the sharing management module;
and if the audio policy service detects that the operation identifier is a first identifier, determining that the first indication information indicates that the target output device corresponding to the first audio data comprises the virtual output device according to the first identifier and the first corresponding relation.
8. The method of claim 4, wherein the audio policy service determining the target output device of the first audio data according to the operation identifier and the first correspondence between the stored operation identifier and the target output device comprises:
the audio policy service acquires the operation identifier bound to the first audio data from the sharing management module;
if the audio policy service detects that the operation identifier is a second identifier, the audio policy service determines, according to the second identifier and the first correspondence, that the first indication information indicates that the target output device corresponding to the first audio data comprises the virtual output device and a target playing device, and the target playing device is the device currently used by the electronic device to play audio data.
9. The method of claim 8, wherein the method further comprises:
and the Audio player service controls the target output module to transmit the first Audio data to the playing device of the electronic device according to the first indication information so that the target playing device of the electronic device can play the first Audio data.
10. The method according to any one of claims 2-9, wherein before the AudioFlinger service controls the target output module to transmit the first audio data to the target output device according to the first indication information, the method further comprises:
the audio policy service instructs an audio kernel to open the virtual output device according to the first indication information;
the audio kernel controls the virtual output device to connect with the microphone.
11. The method of claim 10, wherein the audio kernel controlling the virtual output device to connect with the microphone comprises:
the audio kernel closes a first switch, the first switch being used to control connection or disconnection between the virtual output device and the microphone;
after the communication module of the electronic device transmits the first audio data received by the microphone and the second audio data collected by the microphone to the opposite-end device, the method further includes:
the audio kernel opens the first switch.
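Claim 11 orders the switch operations: close the first switch to connect the virtual output device to the microphone path, transmit the mixed uplink, then reopen the switch. The sketch below models that sequence with illustrative names; it is a conceptual toy, not kernel code.

```python
# Toy model of the first-switch sequence in claim 11. Names are assumptions.
class FirstSwitch:
    """Closed = virtual output device connected to the microphone path."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def open(self):
        self.closed = False

def transmit_shared_frame(switch, mic_audio, shared_audio):
    # Close the switch so the shared audio joins the microphone path,
    # build the mixed uplink for the communication module, then reopen.
    switch.close()
    uplink = mic_audio + "+" + shared_audio
    switch.open()
    return uplink
```

Scoping the connection to the transmission window is what keeps the shared file from leaking into the microphone path once sharing ends.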
12. The method according to any one of claims 3 to 9, wherein the AudioTrack acquiring the first audio data comprises:
if the sharing management module receives a second selected operation, the sharing management module detects whether the electronic device is currently in a screen recording state, wherein the second selected operation is used to instruct the electronic device to transmit audio data to be shared to the opposite-end device and to instruct the electronic device to play the audio data to be shared;
if the sharing management module detects that the electronic device is in the screen recording state, the sharing management module instructs the AudioTrack, according to the second selected operation, to acquire the first audio data and third audio data, wherein the third audio data is identical to the first audio data;
the AudioTrack obtaining the operation identifier from the sharing management module comprises:
the sharing management module binds a first identifier to the first audio data, wherein the first identifier is used to indicate that the target output module corresponding to the audio sharing is a first output module;
after the AudioTrack acquires the third audio data, the method further comprises:
the sharing management module binds a screen recording identifier to the third audio data;
the AudioTrack transmits the screen recording identifier and the third audio data to the AudioFlinger service;
the AudioFlinger service adds the third audio data to a third output module according to the screen recording identifier, wherein the third output module is used to transmit audio data generated during screen recording to a storage device of the electronic device to generate a screen recording file;
the audio policy service generates second indication information according to the screen recording identifier and transmits the second indication information to the AudioFlinger service;
and the AudioFlinger service controls the third output module, according to the second indication information, to transmit the third audio data to the playing device of the electronic device for playing, wherein the playing device of the electronic device is the device currently used by the electronic device to play audio data.
13. The method of any of claims 3-9, wherein the sharing management module determining, in response to the audio sharing operation entered by the user, the operation identifier of the audio sharing operation comprises:
in response to a first selected operation of the user on a first sharing mode in a call interface, the sharing management module acquires a first identifier corresponding to the first selected operation, wherein the first sharing mode is used to instruct the electronic device to share audio data with the opposite-end device;
in response to a second selected operation of the user on a second sharing mode in the call interface, the sharing management module acquires a second identifier corresponding to the second selected operation, wherein the second sharing mode is used to instruct the electronic device to transmit audio data to be shared to the opposite-end device and to instruct the electronic device to play the audio data to be shared;
wherein the audio sharing operation includes the first selected operation and the second selected operation.
14. The method of claim 13, wherein the AudioTrack acquiring the first audio data comprises:
in response to the first selected operation of the user on the first sharing mode or the second selected operation of the user on the second sharing mode, displaying a first display area on the call interface, wherein the first display area includes an audio icon of an audio file to be shared;
in response to a third selected operation on an audio icon in the first display area, the AudioTrack acquires the first audio data selected by the user.
15. The method of claim 14, wherein before the first selected operation of the user on the first sharing mode or the second selected operation of the user on the second sharing mode, the method further comprises:
in response to a fourth selected operation of the user on a sharing control in the call interface, displaying the first sharing mode and the second sharing mode on the call interface.
16. The method of claim 14, wherein the method further comprises:
in response to the first selected operation of the user on the first sharing mode in the call interface or the second selected operation of the user on the second sharing mode in the call interface, the sharing management module detects whether the microphone of the electronic device is in an operating state;
if the sharing management module detects that the microphone is in the operating state, the sharing management module determines to execute the operation of displaying the first display area on the call interface;
and if the sharing management module detects that the microphone is in an off state, the sharing management module outputs prompt information and determines not to execute the operation of displaying the first display area on the call interface, wherein the prompt information is used to prompt the user to turn on the microphone.
17. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the method of audio file sharing of any of claims 1 to 16.
18. A computer readable storage medium comprising a computer program, characterized in that the computer program, when run on an electronic device, causes the electronic device to perform the method of audio file sharing according to any of claims 1 to 16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310961126.4A CN116668582B (en) | 2023-08-02 | 2023-08-02 | Audio file sharing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116668582A true CN116668582A (en) | 2023-08-29 |
CN116668582B CN116668582B (en) | 2023-11-24 |
Family
ID=87721084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310961126.4A Active CN116668582B (en) | 2023-08-02 | 2023-08-02 | Audio file sharing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116668582B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102523564A (en) * | 2011-12-27 | 2012-06-27 | 长沙驰顺网络科技有限公司 | Method for increasing multimedia message service (MMS) end-to-end success rate in mobile communication industry |
CN103402171A (en) * | 2013-08-08 | 2013-11-20 | 华为终端有限公司 | Method and terminal for sharing background music during communication |
CN107645604A (en) * | 2017-09-29 | 2018-01-30 | 维沃移动通信有限公司 | A kind of call handling method and mobile terminal |
AU2020100981A4 (en) * | 2020-06-10 | 2020-07-16 | Oxti Corporation | Wireless headset amplifier |
CN114327351A (en) * | 2021-12-23 | 2022-04-12 | 努比亚技术有限公司 | Method and device for sharing local audio output and computer readable storage medium |
CN115620736A (en) * | 2021-07-16 | 2023-01-17 | 腾讯科技(深圳)有限公司 | Audio sharing method and device, computer readable storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN116668582B (en) | 2023-11-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||