WO2023024973A1 - Audio control method and electronic device - Google Patents

Audio control method and electronic device

Info

Publication number
WO2023024973A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
audio control
electronic device
audio output
output device
Prior art date
Application number
PCT/CN2022/112778
Other languages
English (en)
French (fr)
Inventor
张超
钱夏欢
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023024973A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • the present application relates to the field of terminals, and in particular to an audio control method and electronic equipment.
  • a user or family often has multiple electronic devices, and when the user uses a terminal such as a mobile phone to answer a call, other electronic devices may be making sounds.
  • for example, when the user's mobile phone receives a call from the contact "Zhang San", electronic devices such as TVs or speakers in the home may be playing audio and video, and the audio and video played by these sound-producing devices may interfere with the call that the user answers on the mobile phone.
  • the user needs to manually adjust the volume of each sound-producing device one by one. For example, after the user receives a call from the contact "Zhang San", in order to ensure the call quality, the user may manually pause the audio played by the speaker, then manually use the remote control to lower the volume of the TV, and only then answer the call from the contact "Zhang San" on the mobile phone. Obviously, this audio control method requires the user to frequently switch the device currently being operated; the process is relatively cumbersome, and the user experience is poor.
  • the present application provides an audio control method and an electronic device, which can uniformly control the sounding state of sounding devices when the electronic device receives an incoming call, thereby improving user experience.
  • in a first aspect, the present application provides an audio control method, including: the electronic device detects an incoming call event or an outgoing call event; furthermore, in response to the incoming call event or the outgoing call event, the electronic device can display a first audio control card corresponding to a first audio output device (the first audio output device and the electronic device are located in the same network); subsequently, when the electronic device receives an audio control operation input by the user in the first audio control card, in response to the audio control operation, the electronic device may send a corresponding audio control instruction to the first audio output device, so as to control the switching or volume of the audio played on the first audio output device.
  • in this way, the electronic device can display the audio control card of the audio output device, so that the user can directly perform audio control operations such as volume adjustment, pause or playback on the audio output device in the corresponding audio control card according to his own needs, without manually operating each audio output device for audio control, thereby improving the user's audio experience.
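To make the flow above more concrete, the following is a minimal Kotlin sketch of the first aspect. All type, interface, and function names here are hypothetical illustrations, not APIs from the application; the command channel stands in for whatever transport (AP relay, P2P connection) actually carries the instruction.

```kotlin
// Hypothetical sketch of the first-aspect flow; names are illustrative only.
data class AudioOutputDevice(
    val id: String,
    val name: String,
    val volume: Int,
    val isSounding: Boolean
)

sealed class AudioControlCommand {
    object Mute : AudioControlCommand()
    object Pause : AudioControlCommand()
    data class SetVolume(val level: Int) : AudioControlCommand()
}

// Abstraction over however the instruction reaches the device (AP relay, P2P, ...).
interface CommandChannel {
    fun send(deviceId: String, command: AudioControlCommand)
}

class AudioControlCardController(private val channel: CommandChannel) {
    // Called when an incoming or outgoing call event is detected: pick a
    // sounding device in the same network whose audio control card is shown.
    fun onCallEvent(devicesInNetwork: List<AudioOutputDevice>): AudioOutputDevice? =
        devicesInNetwork.firstOrNull { it.isSounding }

    // Called when the user operates the displayed audio control card.
    fun onUserOperation(device: AudioOutputDevice, command: AudioControlCommand) {
        channel.send(device.id, command)  // e.g. mute, pause, or a volume change
    }
}
```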
  • after the electronic device detects the incoming call event, the method further includes: the electronic device displays an incoming call card corresponding to the incoming call event, and in this case the first audio control card is located in the incoming call card; or, the electronic device displays an incoming call interface corresponding to the incoming call event, and in this case the first audio control card is located in the incoming call interface. That is to say, in an incoming call scenario, the electronic device can display the audio control card of the audio output device in the network on the incoming call card or the incoming call interface.
  • after the electronic device obtains the outgoing call event, the method further includes: the electronic device displays a dialing interface corresponding to the outgoing call event, and in this case the first audio control card is located on the dialing interface. That is to say, in an outgoing call scenario, the electronic device can display the audio control card of the audio output device in the network on the dialing interface.
  • the first audio control card includes a mute button; when the audio control operation input by the user is clicking the mute button, the corresponding audio control instruction is a mute instruction.
  • the first audio control card includes a pause button; when the audio control operation input by the user is clicking the pause button, the corresponding audio control instruction is a pause playback instruction.
  • the above-mentioned first audio control card includes a volume drag bar, and the slider on the volume drag bar is located at a first position, which is used to indicate that the volume of the first audio output device is a first volume value; when the audio control operation input by the user is dragging the slider to a second position on the volume drag bar, the corresponding audio control instruction is to adjust the volume of the first audio output device to a second volume value corresponding to the second position.
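As a small worked example of the slider behaviour described above, the snippet below assumes a linear mapping from a slider position in [0, 1] to a device volume level; the application only states that the second position corresponds to a second volume value, so the mapping and the 15-level scale are assumptions for illustration.

```kotlin
// Hypothetical linear mapping from a slider position (0.0..1.0) on the volume
// drag bar to a device volume level; the 15-level scale is an assumption.
fun positionToVolumeLevel(position: Float, maxLevel: Int = 15): Int =
    (position.coerceIn(0f, 1f) * maxLevel).toInt()

fun main() {
    // Dragging the slider to 20% of the bar yields level 3, which would be
    // carried in the audio control instruction as the second volume value.
    println(positionToVolumeLevel(0.2f))  // prints 3
}
```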
  • the above-mentioned first audio control card may include one or more items of a device identifier, a device name, a name of an audio and video file being played, or a playback progress of the first audio output device.
  • the electronic device after the electronic device detects the incoming call event or the outgoing call event, it further includes: the electronic device obtains the audio output device of each of the N (N is an integer greater than 0) audio output devices in the network. The sounding state of each audio output device; the electronic device determines the sounding audio output device according to the sounding state of each audio output device, and the sounding audio output device includes the first audio output device. For example, the electronic device may obtain the sounding status of each audio output device in the network through an AP or other device, or the electronic device may obtain the sounding state of each audio output device in the network through a P2P connection.
  • the above-mentioned first audio output device is an audio output device with the highest volume among the N audio output devices. That is to say, the electronic device can present to the user the audio control card of the audio output device with the loudest volume when an incoming or outgoing call is being made, so as to prevent the sound of the audio output device from interfering with the user's conversation.
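A one-line sketch of how such a first audio output device could be chosen; the selection rule (loudest among the sounding devices) follows the description above, while the device type and names are assumptions.

```kotlin
// Hypothetical selection of the loudest sounding device among the N devices.
data class AudioOutputDevice(val id: String, val volume: Int, val isSounding: Boolean)

fun pickFirstAudioOutputDevice(devices: List<AudioOutputDevice>): AudioOutputDevice? =
    devices.filter { it.isSounding }.maxByOrNull { it.volume }
```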
  • the method further includes: the electronic device displays an expansion button, and the expansion button is located in the first audio control card or in the second audio control card.
  • the electronic device can display a sounding device list, and the sounding device list includes the first audio control card and a second audio control card corresponding to the second audio output device. That is to say, when there are multiple sounding audio output devices in the network, the electronic device can display the audio control card of each audio output device that is sounding, so that the user can individually control each sounding audio output device through the audio control cards in the sounding device list, which can improve the user experience in incoming call or outgoing call scenarios.
  • the list of sounding devices further includes a first batch management button; the above method further includes: in response to the operation of the user selecting the first batch management button, the electronic device may send a mute instruction to both the first audio output device and the second audio output device. In this way, the user can mute multiple audio output devices in the network in batches.
  • the list of sounding devices further includes a second batch management button; the above method further includes: in response to the operation of the user selecting the second batch management button, the electronic device sends a pause playback instruction to both the first audio output device and the second audio output device. In this way, the user can pause playback on multiple audio output devices in the network in batches.
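The two batch-management buttons described above can be sketched as follows; the sender interface and function names are assumptions made for illustration, not the application's API.

```kotlin
// Hypothetical batch management: the first button mutes every listed device,
// the second pauses playback on all of them.
interface AudioCommandSender {
    fun sendMute(deviceId: String)
    fun sendPausePlayback(deviceId: String)
}

fun onFirstBatchManagementButton(deviceIds: List<String>, sender: AudioCommandSender) =
    deviceIds.forEach(sender::sendMute)

fun onSecondBatchManagementButton(deviceIds: List<String>, sender: AudioCommandSender) =
    deviceIds.forEach(sender::sendPausePlayback)
```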
  • after the electronic device displays the first audio control card corresponding to the first audio output device, the method further includes: if it is detected that the electronic device connects the call corresponding to the incoming call event or the outgoing call event, the electronic device continues to display the first audio control card, or hides the first audio control card. That is to say, the electronic device can display the audio control card of the audio output device in a call scenario, and the user can also perform audio control on the corresponding audio output device through the audio control card while using the electronic device to talk with a contact.
  • the electronic device may display the first audio control card in the call interface corresponding to the above call request; and/or, the electronic device may display the first audio control card in the notification center.
  • after the electronic device receives the audio control operation input by the user in the first audio control card, the method further includes: the electronic device may continue to display the first audio control card, or the electronic device may hide the first audio control card.
  • in a second aspect, the present application provides an electronic device, including: a display screen, one or more processors, one or more memories, and one or more computer programs; wherein, the processor is coupled to the display screen and the memory,
  • the above one or more computer programs are stored in the memory, and when the electronic device is running, the processor executes the one or more computer programs stored in the memory, so that the electronic device executes the audio control method described in any aspect above.
  • in a third aspect, the present application provides a computer-readable storage medium, including computer instructions.
  • when the computer instructions are run on an electronic device, the electronic device is made to execute the audio control method described in any aspect above.
  • in a fourth aspect, the present application provides a computer program product, which, when run on an electronic device, enables the electronic device to execute the audio control method described in any one of the above aspects.
  • it can be understood that the electronic device described in the second aspect, the computer-readable storage medium described in the third aspect, and the computer program product described in the fourth aspect are all used to execute the corresponding method provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method provided above, and details are not repeated here.
  • FIG. 1 is a schematic structural diagram of a communication system provided by an embodiment of the present application.
  • FIG. 2A is a schematic diagram of application scenario 1 of an audio control method provided by an embodiment of the present application.
  • FIG. 2B is a schematic diagram of application scenario 2 of an audio control method provided by an embodiment of the present application.
  • FIG. 2C is a schematic diagram of application scenario 3 of an audio control method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of application scenario 4 of an audio control method provided by an embodiment of the present application.
  • FIG. 4 is a first schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an operating system in an electronic device provided in an embodiment of the present application.
  • FIG. 6 is an interactive schematic diagram of an audio control method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of application scenario 5 of an audio control method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of application scenario 6 of an audio control method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of application scenario 7 of an audio control method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of application scenario 8 of an audio control method provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of application scenario 9 of an audio control method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of application scenario 10 of an audio control method provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of application scenario 11 of an audio control method provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of application scenario 12 of an audio control method provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of application scenario 13 of an audio control method provided by an embodiment of the present application.
  • FIG. 16 is a second schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the communication system 200 may include an electronic device 101 with a call function, and one or more audio output devices 102 with an audio playback function.
  • the electronic device 101 may be an electronic device installed with a voice call or video call APP, such as a mobile phone, a smart watch, or a tablet computer.
  • the audio output device 102 may be an electronic device provided with a speaker, such as a television, a sound box, or a mobile phone.
  • the electronic device 101 and the audio output device 102 may be interconnected through a communication network.
  • the communication network may be a wired network or a wireless network.
  • the above-mentioned communication network may be a local area network (LAN) or a wide area network (WAN).
  • the above-mentioned communication network can be realized using any known network communication protocol, and the above-mentioned network communication protocol can be any of various wired or wireless communication protocols, such as Ethernet, universal serial bus (USB), FireWire, global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth, wireless fidelity (Wi-Fi), NFC, voice over Internet protocol (VoIP), a communication protocol supporting a network slicing architecture, or any other suitable communication protocol.
  • each electronic device 101 and audio output device 102 in the communication system 200 can form a networking group (that is, a network) according to a certain communication protocol and networking strategy, so that the devices in the communication system 200 can communicate with each other.
  • the electronic device 101 and the audio output device 102 in the communication system 200 can access a Wi-Fi network provided by an access point (AP) such as a router, so that the electronic device 101 and the audio output device 102 can establish a Wi-Fi connection through the Wi-Fi network.
  • the electronic device 101 and the audio output device 102 in the communication system 200 may log in with the same account (such as a Huawei account), and then interconnect through one or more servers on the network side.
  • the electronic device 101 can obtain the sounding state of the audio output device 102 in the current Wi-Fi network, and the sounding state may include audio parameters such as whether the device is making sound and its volume. If a certain audio output device 102 is making sound (for example, playing an audio and video file), the electronic device 101 can also obtain audio parameters such as the volume of the sounding audio output device 102 and the name of the audio and video file being played.
  • after the electronic device 101 accesses the Wi-Fi network provided by the AP, it can obtain parameters such as the device identifiers and device names of other electronic devices connected to the current Wi-Fi network from the AP. Furthermore, the electronic device 101 can respectively establish communication connections, such as P2P connections, with the other electronic devices in the current Wi-Fi network. In this way, the electronic device 101 can serve as a server, and the other electronic devices in the current Wi-Fi network can serve as clients. The electronic device 101 can use the corresponding P2P connection to interact with each electronic device in the current Wi-Fi network and learn from each electronic device whether it has an audio playback function. If a certain electronic device has an audio playback function, the electronic device 101 can treat it as an audio output device 102 and further obtain the sounding state of that audio output device 102.
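The server/client interaction just described could look roughly like the sketch below; the P2pConnection interface and its methods are assumptions for illustration and are not an Android or vendor API.

```kotlin
// Hypothetical P2P query: the phone (server) asks each connected device
// (client) whether it can play audio, and keeps those that can as audio
// output devices whose sounding state it then fetches.
data class SoundingState(val isSounding: Boolean, val volume: Int)

interface P2pConnection {
    val deviceId: String
    fun queryHasAudioPlayback(): Boolean
    fun querySoundingState(): SoundingState
}

fun discoverAudioOutputDevices(connections: List<P2pConnection>): Map<String, SoundingState> =
    connections
        .filter { it.queryHasAudioPlayback() }
        .associate { it.deviceId to it.querySoundingState() }
```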
  • the AP can also serve as a server, periodically obtaining from each electronic device in the current Wi-Fi network parameters such as its device identification, device name, and whether it has an audio playback function. After the electronic device 101 accesses the Wi-Fi network provided by the AP, the electronic device 101 can acquire the sounding status of the audio output device 102 in the current Wi-Fi network from the AP.
  • for example, when an incoming call event occurs, the electronic device 101 can display the audio control card 202 of the audio output device 102 that is making sound.
  • the sounding audio output device 102 may be the TV 1 in the living room, and the TV 1 is playing the first episode of the TV series A, and the volume level is 8.
  • the electronic device 101 can display controls such as a volume seekbar 203 , a mute button 204 and a pause button 205 in the audio control card 202 .
  • the electronic device 101 may generate a corresponding audio control instruction, and the audio control instruction may be used to instruct to adjust the volume of the TV 1 to level 3.
  • the electronic device 101 can send the above-mentioned audio control instruction to TV 1 (that is, the audio output device 102 that is making sound) through the AP, so that TV 1 can adjust its current volume from level 10 to level 3 in response to the audio control instruction, thereby realizing audio control of the sounding audio output device 102.
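An illustrative sketch of relaying such an instruction through the AP is shown below; the application does not specify a wire format, so the JSON-like payload and all names here are assumptions.

```kotlin
// Hypothetical relay of an audio control instruction via the AP: the phone
// builds the instruction and the AP forwards it to the target device (e.g. TV 1).
interface ApLink {
    fun forward(targetDeviceId: String, payload: String)
}

fun sendVolumeInstructionViaAp(ap: ApLink, targetDeviceId: String, targetLevel: Int) {
    // Assumed payload encoding; not defined by the application.
    val payload = """{"type":"setVolume","level":$targetLevel}"""
    ap.forward(targetDeviceId, payload)
}
```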
  • the user can also click the mute button 204 in the audio control card 202 to trigger the TV 1 to mute.
  • the user can also click the pause button 205 in the above-mentioned audio control card 202 to trigger the TV 1 to pause playing the current playing content, which is not limited in this embodiment of the present application.
  • the electronic device 101 may also display an audio control card corresponding to each audio output device 102 in the current Wi-Fi network.
  • the audio output device 102 in the current Wi-Fi network includes TV 1 , TV 2 and speaker 3 .
  • the electronic device 101 can display the audio control card 202 of TV 1, the audio control card 206 of TV 2, and the audio control card 207 of speaker 3 in the current incoming call card 201.
  • when the electronic device 101 acquires an incoming call event, regardless of whether the audio output devices 102 in the network are making sound, the electronic device 101 can display the audio control cards of the corresponding audio output devices 102, so that the user can directly perform audio control on the audio output device 102 in the corresponding audio control card.
  • when the electronic device 101 detects that the user sends a call request to a certain contact or phone number (that is, an outgoing call event), it can also display the audio control card of the audio output device 102 according to the above method.
  • the electronic device 101 may display a dial interface 208 .
  • the electronic device 101 may display one or more audio control cards of the audio output device 102 on the dial interface 208 according to the above method, for example, the audio control card 202 of the TV 1 that is sounding. In this way, the user can also perform audio control on the audio output device 102 through the audio control card in an outgoing call scenario.
  • the electronic device 101 can display the audio control cards of one or more audio output devices 102, so that users can directly perform audio control operations such as volume adjustment, pause or play on the audio output device 102 in the corresponding audio control card according to their own needs, without manually operating each audio output device 102 to perform audio control, thereby improving the user's audio experience.
  • the above-mentioned audio control operation may be used to control the switching on/off or the volume of the audio played on the audio output device.
  • the audio control operation may be a mute or pause operation, so as to control the audio output device to stop sounding.
  • the audio control operation may be a playback operation, so as to control the audio output device to produce sound.
  • the audio control operation may be a volume adjustment operation, so as to control the volume of the audio output device when playing audio.
  • FIG. 4 shows a schematic structural diagram of the mobile phone.
  • the mobile phone can include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, A speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180 and the like.
  • the structure shown in this embodiment of the present application does not constitute a specific limitation on the mobile phone.
  • the mobile phone may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • the wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in a mobile phone can be used to cover single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna for a WLAN.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to mobile phones.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • the wireless communication module 160 can provide applications on mobile phones including wireless local area networks (wireless local area networks, WLAN) (such as wireless fidelity (wireless fidelity, Wi-Fi) network), bluetooth (bluetooth, BT), global navigation satellite system ( Global navigation satellite system (GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the mobile phone is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the mobile phone can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
  • the mobile phone realizes the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the mobile phone may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the mobile phone can realize shooting function through ISP, camera 193 , video codec, GPU, display screen 194 and application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the mobile phone may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the mobile phone selects the frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • a mobile phone can support one or more video codecs.
  • the mobile phone can play or record videos in multiple encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the mobile phone.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the mobile phone can realize the audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the cell phone can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to listen to the voice.
  • the microphone 170C, also called the "mic", is used to convert sound signals into electrical signals.
  • the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal to the microphone 170C.
  • the mobile phone can be provided with at least one microphone 170C.
  • the mobile phone can be provided with two microphones 170C, which can also implement a noise reduction function in addition to collecting sound signals.
  • the mobile phone can also be equipped with three, four or more microphones 170C to realize the collection of sound signals, noise reduction, identification of sound sources, and realization of directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the mobile phone may also include a charging management module, a power management module, a battery, buttons, an indicator, and one or more SIM card interfaces, etc., which are not limited in this embodiment of the present application.
  • the software system of the above-mentioned mobile phone may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the Android system with layered architecture is taken as an example to illustrate the software structure of the mobile phone.
  • FIG. 5 is a software structural block diagram of the mobile phone according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can consist of a series of application packages.
  • applications (APPs) such as call, memo, browser, contacts, camera, gallery, calendar, map, Bluetooth, music, video, and short message may be installed in the application layer.
  • the calling APP may also be called the dialing APP or the phone APP.
  • the calling APP can be preset when the mobile phone leaves the factory, or it can be a third-party application provided in the application market with a calling function.
  • the call function can also be set in other applications such as a chat APP.
  • a voice call function or a video call function may be set in the chat APP.
  • the above call function may be a call function based on network communication protocols such as 4G and 5G, or a call function based on a VoIP (Voice over Internet Protocol) protocol.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a phone service.
  • the call service can be used to realize the call function of the electronic device 100 .
  • the call service can manage the call state of the call APP (such as initiating, connecting, hanging up, etc.). If it is detected that the user uses the calling APP to make an incoming or outgoing call to a certain phone number, the calling service can monitor the call status of this call. If a dial-up failure event is detected, the call service can also obtain the reason for the dial-up failure sent by the base station, such as network failure, rejection by the other party, failure to connect after timeout, or busy signal from the other party.
  • the application framework layer may also be provided with an audio control module 501 .
  • the audio control module 501 can uniformly manage the sounding status of each audio output device in the network. Exemplarily, when the mobile phone joins a certain Wi-Fi network, the audio control module 501 can interact with the AP through the Wi-Fi module in the mobile phone, and obtain the sounding status of each audio output device in the network through the AP.
  • the device connected to the AP can report parameters such as its own device identification, device type, and whether it supports the audio playback function to the AP. In this way, the AP can know which devices in the current Wi-Fi network are audio output devices. Moreover, when a device connected to the AP starts to play an audio and video file, it can also report parameters such as the volume level and the name of the audio and video file to the AP. In this way, the AP can know in real time whether each audio output device in the current Wi-Fi network is making sound, the volume of the sound, and other aspects of its sounding state.
  • the audio control module 501 of the mobile phone can also obtain the sounding status of each audio output device in the current Wi-Fi network from the AP, for example, whether it is making a sound, the volume of the sound, and so on.
  • the audio control module 501 can periodically obtain the sounding state of each audio output device in the current Wi-Fi network from the AP, so as to ensure that it always has the latest sounding state of each audio output device in the current Wi-Fi network.
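A minimal sketch of this periodic polling, assuming kotlinx.coroutines is available and that the AP exposes a simple suspend query interface (both assumptions, not part of the application):

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// Hypothetical sounding state returned by the AP for one audio output device.
data class SoundingState(val deviceId: String, val isSounding: Boolean, val volume: Int)

// Hypothetical query interface towards the AP.
interface ApClient {
    suspend fun querySoundingStates(): List<SoundingState>
}

// Poll the AP at a fixed interval so the cached states stay up to date.
fun startSoundingStatePolling(
    scope: CoroutineScope,
    ap: ApClient,
    intervalMs: Long = 5_000L,
    onUpdate: (List<SoundingState>) -> Unit
) = scope.launch {
    while (true) {
        onUpdate(ap.querySoundingStates())
        delay(intervalMs)
    }
}
```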
  • the mobile phone can also establish a P2P connection with each audio output device in the Wi-Fi network through the Wi-Fi module.
  • the audio control module 501 can interact with each audio output device in the Wi-Fi network based on the corresponding P2P connection, so as to obtain the sounding status of each audio output device in the current Wi-Fi network.
  • the calling APP can interact with the audio control module 501 through the calling service.
  • the calling APP can obtain the sounding state of the audio output device that is sounding in the current Wi-Fi network from the audio control module 501 through the calling service.
  • when the call APP displays the incoming call card or interface corresponding to the above incoming call event, it can also display the corresponding audio control card according to the sounding state of the audio output device that is sounding.
  • controls for audio control operations such as adjusting the volume of the relevant audio output device and pausing playback can be set in the audio control card, so that in an incoming call or outgoing call scenario the user can perform audio control on the sounding audio output device in the audio control card according to his own needs.
  • after the call APP receives an incoming call event from a contact or sends a call request to a contact, it can also notify other applications (such as the smart life APP or the system UI) to obtain from the audio control module 501 the sounding status of the sounding audio output devices in the current Wi-Fi network and then display the corresponding audio control cards. That is to say, the application displaying the audio control card may be the calling APP or another application, which is not limited in this embodiment of the present application.
  • the application framework layer can include window managers, content providers, view systems, phone managers, resource managers, notification managers, and so on.
  • the above-mentioned window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the above-mentioned content providers are used to store and obtain data, and make these data accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • Each display interface can consist of one or more controls.
  • controls may include interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets (Widgets).
  • the above-mentioned phone manager is used to provide the communication function of the mobile phone.
  • for example, the management of call status (including connected, hung up, etc.).
  • the resource manager mentioned above provides various resources for the application, such as localized strings, icons, pictures, layout files, video files and so on.
  • the above-mentioned notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also display a notification that appears in the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window, for example, prompting text information in the status bar, playing a prompt sound, vibrating, or flashing an indicator light.
  • the system library may include multiple functional modules. For example: layer integrator (SurfaceFlinger), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • SurfaceFlinger is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the Android Runtime includes core library and virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part consists of the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a camera driver, an audio driver, a sensor driver, etc., which are not limited in this embodiment of the present application.
  • the mobile phone is still taken as an example of the electronic device 101 in the communication system 200 below, and an audio control method provided by the embodiment of the present application is described in detail with reference to the accompanying drawings.
  • an audio control method provided by an embodiment of the present application may include the following steps S601-S607.
  • the mobile phone forms a network with at least one audio output device.
  • the mobile phone can use one or more communication technologies such as Wi-Fi, Bluetooth, or UWB (ultra wide band, ultra-wideband) to form a network with at least one audio output device, and this embodiment of the present application does not make any restrictions on this.
  • taking Wi-Fi communication technology as an example, electronic devices such as mobile phones can use an SSID (service set identifier) to access the Wi-Fi network provided by an AP, thereby forming a network with each electronic device in the Wi-Fi network.
  • the mobile phone can search for a Wi-Fi network 1 provided by a nearby AP.
  • the mobile phone can automatically send the saved SSID to the corresponding AP after detecting the Wi-Fi signal of Wi-Fi network 1.
  • alternatively, the user can manually input the SSID for accessing Wi-Fi network 1 in the relevant interface displayed by the mobile phone, and then trigger the mobile phone to send the SSID entered by the user to the AP corresponding to Wi-Fi network 1.
  • after the AP corresponding to Wi-Fi network 1 receives the SSID sent by the mobile phone, it can verify whether the SSID is correct. If the SSID sent by the mobile phone is correct, the AP can establish a Wi-Fi connection with the mobile phone through the Wi-Fi protocol. When the Wi-Fi connection is successfully established, it means that the mobile phone has connected to Wi-Fi network 1 provided by the AP. Correspondingly, if the SSID sent by the mobile phone is incorrect, the mobile phone cannot access Wi-Fi network 1 provided by the AP.
  • similarly, audio output devices such as TVs and speakers can also access Wi-Fi network 1 provided by the AP according to the above method. In this way, the mobile phone can form a network with audio output devices such as TVs and speakers by connecting to Wi-Fi network 1.
  • the mobile phone can also log in to the same account (such as a Huawei account) with audio output devices such as TVs and speakers.
  • the server may distinguish electronic devices of different users through different accounts.
  • Devices connected to the same Wi-Fi network can communicate through the AP.
  • for example, when the mobile phone needs to send a control command to the TV, the control command may be sent to the AP first, and then the AP sends the control command to the TV.
  • a P2P connection can also be established between any two devices connected to the same Wi-Fi network.
  • after the mobile phone accesses the aforementioned Wi-Fi network 1, it can obtain the device identifiers of other electronic devices in Wi-Fi network 1, such as MAC addresses.
  • the mobile phone can establish a Wi-Fi Direct connection (that is, Wi-Fi direct connection) with the corresponding electronic device according to the obtained device identification. In this way, the mobile phone can communicate with other electronic devices in the Wi-Fi network 1 based on the Wi-Fi direct connection, without using the AP as a relay device to communicate with other electronic devices in the Wi-Fi network 1 .
  • the mobile phone can also directly establish a P2P connection with audio output devices such as TVs and speakers through functions such as Wi-Fi or Bluetooth to form a network, which is not limited in this embodiment of the present application.
  • the mobile phone acquires the sounding status of one or more audio output devices in the networking.
  • after the mobile phone is connected to Wi-Fi network 1, it can send query request 1 to the AP, where query request 1 is used to request the sounding state of one or more audio output devices in the current Wi-Fi network 1.
  • the AP may respond to the above query request 1 and send the sounding status of one or more audio output devices in the current Wi-Fi network 1 to the mobile phone.
  • when each device accesses Wi-Fi network 1, it can report its own device identification, device type, whether it supports the audio playback function, and other parameters to the AP.
  • if a device reports that it supports the audio playback function, the AP can determine that the device is an audio output device.
  • the AP can periodically obtain the sounding status of one or more audio output devices in the current Wi-Fi network.
  • for example, the AP can periodically query each audio output device for audio parameters such as whether it is currently sounding and the volume of the sound; these audio parameters can reflect the sounding state of the corresponding audio output device.
  • alternatively, when an audio output device in Wi-Fi network 1 starts to sound (or stops sounding), it can actively report audio parameters such as whether it is currently sounding and the volume of the sound to the AP.
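The active-reporting variant can be sketched as follows, assuming a simple reporting interface towards the AP (the names are hypothetical):

```kotlin
// Hypothetical push-style reporting: when playback starts or stops, the audio
// output device immediately reports its new sounding state to the AP instead
// of waiting to be polled.
interface ApReportLink {
    fun report(deviceId: String, isSounding: Boolean, volume: Int)
}

class PlaybackStateReporter(private val deviceId: String, private val ap: ApReportLink) {
    fun onPlaybackStarted(volume: Int) = ap.report(deviceId, isSounding = true, volume = volume)
    fun onPlaybackStopped() = ap.report(deviceId, isSounding = false, volume = 0)
}
```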
  • when a device (such as a mobile phone or a TV) is connected to Wi-Fi network 1, the device can use a preset private protocol to communicate with the AP.
  • the private protocol may be a type of protocol such as RTSP (Real Time Streaming Protocol, real-time streaming protocol).
  • different fields, such as field 1 to field 4, can be set in the above-mentioned private protocol, where field 1 is used to indicate that the current volume level is N, field 2 is used to indicate that the current playback progress is X/Y/Z, field 3 is used to indicate the cover of the audio and video file currently being played, and field 4 is used to indicate the title of the audio and video file currently being played.
  • the content in some fields can be empty.
  • Those skilled in the art can also set more or fewer fields for carrying different audio parameters according to actual needs.
  • The fields of the private protocol may be organized as follows:

        Field      Content            Description
        field 1    Volume: N          volume level
        field 2    Progress: X/Y/Z    playback progress
        field 3    Data               cover
        field 4    Data               title
  • Each audio output device in the Wi-Fi network 1 can send its own audio parameters in each field to the AP according to the above-mentioned proprietary protocol. In this way, the real-time audio parameters of all audio output devices in the current Wi-Fi network 1 can be stored in the AP. Subsequently, when the AP receives the query request 1 (such as a handshake request) sent by the mobile phone, the AP can also carry the audio parameters of each audio output device in a data packet and send it to the mobile phone according to the above-mentioned private protocol. The mobile phone can obtain the sounding status of one or more audio output devices in the current Wi-Fi network 1 by analyzing each field in the data packet sent by the AP.
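A hypothetical parser for the four protocol fields listed in the table above is sketched below; the line-oriented "key: value" encoding is an assumption, since the application names the fields but not their wire format.

```kotlin
// Parsed audio parameters carried by the private-protocol fields:
// field 1 = volume level, field 2 = playback progress, field 3 = cover, field 4 = title.
data class AudioParams(
    val volume: Int? = null,
    val progress: String? = null,  // e.g. "X/Y/Z"
    val cover: String? = null,
    val title: String? = null
)

fun parseAudioParams(packet: String): AudioParams {
    var params = AudioParams()
    for (line in packet.lines()) {
        val parts = line.split(":", limit = 2)
        if (parts.size != 2) continue  // a field may simply be empty
        val value = parts[1].trim()
        params = when (parts[0].trim().lowercase()) {
            "volume" -> params.copy(volume = value.toIntOrNull())
            "progress" -> params.copy(progress = value)
            "cover" -> params.copy(cover = value)
            "title" -> params.copy(title = value)
            else -> params  // ignore unknown fields
        }
    }
    return params
}
```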
  • after the Wi-Fi module of the mobile phone receives the sounding state of one or more audio output devices in the current Wi-Fi network 1 from the AP, the obtained sounding state of the audio output devices can be sent through the Wi-Fi module to the audio control module in the application framework layer (such as the audio control module 501 in FIG. 5). Furthermore, the audio control module can save and maintain the sounding state of the one or more audio output devices in the current Wi-Fi network 1.
  • after successfully accessing Wi-Fi network 1, the mobile phone can periodically obtain the sounding status of one or more audio output devices in the current Wi-Fi network 1 from the AP according to the above method, so as to ensure that the mobile phone always has the latest sounding status of each audio output device in Wi-Fi network 1.
  • Alternatively, after successfully accessing Wi-Fi network 1, the mobile phone may not execute the above step S602 for the time being.
  • When the mobile phone subsequently needs the sounding status of each audio output device, it can obtain the sounding states of one or more audio output devices in the current Wi-Fi network 1 according to the above method; this is described in detail in subsequent embodiments and is not repeated here.
  • In other embodiments, as a device in Wi-Fi network 1, the mobile phone may also send parameters such as its own device identifier, device type, and whether it supports the audio playback function to the AP.
  • The mobile phone can also act as an audio output device and report audio parameters such as its volume when making a sound to the AP.
  • In that case, when other electronic devices in the Wi-Fi network (such as a smart watch) obtain the sounding states of audio output devices from the AP, the AP can also treat the mobile phone as an audio output device and send the sounding status of the mobile phone to those devices.
  • In other embodiments, the mobile phone can obtain parameters such as the device identifier, device type, whether the audio playback function is supported, whether the device is making a sound, and the volume level of all devices in Wi-Fi network 1 from the AP. The mobile phone can then filter the audio output devices in Wi-Fi network 1 out of the obtained parameters, and determine the sounding state of each audio output device according to audio parameters such as whether it is sounding and the volume at which it sounds. In this case, the AP in Wi-Fi network 1 no longer needs to determine which devices in the current Wi-Fi network 1 are audio output devices or what their sounding states are.
  • The above embodiments take the case in which the mobile phone acquires the sounding states of the audio output devices in the Wi-Fi network through the AP as an example.
  • In other embodiments, the mobile phone can also interact directly with the audio output devices in the Wi-Fi network, thereby obtaining the sounding state of each audio output device.
  • For example, after the mobile phone accesses the aforementioned Wi-Fi network 1, it can obtain parameters such as the addresses of the other devices currently connected to Wi-Fi network 1 from the AP.
  • Furthermore, the mobile phone can establish a Wi-Fi direct connection with other devices in Wi-Fi network 1 according to the obtained addresses. Taking the TV in Wi-Fi network 1 as an example, after the mobile phone establishes a Wi-Fi direct connection with the TV, it can still communicate with the TV according to the above-mentioned private protocol.
  • the mobile phone may send a query request 1 (such as a handshake request) to the TV through the Wi-Fi direct channel.
  • In response to query request 1, the TV can carry audio parameters such as its own volume level, playback progress, and the cover and title of the audio and video file in the corresponding fields, and send them to the mobile phone in the form of a data packet.
  • By parsing each field in the data packet sent by the TV, the mobile phone can obtain the current sounding state of the TV.
  • Similarly, the mobile phone can obtain the sounding status of other audio output devices through the Wi-Fi direct channels between the mobile phone and those devices, which is not described again in this embodiment of the present application.
  • In addition, the above description takes a device such as a mobile phone accessing a Wi-Fi network as an example. It can be understood that the mobile phone and each audio output device can also establish a network connection through other networking methods (such as Bluetooth or UWB). In those networking modes, the mobile phone can likewise obtain the sounding status of one or more audio output devices in the network, which is not limited in this embodiment of the present application.
  • S603. The mobile phone receives an incoming call event from contact 1.
  • When the mobile phone has a call function, it may receive an incoming call event from a contact during operation.
  • For example, the call APP in the mobile phone may receive a call request from a contact in the address book.
  • Of course, the call APP can also receive call requests from unfamiliar numbers.
  • Alternatively, other applications with a call function can also receive incoming call events; for example, a chat APP in the mobile phone can receive a voice call request or a video call request from a contact.
  • The following takes the call APP in the mobile phone receiving a call request from contact 1 as an example.
  • After receiving the call request from contact 1, the call APP can send the corresponding incoming call event (such as incoming call event 1) to the call service in the application framework layer.
  • The call service can then monitor whether the user performs operations such as answering, rejecting, or switching to the speaker, so as to manage the call status of this call.
  • S604. The mobile phone displays the incoming call card (or incoming call interface) corresponding to the above incoming call event, and displays the audio control card 1 of the audio output device A that is making a sound.
  • Still taking the case in which the call service in the mobile phone receives the above-mentioned incoming call event 1 as an example, after the call service receives the incoming call event 1 sent by the call APP, it can call modules such as view to draw the corresponding incoming call card (or incoming call interface).
  • the incoming call card refers to notifying the user of the incoming call event 1 in the form of a card.
  • the incoming call card may include information such as an answer button, a reject button, a contact's phone number or name, and the like.
  • the incoming call interface refers to notifying the user of the incoming call event 1 in the form of a full screen in the mobile phone.
  • the incoming call interface may also include information such as an answer button, a reject button, and a contact's phone number or name.
  • Exemplarily, the user can set, in the Settings APP, whether an incoming call event is displayed in the form of an incoming call card or an incoming call interface.
  • the mobile phone may automatically determine to display the incoming call card or the incoming call interface in combination with a specific application scenario when receiving an incoming call event. For example, if the mobile phone is displaying the desktop when the incoming call event 1 is received, the call service can call view to draw the corresponding incoming call interface. If the mobile phone is running the video APP to play the video when the incoming call event 1 is received, the call service can call view to draw the corresponding incoming call card, so as to prevent the full-screen display of the incoming call interface from interfering with the user watching the video in the video APP.
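  • The decision just described (full-screen incoming call interface on the desktop, compact incoming call card while a full-screen application such as a video APP is playing) could be sketched as below. The enum, method, and parameter names are hypothetical and used only to illustrate the idea.

```java
/** Minimal sketch of choosing between the incoming call card and the full-screen incoming call interface. */
public class IncomingCallPresentation {

    enum Style { CALL_CARD, FULL_SCREEN_INTERFACE }

    /**
     * Chooses the presentation style from the current foreground state:
     * the full-screen interface when the desktop is shown, and a card when a
     * full-screen application (such as a video APP) is playing content.
     */
    public static Style choose(boolean desktopInForeground, boolean fullScreenAppPlaying, Style userPreference) {
        if (userPreference != null) {
            return userPreference;          // an explicit choice made in the Settings APP wins
        }
        if (fullScreenAppPlaying) {
            return Style.CALL_CARD;         // avoid interrupting the video with a full-screen interface
        }
        return desktopInForeground ? Style.FULL_SCREEN_INTERFACE : Style.CALL_CARD;
    }

    public static void main(String[] args) {
        System.out.println(choose(false, true, null));   // CALL_CARD
        System.out.println(choose(true, false, null));   // FULL_SCREEN_INTERFACE
    }
}
```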
  • As shown in FIG. 7, after the call service receives the above-mentioned incoming call event 1, it can call view to draw the incoming call card 701. The drawn incoming call card 701 is then displayed on the current display interface 700 of the mobile phone by the call APP.
  • the incoming call card 701 may include information such as the avatar, name, and phone number of the contact corresponding to the incoming call event 1, and may also include controls such as an answer button 701a and a reject button 701b.
  • the user can click the answer button 701a to trigger the mobile phone to connect to the call with the contact 1; or, the user can click the reject button 701b to trigger the mobile phone to reject the current call request of the contact 1.
  • In this embodiment of the present application, after the call service in the mobile phone receives the above-mentioned incoming call event 1, in addition to drawing and displaying the corresponding incoming call card 701 according to the above-mentioned method, it can also obtain the sounding status of one or more audio output devices that are making a sound from the audio control module (such as the audio control module 501 in FIG. 5).
  • For example, after the call service receives the above-mentioned incoming call event 1, it can send a query request 2 to the audio control module 501.
  • Query request 2 is used to obtain the sounding status of the audio output devices that are making a sound in the current Wi-Fi network 1.
  • the audio control module 501 can obtain the latest sounding status of one or more audio output devices in the current Wi-Fi network 1 through the above step S602. Then, after receiving the query request 2, the audio control module 501 can query whether there is currently an audio output device that is making sound.
  • the audio control module 501 may actively obtain the sounding status of one or more audio output devices in the current Wi-Fi network 1 from the AP according to the relevant method in the above step S602. For example, after receiving the above-mentioned query request 2, the audio control module 501 can interact with the AP through the Wi-Fi module of the mobile phone, and obtain the sounding status of one or more audio output devices in the current Wi-Fi network 1 from the AP. Furthermore, the audio control module 501 may determine whether there is currently an audio output device that is sounding according to the acquired sounding state of the audio output device.
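  • Answering query request 2 then amounts to a filter over the cached sounding states, as in the minimal sketch below. The DeviceState record and its fields are illustrative assumptions, not a data structure defined by the patent.

```java
import java.util.List;
import java.util.stream.Collectors;

/** Minimal sketch of answering query request 2: return only the devices that are currently sounding. */
public class QueryRequest2Handler {

    /** Cached sounding state of one audio output device (illustrative fields). */
    public record DeviceState(String deviceId, String deviceName, boolean sounding, int volumeLevel,
                              String mediaTitle, String progress) { }

    /** Filters the cached states down to the devices that are currently making a sound. */
    public static List<DeviceState> soundingDevices(List<DeviceState> cachedStates) {
        return cachedStates.stream()
                .filter(DeviceState::sounding)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<DeviceState> cache = List.of(
                new DeviceState("tv-1", "Living room TV", true, 8, "Episode 8 of TV series A", "12:30/45:00"),
                new DeviceState("speaker-3", "Speaker", false, 0, "", ""));
        soundingDevices(cache).forEach(d ->
                System.out.println(d.deviceName() + " is sounding at volume level " + d.volumeLevel()));
    }
}
```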
  • Taking audio output device A as the audio output device that is making a sound as an example, the audio control module 501 can send audio parameters such as the device identifier, device name, and current volume of audio output device A to the call service.
  • the audio control module 501 may also send audio parameters such as the name of the audio and video file being played by the audio output device A (for example, the eighth episode of the TV series A), the playback progress, etc. to the call service.
  • the call service can call view to draw the audio control card 702 of the audio output device A according to the received audio parameters.
  • the audio control card 702 may include a device identifier 702a and a device name 702b of the audio output device A.
  • the audio control card 702 may further include a volume drag bar 702c, and the position of the slider on the volume drag bar 702c is used to indicate the volume of the current audio output device A.
  • the audio control card 702 may also include a mute button 702d and a pause (or play) button 702e. After the view draws the above-mentioned audio control card 702, the audio control card 702 can be displayed on the display interface 700 by the calling APP.
  • the calling APP can use the audio control card 702 as a part of the incoming call card 701 , and display the audio control card 702 together with the incoming call card 701 .
  • Through the audio control card 702, the user can learn about audio output device A near the mobile phone, and the user can directly perform audio control (such as adjusting the volume or muting) on audio output device A through the audio control card 702.
  • Alternatively, as shown in FIG. 8, after the call APP obtains the drawn incoming call card 701 and audio control card 702, it can display the incoming call card 701 and the audio control card 702 as two independent cards on the current display interface 700 of the mobile phone.
  • In this way, the user can also learn about audio output device A near the mobile phone when receiving an incoming call, and perform audio control on audio output device A.
  • the application displaying the incoming call card 701 and the application displaying the audio control card 702 may also be different.
  • For example, after the call APP acquires the above-mentioned incoming call event 1, on the one hand it can draw and display the corresponding incoming call card 701 according to the above-mentioned method, and on the other hand it can trigger a preset application to obtain the sounding states of one or more sounding audio output devices from the above-mentioned audio control module 501.
  • The preset application can then draw and display the corresponding audio control card 702 according to the sounding state of the sounding audio output device. In this way, the user can see the corresponding incoming call card and audio control card on the display interface at the same time when a call comes in.
  • In other embodiments, after the call APP sends the above-mentioned incoming call event 1 to the call service, it may trigger the call service to call view to draw the incoming call interface 901 corresponding to incoming call event 1.
  • the incoming call interface 901 may also include information such as the avatar, name, and phone number of the contact corresponding to the incoming call event 1, and may also include controls such as an answer button and a reject button. Different from the incoming call card 701, the incoming call interface 901 is displayed on the display interface of the mobile phone in the form of a full screen.
  • In this scenario, similar to the above embodiments, after the call service receives the above-mentioned incoming call event 1, it can still obtain the sounding status of one or more sounding audio output devices from the audio control module 501.
  • Still taking audio output device A as the sounding audio output device as an example, the audio control module 501 can send audio parameters such as the device identifier, device name, and current volume of audio output device A to the call service, and trigger the call service to call view to draw the audio control card 702 of audio output device A according to the received audio parameters.
  • the calling APP can display the audio control card 702 on the incoming call interface 901 .
  • In this way, when a call comes in, the user can learn about audio output device A near the mobile phone through the audio control card 702, and the user can directly perform audio control (such as adjusting the volume or muting) on audio output device A through the audio control card 702.
  • It should be noted that the mobile phone can first display the incoming call card (or incoming call interface) of the incoming call event and then display the audio control card of audio output device A; or the mobile phone can first display the audio control card of audio output device A and then display the incoming call card (or incoming call interface) of the incoming call event; or the mobile phone can display the incoming call card (or incoming call interface) of the incoming call event and the audio control card of audio output device A at the same time, which is not limited in this embodiment of the present application.
  • S605. In response to the audio control operation input by the user in audio control card 1, the mobile phone sends a corresponding audio control instruction to audio output device A.
  • Still taking audio output device A as the sounding audio output device as an example, after the mobile phone displays the audio control card 702 of audio output device A in the incoming call card 701, the user can input, in the audio control card 702, an audio control operation for audio output device A. The audio control operation can be used to control the on/off state or the volume of the audio played on the audio output device.
  • the audio control operation may be an operation of adjusting volume, a mute operation, or a pause operation.
  • For example, when the user needs to mute audio output device A, he or she can click the mute button 702d in the above-mentioned audio control card 702.
  • For another example, when the user needs audio output device A to pause the audio and video file being played, he or she can click the pause button 702e in the above-mentioned audio control card 702.
  • For another example, when the user needs to decrease (or increase) the volume of audio output device A, he or she can drag the slider on the volume drag bar 702c in the above-mentioned audio control card 702, and adjust the volume of audio output device A through the position of the slider on the volume drag bar 702c.
  • After detecting that the user has input the corresponding audio control operation on the audio control card 702, the mobile phone can continue to display the audio control card 702 in the incoming call card 701, or automatically hide the audio control card 702 in the incoming call card 701; this is not limited in this embodiment of the present application.
  • Exemplarily, if the mobile phone detects that the user clicks the mute button 702d in the above-mentioned audio control card 702, the mobile phone may generate a corresponding audio control instruction (that is, a mute instruction). Furthermore, the mobile phone can send the mute instruction to audio output device A through the AP. For example, the mobile phone may carry the device identifier of audio output device A in the mute instruction and send the mute instruction to the AP. After receiving the mute instruction, the AP can parse out the device identifier of audio output device A, which indicates that the mobile phone needs the mute instruction to be sent to audio output device A. The AP can then send the mute instruction to audio output device A.
  • Similar to the interaction between the mobile phone and the AP (or the audio output device) in step S602, when the mobile phone sends an audio control instruction such as a mute instruction to the AP, the corresponding audio control instruction can also be carried in a preset field and sent to the AP according to the preset private protocol. For example, if a mute instruction needs to be sent to the AP, the mobile phone can add the content 01 to the preset field A; when the AP parses out that the content in field A is 01, it can execute the corresponding mute instruction. For another example, if a pause playback instruction needs to be sent to the AP, the mobile phone can add the content 02 to the preset field A; when the AP parses out that the content in field A is 02, it can execute the corresponding pause playback instruction.
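  • Following the field-A convention given above as an example (content 01 for the mute instruction, content 02 for the pause playback instruction), a command packet could be assembled roughly as in the sketch below. The target-device field and the overall text layout are assumptions for illustration only.

```java
/** Minimal sketch of encoding an audio control instruction in a preset field of the private protocol. */
public class AudioControlCommand {

    public static final String MUTE = "01";            // example code for the mute instruction
    public static final String PAUSE_PLAYBACK = "02";  // example code for the pause playback instruction

    /**
     * Builds a command packet carrying the target device identifier and the
     * control code in a preset field ("FieldA" is a hypothetical field name).
     */
    public static String build(String targetDeviceId, String controlCode) {
        return "Target:" + targetDeviceId + "\n"
             + "FieldA:" + controlCode + "\n";
    }

    public static void main(String[] args) {
        // Mute audio output device A via the AP.
        String mutePacket = build("audio-output-device-A", MUTE);
        System.out.print(mutePacket);
    }
}
```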
  • Alternatively, if a P2P connection (such as a Wi-Fi direct connection) is established between the mobile phone and audio output device A, the mobile phone may send the mute instruction directly to audio output device A.
  • the mobile phone may carry the corresponding audio control command in a preset field and send it to the audio output device A according to a preset private protocol.
  • Similarly, if the mobile phone detects that the user inputs another audio control operation in the above-mentioned audio control card 702, the mobile phone can also send the corresponding audio control instruction to audio output device A according to the above method.
  • S606. Audio output device A adjusts its own sounding state according to the above audio control instruction.
  • Still taking the audio control instruction sent by the mobile phone being a mute instruction as an example, audio output device A can execute the mute instruction after receiving it from the mobile phone. In this way, audio output device A can adjust its own sounding state from playing a certain audio and video file to a muted state.
  • Similarly, if audio output device A receives another audio control instruction sent by the mobile phone (for example, a pause instruction or an instruction to decrease the volume), audio output device A can execute the corresponding audio control instruction, thereby adjusting its own sounding state according to the user's needs.
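  • On the receiving side, an audio output device could map the parsed control code to a local action as in the sketch below. The internal player state used here is a placeholder, since the patent does not describe the device's internal playback interface.

```java
/** Minimal sketch of an audio output device executing a received audio control instruction. */
public class AudioControlExecutor {

    private int volumeLevel = 8;       // current volume level of this device (placeholder state)
    private boolean playing = true;    // whether an audio/video file is currently playing

    /** Applies the control code from the preset field ("01" mute, "02" pause, as in the earlier example). */
    public void execute(String controlCode) {
        switch (controlCode) {
            case "01" -> volumeLevel = 0;       // mute: keep playing but produce no sound
            case "02" -> playing = false;       // pause playback of the current audio/video file
            default -> System.out.println("Unknown control code: " + controlCode);
        }
    }

    /** Applies a volume adjustment instruction carrying an explicit target level. */
    public void setVolume(int newLevel) {
        volumeLevel = Math.max(0, newLevel);
    }

    public static void main(String[] args) {
        AudioControlExecutor device = new AudioControlExecutor();
        device.execute("01");
        System.out.println("Volume after mute instruction: " + device.volumeLevel);
    }
}
```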
  • It can be seen that, after the mobile phone receives an incoming call event, it can obtain the sounding status of the nearby audio output devices that are making a sound. The mobile phone can then display the audio control card of the corresponding audio output device, so that the user can, according to his or her own needs, directly perform audio control operations such as volume adjustment, pausing, or playing on the sounding audio output device in the corresponding audio control card, without manually operating each audio output device for audio control, thereby improving the user's audio experience in the incoming call scenario.
  • Similarly, in an outgoing call scenario in which the mobile phone sends a voice or video call request, the sounding status of the nearby audio output devices that are making a sound can also be obtained according to the above method. Then, by displaying the audio control card of the corresponding audio output device, the mobile phone enables the user to directly perform, according to his or her own needs, audio control operations such as volume adjustment, pausing, or playing on the sounding audio output device in the corresponding audio control card, thereby improving the user's audio experience in the outgoing call scenario.
  • the user may not need to perform audio control on nearby audio output devices.
  • For example, the mobile phone can learn, according to the method of the above embodiments, that audio output device A is making a sound, and display the audio control card of audio output device A. If the user thinks that the current volume of audio output device A is low and will not interfere with the current call, the user does not need to input the relevant audio control operation in the audio control card of audio output device A, and the mobile phone will then not be triggered to perform audio control on the sounding audio output device A.
  • Exemplarily, as shown in FIG. 10, after the mobile phone displays the audio control card 702 of audio output device A in the incoming call card 701, if no operation on the audio control card 702 is received from the user within a preset time, the mobile phone can automatically hide the audio control card 702 in the incoming call card 701.
  • At this time, the mobile phone can adaptively adjust the size or display layout of the incoming call card 701, which is not limited in this embodiment of the present application.
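  • The auto-hide behaviour could be sketched with a simple idle timer, as below. The timeout value and the hideCard callback are illustrative assumptions.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/** Minimal sketch of hiding the audio control card after a preset time with no user operation. */
public class AutoHideTimer {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Runnable hideCard;        // callback that hides the card inside the incoming call card
    private final long timeoutSeconds;
    private ScheduledFuture<?> pending;

    public AutoHideTimer(Runnable hideCard, long timeoutSeconds) {
        this.hideCard = hideCard;
        this.timeoutSeconds = timeoutSeconds;
    }

    /** Starts (or restarts) the countdown; call this when the card is shown or the user touches it. */
    public synchronized void resetCountdown() {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = scheduler.schedule(hideCard, timeoutSeconds, TimeUnit.SECONDS);
    }

    public void shutdown() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        AutoHideTimer timer = new AutoHideTimer(() -> System.out.println("Hide audio control card 702"), 2);
        timer.resetCountdown();
        Thread.sleep(3000);   // no user operation within the preset time, so the card is hidden
        timer.shutdown();
    }
}
```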
  • Alternatively, as shown in FIG. 11, after the mobile phone displays the audio control card 702 of audio output device A in the incoming call card 701, when the user does not need to perform audio control on audio output device A, the user can input a delete operation (such as swiping left) on the audio control card 702.
  • the mobile phone may hide the audio control card 702 in the incoming call card 701 in response to the delete operation input by the user.
  • Alternatively, after the mobile phone displays the audio control card 702 of audio output device A in the incoming call card 701, if it detects that the user operates the answer button 701a or the reject button 701b in the incoming call card 701, it means that the user may not need to perform audio control on audio output device A.
  • the mobile phone can also hide the audio control card 702 in the incoming call card 701 .
  • In other embodiments, still taking audio output device A as the sounding audio output device as an example, after the user answers the call corresponding to this incoming call event 1, the mobile phone can continue to display the audio control card 702 of audio output device A.
  • For example, if the mobile phone detects that the user clicks the answer button 701a in the incoming call card 701, as shown in (a) of FIG. 12, the mobile phone can display the call interface 1201 with contact 1. The call interface 1201 may include controls such as mute and hang up. At this time, the mobile phone can continue to display the audio control card 702 of audio output device A in the call interface 1201. In this way, the user can also perform audio control on audio output device A through the audio control card 702 while using the mobile phone to talk with the contact.
  • For another example, if the mobile phone detects that the user has opened the notification center 1202 of the mobile phone (which may also be called the pull-down menu, notification bar, or control center), as shown in (b) of FIG. 12, the mobile phone can also continue to display the audio control card 702 of audio output device A in the notification center 1202. In this way, the user can also perform audio control on audio output device A through the audio control card 702 while using the mobile phone to talk with the contact.
  • In other embodiments, after the user ends the call, the mobile phone may prompt the user to restore the audio playback state of the audio output device (for example, audio output device A).
  • the mobile phone may display a dialog box to prompt the user whether to restore the audio playback state of the audio output device A to the audio playback state before the incoming call. If it is detected that the user clicks the confirmation button in the dialog box, the mobile phone can send a corresponding audio control command to the audio output device A, so that the audio output device A returns to the audio playback state before the mobile phone calls.
  • the mobile phone may display the audio control card of the current audio output device A after the end of the current call, so that the user can perform audio control on the audio output device A in the audio control card of the audio output device A.
  • the foregoing embodiment is illustrated by taking the audio output device A nearby that is sounding when the mobile phone receives a call as an example.
  • For example, after receiving the above-mentioned incoming call event 1, the mobile phone can learn that the audio output devices that are making a sound include audio output device A and audio output device B.
  • In this case, the mobile phone can display the audio control card of the audio output device with the higher volume (for example, audio output device A) in the incoming call card (or incoming call interface) according to the respective sounding states of audio output device A and audio output device B.
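  • Choosing the sounding device with the higher volume reduces to a maximum over the cached volume levels, as in the sketch below. The SoundingDevice record mirrors the earlier illustrative DeviceState and is likewise an assumption rather than a structure defined by the patent.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Minimal sketch of picking which sounding device's audio control card to show first. */
public class LoudestDeviceSelector {

    public record SoundingDevice(String deviceId, String deviceName, int volumeLevel) { }

    /** Returns the sounding device with the highest current volume, if any. */
    public static Optional<SoundingDevice> loudest(List<SoundingDevice> soundingDevices) {
        return soundingDevices.stream()
                .max(Comparator.comparingInt(SoundingDevice::volumeLevel));
    }

    public static void main(String[] args) {
        List<SoundingDevice> sounding = List.of(
                new SoundingDevice("device-A", "Audio output device A", 8),
                new SoundingDevice("device-B", "Audio output device B", 3));
        loudest(sounding).ifPresent(d ->
                System.out.println("Show the audio control card of " + d.deviceName() + " first"));
    }
}
```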
  • Alternatively, still taking the case in which audio output device A and audio output device B are making a sound as an example, after the mobile phone receives the above-mentioned incoming call event 1, the mobile phone can also display both the audio control card corresponding to audio output device A and the audio control card corresponding to audio output device B in the incoming call card (or incoming call interface).
  • Alternatively, as shown in FIG. 13, after the mobile phone receives the above-mentioned incoming call event 1, if there are multiple audio output devices that are sounding, the mobile phone can display the audio control card 702 of one of the sounding audio output devices (for example, audio output device A) in the incoming call card 701.
  • the incoming call card 701 (or audio control card 702 ) may further include an expand button 1301, which is used to trigger the mobile phone to display the audio control cards of other audio output devices that are making sounds. Subsequently, if it is detected that the user enters an expansion operation (for example, a click operation) on the expansion button 1301, the mobile phone may continue to execute the following step S607.
  • In this way, when there are multiple sounding audio output devices, the mobile phone can display the audio control card (for example, the audio control card 702) of the audio output device with the highest current volume in the incoming call card 701.
  • The user can thus promptly control, in the incoming call card 701, the audio output device that interferes the most with the current call.
  • Of course, the mobile phone can also display the audio control cards of two or more audio output devices in the incoming call card 701, which is not limited in this embodiment of the present application.
  • S607. In response to the expansion operation input by the user in audio control card 1, the mobile phone displays the audio control cards of N sounding audio output devices, where N is an integer greater than 1.
  • Still taking the case in which the incoming call card 701 of this incoming call event 1 is provided with the expand button 1301 as an example, if it is detected that the user inputs an expansion operation (such as a click operation) on the expand button 1301, then, as shown in FIG. 14, the mobile phone can display the audio control cards of all audio output devices that are currently sounding.
  • For example, in response to the user's operation of clicking the expand button 1301, the mobile phone can display the sounding device list 1302.
  • The sounding device list 1302 may include the audio control card 702 of audio output device A, the audio control card 1303 of audio output device B, and the audio control card 1304 of audio output device C.
  • Alternatively, the audio control card 702 of audio output device A may not be included in the sounding device list 1302.
  • the user can individually control each sounding audio output device through the audio control card in the sounding device list 1302, so as to improve the user's experience in the scene of an incoming call.
  • For example, the user can input a pause operation for audio output device A in the audio control card 702 to trigger audio output device A to pause playing.
  • The user can input a mute operation for audio output device B in the audio control card 1303 to trigger audio output device B to play silently.
  • The user may choose not to input any audio control operation for audio output device C in the audio control card 1304.
  • the mobile phone may display the sounding device list 1302 in a modal, non-modal or semi-modal form, which is not limited in this embodiment of the present application.
  • the mobile phone responds to the audio control operation to adjust the sounding state of the corresponding audio output device.
  • the mobile phone can obtain the stored sound production status of all audio output devices that are currently producing sound from the audio control module 501 .
  • Alternatively, the mobile phone can obtain the latest sounding status of all audio output devices in the current Wi-Fi network from the AP, and then determine the sounding states of all audio output devices that are currently sounding. Subsequently, the mobile phone can display the above-mentioned sounding device list 1302 according to the sounding states of all audio output devices that are currently sounding.
  • In some embodiments, the sounding device list 1302 may further include management buttons. The management buttons may include one or more of a mute-all button 1501, a delete-all button 1502, or a pause-all button 1503.
  • If it is detected that the user clicks the mute-all button 1501, the mobile phone may send a mute instruction to all audio output devices that are currently sounding (such as audio output device A, audio output device B, and audio output device C) through the AP. In this way, all audio output devices that are currently sounding can execute the mute instruction to implement muted playback.
  • Similarly, if it is detected that the user clicks the pause-all button 1503, the mobile phone can send a pause playback instruction to all audio output devices that are currently sounding (such as audio output device A, audio output device B, and audio output device C) through the AP. In this way, all audio output devices that are currently sounding can execute the pause playback instruction to implement the pause playback function.
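  • A batch operation such as mute-all or pause-all amounts to sending the same instruction to every sounding device, as sketched below. The sendViaAp callback stands in for the transmission through the AP and is an assumption for illustration.

```java
import java.util.List;
import java.util.function.BiConsumer;

/** Minimal sketch of the mute-all / pause-all buttons: send one instruction to every sounding device. */
public class BatchAudioControl {

    public static final String MUTE = "01";
    public static final String PAUSE_PLAYBACK = "02";

    /** Sends the given control code to each sounding device; sendViaAp(deviceId, code) does the transmission. */
    public static void sendToAll(List<String> soundingDeviceIds, String controlCode,
                                 BiConsumer<String, String> sendViaAp) {
        for (String deviceId : soundingDeviceIds) {
            sendViaAp.accept(deviceId, controlCode);
        }
    }

    public static void main(String[] args) {
        List<String> sounding = List.of("audio-output-device-A", "audio-output-device-B", "audio-output-device-C");
        // Pause-all button clicked: every sounding device receives the pause playback instruction.
        sendToAll(sounding, PAUSE_PLAYBACK,
                (deviceId, code) -> System.out.println("Send code " + code + " to " + deviceId));
    }
}
```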
  • In some embodiments, the mobile phone can also dynamically adjust the buttons in the audio control card (such as the above audio control card 702) or in the sounding device list (such as the above sounding device list 1302). For example, when the mobile phone learns that a certain audio output device (such as another mobile phone) is making a sound and that the service running on it is a call service, manually adjusting the volume of the other mobile phone through the audio control card may affect the call service on that mobile phone. In that case, the mobile phone may not display the volume drag bar, mute, or pause buttons when displaying the corresponding audio control card, or may set the volume drag bar, mute, or pause buttons to a non-interactive state. Of course, the mobile phone may also simply not display the audio control card corresponding to the other mobile phone.
  • For another example, when a sounding audio output device (such as a tablet computer) is running a video conference service, the mobile phone may not set the pause-all button 1503 and/or the mute-all button 1501 when displaying the above-mentioned sounding device list 1302, so as to prevent the user from interrupting the running video conference service by clicking the pause-all button 1503 or the mute-all button 1501.
  • Alternatively, the mobile phone may display the pause-all button 1503 and/or the mute-all button 1501 in the sounding device list, but when detecting that the user clicks the pause-all button 1503 or the mute-all button 1501, the mobile phone may not send the corresponding pause playback instruction or mute instruction to the tablet computer.
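  • The gating just described could be expressed as a simple policy check before a batch instruction is dispatched, as in the sketch below. The runningService attribute and the policy rules are assumptions used only to illustrate the idea.

```java
import java.util.List;

/** Minimal sketch of skipping devices whose running service should not be interrupted by batch controls. */
public class BatchControlPolicy {

    public record ControlledDevice(String deviceId, String runningService) { }

    /** Returns true if a pause-all / mute-all instruction may be sent to this device. */
    public static boolean mayInterrupt(ControlledDevice device) {
        // Do not interrupt call or video-conference services running on the sounding device.
        return !"call".equals(device.runningService()) && !"video-conference".equals(device.runningService());
    }

    public static void main(String[] args) {
        List<ControlledDevice> devices = List.of(
                new ControlledDevice("tablet-1", "video-conference"),
                new ControlledDevice("tv-1", "media-playback"));
        for (ControlledDevice device : devices) {
            if (mayInterrupt(device)) {
                System.out.println("Send pause playback instruction to " + device.deviceId());
            } else {
                System.out.println("Skip " + device.deviceId() + " (running " + device.runningService() + ")");
            }
        }
    }
}
```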
  • If it is detected that the user clicks the delete-all button 1502, the mobile phone can delete all audio control cards in the sounding device list 1302.
  • the mobile phone can hide the sounding device list 1302 or return to the upper-level interface of the sounding device list 1302 (such as the incoming call card 701).
  • Alternatively, after the mobile phone displays the sounding device list 1302, if the mobile phone does not receive any user operation on the audio control cards in the sounding device list 1302 within a preset time, the mobile phone can hide the sounding device list 1302 or return to the upper-level interface of the sounding device list 1302 (such as the incoming call card 701).
  • Alternatively, after the mobile phone displays the sounding device list 1302 in response to the user's operation of clicking the expand button 1301, a collapse button 1504 may also be set in the sounding device list 1302.
  • The icons of the collapse button 1504 and the expand button 1301 may be the same or different. If it is detected that the user clicks the collapse button 1504 in the sounding device list 1302, the mobile phone can hide the sounding device list 1302 or return to the upper-level interface of the sounding device list 1302 (such as the incoming call card 701).
  • It can be seen that, in the incoming call scenario or the outgoing call scenario, the mobile phone can display the audio control card of the corresponding audio output device, so that the user can, according to his or her own needs, directly perform audio control operations such as volume adjustment, pausing, or playing on the sounding audio output device in the corresponding audio control card, without manually operating each audio output device for audio control, thereby improving the user's audio experience.
  • In addition, the above embodiments are illustrated using the display layout of the mobile phone in the portrait (vertical screen) state as an example. It can be understood that when the mobile phone is in the landscape (horizontal screen) state, the mobile phone can also display the corresponding incoming call card or audio control card according to the above method, and those skilled in the art can set the display layout according to actual experience or actual application scenarios, which is not limited in this embodiment of the present application.
  • a mobile phone is taken as an example of the electronic device 101 in the communication system 200 to illustrate an audio control method provided by the embodiment of the present application.
  • any electronic device with a call function (such as a voice call function or a video call function) can perform audio control on the audio output device in the network according to the above method, and this embodiment of the present application does not impose any limitation on this.
  • the embodiment of the present application discloses an electronic device, including a processor, a memory connected to the processor, an input device, and an output device.
  • the input device and the output device can be integrated into one device, for example, a touch sensor (touch sensor or touch panel) can be used as an input device, a display can be used as an output device, and the touch sensor and display can be integrated into a touch screen (touch screen).
  • The above-mentioned electronic device may include: a touch screen 1601 including a touch sensor 1606 and a display screen 1607; one or more processors 1602; a memory 1603; one or more application programs (not shown); and one or more computer programs 1604. The above-mentioned components can be connected through one or more communication buses 1605.
  • a communication module may also be included in the electronic device.
  • The above-mentioned one or more computer programs 1604 are stored in the above-mentioned memory 1603 and configured to be executed by the one or more processors 1602. The one or more computer programs 1604 include instructions, and the instructions can be used to execute each step in the above-mentioned embodiments.
  • all relevant content of each step involved in the above method embodiment can be referred to the functional description of the corresponding physical device, and will not be repeated here.
  • the above-mentioned processor 1602 may specifically be the processor 110 shown in FIG. 4
  • the above-mentioned memory 1603 may specifically be the internal memory 121 shown in FIG. 4
  • the above-mentioned display screen 1607 may specifically be the display screen 194 shown in FIG. 4
  • the above-mentioned touch sensor 1606 may specifically be the touch sensor in the sensor module 180 shown in FIG. 4 , which is not limited in this embodiment of the present application.
  • Each functional unit in each embodiment of the embodiment of the present application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk, and other various media capable of storing program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

本申请提供一种音频控制方法及电子设备,涉及终端领域,可在去电或接收到来电的电子设备中统一控制发声设备的发声状态,提高用户的使用体验。该方法包括:电子设备检测到来电事件或者去电事件;响应于来电事件或者去电事件,电子设备显示与第一音频输出设备对应的第一音频控制卡片,第一音频输出设备与电子设备位于同一组网内;电子设备接收用户在第一音频控制卡片中输入的音频控制操作;响应于音频控制操作,电子设备向第一音频输出设备发送对应的音频控制指令。

Description

一种音频控制方法及电子设备
本申请要求于2021年08月25日提交国家知识产权局、申请号为202110982720.2、申请名称为“一种音频控制方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端领域,尤其涉及一种音频控制方法及电子设备。
背景技术
随着智能终端技术的发展,一个用户或家庭中往往具备多个电子设备。在一些场景中,当用户使用手机等终端接听电话时,可能会出现其他电子设备正在发声的情况。例如,当用户的手机接收到联系人“张三”的来电时,如果家中的电视或音箱等电子设备正在播放音视频,则这些发声设备播放的音视频可能会干扰用户在手机上接听的通话。
目前,在上述场景中,用户需要手动的逐个调整各个发声设备的音量。例如,用户在接收到联系人“张三”的来电后,为保证通话质量,用户可以手动暂停音箱播放的音频,再手动使用遥控器调节电视的音量,进而在手机上接听联系人“张三”的来电。显然,这种音频控制方法需要用户频繁的切换当前操作的焦点设备,过程较为繁琐,用户的使用体验不高。
发明内容
本申请提供一种音频控制方法及电子设备,可在接收到来电的电子设备中统一控制发声设备的发声状态,提高用户的使用体验。
为达到上述目的,本申请采用如下技术方案:
第一方面,本申请提供一种音频控制方法,包括:电子设备检测到来电事件或者去电事件;进而,响应于该来电事件或者去电事件,电子设备可显示与第一音频输出设备(第一音频输出设备与电子设备位于同一组网内)对应的第一音频控制卡片;后续,当电子设备接收到用户在第一音频控制卡片中输入的音频控制操作后,响应于该音频控制操作,电子设备可向第一音频输出设备发送对应的音频控制指令,从而控制第一音频输出设备上播放的音频开关或大小。
也就是说,在来电场景或去电场景中,电子设备可通过显示音频输出设备的音频控制卡片,使得用户可以按照自身需求直接在相应的音频控制卡片中对音频输出设备进行音量调整、暂停或播放等音频控制操作,无需手动操作各个音频输出设备进行音频控制,从而提高用户的音频使用体验。
在一种可能的实现方式中,在电子设备检测到来电事件之后,还包括:电子设备显示与来电事件对应的来电卡片,此时,第一音频控制卡片位于来电卡片内;或者,电子设备显示与来电事件对应的来电界面,此时,第一音频控制卡片位于来电界面内。也就是说,在来电场景中,电子设备可将组网内音频输出设备的音频控制卡显示在来电卡片或来电界面内。
在一种可能的实现方式中,在电子设备获取到去电事件之后,还包括:电子设备 显示与去电事件对应的拨号界面,此时,第一音频控制卡片位于拨号界面。也就是说,在去电场景中,电子设备可将组网内音频输出设备的音频控制卡显示在拨号界面内。
在一种可能的实现方式中,上述第一音频控制卡片包括静音按钮;当用户输入的音频控制操作为点击静音按钮时,对应的音频控制指令为静音指令。
在一种可能的实现方式中,上述第一音频控制卡片包括暂停按钮;当用户输入的音频控制操作为点击暂停按钮时,对应的音频控制指令为暂停播放指令。
在一种可能的实现方式中,上述第一音频控制卡片包括音量拖动条,音量拖动条上的滑块位于第一位置,用于指示第一音频输出设备的音量为第一音量值;当当用户输入的音频控制操作为将滑块移动至音量拖动条的第二位置时,对应的音频控制指令为将第一音频输出设备的音量调整为与第二位置对应的第二音量值。
在一种可能的实现方式中,上述第一音频控制卡片可以包括第一音频输出设备的设备标识、设备名称、正在播放的音视频文件名称或播放进度中的一项或多项。
在一种可能的实现方式中,在电子设备检测到来电事件或者去电事件之后,还包括:电子设备获取组网内N(N为大于0的整数)个音频输出设备中每个音频输出设备的发声状态;电子设备根据每个音频输出设备的发声状态确定正在发声的音频输出设备,正在发声的音频输出设备包括第一音频输出设备。例如,电子设备可通过AP等设备获取组网内各个音频输出设备的发声状态,或者,电子设备可通过P2P连接获取组网内各个音频输出设备的发声状态。
在一种可能的实现方式中,上述第一音频输出设备为N个音频输出设备中音量最大的音频输出设备。也就是说,电子设备可将来电或去电时正在发声的音量最大的音频输出设备的音频控制卡片呈现给用户,避免该音频输出设备的发声干扰用户的通话。
在一种可能的实现方式中,当组网内正在发声的音频输出设备还包括第二音频输出设时,方法还包括:电子设备显示展开按钮,展开按钮位于第一音频控制卡片内或位于第一音频控制卡片周围;响应于用户选择展开按钮的操作,电子设备可显示发声设备列表,该声设备列表中包括第一音频控制卡片以及与第二音频输出设备对应的第二音频控制卡片。也就是说,当组网中有多个发声的音频输出设备时,电子设备可显示正在发声的各个音频输出设备的音频控制卡片,使用户可以在发声设备列表中通过音频控制卡片单独控制正在发声的各个音频输出设备,提高用户在来电或去电场景下的使用体验。
在一种可能的实现方式中,上述发声设备列表中还包括第一批量管理按钮;上述方法还包括:响应于用户选择第一批量管理按钮的操作,电子设备可向第一音频输出设备和第二音频输出设备均发送静音指令。这样,用户可以批量管理组网内的多个音频输出设备静音。
在一种可能的实现方式中,上述发声设备列表中还包括第二批量管理按钮;上述方法还包括:响应于用户选择第一批量管理按钮的操作,电子设备向第一音频输出设备和第二音频输出设备均发送暂停播放指令。这样,用户可以批量管理组网内的多个音频输出设备暂停播放。
在一种可能的实现方式中,在电子设备显示与第一音频输出设备对应的第一音频控制卡片之后,还包括:若检测到电子设备接通与来电事件或去电事件对应的通话请求,则电子设备继续显示第一音频控制卡片,或隐藏第一音频控制卡片。也就是说,电子设备可在通话场景下显示音频输出设备的音频控制卡片,用户在使用电子设备与联系人通话的过程中也可以通过音频控制卡片对相应的音频输出设备进行音频控制。
示例性的,电子设备可在与上述通话请求对应的通话界面中显示第一音频控制卡片;和/或,电子设备可在通知中心中显示第一音频控制卡片。
在一种可能的实现方式中,在电子设备接收用户在第一音频控制卡片中输入的音频控制操作之后,还包括:电子设备可继续显示第一音频控制卡片,或者,电子设备可隐藏第一音频控制卡片。
第二方面,本申请提供一种电子设备,包括:显示屏、一个或多个处理器、一个或多个存储器、以及一个或多个计算机程序;其中,处理器与显示屏、存储器均耦合,上述一个或多个计算机程序被存储在存储器中,当电子设备运行时,该处理器执行该存储器存储的一个或多个计算机程序,以使电子设备执行上述任一方面所述的音频控制方法。
第三方面,本申请提供一种计算机可读存储介质,包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行上述任一方面所述的音频控制方法。
第四方面,本申请提供一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行上述任一方面所述的音频控制方法。
可以理解地,上述提供的第二方面所述的电子设备、第三方面所述的计算机可读存储介质,以及第四方面所述的计算机程序产品均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种通信系统的架构示意图;
图2A为本申请实施例提供的一种音频控制方法的应用场景示意图一;
图2B为本申请实施例提供的一种音频控制方法的应用场景示意图二;
图2C为本申请实施例提供的一种音频控制方法的应用场景示意图三;
图3为本申请实施例提供的一种音频控制方法的应用场景示意图四;
图4为本申请实施例提供的一种电子设备的结构示意图一;
图5为本申请实施例提供的一种电子设备中操作系统的架构示意图;
图6为本申请实施例提供的一种音频控制方法的交互示意图;
图7为本申请实施例提供的一种音频控制方法的应用场景示意图五;
图8为本申请实施例提供的一种音频控制方法的应用场景示意图六;
图9为本申请实施例提供的一种音频控制方法的应用场景示意图七;
图10为本申请实施例提供的一种音频控制方法的应用场景示意图八;
图11为本申请实施例提供的一种音频控制方法的应用场景示意图九;
图12为本申请实施例提供的一种音频控制方法的应用场景示意图十;
图13为本申请实施例提供的一种音频控制方法的应用场景示意图十一;
图14为本申请实施例提供的一种音频控制方法的应用场景示意图十二;
图15为本申请实施例提供的一种音频控制方法的应用场景示意图十三;
图16为本申请实施例提供的一种电子设备的结构示意图二。
具体实施方式
下面将结合附图对本实施例的实施方式进行详细描述。
本申请实施例提供的一种音频控制方法,可应用于图1所示的通信系统200中。其中,通信系统200可包括具有通话功能的电子设备101,以及一个或多个具有音频播放功能的音频输出设备102。
例如,电子设备101可以为手机、智能手表或平板电脑等安装有语音通话或视频通话APP的电子设备。又例如,音频输出设备102可以为电视、音箱或手机等设置有扬声器的电子设备。
仍如图1所示,电子设备101与音频输出设备102之间可以通过通信网络互联。示例性的,该通信网络可以是有线网络,也可以是无线网络。例如,上述通信网络可以是局域网(local area networks,LAN),也可以是广域网(wide area networks,
WAN),例如互联网。上述通信网络可使用任何已知的网络通信协议来实现,上述网络通信协议可以是各种有线或无线通信协议,诸如以太网、通用串行总线(universal serial bus,USB)、火线(FIREWIRE)、全球移动通讯系统(global system for mobile communications,GSM)、通用分组无线服务(general packet radio service,GPRS)、码分多址接入(code division multiple access,CDMA)、宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE)、蓝牙、无线保真(wireless fidelity,Wi-Fi)、NFC、基于互联网协议的语音通话(voice over Internet protocol,VoIP)、支持网络切片架构的通信协议或任何其他合适的通信协议。
也就是说,通信系统200内的各个电子设备101和音频输出设备102可按照一定的通信协议和组网策略组建网络(即组网),使得通信系统200内的各个设备之间可以互相通信。
示例性的,通信系统200中的电子设备101和音频输出设备102可接入路由器等接入点(access point,AP)提供的Wi-Fi网络,使得电子设备101和音频输出设备102之间通过Wi-Fi网络建立Wi-Fi连接。又例如,通信系统200中的电子设备101和音频输出设备102可登录同一账号(例如华为账号),进而通过网络侧的一个或多个服务器互联。
以电子设备101和音频输出设备102接入同一Wi-Fi网络举例,在本申请实施例中,电子设备101可获取当前Wi-Fi网络中音频输出设备102的发声状态,该发声状态可以包括是否发声、音量大小等音频参数。如果某一音频输出设备102正在发声(例如正在播放音视频文件),则电子设备101还可获取正在发声的音频输出设备102的音量大小、正在播放的音视频文件名称等音频参数。
示例性的,电子设备101接入AP提供的Wi-Fi网络后,可从AP获取到接入当前Wi-Fi网络的其他电子设备的设备标识、设备名称等参数。进而,电子设备101可分别与当前Wi-Fi网络内的其他电子设备建立通信连接,例如P2P连接。这样,电子 设备101可作为服务端(server),当前Wi-Fi网络内的其他电子设备可作为客户端(client)。电子设备101可使用相应的P2P连接与当前Wi-Fi网络中的各个电子设备交互,从各个电子设备中获取每个电子设备是否具有音频播放功能。如果某一电子设备具有音频播放功能,则电子设备101可将其作为一个音频输出设备102,进一步获取该音频输出设备102的发声状态。
或者,AP也可作为服务端(server)定期获取当前Wi-Fi网络中各个电子设备的设备标识、设备名称、是否具有音频播放功能等参数。当电子设备101接入AP提供的Wi-Fi网络后,电子设备101可从AP中获取当前Wi-Fi网络中音频输出设备102的发声状态。
当电子设备101接收到联系人(例如联系人Sam)的来电事件时,如图2A所示,电子设备101除了可以显示与该来电事件对应的来电卡片(或来电界面)201外,还可以根据当前各个音频输出设备102的发声状态显示正在发声的音频输出设备102的音频控制卡片202。例如,正在发声的音频输出设备102可以为客厅中的电视1,电视1正在播放电视剧A的第1集,音量大小为8级。此时,仍如图2A所示,电子设备101可在音频控制卡片202中显示音量拖动条(volume seekbar)203、静音按钮204以及暂停按钮205等控件。
这样,用户在电子设备101上接收到来电事件后,可直接在电子设备101显示的音频控制卡片202中控制当前组网内正在发声的音频输出设备102的发声状态,避免正在发声的音频输出设备102对通话产生干扰。
例如,当用户希望电视1的音量降低时,可拖动音频控制卡片202中音量拖动条203上的滑块,将滑块移动至与更小的音量等级(例如3级音量)对应的位置。进而,如图3所示,响应于用户拖动上述滑块的操作,电子设备101可生成对应的音频控制指令,该音频控制指令可用于指示将电视1的音量大小调整为3级音量。并且,电子设备101可通过AP将上述音频控制指令发送给电视1(即正在发声的音频输出设备102),使电视1响应该音频控制指令将当前的10级音量调整为3级音量,实现对正在发声的音频输出设备102的音频控制。
当然,用户还可以在上述音频控制卡片202中点击静音按钮204,以触发电视1静音。或者,用户还可以在上述音频控制卡片202中点击暂停按钮205,以触发电视1暂停播放当前的播放内容,本申请实施例对此不做任何限制。
或者,如图2B所示,当电子设备101接收到联系人(例如联系人Sam)的来电事件时,电子设备101还可以显示与当前Wi-Fi网络中各个音频输出设备102对应的音频控制卡片。例如,当前Wi-Fi网络中的音频输出设备102包括电视1、电视2以及音箱3。电子设备101接收到联系人Sam的来电事件后,无需判断哪些音频输出设备102正在发声,电子设备101可在当前的来电卡片201中显示电视1的音频控制卡片202、电视2的音频控制卡片206以及音箱3的音频控制卡片207。也就是说,当电子设备101获取到来电事件时,无论组网内的音频输出设备102是否在发声,电子设备101均可显示相应音频输出设备102的音频控制卡片,方便用户可以按照自身需求直接在相应的音频控制卡片中对音频输出设备102进行音频控制。
又或者,当电子设备101检测到用户向某一联系人或电话号码发送通话请求(即 去电事件)时,也可以按照上述方法显示音频输出设备102的音频控制卡片。示例性的,当用户在电子设备101中输入对联系人Sam的拨号操作后,如图2C所示,电子设备101可显示拨号界面208。并且,电子设备101可按照上述方法在拨号界面208中显示一个或多个音频输出设备102的音频控制卡片,例如正在发声的电视1的音频控制卡片202。这样,用户在去电场景下也能够通过音频控制卡片对音频输出设备102进行音频控制。
可以看出,无论是在接收通话请求(例如语音通话请求或视频通话请求)的来电场景,还是在发送通话请求的去电场景中,电子设备101均可通过显示一个或多个音频输出设备102的音频控制卡片,使得用户可以按照自身需求直接在相应的音频控制卡片中对音频输出设备102进行音量调整、暂停或播放等音频控制操作,无需手动操作各个音频输出设备102进行音频控制,从而提高用户的音频使用体验。
其中,上述音频控制操作可用于控制音频输出设备上播放的音频开关或大小。例如,音频控制操作可以为静音或暂停操作,以控制音频输出设备不再发声。又例如,音频控制操作可以为播放操作,以控制音频输出设备发声。又例如,音频控制操作可以为音量调整操作,以控制音频输出设备播放音频时的音量大小。
示例性的,仍以手机作为上述通信系统200中的电子设备101举例,图4示出了手机的结构示意图。
手机可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180等。
可以理解的是,本发明实施例示意的结构并不构成对手机的具体限定。在本申请另一些实施例中,手机可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
手机的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线 1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在手机上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在手机上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,手机的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得手机可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
手机通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot  light emitting diodes,QLED)等。在一些实施例中,手机可以包括1个或N个显示屏194,N为大于1的正整数。
手机可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,手机可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当手机在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。手机可以支持一种或多种视频编解码器。这样,手机可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储手机使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
手机可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。手机可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当手机接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。手机可以设置至少一个麦克风170C。在另一些实施例中,手机可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,手机还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
传感器模块180中可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等。
当然,手机还可以包括充电管理模块、电源管理模块、电池、按键、指示器以及1个或多个SIM卡接口等,本申请实施例对此不做任何限制。
上述手机的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明手机的软件结构。
示例性的,图5是本申请实施例的手机的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图5所示,应用程序层中可以安装通话,备忘录,浏览器,联系人,相机,图库,日历,地图,蓝牙,音乐,视频,短信息等APP(应用,application)。
其中,通话APP(也可称为拨号APP或电话APP)可以是手机出厂时预先设置的,也可以是应用市场中提供的具有通话功能的第三方应用。
或者,聊天APP等其他应用中也可以设置通话功能。例如,聊天APP中可以设置语音通话功能或视频通话功能。上述通话功能可以是基于4G、5G等网络通信协议的通话功能,也可以是基于VoIP(Voice over Internet Protocol)协议的通话功能。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图5所示,应用程序框架层可以包括通话服务(phone service)。通话服务可用于实现电子设备100的通话功能。
例如,通话服务可管理通话APP的通话状态(例如发起,接通,挂断等)。如果检测到用户使用通话APP与某一电话号码进行呼入或呼出操作,通话服务可监测本次通话的通话状态。如果检测到拨号失败的事件,通话服务还可以获取基站发送的拨号失败的原因,例如网络故障、对方拒接、超时未接通或对方忙音等。
在本申请实施例中,仍如图5所示,应用程序框架层还可以设置音频控制模块501。音频控制模块501可统一管理组网内各个音频输出设备的发声状态。示例性的,当手机加入某一Wi-Fi网络后,音频控制模块501可通过手机中的Wi-Fi模块与AP交互,通过AP获取组网内各个音频输出设备的发声状态。
例如,每当有设备接入AP时,接入AP的设备可将自身的设备标识、设备类型、是否支持音频播放功能等参数上报给AP。这样,AP可获知当前Wi-Fi网络中具体的音频输出设备。并且,当接入AP的设备开始播放音视频文件时,还可以将此时的音量大小、音视频文件的名称等参数上报给AP。这样,AP可实时获知当前Wi-Fi网络中各个音频输出设备是否在发声、发声时的音量大小等发声状态。当手机接入AP后,除了可以将自身的设备标识、设备类型等参数上报给AP外,手机的音频控制模块501还可以从AP获取当前Wi-Fi网络中各个音频输出设备的发声状态,例如是否在发声、发声时的音量大小等。
由于上述Wi-Fi网络中音频输出设备的发声状态可能会动态的发生变化,因此,音频控制模块501可以周期性的从AP获取当前Wi-Fi网络中各个音频输出设备的发声状态,以保证获取到当前Wi-Fi网络中各个音频输出设备最新的发声状态。
又例如,手机也可以通过Wi-Fi模块与Wi-Fi网络中的各个音频输出设备建立P2P连接。进而,音频控制模块501可基于相应的P2P连接分别与Wi-Fi网络中的各个音频输出设备交互,从而获取到当前Wi-Fi网络中各个音频输出设备的发声状态。
后续,当通话APP等应用接收到联系人的来电事件或向联系人发送通话请求后,通话APP可通过通话服务与音频控制模块501交互。例如,通话APP可通过通话服务从音频控制模块501中获取当前Wi-Fi网络中正在发声的音频输出设备的发声状态。进而,通话APP在显示与上述来电事件对应的来电卡片或来电界面的同时,还可以根据正在发声的音频输出设备的发声状态显示对应的音频控制卡片。音频控制卡片中可以设置对相关音频输出设备进行音量调整、暂停播放等音频控制操作的控件,使得用户可以在来电场景或去电场景下按照自身需求在音频控制卡片中对正在发声的音频输出设备进行音频控制。
在一些实施例中,通话APP接收到联系人的来电事件或向联系人发送通话请求后,还可以通知其他应用(例如智慧生活APP或者system UI等)从音频控制模块501中获取当前Wi-Fi网络中正在发声的音频输出设备的发声状态,进而显示对应的音频控制卡片。也就是说,显示音频控制卡片的应用可以是通话APP也可以是其他应用,本申请实施例对此不做任何限制。
如图5所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
其中,上述窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
上述内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
上述视图系统可用于构建应用程序的显示界面。每个显示界面可以由一个或多个控件组成。一般而言,控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、微件(Widget)等界面元素。
上述电话管理器用于提供手机201的通信功能。例如通话状态的管理(包括接通,挂断等)。
上述资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
上述通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,振动,指示灯闪烁等。
如图5所示,系统库可以包括多个功能模块。例如:图层整合器(SurfaceFlinger),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。SurfaceFlinger用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
如图5所示,内核层是硬件和软件之间的层。内核层至少包含摄像头驱动,音频驱动,传感器驱动等,本申请实施例对此不做任何限制。
以下仍以手机为上述通信系统200中的电子设备101举例,结合附图详细阐述本申请实施例提供的一种音频控制方法。
示例性的,如图6所示,本申请实施例提供的一种音频控制方法可以包括以下步骤S601-S607。
S601、手机与至少一个音频输出设备进行组网。
具体的,手机可以使用Wi-Fi、蓝牙或UWB(ultra wide band,超宽带)等一种或多种通信技术与至少一个音频输出设备进行组网,本申请实施例对此不作任何限制。
以Wi-Fi通信技术举例,手机等电子设备可使用SSID(Service Set Identifier,服 务集标识)接入AP提供的Wi-Fi网络,从而与Wi-Fi网络内的各个电子设备形成组网。例如,手机打开无线局域网络功能(或称为Wi-Fi功能)后,可搜索到附近AP提供的Wi-Fi网络1。当用户希望接入Wi-Fi网络1时,如果手机中保存有Wi-Fi网络1的SSID,则手机检测到Wi-Fi网络1的Wi-Fi信号后可自动向对应的AP发送已保存的SSID。或者,用户可以在手机显示的相关界面中手动输入接入Wi-Fi网络1的SSID,进而触发手机将用户输入的SSID发送至与Wi-Fi网络1对应的AP。
与Wi-Fi网络1对应的AP接收到手机发来的SSID后,可验证该SSID是否正确。如果手机发来的SSID正确,则AP可与手机通过Wi-Fi协议建立Wi-Fi连接。当Wi-Fi连接建立成功后,说明手机已接入AP提供的Wi-Fi网络1。相应的,如果手机发来的SSID不正确,则手机无法接入AP提供的Wi-Fi网络1。
类似的,Wi-Fi网络1中的其他设备也可按照上述方法接入AP提供的Wi-Fi网络1。例如,在手机接入Wi-Fi网络1之前,电视、音箱等音频输出设备也可以按照上述方法接入Wi-Fi网络1。这样,手机通过接入Wi-Fi网络1可与电视、音箱等音频输出设备形成一个组网。
可选的,手机还可以与电视、音箱等音频输出设备登录同一账号(例如华为账号)。服务器可以通过不同账号区分不同用户的电子设备。
接入同一Wi-Fi网络中的各个设备之间可通过AP进行通信。例如,手机需要向Wi-Fi网络1中的电视发送控制指令时,可以先将该控制指令发送至AP,再由AP将该控制指令发送至电视。或者,接入同一Wi-Fi网络中的任意两个设备之间也可以建立P2P连接。例如,手机接入上述Wi-Fi网络1后,可以获取到Wi-Fi网络1中其他电子设备的设备标识,例如MAC地址等。进而,手机可根据获取到的设备标识与对应的电子设备建立Wi-Fi Direct连接(即Wi-Fi直连)。这样,手机可基于Wi-Fi直连与Wi-Fi网络1中的其他电子设备进行通信,不需要将AP作为中转设备与Wi-Fi网络1中的其他电子设备进行通信。
当然,手机也可以通过Wi-Fi或蓝牙等功能与电视、音箱等音频输出设备直接建立P2P连接,进而形成组网,本申请实施例对此不做任何限制。
S602、手机获取组网中一个或多个音频输出设备的发声状态。
仍以手机接入Wi-Fi网络1举例,手机接入Wi-Fi网络1后,可向AP发送查询请求1,查询请求1用于请求获取当前Wi-Fi网络1中一个或多个音频输出设备的发声状态。进而,AP可响应上述查询请求1将当前Wi-Fi网络1中一个或多个音频输出设备的发声状态发送给手机。
示例性的,当各个设备接入Wi-Fi网络1时,可将自身的设备标识、设备类型以及是否支持音频播放功能等参数上报给AP。当某一设备支持音频播放功能时,AP可确定该设备为一个音频输出设备。进而,AP可定期获取当前Wi-Fi网络中一个或多个音频输出设备的发声状态。例如,AP可周期性的主动从各个音频输出设备中查询当前的音频输出设备是否在发声,发声时的音量大小等音频参数,这些音频参数可反映出对应音频输出设备的发声状态。又例如,当Wi-Fi网络1中的某一音频输出设备开始发声(或停止发声)时,可主动向AP上报当前正在发声以及发声时的音量大小等音频参数。
在一些实施例中,当某一设备(例如手机、电视等)接入Wi-Fi网络1后,该设备可使用预设的私有协议与AP通信。其中,该私有协议可以为RTSP(Real Time Streaming Protocol,实时流传输协议)等类型的协议。如表1所示,上述私有协议中可以设置字段1-字段4等不同的字段。例如,字段1用于指示当前的音量等级N,字段2用于指示当前的播放进度为X/Y/Z,字段3用于指示当前播放的音视频文件的封面,字段4用于指示当前播放的音视频文件的标题。其中,一些字段中的内容可以为空。本领域技术人员也可以按照实际需要设置更多或更少的字段用于携带不同的音频参数。
表1
字段 内容 说明
字段1 Volume:N 音量等级
字段2 Progress:X/Y/Z 播放进度
字段3 Data 封面
字段4 Data 标题
Wi-Fi网络1中的各个音频输出设备可按照上述私有协议将自身的音频参数携带在各个字段中发送给AP。这样,AP中可以存储当前Wi-Fi网络1中所有音频输出设备实时的音频参数。后续,当AP接收到手机发送的查询请求1(例如握手请求)后,AP也可按照上述私有协议将各个音频输出设备的音频参数携带在数据包中发送给手机。手机通过解析AP发送的数据包中的各个字段,可以获取到当前Wi-Fi网络1中一个或多个音频输出设备的发声状态。
示例性的,手机的Wi-Fi模块从AP接收到当前Wi-Fi网络1中一个或多个音频输出设备的发声状态后,可以通过Wi-Fi模块将获取到的音频输出设备的发声状态发送至应用程序框架层中的音频控制模块(例如图5中的音频控制模块501)。进而,音频控制模块可保存并维护当前Wi-Fi网络1中一个或多个音频输出设备的发声状态。
在一些实施例中,手机可以在成功接入Wi-Fi网络1后,周期性的按照上述方法从AP获取当前Wi-Fi网络1中一个或多个音频输出设备的发声状态,从而保证手机能够获取到Wi-Fi网络1内各个音频输出设备最新的发声状态。
或者,手机在成功接入Wi-Fi网络1后也可以暂时先不执行上述步骤S602,当后续手机需要各个音频输出设备的发声状态时,可再按照上述方法获取当前Wi-Fi网络1中一个或多个音频输出设备的发声状态,后续实施例中将对此进行详细阐述,故此处不予赘述。
在另一些实施例中,手机作为Wi-Fi网络1中的一个设备也可以向AP发送自身的设备标识、设备类型以及是否支持音频播放功能等参数。并且,手机也可以作为音频输出设备向AP上报发声时的音量大小等音频参数。这样,当Wi-Fi网络中的其他电子设备(例如智能手表等)从AP获取Wi-Fi网络1中一个或多个音频输出设备的发声状态时,AP也可将手机作为一个音频输出设备,将手机的发声状态发送给智能手表等上述其他电子设备。
在另一些实施例中,手机也可以从AP中获取Wi-Fi网络1中所有设备的设备标 识、设备类型、是否支持音频播放功能、是否在发声以及音量大小等参数。进而,手机可从获取到的各个设备的参数中筛选出Wi-Fi网络1中的音频输出设备,并根据音频输出设备是否发声以及发生时的音量大小等音频参数确定出音频输出设备的发声状态。此时,Wi-Fi网络1中的AP无需再确定当前Wi-Fi网络1中的音频输出设备以及音频输出设备的发声状态。
上述实施例中是以手机通过AP获取Wi-Fi网络内音频输出设备的发声状态举例说明的,在另一些实施例中,手机也可以直接与Wi-Fi网络内的音频输出设备交互,从而获取音频输出设备的发声状态。例如,手机接入上述Wi-Fi网络1后,可以从AP获取到当前接入Wi-Fi网络1的其他设备的地址等参数。进而,手机可按照获取到的地址与Wi-Fi网络1中的其他设备建立Wi-Fi直连。以Wi-Fi网络1中的电视举例,手机与电视建立Wi-Fi直连后,仍然可以按照上述私有协议与电视进行通信。
例如,手机可以通过Wi-Fi直连通道向电视发送查询请求1(例如握手请求)。进而,电视可响应该查询请求1,将自身的音量等级、播放进度、音视频文件的封面和标题等音频参数携带在相应的字段中,最终以数据包的形式发送给手机。这样,手机通过解析电视发来的数据包中的各个字段,可以获取到电视当前的发声状态。类似的,手机还以通过与其他音频输出设备之间的Wi-Fi直连通道获取到其他音频输出设备的发声状态,本申请实施例对此不再赘述。
另外,上述实施例中是以手机等设备接入Wi-Fi网络举例说明的,可以理解的是,手机以及各个音频输出设备还可以通过其他组网方式(例如蓝牙、UWB等)建立网络连接。并且,在其他组网方式中,手机也可以获取到组网内一个或多个音频输出设备的发声状态,本申请实施例对此不做任何限制。
S603、手机接收到联系人1的来电事件。
当手机具有通话功能时,手机在运行过程中可能会接收到联系人的来电事件。例如,手机中的通话APP可以接收到通讯录中某一联系人发来的通话请求。当然,通话APP也可以接收陌生号码发来的通话请求。
或者,手机中具有通话功能的其他应用也可以接收联系人的来电事件。例如,手机中的聊天APP可以接收联系人发来的语音通话请求或视频通话请求。
以手机中的通话APP接收到联系人1发来的通话请求举例,通话APP接收到联系人1发来的通话请求后,可向应用程序框架层中的通话服务发送对应的来电事件(例如来电事件1)。进而,通话服务可监测用户是否执行接听、拒接或外放等操作,从而管理本次通话的通话状态。
S604、手机显示与上述来电事件对应的来电卡片(或来电界面),并显示正在发声的音频输出设备A的音频控制卡片1。
仍以手机中的通话服务接收到上述来电事件1举例,通话服务接收到通话APP发送的来电事件1后,可调用view(视图)等模块绘制对应的来电卡片(或来电界面)。其中,来电卡片是指以卡片的形式向用户通知本次来电事件1。来电卡片中可以包括接听按钮、拒接按钮、联系人的电话号码或名称等信息。来电界面是指在手机中以全屏的形式向用户通知本次来电事件1,类似的,来电界面中也可以包括接听按钮、拒接按钮、联系人的电话号码或名称等信息。
示例性的,用户可以在设置APP中设置来电时以来电卡片或来电界面的方式显示来电事件。或者,手机可结合接收到来电事件时的具体应用场景自动确定显示来电卡片或来电界面。例如,如果接收来电事件1时手机正在显示桌面,则通话服务可调用view绘制相应的来电界面。如果接收来电事件1时手机正在运行视频APP播放视频,则通话服务可调用view绘制相应的来电卡片,避免全屏显示来电界面干扰用户在视频APP中观看视频。
如图7所示,通话服务接收到上述来电事件1后,可调用view绘制来电卡片701。进而由通话APP将绘制出的来电卡片701显示在手机当前的显示界面700中。来电卡片701中可以包括与来电事件1对应的联系人的头像、名称以及电话号码等信息,还可以包括接听按钮701a和拒接按钮701b等控件。用户可以点击接听按钮701a触发手机接通与联系人1的通话;或者,用户可以点击拒接按钮701b触发手机拒绝联系人1的本次通话请求。
在本申请实施例中,手机中的通话服务接收到上述来电事件1后,除了按照上述方法绘制并显示对应的来电卡片701外,还可以从音频控制模块(例如图5中的音频控制模块501)中获取正在发声的一个或多个音频输出设备的发声状态。例如,通话服务接收到上述来电事件1后,可以向音频控制模块501发送查询请求2,查询请求2用于获取当前Wi-Fi网络1中正在发声的音频输出设备的发声状态。音频控制模块501通过上述步骤S602可获取到当前Wi-Fi网络1中一个或多个音频输出设备最新的发声状态。那么,音频控制模块501接收到上述查询请求2后,可查询当前是否有正在发声的音频输出设备。
或者,音频控制模块501接收到上述查询请求2后,可按照上述步骤S602中的相关方法主动从AP获取当前Wi-Fi网络1中一个或多个音频输出设备的发声状态。例如,音频控制模块501接收到上述查询请求2后,可通过手机的Wi-Fi模块与AP交互,从AP获取当前Wi-Fi网络1中一个或多个音频输出设备的发声状态。进而,音频控制模块501可根据获取到的音频输出设备的发声状态确定当前是否有正在发声的音频输出设备。
以音频输出设备A为正在发声的音频输出设备举例,音频控制模块501可将音频输出设备A的设备标识、设备名称、当前的音量大小等音频参数发送给通话服务。可选的,音频控制模块501还可以将音频输出设备A正在播放的音视频文件的名称(例如电视剧A第8集)、播放进度等音频参数发送给通话服务。进而,通话服务可根据接收到的音频参数调用view绘制音频输出设备A的音频控制卡片702。仍如图7所示,音频控制卡片702中可以包括音频输出设备A的设备标识702a、设备名称702b。音频控制卡片702中还可以包括音量拖动条702c,音量拖动条702c上滑块所在的位置用于指示当前音频输出设备A的音量大小。音频控制卡片702中还可以包括静音按钮702d和暂停(或播放)按钮702e。view绘制出上述音频控制卡片702后,可由通话APP将音频控制卡片702显示在显示界面700中。
例如,仍如图7所示,通话APP可将音频控制卡片702作为来电卡片701中的一部分,在显示来电卡片701时一同显示音频控制卡片702。用户可通过音频控制卡片702获知手机附近的音频输出设备A,并且,用户可通过音频控制卡片702直接对 音频输出设备A进行音频控制(例如调整音量、静音等)。
或者,如图8所示,通话APP获取到绘制出的来电卡片701和音频控制卡片702后,可将来电卡片701和音频控制卡片702作为两个独立的卡片显示在手机当前的显示界面700中。这样,用户也可以在接收到来电时获知手机附近的音频输出设备A,并对音频输出设备A进行音频控制。
又或者,显示来电卡片701的应用和显示音频控制卡片702的应用也可以不相同。例如,通话APP获取到上述来电事件1后,一方面可按照上述方法绘制并显示对应的来电卡片701,另一方面可触发预设应用从上述音频控制模块501中获取正在发声的一个或多个音频输出设备的发声状态。进而,上述预设应用可根据正在发声的音频输出设备的发声状态绘制并显示对应的音频控制卡片702。这样,用户也可以在来电时同时在显示界面中看到对应的来电卡片和音频控制卡片。
In some other embodiments, after the call APP sends incoming call event 1 to the call service, it may trigger the call service to invoke view to draw incoming call interface 901 corresponding to incoming call event 1. As shown in FIG. 9, similar to incoming call card 701, incoming call interface 901 may also include information such as the avatar, name, and phone number of the contact corresponding to incoming call event 1, and may further include controls such as an answer button and a decline button. Unlike incoming call card 701, incoming call interface 901 is displayed in full-screen form on the phone's display.
In this scenario, similar to the foregoing embodiments, after receiving incoming call event 1, the call service can still obtain, from audio control module 501, the sounding states of one or more audio output devices that are currently making sound. Still taking audio output device A as the device making sound, audio control module 501 may send audio parameters such as the device identifier, device name, and current volume of audio output device A to the call service, triggering the call service to invoke view to draw audio control card 702 of audio output device A based on the received audio parameters. Then, still as shown in FIG. 9, the call APP may display audio control card 702 in incoming call interface 901. In this way, when a call comes in, the user learns of audio output device A near the phone through audio control card 702, and can directly perform audio control on audio output device A in audio control card 702 (for example, adjusting the volume or muting).
It should be noted that the phone may first display the incoming call card (or incoming call interface) of the incoming call event and then display the audio control card of audio output device A; or first display the audio control card of audio output device A and then display the incoming call card (or incoming call interface); or display the incoming call card (or incoming call interface) and the audio control card of audio output device A at the same time. This is not limited in this embodiment of this application.
S605: In response to an audio control operation input by the user in audio control card 1, the phone sends a corresponding audio control instruction to audio output device A.
Still taking audio output device A as the device that is making sound, as shown in FIG. 7, after the phone displays audio control card 702 of audio output device A in incoming call card 701, the user can input an audio control operation for audio output device A in audio control card 702. The audio control operation can be used to control the on/off state or the volume of the audio played on the audio output device. For example, the audio control operation may be a volume adjustment operation, a mute operation, or a pause operation.
For example, when the user wants to mute audio output device A, the user can tap mute button 702d in audio control card 702. As another example, when the user wants audio output device A to pause the audio/video file it is playing, the user can tap pause button 702e in audio control card 702. As yet another example, when the user wants to lower (or raise) the volume of audio output device A, the user can drag the slider on volume slider bar 702c in audio control card 702, and the volume of audio output device A is adjusted according to the position of the slider on volume slider bar 702c.
After detecting that the user inputs a corresponding audio control operation in audio control card 702, the phone may continue to display audio control card 702 in incoming call card 701, or may automatically hide audio control card 702 in incoming call card 701. This is not limited in this embodiment of this application.
For example, if the phone detects that the user taps mute button 702d in audio control card 702, the phone may generate the corresponding audio control instruction (that is, a mute instruction). The phone can then send the mute instruction to audio output device A through the AP. For example, the phone may carry the device identifier of audio output device A in the mute instruction and send the mute instruction to the AP. After receiving the mute instruction, the AP can parse out the device identifier of audio output device A, which indicates that the phone needs the mute instruction to be delivered to audio output device A. The AP can then forward the mute instruction to audio output device A.
Similar to the interaction between the phone and the AP (or the audio output device) in step S602, when the phone sends an audio control instruction such as a mute instruction to the AP, it can also carry the corresponding audio control instruction in a preset field according to the preset private protocol and send it to the AP. For example, if a mute instruction needs to be sent to the AP, the phone may add the value 01 to preset field A; when the AP parses field A and finds the value 01, the corresponding mute instruction can be executed. As another example, if a pause instruction needs to be sent to the AP, the phone may add the value 02 to preset field A; when the AP parses field A and finds the value 02, the corresponding pause instruction can be executed.
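A minimal Kotlin sketch of this field-based encoding is given below. It is illustrative only: "field A" is modelled here as the last byte of the packet and the packet layout is an assumption, since the actual layout of the private protocol is not disclosed.

    enum class AudioCommand(val fieldAValue: Int) {
        MUTE(0x01),     // field A = 01: mute instruction
        PAUSE(0x02)     // field A = 02: pause instruction
    }

    // Packet: idLen | target device identifier | field A (command code).
    fun buildControlPacket(targetDeviceId: String, command: AudioCommand): ByteArray {
        val id = targetDeviceId.toByteArray(Charsets.UTF_8)
        return byteArrayOf(id.size.toByte()) + id + command.fieldAValue.toByte()
    }

    // Decoding side, executed on the AP (or on the device itself over Wi-Fi Direct).
    fun decodeControlPacket(packet: ByteArray): Pair<String, AudioCommand?> {
        val idLen = packet[0].toInt()
        val deviceId = String(packet, 1, idLen, Charsets.UTF_8)
        val code = packet[1 + idLen].toInt()
        return deviceId to AudioCommand.values().firstOrNull { it.fieldAValue == code }
    }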
Alternatively, if a P2P connection (for example, Wi-Fi Direct) is established between the phone and audio output device A, the phone can send the mute instruction directly to audio output device A. For example, the phone can carry the corresponding audio control instruction in a preset field according to the preset private protocol and send it to audio output device A.
Similarly, if the phone detects that the user inputs another audio control operation in audio control card 702, the phone can also send the corresponding audio control instruction to audio output device A according to the method described above.
S606: Audio output device A adjusts its own sounding state according to the audio control instruction.
Still taking the audio control instruction sent by the phone being a mute instruction as an example, after receiving the mute instruction from the phone, audio output device A can execute the mute instruction. In this way, audio output device A changes its sounding state from playing a certain audio/video file to a muted state.
Similarly, if audio output device A receives another audio control instruction sent by the phone (for example, a pause instruction or an instruction to lower the volume), audio output device A can execute the corresponding audio control instruction accordingly, thereby adjusting its own sounding state as required by the user.
It can be seen that after receiving an incoming call event, the phone can obtain the sounding states of nearby audio output devices that are making sound, and by displaying the audio control cards of those devices, the user can directly perform audio control operations such as volume adjustment, pause, or play on the sounding devices in the corresponding audio control cards as needed, without manually operating each audio output device. This improves the user's audio experience in the incoming call scenario.
Similarly, in an outgoing call scenario where the phone sends a voice or video call request, the sounding states of nearby audio output devices that are making sound can also be obtained according to the above method, and by displaying the audio control cards of those devices, the user can directly perform audio control operations such as volume adjustment, pause, or play on them in the corresponding audio control cards as needed, thereby improving the user's audio experience in the outgoing call scenario.
Of course, in some incoming or outgoing call scenarios, the user may not need to perform audio control on nearby audio output devices. For example, the phone may learn, according to the method in the foregoing embodiments, that audio output device A is making sound, and display the audio control card of audio output device A. If the user considers that the current volume of audio output device A is low and will not interfere with the call, the user may simply not input any audio control operation in the audio control card of audio output device A, in which case the phone is not triggered to perform audio control on audio output device A.
For example, as shown in FIG. 10, after the phone displays audio control card 702 of audio output device A in incoming call card 701, if no operation on audio control card 702 is received from the user within a preset time, the phone can automatically hide audio control card 702 in incoming call card 701. The phone can then adaptively adjust the size or layout of incoming call card 701, which is not limited in this embodiment of this application.
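This timeout behavior could be sketched, purely as an assumption, with a coroutine-based delay; the AudioControlCardView interface and the 5-second value below are invented for illustration and are not part of the described method.

    import kotlinx.coroutines.CoroutineScope
    import kotlinx.coroutines.Job
    import kotlinx.coroutines.delay
    import kotlinx.coroutines.launch

    interface AudioControlCardView {
        val userInteracted: Boolean   // whether any operation was received on the card
        fun hide()
    }

    fun CoroutineScope.autoHideAfterTimeout(card: AudioControlCardView, timeoutMs: Long = 5_000): Job =
        launch {
            delay(timeoutMs)                          // wait for the preset time
            if (!card.userInteracted) card.hide()     // no operation on the card: hide it
        }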
Alternatively, as shown in FIG. 11, after the phone displays audio control card 702 of audio output device A in incoming call card 701, when the user does not need to perform audio control on audio output device A, the user can input a delete operation (for example, a leftward swipe) on audio control card 702. In response to the delete operation, the phone can hide audio control card 702 in incoming call card 701.
Alternatively, after the phone displays audio control card 702 of audio output device A in incoming call card 701, if the phone detects that the user operates answer button 701a or decline button 701b in incoming call card 701, indicating that the user may not need to perform audio control on audio output device A, the phone may also hide audio control card 702 in incoming call card 701.
In some other embodiments, still taking audio output device A as the device that is making sound, after the user answers the call corresponding to incoming call event 1, the phone can continue to display audio control card 702 of audio output device A.
For example, if the phone detects that the user taps answer button 701a in incoming call card 701, as shown in (a) of FIG. 12, the phone can display call interface 1201 for the call with contact 1. Call interface 1201 may include controls such as mute and hang up. In this case, the phone can continue to display audio control card 702 of audio output device A in call interface 1201, so that the user can also perform audio control on audio output device A through audio control card 702 while talking with the contact on the phone.
As another example, if the phone detects that the user taps answer button 701a in incoming call card 701, as shown in (b) of FIG. 12, and then detects that the user opens notification center 1202 of the phone (which may also be called the pull-down menu, notification bar, or control center), the phone can also continue to display audio control card 702 of audio output device A in notification center 1202. In this way, the user can likewise perform audio control on audio output device A through audio control card 702 during the call with the contact.
In some other embodiments, after the user ends the call, the phone can prompt the user to restore the audio playback state of the audio output device (for example, audio output device A). For example, the phone can display a dialog box asking the user whether to restore the audio playback state of audio output device A to the state before the incoming call. If the phone detects that the user taps the confirm button in the dialog box, the phone can send a corresponding audio control instruction to audio output device A so that it restores the audio playback state it had before the incoming call. Alternatively, after the call ends, the phone can display the current audio control card of audio output device A, so that the user can perform audio control on audio output device A in that card.
The foregoing embodiments are described using the example in which the nearby device found to be making sound when the phone receives an incoming call is audio output device A. In some embodiments, there may be multiple audio output devices making sound when the phone receives an incoming call. For example, after receiving incoming call event 1, the phone may find that the devices making sound include audio output device A and audio output device B. In this case, the phone can display, in the incoming call card (or incoming call interface), the audio control card of the device with the higher volume (for example, audio output device A) according to the respective sounding states of audio output device A and audio output device B.
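A minimal sketch of this selection, under the assumption of the illustrative type and field names below (they are not defined by this application), simply picks the sounding device with the highest volume:

    data class SoundingDevice(val name: String, val volume: Int, val isPlaying: Boolean)

    // Returns the device whose card is shown in the incoming call card, or null if nothing is sounding.
    fun deviceShownFirst(devices: List<SoundingDevice>): SoundingDevice? =
        devices.filter { it.isPlaying }.maxByOrNull { it.volume }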
Alternatively, still taking audio output device A and audio output device B as the devices making sound, after receiving incoming call event 1, if audio output device A and audio output device B are making sound, the phone may also display both the audio control card corresponding to audio output device A and the audio control card corresponding to audio output device B in the incoming call card (or incoming call interface).
Alternatively, as shown in FIG. 13, after receiving incoming call event 1, if the phone finds that there are multiple audio output devices making sound, the phone can display the audio control card 702 of one of them (for example, audio output device A) in incoming call card 701. In this case, incoming call card 701 (or audio control card 702) may further include expand button 1301, which is used to trigger the phone to display the audio control cards of the other devices that are making sound. Subsequently, if the phone detects that the user inputs an expand operation (for example, a tap) on expand button 1301, the phone can proceed to step S607 below.
Optionally, still as shown in FIG. 13, when multiple audio output devices are making sound, the phone can display in incoming call card 701 the audio control card of the device with the currently highest volume (for example, audio control card 702). In this way, the user can promptly control, in incoming call card 701, the audio output device that interferes most with the call.
Of course, if incoming call card 701 reserves space for two or more audio control cards, then when multiple audio output devices are making sound, the phone can also display the audio control cards of two or more of them in incoming call card 701, which is not limited in this embodiment of this application.
S607 (optional): In response to an expand operation input by the user in audio control card 1, the phone displays the audio control cards of N audio output devices that are making sound, where N is an integer greater than 1.
Still as shown in FIG. 13, when expand button 1301 is provided in incoming call card 701 of incoming call event 1, if the phone detects that the user inputs an expand operation (for example, a tap) on expand button 1301, then as shown in FIG. 14, the phone can display the audio control cards of all audio output devices that are currently making sound. For example, when the phone learns that the devices currently making sound are audio output device A, audio output device B, and audio output device C, the phone can display sounding device list 1302 in response to the user tapping expand button 1301. Sounding device list 1302 may include audio control card 702 of audio output device A, audio control card 1303 of audio output device B, and audio control card 1304 of audio output device C. Alternatively, sounding device list 1302 may not include audio control card 702 of audio output device A.
In this way, the user can individually control each sounding audio output device through its audio control card in sounding device list 1302, improving the user experience in the incoming call scenario.
For example, if the user feels that audio output device A is playing audio too loudly, the user can input a pause operation for audio output device A in audio control card 702, triggering audio output device A to pause playback. Meanwhile, if the user feels that the picture played by audio output device B does not affect the call but the audio it plays is too loud, the user can input a mute operation for audio output device B in audio control card 1303, triggering audio output device B to play muted. Meanwhile, if the user feels that the sound from audio output device C has essentially no effect on the call, the user can simply not input any audio control operation for audio output device C in audio control card 1304.
Optionally, the phone may display sounding device list 1302 in a modal, non-modal, or semi-modal form, which is not limited in this embodiment of this application.
After the user inputs an audio control operation in an audio control card, the specific process in which the phone adjusts the sounding state of the corresponding audio output device in response to that operation is similar to the related description in steps S605-S606, and is not repeated here.
In addition, after detecting that the user inputs an expand operation (for example, a tap) on expand button 1301, the phone can obtain, from audio control module 501, the stored sounding states of all audio output devices currently making sound. Alternatively, after detecting the expand operation on expand button 1301, the phone can obtain from the AP the latest sounding states of all audio output devices in the current Wi-Fi network, and then determine the sounding states of all devices currently making sound. The phone can then display sounding device list 1302 based on the sounding states of all audio output devices currently making sound.
In some other embodiments, after the phone detects that the user inputs an expand operation on expand button 1301, as shown in FIG. 15, in addition to displaying sounding device list 1302 as described above, the phone can also display management buttons for batch-managing the audio control cards in sounding device list 1302. For example, the management buttons may include one or more of mute all button 1501, delete all button 1502, or pause all button 1503.
For example, if the phone detects that the user taps mute all button 1501, the phone can send a mute instruction through the AP to all audio output devices currently making sound (for example, audio output device A, audio output device B, and audio output device C). In this way, all devices currently making sound can execute the mute instruction and play muted. Similarly, if the phone detects that the user taps pause all button 1503, the phone can send a pause instruction through the AP to all audio output devices currently making sound (for example, audio output device A, audio output device B, and audio output device C). In this way, all devices currently making sound can execute the pause instruction and pause playback.
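As an illustrative sketch only, the batch buttons can be modelled as sending the same instruction to every sounding device; this reuses AudioCommand and buildControlPacket from the earlier private-protocol sketch, and sendViaAp is an assumed stand-in for the transmission through the AP.

    fun sendToAllSoundingDevices(
        soundingDeviceIds: List<String>,
        command: AudioCommand,                 // AudioCommand.MUTE for "mute all", AudioCommand.PAUSE for "pause all"
        sendViaAp: (ByteArray) -> Unit
    ) {
        soundingDeviceIds.forEach { id -> sendViaAp(buildControlPacket(id, command)) }
    }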
In some scenarios, the phone can also dynamically adjust the buttons in an audio control card (for example, audio control card 702) or a sounding device list (for example, sounding device list 1302). For example, when the phone learns that a certain audio output device (for example, another phone) is making sound and the service it is running is a call service, operations such as the user manually adjusting the other phone's volume through the audio control card might affect the call on that phone. In that case, when displaying the corresponding audio control card, the phone may omit the volume slider bar and the mute and pause buttons, or set the volume slider bar and the mute and pause buttons to a non-interactive state. Of course, the phone may also simply not display the audio control card corresponding to the other phone.
As another example, when the phone learns that a certain audio output device (for example, a tablet) is making sound and the service it is running is a video conference service, the phone may also omit pause all button 1503 and/or mute all button 1501 when displaying sounding device list 1302, so as to prevent the user from interrupting the running video conference by tapping pause all button 1503 or mute all button 1501. Alternatively, the phone may display pause all button 1503 and/or mute all button 1501 in the sounding device list, but when it detects that the user taps pause all button 1503 or mute all button 1501, the phone may refrain from sending the corresponding pause instruction or mute instruction to the tablet.
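A minimal sketch of this control gating is given below; the RunningService values and the CardControls flags are assumptions for illustration, as the embodiment only gives the call and video-conference examples.

    enum class RunningService { MEDIA_PLAYBACK, PHONE_CALL, VIDEO_CONFERENCE }

    data class CardControls(val volumeBar: Boolean, val mute: Boolean, val pause: Boolean)

    fun controlsFor(service: RunningService): CardControls = when (service) {
        RunningService.PHONE_CALL       -> CardControls(volumeBar = false, mute = false, pause = false) // do not disturb the remote call
        RunningService.VIDEO_CONFERENCE -> CardControls(volumeBar = true,  mute = false, pause = false) // no mute/pause for a conference
        RunningService.MEDIA_PLAYBACK   -> CardControls(volumeBar = true,  mute = true,  pause = true)
    }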
As another example, if the phone detects that the user taps delete all button 1502, indicating that the user does not need to perform audio control on the devices that are making sound, the phone can delete all audio control cards from sounding device list 1302. Alternatively, after detecting that the user taps delete all button 1502, the phone can hide sounding device list 1302 or return to the upper-level interface of sounding device list 1302 (for example, incoming call card 701).
Alternatively, after the phone displays sounding device list 1302, if no operation on any audio control card in sounding device list 1302 is received from the user within a preset time, the phone can hide sounding device list 1302 or return to its upper-level interface (for example, incoming call card 701).
Alternatively, after the phone displays sounding device list 1302 in response to the user tapping expand button 1301, still as shown in FIG. 15, sounding device list 1302 may include collapse button 1504 corresponding to expand button 1301. The icons of collapse button 1504 and expand button 1301 may be the same or different. If the phone detects that the user taps collapse button 1504 in sounding device list 1302, the phone can hide sounding device list 1302 or return to its upper-level interface (for example, incoming call card 701).
It can be seen that, in an incoming or outgoing call scenario, when multiple audio output devices near the phone are making sound, the phone can also display the audio control cards of those devices, so that the user can directly perform audio control operations such as volume adjustment, pause, or play on the sounding devices in the corresponding audio control cards as needed, without manually operating each audio output device, thereby improving the user's audio experience.
It should be noted that, in the foregoing embodiments, the display layouts used when the phone displays the incoming call card or the audio control card are described with the phone in portrait orientation. It can be understood that when the phone is in landscape orientation, the phone can also display the corresponding incoming call card or audio control card according to the above method; those skilled in the art can design the display layout based on practical experience or the actual application scenario, which is not limited in this embodiment of this application.
In addition, the foregoing embodiments use a phone as electronic device 101 in communication system 200 to explain the audio control method provided in the embodiments of this application. It can be understood that any electronic device with a call function (for example, a voice call function or a video call function) can perform audio control on the audio output devices in the network according to the above method, which is not limited in this embodiment of this application.
An embodiment of this application discloses an electronic device, including a processor, and a memory, an input device, and an output device connected to the processor. The input device and the output device may be integrated into one device; for example, a touch sensor (touch sensor or touch panel) may be used as the input device, a display (display) may be used as the output device, and the touch sensor and the display may be integrated into a touchscreen (touch screen).
As shown in FIG. 16, the electronic device may include: touchscreen 1601, where touchscreen 1601 includes touch sensor 1606 and display 1607; one or more processors 1602; memory 1603; one or more application programs (not shown); and one or more computer programs 1604. The foregoing components may be connected through one or more communication buses 1605. Of course, the electronic device may also include other components such as a communication module.
The one or more computer programs 1604 are stored in memory 1603 and are configured to be executed by the one or more processors 1602. The one or more computer programs 1604 include instructions, and the instructions may be used to perform the steps in the foregoing embodiments. All relevant content of the steps in the foregoing method embodiments may be cited in the functional descriptions of the corresponding physical components, and is not repeated here.
For example, processor 1602 may specifically be processor 110 shown in FIG. 4, memory 1603 may specifically be internal memory 121 shown in FIG. 4, display 1607 may specifically be display 194 shown in FIG. 4, and touch sensor 1606 may specifically be the touch sensor in sensor module 180 shown in FIG. 4, which is not limited in this embodiment of this application.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the foregoing functional modules is used as an example. In actual application, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium capable of storing program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any change or replacement within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (18)

  1. An audio control method, comprising:
    detecting, by an electronic device, an incoming call event or an outgoing call event;
    in response to the incoming call event or the outgoing call event, displaying, by the electronic device, a first audio control card corresponding to a first audio output device, wherein the first audio output device and the electronic device are located in a same network;
    receiving, by the electronic device, an audio control operation input by a user in the first audio control card; and
    in response to the audio control operation, sending, by the electronic device, a corresponding audio control instruction to the first audio output device.
  2. The method according to claim 1, wherein after the electronic device detects the incoming call event, the method further comprises:
    displaying, by the electronic device, an incoming call card corresponding to the incoming call event, wherein the first audio control card is located within the incoming call card; or,
    displaying, by the electronic device, an incoming call interface corresponding to the incoming call event, wherein the first audio control card is located within the incoming call interface.
  3. The method according to claim 1, wherein after the electronic device obtains the outgoing call event, the method further comprises:
    displaying, by the electronic device, a dialing interface corresponding to the outgoing call event, wherein the first audio control card is located in the dialing interface.
  4. The method according to any one of claims 1 to 3, wherein the first audio control card comprises a mute button; and
    when the audio control operation is tapping the mute button, the audio control instruction is a mute instruction.
  5. The method according to any one of claims 1 to 4, wherein the first audio control card comprises a pause button; and
    when the audio control operation is tapping the pause button, the audio control instruction is a pause instruction.
  6. The method according to any one of claims 1 to 5, wherein the first audio control card comprises a volume slider bar, and a slider on the volume slider bar is located at a first position, which indicates that the volume of the first audio output device is a first volume value; and
    when the audio control operation is moving the slider to a second position on the volume slider bar, the audio control instruction is to adjust the volume of the first audio output device to a second volume value corresponding to the second position.
  7. The method according to any one of claims 1 to 6, wherein the first audio control card comprises one or more of a device identifier of the first audio output device, a device name, a name of an audio/video file being played, or a playback progress.
  8. The method according to any one of claims 1 to 7, wherein after the electronic device detects the incoming call event or the outgoing call event, the method further comprises:
    obtaining, by the electronic device, a sounding state of each of N audio output devices in the network, wherein N is an integer greater than 0; and
    determining, by the electronic device based on the sounding state of each audio output device, the audio output devices that are making sound, wherein the audio output devices that are making sound comprise the first audio output device.
  9. The method according to claim 8, wherein the first audio output device is the audio output device with the highest volume among the N audio output devices.
  10. The method according to any one of claims 8 to 9, wherein when the audio output devices making sound in the network further comprise a second audio output device, the method further comprises:
    displaying, by the electronic device, an expand button, wherein the expand button is located within the first audio control card or around the first audio control card; and
    in response to an operation of the user selecting the expand button, displaying, by the electronic device, a sounding device list, wherein the sounding device list comprises the first audio control card and a second audio control card corresponding to the second audio output device.
  11. The method according to claim 10, wherein the sounding device list further comprises a first batch management button; and the method further comprises:
    in response to an operation of the user selecting the first batch management button, sending, by the electronic device, a mute instruction to both the first audio output device and the second audio output device.
  12. The method according to claim 10, wherein the sounding device list further comprises a second batch management button; and the method further comprises:
    in response to an operation of the user selecting the second batch management button, sending, by the electronic device, a pause instruction to both the first audio output device and the second audio output device.
  13. The method according to any one of claims 1 to 12, wherein after the electronic device displays the first audio control card corresponding to the first audio output device, the method further comprises:
    if it is detected that the electronic device connects the call request corresponding to the incoming call event or the outgoing call event, continuing to display, by the electronic device, the first audio control card, or hiding the first audio control card.
  14. The method according to claim 13, wherein the continuing to display, by the electronic device, the first audio control card comprises:
    displaying, by the electronic device, the first audio control card in a call interface corresponding to the call request; and/or,
    displaying, by the electronic device, the first audio control card in a notification center.
  15. The method according to any one of claims 1 to 14, wherein after the electronic device receives the audio control operation input by the user in the first audio control card, the method further comprises:
    continuing to display, by the electronic device, the first audio control card; or hiding, by the electronic device, the first audio control card.
  16. An electronic device, comprising:
    a touchscreen, wherein the touchscreen comprises a touch sensor and a display;
    one or more processors; and
    a memory;
    wherein the memory stores one or more computer programs, the one or more computer programs comprise instructions, and when the instructions are executed by the electronic device, the electronic device is caused to perform the audio control method according to any one of claims 1 to 15.
  17. A computer-readable storage medium storing instructions, wherein when the instructions are run on an electronic device, the electronic device is caused to perform the audio control method according to any one of claims 1 to 15.
  18. A computer program product, wherein when the computer program product runs on an electronic device, the electronic device is caused to perform the audio control method according to any one of claims 1 to 15.
PCT/CN2022/112778 2021-08-25 2022-08-16 Audio control method and electronic device WO2023024973A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110982720.2A CN115729508A (zh) 2021-08-25 2021-08-25 Audio control method and electronic device
CN202110982720.2 2021-08-25

Publications (1)

Publication Number Publication Date
WO2023024973A1 true WO2023024973A1 (zh) 2023-03-02

Family

ID=85289653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112778 WO2023024973A1 (zh) 2021-08-25 2022-08-16 一种音频控制方法及电子设备

Country Status (2)

Country Link
CN (1) CN115729508A (zh)
WO (1) WO2023024973A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101562671A (zh) * 2008-04-18 2009-10-21 鸿富锦精密工业(深圳)有限公司 Volume control method of audio device and communication apparatus
CN102043395A (zh) * 2009-10-26 2011-05-04 孙振峰 Control apparatus and method for muting appliances in an integrated home appliance system
WO2012172857A1 (ja) * 2011-06-14 2012-12-20 シャープ株式会社 System, television receiver, information terminal, control method, program, and recording medium
CN103546616A (zh) * 2013-09-30 2014-01-29 深圳市同洲电子股份有限公司 Volume adjustment method and apparatus
CN103796071A (zh) * 2013-11-06 2014-05-14 四川长虹电器股份有限公司 Interaction method between a smartphone and smart home appliances
CN104867296A (zh) * 2014-02-24 2015-08-26 联想(北京)有限公司 Volume adjustment method and apparatus
CN113542488A (zh) * 2021-06-30 2021-10-22 青岛海信移动通信技术股份有限公司 Method, device, and storage medium for controlling a controlled terminal using a terminal device

Also Published As

Publication number Publication date
CN115729508A (zh) 2023-03-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22860319

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE