CN116405593B - Audio processing method and related device - Google Patents

Audio processing method and related device

Info

Publication number
CN116405593B
CN116405593B (application CN202310670910.XA)
Authority
CN
China
Prior art keywords
preset
mute
audio signal
call
message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310670910.XA
Other languages
Chinese (zh)
Other versions
CN116405593A (en)
Inventor
蒋成虎
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310670910.XA priority Critical patent/CN116405593B/en
Publication of CN116405593A publication Critical patent/CN116405593A/en
Application granted granted Critical
Publication of CN116405593B publication Critical patent/CN116405593B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72463 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)

Abstract

The embodiments of the present application provide an audio processing method and a related device, relating to the field of terminal technologies. The method includes: at a first moment, the electronic device is in a call and, in response to a first operation of blocking a sound pickup hole, turns on a call mute function; at a second moment, later than the first, the first operation has ended, the call has not ended, and the electronic device remains in the call mute state; at a third moment, later than the second, the call has not ended and the electronic device, in response to a second operation of blocking the pickup hole, turns off the call mute function. In this way, no extra sensor needs to be added to the electronic device, so its cost does not increase, and the call mute function is implemented through the microphone device, making it more convenient and quicker to mute a call during a conversation.

Description

Audio processing method and related device
Technical Field
The application relates to the technical field of terminals, in particular to an audio processing method and a related device.
Background
During a call, the user may hold the electronic device to the ear. When the user needs to turn on the call mute function, the user must first move the device away from the ear and, after the screen lights up, trigger the mute control on the screen so that the other party of the call can no longer receive the user's voice.
In addition, if the call interface is running in the background of the electronic device, the user must also switch it to the foreground to perform the mute operation; or, if the call interface is showing the dial pad, the user must first switch back to the call interface to select mute.
As a result, turning on the call mute function requires a cumbersome sequence of operations, and the user cannot mute the call quickly.
Disclosure of Invention
In the audio processing method and related device provided by the embodiments of the application, the electronic device can analyze the audio signal collected by the microphone device and determine whether the pickup hole of the microphone device is blocked; when the pickup hole is blocked, the electronic device can turn on the call mute function. In this way, no extra sensor needs to be added to the electronic device, so its cost does not increase, and the call mute function is implemented through the microphone device, making it more convenient and quicker to mute a call during a conversation.
In a first aspect, an audio processing method provided by an embodiment of the present application includes:
At a first moment, the electronic device is in a call and, in response to a first operation of blocking a sound pickup hole, turns on a call mute function; at a second moment, later than the first, the first operation has ended, the call has not ended, and the electronic device remains in the call mute state; at a third moment, later than the second, the call has not ended and the electronic device, in response to a second operation of blocking the pickup hole, turns off the call mute function. In this way, no extra sensor needs to be added to the electronic device, so its cost does not increase, and the call mute function is implemented through the microphone device, making it more convenient and quicker to mute a call during a conversation.
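The three moments of the first aspect amount to a simple toggle: each blocking operation detected during an ongoing call flips the mute state, while releasing the pickup hole leaves it unchanged. A minimal sketch of that behavior (the class and method names are illustrative, not taken from the patent):

```python
class CallMuteController:
    """Toggles call mute each time a pickup-hole blocking operation is detected."""

    def __init__(self):
        self.in_call = False
        self.muted = False

    def on_call_established(self):
        # Per the patent, every new call starts in the not-muted state.
        self.in_call = True
        self.muted = False

    def on_block_detected(self):
        # First moment: mute turns on; third moment: mute turns off.
        # Releasing the pickup hole (second moment) does not invoke this,
        # so the mute state persists between blocking operations.
        if self.in_call:
            self.muted = not self.muted
        return self.muted

    def on_call_ended(self):
        self.in_call = False
```

Blocking once mutes the call, the state persists after the hole is released, and blocking again restores normal audio.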
In a possible implementation, the call is triggered by a target application of the electronic device, the electronic device includes a first preset module configured to determine whether the pickup hole is blocked, and before the call mute function is turned on, the method further includes: the microphone device reports the collected audio signal to the first preset module; the first preset module determines, according to the audio signal, that the pickup hole is blocked, and reports a message indicating that the pickup hole is blocked to the target application. Turning on the call mute function includes: the target application turns on the call mute function based on the message indicating that the pickup hole is blocked. In this way, the target application can implement the call mute function according to the blocked-microphone message from the first preset module, so that the user can mute the call conveniently and quickly, improving user experience.
In a possible implementation, the electronic device further includes a second preset module configured to mute or unmute the audio signal, and the target application turning on the call mute function based on the message indicating that the pickup hole is blocked includes: when the target application obtains the message indicating that the pickup hole is blocked, the target application issues a mute message to the application framework layer; the application framework layer issues the mute message to the hardware abstraction layer; the hardware abstraction layer issues the mute message to the second preset module; and the second preset module mutes the audio signal. In this way, the second preset module can mute the audio signal to be sent according to the mute message issued by the target application, so that the user's voice cannot be heard by the other party.
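The layered hand-off described here can be pictured as each layer forwarding the same message downward until the module that actually processes audio receives it. A rough sketch under assumed, simplified interfaces (the layer classes and method names are hypothetical):

```python
class SecondPresetModule:
    """Mutes or unmutes outgoing audio frames (the patent runs this on the ADSP)."""

    def __init__(self):
        self.muted = False

    def handle(self, msg):
        if msg == "mute":
            self.muted = True
        elif msg == "unmute":
            self.muted = False

    def process(self, frame):
        # While muted, outgoing samples are zeroed so the far end hears nothing.
        return [0.0] * len(frame) if self.muted else frame


class PassThroughLayer:
    """Stands in for the application framework layer and the hardware
    abstraction layer: each simply forwards the message to the layer below."""

    def __init__(self, lower):
        self.lower = lower

    def handle(self, msg):
        self.lower.handle(msg)


adsp_module = SecondPresetModule()
hal = PassThroughLayer(adsp_module)     # hardware abstraction layer
framework = PassThroughLayer(hal)       # application framework layer

framework.handle("mute")                # issued by the target application
print(adsp_module.process([0.3, -0.2]))  # -> [0.0, 0.0]
```

Issuing "unmute" through the same chain restores pass-through processing of the audio frames.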
In a possible implementation, before the call mute function is turned off, the method further includes: the microphone device reports the collected audio signal to the first preset module; the first preset module determines, according to the audio signal, that the pickup hole is blocked, and reports a message indicating that the pickup hole is blocked to the target application. Turning off the call mute function includes: the target application turns off the call mute function based on the message indicating that the pickup hole is blocked. In this way, the target application can turn off call mute according to the blocked-microphone message from the first preset module, so that the user can resume the normal call conveniently and quickly, improving user experience.
In a possible implementation, the target application turning off the call mute function based on the message indicating that the pickup hole is blocked includes: when the target application obtains the message indicating that the pickup hole is blocked, the target application issues an unmute message to the application framework layer; the application framework layer issues the unmute message to the hardware abstraction layer; the hardware abstraction layer issues the unmute message to the second preset module; and the second preset module unmutes the audio signal. In this way, the second preset module can resume sending the audio signal normally according to the unmute message issued by the target application, so that the user's voice can be heard by the other party.
In a possible implementation, the first preset module determining, according to the audio signal, that the pickup hole is blocked includes: the first preset module determines the energy value of the Nth frame of the audio signal and the energy value of a preset frequency band in the Nth frame; when the energy value of the Nth frame is smaller than a first preset value, and the energy value of the preset frequency band in the Nth frame has decreased by at least a second preset value compared with that in the (N-1)th frame, the first preset module determines that the pickup hole is blocked. The number of microphone devices is 1, and the microphone device collects one frame of audio signal every first preset duration. In this way, whether the microphone is blocked can be determined from the energy value of the audio signal when the electronic device has only one microphone, so that the mute function can be turned on or off.
In a possible implementation, determining that the pickup hole is blocked under the above conditions includes: within a second preset duration, when the energy values of the audio signals all remain smaller than the first preset value, the first preset module determines that the pickup hole is blocked; the Nth frame of the audio signal is an audio signal within the second preset duration. In this way, by evaluating the energy of the audio signal over the second preset duration, a false blocking event caused by the user accidentally touching the pickup hole of the microphone device can be avoided, so that the mute or unmute function is triggered more accurately.
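The single-microphone criterion can be sketched as follows. The thresholds, the sample rate, the preset band, and the interpretation of the band comparison as a frame-to-frame energy drop are illustrative assumptions; the patent leaves all of these unspecified:

```python
import math

def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def band_energy(frame, sample_rate, lo_hz, hi_hz):
    """Energy in [lo_hz, hi_hz) via a naive DFT (illustrative, not efficient)."""
    n = len(frame)
    total = 0.0
    for k in range(n // 2):
        freq = k * sample_rate / n
        if lo_hz <= freq < hi_hz:
            re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n
    return total

def is_blocked(frames, t1, t2, sample_rate=8000, band=(2000, 4000)):
    """Blocked if, across the whole window (the "second preset duration"),
    every frame's energy stays below t1 (the first preset value) and the
    preset-band energy drops by at least t2 (the second preset value)
    from each frame to the next."""
    if len(frames) < 2:
        return False
    if any(frame_energy(f) >= t1 for f in frames):
        return False
    bands = [band_energy(f, sample_rate, *band) for f in frames]
    return all(prev - cur >= t2 for prev, cur in zip(bands, bands[1:]))
```

Blocking the pickup hole both lowers the overall level and attenuates the preset band, which is why both conditions must hold over the whole window before mute is toggled.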
In a possible implementation, the first preset module determining, according to the audio signal, that the pickup hole is blocked includes: the first preset module determines the energy value of the audio signal of each microphone device; when the energy value of the audio signal of one microphone device is smaller than a third preset value, is smaller than the energy values of the audio signals of the other microphone devices, and is lower than each of them by at least a fourth preset value, the first preset module determines that the pickup hole is blocked; the number of microphone devices is greater than 1. In this way, whether a microphone is blocked can be determined from the energy values of the audio signals when the electronic device has multiple microphones, so that the mute function can be turned on or off.
In a possible implementation, determining that the pickup hole is blocked under the above conditions includes: within a third preset duration, when the energy value of the audio signal of one microphone device remains smaller than the third preset value, smaller than the energy values of the audio signals of the other microphone devices, and lower than each of them by at least the fourth preset value, the first preset module determines that the pickup hole is blocked. In this way, by evaluating the energy of the audio signals over the third preset duration, a false blocking event caused by the user accidentally touching the pickup hole of the microphone device can be avoided, so that the mute or unmute function is triggered more accurately.
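With several microphones, the criterion compares energies across devices rather than across frames. A sketch under assumed threshold values (the function names and the "same microphone across the whole window" debounce interpretation are illustrative):

```python
def blocked_mic_index(energies, t3, t4):
    """Return the index of a blocked microphone, or None.

    A microphone is judged blocked when its energy is below t3 (the third
    preset value), it is the minimum among all microphones, and every other
    microphone exceeds it by at least t4 (the fourth preset value)."""
    lowest = min(range(len(energies)), key=lambda i: energies[i])
    e = energies[lowest]
    if e >= t3:
        return None
    others = [x for i, x in enumerate(energies) if i != lowest]
    if all(x - e >= t4 for x in others):
        return lowest
    return None

def is_blocked_over_window(windows, t3, t4):
    """Debounce over the "third preset duration": the same microphone must
    appear blocked in every per-frame energy reading of the window."""
    hits = [blocked_mic_index(w, t3, t4) for w in windows]
    return hits[0] is not None and all(h == hits[0] for h in hits)
```

Because a blocked pickup hole starves only one microphone of sound while the others keep picking up the environment, the large energy gap distinguishes blocking from overall quiet.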
In a possible implementation, the first preset module and the second preset module both run in the audio digital signal processor (ADSP) of the electronic device; when a call is established, the two modules start running, and when the call ends, they stop running. In this way, the power consumption of the electronic device is reduced and battery life is improved.
In a possible implementation, the electronic device also vibrates when the call mute function is turned on at the first moment, and vibrates again when the call mute function is turned off at the third moment. In this way, the user is reminded whether the device is currently in the mute or unmute state without having to keep the pickup hole blocked, improving user experience.
In a possible implementation, before the electronic device vibrates when the call mute function is turned on, the method further includes: the motor driver module in the driver layer of the electronic device obtains the mute message from the hardware abstraction layer, and the motor driver module instructs the motor device to vibrate. In this way, the user is reminded that the device has entered the mute state, improving user experience.
In a possible implementation, before the electronic device vibrates when the call mute function is turned off, the method further includes: the motor driver module in the driver layer of the electronic device obtains the unmute message from the hardware abstraction layer, and the motor driver module instructs the motor device to vibrate. In this way, the user is reminded that the device has returned to the unmute state and a normal call can resume, improving user experience.
In a possible implementation, the target application sets the mute state identifier to an identifier indicating not-muted when the call is established. In this way, every call defaults to the unmuted state when established, so the other party can hear the user's voice and the normal call is not affected.
In a possible implementation, before the target application issues the mute message to the application framework layer, the target application sets the mute state identifier to an identifier indicating muted; before the target application issues the unmute message to the application framework layer, the target application sets the mute state identifier to an identifier indicating not-muted. In this way, the target application toggles the mute state identifier, so that when the next blocked-microphone message arrives it can accurately decide whether to issue a mute message or an unmute message, keeping the identifier consistent with the user's behavior.
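The flag handling keeps the stored state consistent with the action just taken: the identifier is flipped before the matching message is issued, so the next blocked-microphone event reads the correct state. A small sketch (class names and the framework interface are hypothetical):

```python
MUTED, NOT_MUTED = "muted", "not_muted"

class TargetApp:
    """Keeps the mute state identifier consistent with the messages it issues."""

    def __init__(self, framework):
        self.framework = framework      # any object with a send(msg) method
        self.mute_state = NOT_MUTED

    def on_call_established(self):
        # Every call starts with the identifier set to not-muted.
        self.mute_state = NOT_MUTED

    def on_mic_blocked_message(self):
        # Flip the identifier first, then issue the matching message downward.
        if self.mute_state == NOT_MUTED:
            self.mute_state = MUTED
            self.framework.send("mute")
        else:
            self.mute_state = NOT_MUTED
            self.framework.send("unmute")
```

Two consecutive blocked-microphone messages therefore produce a mute followed by an unmute, matching the first-moment/third-moment behavior of the first aspect.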
In a second aspect, an embodiment of the present application provides an apparatus for audio processing, where the apparatus may be an electronic device, or may be a chip or a chip system in the electronic device. The apparatus may include a processing unit. The processing unit is configured to implement any method related to processing performed by the electronic device in the first aspect or any of the possible implementations of the first aspect. When the apparatus is an electronic device, the processing unit may be a processor. The apparatus may further comprise a storage unit, which may be a memory. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the electronic device implements the method described in the first aspect or any one of the possible implementation manners of the first aspect. When the apparatus is a chip or a system of chips within an electronic device, the processing unit may be a processor. The processing unit executes instructions stored by the storage unit to cause the electronic device to implement the method described in the first aspect or any one of the possible implementations of the first aspect. The memory unit may be a memory unit (e.g., a register, a cache, etc.) within the chip, or a memory unit (e.g., a read-only memory, a random access memory, etc.) within the electronic device that is external to the chip.
The processing unit is configured to conduct the call and to turn on the call mute function.
In a possible implementation, the processing unit is configured to report the collected audio signal to the first preset module, to determine according to the audio signal that the pickup hole is blocked, to report a message indicating that the pickup hole is blocked to the target application, and to turn on the call mute function based on that message.
In a possible implementation, the processing unit is configured to issue a mute message to the application framework layer, to issue the mute message to the hardware abstraction layer, to issue the mute message to the second preset module, and to mute the audio signal.
In a possible implementation, the processing unit is configured to report the collected audio signal to the first preset module, to determine according to the audio signal that the pickup hole is blocked, to report a message indicating that the pickup hole is blocked to the target application, and to turn off the call mute function based on that message.
In a possible implementation, the processing unit is configured to issue an unmute message to the application framework layer, to issue the unmute message to the hardware abstraction layer, to issue the unmute message to the second preset module, and to unmute the audio signal.
In a possible implementation manner, the processing unit is configured to determine an energy value of the nth frame of audio signal, and an energy value of a preset frequency band in the nth frame of audio signal, and is further configured to determine that the pickup hole is blocked.
In a possible implementation manner, the processing unit is configured to determine that the pick-up hole is blocked within a second preset duration.
In a possible implementation, the processing unit is configured to determine an energy value of the audio signal of each microphone device and is further configured to determine that the pick-up hole is blocked.
In a possible implementation manner, the processing unit is configured to determine that the pick-up hole is blocked within a third preset duration.
In a possible implementation manner, the processing unit is configured to start the operation of the first preset module and the second preset module, and is further configured to stop the operation of the first preset module and the second preset module.
In a possible implementation, the processing unit is configured to vibrate.
In a possible implementation, the processing unit is configured to obtain the mute message from the hardware abstraction layer, and is further configured to vibrate.
In a possible implementation, the processing unit is configured to obtain the unmute message from the hardware abstraction layer, and is further configured to vibrate.
In a possible implementation manner, the processing unit is configured to set the mute state identifier to an identifier for indicating non-mute.
In a possible implementation manner, the processing unit is configured to set the mute state identifier as an identifier for indicating mute, and is further configured to set the mute state identifier as an identifier for indicating non-mute.
In a third aspect, an embodiment of the present application provides a terminal device, including a processor and a memory, the memory being configured to store code instructions, the processor being configured to execute the code instructions to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein a computer program or instructions which, when run on a computer, cause the computer to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a chip or chip system comprising at least one processor and a communication interface, the communication interface and the at least one processor being interconnected by wires, the at least one processor being adapted to execute a computer program or instructions to perform the method described in the first aspect or any one of the possible implementations of the first aspect. The communication interface in the chip can be an input/output interface, a pin, a circuit or the like.
In one possible implementation, the chip or chip system described above further includes at least one memory, where the at least one memory has instructions stored therein. The memory may be a memory unit within the chip, such as a register, a cache, etc., or may be a memory unit of the chip (e.g., a read-only memory, a random access memory, etc.).
It should be understood that, the second aspect to the sixth aspect of the present application correspond to the technical solutions of the first aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic software structure of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a module interaction process according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an audio processing method according to an embodiment of the present application;
fig. 5 is a schematic diagram of an audio processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
To describe the technical solutions of the embodiments of the present application clearly, the following briefly introduces some terms and techniques involved in the embodiments:
1. Terminology
In embodiments of the present application, the words "first," "second," and the like are used to distinguish between identical or similar items that have substantially the same function and effect. For example, the first chip and the second chip are merely used to distinguish different chips, without limiting their order. Those skilled in the art will appreciate that words such as "first" and "second" do not limit quantity or execution order, nor do they imply that the items are necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" a list of items means any combination of those items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural.
During a call, the user may hold the electronic device to the ear. When the user needs to turn on the call mute function, the user must first move the device away from the ear and, after the screen lights up, trigger the mute control on the screen so that the other party of the call can no longer receive the user's voice. In addition, if the call interface is running in the background of the electronic device, the user must also switch it to the foreground to perform the mute operation; or, if the call interface is showing the dial pad, the user must first switch back to the call interface to select mute.
As a result, the operations for turning on the call mute function during a call are cumbersome, and the user cannot turn it on quickly and conveniently.
In some implementations, a sensor may be added to the electronic device to trigger the call mute function by recognizing the user's motion, gesture, distance, and the like.
For example, in one possible implementation, an inertial sensor may be added to the electronic device. During a call, when the inertial sensor recognizes certain user actions, it can report a mute message to the electronic device so that the device implements the call mute function; these actions may include, for example, shaking or flipping the electronic device.
In another possible implementation, an infrared sensor can be added to the electronic device. During a call, the infrared sensor detects the distance between the electronic device and the user's ear in real time, and when this distance exceeds a preset value, the infrared sensor reports a mute message to the electronic device so that the device implements the call mute function.
However, triggering the call mute function by recognizing the user's motion, posture, distance, and the like through an inertial sensor, an infrared sensor, or similar components requires adding extra sensors to the electronic device, which increases its cost. Moreover, during a call the inertial or infrared sensor must remain on for real-time monitoring, which increases the power consumption of the electronic device and reduces battery life.
In view of this, in the audio processing method provided by the embodiments of the application, the electronic device can analyze the audio signal collected by the microphone device to determine whether the pickup hole of the microphone device is blocked, and when the pickup hole is blocked, the electronic device can turn on the call mute function. In this way, no extra sensor needs to be added to the electronic device, so its cost does not increase, and the call mute function is implemented through the microphone device, making it more convenient and quicker to mute a call during a conversation.
It may be understood that the electronic device in the embodiments of the present application may be any form of terminal device; for example, it may include a handheld device, an in-vehicle device, and the like. Some examples of electronic devices are: a mobile phone, a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (mobile internet device, MID), a wearable device, a virtual reality (virtual reality, VR) device, an augmented reality (augmented reality, AR) device, a wireless terminal in industrial control (industrial control), a wireless terminal in self driving (self driving), a wireless terminal in remote medical surgery (remote medical surgery), a wireless terminal in a smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in a smart city (smart city), a wireless terminal in a smart home (smart home), a cellular phone, a cordless phone, a session initiation protocol (session initiation protocol, SIP) phone, a wireless local loop (wireless local loop, WLL) station, a personal digital assistant (personal digital assistant, PDA), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, an electronic device in a 5G network, an electronic device in a future evolved public land mobile network (public land mobile network, PLMN), and the like, which is not limited in the present application.
By way of example and not limitation, in the embodiments of the present application, the electronic device may also be a wearable device. A wearable device, also called a wearable smart device, is a general term for everyday wearables, such as glasses, gloves, watches, clothing, and shoes, that are intelligently designed and developed by applying wearable technology. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories. It is not merely a hardware device; it delivers powerful functions through software support, data interaction, and cloud interaction. Broadly speaking, wearable smart devices include full-featured, large-sized devices that can implement complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as smartphones, for example various smart bracelets and smart jewelry for physical sign monitoring.
In addition, in the embodiments of the present application, the electronic device may also be an electronic device in an internet of things (internet of things, IoT) system. IoT is an important component of future information technology development; its main technical feature is connecting things to a network through communication technology, thereby realizing an intelligent network of human-machine interconnection and interconnection of things.
The electronic device in the embodiment of the application may also be referred to as: a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user equipment, etc.
In the embodiments of the present application, the electronic device or each network device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system layer. The hardware layer includes hardware such as a central processing unit (central processing unit, CPU), a memory management unit (memory management unit, MMU), and a memory (also referred to as a main memory). The operating system may be any one or more computer operating systems that implement business processing through processes (processes), such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software.
By way of example, fig. 1 shows a schematic diagram of an electronic device.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the application, the electronic device may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, among others.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied on an electronic device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the electromagnetic waves to a modem for demodulation. For example, in the embodiment of the present application, the mobile communication module 150 may send the signal output by the modem to a device such as a base station, or may receive the signal of the device such as the base station, and send the signal to the modem for demodulation.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. for application on an electronic device.
The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via an antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it into electromagnetic waves for radiation via the antenna. For example, in the embodiment of the present application, the wireless communication module 160 may send the signal output by the modem to a device such as a base station, or may receive a signal from a device such as a base station and send it to the modem for demodulation.
The electronic device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as audio playback or recording, etc.
The audio module 170 is used to convert a digital audio signal into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also referred to as a "horn," is used to convert an audio electrical signal into a sound signal; the electronic device may include 1 or N speakers 170A, where N is a positive integer greater than 1. The electronic device may play music or video, or hold a hands-free call, through the speaker 170A. The receiver 170B, also referred to as an "earpiece," is used to convert an audio electrical signal into a sound signal. When the electronic device answers a phone call or a voice message, the voice can be heard by placing the receiver 170B close to the ear. The microphone 170C, also known as a "mic" or "mike," is used to convert a sound signal into an electrical signal. The earphone interface 170D is used to connect a wired earphone. For example, in the embodiment of the present application, in a voice call scenario, the speaker 170A may convert the voice signal received by the electronic device into a sound signal so that the information sent by the other party can be played, and the microphone 170C may convert the user's voice into an electrical signal and transmit it to the other party, so that the two parties can hold a voice call.
Fig. 2 is a software structure block diagram of an electronic device according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor; the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: an application layer, an application framework layer, the Android runtime (Android runtime) and system libraries, a hardware abstraction layer, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application packages may include applications such as phone, camera, and video. Applications may include system applications and third-party applications.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a resource manager, a notification manager, and so on. In the embodiment of the present application, the application framework layer can issue mute or cancel-mute messages, report blocking messages, and the like.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine if there is a status bar, lock screen, touch screen, drag screen, intercept screen, etc.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window, for example prompting a text message in the status bar, sounding a prompt tone, vibrating the electronic device, or flashing an indicator light.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.

The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection. For example, in the embodiment of the present application, the virtual machine may be used to perform functions such as call mic-blockage detection, uplink or downlink voice processing, and the issuing of mute or cancel-mute messages.
The system library may include a plurality of functional modules. For example: media libraries (media libraries), function libraries (function libraries), graphics processing libraries (e.g., openGL ES), etc.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The function library provides multiple service API interfaces for the developer, and is convenient for the developer to integrate and realize various functions quickly.
The graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The hardware abstraction layer is an abstraction layer between the kernel layer and the Android runtime. The hardware abstraction layer may be an encapsulation of hardware drivers that provides a unified interface for upper-layer applications to call. In the embodiment of the present application, the hardware abstraction layer can issue mute or cancel-mute messages to the hardware drivers, and can report blocking messages to the application framework layer. The kernel layer is the layer between hardware and software. The kernel layer may include a motor driver, an audio driver, a display driver, and the like.
It should be noted that the embodiments of the present application are illustrated using the Android system only as an example; in other operating systems (such as a Windows system, an iOS system, etc.), the scheme of the present application can be implemented as long as the functions implemented by each functional module are similar to those in the embodiments of the present application.
The following describes an interaction process between each module in combination with the audio processing method according to the embodiment of the present application, taking a call process as an example.
As shown in fig. 3, during a call, a microphone device of the electronic device may be in an operating state to continuously collect sound from the surrounding environment. The microphone device may also be referred to as a mic device, and the sound collected by the mic device may also be referred to as an audio signal. The mic device can report the collected audio signal to a call mic-blockage detection module of an audio digital signal processor (audio digital signal processor, ADSP), and the mic-blockage detection module can detect whether a hole-blocking event occurs at the pickup hole of the mic device.

When the user wants to mute, the user can block the pickup hole of the mic device. At this time, the mic-blockage detection module can analyze the audio signal collected by the mic device, and after detecting that the pickup hole of the mic device is blocked, it can report a blocking message to the hardware abstraction layer. The hardware abstraction layer may report the blocking message to the application framework layer, which in turn may report it to a call application in the application layer; for example, the call application may be a phone application. For convenience of description, the phone application is used as an example below.
After the telephone application acquires the blocking message, a mute function may be triggered, for example, a button of the telephone application interface may be displayed in a mute state. The telephony application may also issue a mute message to the application framework layer, which may issue the mute message to the hardware abstraction layer.
After the hardware abstraction layer acquires the mute message, on one hand it can issue the mute message to the motor driver module of the kernel layer, and the motor driver module can trigger the motor device in the hardware to vibrate, prompting the user that the mute state has been entered. After receiving the vibration prompt, the user no longer needs to keep blocking the pickup hole of the mic device.
On the other hand, the hardware abstraction layer may issue the mute message to the mute or cancel mute flow processing module, and the mute or cancel mute flow processing module may set the audio signal collected by the mic device to be an audio signal in a mute state. The uplink voice processing module can transmit the audio signal in the mute state to the uplink processing module of the modem, and the uplink processing module can send the audio signal in the mute state to equipment such as a base station, so that the other end of the call can receive the audio signal in the mute state, thereby achieving the effect of mute call.
When the user wants to cancel mute, the user can block the pickup hole of the mic device again. At this time, the mic-blockage detection module can analyze the audio signal collected by the mic device, and after detecting that a hole-blocking event has occurred at the pickup hole of the mic device, it can report a blocking message to the hardware abstraction layer. The hardware abstraction layer may report the blocking message to the application framework layer, and the application framework layer may report it to the phone application in the application layer.
After the telephone application acquires the blocking message, the telephone application may trigger a cancel mute function, e.g., a button displayed in a non-mute state on the telephone application interface. The telephony application may also issue an unmute message to the application framework layer, which may issue the unmute message to the hardware abstraction layer.
After the hardware abstraction layer acquires the cancel-mute message, on one hand it can issue the cancel-mute message to the motor driver module of the kernel layer, and the motor driver module can trigger the motor device in the hardware to vibrate, prompting the user that the mute state has been exited. After receiving the vibration prompt, the user no longer needs to keep blocking the pickup hole of the mic device.
On the other hand, the hardware abstraction layer may issue the cancel-mute message to the mute/cancel-mute flow processing module, which no longer sets a mute state on the audio signal collected by the mic device. After the uplink voice processing module applies automatic gain control, noise reduction, echo cancellation, and other processing to the audio signal collected by the mic device, the processed audio signal can be transmitted to the uplink processing module of the modem, and the uplink processing module can send it to a device such as a base station, so that the other end of the call receives the audio signal and the call proceeds normally.
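The effect of the mute/cancel-mute flow processing on the uplink audio path can be sketched as follows. This is a minimal illustration only: the function name is hypothetical, frames are modeled as plain lists of normalized samples, and the real uplink path would also apply gain control, noise reduction, and echo cancellation as described above.

```python
def process_uplink_frame(frame, muted):
    """Uplink-path sketch: when the call is muted, replace the mic frame
    with silence so the far end receives an audio signal in the mute
    state; otherwise pass the collected frame on toward the modem."""
    if muted:
        return [0.0] * len(frame)  # audio signal in the mute state
    return list(frame)             # normal (non-muted) audio signal
```

For example, a frame `[0.2, -0.3]` becomes `[0.0, 0.0]` while muted and passes through unchanged otherwise.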
It can be understood that, in the call process, the downlink processing module of the modem of the electronic device may receive a data packet sent by a device such as a base station, and send the data packet to the downlink voice processing module of the ADSP, where the downlink voice processing module may parse the data packet to obtain an audio signal, and convert the audio signal into a sound signal through a speaker in a hardware device for playing.
Fig. 4 shows a flow chart of an audio processing method according to an embodiment of the application.
S401, establishing a call.
S402, the telephone application sets the mute state to false.
After the call is established, the phone application may set the initial mute state to unmuted. The mute state may be identified by a field of character-string type; for example, the mute state may be identified by a "mute" field, and its value may be of a data type such as character string, integer, or boolean. For example, the mute state may be boolean: set to true to indicate that the call is currently muted, and set to false to indicate that it is currently not muted. Alternatively, the mute state may be an integer: set to 1 to indicate muted and 0 to indicate not muted, or set to 0 to indicate muted and 1 to indicate not muted. It may be appreciated that the field identifier and the specific value of the mute state may be set by the electronic device, which is not limited by the embodiment of the present application.
For convenience of description, the mute state true represents mute, and the mute state false represents non-mute will be described as an example.
It will be appreciated that each time a new call is established, the telephony application may reset the initial mute state to an unmuted state, i.e., each time a new call is established, it defaults to an unmuted state, which may enable the opposite party to hear the user's voice, thereby not affecting the normal call.
S403, enabling a call module.
After the call is established, enabling the call module means that modules in the ADSP such as the mic-blockage detection module, the uplink voice processing module, and the downlink voice processing module can start to operate. For example, the mic-blockage detection module can detect whether a hole-blocking event occurs according to the audio signal collected by the mic device, the uplink voice processing module can perform automatic gain control, noise reduction, echo cancellation, and other processing, and the downlink voice processing module can parse data packets into audio signals, and so on.
S404, the mic-blockage detection module detects whether the pickup hole is blocked.
It will be appreciated that the electronic device may include one or more mic devices, each of which may collect a frame of audio signal every preset time period T1; for example, the preset time period T1 may be 10 milliseconds. Each frame of audio signal may include a high-frequency-band signal, for example a signal in the 2 kHz to 4 kHz band; the value range of the high-frequency band may differ between electronic devices.
The case where one or more mic devices are present in the electronic apparatus can be explained separately below.
(1) A mic device is present in the electronic device.
After acquiring each frame of audio signal collected by the mic device, the mic-blockage detection module can calculate the energy value corresponding to each frame of audio signal and the energy value corresponding to the high-frequency-band signal in each frame. The mic-blockage detection module can then determine whether a hole-blocking event has occurred according to the energy value of the audio signal and the attenuation of the energy value of the high-frequency-band signal.
In a possible implementation, the energy value of the audio signal may be represented by the root mean square (root mean square, RMS) value of the audio signal. For example, taking K sampled values x1, x2, ..., xK within the preset time period T1 as an example, the energy value RMS of the audio signal may satisfy the following formula:

RMS = ((x1^2 + x2^2 + ... + xK^2) / K)^0.5

where K is a positive integer greater than or equal to 1; for example, K may take the value 80, which is not limited by the embodiment of the present application.
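The RMS formula above can be computed per frame as follows. This is an illustrative sketch, assuming samples normalized to [-1, 1] and expressing the result in dB relative to full scale (the patent does not fix a reference level); the function name is hypothetical.

```python
import math

def frame_rms_db(samples):
    """Energy of one audio frame: RMS = ((x1^2 + ... + xK^2) / K)^0.5,
    converted to dB relative to full scale."""
    k = len(samples)
    rms = math.sqrt(sum(x * x for x in samples) / k)
    # Guard against log(0) for an all-zero (fully silent) frame.
    return 20 * math.log10(max(rms, 1e-12))
```

For K = 80 full-scale samples the result is 0 dB; halving the amplitude lowers the energy by about 6 dB.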
Taking the currently collected n-th frame of audio signal as an example: when the mic-blockage detection module calculates that the energy value of the n-th frame is lower than the preset threshold A, and the energy value of the high-frequency-band signal in the n-th frame has attenuated, relative to that in the previous (n-1)-th frame, by at least the preset attenuation threshold B, the mic-blockage detection module can determine that a hole-blocking event has occurred at the pickup hole of the mic device, and step S405 can be executed; otherwise, the mic-blockage detection module may determine that no hole-blocking event has occurred at the pickup hole of the mic device, and step S409 may be executed.
The preset threshold A and the preset attenuation threshold B may be preset by the electronic device; for example, the preset threshold A may be about 15 dB and the preset attenuation threshold B about 30 dB. The specific values of the preset threshold A and the preset attenuation threshold B are not limited by the embodiment of the present application.
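The single-mic decision above can be sketched as a simple predicate. The function name is hypothetical, all energies are assumed to be in dB, and the default thresholds are only the example values given in the text (about 15 dB for A, about 30 dB for B):

```python
def hole_blocked_single_mic(frame_db, highband_db, prev_highband_db,
                            threshold_a=15.0, attenuation_b=30.0):
    """Single-mic sketch: a hole-blocking event is detected when the
    n-th frame's energy is below preset threshold A AND its high-band
    (e.g. 2-4 kHz) energy has attenuated by at least preset attenuation
    threshold B relative to the (n-1)-th frame."""
    return (frame_db < threshold_a
            and prev_highband_db - highband_db >= attenuation_b)
```

For instance, a frame at 10 dB whose high band dropped from 40 dB to 5 dB satisfies both conditions; a frame at 20 dB does not, regardless of the high-band attenuation.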
(2) There are a plurality of mic devices in an electronic device.
It can be understood that the plurality of mic devices can be located at any position of the bottom, the top, the side, the back, etc. of the electronic device, and the position of each of the plurality of mic devices in the electronic device is not limited in the embodiment of the present application.
The mic-blockage detection module can acquire the audio signal collected by each mic device. If the energy value of the audio signal of one mic device is lower than the preset threshold C, and at the same time is smaller than the energy values of the audio signals of the other mic devices, with the difference being greater than or equal to the preset energy difference D, the mic-blockage detection module can determine that a hole-blocking event has occurred at the pickup hole of that mic device, and step S405 can be executed; otherwise, the mic-blockage detection module may determine that no hole-blocking event has occurred at the pickup hole of the mic device, and step S409 may be executed.
The preset threshold C and the preset energy difference D may be preset by the electronic device; for example, the preset threshold C may be about 15 dB and the preset energy difference D about 15 dB. The specific values of the preset threshold C and the preset energy difference D are not limited by the embodiment of the present application. It is understood that the preset threshold C may or may not equal the preset threshold A, and the preset threshold C may or may not equal the preset energy difference D.
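The multi-mic decision above can be sketched similarly. The function name is hypothetical, energies are assumed to be per-mic frame energies in dB, and the defaults are the example values from the text (about 15 dB for both C and D):

```python
def blocked_mic_index(energies_db, threshold_c=15.0, energy_diff_d=15.0):
    """Multi-mic sketch: a mic is judged blocked when its frame energy is
    below preset threshold C AND is lower than every other mic's energy
    by at least the preset energy difference D. Returns the index of the
    blocked mic, or None if no hole-blocking event is detected."""
    for i, e in enumerate(energies_db):
        others = [x for j, x in enumerate(energies_db) if j != i]
        if e < threshold_c and all(o - e >= energy_diff_d for o in others):
            return i
    return None
```

With three mics at 5 dB, 40 dB, and 42 dB, the first mic is judged blocked; with two mics at 30 dB and 40 dB, no event is reported.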
It will be appreciated that, for the single-mic and multi-mic scenarios described above, the mic-blockage detection module may calculate energy values over a preset time period T2. In a possible implementation, the mic-blockage detection module may calculate the energy values of M consecutive frames of the audio signal; assuming the frames are 10 milliseconds apart, the preset time period T2 is M × 10 milliseconds. For example, M may take the value 30, that is, the module calculates the energy values of 30 consecutive frames, and the preset time period T2 is 300 milliseconds.

In this way, by calculating energy values over the preset time period T2, the mic-blockage detection module can prevent an accidental touch of the pickup hole of the mic device from being mistaken for a hole-blocking event, so that the mute and unmute functions are realized more accurately.
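The T2 window can be sketched as a simple debouncer that only reports a hole-blocking event once the per-frame decision has held for M consecutive frames. The class name is hypothetical and the reset-after-report behavior is one plausible reading of the text, not a mandated design:

```python
class BlockDebouncer:
    """Sketch of the T2 window: report a hole-blocking event only when
    the per-frame blocked decision holds for M consecutive frames
    (e.g. M = 30 frames of 10 ms each, i.e. T2 = 300 ms), filtering out
    brief accidental touches of the pickup hole."""

    def __init__(self, m=30):
        self.m = m
        self.run = 0  # current run of consecutive blocked frames

    def update(self, frame_blocked: bool) -> bool:
        self.run = self.run + 1 if frame_blocked else 0
        if self.run >= self.m:
            self.run = 0  # report once, then restart the window
            return True
        return False
```

With `m=3`, three consecutive blocked frames produce `[False, False, True]`, and a single blocked frame is ignored.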
It will be appreciated that the user may also trigger the mute or unmute function by blocking the mic device a number of times at intervals; for example, the user may block the mic device twice in succession. The electronic device can analyze, from the audio signal collected by the mic device, whether an event of two spaced blockages has occurred, thereby muting or unmuting the call; the specific way in which the user blocks the mic is not limited.
S405, the mic-blockage detection module sends a blocking message.

After determining that the pickup hole of the mic device is blocked, the mic-blockage detection module can send a blocking message to the hardware abstraction layer. The hardware abstraction layer may report the blocking message to the application framework layer, and the application framework layer may report it to the phone application in the application layer.
In a possible implementation, the blocking message may be identified by a field of a character string type; illustratively, the blocking message may be identified by a "block" field, and its value may be of a data type such as character string, integer, or boolean. For example, when the value of the block field is 1, it may indicate that the pickup hole of the mic device is blocked, and when the value of the block field is 0, it may indicate that the pickup hole of the mic device is not blocked; alternatively, when the value of the block field is 0, it may indicate that the pickup hole is blocked, and when the value is 1, it may indicate that the pickup hole is not blocked. The specific field identifier and value of the blocking message are not limited in the embodiment of the present application.
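A minimal sketch of such a blocking message, assuming the first encoding described above (a "block" field whose integer value 1 means the pickup hole is blocked and 0 means it is not); the function names are hypothetical:

```python
# Hypothetical encoding of the blocking message: a string-typed "block" field
# whose integer value 1 means the pickup hole is blocked, 0 means it is not.

def make_block_message(is_hole_blocked):
    return {"block": 1 if is_hole_blocked else 0}

def is_blocked(message):
    # Treat a missing field as "not blocked"
    return message.get("block", 0) == 1

assert is_blocked(make_block_message(True))
assert not is_blocked(make_block_message(False))
```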
S406, call mute or non-mute processing.
After the phone application of the application layer acquires the blocking message, it may determine, according to the current mute state, whether to issue a mute message or a cancel-mute message. For example, if the current mute state is non-mute, for example, the value is false, the phone application may issue a mute message to the application framework layer after receiving the blocking message; if the current mute state is mute, for example, the value is true, the phone application may issue a cancel-mute message to the application framework layer after receiving the blocking message.
After the application framework layer obtains the mute or cancel-mute message, it may issue the message to the hardware abstraction layer; for the specific execution flow of the hardware abstraction layer, reference may be made to the related description in the embodiment corresponding to fig. 3, which is not repeated.
S407, reversing the mute state of the call.
It can be understood that, in the foregoing step S406, if the current mute state is non-mute and the value is false, the phone application may set the current mute state to true when issuing the mute message; if the current mute state is mute and the value is true, the phone application may set the current mute state to false when issuing the cancel-mute message.
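The toggle behavior of steps S402, S406, and S407 can be sketched as follows; the class and method names are hypothetical, and the strings stand in for the mute and cancel-mute messages:

```python
# A minimal sketch of the mute-state toggle: on each blocking message the
# phone application issues the message matching its current state, then
# inverts the state (hypothetical names).

class PhoneApp:
    def __init__(self):
        self.muted = False   # S402: each call starts unmuted (value false)
        self.sent = []       # messages issued to the application framework layer

    def on_block_message(self):
        if not self.muted:
            self.sent.append("mute")         # S406: issue mute message
        else:
            self.sent.append("cancel_mute")  # S406: issue cancel-mute message
        self.muted = not self.muted          # S407: invert the mute state

app = PhoneApp()
app.on_block_message()   # first blocking event -> mute
app.on_block_message()   # second blocking event -> cancel mute
assert app.sent == ["mute", "cancel_mute"]
assert app.muted is False
```

Keeping the state inversion next to the message issue is what keeps the stored mute state consistent with what the user actually did, as step S407 requires.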
S408, vibration prompting.
After the hardware abstraction layer acquires the mute message, it may issue the mute message to the motor driving module of the kernel layer, and the motor driving module may trigger the motor device in the hardware device to vibrate, so as to prompt the user that the call has now entered the mute state.
After the hardware abstraction layer acquires the cancel-mute message, it may issue the cancel-mute message to the motor driving module of the kernel layer, and the motor driving module may trigger the motor device in the hardware device to vibrate, so as to prompt the user that the call has now entered the non-mute state.
It can be understood that the electronic device may also prompt the user that the call has entered the mute state or the non-mute state by means of a prompt tone; the specific manner of prompting the user is not limited in the embodiment of the present application.
S409, determining whether the call is hung up.
When the call is hung up, step S410 may be performed; when the call is not hung up, step S404 may be executed, and the call blocking detection module may continue to detect whether a blocking event occurs.
S410, ending the call.
After the call is ended, each module of the ADSP in the electronic device may stop running; for example, the call blocking detection module no longer detects call blocking events. In this way, the electronic device can reduce power consumption and extend battery endurance.
It will be appreciated that the embodiments of the present application may be applied to various calls or scenarios using mic devices, such as voice calls, video calls, voice over internet protocol (VoIP) calls, audio recording, video recording, speech recognition, and the like. The embodiment of the present application may also apply the hole blocking event of the mic device to functions such as hardware device switching, volume adjustment, answering and hanging up calls, pausing and resuming audio or video recording, and starting or closing a voice assistant, which are not limited.
Taking the scenario of switching hardware devices as an example, when the electronic device detects a hole blocking event of the mic device, the electronic device can switch between the speaker and the earphone device, so as to switch between the hands-free function and the earphone function. Alternatively, taking the scenario of volume adjustment as an example, when the electronic device detects a hole blocking event of the mic device, the volume can be increased or decreased.
The method according to the embodiment of the present application will be described in detail by way of specific examples. The following embodiments may be combined with each other or implemented independently, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 5 shows an audio processing method according to an embodiment of the present application. The method comprises the following steps:
S501, at a first moment, the electronic device is in a call, and the electronic device starts the call mute function in response to a first operation for blocking the pickup hole.
In the embodiment of the present application, the first time may be understood as a time when the electronic device starts the call mute function.
The call may include a voice call, a video call, a VoIP call, etc., and embodiments of the present application are not limited.
The first operation may be an operation of blocking the sound pickup hole by a user, or may be other operations of blocking the sound pickup hole, which is not limited in the embodiment of the present application.
S502, at a second moment, the first operation is terminated, the call is not terminated, and the electronic equipment is in a call mute state, wherein the second moment is later than the first moment.
In the embodiment of the present application, the second time may be understood as a time when the first operation is terminated.
S503, at a third moment, when the call is not ended, the electronic device turns off the call mute function in response to a second operation for blocking the pickup hole, wherein the third moment is later than the second moment.
In the embodiment of the present application, the third moment may be understood as a moment when the electronic device turns off the call mute function.
The second operation may be an operation of blocking the sound pickup hole by the user, or may be other operations of blocking the sound pickup hole, which is not limited in the embodiment of the present application. It will be appreciated that the first and second operations may be the same or different.
When the pick-up hole of the microphone device is blocked, the electronic equipment can start or close the call mute function. Therefore, the electronic equipment can realize the call mute function through the microphone device, so that the call mute function is realized more conveniently and rapidly in the call process.
Optionally, on the basis of the embodiment corresponding to fig. 5, the call is triggered by a target application of the electronic device, and the electronic device includes a first preset module configured to determine whether the pickup hole is blocked. Before starting the call mute function, the method may further include: the microphone device reports the collected audio signal to the first preset module; and the first preset module determines, according to the audio signal, that the pickup hole is blocked, and reports a message for indicating that the pickup hole is blocked to the target application. Starting the call mute function includes: the target application starts the call mute function based on the message for indicating that the pickup hole is blocked.
In the embodiment of the present application, the target application may be understood as an application capable of performing a call, for example, the target application may be a phone application in the embodiment corresponding to fig. 3.
The first preset module may be understood as a call blocking detection module in the embodiment corresponding to fig. 3.
The message for indicating that the pick-up hole is blocked may be understood as a blocking message in the embodiment corresponding to fig. 3, and the blocking message may specifically refer to the description related to step S405 in the embodiment corresponding to fig. 4, which is not repeated.
The target application may refer to the related description in the embodiments corresponding to fig. 3 and fig. 4, and will not be described again.
The target application can implement the call mute function according to the blocking message of the first preset module, so that the user can conveniently and quickly mute the call, thereby improving user experience.
Optionally, on the basis of the embodiment corresponding to fig. 5, the electronic device further includes a second preset module configured to perform mute or cancel-mute processing on the audio signal, and the target application starting the call mute function based on the message for indicating that the pickup hole is blocked may include: when the target application acquires the message for indicating that the pickup hole is blocked, the target application issues a mute message to the application framework layer; the application framework layer issues the mute message to the hardware abstraction layer; the hardware abstraction layer issues the mute message to the second preset module; and the second preset module performs mute processing on the audio signal.
In the embodiment of the present application, the second preset module may be understood as a mute or cancel mute flow processing module in the embodiment corresponding to fig. 3. The mute processing of the audio signal by the second preset module may refer to the related description in the embodiment corresponding to fig. 3, which is not repeated.
The second preset module can mute the audio signal to be sent according to the mute message issued by the target application, so that the other party cannot hear the user's voice.
Optionally, on the basis of the embodiment corresponding to fig. 5, before the call mute function is turned off, the method may further include: the microphone device reports the collected audio signal to the first preset module; and the first preset module determines, according to the audio signal, that the pickup hole is blocked, and reports a message for indicating that the pickup hole is blocked to the target application. Turning off the call mute function includes: the target application turns off the call mute function based on the message for indicating that the pickup hole is blocked.
In the embodiment of the present application, for the target application and the message for indicating that the pickup hole is blocked, reference may be made to the related descriptions in the embodiments corresponding to fig. 3 and fig. 4, which are not repeated.
The target application can turn off the call mute function according to the blocking message of the first preset module, so that the user can conveniently and quickly resume the normal call, thereby improving user experience.
Optionally, on the basis of the embodiment corresponding to fig. 5, the target application turning off the call mute function based on the message for indicating that the pickup hole is blocked may include: when the target application acquires the message for indicating that the pickup hole is blocked, the target application issues a cancel-mute message to the application framework layer; the application framework layer issues the cancel-mute message to the hardware abstraction layer; the hardware abstraction layer issues the cancel-mute message to the second preset module; and the second preset module performs cancel-mute processing on the audio signal.
In the embodiment of the present application, the second preset module may refer to the related description in the embodiment corresponding to fig. 3, and will not be repeated. The second preset module can normally send audio signals according to the mute canceling message issued by the target application, so that the speaking voice of the user can be heard by the other party.
Optionally, on the basis of the embodiment corresponding to fig. 5, the first preset module determining, according to the audio signal, that the pickup hole is blocked may include: the first preset module determines the energy value of the N-th frame audio signal and the energy value of a preset frequency band in the N-th frame audio signal; when the energy value of the N-th frame audio signal is smaller than a first preset value, and the energy value of the preset frequency band in the N-th frame audio signal is attenuated by more than a second preset value compared with the energy value of the preset frequency band in the (N-1)-th frame audio signal, the first preset module determines that the pickup hole is blocked; the number of microphone devices is 1, and the microphone device collects one frame of audio signal every first preset duration.
In the embodiment of the present application, the calculation of the energy value of the audio signal may refer to the related description in step S404 in the embodiment corresponding to fig. 4, which is not repeated.
The first preset value may be understood as the preset threshold value a in the embodiment corresponding to fig. 4, and the specific value of the preset threshold value a may refer to the related description in step S404 of the embodiment corresponding to fig. 4, which is not repeated.
The preset frequency band may be understood as the high-frequency band signal in step S404 in the embodiment corresponding to fig. 4, which is not described herein.
The second preset value may be understood as the preset attenuation threshold B in the embodiment corresponding to fig. 4, and the specific value of the preset attenuation threshold B may refer to the related description in step S404 of the embodiment corresponding to fig. 4, which is not repeated.
The first preset duration may be understood as the preset time T1 in step S404 in the embodiment corresponding to fig. 4, which is not described herein.
According to the energy value of the audio signal, whether microphone blockage occurs can be determined in the case that the electronic device has only one microphone, so that the mute function is implemented or cancelled.
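A sketch of the single-microphone decision, under assumed values for the first preset value (preset threshold A), the second preset value (preset attenuation threshold B), and the preset (high) frequency band; the dB formulation, the 16 kHz sampling rate, and the 10 ms frame are illustrative assumptions:

```python
import numpy as np

FS = 16000           # assumed sampling rate
PRESET_A_DB = 15.0   # first preset value (preset threshold A); assumed
PRESET_B_DB = 15.0   # second preset value (preset attenuation threshold B); assumed
HIGH_BAND_HZ = 2000  # assumed lower edge of the preset (high) frequency band

def energy_db(frame):
    """Full-band energy of one frame in dB (illustrative formulation)."""
    frame = np.asarray(frame, dtype=np.float64)
    return 10.0 * np.log10(np.sum(frame ** 2) + 1e-12)

def band_energy_db(frame, lo_hz, fs=FS):
    """Energy of the frame restricted to frequencies >= lo_hz, in dB."""
    frame = np.asarray(frame, dtype=np.float64)
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return 10.0 * np.log10(np.sum(np.abs(spec[freqs >= lo_hz]) ** 2) + 1e-12)

def blocked_single_mic(frame_n, frame_n_1):
    """Frame N indicates blocking if its energy is below the first preset
    value AND its high-band energy dropped by more than the second preset
    value relative to frame N-1."""
    low_energy = energy_db(frame_n) < PRESET_A_DB
    attenuation = (band_energy_db(frame_n_1, HIGH_BAND_HZ)
                   - band_energy_db(frame_n, HIGH_BAND_HZ))
    return low_energy and attenuation > PRESET_B_DB

# Frame N-1: strong high-frequency content; frame N: the same content damped
t = np.arange(160) / FS                       # one 10 ms frame
prev = 0.5 * np.sin(2 * np.pi * 4000 * t)     # open pickup hole
curr = 1e-4 * np.sin(2 * np.pi * 4000 * t)    # blocked: heavily attenuated
assert blocked_single_mic(curr, prev)
```

Requiring both a low absolute energy and a sharp high-band drop distinguishes a blocked hole from a merely quiet room, where the energy is low in both frames but does not fall abruptly.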
Optionally, on the basis of the embodiment corresponding to fig. 5, the first preset module determining that the pickup hole is blocked when the energy value of the N-th frame audio signal is smaller than the first preset value, and the energy value of the preset frequency band in the N-th frame audio signal is attenuated by more than the second preset value compared with the energy value of the preset frequency band in the (N-1)-th frame audio signal, may include: within a second preset duration, when the energy values of the audio signals are all smaller than the first preset value, the first preset module determines that the pickup hole is blocked; the N-th frame audio signal is an audio signal within the second preset duration.
In the embodiment of the present application, the second preset duration may be understood as the preset time T2 in step S404 in the embodiment corresponding to fig. 4, which is not described herein.
By calculating the energy value of the audio signal in the second preset duration, the situation that the user mistakenly touches the pick-up hole of the mic device to cause a hole blocking event can be prevented, and therefore the mute or non-mute function can be achieved more accurately.
Optionally, on the basis of the embodiment corresponding to fig. 5, the first preset module determining, according to the audio signal, that the pickup hole is blocked may include: the first preset module determines the energy value of the audio signal of each microphone device; when the energy value of the audio signal of one microphone device is smaller than a third preset value, the energy value of the audio signal of the one microphone device is smaller than the energy values of the audio signals of the other microphone devices, and the difference between the energy value of the audio signal of each of the other microphone devices and the energy value of the audio signal of the one microphone device is greater than or equal to a fourth preset value, the first preset module determines that the pickup hole is blocked; wherein the number of microphone devices is greater than 1.
In the embodiment of the present application, the third preset value may be understood as the preset threshold C in the embodiment corresponding to fig. 4, and the specific value of the preset threshold C may refer to the related description in step S404 in the embodiment corresponding to fig. 4, which is not repeated.
The fourth preset value may be understood as the preset energy difference value D in the embodiment corresponding to fig. 4, and the specific preset energy difference value D may be referred to the related description in step S404 of the embodiment corresponding to fig. 4, which is not repeated.
According to the energy values of the audio signals, whether microphone blockage occurs can be determined in the case that the electronic device has a plurality of microphones, so that the mute function is implemented or cancelled.
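A sketch of the multi-microphone decision, under assumed values for the third preset value (preset threshold C) and the fourth preset value (preset energy difference D); the dB formulation is an illustrative assumption:

```python
import numpy as np

PRESET_C_DB = 15.0   # third preset value (preset threshold C); assumed
PRESET_D_DB = 15.0   # fourth preset value (preset energy difference D); assumed

def energy_db(frame):
    """Energy of one frame in dB (illustrative formulation)."""
    frame = np.asarray(frame, dtype=np.float64)
    return 10.0 * np.log10(np.sum(frame ** 2) + 1e-12)

def blocked_multi_mic(frames_per_mic):
    """Flags a blocking event when one mic's energy is below the third preset
    value, is strictly the lowest among all mics, and trails every other mic
    by at least the fourth preset value."""
    energies = [energy_db(f) for f in frames_per_mic]
    i = int(np.argmin(energies))
    others = [e for j, e in enumerate(energies) if j != i]
    return (energies[i] < PRESET_C_DB
            and all(energies[i] < e for e in others)
            and all(e - energies[i] >= PRESET_D_DB for e in others))

rng = np.random.default_rng(1)
blocked = rng.normal(0.0, 1e-4, 160)   # the blocked mic: very low level
open1 = rng.normal(0.0, 0.1, 160)      # unblocked mics: normal level
open2 = rng.normal(0.0, 0.1, 160)
assert blocked_multi_mic([blocked, open1, open2])    # one mic clearly blocked
assert not blocked_multi_mic([open1, open2, open1])  # no mic stands out
```

Comparing the candidate mic against every other mic, rather than using an absolute threshold alone, prevents a quiet environment, where all mics read low, from being mistaken for a blocking event.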
Optionally, on the basis of the embodiment corresponding to fig. 5, the first preset module determining that the pickup hole is blocked under the foregoing conditions may include: within a third preset duration, when the energy value of the audio signal of the one microphone device is smaller than the third preset value, the energy value of the audio signal of the one microphone device is smaller than the energy values of the audio signals of the other microphone devices, and the difference between the energy value of the audio signal of each of the other microphone devices and the energy value of the audio signal of the one microphone device is greater than or equal to the fourth preset value, the first preset module determines that the pickup hole is blocked.
In the embodiment of the present application, the third preset duration may be understood as the preset time T2 in step S404 in the embodiment corresponding to fig. 4, which is not described herein. It is understood that the third preset time period may be the same as or different from the second preset time period.
By calculating the energy value of the audio signal in the third preset duration, the situation that the user mistakenly touches the pick-up hole of the mic device to cause a hole blocking event can be prevented, and therefore the mute or non-mute function can be achieved more accurately.
Optionally, on the basis of the embodiment corresponding to fig. 5, the first preset module and the second preset module both run in an audio digital signal processor (ADSP) in the electronic device; when the call is established, the first preset module and the second preset module start running, and when the call is ended, the first preset module and the second preset module stop running.
In the embodiment of the present application, the first preset module and the second preset module run when the call is established and stop running when the call is ended, which can save power consumption of the electronic device and extend battery endurance.
Optionally, on the basis of the embodiment corresponding to fig. 5, at the first moment, when the call mute function is started, the electronic device vibrates; at the third moment, when the call mute function is turned off, the electronic equipment also vibrates.
In the embodiment of the application, the electronic equipment can vibrate when the call mute function is started or closed, so that a user is reminded of whether the user is in a mute or non-mute state currently, and the user does not need to block the sound pick-up hole all the time, thereby improving the user experience.
Optionally, on the basis of the embodiment corresponding to fig. 5, when the call mute function is started, before the electronic device vibrates, the method may further include: a motor driving module of a driving layer of the electronic equipment obtains a mute message of a hardware abstraction layer; the motor drive module instructs the motor device to vibrate.
In the embodiment of the present application, the motor driving module can instruct the motor device to vibrate according to the mute message issued by the target application, thereby reminding the user that the call has entered the mute state, so that the user does not need to keep blocking the pickup hole, which improves user experience.
Optionally, on the basis of the embodiment corresponding to fig. 5, when the call silence function is turned off, before the electronic device vibrates, the method may further include: a motor driving module of a driving layer of the electronic equipment obtains a mute canceling message of a hardware abstraction layer; the motor drive module instructs the motor device to vibrate.
In the embodiment of the present application, the motor driving module can instruct the motor device to vibrate according to the cancel-mute message issued by the target application, thereby reminding the user that the call has entered the non-mute state and the normal call can be resumed, so that the user does not need to keep blocking the pickup hole, which improves user experience.
Optionally, on the basis of the embodiment corresponding to fig. 5, when the call is established, the target application sets the mute state identifier to an identifier for indicating non-mute.
In the embodiment of the present application, the mute state identifier may be understood as the mute state in the embodiment corresponding to fig. 4, and the specific mute state and the identifier for indicating non-mute may refer to the related description in step S402 in the embodiment corresponding to fig. 4, which is not repeated.
The target application can reset the initial mute state to an unmuted state each time a call is established, that is, defaults to an unmuted state each time a call is established, so that the opposite party can hear the user's voice, thereby not affecting the normal call.
Optionally, on the basis of the embodiment corresponding to fig. 5, when the target application acquires the message that the pickup hole is blocked and the mute state identifier indicates non-mute, the target application sets the mute state identifier to the identifier for indicating mute; when the target application acquires the message that the pickup hole is blocked and the mute state identifier indicates mute, the target application sets the mute state identifier to the identifier for indicating non-mute.
In the embodiment of the present application, the identifier for indicating silence and the identifier for indicating non-silence may refer to the related description in step S402 of the embodiment corresponding to fig. 4, which is not repeated.
The target application reverses the mute state identifier, so that when the blocking message is acquired next time, it can accurately determine whether to issue a mute message or a cancel-mute message, and the mute state identifier can remain consistent with the user's behavior.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of the method. To achieve the above functions, it includes corresponding hardware structures and/or software modules that perform the respective functions. Those skilled in the art will readily appreciate that the method steps of the examples described in connection with the embodiments disclosed herein may be implemented by hardware or by a combination of hardware and computer software. Whether a function is implemented as hardware or computer-software-driven hardware depends upon the particular application and design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the device for realizing the method according to the method example, for example, each functional module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Fig. 6 is a schematic structural diagram of a chip according to an embodiment of the present application. Chip 600 includes one or more (including two) processors 601, communication lines 602, communication interfaces 603, and memory 604.
In some implementations, the memory 604 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
The method described in the above embodiments of the present application may be applied to the processor 601 or implemented by the processor 601. The processor 601 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and the processor 601 may implement or perform the methods, steps, and logic diagrams associated with the various processes disclosed in the embodiments of the application.
The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, read-only memory, programmable read-only memory, or electrically erasable programmable read-only memory (EEPROM). The storage medium is located in the memory 604, and the processor 601 reads the information in the memory 604 and performs the steps of the above method in combination with its hardware.
The processor 601, the memory 604 and the communication interface 603 may communicate with each other via a communication line 602.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
Embodiments of the present application also provide a computer program product including one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be, for example, a semiconductor medium (e.g., a solid state disk (SSD)).
The embodiment of the application also provides a computer readable storage medium. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
As one possible design, the computer-readable medium may include compact disc read-only memory (CD-ROM) or other optical disk storage, as well as RAM, ROM, and EEPROM; the computer-readable medium may also include magnetic disk storage or other magnetic storage devices. Moreover, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (14)

1. An audio processing method, applied to an electronic device comprising a microphone device, the microphone device comprising a sound pickup hole, the method comprising:
at a first moment, when the electronic device is in a call, the electronic device starts a call mute function in response to a first operation of blocking the pickup hole;
at a second moment later than the first moment, the first operation has ended, the call has not ended, and the electronic device is in a call mute state;
at a third moment later than the second moment, the electronic device closes the call mute function in response to a second operation of blocking the pickup hole;
the call is triggered by a target application of the electronic device, the electronic device comprises a first preset module, and the first preset module is configured to determine whether the pickup hole is blocked; before the call mute function is started, the method further comprises:
the microphone device reports the collected audio signals to the first preset module;
the first preset module determines, according to the audio signal, that the pickup hole is blocked, and reports a message indicating that the pickup hole is blocked to the target application;
the starting of the call mute function comprises: the target application starts the call mute function based on the message indicating that the pickup hole is blocked;
the determining, by the first preset module according to the audio signal, that the pickup hole is blocked comprises:
the first preset module determines an energy value of an Nth frame audio signal and an energy value of a preset frequency band in the Nth frame audio signal;
when the energy value of the Nth frame audio signal is smaller than a first preset value, and the attenuation of the energy value of the preset frequency band in the Nth frame audio signal relative to the energy value of the preset frequency band in the (N-1)th frame audio signal is greater than or equal to a second preset value, the first preset module determines that the pickup hole is blocked;
wherein the number of microphone devices is 1, the microphone device collects one frame of audio signal every first preset duration, and the preset frequency band is a high frequency band;
or,
the determining, by the first preset module according to the audio signal, that the pickup hole is blocked comprises:
the first preset module determines an energy value of the audio signal of each microphone device;
when the energy value of the audio signal of one microphone device is smaller than a third preset value, the energy value of the audio signal of the one microphone device is smaller than the energy values of the audio signals of the other microphone devices, and the difference between the energy value of the audio signal of the one microphone device and the energy value of the audio signal of each other microphone device is greater than or equal to a fourth preset value, the first preset module determines that the pickup hole is blocked; wherein the number of microphone devices is greater than 1.
2. The method of claim 1, wherein the electronic device further comprises a second preset module configured to mute or unmute the audio signal, and the target application starting the call mute function based on the message indicating that the pickup hole is blocked comprises:
when the target application obtains the message indicating that the pickup hole is blocked, the target application issues a mute message to an application framework layer;
the application framework layer issues the mute message to a hardware abstraction layer;
the hardware abstraction layer issues the mute message to the second preset module;
and the second preset module performs mute processing on the audio signal.
3. The method of claim 1, wherein before the closing of the call mute function, the method further comprises:
the microphone device reports the collected audio signals to the first preset module;
the first preset module determines, according to the audio signal, that the pickup hole is blocked, and reports a message indicating that the pickup hole is blocked to the target application;
the closing of the call mute function comprises: the target application closes the call mute function based on the message indicating that the pickup hole is blocked.
4. The method of claim 3, wherein the target application closing the call mute function based on the message indicating that the pickup hole is blocked comprises:
when the target application obtains the message indicating that the pickup hole is blocked, the target application issues an unmute message to an application framework layer;
the application framework layer issues the unmute message to a hardware abstraction layer;
the hardware abstraction layer issues the unmute message to a second preset module;
and the second preset module performs unmute processing on the audio signal.
5. The method of claim 1, wherein the determining, by the first preset module, that the pickup hole is blocked when the energy value of the Nth frame audio signal is smaller than the first preset value and the attenuation of the energy value of the preset frequency band in the Nth frame audio signal relative to that in the (N-1)th frame audio signal is greater than or equal to the second preset value comprises:
within a second preset duration, when the energy values of all the audio signals are smaller than the first preset value, the first preset module determines that the pickup hole is blocked; wherein the Nth frame audio signal is an audio signal within the second preset duration.
6. The method of claim 1, wherein the determining, by the first preset module, that the pickup hole is blocked when the energy value of the audio signal of one microphone device is smaller than the third preset value, is smaller than the energy values of the audio signals of the other microphone devices, and the difference between the energy value of the audio signal of the one microphone device and the energy value of the audio signal of each other microphone device is greater than or equal to the fourth preset value comprises:
within a third preset duration, when the energy value of the audio signal of the one microphone device is smaller than the third preset value, is smaller than the energy values of the audio signals of the other microphone devices, and the difference between the energy value of the audio signal of the one microphone device and the energy value of the audio signal of each other microphone device is greater than or equal to the fourth preset value, the first preset module determines that the pickup hole is blocked.
7. The method of any one of claims 1-4, wherein the first preset module and the second preset module each run in an audio digital signal processor (ADSP) of the electronic device;
when the call is established, the first preset module and the second preset module start running;
and when the call ends, the first preset module and the second preset module stop running.
8. The method of claim 1, wherein:
at the first moment, when the call mute function is started, the electronic device vibrates;
and at the third moment, when the call mute function is closed, the electronic device vibrates again.
9. The method of claim 8, wherein before the electronic device vibrates when the call mute function is started, the method further comprises:
a motor driving module of a driving layer of the electronic device obtains the mute message from the hardware abstraction layer;
the motor driving module instructs a motor device to vibrate.
10. The method of claim 8 or 9, wherein before the electronic device vibrates when the call mute function is closed, the method further comprises:
a motor driving module of a driving layer of the electronic device obtains the unmute message from the hardware abstraction layer;
the motor driving module instructs a motor device to vibrate.
11. The method of any one of claims 1-4, wherein when the call is established, the target application sets a mute state identifier to an identifier indicating non-mute.
12. The method of claim 11, wherein:
before the target application issues a mute message to the application framework layer, the target application sets the mute state identifier to an identifier indicating mute;
and before the target application issues an unmute message to the application framework layer, the target application sets the mute state identifier to an identifier indicating non-mute.
13. An electronic device, comprising: a memory for storing a computer program, and a processor for executing the computer program to perform the method of any one of claims 1-12.
14. A computer-readable storage medium storing instructions that, when executed, cause a computer to perform the method of any one of claims 1-12.
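Claim 1 sets out two energy-based tests for deciding that the pickup hole is covered, and a toggle behavior in which each covering operation flips the call mute state. The Python sketch below is an illustrative reconstruction only, not the patented implementation: the function names, the per-frame (total energy, high-band energy) representation, and all threshold parameters are assumptions introduced for this example.

```python
# Illustrative sketch of the blockage tests in claims 1, 5 and 6.
# Frame energies are assumed to be precomputed; a real implementation
# would derive them from microphone PCM data (e.g. via an FFT for the
# high-frequency band). All names and thresholds are hypothetical.

def single_mic_blocked(frames, first_preset, second_preset):
    """Single-microphone test: frame N's total energy is below a first
    preset value AND the high-band energy has attenuated by at least a
    second preset value relative to frame N-1.

    frames: list of (total_energy, high_band_energy) tuples, one tuple
    per frame (one frame per first preset duration)."""
    for (_, prev_high), (total, cur_high) in zip(frames, frames[1:]):
        if total < first_preset and prev_high - cur_high >= second_preset:
            return True
    return False

def multi_mic_blocked(energies, third_preset, fourth_preset):
    """Multi-microphone test: one microphone's energy is below a third
    preset value, is the minimum among all microphones, and differs from
    every other microphone's energy by at least a fourth preset value."""
    idx = energies.index(min(energies))
    lowest = energies[idx]
    others = energies[:idx] + energies[idx + 1:]
    return lowest < third_preset and all(
        e - lowest >= fourth_preset for e in others
    )

class CallMuteToggle:
    """Each detected blockage event flips the call mute state: the first
    covering operation mutes the call, the next one unmutes it."""
    def __init__(self):
        self.muted = False

    def on_blockage_detected(self):
        self.muted = not self.muted
        return self.muted
```

Under this sketch, a frame whose total energy collapses while its high band attenuates sharply reports a blockage, and each reported blockage toggles the mute state, matching the first-operation/second-operation behavior of claim 1.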
CN202310670910.XA 2023-06-08 2023-06-08 Audio processing method and related device Active CN116405593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310670910.XA CN116405593B (en) 2023-06-08 2023-06-08 Audio processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310670910.XA CN116405593B (en) 2023-06-08 2023-06-08 Audio processing method and related device

Publications (2)

Publication Number Publication Date
CN116405593A (en) 2023-07-07
CN116405593B (en) 2023-10-20

Family

ID=87014657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310670910.XA Active CN116405593B (en) 2023-06-08 2023-06-08 Audio processing method and related device

Country Status (1)

Country Link
CN (1) CN116405593B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1848864A (en) * 2005-04-14 2006-10-18 中兴通讯股份有限公司 Method for adding silence function in mobile communication terminal
CN103546647A (en) * 2013-10-31 2014-01-29 宇龙计算机通信科技(深圳)有限公司 Method and device for communication through mobile phone
CN104737553A (en) * 2012-09-28 2015-06-24 诺基亚技术有限公司 Porous cover structures for mobile device audio
CN105430172A (en) * 2015-10-31 2016-03-23 惠州华阳通用电子有限公司 Automatic conversation control method and device
CN108650392A (en) * 2018-04-24 2018-10-12 维沃移动通信有限公司 A kind of call recording method and mobile terminal
CN111432323A (en) * 2020-04-01 2020-07-17 Oppo(重庆)智能科技有限公司 Microphone testing method and device of mobile terminal, electronic equipment and storage medium
CN217789862U (en) * 2022-04-26 2022-11-11 中国人民解放军63811部队 Sound collection device


Also Published As

Publication number Publication date
CN116405593A (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN111078448B (en) Method for processing audio abnormity and electronic equipment
CN109814767A (en) Message treatment method and relevant apparatus
WO2020006711A1 (en) Message playing method and terminal
US9769311B2 (en) Presence-based device mode modification
CN109062535A (en) Sounding control method, device, electronic device and computer-readable medium
CN104767860A (en) Incoming call prompt method and device and terminal
CN114079892A (en) Bluetooth communication method, wearable device and system
CN116471355B (en) Audio playing method and electronic equipment
CN115129196A (en) Application icon display method and terminal
CN114422710B (en) Video recording control method for electronic equipment, electronic equipment and readable medium
CN111274043B (en) Near field communication method, near field communication device, near field communication system, storage medium and electronic equipment
US9503556B2 (en) Handling voice calls
CN111104209B (en) Task processing method and related equipment
CN116017388B (en) Popup window display method based on audio service and electronic equipment
CN116405593B (en) Audio processing method and related device
CN108541370A (en) Export method, electronic equipment and the storage medium of audio
WO2022089563A1 (en) Sound enhancement method, earphone control method and apparatus, and earphone
US20230224398A1 (en) Audio output channel switching method and apparatus and electronic device
CN117714584A (en) Audio control method and related device
CN114173315B (en) Bluetooth reconnection method and terminal equipment
CN116744244B (en) Communication method, storage medium and electronic equipment
CN114449103B (en) Reminding method, graphical user interface and terminal
WO2024032035A9 (en) Voice signal output method and electronic device
CN116405589B (en) Sound processing method and related device
CN116489272B (en) Voice message playing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant