CN116456441B - Sound processing device, sound processing method and electronic equipment - Google Patents

Sound processing device, sound processing method and electronic equipment

Info

Publication number
CN116456441B
CN116456441B (application CN202310718953.0A)
Authority
CN
China
Prior art keywords
sound
detection processing
signal
processing unit
main control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310718953.0A
Other languages
Chinese (zh)
Other versions
CN116456441A (en)
Inventor
顾正明
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310718953.0A
Publication of CN116456441A
Application granted
Publication of CN116456441B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/02 Power saving arrangements
    • H04W52/0209 Power saving arrangements in terminal devices
    • H04W52/0261 Power saving arrangements in terminal devices managing power supply demand, e.g. depending on battery level
    • H04W52/0274 Power saving arrangements in terminal devices managing power supply demand, e.g. depending on battery level, by switching on or off the equipment or parts thereof
    • H04W52/028 Power saving arrangements in terminal devices managing power supply demand, e.g. depending on battery level, by switching on or off only a part of the equipment circuit blocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)

Abstract

The application discloses a sound processing apparatus applied to an electronic device. The sound processing apparatus includes a main control module and a sound detection processing module, and the sound detection processing module includes a first sound detection processing unit. The first sound detection processing unit receives a first sound signal, generates first information when it determines that the first sound signal includes a first type of sound signal, and sends the first information to the main control module; the main control module receives the first information and, in response to the first information, is in an awake state. In this way, the sound detection processing module outside the main control module either wakes the main control module from the sleep state or keeps it awake according to the sound detection result, so the power consumption of the main control module, and thus of the electronic device, is effectively reduced while the corresponding service can still be performed in time. The application also discloses a sound processing method and an electronic device.

Description

Sound processing device, sound processing method and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a sound processing apparatus, a sound processing method, and an electronic device.
Background
With the development of science and technology, sound technologies such as voice interaction and acoustic payment are widely used in electronic devices. Taking a mobile phone as an example, some mobile phones support services such as voice interaction and sound wave payment. Currently, in order to recognize a sound in time for the corresponding service, devices such as the system on chip (SOC) and the codec (Codec) in a mobile phone need to remain in an awake state (which may also be referred to as a power-on state) at all times. Because these devices are always awake, the mobile phone suffers from high power consumption.
Disclosure of Invention
The application provides a sound processing apparatus, a sound processing method, and an electronic device, which address the problem that some devices of an electronic device are always in an awake state, causing high power consumption of the electronic device. The power consumption of the device can be effectively reduced while the corresponding service is still performed in time, improving the user experience.
In order to solve the above technical problem, in a first aspect, an embodiment of the present application provides a sound processing apparatus applied to an electronic device. The sound processing apparatus includes a main control module and a sound detection processing module, and the sound detection processing module includes a first sound detection processing unit. The first sound detection processing unit is configured to receive a first sound signal, generate first information when it determines, according to the first sound signal, that the first sound signal includes a first type of sound signal, and send the first information to the main control module; the main control module is configured to receive the first information and, in response to the first information, be in an awake state.
In this implementation manner, the electronic device may be, for example, a mobile phone, or a device such as a computer, a watch, or an earphone, which may be selected and set as required. The main control module may be, for example, an SOC, the sound detection processing module may be, for example, a Codec, and the first sound signal may be, for example, a sound signal obtained after processing such as analog-to-digital conversion of the sound signal received by a microphone of the mobile phone or by a dedicated ultrasonic receiving device. The first type of sound signal may be, for example, an ultrasonic signal; that is, the present solution may be applied to ultrasound-based sound wave payment scenarios on, for example, a mobile phone. Of course, the first type of sound signal may also be an infrasound signal, a user sound signal corresponding to a certain user, or the like, which may be selected and set as needed. The first information may be, for example, an interrupt signal for adjusting the operation state of the SOC, where the operation state includes an awake state (which may also be referred to as a power-on state) and a sleep state (which may also be referred to as a power-down state).
Further, in this implementation manner, the sound detection processing module outside the main control module detects the sound signal, and the main control module is placed in the awake state when the detection result satisfies the preset condition (i.e., when the first sound signal includes the first type of sound signal). In this way, when there is no service to be performed (for example, when the first sound signal does not include the first type of sound signal), the main control module may remain in the sleep state; when there is a service to be performed (for example, when the first sound signal includes the first type of sound signal), the sound detection processing module wakes up the main control module through the first information, so that the main control module performs the corresponding service. This effectively reduces the power consumption of the main control module, and thus of the sound processing apparatus and the electronic device. Moreover, the main control module in the awake state can perform the corresponding service in time, improving the user experience. In addition, if the main control module is already awake when the first sound signal includes the first type of sound signal, the sound detection processing module can keep it in the awake state through the first information, so that the corresponding service is still performed in time.
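The wake-on-sound behavior described above can be sketched as follows. This is a minimal illustration only; the class names, the `is_first_type` detector, and modeling the "first information" as a method call are all assumptions for the sketch, not details from the patent:

```python
# Sketch of the wake-up flow: a detection module outside the main
# controller signals it only when the target sound type is present,
# so the controller can sleep the rest of the time.
# All names here are illustrative, not from the patent.

class MainControlModule:
    def __init__(self):
        self.state = "sleep"          # "sleep" or "awake"

    def on_first_information(self):
        # Wake up, or stay awake if already awake.
        self.state = "awake"

class SoundDetectionModule:
    def __init__(self, main_control, is_first_type):
        self.main_control = main_control
        self.is_first_type = is_first_type   # detector for the first type of sound

    def receive(self, sound_signal):
        if self.is_first_type(sound_signal):
            # "First information": here modeled as a direct method call.
            self.main_control.on_first_information()

soc = MainControlModule()
codec = SoundDetectionModule(soc, is_first_type=lambda s: s == "ultrasound")

codec.receive("speech")      # not the first type: the SOC stays asleep
state_after_speech = soc.state
codec.receive("ultrasound")  # first type detected: the SOC is woken
state_after_ultrasound = soc.state
```

The point of the design is visible in the sketch: the `MainControlModule` does no work at all until the cheaper detection module decides a relevant sound is present.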
In a possible implementation manner of the first aspect, the main control module and the sound detection processing module may be connected through an interface, or may be connected through other manners, and may be selected and set as required.
In a possible implementation manner of the first aspect, the sound detection processing module further includes a second sound detection processing unit, where the second sound detection processing unit is configured to receive the first sound signal and store the first sound signal; the main control module is also used for acquiring a first sound signal from the second sound detection processing unit under the condition of being in an awake state, and performing a first service according to the first sound signal.
Therefore, the sound detection processing module can store the first sound signal through the second sound detection processing unit, and send the first sound signal to the main control module for corresponding service processing after the main control module wakes up, so that the problem of sound signal loss can be avoided, and the normal processing of the service is ensured.
Further, the first service may be the ultrasonic-based sound wave payment service, or may be a voice interaction service, etc., which may be selected and set as required.
In a possible implementation manner of the first aspect, the first sound detection processing unit includes a first sound detection processing subunit and a second sound detection processing subunit, where the first sound detection processing subunit is configured to receive a first sound signal, obtain a first target sound signal according to the first sound signal, and send the first target sound signal to the second sound detection processing subunit, where the first target sound signal is a first frequency band signal in the first sound signal; the second sound detection processing subunit is configured to receive the first target sound signal, determine whether the first target sound signal includes a first type sound signal according to the first target sound signal, generate first information if the first sound signal includes the first type sound signal, and send the first information to the main control module.
In this way, the first sound detection processing subunit may screen the signal in the first frequency band from the first sound signal as the first target sound signal, so that the second sound detection processing subunit conveniently and accurately determines whether the first target sound signal is the first type sound signal or whether the first target sound signal includes the first type sound signal according to the first target sound signal.
The first frequency band may be, for example, the band above a preset frequency threshold, where the frequency threshold may be, for example, 20 kHz, corresponding to ultrasound. The first frequency band may also be a preset frequency range, for example, greater than 20 kHz and less than 2 MHz, which may be selected and set as required.
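The first-frequency-band screening can be illustrated with a naive DFT-based band filter (a sketch under assumptions: the patent only requires a high-pass or band-pass behavior, not any particular filter; the O(N²) DFT, the 96 kHz sample rate, and the `band_select` name are all illustrative):

```python
import cmath
import math

def band_select(samples, sample_rate_hz, low_hz=20_000.0, high_hz=None):
    """Keep only DFT bins in [low_hz, high_hz], zero the rest, inverse-DFT.

    A naive O(N^2) sketch of screening the 'first frequency band'
    (high-pass at 20 kHz for ultrasound). Thresholds are examples.
    """
    n = len(samples)
    spectrum = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    for k in range(n):
        freq = min(k, n - k) * sample_rate_hz / n   # two-sided bin frequency
        in_band = freq >= low_hz and (high_hz is None or freq <= high_hz)
        if not in_band:
            spectrum[k] = 0.0
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# A 1 kHz tone mixed with a 30 kHz tone, sampled at 96 kHz:
fs, n = 96_000, 96
mixed = [math.sin(2 * math.pi * 1_000 * t / fs)
         + math.sin(2 * math.pi * 30_000 * t / fs) for t in range(n)]
ultra_only = band_select(mixed, fs)   # high-pass at 20 kHz keeps only 30 kHz
```

After screening, only the ultrasonic component remains, which is exactly what lets the second subunit make its type decision on a clean first target sound signal.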
In a possible implementation of the first aspect, the first sound detection processing subunit and the second sound detection processing subunit may be electrically connected, for example, an output of the first sound detection processing subunit is electrically connected to an input of the second sound detection processing subunit. Of course, the first sound detection processing subunit and the second sound detection processing subunit may also be connected by other means, which may be selected and set as desired.
In a possible implementation of the first aspect, the first sound detection processing unit may also include only one of the first sound detection processing subunit and the second sound detection processing subunit, or may include other sound detection processing subunits or devices, which may be selected and arranged as required.
In a possible implementation of the first aspect, the first type of sound signal is an ultrasound signal, and the first sound detection processing unit comprises a high pass filter or a band pass filter.
In a possible implementation of the first aspect, the first type of sound signal may also be an infrasound signal, and the first sound detection processing unit includes a low pass filter. Or the first type of sound signal may be a sound signal corresponding to a certain user and having its sound characteristics, and the first sound detection processing unit may be a corresponding filter, which may be selected and set as needed.
In a possible implementation of the first aspect, the second sound detection processing subunit includes a sound detection algorithm, which may also be referred to as a sound type detection function or sound type detection algorithm, that calculates signal characteristic information of the sound signal, such as its gain, and determines the type of the sound signal from the calculation result.
In a possible implementation of the first aspect, the sound detection algorithm may be, for example, a gain processing algorithm or a gain processing function for calculating a gain of the sound signal, the gain processing function being for determining the gain of the first target sound signal and determining whether the first target sound signal comprises the first type of sound signal based on the gain.
The gain processing function calculates the corresponding gain from the sound signal output by, for example, the high-pass filter. If the obtained gain is greater than or equal to a preset gain threshold, the sound signal is considered to be, or to include, an ultrasonic signal, i.e., it is determined that the first sound signal includes the first type of sound signal; otherwise, it is determined that the first sound signal does not include the first type of sound signal. The gain threshold may be, for example, 30 dB, or another value, which may be selected and set as required.
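A minimal version of this threshold decision can be written down directly. Note the hedging: the patent does not define how the "gain" is computed, so the RMS-level-in-dB formulation and the `1e-5` reference below are assumptions; only the ≥ 30 dB comparison structure comes from the description:

```python
import math

GAIN_THRESHOLD_DB = 30.0   # example threshold from the description

def level_db(samples, reference=1e-5):
    """RMS level in dB relative to `reference` (both are illustrative choices)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12) / reference)

def includes_first_type(filtered_samples):
    """Decision of the second subunit: level of the high-pass output vs. threshold."""
    return level_db(filtered_samples) >= GAIN_THRESHOLD_DB

# Strong high-band content vs. near-silence after high-pass filtering:
loud = [0.5 * (-1) ** t for t in range(64)]
quiet = [1e-6 * (-1) ** t for t in range(64)]
```

With these toy inputs, `loud` clears the threshold comfortably while `quiet` falls well below it, so only `loud` would trigger the first information and wake the main control module.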
In a possible implementation manner of the first aspect, the second sound detection processing subunit may also include other types of sound detection algorithms, so as to implement recognition of a sound signal type in an application scenario, and obtain a corresponding sound signal type.
In a possible implementation of the first aspect, the second sound detection processing unit includes a first-in first-out (FIFO) memory. In this way, the sound signal can be conveniently stored, avoiding the problem of losing the sound signal.
Of course, the second sound detection processing unit may also include other types of memory devices, and may include other devices, which may be selected and set as desired.
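The buffering role of the FIFO can be sketched in a few lines (a sketch only: the `SoundFifo` class, the frame granularity, and the capacity of 3 frames are illustrative, not from the patent):

```python
from collections import deque

class SoundFifo:
    """First-in first-out buffer for sound frames; capacity is illustrative."""
    def __init__(self, capacity_frames=8):
        # With maxlen set, the oldest frame is dropped when the buffer is full.
        self._frames = deque(maxlen=capacity_frames)

    def store(self, frame):
        self._frames.append(frame)

    def drain(self):
        """Hand all buffered frames to the main controller after it wakes up."""
        frames = list(self._frames)
        self._frames.clear()
        return frames

fifo = SoundFifo(capacity_frames=3)
for i in range(5):
    fifo.store(f"frame{i}")
buffered = fifo.drain()   # the two oldest frames were overwritten
```

This mirrors why the second sound detection processing unit stores the first sound signal: the frames that arrive while the main control module is still waking up are held in the FIFO instead of being lost.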
In a possible implementation manner of the first aspect, the main control module and the sound detection processing module are connected through a first interface, and the first interface is used for transmitting the first information, that is, the main control module is used for receiving the first information through the first interface.
In a possible implementation manner of the first aspect, the first interface may be a general-purpose input/output interface, for example. Of course, the first interface may be another type of interface, which may be selected and set as desired.
In a possible implementation manner of the first aspect, the main control module and the sound detection processing module are connected through a second interface, and the second interface is used for transmitting the first sound signal, that is, the main control module is used for obtaining the first sound signal from the sound detection processing module through the second interface.
In a possible implementation of the first aspect described above, the second interface may be a serial audio interface, for example. Of course, the second interface may be another type of interface, which may be selected and set as desired.
In a possible implementation manner of the first aspect, the main control module and the sound detection processing module are connected through a first interface and a second interface, where the first interface is used to transmit the first information, and the second interface is used to transmit the first sound signal. Of course, the main control module and the sound detection processing module can be connected through other more or fewer interfaces, and can be selected and set according to requirements.
In a possible implementation of the first aspect, the main control module includes a first main control processing unit and a second main control processing unit, where the first main control processing unit is configured to receive the first information, and in response to the first information, be in an awake state, and send the second information to the second main control processing unit; the second main control processing unit is used for receiving the second information, responding to the second information, being in an awake state, and sending third information to the sound detection processing module so as to acquire the first sound signal from the sound detection processing module.
Therefore, after the main control module is in the wake-up state, the main control module can acquire the sound signal from the sound detection processing module so as to process corresponding service, ensure the normal processing of the service and effectively promote the user experience.
In a possible implementation of the first aspect, the first information may be the foregoing interrupt signal, the second information may be a notification signal for waking up the second main control processing unit, and the third information may be a request signal for acquiring a sound signal. The form and content of the first information, the second information, and the third information may be selected and set as desired.
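The three-step information flow (interrupt, wake-up notification, signal request) can be traced with a small sketch. The AP/DSP roles follow the examples given later in the description; the class names, the log strings, and modeling each signal as a method call are hypothetical:

```python
# Sketch of the information flow: first information (interrupt) wakes the
# first unit, second information wakes the second unit, third information
# requests the buffered sound signal from the detection module.

log = []

class DetectionModule:                # e.g. the codec
    def read_signal(self):
        log.append("third information received")
        return [0.1, 0.2, 0.3]        # the stored first sound signal

class SecondMainControlUnit:          # e.g. a DSP
    def __init__(self, detection_module):
        self.detection_module = detection_module
        self.awake = False

    def on_second_information(self):
        self.awake = True
        log.append("second information received")
        # "Third information": request the sound signal from the codec.
        return self.detection_module.read_signal()

class FirstMainControlUnit:           # e.g. an application processor
    def __init__(self, second_unit):
        self.second_unit = second_unit
        self.awake = False

    def on_first_information(self):   # the interrupt from the codec
        self.awake = True
        log.append("first information received")
        return self.second_unit.on_second_information()

codec = DetectionModule()
ap = FirstMainControlUnit(SecondMainControlUnit(codec))
signal = ap.on_first_information()
```

Running the sketch shows the strict ordering the description implies: the second unit is only woken after the first, and the sound signal is only requested after both are awake.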
In a possible implementation of the first aspect, the first main control processing unit and the second main control processing unit are electrically connected, and of course, the first main control processing unit and the second main control processing unit may be connected in other manners, which may be selected and set as required.
In a possible implementation of the first aspect, the main control module may also include only one of the first main control processing unit and the second main control processing unit, or may include other main control processing units or devices, which may be selected and set as required.
In one possible implementation manner of the first aspect, the first information is an interrupt signal for adjusting an operation state of the main control module; or the first information is a notification signal including wake-state identification information. Of course, the first information may be other information or signals, which may be selected and set as desired.
In a possible implementation manner of the first aspect, the sound detection processing module further includes a third sound detection processing unit, a fourth sound detection processing unit, and a fifth sound detection processing unit. The third sound detection processing unit is configured to receive a second sound signal, amplify the second sound signal to obtain a first sound sub-signal, and send the first sound sub-signal to the fourth sound detection processing unit, where the second sound signal is a sound signal collected by the electronic device. The fourth sound detection processing unit is configured to receive the first sound sub-signal, perform analog-to-digital conversion on it to obtain a second sound sub-signal, and send the second sound sub-signal to the fifth sound detection processing unit. The fifth sound detection processing unit is configured to receive the second sound sub-signal, perform signal decimation filtering on it to obtain the first sound signal, and send the first sound signal to the first sound detection processing unit and the second sound detection processing unit, respectively.
In this way, the third sound detection processing unit, the fourth sound detection processing unit and the fifth sound detection processing unit can process the sound signals collected by the electronic device into the first sound signals which are convenient for subsequent processing.
In a possible implementation manner of the first aspect, the third sound detection processing unit is connected to a fourth sound detection processing unit, the fourth sound detection processing unit is connected to a fifth sound detection processing unit, and the fifth sound detection processing unit is connected to the first sound detection processing unit and the second sound detection processing unit, respectively.
In a possible implementation manner of the first aspect, an output terminal of the third sound detection processing unit is connected to an input terminal of the fourth sound detection processing unit, an output terminal of the fourth sound detection processing unit is connected to an input terminal of the fifth sound detection processing unit, and output terminals of the fifth sound detection processing unit are respectively connected to an input terminal of the first sound detection processing unit and an input terminal of the second sound detection processing unit.
Of course, the first sound detection processing unit, the second sound detection processing unit, the third sound detection processing unit, the fourth sound detection processing unit, and the fifth sound detection processing unit may also be connected by other means, which may be selected and set as needed.
The sound signal collected by the electronic device may be, for example, a sound signal collected by a microphone of the electronic device, or may be a sound signal collected by a sound collection device in the electronic device for collecting a corresponding sound signal such as an ultrasonic signal.
In a possible implementation of the first aspect, the third sound detection processing unit includes a programmable gain amplifier, the fourth sound detection processing unit includes an analog-to-digital converter, and the fifth sound detection processing unit includes a signal decimation filter.
Of course, the programmable gain amplifier, the analog-to-digital converter, and the signal decimation filter may be replaced by other devices implementing the same or different functions, and the third sound detection processing unit, the fourth sound detection processing unit, and the fifth sound detection processing unit may also include other devices, which may be selected and set as needed.
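The three-stage front end (programmable gain amplifier, ADC, decimation filter) can be modeled as a simple function pipeline. This is a toy sketch under stated assumptions: the gain of 4, the 12-bit ADC model, the averaging decimator, and the decimation factor of 4 are all illustrative values, not taken from the patent:

```python
def programmable_gain_amplify(samples, gain=4.0):
    """Third unit: amplify the raw samples; the gain value is illustrative."""
    return [gain * s for s in samples]

def analog_to_digital(samples, full_scale=1.0, bits=12):
    """Fourth unit: quantize to signed integer codes (a toy ADC model)."""
    max_code = 2 ** (bits - 1) - 1
    return [max(-max_code, min(max_code, round(s / full_scale * max_code)))
            for s in samples]

def decimation_filter(codes, factor=4):
    """Fifth unit: average each block of `factor` codes, then downsample."""
    return [sum(codes[i:i + factor]) / factor
            for i in range(0, len(codes) - factor + 1, factor)]

# "Second sound signal" -> first sound sub-signal -> second sound
# sub-signal -> "first sound signal", as in the description:
raw = [0.01 * t for t in range(16)]
first_sound_signal = decimation_filter(
    analog_to_digital(programmable_gain_amplify(raw)))
```

The pipeline shape is the point: each unit consumes exactly the output of the previous one, which is why the description wires the units output-to-input in sequence.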
In a possible implementation of the first aspect, the sound detection processing module may also include part of the third sound detection processing unit, the fourth sound detection processing unit and the fifth sound detection processing unit, or may include other sound detection processing units or devices, which may be selected and set as required.
In a possible implementation manner of the first aspect, the main control module being configured to be in an awake state in response to the first information includes: the main control module is configured to switch from the sleep state to the awake state in response to the first information; or the main control module is configured to remain in the awake state in response to the first information.
That is, when the main control module is in the sleep state, it switches from the sleep state to the awake state according to the first information; when the main control module is already in the awake state, it remains in the awake state.
In a possible implementation manner of the first aspect, the main control module is a system-on-chip. Of course, the main control module can also be other control modules related to service implementation, which can be selected and set according to the needs.
In a possible implementation of the first aspect, the first master processing unit may be, for example, an application processor. Of course, the first main control processing unit may be other devices, which may be selected and set according to the needs.
In a possible implementation of the first aspect, the second main control processing unit may be, for example, a digital signal processor or an advanced digital signal processor. Of course, the second main control processing unit may be other devices, which may be selected and set as required.
In a possible implementation of the first aspect, the sound detection processing module is a codec. Of course, the codec may also be other modules related to the sound signal processing, i.e. the sound detection processing module may also be other devices, which may be selected and arranged as desired.
In a second aspect, embodiments of the present application provide an electronic device including the aforementioned sound processing apparatus.
Of course, the electronic device may also have other devices or components, which may be selected and arranged as desired.
In a third aspect, an embodiment of the present application provides a sound processing method applied to an electronic device, where the electronic device includes a sound processing apparatus, the sound processing apparatus includes a main control module and a sound detection processing module, and the sound detection processing module includes a first sound detection processing unit, the method includes: the first sound detection processing unit receives a first sound signal, generates first information when the first sound signal comprises a first type sound signal according to the first sound signal, and sends the first information to the main control module; the main control module receives the first information and is in an awakening state in response to the first information.
In a possible implementation of the third aspect, the sound detection processing module further includes a second sound detection processing unit, and the method further includes: the second sound detection processing unit receives the first sound signal and stores the first sound signal; and under the condition that the main control module is in an awake state, acquiring a first sound signal from the second sound detection processing unit, and performing a first service according to the first sound signal.
In one possible implementation manner of the third aspect, the first sound detection processing unit includes a first sound detection processing subunit and a second sound detection processing subunit, the first sound detection processing unit receives the first sound signal, generates first information in a case where it is determined that the first sound signal includes a first type of sound signal according to the first sound signal, and sends the first information to the main control module, and includes: the first sound detection processing subunit receives a first sound signal, obtains a first target sound signal according to the first sound signal, and sends the first target sound signal to the second sound detection processing subunit, wherein the first target sound signal is a first frequency band signal in the first sound signal; the second sound detection processing subunit receives the first target sound signal, determines whether the first target sound signal comprises a first type sound signal according to the first target sound signal, generates first information when the first sound signal comprises the first type sound signal, and sends the first information to the main control module.
In a possible implementation manner of the third aspect, the second sound detection processing subunit receives the first target sound signal, determines whether the first target sound signal includes a first type of sound signal according to the first target sound signal, and includes: a gain of the first target sound signal is determined, and whether the first target sound signal includes a first type sound signal is determined based on the gain.
In a possible implementation manner of the third aspect, the master control module includes a first master control processing unit and a second master control processing unit, and the method further includes: the first main control processing unit receives the first information, is in an awakening state in response to the first information, and sends second information to the second main control processing unit; the second main control processing unit receives the second information, is in an awake state in response to the second information, and transmits third information to the sound detection processing module to acquire the first sound signal from the sound detection processing module.
In a possible implementation manner of the third aspect, the sound detection processing module further includes a third sound detection processing unit, a fourth sound detection processing unit, and a fifth sound detection processing unit, and the method further includes: the third sound detection processing unit receives a second sound signal, amplifies the second sound signal to obtain a first sound sub-signal, and sends the first sound sub-signal to the fourth sound detection processing unit, where the second sound signal is a sound signal collected by the electronic device; the fourth sound detection processing unit receives the first sound sub-signal, performs analog-to-digital conversion on the first sound sub-signal to obtain a second sound sub-signal, and sends the second sound sub-signal to the fifth sound detection processing unit; the fifth sound detection processing unit receives the second sound sub-signal, performs signal decimation filtering on the second sound sub-signal to obtain the first sound signal, and sends the first sound signal to the first sound detection processing unit and the second sound detection processing unit.
In a possible implementation of the third aspect, being in an awake state includes: switching from a sleep state to the awake state; or remaining in the awake state.
In a fourth aspect, an implementation manner of the present application provides an electronic device, including: a memory for storing a computer program, the computer program comprising program instructions; a processor for executing program instructions for causing an electronic device to perform the sound processing method as provided in the first aspect and/or any one of the possible implementation manners of the first aspect.
In a fifth aspect, an implementation of the present application provides a computer readable storage medium storing a computer program comprising program instructions to be executed by an electronic device to cause the electronic device to perform a sound processing method as provided in the first aspect and/or any one of the possible implementations of the first aspect.
In a sixth aspect, an implementation of the present application provides a computer program product comprising a computer program to be run by an electronic device to cause the electronic device to perform the sound processing method as provided by the first aspect and/or any one of the possible implementations of the first aspect.
For the advantageous effects of the second aspect to the sixth aspect, reference may be made to the relevant description of the first aspect; details are not repeated herein.
Drawings
In order to illustrate the technical solution of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below.
FIG. 1 is a diagram illustrating various frequency bands of sound according to some embodiments of the present application;

FIG. 2 is a schematic diagram illustrating a configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 3 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 4 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 5 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 6 is a schematic diagram illustrating a hardware architecture of a mobile phone according to some embodiments of the present application;

FIG. 7 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 8 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 9 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 10 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 11 is a schematic diagram illustrating another configuration of a sound processing apparatus according to some embodiments of the present application;

FIG. 12 is a schematic diagram illustrating a configuration of an electronic device according to some embodiments of the present application;

FIG. 13 is a schematic diagram illustrating another configuration of an electronic device according to some embodiments of the present application;

FIG. 14 is a schematic diagram illustrating an architecture of a system on a chip (SoC) according to some embodiments of the present application.
Detailed Description
The technical solution of the present application will be further described below with reference to the accompanying drawings.
As shown in fig. 1, sounds can be classified by frequency into infrasound (Infrasound), natural sound (audible sound), ultrasound (Ultrasound), and the like, where infrasound is below 20Hz, natural sound ranges from 20Hz to 20kHz, and ultrasound is above 20kHz. In addition, the portion of natural sound near 20Hz is low-frequency sound (low-frequency sound), the portion of ultrasound near 20kHz relates to animals and chemistry (animals and chemistry), sound at still higher frequencies relates to medical and destructive (medical and destructive) applications, and sound above 2MHz includes diagnosis and nondestructive evaluation (diagnostic and NDE) related sound.
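The frequency boundaries above can be summarized, purely as an illustrative sketch (the function name and band labels here are chosen for illustration and are not part of any claim):

```python
# Hypothetical sketch: map a frequency in Hz to the sound bands described
# in the text (below 20 Hz, 20 Hz - 20 kHz, above 20 kHz).

def classify_sound(freq_hz: float) -> str:
    """Return the sound band for a given frequency in Hz."""
    if freq_hz < 20:
        return "infrasound"   # below 20 Hz
    if freq_hz <= 20_000:
        return "audible"      # 20 Hz to 20 kHz ("natural sound")
    return "ultrasound"       # above 20 kHz

print(classify_sound(10))      # infrasound
print(classify_sound(440))     # audible
print(classify_sound(40_000))  # ultrasound
```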
Ultrasound can be applied in scenarios such as acoustic payment, acoustic identity authentication, and acoustic file transfer. Acoustic payment refers to, for example, mapping payment information to ultrasound so that payment is carried out by ultrasound. Acoustic identity authentication refers to, for example, setting the identity authentication information to a unique code, mapping the code to unique acoustic information, and performing identity authentication on that basis; its application scenarios mainly include acoustic membership cards, acoustic tickets, acoustic business cards, acoustic sign-in, acoustic queuing, and the like. Acoustic file transfer refers to, for example, mapping file contents to ultrasound so that files are transferred by ultrasound.
An electronic device may implement sound processing and related service processing by being provided with a sound processing apparatus. The sound processing apparatus and the sound processing method provided by the present application are described below, taking a mobile phone as an example of the electronic device and an ultrasonic service as an example of a sound-related service.
As shown in fig. 2, in one implementation of the present application, a sound processing apparatus for implementing sound service processing in a mobile phone includes a system on chip (SOC) and a codec (Codec), which are connected to each other. The Codec is configured to encode, decode, and otherwise process the sound signal collected by the microphone and send the processed sound signal to the SOC, and the SOC is configured to perform the corresponding service processing according to the sound signal, such as performing an acoustic payment service.
Currently, in order to detect ultrasound in time, the SOC and the Codec in the mobile phone must always be in an awake state, i.e., normally on to monitor ultrasonic events, which leads to high power consumption of the mobile phone. Moreover, since ultrasound is not a natural sound and must be emitted by a dedicated instrument, ultrasonic services are triggered only rarely, so keeping the power consumption associated with ultrasonic detection very low is particularly critical.
As shown in fig. 3, in one implementation, the SOC includes an application processor (Application Processor, AP) and a digital signal processor (digital signal processor, DSP), and the AP and the DSP are connected to each other. In addition, the Codec may be, for example, an analog-to-digital (AD) and/or digital-to-analog (DA) codec.
In this implementation, in order to reduce the power consumption of the mobile phone, when there is no ultrasonic service to be executed, the AP in the SOC may be in a sleep state while the DSP enters a low power interference (Low Power Interference, LPI) mode and performs scene recognition to determine whether the received sound signal includes an ultrasonic signal.
Illustratively, the Codec is in an awake state and sends the processed sound signal to the DSP, which further processes the sound signal to determine whether an ultrasonic signal is present. If one is present, the DSP wakes up the AP and sends the sound signal to the AP so that the AP performs the ultrasonic service corresponding to the sound signal. If not, the AP does not need to be woken up.
In this way, although keeping the AP in the sleep state reduces the power consumption of the sound processing apparatus, the apparatus still consumes power in the DSP, in the Codec, and in the memories used to store sound signals for ultrasonic services, such as double data rate (DDR) memory and tightly coupled memory (Tightly Coupled Memory, TCM). The power consumption of the sound processing apparatus therefore remains high.
Based on this, the present application provides a sound processing apparatus and a sound processing method. As shown in fig. 4, in one implementation of the present application, a sound processing apparatus included in a mobile phone includes an SOC (as an example of a main control module) and a Codec (as an example of a sound detection processing module). The SOC and the Codec may be connected by a General-purpose input/output (GPIO) interface (as an example of a first interface) and a serial audio interface (serial audio interface, SAI) (as an example of a second interface). The SOC includes an AP (as an example of the first main control processing unit) and an advanced digital signal processor (advanced digital signal processor, ADSP) (as an example of the second main control processing unit), which may be, for example, the aforementioned DSP.
The Codec includes a programmable gain amplifier (Programmable Gain Amplifier, PGA) (as an example of a third sound detection processing unit), an analog-to-digital converter (Analog-to-Digital Converter, ADC) (as an example of a fourth sound detection processing unit), a signal decimation filter (Decimation Filter, DF) (as an example of a fifth sound detection processing unit), an ultrasonic detection (Ultrasound Detection) module (as an example of a first sound detection processing unit), and a first-in first-out (FIFO) memory (as an example of a second sound detection processing unit).
The function of the ultrasonic detection module is to determine whether an ultrasonic signal is present in the acquired sound signal. The ultrasonic detection module includes a high pass filter (HPF) (or, alternatively, a band pass filter (BPF)) (as an example of the first sound detection processing subunit) and a Mag Det unit (as an example of the second sound detection processing subunit). The Mag Det unit is a processing unit that calculates the gain of the sound signal based on its amplitude (magnitude), and may include a corresponding gain processing function or gain processing algorithm. The Mag Det unit calculates the gain of the input sound signal and determines, according to the calculated gain, whether an ultrasonic signal is present in the sound signal. For example, if the gain is greater than or equal to a preset gain threshold, it is determined that an ultrasonic signal is present in the sound signal; if the gain is less than the preset gain threshold, it is determined that no ultrasonic signal is present. The gain threshold may be, for example, 30dB, or another value selected and set as desired.
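The decision rule of the Mag Det unit can be sketched as follows. This is a hypothetical software illustration of the amplitude-based gain check only (the reference level, the use of the 30dB example threshold, and all names are assumptions; the actual unit is a hardware circuit in the Codec):

```python
import math

GAIN_THRESHOLD_DB = 30.0  # example threshold from the text; selectable as desired

def signal_gain_db(samples, ref=1e-3):
    """Rough amplitude-based gain: 20 * log10(RMS / reference level)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")  # silence: no measurable gain
    return 20.0 * math.log10(rms / ref)

def has_ultrasound(filtered_samples) -> bool:
    """Mag-Det-style decision: a signal is deemed present iff the
    gain of the (already high-pass filtered) input meets the threshold."""
    return signal_gain_db(filtered_samples) >= GAIN_THRESHOLD_DB

# A strong high-band signal passes the threshold; near-silence does not.
loud = [0.5] * 64
quiet = [1e-5] * 64
print(has_ultrasound(loud), has_ultrasound(quiet))  # True False
```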
The FIFO is used to buffer a preset number of sound signals, to avoid data loss while the SOC is being woken up. The preset number may be selected and set as desired.
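The role of the FIFO can be illustrated with a minimal software sketch (the class and method names are hypothetical; the real FIFO is a hardware buffer of configurable depth):

```python
from collections import deque

class SoundFifo:
    """Fixed-depth FIFO: keeps the most recent `depth` frames so that
    samples arriving while the SOC wakes up are not lost."""
    def __init__(self, depth: int):
        self._buf = deque(maxlen=depth)  # oldest frames drop first when full

    def push(self, frame):
        self._buf.append(frame)

    def drain(self):
        """Read out all buffered frames (what the SOC side does after waking)."""
        frames = list(self._buf)
        self._buf.clear()
        return frames

fifo = SoundFifo(depth=4)
for i in range(6):       # 6 frames arrive, but the depth is only 4
    fifo.push(i)
print(fifo.drain())      # the 4 most recent frames: [2, 3, 4, 5]
```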
The input of the PGA is connected, for example, to the output of a microphone for receiving sound signals collected by the microphone, the output of the PGA is connected to the input of an ADC, the output of the ADC is connected to the input of a signal decimation filter, and the output of the signal decimation filter is connected to the input of a high pass filter and to the input of a FIFO, respectively. The output end of the high-pass filter is connected with the input end of the Mag Det unit, and the output end of the Mag Det unit is connected with the GPIO interface. In addition, the output of the FIFO is connected to the SAI interface.
In this implementation manner, when there is no ultrasonic service to be executed, the SOC is in the sleep state, that is, both the AP and the ADSP included in the SOC are in the sleep state, which can effectively reduce the power consumption of the mobile phone. The external Codec is in a normally open state to detect the sound signal and determine whether the collected sound signal includes an ultrasonic signal.
Illustratively, after the microphone in the mobile phone collects a sound signal (which may also be referred to as sound data or audio data) S1, the microphone sends the sound signal S1 (as an example of a second sound signal) to the PGA as an input signal through, for example, an analog signal input (Analog signal input, AINP) interface. The PGA amplifies the sound signal S1 to obtain an amplified sound signal S2 (as an example of a first sound sub-signal) and sends the sound signal S2 to the ADC. The ADC performs analog-to-digital conversion on the sound signal S2 to obtain a corresponding digital sound signal S3 (as an example of a second sound sub-signal) and sends the sound signal S3 to the signal decimation filter. The signal decimation filter decimates the sound signal S3 to obtain a sound signal S4 (as an example of the first sound signal), and sends the sound signal S4 along two paths, to the HPF in the ultrasonic detection module and to the FIFO for buffering. The ultrasonic detection module determines whether an ultrasonic signal is present in the sound signal, while the FIFO buffers a preset number of sound signals, i.e., the FIFO buffers the sound signal S4 to avoid data loss while the SOC wakes up. The sound signal S4 can be understood as the uplink signal of the MIC.
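The S1 → S2 → S3 → S4 front-end chain described above can be sketched in simplified form (the gain value, bit width, and decimation factor are illustrative assumptions; a real decimation filter also low-pass filters before downsampling, which is omitted here for brevity):

```python
def pga(samples, gain):
    """S1 -> S2: programmable-gain amplification."""
    return [s * gain for s in samples]

def adc(samples, bits=16, full_scale=1.0):
    """S2 -> S3: quantize to signed integer codes (analog-to-digital)."""
    q = (2 ** (bits - 1)) - 1
    return [max(-q, min(q, round(s / full_scale * q))) for s in samples]

def decimate(samples, factor):
    """S3 -> S4: keep every `factor`-th sample."""
    return samples[::factor]

s1 = [0.001, 0.002, -0.001, 0.003, 0.002, -0.002, 0.001, 0.000]
s4 = decimate(adc(pga(s1, gain=100)), factor=2)
print(s4)  # [3277, -3277, 6553, 3277]
```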
After receiving the sound signal S4, the HPF filters it, passing the high-frequency components and removing the low-frequency components, to obtain a sound signal S5 (as an example of a first target sound signal), and sends the sound signal S5 to the Mag Det unit. The Mag Det unit calculates the gain of the input sound signal S5. If the gain is greater than or equal to the preset gain threshold, it is determined that an ultrasonic signal is present in the sound signal S5, and the Mag Det unit sends an interrupt signal (as an example of the first information) to the AP in the SOC through the GPIO interface, triggering interrupt detection to wake up the SOC. If the gain is smaller than the preset gain threshold, it is determined that the sound signal S5 does not include an ultrasonic signal, and the Mag Det unit does not need to send an interrupt signal to the SOC, that is, does not need to wake up the SOC.
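The HPF stage can be illustrated with a first-order sketch (the filter coefficient and function names are assumptions; the Codec uses a dedicated hardware filter designed around the 20kHz boundary):

```python
def high_pass(samples, alpha=0.9):
    """First-order high-pass filter: attenuates low-frequency content.
    A stand-in for the Codec HPF; a real design picks alpha from the
    desired cutoff frequency and the sample rate."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)  # difference term blocks slow changes
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant (0 Hz) input is blocked almost entirely...
dc = high_pass([1.0] * 50)
print(abs(dc[-1]) < 0.01)   # True: the DC level decays toward zero
# ...while a rapidly alternating (high-frequency) input passes through.
hf = high_pass([1.0, -1.0] * 25)
print(abs(hf[-1]) > 0.5)    # True
```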
After receiving the interrupt signal, the AP switches from the sleep state to the awake state in response to the interrupt signal, sends notification information (as an example of the second information) to the ADSP to wake it from the sleep state to the awake state, and establishes an ultrasonic detection path corresponding to the SAI interface, so that the ADSP acquires the uplink data of the FIFO through the SAI interface path, that is, acquires the sound signal S4 stored in the FIFO. The ADSP may also acquire, from the FIFO through the SAI interface, the sound signals that arrive after the sound signal S4. The ADSP may perform the corresponding ultrasonic service, such as the aforementioned acoustic payment service, based on the acquired sound signal; that is, the ADSP side starts an ultrasonic application algorithm according to the acquired sound signal to perform the corresponding service. Alternatively, the ADSP may further process the acquired sound signal and send the result to the AP, so that the AP performs the corresponding ultrasonic service, for example the aforementioned acoustic payment service. To acquire the sound signal S4 stored in the FIFO, the ADSP may send a data acquisition request (as an example of the third information) to the FIFO through the SAI interface, and the FIFO sends the corresponding sound signal to the ADSP in response to the data acquisition request.
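The wake-up handshake described above (first, second, and third information) can be modeled as a toy sequence (all class and method names are illustrative; the real path involves the GPIO interrupt line and the SAI hardware path):

```python
class SocSketch:
    """Toy model of the wake-up handshake: GPIO interrupt -> AP wakes ->
    AP notifies ADSP -> ADSP drains the FIFO over the SAI path."""
    def __init__(self, fifo_frames):
        self.ap_state = "sleep"
        self.adsp_state = "sleep"
        self.fifo = list(fifo_frames)  # frames buffered by the Codec FIFO
        self.received = []

    def on_gpio_interrupt(self):       # first information: interrupt to the AP
        self.ap_state = "awake"
        self._notify_adsp()

    def _notify_adsp(self):            # second information: AP -> ADSP
        self.adsp_state = "awake"
        self._read_fifo()

    def _read_fifo(self):              # third information: data request over SAI
        self.received, self.fifo = self.fifo, []

soc = SocSketch(fifo_frames=["S4_frame0", "S4_frame1"])
soc.on_gpio_interrupt()
print(soc.ap_state, soc.adsp_state, soc.received)
```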
In this implementation manner, when there is no ultrasonic service to be executed, the SOC is in the sleep state, that is, both the AP and the ADSP included in the SOC are in the sleep state, and the SOC is woken up only when the ultrasonic detection module outside the SOC detects an ultrasonic signal. Compared with the approach in which the DSP cannot sleep, this further reduces the power consumption of the sound processing apparatus and thus effectively reduces the power consumption of the mobile phone.
Further, in this implementation manner, the high-pass filter and the Mag Det unit in the Codec directly cooperate to detect whether an ultrasonic signal is present in the sound signal, and these two devices consume less power than a DSP, so the power consumption of the sound processing apparatus, and hence of the mobile phone, can be effectively reduced.
Combining the high-pass filter and the Mag Det unit in the Codec to detect whether an ultrasonic signal is present amounts to implementing the ultrasonic detection algorithm (for example, the gain processing algorithm described above) in hardware within the Codec. In this way, both the AP and the ADSP in the SOC can remain in sleep states, and only the Codec circuit consumes power, which is less than the above-described manner of running the ultrasonic detection algorithm on the DSP.
In addition, in this implementation manner, while the ultrasonic detection module detects whether an ultrasonic signal is present in the sound signal, the obtained sound signal is stored in the FIFO and, after the SOC wakes up, is sent to the SOC for service processing. This avoids the problem of sound signal loss, ensures normal processing of the service, and improves user experience.
Further, in another implementation, as shown in fig. 5, when it is determined that there is no ultrasonic service to be executed, the SOC may be in the sleep state, that is, both the AP and the DSP in the SOC are in the sleep state. A DSP may be separately provided in the Codec to recognize whether an ultrasonic signal is present in the sound signal, and when one is present, the DSP in the Codec sends an interrupt signal to the SOC to wake it up. In this implementation, however, since the power consumption of the built-in DSP is high, and the microphone (MIC) path and other circuits in the Codec also consume power, the power consumption of the sound processing apparatus is still high.
In contrast, in the implementation manner shown in fig. 4 of the present application, the ultrasonic detection algorithm is implemented in hardware in the Codec, as described above. The high-pass filter and the Mag Det unit in the Codec directly cooperate to detect whether an ultrasonic signal is present in the sound signal, so only the power consumption of the Codec circuit remains, and these two devices consume less power than running the ultrasonic detection algorithm on a DSP, thereby effectively reducing the power consumption of the sound processing apparatus and of the mobile phone.
In other embodiments of the present application, the interrupt signal may also be a notification signal (i.e., notification information) including wake-up status identification information, so that the AP wakes up according to the notification signal. Of course, the Mag Det unit may also send other types of signals to the AP to wake up the AP, which may be selected and set as desired.
Further, in another implementation manner of the present application, if the ultrasonic detection module detects an ultrasonic signal while the SOC is in the awake state, it may also send a notification signal including the wake-up state identification information to the SOC, so that the SOC remains awake to perform the ultrasonic service. If the SOC does not receive such a notification signal within a preset time, the SOC switches to the sleep state.
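The keep-awake rule in this paragraph can be sketched as a simple timeout, purely as an illustration (the tick granularity and all names are assumptions):

```python
class WakeHoldTimer:
    """Sketch of the keep-awake rule: each detection notification resets
    a timeout; with no notification within `timeout_ticks` ticks, the
    SOC switches back to the sleep state."""
    def __init__(self, timeout_ticks: int):
        self.timeout = timeout_ticks
        self.remaining = timeout_ticks
        self.state = "awake"

    def on_notification(self):
        self.state = "awake"
        self.remaining = self.timeout  # a detection keeps the SOC awake

    def tick(self):
        if self.state == "awake":
            self.remaining -= 1
            if self.remaining <= 0:
                self.state = "sleep"   # preset time elapsed with no detection

t = WakeHoldTimer(timeout_ticks=3)
t.tick(); t.tick()
t.on_notification()            # a detection arrives, the timer resets
t.tick(); t.tick(); t.tick()   # no further detections within the preset time
print(t.state)                 # sleep
```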
In other implementations of the application, the Mag Det unit may also be a sound type detection unit implemented based on other types of sound type detection functions, which may be selected and set as desired.
In other embodiments of the present application, the aforementioned PGA, ADC, signal decimation filter, HPF, BPF, FIFO, AP, ADSP, etc. may be replaced by other devices performing the same or similar functions, or by devices performing different functions, and the sound processing apparatus may include more or fewer devices, which may be selected and arranged as desired.
In addition, the GPIO interface and the SAI interface may be other types of interfaces, which may be set as needed.
In some implementations of the application, the connections between the devices may be electrical connections based on interfaces, or may be other types of communication connections, wired connections, or other connections, etc., which may be selected and arranged as desired.
The sound processing apparatus provided in this implementation manner, when applied to an ultrasonic service scenario, achieves lower power consumption, that is, it reduces the power consumption of the ultrasonic service scenario and solves the problem of ultrasonic recognition power consumption; therefore, the sound processing apparatus may also be called a low-power-consumption ultrasonic detection apparatus.
The following describes the structure of the mobile phone provided by the implementation mode of the application.
Fig. 6 shows a schematic diagram of a structure of a mobile phone.
The handset may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) connector 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It will be appreciated that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the mobile phone. In other embodiments of the application, the handset may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The processor 110 may generate operation control signals according to the instruction operation code and the timing signals to complete instruction fetching and instruction execution control.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and the like.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
It will be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only illustrative and does not constitute a structural limitation on the mobile phone. In other embodiments of the present application, the mobile phone may also use different interfacing modes, or a combination of multiple interfacing modes in the foregoing embodiments.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor and the like.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the handset (e.g., audio data, phonebook, etc.), etc. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the cellular phone and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The handset may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The handset may listen to music through speaker 170A or to hands-free conversations.
A receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the mobile phone answers a call or a voice message, the receiver 170B can be placed close to the ear to listen to the voice.
Microphone 170C, also referred to as a "mic" or "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak near the microphone 170C, inputting a sound signal into it. The handset may be provided with at least one microphone 170C. In other embodiments, the handset may be provided with two microphones 170C, which can also perform noise reduction in addition to collecting sound signals. In other embodiments, the handset may be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording, etc.
In other implementations of the application, the mobile phone may instead be, for example, a tablet, a notebook, a palmtop, a mobile internet device (mobile internet device, MID), a wearable device (including, for example, a smart watch, smart bracelet, or pedometer), a personal digital assistant, a portable media player, a navigation device, a video game device, a set top box, a virtual reality and/or augmented reality device, an internet of things device, an industrial control device, a streaming media client device, an electronic book reader, an in-vehicle device, a point-of-sale (POS) machine, or another device, which may be selected and set as desired.
In other implementations of the application, the sound processing apparatus may also be applied in scenarios other than ultrasound, waking up the SOC when, for example, the Codec determines that a certain kind of signal is included in the received sound signal. For example, in the aforementioned infrasound scenario, the SOC is woken up when the sound signal includes infrasound; in the aforementioned natural-sound scenario, the SOC is woken up when the sound signal includes natural sound; and in a speech recognition scenario, the SOC is woken up when the sound signal includes a wake-up word or the voice of a preset user.
The sound processing device provided by the implementation mode of the application can be different in structure based on different application scenes. The structure of the sound processing apparatus provided by the implementation of the present application will be further described below.
As shown in fig. 7, the present application provides a sound processing apparatus applied to an electronic device. In one implementation manner of the present application, the sound processing apparatus includes a main control module and a sound detection processing module, which are connected to each other. The main control module may be a system on a chip; of course, it may also be another control module related to service implementation, which may be selected and set as desired. The sound detection processing module is, for example, a codec; of course, it may also be another module related to sound signal processing, which may be selected and set as desired.
The main control module can be in a dormant state, and the sound detection processing module includes a first sound detection processing unit. The first sound detection processing unit is configured to receive a first sound signal, generate first information when the first sound signal includes a first type sound signal, and send the first information to the main control module; the main control module is configured to receive the first information and, in response to the first information, switch from the dormant state to the awake state.
Therefore, in this implementation, the main control module can remain in the dormant state while the sound detection processing module outside it detects the sound signal and wakes the main control module only when the detection result satisfies the condition. This effectively reduces the power consumption of the main control module, and hence of the sound processing device and the electronic equipment. At the same time, the main control module can still perform the corresponding service in a timely manner, improving the user experience.
In addition, the main control module can also be in an awake state, with the sound detection processing module including the first sound detection processing unit. The first sound detection processing unit is configured to receive the first sound signal, generate first information when the first sound signal includes a first type sound signal, and send the first information to the main control module; the main control module is configured to receive the first information and, in response to the first information, remain in the awake state. Thus, the main control module can stay awake to perform the corresponding service, improving the user experience.
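The wake-on-detect behaviour described above can be sketched as follows; the class and method names are illustrative stand-ins, not part of the claimed apparatus, and the detection predicate is a placeholder:

```python
# Minimal sketch of the wake-on-detect flow: the detection unit runs while the
# main control module sleeps, and a "first information" message switches (or
# keeps) the module in the awake state. All names are illustrative.
class MainControl:
    def __init__(self):
        self.state = "sleep"

    def on_first_information(self, _info):
        # Sleep -> awake on wake-up; an already-awake module simply stays awake.
        self.state = "awake"

class SoundDetectionUnit:
    def __init__(self, main_control, is_first_type):
        self.main_control = main_control
        self.is_first_type = is_first_type  # predicate: does the signal contain a first type sound?

    def receive(self, signal):
        if self.is_first_type(signal):
            self.main_control.on_first_information({"event": "first_type_detected"})
```

For example, with a toy amplitude predicate, `unit.receive([0.1, 0.2])` leaves the module asleep, while `unit.receive([0.9])` wakes it.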
Further, as shown in fig. 8, in an implementation of the present application, the first sound detection processing unit includes a first sound detection processing subunit and a second sound detection processing subunit, and an output end of the first sound detection processing subunit is connected to an input end of the second sound detection processing subunit. The first sound detection processing subunit is configured to receive a first sound signal, obtain a first target sound signal from it, and send the first target sound signal to the second sound detection processing subunit, where the first target sound signal is the first frequency band signal in the first sound signal. The second sound detection processing subunit is configured to receive the first target sound signal, determine whether the first target sound signal includes a first type sound signal, generate first information if it does, and send the first information to the main control module.
In this way, the first sound detection processing subunit may screen the signal in the first frequency band out of the first sound signal as the first target sound signal, so that the second sound detection processing subunit can conveniently and accurately determine whether the first target sound signal includes a first type sound signal. The first frequency band may be selected and set as desired.
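One plausible software model of the two subunits, assuming a first-order high-pass filter for the first subunit and a simple energy threshold standing in for the gain-based decision of the second subunit (the patent fixes neither the filter order nor the threshold), is:

```python
def high_pass(samples, alpha=0.95):
    """First-order high-pass filter: passes the upper band (e.g. ultrasound),
    attenuates the lower band. The alpha value is an illustrative assumption."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def contains_first_type(samples, threshold=0.1):
    """Second-subunit sketch: declare a first type sound signal present when the
    filtered band still carries enough energy (a stand-in for the gain test)."""
    band = high_pass(samples)
    energy = sum(y * y for y in band) / len(band)
    return energy > threshold
```

A constant (low-frequency) input is filtered away and does not trigger detection, while a rapidly alternating (high-frequency) input passes the filter and does.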
Further, as shown in fig. 9, in one implementation of the present application, the sound detection processing module further includes a second sound detection processing unit, where the second sound detection processing unit is configured to receive the first sound signal and store the first sound signal; the main control module is also used for acquiring a first sound signal from the second sound detection processing unit under the condition of being in an awake state, and performing a first service according to the first sound signal.
Therefore, the sound detection processing module can store the first sound signal in the second sound detection processing unit and send it to the main control module for the corresponding service processing after the main control module wakes up, which avoids the problem of sound signal loss and ensures normal processing of the service.
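A minimal sketch of such a store, assuming a bounded first-in first-out buffer (the capacity and frame format are illustrative):

```python
from collections import deque

class SoundBuffer:
    """Sketch of the second sound detection processing unit's store: a bounded
    FIFO that keeps recent audio frames while the main control module sleeps,
    so no sound is lost between detection and wake-up."""
    def __init__(self, max_frames=256):
        self.frames = deque(maxlen=max_frames)  # oldest frames drop when full

    def store(self, frame):
        self.frames.append(frame)

    def drain(self):
        """Called by the awakened main control module to fetch buffered audio."""
        out = list(self.frames)
        self.frames.clear()
        return out
```

With `maxlen` set, overflow silently discards the oldest frames, a common choice for wake-word and ultrasound front ends where only recent history matters.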
Further, as shown in fig. 10, in an implementation of the present application, the sound detection processing module further includes a third sound detection processing unit, a fourth sound detection processing unit, and a fifth sound detection processing unit, where the third sound detection processing unit is connected to the fourth sound detection processing unit, the fourth sound detection processing unit is connected to the fifth sound detection processing unit, and the fifth sound detection processing unit is connected to the first sound detection processing unit and the second sound detection processing unit, respectively. The third sound detection processing unit is configured to receive a second sound signal, amplify the second sound signal to obtain a first sound sub-signal, and send the first sound sub-signal to the fourth sound detection processing unit, where the second sound signal is collected by the sound processing device; the fourth sound detection processing unit is configured to receive the first sound sub-signal, perform analog-to-digital conversion on it to obtain a second sound sub-signal, and send the second sound sub-signal to the fifth sound detection processing unit; the fifth sound detection processing unit is configured to receive the second sound sub-signal, perform signal decimation filtering on it to obtain the first sound signal, and send the first sound signal to the first sound detection processing unit and the second sound detection processing unit, respectively.
In this way, the sound detection processing module can process the sound signal detected by the electronic device into a signal which is convenient for subsequent processing.
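The three-stage chain (amplify, convert, decimate) can be modelled as follows; the gain, bit depth, and decimation factor are illustrative assumptions, and the block-averaging decimator is only a crude stand-in for a real decimation filter:

```python
def amplify(samples, gain=4.0):
    """Third-unit sketch: programmable-gain amplification (gain is illustrative)."""
    return [s * gain for s in samples]

def adc(samples, bits=8, full_scale=1.0):
    """Fourth-unit sketch: clamp to full scale and quantize to signed integer codes."""
    max_code = (1 << (bits - 1)) - 1
    codes = []
    for s in samples:
        s = max(-full_scale, min(full_scale, s))
        codes.append(round(s / full_scale * max_code))
    return codes

def decimate(codes, factor=4):
    """Fifth-unit sketch: average each block of `factor` samples and keep one,
    lowering the sample rate for the downstream detection units."""
    return [sum(codes[i:i + factor]) / factor
            for i in range(0, len(codes) - factor + 1, factor)]
```

A real codec would use a proper anti-aliasing decimation filter rather than block averaging; the point here is only the order of the stages.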
Further, as shown in fig. 11, in one implementation manner of the present application, an input end of the third sound detection processing unit is configured to receive a sound signal collected by the electronic device, an output end of the third sound detection processing unit is connected to an input end of the fourth sound detection processing unit, an output end of the fourth sound detection processing unit is connected to an input end of the fifth sound detection processing unit, and output ends of the fifth sound detection processing unit are respectively connected to input ends of the first sound detection processing unit and the second sound detection processing unit.
In addition, as shown in fig. 11, the main control module and the sound detection processing module are connected through a first interface and a second interface, wherein the first interface is used for transmitting first information, and the second interface is used for transmitting first sound signals. Of course, the main control module and the sound detection processing module may be connected through other more or fewer interfaces.
In addition, the main control module comprises a first main control processing unit and a second main control processing unit, and the first main control processing unit is connected with the second main control processing unit. The first main control processing unit is used for receiving first information through the first interface, responding to the first information to be in an awakening state and sending second information to the second main control processing unit; the second main control processing unit is used for receiving the second information, responding to the second information, being in an awake state, and sending third information to the sound detection processing module so as to acquire the first sound signal from the sound detection processing module through the second interface.
Therefore, after the main control module is in the wake-up state, the main control module can acquire the sound signal from the sound detection processing module so as to process corresponding service, ensure the normal processing of the service and effectively promote the user experience.
In the sound processing device provided by the application, after the microphone in the electronic equipment collects a sound signal, the microphone sends it to the third sound detection processing unit through the corresponding interface. The third sound detection processing unit amplifies the sound signal and sends the amplified signal to the fourth sound detection processing unit. The fourth sound detection processing unit performs analog-to-digital conversion on the amplified signal to obtain a corresponding digital sound signal and sends it to the fifth sound detection processing unit. The fifth sound detection processing unit performs decimation filtering on the digital signal and sends the filtered sound signal to the first sound detection processing subunit and the second sound detection processing unit, respectively. The first sound detection processing subunit processes the sound signal to obtain a first target sound signal and sends it to the second sound detection processing subunit. If the second sound detection processing subunit determines that the first target sound signal includes a first type sound signal, it sends, for example, an interrupt signal (as an example of the first information) to the first main control processing unit in the main control module through the first interface to wake the main control module. If the sound signal does not include a first type sound signal, no interrupt signal is sent, that is, the main control module does not need to be awakened and remains in the dormant state.
After receiving the interrupt signal, the first main control processing unit switches from the sleep state to the awake state in response to it, sends notification information (as an example of the second information) to the second main control processing unit to wake it from the sleep state, and establishes a sound detection path corresponding to the second interface. Through this path, the second main control processing unit obtains the uplink data of the second sound detection processing unit, that is, the sound signal stored by that unit. The second main control processing unit may further acquire, through the second interface, all sound signals stored by the second sound detection processing unit after the main control module woke up; for example, it sends an acquisition request (as an example of the third information) to the second sound detection processing unit, which then returns the corresponding sound signals. After processing the acquired sound signals, the second main control processing unit sends them to the first main control processing unit so that the first main control processing unit can perform the corresponding service.
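The two-stage handshake described above can be sketched as follows; all names are illustrative, and the buffer stub stands in for the second sound detection processing unit:

```python
# Sketch of the two-stage wake handshake: the interrupt wakes the first main
# control processing unit, which wakes the second, which then drains buffered
# audio over the second interface. All names are illustrative.
class SoundBufferStub:
    """Stands in for the second sound detection processing unit's store."""
    def __init__(self, frames):
        self.frames = list(frames)

    def drain(self):
        out, self.frames = self.frames, []
        return out

class SecondMainControl:
    def __init__(self, sound_buffer):
        self.state = "sleep"
        self.sound_buffer = sound_buffer

    def on_second_information(self):
        self.state = "awake"
        # Third information: request the stored sound signal from the codec side.
        return self.sound_buffer.drain()

class FirstMainControl:
    def __init__(self, second):
        self.state = "sleep"
        self.second = second

    def on_interrupt(self):
        # First information: the interrupt from the sound detection processing module.
        self.state = "awake"
        return self.second.on_second_information()
```

One interrupt thus wakes both processing units in sequence and hands the buffered audio to the main control module in a single call chain.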
In this implementation, when there is no ultrasonic service to be executed, the main control module is in the dormant state, that is, the first main control processing unit and the second main control processing unit it includes are both dormant, and the main control module is awakened only when the sound detection processing module detects a first type sound signal. This further reduces the power consumption of the sound processing device and, in turn, of the mobile phone.
In addition, when the main control module is in the awake state, the sound detection processing module can also keep the main control module in the awake state through the first information, so that the main control module can process the service in time.
Further, in this implementation, the first sound detection processing subunit and the second sound detection processing subunit in the sound detection processing module cooperate to detect whether a first type sound signal exists in the sound signal. The power consumption of these two subunits is lower than that of the second main control processing unit, so the power consumption of the sound processing device, and hence of the mobile phone, is effectively reduced. This cooperation is a scheme in which the sound detection algorithm is hardened in the sound detection processing module. In this mode, when there is no ultrasonic service to be executed, the first main control processing unit and the second main control processing unit included in the main control module are in the dormant state, and only the circuit of the sound detection processing module consumes power.
In addition, in this implementation, while the sound detection processing module detects whether a first type sound signal exists in the sound signal, the received sound signal is stored in the second sound detection processing unit and, after the main control module wakes up, is sent to the main control module for service processing. This avoids the problem of sound signal loss, ensures normal processing of the service, and improves the user experience.
In the sound processing device provided by the implementations of the application, the sound detection algorithms corresponding to the various application scenarios are hardened in the sound detection processing module, so that the sound detection processing module can recognize whether the sound signal to be analyzed includes a preset type of sound signal, obtain a sound detection result, and adjust the working mode of the main control module to the awake state or the dormant state accordingly. Compared with the mode in which the main control module itself performs sound type detection and adjusts its own working mode according to the result, this scheme allows the main control module to stay dormant during detection and therefore consumes less power.
In other implementations of the application, the sound processing apparatus may include more or fewer modules, each module may include more or fewer units, and each unit may include more or fewer subunits, all of which may be selected and configured as needed to implement the functions described above.
Further, the application provides a chip system including the sound processing device. In addition, the application provides electronic equipment including the chip system.
Further, as shown in fig. 12, an embodiment of the present application provides an electronic apparatus including the foregoing sound processing device. Of course, the electronic device may also include other devices.
For example, referring to fig. 13, fig. 13 is a schematic structural diagram of an electronic device 900 according to an implementation of the present application. The electronic device 900 may include one or more processors 901 coupled to a controller hub 904. In at least one implementation, the controller hub 904 communicates with the processor 901 via a multi-drop bus such as a front side bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection. The processor 901 executes instructions that control data processing operations of a general type. In one implementation, the controller hub 904 includes, but is not limited to, a graphics memory controller hub (GMCH) (not shown) and an input/output hub (IOH) (which may be on separate chips) (not shown), where the GMCH includes memory and graphics controllers and is coupled to the IOH.
The electronic device 900 may also include a coprocessor 906 and a memory 902 coupled to the controller hub 904. Alternatively, one or both of the memory 902 and the GMCH may be integrated within the processor 901, in which case the memory 902 and the coprocessor 906 are coupled directly to the processor 901, and the controller hub 904 and the IOH are in a single chip.
The memory 902 may be, for example, dynamic random access memory (Dynamic Random Access Memory, DRAM), phase change memory (Phase Change Memory, PCM), or a combination of both.
In one implementation, the coprocessor 906 is a special-purpose processor, such as a high-throughput many integrated core (MIC) processor, a network or communication processor, a compression engine, a graphics processor, a general-purpose graphics processing unit (GPGPU), or an embedded processor. The optional nature of the coprocessor 906 is shown in fig. 13 with dashed lines.
In one implementation, the electronic device 900 may further include a network interface (Network Interface Card, NIC) 903. The network interface 903 may include a transceiver to provide a radio interface for the electronic device 900 to communicate with any other suitable device (e.g., front end module, antenna, etc.). In various implementations, the network interface 903 may be integrated with other components of the electronic device 900. The network interface 903 may implement the functionality of the communication unit in the above-described implementation.
The electronic device 900 may further include an input/output (I/O) device 905. The I/O device 905 may include: a user interface designed to enable a user to interact with the electronic device 900; a peripheral component interface designed to enable peripheral components to interact with the electronic device 900; and/or sensors designed to determine environmental conditions and/or location information associated with the electronic device 900.
It is noted that fig. 13 is merely exemplary. That is, although fig. 13 shows the electronic device 900 as including multiple devices such as the processor 901, the controller hub 904, and the memory 902, in practical applications a device using the methods of the present application may include only some of these devices, for example only the processor 901 and the NIC 903. The optional devices are shown in dashed lines in fig. 13.
One or more tangible, non-transitory computer-readable media for storing data and/or instructions may be included in the memory of the electronic device 900. The computer-readable storage medium has stored therein instructions, and in particular, temporary and permanent copies of the instructions.
In the present application, the electronic device 900 may be a terminal device such as a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA) or a desktop computer. The instructions stored in the memory of the electronic device may include: instructions that when executed by at least one unit in the processor cause the electronic device to implement the sound processing method as mentioned above.
Illustratively, fig. 14 is a schematic structural diagram of a SoC (System on Chip) 1000 provided according to an implementation of the present application. In fig. 14, similar parts have the same reference numerals. In addition, the dashed boxes are optional features of a more advanced SoC. The SoC 1000 may be used in any electronic device according to the present application, and may implement corresponding functions according to the device in which it is located and the instructions stored therein.
In fig. 14, the SoC 1000 includes: an interconnect unit 1002 coupled to the processor 1001; a system agent unit 1006; a bus controller unit 1005; an integrated memory controller unit 1003; a set of one or more coprocessors 1007, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random-access memory (SRAM) unit 1008; and a direct memory access (DMA) unit 1004. In one implementation, the coprocessor 1007 includes a special-purpose processor, such as a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, or an embedded processor.
One or more computer-readable media for storing data and/or instructions may be included in the SRAM unit 1008. The computer-readable storage medium may store instructions, in particular temporary and permanent copies of the instructions. The instructions may include: instructions that, when executed by at least one unit in the processor 1001, cause the electronic device to implement the sound processing method mentioned above.
The implementation mode of the application provides electronic equipment, which comprises: a memory for storing a computer program, the computer program comprising program instructions; and a processor for executing program instructions to cause the electronic device to perform the sound processing method as described above.
An implementation of the present application provides a computer-readable storage medium storing a computer program including program instructions that are executed by an electronic device to cause the electronic device to perform the foregoing sound processing method.
An implementation of the application provides a computer program product comprising a computer program to be run by an electronic device to cause the electronic device to perform the aforementioned sound processing method.
The terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some implementations, the features can be arranged in a different manner and/or order than shown in the illustrative drawings. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all implementations, and in some implementations, such features may not be included or may be combined with other features.
While the application has been shown and described with respect to certain implementations, it will be apparent to those of ordinary skill in the art that the foregoing is a further detailed description of the application in connection with specific implementations, and the practice of the application is not limited to these descriptions. Various changes in form and detail, including simple inferences or substitutions, may be made by those skilled in the art without departing from the spirit and scope of the present application.

Claims (12)

1. The sound processing device is characterized in that the sound processing device is applied to electronic equipment and comprises a main control module and a sound detection processing module, the main control module comprises a first main control processing unit and a second main control processing unit, the sound detection processing module is a coder-decoder, the sound detection processing module comprises a first sound detection processing unit and a second sound detection processing unit, the first sound detection processing unit comprises a first sound detection processing subunit and a second sound detection processing subunit, the first sound detection processing subunit comprises a filter, the second sound detection processing subunit comprises a sound type detection algorithm hardened in the second sound detection processing subunit, wherein,
the second sound detection processing unit is used for receiving a first sound signal and storing the first sound signal;
the first sound detection processing subunit is configured to receive the first sound signal, perform filtering processing on the first sound signal through the filter to obtain a first target sound signal, and send the first target sound signal to the second sound detection processing subunit, where the first target sound signal is a first frequency band signal in the first sound signal;
The second sound detection processing subunit is configured to receive the first target sound signal, perform sound type detection processing on the first target sound signal by using the sound type detection algorithm, determine a gain of the first target sound signal, determine whether the first target sound signal includes a first type sound signal according to the gain, generate first information if it is determined that the first target sound signal includes the first type sound signal, and send the first information to the first main control processing unit in the main control module;
the first main control processing unit is used for receiving the first information, responding to the first information, being in an awake state and sending second information to the second main control processing unit;
the second main control processing unit is used for receiving the second information, responding to the second information, being in an awake state, and sending third information to the second sound detection processing unit in the sound detection processing module so as to acquire the first sound signal from the second sound detection processing unit for performing a first service.
2. The sound processing apparatus of claim 1 wherein the first type of sound signal is an ultrasonic signal and the first sound detection processing subunit comprises a high pass filter or a band pass filter.
3. The sound processing apparatus of claim 1, wherein the second sound detection processing unit comprises a first-in first-out memory.
4. A sound processing apparatus according to any one of claims 1-3, wherein the main control module is connected to the sound detection processing module through a first interface and a second interface, wherein the first interface is used for transmitting the first information, and the second interface is used for transmitting the first sound signal.
5. A sound processing apparatus according to any one of claims 1 to 3, wherein,
the first information is an interrupt signal for adjusting the working state of the main control module; or
the first information is a notification signal including wake-up state identification information.
6. The sound processing apparatus of claim 1, wherein the sound detection processing module further comprises a third sound detection processing unit, a fourth sound detection processing unit, and a fifth sound detection processing unit, wherein,
The third sound detection processing unit is used for receiving a second sound signal, amplifying the second sound signal to obtain a first sound sub-signal, and sending the first sound sub-signal to the fourth sound detection processing unit, wherein the second sound signal is a sound signal acquired by the electronic equipment;
the fourth sound detection processing unit is configured to receive the first sound sub-signal, perform analog-to-digital conversion processing on the first sound sub-signal to obtain a second sound sub-signal, and send the second sound sub-signal to the fifth sound detection processing unit;
the fifth sound detection processing unit is configured to receive the second sound sub-signal, perform signal extraction filtering processing on the second sound sub-signal to obtain the first sound signal, and send the first sound signal to the first sound detection processing unit and the second sound detection processing unit respectively.
7. The sound processing apparatus of claim 6 wherein the third sound detection processing unit comprises a programmable gain amplifier, the fourth sound detection processing unit comprises an analog-to-digital converter, and the fifth sound detection processing unit comprises a signal decimation filter.
8. A sound processing apparatus according to any one of claims 1-3, wherein the main control module being configured to be in an awake state in response to the first information comprises:
the main control module is configured to switch from a dormant state to the awake state in response to the first information; or
the main control module is configured to remain in the awake state in response to the first information.
9. A sound processing apparatus according to any one of claims 1-3, wherein the main control module is a system on a chip.
10. An electronic device comprising a sound processing apparatus as claimed in any one of claims 1-9.
11. The sound processing method is characterized by being applied to electronic equipment, the electronic equipment comprises a sound processing device, the sound processing device comprises a main control module and a sound detection processing module, the main control module comprises a first main control processing unit and a second main control processing unit, the sound detection processing module is a coder-decoder, the sound detection processing module comprises a first sound detection processing unit and a second sound detection processing unit, the first sound detection processing unit comprises a first sound detection processing subunit and a second sound detection processing subunit, the first sound detection processing subunit comprises a filter, and the second sound detection processing subunit comprises a sound type detection algorithm hardened in the second sound detection processing subunit, and the method comprises:
the second sound detection processing unit receives a first sound signal and stores the first sound signal;
the first sound detection processing subunit receives the first sound signal, filters the first sound signal through the filter to obtain a first target sound signal, and sends the first target sound signal to the second sound detection processing subunit, wherein the first target sound signal is a signal in a first frequency band of the first sound signal;
the second sound detection processing subunit receives the first target sound signal, performs sound type detection on the first target sound signal through the sound type detection algorithm to determine a gain of the first target sound signal, determines, according to the gain, whether the first target sound signal comprises a first-type sound signal, generates first information when the first target sound signal comprises the first-type sound signal, and sends the first information to the first main control processing unit in the main control module;
the first main control processing unit receives the first information, is in an awake state in response to the first information, and sends second information to the second main control processing unit;
the second main control processing unit receives the second information, is in an awake state in response to the second information, and sends third information to the second sound detection processing unit in the sound detection processing module, so as to acquire the first sound signal from the second sound detection processing unit for performing a first service.
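The detection and wake-up chain recited in claim 11 can be sketched in code. The sketch below is purely illustrative: the gain computation, the threshold `GAIN_THRESHOLD`, the pass-through filter, and all class and function names are assumptions for illustration, not details disclosed by the patent.

```python
# Hypothetical sketch of the claim-11 pipeline: buffer the raw signal,
# band-limit it, run a sound-type detection on the filtered band, and
# release the buffered signal to the main control side only on a hit.
from dataclasses import dataclass, field

GAIN_THRESHOLD = 0.5  # assumed decision threshold on the computed gain


@dataclass
class Codec:
    """Stands in for the sound detection processing module (the codec)."""
    buffer: list = field(default_factory=list)  # second unit's signal store

    def bandpass(self, samples):
        # First subunit: the filter. A real codec would band-limit here;
        # this stand-in passes samples through unchanged.
        return list(samples)

    def detect(self, filtered):
        # Second subunit: hardened sound-type detection algorithm.
        # Stand-in "gain": mean absolute amplitude of the filtered band.
        gain = sum(abs(s) for s in filtered) / max(len(filtered), 1)
        return gain > GAIN_THRESHOLD  # True -> first-type sound present


def process(codec, samples):
    codec.buffer = list(samples)        # store the first sound signal
    filtered = codec.bandpass(samples)  # first target sound signal
    if codec.detect(filtered):          # "first information" is generated
        # Main control module: the first unit wakes, wakes the second unit,
        # which then fetches the buffered signal for the first service.
        return codec.buffer
    return None


print(process(Codec(), [0.9, -0.8, 0.7]))  # loud input -> buffered signal
print(process(Codec(), [0.01, -0.02]))     # quiet input -> None, no wake-up
```

The point of the two-stage structure is that only the low-power detection module examines every incoming signal; the main control processing units are woken only after a first-type sound has already been confirmed.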
12. The sound processing method according to claim 11, wherein being in the awake state comprises:
switching from a sleep state to the awake state; or
maintaining the awake state.
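Claim 12's definition of "being in the awake state" admits both outcomes of a wake-up message; a minimal sketch, with illustrative state names:

```python
# Either transition is "being in the awake state" under claim 12.
SLEEP, AWAKE = "sleep", "awake"


def on_wake_message(state):
    """State of a processing unit after it receives a wake-up message."""
    if state == SLEEP:
        return AWAKE  # switching from the sleep state to the awake state
    return AWAKE      # maintaining the awake state


print(on_wake_message(SLEEP))  # awake
print(on_wake_message(AWAKE))  # awake
```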
CN202310718953.0A 2023-06-16 2023-06-16 Sound processing device, sound processing method and electronic equipment Active CN116456441B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310718953.0A CN116456441B (en) 2023-06-16 2023-06-16 Sound processing device, sound processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116456441A 2023-07-18
CN116456441B 2023-10-31

Family

ID=87128890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310718953.0A Active CN116456441B (en) 2023-06-16 2023-06-16 Sound processing device, sound processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116456441B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031749A * 2019-12-09 2021-06-25 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electronic device
WO2022033574A1 * 2020-08-13 2022-02-17 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and apparatus for waking up device
WO2022068544A1 * 2020-09-29 2022-04-07 Huawei Technologies Co., Ltd. Voice wake-up method, electronic device, and chip system
CN111819533B * 2018-10-11 2022-06-14 Huawei Technologies Co., Ltd. Method for triggering an electronic device to execute a function, and electronic device
WO2022156438A1 * 2021-01-20 2022-07-28 Huawei Technologies Co., Ltd. Wakeup method and electronic device
CN114816026A * 2021-01-21 2022-07-29 Huawei Technologies Co., Ltd. Low-power-consumption standby method, electronic device, and computer-readable storage medium
WO2023029967A1 * 2021-08-31 2023-03-09 Huawei Technologies Co., Ltd. Audio playback method, and electronic device
CN115985323A * 2023-03-21 2023-04-18 Beijing Intengine Technology Co., Ltd. Voice wake-up method and device, electronic device, and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9992745B2 (en) * 2011-11-01 2018-06-05 Qualcomm Incorporated Extraction and analysis of buffered audio data using multiple codec rates each greater than a low-power processor rate
US20140006825A1 (en) * 2012-06-30 2014-01-02 David Shenhav Systems and methods to wake up a device from a power conservation state

Similar Documents

Publication Publication Date Title
CN110364151B (en) Voice awakening method and electronic equipment
KR102354275B1 (en) Speech recognition method and apparatus, and storage medium
CN108684029B (en) Bluetooth pairing connection method and system, Bluetooth device and terminal
CN107742523B (en) Voice signal processing method and device and mobile terminal
CN108597507A (en) Far field phonetic function implementation method, equipment, system and storage medium
CN108320751B (en) Voice interaction method, device, equipment and server
WO2021238354A1 (en) Sound leakage canceling method and electronic device
CN110572866B (en) Management method of wake-up lock and electronic equipment
CN114816026B (en) Low-power consumption standby method, electronic equipment and computer readable storage medium
WO2022262410A1 (en) Sound recording method and apparatus
CN112771828A (en) Audio data communication method and electronic equipment
CN113473013A (en) Display method and device for beautifying effect of image and terminal equipment
CN116795753A (en) Audio data transmission processing method and electronic equipment
WO2022022585A1 (en) Electronic device and audio noise reduction method and medium therefor
CN116456441B (en) Sound processing device, sound processing method and electronic equipment
US20210264923A1 (en) Audio system with digital microphone
CN114822525A (en) Voice control method and electronic equipment
WO2023124248A9 (en) Voiceprint recognition method and apparatus
CN109065042B (en) Electronic equipment and information processing method
CN112866867A (en) Low-power-consumption noise reduction method and device, readable storage medium and earphone
CN115617191A (en) Touch anomaly suppression method, electronic device and storage medium
WO2022052730A1 (en) Method and apparatus for repairing abnormal application exit, and electronic device
CN115103304A (en) Position information calling method and device
CN113162837B (en) Voice message processing method, device, equipment and storage medium
CN114245443A (en) Wake-up alignment method, system and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant