CN116030821A - Audio processing method, device, electronic equipment and readable storage medium

Info

Publication number: CN116030821A
Application number: CN202310305630.9A
Authority: CN (China)
Prior art keywords: audio processing, audio, audio signal, strategy, processing
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 鲁勇, 刘波, 刘海平, 梁健林
Current Assignee: Beijing Intengine Technology Co Ltd
Original Assignee: Beijing Intengine Technology Co Ltd
Application filed 2023-03-27 by Beijing Intengine Technology Co Ltd
Priority to CN202310305630.9A
Publication of CN116030821A on 2023-04-28
Abstract

The application discloses an audio processing method, an audio processing device, electronic equipment and a readable storage medium. The audio processing method includes: acquiring an audio signal; identifying an audio processing strategy corresponding to the current audio processing scene; when the audio processing strategy is identified as a preset strategy, masking a part of the audio signal; and outputting the processed audio signal. The audio processing scheme provided by the application can improve the flexibility of audio processing and thereby reduce power consumption.

Description

Audio processing method, device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of communications, and in particular, to an audio processing method, an audio processing device, an electronic device, and a readable storage medium.
Background
With the development of computer technology, particularly the availability of mass storage devices on PCs, digital processing of audio media has become possible. At the heart of digital processing is the sampling of audio information: by processing the collected samples, various effects can be achieved.
A filter is generally used to process the audio data; it can effectively pass the frequency point of a specific frequency, or reject the frequencies outside that frequency point, so as to obtain the signal component of the specific frequency or to eliminate it. However, in current audio processing schemes, the filter must perform a large amount of computation on the audio data for every audio processing scene before the audio signal is finally output, so these schemes consume considerable computing resources and have high power consumption.
Disclosure of Invention
To address the above technical problems, the present application provides an audio processing method, an audio processing device, an electronic device and a readable storage medium, which can improve the flexibility of audio processing and reduce power consumption.
In order to solve the above technical problems, the present application provides an audio processing method, including:
acquiring an audio signal;
identifying an audio processing strategy corresponding to the current audio processing scene;
when the audio processing strategy is identified as a preset strategy, masking a part of the audio signal;
outputting the processed audio signal.
Optionally, in some embodiments of the present application, when the audio processing policy is identified as a preset policy, masking a portion of the audio signal includes:
when the audio processing strategy is identified as a preset strategy, determining a filter coefficient corresponding to the audio processing strategy;
and masking a portion of the audio signal based on the filter coefficients and multiplier coefficients.
Optionally, in some embodiments of the present application, the masking the portion of the audio signal based on the filter coefficient and the multiplier coefficient includes:
determining a mask corresponding to the audio processing strategy;
masking the audio signal and the filter coefficients, respectively;
calculating the product of the masked audio signal and the masked filter coefficient to obtain a multiplier coefficient;
the outputting the processed audio signal includes: and outputting the processed audio signal based on the multiplier coefficient.
Optionally, in some embodiments of the present application, when the audio processing policy is identified as a preset policy, determining a filter coefficient corresponding to the audio processing policy includes:
when the audio processing strategy is identified as a voice awakening strategy, outputting a first filter coefficient corresponding to the voice awakening strategy;
and when the audio processing strategy is recognized as a voice recognition strategy, outputting a second filter coefficient corresponding to the voice recognition strategy.
Optionally, in some embodiments of the present application, further includes:
and when the audio processing strategy is not the preset strategy, processing the audio signal according to the current audio processing scene.
Correspondingly, the application also provides an audio processing device, which comprises:
the acquisition module is used for acquiring the audio signal;
the identification module is used for identifying an audio processing strategy corresponding to the current audio processing scene;
the processing module is used for masking a portion of the audio signal when the audio processing strategy is identified as a preset strategy;
and the output module is used for outputting the processed audio signals.
Optionally, in some embodiments of the present application, the processing module includes:
a determining unit, configured to determine a filter coefficient corresponding to the audio processing policy when the audio processing policy is identified as a preset policy;
and the processing unit is used for masking a portion of the audio signal based on the filter coefficient and the multiplier coefficient.
Optionally, in some embodiments of the present application, the processing unit is specifically configured to:
determining a mask corresponding to the audio processing strategy;
masking the audio signal and the filter coefficients, respectively;
calculating the product of the masked audio signal and the masked filter coefficient to obtain a multiplier coefficient;
the outputting the processed audio signal includes: and outputting the processed audio signal based on the multiplier coefficient.
The application also provides an electronic device comprising a memory storing a computer program and a processor implementing the steps of the method as described above when executing the computer program.
The present application also provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above.
As described above, the present application provides an audio processing method, apparatus, electronic device and readable storage medium. In the audio processing method, after the audio signal is acquired, an audio processing strategy corresponding to the current audio processing scene is identified; when the audio processing strategy is identified as a preset strategy, a portion of the audio signal is masked; and finally the processed audio signal is output. In the audio processing scheme provided by the application, a corresponding audio processing strategy can be selected according to the audio processing scene, part of the audio signal is masked when the strategy is identified as a preset strategy, and the processed audio signal is then output, so the flexibility of audio processing can be improved and the power consumption reduced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an audio processing system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an audio processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an audio processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an intelligent terminal provided in an embodiment of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings. Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element. Furthermore, elements having the same name in different embodiments of the present application may have the same meaning or different meanings; the particular meaning is determined by its interpretation in, or the context of, the specific embodiment.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present application, and are not of specific significance per se. Thus, "module," "component," or "unit" may be used in combination.
The embodiments related to the present application are specifically described below, and it should be noted that the order of description of the embodiments in the present application is not limited to the priority order of the embodiments.
The embodiment of the application provides an audio processing method, an audio processing device, a storage medium and an electronic device. Specifically, the audio processing method of the embodiment of the application may be performed by an electronic device, where the electronic device may be a terminal such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer) or a personal digital assistant (PDA, Personal Digital Assistant). The electronic device may further include a client, which may be an audio processing client or another client. The electronic device can be connected with a server in a wired or wireless manner; the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms.
For example, when the audio processing method runs on the electronic device, after the electronic device acquires the audio signal, it identifies the audio processing policy corresponding to the current audio processing scene; then, when it identifies that the audio processing policy is a preset policy, it masks a portion of the audio signal; and finally it outputs the processed audio signal.
Referring to fig. 1, fig. 1 is a schematic system diagram of an audio processing apparatus according to an embodiment of the present application. The system may include at least one electronic device 1000, at least one server or personal computer 2000. The electronic device 1000 held by the user may be connected to different servers or personal computers through a network. The electronic device 1000 may be an electronic device having computing hardware capable of supporting and executing software products corresponding to multimedia. In addition, the electronic device 1000 may also have one or more multi-touch sensitive screens for sensing and obtaining input from a user through touch or slide operations performed at multiple points of the one or more touch sensitive display screens. In addition, the electronic device 1000 may be connected to a server or a personal computer 2000 through a network. The network may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc. In addition, the different electronic devices 1000 may be connected to other embedded platforms or to a server, a personal computer, or the like using their own bluetooth network or hotspot network. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligent platforms.
The embodiment of the application provides an audio processing method which can be executed by electronic equipment. The electronic equipment comprises a touch display screen and a processor, wherein the touch display screen is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface. When a user operates the graphical user interface through the touch display screen, the graphical user interface can control local content of the electronic equipment by responding to a received operation instruction, and can also control content of a server side by responding to the received operation instruction. For example, the user-generated operational instructions acting on the graphical user interface include instructions for processing the initial audio signal, and the processor is configured to launch a corresponding application upon receiving the user-provided instructions. Further, the processor is configured to render and draw a graphical user interface associated with the application on the touch-sensitive display screen. A touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed simultaneously by a plurality of points on the screen. The user performs touch operation on the graphical user interface by using a finger, and when the graphical user interface detects the touch operation, the graphical user interface controls the graphical user interface of the application to display the corresponding operation.
According to the audio processing scheme, corresponding audio processing strategies can be output according to different audio processing scenes, when the audio processing strategies are identified to be preset strategies, masking processing is carried out on parts of audio signals, and finally, processed audio signals are output, so that the flexibility of audio processing can be improved, and power consumption is reduced.
The following will describe in detail. It should be noted that the following description order of embodiments is not a limitation of the priority order of embodiments.
An audio processing method, comprising: acquiring an audio signal; identifying an audio processing strategy corresponding to the current audio processing scene; when the audio processing strategy is identified as a preset strategy, masking a part of the audio signal; and outputting the processed audio signal.
Referring to fig. 2, fig. 2 is a flow chart of an audio processing method according to an embodiment of the present application. The specific flow of the audio processing method can be as follows:
101. An audio signal is acquired.
The audio signal is a signal representation of a mechanical wave; it is an information carrier in which the wavelength and intensity of the mechanical wave vary. According to the characteristics of the mechanical wave, it can be classified into regular signals and irregular signals. The audio signal may be collected by a sound sensor (such as a microphone) built into the electronic device; for example, the audio signal within a 10-minute period may be acquired, the audio signal within a 20-minute period may be acquired, or the audio signal of the current environment may be acquired continuously.
102. An audio processing policy corresponding to the current audio processing scene is identified.
Optionally, the present application adapts to different audio processing scenes by setting different audio processing strategies. The audio processing strategy corresponding to the current audio processing scene can be identified after the current audio processing scene is determined; specifically, the current audio processing scene may be recognized by a neural network, or the type of the current audio processing scene may be selected by the user, and this may be set according to the actual situation. When the audio processing strategy is identified as a preset strategy, step 103 is executed.
103. And when the audio processing strategy is identified as the preset strategy, masking the part of the audio signal.
In this application, when the audio processing policy is identified as the preset policy, the parameters of the filter may be adjusted, and the masking of a portion of the audio signal is achieved based on the adjusted parameters. That is, optionally, in some embodiments, the step of "masking the portion of the audio signal when the audio processing policy is identified as a preset policy" may specifically include:
(11) When the audio processing strategy is identified as a preset strategy, determining a filter coefficient corresponding to the audio processing strategy;
(12) And masking a portion of the audio signal based on the filter coefficients.
In this application, the voice wake-up policy and the voice recognition policy are determined as preset policies, that is, it can be understood that, in the voice wake-up scenario and the voice recognition scenario, the filter coefficients and the multiplier coefficients of the filter are adjusted.
Optionally, in some embodiments, the step of determining the filter coefficient and the multiplier coefficient corresponding to the audio processing policy when the audio processing policy is identified as the preset policy may specifically include:
(21) When the audio processing strategy is identified as a voice awakening strategy, outputting a first filter coefficient corresponding to the voice awakening strategy;
(22) And outputting a second filter coefficient corresponding to the voice recognition strategy when the audio processing strategy is recognized as the voice recognition strategy.
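The selection described in steps (21) and (22), where a first or second set of filter coefficients is output depending on the recognized strategy, could look roughly like the following C sketch. The strategy names, table names, and the idea of storing one coefficient table per strategy are assumptions for illustration only, not details taken from the application.

```c
#include <stdint.h>

#define NUM_TAPS 64

/* Illustrative strategy type; the names are assumptions. */
typedef enum { STRATEGY_REFERENCE, STRATEGY_WAKEUP, STRATEGY_RECOGNITION } audio_strategy_t;

/* Hypothetical coefficient tables (zero-filled placeholders in this sketch). */
static const int32_t first_coeffs[NUM_TAPS];      /* coefficients for the voice wake-up strategy     */
static const int32_t second_coeffs[NUM_TAPS];     /* coefficients for the voice recognition strategy */
static const int32_t reference_coeffs[NUM_TAPS];  /* coefficients for ordinary (reference) processing */

/* Return the coefficient set matching the identified strategy, as in steps (21)/(22). */
static const int32_t *select_coeffs(audio_strategy_t strategy)
{
    switch (strategy) {
    case STRATEGY_WAKEUP:      return first_coeffs;
    case STRATEGY_RECOGNITION: return second_coeffs;
    default:                   return reference_coeffs;
    }
}
```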
A filter is a filtering circuit composed of capacitors, inductors and resistors. It can effectively pass the frequency point of a specific frequency, or reject the frequencies outside that frequency point, so as to obtain the signal component of the specific frequency or to eliminate it.
The filter coefficients are also called the unit impulse response. A unit impulse is an idealized, infinitely short pulse whose integral over the time axis is 1; as its duration t tends toward zero its amplitude tends toward infinity, so in practice the filter is characterized by the set of coefficients that make up its unit impulse response.
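For context, this relationship can be written compactly in the standard discrete-time form (a textbook formulation, not an equation reproduced from the application): the filter coefficients h[k] are exactly the output produced when the input is a unit impulse, and filtering an input x[n] is a weighted sum of its most recent samples.

```latex
y[n] = \sum_{k=0}^{N-1} h[k]\, x[n-k],
\qquad
\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}
\quad\Longrightarrow\quad
x[n] = \delta[n] \;\Rightarrow\; y[n] = h[n].
```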
After the corresponding filter coefficients are determined, a portion of the audio signal is masked, the product of the masked audio signal and the masked filter coefficient is calculated to obtain the multiplier coefficient, and the processing of the audio signal is completed on this basis so that the processed audio signal can be output. That is, optionally, in some embodiments, the step of "masking the portion of the audio signal based on the filter coefficient and the multiplier coefficient" specifically includes:
(31) Determining a mask corresponding to the audio processing strategy;
(32) Masking the audio signal and the filter coefficients, respectively;
(33) And calculating the product of the masked audio signal and the masked filter coefficient to obtain a multiplier coefficient.
For example, when the audio signal Vn is a 24-bit signal Vn[23:0] and its filter coefficient is Fm, the multiplier output is M = Vn × Fm. In a voice wake-up scenario, M = {Vn[23:16], 0000_0000_0000_0000} × {Fm[23:16], 0000_0000_0000_0000}; that is, the lower 16 bits of the audio signal Vn are masked, its valid bits are [23:16], and the lower 16 bits of the filter coefficient are likewise masked to 0000_0000_0000_0000. In a voice recognition scenario, M = {Vn[23:8], 0000_0000} × {Fm[23:8], 0000_0000}; that is, the lower 8 bits of the audio signal Vn are masked, its valid bits are [23:8], and the lower 8 bits of the filter coefficient are likewise masked to 0000_0000.
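As a rough illustration of this masking step, the following C sketch clears the low-order bits of a 24-bit sample and of its filter coefficient before the product is formed. The scene and function names are illustrative assumptions; the masked widths (16 bits for wake-up, 8 bits for recognition, none otherwise) follow the worked example above, and this is a sketch rather than the literal implementation of the application.

```c
#include <stdint.h>

/* Illustrative scene type; the names are assumptions, not taken from the application. */
typedef enum { SCENE_REFERENCE, SCENE_WAKEUP, SCENE_RECOGNITION } audio_scene_t;

/* Number of low-order bits masked off in each scene, following the worked example. */
static int masked_bits(audio_scene_t scene)
{
    if (scene == SCENE_WAKEUP)      return 16;
    if (scene == SCENE_RECOGNITION) return 8;
    return 0;                        /* reference processing: nothing masked */
}

/* Clear the low 'drop' bits of a 24-bit value held in an int32_t. */
static int32_t mask_low(int32_t v, int drop)
{
    return (int32_t)(v & ~(((int32_t)1 << drop) - 1));
}

/* One masked multiply: both the sample Vn and the coefficient Fm are masked before
 * the product M = Vn * Fm is formed, so the low partial products of the multiplier
 * remain zero and toggle less, which is where the power saving comes from. */
static int64_t masked_mult(int32_t vn, int32_t fm, audio_scene_t scene)
{
    int drop = masked_bits(scene);
    return (int64_t)mask_low(vn, drop) * (int64_t)mask_low(fm, drop);
}
```

In a hardware filter the same effect is obtained by forcing the low input bits of the multiplier to zero, so the corresponding partial products never switch.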
A multiplier multiplies two binary numbers and is built from more basic adders; it can be implemented using a variety of computer arithmetic techniques. The multiplier is not only a basic unit for analog operations such as multiplication, division, exponentiation and root extraction, but is also widely used in electronic communication systems for modulation, demodulation, mixing, phase demodulation and automatic gain control; in addition, it can be used for filtering, waveform shaping, frequency control and similar purposes, so it is a functional circuit with a wide range of applications.
In addition, in some embodiments, the present application also adjusts the filter order. Assuming that reference audio processing requires tap = 64, i.e. filter coefficients F0-F63: during voice wake-up, F0-F15 = 0, F16-F47 are used for the calculation, and F48-F63 = 0; during voice recognition, F0-F7 = 0, F8-F55 are used for the calculation, and F56-F63 = 0; for reference audio processing, F0-F63 are all used for the calculation.
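The tap reduction described above can be sketched as a plain FIR loop that simply skips the taps forced to zero. The active ranges (16-47 for wake-up, 8-55 for recognition, 0-63 otherwise) follow the figures given in the text; the type and function names are assumptions for illustration, and in practice this would be combined with the masked multiply sketched earlier.

```c
#include <stdint.h>

#define NUM_TAPS 64

/* Illustrative scene type (assumed names, as in the previous sketch). */
typedef enum { SCENE_REFERENCE, SCENE_WAKEUP, SCENE_RECOGNITION } audio_scene_t;

/* One output sample of a 64-tap FIR filter with a scene-dependent active range;
 * taps outside [first, last] are defined as zero, so their multiplications are
 * skipped entirely, reducing the number of effective multiply operations. */
static int64_t fir_step(const int32_t x[NUM_TAPS],   /* delay line, newest sample first */
                        const int32_t f[NUM_TAPS],   /* coefficients F0..F63 */
                        audio_scene_t scene)
{
    int first = 0, last = NUM_TAPS - 1;               /* reference: F0-F63 all used */
    if (scene == SCENE_WAKEUP)      { first = 16; last = 47; }  /* F0-F15 = F48-F63 = 0 */
    if (scene == SCENE_RECOGNITION) { first = 8;  last = 55; }  /* F0-F7  = F56-F63 = 0 */

    int64_t acc = 0;
    for (int k = first; k <= last; ++k)
        acc += (int64_t)x[k] * (int64_t)f[k];          /* zeroed taps are never touched */
    return acc;
}
```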
According to the above, the audio processing method can adjust the precision of the multiplier for different application scenes; by effectively reducing the number of bits that toggle in the multiplier, the corresponding power consumption during voice wake-up and voice recognition is reduced. At the same time, the effective order of the filter can be adjusted, and reducing the number of effective multiplication operations further reduces the power consumption during voice wake-up and voice recognition.
104. Outputting the processed audio signal.
Optionally, in some embodiments, step "output processed audio signal" includes: the processed audio signal is output based on the multiplier coefficients.
Optionally, in some embodiments, the audio processing method provided in the present application may specifically further include: and when the audio processing strategy is not the preset strategy, processing the audio signal according to the current audio processing scene.
The above completes the audio processing flow of the present application.
As can be seen from the foregoing, the present application provides an audio processing method, after an audio signal is acquired, then, an audio processing policy corresponding to a current audio processing scene is identified, when the audio processing policy is identified as a preset policy, a part of the audio signal is subjected to covering processing, and finally, a processed audio signal is output.
In order to facilitate better implementation of the audio processing method, the application also provides an audio processing apparatus based on the above audio processing method. The meaning of the terms is the same as in the audio processing method described above, and specific implementation details may be found in the description of the method embodiments.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an audio processing apparatus provided in the present application, where the audio processing apparatus may include an acquisition module 201, an identification module 202, a processing module 203, and an output module 204, and may specifically be as follows:
an acquisition module 201, configured to acquire an audio signal.
The audio signal is a signal representation of a mechanical wave; it is an information carrier in which the wavelength and intensity of the mechanical wave vary. According to the characteristics of the mechanical wave, it can be classified into regular signals and irregular signals. The audio signal may be collected by a sound sensor (such as a microphone) built into the electronic device; for example, the audio signal within a 10-minute period may be acquired, the audio signal within a 20-minute period may be acquired, or the audio signal of the current environment may be acquired continuously.
The identifying module 202 is configured to identify an audio processing policy corresponding to a current audio processing scene.
Optionally, the present application adapts to different audio processing scenes by setting different audio processing strategies. The audio processing strategy corresponding to the current audio processing scene can be identified after the current audio processing scene is determined; specifically, the current audio processing scene may be recognized by a neural network, or the type of the current audio processing scene may be selected by the user, and this may be set according to the actual situation.
And the processing module 203 is configured to mask a portion of the audio signal when the audio processing policy is identified as a preset policy.
In this application, when the audio processing policy is identified as a preset policy, the parameters of the filter may be adjusted, and the masking of a portion of the audio signal is achieved based on the adjusted parameters. That is, optionally, in some embodiments, the processing module 203 may specifically include:
and the determining unit is used for determining the filter coefficient corresponding to the audio processing strategy when the audio processing strategy is identified as the preset strategy.
And a processing unit for masking a portion of the audio signal based on the filter coefficients and the multiplier coefficients.
Alternatively, in some embodiments, the processing unit may be specifically configured to: determine a mask corresponding to the audio processing strategy; mask the audio signal and the filter coefficients, respectively; and calculate the product of the masked audio signal and the masked filter coefficient to obtain a multiplier coefficient.
Alternatively, in some embodiments, the processing unit may be specifically configured to: when the audio processing strategy is identified as a voice awakening strategy, output a first filter coefficient corresponding to the voice awakening strategy; and when the audio processing strategy is recognized as a voice recognition strategy, output a second filter coefficient corresponding to the voice recognition strategy.
The output module 204 is configured to output the processed audio signal.
Alternatively, in some embodiments, the output module 204 may be specifically configured to: the processed audio signal is output based on the multiplier coefficients.
Alternatively, in some embodiments, the output module 204 may be specifically configured to: when the audio processing strategy is not the preset strategy, process the audio signal according to the current audio processing scene.
The above completes the audio processing flow of the present application.
As can be seen from the foregoing, the present application provides an audio processing apparatus: after the acquiring module 201 acquires an audio signal, the identifying module 202 identifies an audio processing policy corresponding to the current audio processing scene; when the processing module 203 identifies that the audio processing policy is a preset policy, it masks a portion of the audio signal; and finally the output module 204 outputs the processed audio signal.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
The embodiment of the present invention further provides an electronic device 500, as shown in fig. 4, where the electronic device 500 may integrate the above-mentioned audio processing apparatus, and may further include a Radio Frequency (RF) circuit 501, a memory 502 including one or more computer readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a wireless fidelity (WiFi, wireless Fidelity) module 507, a processor 508 including one or more processing cores, and a power supply 509. Those skilled in the art will appreciate that the electronic device 500 structure shown in fig. 4 is not limiting of the electronic device 500 and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein:
the RF circuit 501 may be configured to receive and send information or signals during a call, and in particular, after receiving downlink information of a base station, the downlink information is processed by one or more processors 508; in addition, data relating to uplink is transmitted to the base station. Typically, RF circuitry 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity Module (SIM, subscriberIdentity Module) card, a transceiver, a coupler, a low noise amplifier (LNA, low NoiseAmplifier), a duplexer, and the like. In addition, RF circuitry 501 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol including, but not limited to, global system for mobile communications (GSM, global Systemof Mobile communication), universal packet Radio Service (GPRS, generalPacket Radio Service), code division multiple access (CDMA, code DivisionMultiple Access), wideband code division multiple access (WCDMA, wideband CodeDivision Multiple Access), long term evolution (LTE, long TermEvolution), email, short message Service (SMS, shortMessaging Service), and the like.
The memory 502 may be used to store software programs and modules, and the processor 508 executes the software programs and modules stored in the memory 502 to perform various functional applications and information processing. The memory 502 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, a target data playing function, etc.), and the like; the storage data area may store data (such as audio signals, phonebooks, etc.) created according to the use of the electronic device 500, and the like. In addition, memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide access to the memory 502 by the processor 508 and the input unit 503.
The input unit 503 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one particular embodiment, the input unit 503 may include a touch-sensitive surface, as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations thereon or thereabout by a user (e.g., operations thereon or thereabout by a user using any suitable object or accessory such as a finger, stylus, etc.), and actuate the corresponding connection means according to a predetermined program. Alternatively, the touch-sensitive surface may comprise two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 508, and can receive commands from the processor 508 and execute them. In addition, touch sensitive surfaces may be implemented in a variety of types, such as resistive, capacitive, infrared, and surface acoustic waves. The input unit 503 may comprise other input devices besides a touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 504 may be used to display information entered by a user or provided to a user, as well as various graphical user interfaces of the electronic device 500, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may overlay the display panel; upon detecting a touch operation on or near it, the touch-sensitive surface passes the operation to the processor 508 to determine the type of touch event, and the processor 508 then provides a corresponding visual output on the display panel based on the type of touch event. Although in fig. 4 the touch-sensitive surface and the display panel are implemented as two separate components for input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement the input and output functions.
The electronic device 500 may also include at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or backlight when the electronic device 500 is moved to the ear. As one of the motion sensors, the gravitational acceleration sensor may detect the acceleration in each direction (generally, three axes), and may detect the gravity and direction when stationary, and may be used for applications of recognizing the gesture of a mobile phone (such as horizontal/vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer, and knocking), and other sensors such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, which may be further configured in the electronic device 500, will not be described herein.
The audio circuitry 506, a speaker, and a microphone may provide an audio interface between the user and the electronic device 500. The audio circuit 506 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data; after the audio data is processed by the processor 508, it is sent via the RF circuit 501 to, for example, another electronic device 500, or the audio data is output to the memory 502 for further processing. The audio circuitry 506 may also include an earphone jack to provide communication between a peripheral earphone and the electronic device 500.
WiFi belongs to a short-distance wireless transmission technology, and the electronic equipment 500 can help a user to send and receive emails, browse webpages, access streaming media and the like through the WiFi module 507, so that wireless broadband Internet access is provided for the user. Although fig. 4 shows a WiFi module 507, it is understood that it does not belong to the necessary constitution of the electronic device 500, and may be omitted entirely as needed within a range that does not change the essence of the invention.
The processor 508 is a control center of the electronic device 500, connects various parts of the entire handset using various interfaces and lines, and performs various functions of the electronic device 500 and processes data by running or executing software programs and/or modules stored in the memory 502, and invoking data stored in the memory 502, thereby performing overall monitoring of the handset. Optionally, the processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 508.
The electronic device 500 also includes a power supply 509 (e.g., a battery) for powering the various components, which may be logically connected to the processor 508 via a power management system that performs functions such as managing charge, discharge, and power consumption. The power supply 509 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power data indicator, and the like.
Although not shown, the electronic device 500 may further include a camera, a bluetooth module, etc., which will not be described herein. In particular, in this embodiment, the processor 508 in the electronic device 500 loads executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 508 executes the application programs stored in the memory 502, so as to implement various functions:
acquiring an audio signal; identifying an audio processing strategy corresponding to the current audio processing scene; when the audio processing strategy is identified as a preset strategy, masking a part of the audio signal; and outputting the processed audio signal.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and the portions of an embodiment that are not described in detail in the foregoing embodiments may be referred to in the detailed description of the audio processing method, which is not repeated herein.
As can be seen from the above, the electronic device 500 according to the embodiment of the present invention may output the corresponding audio processing policy according to different audio processing scenarios, and when it is identified that the audio processing policy is the preset policy, mask a portion of the audio signal, and finally output the processed audio signal, thereby improving the flexibility of audio processing and reducing power consumption.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application also provide a storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor to perform the steps in the above-described audio processing method.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or the like.
The instructions stored in the storage medium may perform steps in any audio processing method provided by the embodiments of the present invention, so that the beneficial effects that any audio processing method provided by the embodiments of the present invention can be achieved, which are detailed in the previous embodiments and are not described herein.
The foregoing describes in detail the audio processing method, apparatus, system and storage medium provided by the embodiments of the present invention, and specific examples are applied to illustrate the principles and embodiments of the present invention, and the above description of the embodiments is only for helping to understand the method and core idea of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in light of the ideas of the present invention, the present description should not be construed as limiting the present invention.

Claims (10)

1. An audio processing method, comprising:
acquiring an audio signal;
identifying an audio processing strategy corresponding to the current audio processing scene;
when the audio processing strategy is identified as a preset strategy, masking a part of the audio signal;
outputting the processed audio signal.
2. The method of claim 1, wherein masking the portion of the audio signal when the audio processing policy is identified as a preset policy comprises:
when the audio processing strategy is identified as a preset strategy, determining a filter coefficient corresponding to the audio processing strategy;
and masking a portion of the audio signal based on the filter coefficients and multiplier coefficients.
3. The method of claim 2, wherein masking the portion of the audio signal based on the filter coefficients and multiplier coefficients comprises:
determining a mask corresponding to the audio processing strategy;
masking the audio signal and the filter coefficients, respectively;
calculating the product of the masked audio signal and the masked filter coefficient to obtain a multiplier coefficient;
the outputting the processed audio signal includes: and outputting the processed audio signal based on the multiplier coefficient.
4. The method of claim 2, wherein determining the filter coefficients corresponding to the audio processing policy when the audio processing policy is identified as a preset policy comprises:
when the audio processing strategy is identified as a voice awakening strategy, outputting a first filter coefficient corresponding to the voice awakening strategy;
and when the audio processing strategy is recognized as a voice recognition strategy, outputting a second filter coefficient corresponding to the voice recognition strategy.
5. The method according to any one of claims 1 to 4, further comprising:
and when the audio processing strategy is not the preset strategy, processing the audio signal according to the current audio processing scene.
6. An audio processing apparatus, comprising:
the acquisition module is used for acquiring the audio signal;
the identification module is used for identifying an audio processing strategy corresponding to the current audio processing scene;
the processing module is used for masking a portion of the audio signal when the audio processing strategy is identified as a preset strategy;
and the output module is used for outputting the processed audio signals.
7. The apparatus of claim 6, wherein the processing module comprises:
a determining unit, configured to determine a filter coefficient corresponding to the audio processing policy when the audio processing policy is identified as a preset policy;
and the processing unit is used for masking a portion of the audio signal based on the filter coefficient and the multiplier coefficient.
8. The apparatus according to claim 7, wherein the processing unit is specifically configured to:
determining a mask corresponding to the audio processing strategy;
masking the audio signal and the filter coefficients, respectively;
calculating the product of the masked audio signal and the masked filter coefficient to obtain a multiplier coefficient;
the outputting the processed audio signal includes: and outputting the processed audio signal based on the multiplier coefficient.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the audio processing method of any of claims 1 to 5 when the computer program is executed.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the audio processing method according to any of claims 1 to 5.
Priority Applications (1)

Application number: CN202310305630.9A
Priority date: 2023-03-27
Filing date: 2023-03-27
Title: Audio processing method, device, electronic equipment and readable storage medium
Status: Pending

Publications (1)

Publication number: CN116030821A
Publication date: 2023-04-28

Family

ID: 86076293

Country Status (1)

Country: CN
Publication: CN116030821A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301596B1 (en) * 1999-04-01 2001-10-09 Ati International Srl Partial sum filter and method therefore
US20020010728A1 (en) * 2000-06-20 2002-01-24 Stoye Robert William Processor for FIR filtering
CN109343902A (en) * 2018-09-26 2019-02-15 Oppo广东移动通信有限公司 Operation method, device, terminal and the storage medium of audio processing components
CN109817236A (en) * 2019-02-01 2019-05-28 安克创新科技股份有限公司 Audio defeat method, apparatus, electronic equipment and storage medium based on scene
CN110580897A (en) * 2019-08-23 2019-12-17 Oppo广东移动通信有限公司 audio verification method and device, storage medium and electronic equipment
CN112992153A (en) * 2021-04-27 2021-06-18 太平金融科技服务(上海)有限公司 Audio processing method, voiceprint recognition device and computer equipment
CN113194387A (en) * 2021-04-27 2021-07-30 北京小米移动软件有限公司 Audio signal processing method, audio signal processing device, electronic equipment and storage medium
US20210375274A1 (en) * 2020-05-29 2021-12-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Speech recognition method and apparatus, and storage medium
CN113921022A (en) * 2021-12-13 2022-01-11 北京世纪好未来教育科技有限公司 Audio signal separation method, device, storage medium and electronic equipment
WO2022097944A1 (en) * 2020-11-06 2022-05-12 삼성전자주식회사 Electronic device and audio signal processing method thereof
CN114495960A (en) * 2021-12-25 2022-05-13 浙江大华技术股份有限公司 Audio noise reduction filtering method, noise reduction filtering device, electronic equipment and storage medium
CN115113855A (en) * 2022-05-31 2022-09-27 腾讯科技(深圳)有限公司 Audio data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination