CN117714969A - Sound effect processing method, device and storage medium - Google Patents


Info

Publication number
CN117714969A
Authority
CN
China
Prior art keywords
audio data
processing
audio
post
sound effect
Prior art date
Legal status
Pending
Application number
CN202310849596.1A
Other languages
Chinese (zh)
Inventor
晏细猫
李孟鸽
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202310849596.1A
Publication of CN117714969A


Landscapes

  • Circuit For Audible Band Transducer (AREA)

Abstract

The application provides a sound effect processing method, a device, and a storage medium. An electronic device acquires audio data; the electronic device calls a sound effect post-processing algorithm in a sound effect post-processing algorithm module and performs sound effect processing on the audio data to obtain processed audio data, the sound effect post-processing algorithm module being located at the hardware abstraction layer of the operating system of the electronic device; the electronic device then outputs the processed audio data. By performing sound effect post-processing on the decoded audio data at the hardware abstraction layer, the method avoids relying on algorithm integration on the audio processor of the system-on-chip, realizes cooperative processing of software and hardware, and effectively reduces the complexity of sound effect post-processing.

Description

Sound effect processing method, device and storage medium
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular to a sound effect processing method, a sound effect processing device, and a storage medium.
Background
As electronic devices become lighter and thinner, their sound outlets are usually placed at the bottom of the device, which makes it difficult to guarantee the quality of the output sound. For this reason, the sound needs to be optimized by a sound effect post-processing algorithm to improve the output effect.
To improve the output sound, a sound effect post-processing algorithm is generally integrated on the audio processor (Audio Processor) of a system-on-chip (SoC); the decoded audio data is processed by that algorithm on the audio processor, and the processed audio data is then output.
However, because the sound effect post-processing algorithm runs on the system-on-chip, the algorithm must be integrated into the audio processor of that chip. On the one hand, algorithm integration on a system-on-chip is difficult; on the other hand, different system-on-chips correspond to different instruction sets, so the algorithm has to be re-integrated into the audio processor every time the system-on-chip is changed. The workload is large and tedious, and current sound effect processing is therefore complex.
Disclosure of Invention
The application provides a sound effect processing method, a sound effect processing device and a storage medium, and aims to reduce the complexity of sound effect processing.
In order to achieve the above purpose, the present application adopts the following technical scheme:
First aspect: the application provides a sound effect processing method, which comprises the following steps: the electronic device acquires audio data; the electronic device calls a sound effect post-processing algorithm in a sound effect post-processing algorithm module and performs sound effect processing on the audio data to obtain processed audio data, the sound effect post-processing algorithm module being located at the hardware abstraction layer of the operating system of the electronic device; the electronic device outputs the processed audio data.
In the embodiment of the application, the sound effect post-processing algorithm is placed in a software layer of the system rather than on a traditional system-on-chip, and the sound effect post-processing is performed on the audio data on the software side. As a result, the sound effect post-processing no longer depends on algorithm integration on the audio processor of the system-on-chip, cooperative processing of software and hardware is realized, and the complexity of sound effect post-processing is effectively reduced.
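As a rough, non-normative illustration of the three steps above (acquire audio data, post-process it in a module at the hardware abstraction layer, output the result), the following C++ sketch models the flow with hypothetical types; PcmBuffer, AudioPostProcessingModule, acquireAudioData, and outputAudio are illustrative names, not interfaces disclosed by the application.
```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical PCM buffer: interleaved 16-bit samples (illustrative, not the application's format).
struct PcmBuffer {
    std::vector<int16_t> samples;
    int sampleRate = 48000;
    int channels = 2;
};

// Stand-in for the sound effect post-processing algorithm module that the application
// places at the hardware abstraction layer (HAL) rather than on the SoC audio processor.
struct AudioPostProcessingModule {
    PcmBuffer process(const PcmBuffer& in) {
        PcmBuffer out = in;
        // One or more sound effect post-processing algorithms would run here.
        return out;
    }
};

// Stubs standing in for the acquisition and output stages (assumed, for illustration only).
PcmBuffer acquireAudioData() { return PcmBuffer{std::vector<int16_t>(9600, 0), 48000, 2}; }
void outputAudio(const PcmBuffer& b) { std::printf("output %zu samples\n", b.samples.size()); }

int main() {
    PcmBuffer audio = acquireAudioData();            // step 1: acquire audio data
    AudioPostProcessingModule halModule;             // module located at the HAL (ARM side)
    PcmBuffer processed = halModule.process(audio);  // step 2: sound effect post-processing
    outputAudio(processed);                          // step 3: output the processed audio
    return 0;
}
```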
In one possible implementation manner, the electronic device invokes an audio post-processing algorithm in an audio post-processing algorithm module to perform audio processing on the audio data to obtain processed audio data, including: the electronic equipment decodes the audio data to obtain decoded audio data; and calling an audio post-processing algorithm in the audio post-processing algorithm module, and performing audio processing on the decoded audio data to obtain processed audio data.
In one possible implementation, the electronic device decodes the audio data to obtain decoded audio data, including: the electronic equipment performs hard decoding on the audio data in a hard decoder of a system-level chip to obtain hard decoded audio data; invoking an audio post-processing algorithm in the audio post-processing algorithm module to perform audio processing on the decoded audio data to obtain processed audio data, including: and calling an audio post-processing algorithm in the audio post-processing algorithm module, and performing audio processing on the hard decoded audio data to obtain processed audio data.
In one possible implementation, the electronic device decodes the audio data to obtain decoded audio data, including: the electronic equipment utilizes a soft decoding module of an application program framework layer in an electronic equipment operating system to carry out soft decoding on the audio data to obtain soft decoded audio data; invoking an audio post-processing algorithm in the audio post-processing algorithm module to perform audio processing on the decoded audio data to obtain processed audio data, including: and calling an audio post-processing algorithm in the audio post-processing algorithm module, and performing audio processing on the soft decoded audio data to obtain processed audio data.
In one possible implementation manner, the electronic device invokes an audio post-processing algorithm in an audio post-processing algorithm module to perform audio processing on the audio data to obtain processed audio data, including: the electronic equipment calls a target sound effect post-processing algorithm from a plurality of sound effect post-processing algorithms in a sound effect post-processing algorithm module based on a control instruction, wherein the control instruction is used for indicating a target sound effect selected by a user, and the target sound effect post-processing algorithm corresponds to the target sound effect; and performing sound effect processing on the audio data based on the target sound effect post-processing algorithm to obtain processed audio data.
In this way, the corresponding target sound effect post-processing algorithm can be called to process the audio data based on the control instruction, so that audio meeting the user's requirement is obtained. The complexity of sound effect post-processing is reduced while the user's needs are met, improving the user experience.
In one possible implementation, the electronic device outputs the processed audio data, including: the electronic equipment performs basic sound effect processing on the processed audio data to obtain optimized audio data, wherein the basic sound effect processing comprises at least one of equalization processing, tone adjustment and volume adjustment; outputting the optimized audio data.
In one possible implementation, the electronic device obtains audio data, including: the electronic device obtains audio data from at least one of a television, a device connected via a high definition multimedia interface, a device connected via a coaxial output interface, and a multimedia processor.
In one possible implementation, the method further includes: performing power amplification on the processed audio data to obtain power amplified audio data; and outputting the audio data with amplified power.
Second aspect: the application provides an electronic device that includes a processor and a memory. The memory is used to store program code and transmit the program code to the processor; the processor is configured to execute, according to instructions in the program code, the steps of the sound effect processing method of the first aspect above.
Third aspect: the application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the sound effect processing method of the first aspect above.
Drawings
Fig. 1 is a schematic diagram of a sound effect post-processing hardware structure based on an audio processor;
Fig. 2 is a schematic diagram of a sound effect post-processing hardware structure based on an external digital signal processor;
Fig. 3 is a schematic diagram of a first sound effect processing scenario provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of a second sound effect processing scenario provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of a sound effect processing hardware structure according to an embodiment of the present application;
Fig. 6 is a schematic diagram of sound effect processing based on a soft decoding manner according to an embodiment of the present application;
Fig. 7 is a schematic diagram of sound effect processing based on a hard decoding manner according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terms first, second, third and the like in the description and in the claims and drawings are used for distinguishing between different objects and not for limiting the specified sequence.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
As electronic devices develop toward being lighter and thinner, electronic devices with smart screens in particular (such as mobile phones, tablet computers, video monitors, and indoor and outdoor large-screen advertising machines) are becoming thinner, and their sound outlets are usually designed at the bottom of the device, so the output sound quality is difficult to guarantee. For this reason, the sound needs to be optimized by a sound effect post-processing algorithm to improve the output effect. By way of example, sound effect post-processing algorithms may include a stereo expansion algorithm for widening the sound field, a human voice dialogue enhancement algorithm for enhancing vocal performance, a speaker transient distortion correction algorithm for reducing speaker trailing sound and enhancing clarity, and the like.
Currently, to improve the output sound, a sound effect post-processing algorithm is generally integrated on the audio processor (Audio Processor) of the system-on-chip. Fig. 1 is a schematic diagram of a sound effect post-processing hardware structure based on the audio processor. The system-on-chip carries an audio processor or a digital signal processor (Digital Signal Processor, DSP); taking the audio processor as an example, it comprises a decoder, a sound effect post-processing algorithm module, and an output module. The audio data is input to the decoder in the audio processor for decoding. The sound effect post-processing algorithm module performs sound effect post-processing on the decoded audio data based on the sound effect post-processing algorithm, and the post-processed audio data is sent to the power amplifier through the output module. The processed audio data is then output through the power amplifier (Power Amplifier, PA) and the speaker.
However, because the sound effect post-processing algorithm runs on the system-on-chip, the algorithm must be integrated into the audio processor of that chip. On the one hand, algorithm integration on a system-on-chip is difficult; on the other hand, different system-on-chips correspond to different instruction sets, so the algorithm integration has to be redone every time the system-on-chip is changed, which is a large and tedious workload. Meanwhile, the user cannot control the working progress or perform the algorithm integration independently, so existing sound effect processing is complex.
Based on this, the application provides a sound effect processing method in which the sound effect post-processing algorithm is placed in a software layer of the system, namely in the hardware abstraction layer of the system, rather than on the audio processor of the system-on-chip. The audio data is decoded; the decoded audio data is then input to a sound effect post-processing algorithm module located at the hardware abstraction layer of the operating system of the electronic device, where it undergoes sound effect post-processing based on a sound effect post-processing algorithm provided by that module, yielding the processed audio data. The processed audio data is called back to a basic sound effect processor in the system-on-chip, and the basic sound effect processor outputs it. By performing sound effect post-processing on the decoded audio data at the hardware abstraction layer, the method avoids relying on algorithm integration on the audio processor of the system-on-chip, realizes cooperative software-hardware processing, and effectively reduces the complexity of sound effect post-processing.
Meanwhile, compared with performing sound effect post-processing on an external digital signal processor, the sound effect processing method provided by the application reduces both the complexity and the cost of sound effect post-processing. Fig. 2 is a schematic diagram of a sound effect post-processing hardware structure based on an external digital signal processor.
Unlike the approach in fig. 1 of integrating the algorithm in the audio processor of the system-on-chip, the approach based on an external digital signal processor transmits the decoded audio data to a sound effect post-processing algorithm module in the external digital signal processor, and that module performs the sound effect post-processing based on the sound effect post-processing algorithm. The post-processed audio data is then likewise transmitted to the power amplifier and output through the power amplifier and the speaker. To realize this kind of sound effect processing, an external digital signal processor carrying the sound effect post-processing algorithm must be configured in addition to the system-on-chip, which increases the implementation cost. In this application, the sound effect post-processing algorithm runs at the software layer and no additional digital signal processor is needed, so the complexity of sound effect post-processing is reduced and the cost is lowered.
The following describes a scenario corresponding to the sound effect processing method provided in the embodiment of the present application with reference to fig. 3 to 4.
Fig. 3 shows a first sound effect processing scenario provided in an embodiment of the present application. The user may obtain audio data using a music playing application. The audio data is passed to the sound effect post-processing algorithm module located on the Advanced RISC Machine (ARM) processor side of the hardware abstraction layer of the electronic device. The audio data is processed based on a sound effect post-processing algorithm in that module to obtain processed audio data, which can then be output through the power amplifier and the speaker. When the user clicks the play button to play audio, the processed audio data is output.
As shown in fig. 4, which is a schematic diagram of a second sound effect processing scenario provided in an embodiment of the present application, the audio data can additionally be processed according to the user's selection on the basis of fig. 3, so as to obtain a sound effect that meets the user's needs and improve the user experience.
Specifically, the electronic device may obtain, i.e. acquire, audio data when the user uses a music playing application. The user clicks the sound effect setting button 1101 on the playback interface of the music playing application to enter the sound effect setting interface. In the sound effect setting interface, the user may select different sound effects, for example smart, stereo, human voice enhancement, and clarity enhancement, as shown in the sound effect setting interface in fig. 4. The stereo option corresponds to a stereo expansion algorithm, the human voice enhancement option corresponds to a human voice dialogue enhancement algorithm, and the clarity enhancement option corresponds to a speaker transient distortion correction algorithm. If the smart option is selected, the electronic device may select, based on the audio data, a sound effect post-processing algorithm suitable for that audio data from a plurality of sound effect post-processing algorithms.
Taking the stereo option as an example, when the user selects the stereo option on the sound effect setting interface, the electronic device may generate a corresponding control instruction based on the user's operation. The control instruction indicates the target sound effect selected by the user, that is, it instructs the device to invoke the sound effect post-processing algorithm corresponding to the stereo option to perform sound effect processing on the audio data.
After the audio data is decoded to obtain decoded audio data, the decoded audio data and the control instruction are transmitted to the sound effect post-processing algorithm module on the ARM side. Based on the control instruction, a target sound effect post-processing algorithm is called from the plurality of sound effect post-processing algorithms in the module; the target algorithm corresponds to the stereo option selected by the user, that is, the stereo expansion algorithm is called. Sound effect processing is performed on the decoded audio data based on the stereo expansion algorithm to obtain processed audio data, which can then be output through the power amplifier and the speaker.
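To make the stereo expansion step more concrete, the sketch below shows a generic mid/side widening pass over interleaved 16-bit stereo PCM in C++. This is a standard technique given only for illustration; the application does not disclose its actual stereo expansion algorithm, and the width parameter is an assumed tuning value.
```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Generic mid/side stereo widening over interleaved 16-bit stereo PCM.
// Illustrative only; not the application's disclosed stereo expansion algorithm.
void widenStereo(std::vector<int16_t>& interleaved, float width /* e.g. 1.5f */) {
    for (size_t i = 0; i + 1 < interleaved.size(); i += 2) {
        float left  = interleaved[i];
        float right = interleaved[i + 1];
        float mid   = 0.5f * (left + right);   // common (center) content
        float side  = 0.5f * (left - right);   // stereo (difference) content
        side *= width;                          // boosting the side signal widens the image
        float outL = mid + side;
        float outR = mid - side;
        interleaved[i]     = static_cast<int16_t>(std::clamp(outL, -32768.0f, 32767.0f));
        interleaved[i + 1] = static_cast<int16_t>(std::clamp(outR, -32768.0f, 32767.0f));
    }
}
```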
Because the sound effect post-processing algorithm is placed in the software layer and the sound effect processing is performed on the audio data there, the processing does not depend on algorithm integration on the audio processor of the system-on-chip; cooperative software-hardware processing is realized, and the complexity of sound effect post-processing is effectively reduced. At the same time, no additional digital signal processor needs to be configured, which reduces both the complexity and the cost of sound effect post-processing. The sound effect processing method provided in the embodiment of the present application is described below with reference to fig. 5 to 7. Fig. 5 is a schematic diagram of a sound effect processing hardware structure according to an embodiment of the present application.
First, the audio data is decoded to obtain decoded audio data.
The audio data is the data to be processed. It may, for example, come from a television (TV), a device connected through a high-definition multimedia interface (High Definition Multimedia Interface, HDMI), a device connected through a coaxial output interface (Sony/Philips Digital Interface, SPDIF), a multimedia processor (Multimedia Processor, MMP), or the like. By way of example, devices such as notebook computers, televisions, projectors, and game consoles may be interconnected through HDMI interfaces and HDMI cables.
Decoding methods can be divided into hard decoding and soft decoding. Hard decoding is hardware decoding, for example decoding high-definition video through the video acceleration function of a graphics card. Soft decoding is software decoding, which relies on the central processing unit (Central Processing Unit, CPU). The graphics processing unit (Graphics Processing Unit, GPU) or video processing unit (Video Processing Unit, VPU) used for hard decoding is better suited than the CPU used for soft decoding to handle large volumes of low-difficulty, repetitive work. Decoding converts the encoded (compressed) data back into an uncompressed digital stream that can be further processed before finally being converted into an analog signal for playback.
In the embodiment of the application, different decoding manners can be adopted for different types of audio data, and different decoding manners correspond to different decoding locations. Illustratively, in the present application, audio data may be input into the audio processor of the system-on-chip and hard decoded by a hard decoder in the audio processor to obtain hard-decoded audio data; alternatively, audio data may be soft decoded by the soft decoding module of the application framework layer (FWK) in the electronic device operating system to obtain soft-decoded audio data.
The electronic device then performs sound effect processing on the audio data based on the sound effect post-processing algorithm to obtain processed audio data. Specifically, the application places the sound effect post-processing algorithm in a software layer of the system, more precisely in the sound effect post-processing algorithm module of the hardware abstraction layer. The decoded audio data is processed at the software layer, decoupling the sound effect processing from the system-on-chip; therefore the sound effect processing no longer needs to rely on the system-on-chip for algorithm integration in the audio processor module, and the complexity of sound effect post-processing is reduced.
The sound effect post-processing algorithm module may be specifically located on the ARM side of the hardware abstraction layer, and on this basis, the embodiments of the present application are described.
The decoded audio data is input to the sound effect post-processing algorithm module on the ARM side of the hardware abstraction layer, where sound effect processing is performed on it to obtain the processed audio data. Specifically, the ARM side of the hardware abstraction layer comprises an input module, a sound effect post-processing algorithm module, and an output module.
The input module receives the decoded audio data and transmits it to the sound effect post-processing algorithm module. At least one sound effect post-processing algorithm is integrated in the sound effect post-processing algorithm module; the algorithm processes the audio data, that is, performs sound effect processing on the decoded audio data, so that the output audio meets the user's requirement.
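The split into an input module, a sound effect post-processing algorithm module, and an output module on the HAL ARM side could be modelled roughly as the C++ sketch below; the class and type names (HalEffectPipeline, EffectAlgorithm, Pcm) are assumptions made for illustration and are not the application's actual interfaces.
```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

using Pcm = std::vector<int16_t>;  // decoded (PCM) audio, interleaved samples

// A single sound effect post-processing algorithm (stereo expansion, human voice
// dialogue enhancement, transient distortion correction, ...) modelled as a callable.
using EffectAlgorithm = std::function<void(Pcm&)>;

// Hypothetical HAL ARM-side pipeline: input module -> algorithm module -> output module.
class HalEffectPipeline {
public:
    HalEffectPipeline(std::vector<EffectAlgorithm> algorithms,
                      std::function<void(const Pcm&)> outputModule)
        : algorithms_(std::move(algorithms)), outputModule_(std::move(outputModule)) {}

    // Input module: receives decoded audio data and drives the processing chain.
    void onDecodedAudio(Pcm frame) {
        for (auto& algo : algorithms_)   // algorithm module: at least one integrated algorithm
            algo(frame);
        outputModule_(frame);            // output module: return processed data toward the SoC
    }

private:
    std::vector<EffectAlgorithm> algorithms_;
    std::function<void(const Pcm&)> outputModule_;
};
```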
In the embodiment of the present application, the decoded audio data may be pulse code modulation (Pulse Code Modulation, PCM) audio data. Pulse-code-modulated audio data is an uncompressed audio data stream. Pulse code modulation samples, quantizes, and encodes a continuously varying analog signal to generate a digital signal, and offers high signal reconstruction quality; that is, by handling audio data as PCM, higher-quality audio data can be obtained, losses during transmission are reduced, and the sound quality of the audio data is improved.
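Since post-processing algorithms commonly operate on floating-point samples while a PCM stream is typically carried as interleaved 16-bit integers, a conversion step along the lines of the C++ sketch below is often used in practice; this detail is a general assumption, not something specified by the application.
```cpp
#include <cstdint>
#include <vector>

// Convert interleaved 16-bit PCM to float in [-1, 1] for processing, and back.
std::vector<float> pcm16ToFloat(const std::vector<int16_t>& in) {
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) out[i] = in[i] / 32768.0f;
    return out;
}

std::vector<int16_t> floatToPcm16(const std::vector<float>& in) {
    std::vector<int16_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        float s = in[i];
        if (s > 1.0f) s = 1.0f;    // clamp to avoid integer overflow on conversion
        if (s < -1.0f) s = -1.0f;
        out[i] = static_cast<int16_t>(s * 32767.0f);
    }
    return out;
}
```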
It should be noted that the present application does not specifically limit the types of sound effect post-processing algorithm. By way of example, the sound effect post-processing algorithms may include a stereo expansion algorithm for widening the sound field, a human voice dialogue enhancement algorithm for enhancing vocal performance, a speaker transient distortion correction algorithm for reducing speaker trailing sound and enhancing clarity, and the like. The decoded audio data may be processed with a single sound effect post-processing algorithm or with several of them in sequence. For example, the decoded audio data may first be processed with the stereo expansion algorithm to obtain data with a widened sound field, and that data may then be further processed with the human voice dialogue enhancement algorithm to improve the vocal performance, yielding the processed audio data.
The sound effect post-processing algorithm module transmits the processed audio data to the output module, the output module returns the processed audio data to the system-on-chip, and the system-on-chip transmits it to the power amplifier. The power amplifier is connected to the speaker; after performing power amplification on the processed audio data, it drives the speaker to output the audio.
In one possible implementation, in order to further improve the sound quality, the output module on the ARM side may call the processed audio data back to the basic sound effect processor of the audio processor in the system-on-chip, so that basic sound effect processing is performed on the processed audio data to obtain optimized audio data. Illustratively, the basic sound effect processing includes at least one of equalization (EQ), tone adjustment, and volume adjustment. The basic sound effect processor transmits the optimized audio data to the power amplifier, and the power amplifier amplifies its power and outputs it through the speaker.
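As a rough idea of what a basic sound effect processing stage can involve, the C++ sketch below applies a volume gain in decibels and one peaking equalizer band (an RBJ-style biquad) to mono float samples. It is a generic illustration only; the application's basic sound effect processor is a block inside the system-on-chip audio processor and its internals are not disclosed.
```cpp
#include <cmath>
#include <vector>

// Volume adjustment: apply a gain expressed in decibels.
void applyVolumeDb(std::vector<float>& x, float gainDb) {
    const float g = std::pow(10.0f, gainDb / 20.0f);
    for (float& s : x) s *= g;
}

// One peaking-EQ band (RBJ cookbook biquad), e.g. boost or cut around centerHz.
void applyPeakingEq(std::vector<float>& x, float sampleRate,
                    float centerHz, float gainDb, float q) {
    const float kPi = 3.14159265f;
    const float A = std::pow(10.0f, gainDb / 40.0f);
    const float w0 = 2.0f * kPi * centerHz / sampleRate;
    const float alpha = std::sin(w0) / (2.0f * q);
    const float b0 = 1.0f + alpha * A;
    const float b1 = -2.0f * std::cos(w0);
    const float b2 = 1.0f - alpha * A;
    const float a0 = 1.0f + alpha / A;
    const float a1 = b1;
    const float a2 = 1.0f - alpha / A;
    float x1 = 0.0f, x2 = 0.0f, y1 = 0.0f, y2 = 0.0f;  // filter state
    for (float& s : x) {
        const float in = s;
        const float out = (b0 * in + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0;
        x2 = x1; x1 = in;
        y2 = y1; y1 = out;
        s = out;
    }
}
```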
Soft decoding and hard decoding each have their own advantages. Hard decoding offers smooth playback and low power consumption, for example, and is therefore suitable for decoding audio data with a large data volume; soft decoding is not restricted by the format of the audio data and gives good quality. Different types of audio data can therefore be decoded in different ways.
Illustratively, hard decoding may be used for audio data from TV, HDMI, SPDIF, or PCM sources, while soft decoding may be used for MMP multimedia signal streams. In the sound effect processing method provided by the application, the audio data may be decoded by soft decoding, by hard decoding, or, when there are several pieces of audio data to be processed, by a combination of soft and hard decoding. The sound effect processing procedure based on soft decoding and the procedure based on hard decoding are described below with reference to fig. 6 and fig. 7, respectively.
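The routing rule described above (hard decoding for TV/HDMI/SPDIF/PCM sources, soft decoding for MMP streams) could be expressed as a simple selection function; the enumerations and names in this C++ sketch are assumptions for illustration.
```cpp
// Hypothetical audio sources from the description: TV, HDMI, SPDIF and PCM streams are
// routed to the SoC hard decoder, MMP multimedia streams to the framework-layer soft
// decoding module. Names and routing are illustrative only.
enum class AudioSource { TV, HDMI, SPDIF, PCM, MMP };

enum class DecodePath { HardDecoderOnSoc, SoftDecoderInFramework };

DecodePath selectDecodePath(AudioSource src) {
    switch (src) {
        case AudioSource::MMP:
            return DecodePath::SoftDecoderInFramework;  // CPU-based soft decoding
        default:
            return DecodePath::HardDecoderOnSoc;        // hardware decoding on the SoC
    }
}
```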
Fig. 6 is a schematic diagram of sound effect processing based on a soft decoding manner according to an embodiment of the present application. An audio playing application (Application) in the electronic device sends the audio data to the soft decoding module of the framework layer for soft decoding, obtaining soft-decoded audio data.
Because no communication connection is established between the framework layer and the hardware abstraction layer, the soft-decoded audio data is transmitted from the framework layer to the hardware abstraction layer (HAL) through the system-on-chip, so that sound effect processing can be performed on it in the sound effect post-processing algorithm module of the hardware abstraction layer. That is, the electronic device sends the soft-decoded audio data from the framework layer to the system-on-chip, and the system-on-chip forwards it to the hardware abstraction layer, which contains the sound effect post-processing algorithm module in which the sound effect post-processing algorithm is integrated.
The hardware abstraction layer is an interface layer between the operating system kernel and the hardware circuitry; its purpose is to abstract the hardware. It hides the hardware interface details of a specific platform and provides a virtual hardware platform for the operating system, so that the operating system is hardware-independent and can be ported to various platforms. Because software testing and hardware testing can each be completed on the basis of the hardware abstraction layer, the combination of software and hardware is achieved.
Specifically, the soft-decoded audio data is forwarded to the sound effect post-processing algorithm module in the hardware abstraction layer, which runs the sound effect post-processing algorithm and performs sound effect processing on the soft-decoded audio data.
In one possible implementation, the processor of the electronic device may call a target sound effect post-processing algorithm in the sound effect post-processing algorithm module based on a control instruction sent by the sound effect setting application, and perform sound effect processing on the soft-decoded audio data based on that target algorithm.
The sound effect setting application and the audio playing application are located in the same software layer, namely the application layer. The user of the electronic device may set a corresponding target sound effect through the sound effect setting application according to the desired audio effect; the target sound effect corresponds to the effect the user wishes to achieve.
Specifically, in response to the user's setting operation, the sound effect setting application sends a control instruction through the soft decoding module of the framework layer to the sound effect post-processing algorithm module of the hardware abstraction layer, so as to call the target sound effect post-processing algorithm in that module and perform sound effect processing on the soft-decoded audio data.
The control instruction indicates which target sound effect post-processing algorithm is to be called from the at least one sound effect post-processing algorithm in the module, the target algorithm corresponding to the target sound effect. For example, if the user wishes to enhance the vocal performance of the audio data, the target sound effect is a human voice enhancement effect, which corresponds to the human voice dialogue enhancement algorithm; the user sets the human voice enhancement effect as the target sound effect through the sound effect setting application, for instance by selecting the human voice enhancement option in the sound effect setting interface.
Accordingly, in response to the user's setting operation, the sound effect setting application sends a control instruction through the soft decoding module of the framework layer to the sound effect post-processing algorithm module of the hardware abstraction layer to call the human voice dialogue enhancement algorithm. In the hardware abstraction layer, the sound effect post-processing algorithm module then performs sound effect processing on the soft-decoded audio data based on the human voice dialogue enhancement algorithm, obtaining processed audio data with the human voice enhancement effect.
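The way a control instruction selects a target sound effect post-processing algorithm can be pictured as a small dispatch table keyed by the option chosen in the sound effect setting interface; the types in this C++ sketch (ControlInstruction, EffectAlgorithmModule, the TargetEffect enumeration) are hypothetical and the algorithm bodies are placeholders, since the application does not disclose their implementations.
```cpp
#include <functional>
#include <map>
#include <vector>

using Frame = std::vector<float>;                  // decoded audio samples (illustrative)
using EffectAlgorithm = std::function<void(Frame&)>;

// Target sound effects mirroring the sound effect setting interface: stereo, human voice
// enhancement, clarity enhancement (the smart option would pick an algorithm automatically).
enum class TargetEffect { Stereo, VoiceEnhancement, ClarityEnhancement };

// Hypothetical control instruction carrying the user's selection from the application
// layer down to the sound effect post-processing algorithm module in the HAL.
struct ControlInstruction {
    TargetEffect target;
};

class EffectAlgorithmModule {
public:
    EffectAlgorithmModule() {
        // Placeholder bodies; the real algorithms (stereo expansion, human voice dialogue
        // enhancement, speaker transient distortion correction) are not disclosed here.
        algorithms_[TargetEffect::Stereo]             = [](Frame&) { /* stereo expansion */ };
        algorithms_[TargetEffect::VoiceEnhancement]   = [](Frame&) { /* voice enhancement */ };
        algorithms_[TargetEffect::ClarityEnhancement] = [](Frame&) { /* transient correction */ };
    }

    // Invoke the target algorithm indicated by the control instruction on decoded audio.
    void process(const ControlInstruction& ctrl, Frame& decoded) {
        auto it = algorithms_.find(ctrl.target);
        if (it != algorithms_.end()) it->second(decoded);
    }

private:
    std::map<TargetEffect, EffectAlgorithm> algorithms_;
};
```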
In this way, the corresponding target sound effect post-processing algorithm can be called to process the audio data based on the user's setting operation, so that audio meeting the user's requirements is obtained; the complexity of sound effect post-processing is reduced while the user's needs are met, improving the user experience.
After the processed audio data is obtained, the sound effect post-processing algorithm module calls it back to the basic sound effect processor in the system-on-chip. The basic sound effect processor may transmit the processed audio data to the power amplifier, which amplifies it and outputs it through the speaker.
In one possible implementation, to further improve the sound quality, the base sound processor may perform base sound processing on the processed audio data, for example, performing Equalization (EQ) on the processed audio data, to obtain optimized audio data, before outputting the processed audio data. The basic sound effect processor transmits the optimized audio data to the power amplifier, and the power amplifier amplifies the power of the optimized audio data and outputs the amplified optimized audio data through the loudspeaker.
In summary, by placing the sound effect post-processing algorithm in the software layer and performing sound effect processing on the soft-decoded audio data there, the scheme does not depend on algorithm integration on the audio processor of the system-on-chip; cooperative software-hardware processing is realized and the complexity of sound effect post-processing is effectively reduced. At the same time, no additional digital signal processor needs to be configured, which reduces both the complexity and the cost of sound effect post-processing.
Fig. 7 is a schematic diagram of sound effect processing based on a hard decoding manner according to an embodiment of the present application. Unlike the soft decoding manner, the sound effect processing based on hard decoding decodes the audio data on the system-on-chip to obtain the decoded audio data.
Specifically, the system-on-chip acquires the audio data and hard decodes it in its hard decoding module to obtain the hard-decoded audio data.
The system-on-chip then sends the hard-decoded audio data to a processor on which the sound effect post-processing algorithm is loaded.
Specifically, the decoding module of the system-on-chip sends the hard-decoded audio data to the sound effect post-processing algorithm module in the hardware abstraction layer (HAL); the sound effect post-processing algorithm runs in the sound effect post-processing algorithm module on the ARM side of the hardware abstraction layer.
After the sound effect post-processing algorithm module receives the hard-decoded audio data, it performs sound effect processing on that data based on the sound effect post-processing algorithm in the module to obtain the processed audio data.
In one possible implementation, the ARM processor of the electronic device may call a target sound effect post-processing algorithm in the sound effect post-processing algorithm module based on a control instruction sent by the sound effect setting application, and perform sound effect processing on the decoded audio data based on that target algorithm. The sound effect setting application belongs to the application layer. The user of the electronic device may set a corresponding target sound effect through the sound effect setting application according to the desired audio effect, so that the processed audio data meets that desired effect.
Specifically, in response to the user's setting operation, the sound effect setting application sends a control instruction to the sound effect post-processing algorithm module of the hardware abstraction layer, so as to call the target sound effect post-processing algorithm in that module and perform sound effect processing on the decoded audio data. The control instruction indicates which target sound effect post-processing algorithm is to be called from the at least one algorithm in the module, the target algorithm corresponding to the target sound effect.
After the processed audio data is obtained, the sound effect post-processing algorithm module calls it back to the basic sound effect processor in the system-on-chip. The basic sound effect processor may transmit the processed audio data to the power amplifier, and the power amplifier performs power amplification on it and drives the speaker to output the power-amplified audio data.
In one possible implementation, to further improve the sound quality, the base sound processor may perform base sound processing on the processed audio data to obtain optimized audio data before outputting the processed audio data. The basic sound effect processor transmits the optimized audio data to the power amplifier, and the power amplifier amplifies the power of the optimized audio data and outputs the amplified data through the loudspeaker.
It should be noted that in the present application multiple types of audio data may undergo sound effect processing at the same time. The multiple types may include only audio data suitable for soft decoding, as shown in fig. 6; only audio data suitable for hard decoding, as shown in fig. 7; or audio data suitable for soft decoding together with audio data suitable for hard decoding. In the latter case, the specific processing combines the manners corresponding to fig. 6 and fig. 7, which is not repeated here.
In summary, in the embodiment of the application, the system-on-chip acquires the audio data and sends it to the processor of the electronic device; the processor performs sound effect processing on the audio data based on the sound effect post-processing algorithm to obtain the processed audio data and calls the processed audio data back to the system-on-chip, which outputs it. Because the sound effect post-processing algorithm is placed in the processor of the electronic device, there is no need to rely on the system-on-chip for algorithm integration, which effectively reduces the complexity of sound effect processing; at the same time, no external digital signal processor is needed, which effectively reduces the cost of sound effect processing.
In some embodiments, the electronic device may be a cell phone, tablet, desktop, laptop, notebook, ultra mobile personal computer (Ultra-mobile Personal Computer, UMPC), handheld computer, netbook, personal digital assistant (Personal Digital Assistant, PDA), wearable electronic device, smart watch, etc., and the specific form of the electronic device is not particularly limited in this application. In this embodiment, the structure of the electronic device may be shown in fig. 8, and fig. 8 is a schematic structural diagram of the electronic device according to the embodiment of the present application.
As shown in fig. 8, the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, an antenna, a wireless communication module 130, an audio processor 140, a speaker 140A, a receiver 140B, a microphone 140C, an earphone interface 140D, a power amplifier 140E, a hard decoder 140F, a basic sound effect processor 140G, a sensor module 150, keys 160, a display 170, and the like. The sensor module 150 may include a touch sensor 150A, a bone conduction sensor 150B, and the like.
It is to be understood that the configuration illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus. In other embodiments, the electronic device may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. For example, in the present application, the processor 110 may obtain audio data; the electronic equipment calls an audio post-processing algorithm in an audio post-processing algorithm module, performs audio processing on the audio data to obtain processed audio data, and the audio post-processing algorithm module is positioned at a hardware abstraction layer; the electronic device outputs the processed audio data.
The controller can be a neural center and a command center of the electronic device. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio processor 140 via an I2S bus to enable communication between the processor 110 and the audio processor 140. In some embodiments, the audio processor 140 may transmit audio signals to the wireless communication module 130 through the I2S interface to implement a function of answering a call through a bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio processor 140 and the wireless communication module 130 may be coupled by a PCM bus interface. In some embodiments, the audio processor 140 may also transmit audio signals to the wireless communication module 130 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the audio processor 140 may transmit an audio signal to the wireless communication module 130 through a UART interface, implementing a function of playing music through a bluetooth headset.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the display 170, the wireless communication module 130, the audio processor 140, the sensor module 150, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The wireless communication module 130 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. for application on an electronic device. The wireless communication module 130 may be one or more devices integrating at least one communication processing module. The wireless communication module 130 receives electromagnetic waves via an antenna, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 130 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via an antenna.
The electronic device implements display functions through the GPU, the display 170, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 170 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 170 is used to display images, videos, and the like. The display 170 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a quantum dot light-emitting diode (quantum dot light-emitting diode, QLED), or the like. In some embodiments, the electronic device may include 1 or N displays 170, where N is a positive integer greater than 1.
A series of graphical user interfaces (graphical user interface, GUI) may be displayed on the display 170 of the electronic device; these constitute the home screen of the electronic device. Generally, the size of the display 170 of an electronic device is fixed, and only limited controls can be displayed on it. A control is a GUI element; it is a software component contained in an application program that controls all the data processed by the application program and the interactive operations on that data. A user can interact with a control through direct manipulation to read or edit information of the application program. In general, controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121. For example, in the present embodiment, the processor 110 may call a target sound effect post-processing algorithm in the sound effect post-processing algorithm module by executing the control instruction stored in the internal memory 121 to perform sound effect processing on the decoded audio data based on the target sound effect post-processing algorithm.
The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device may implement audio functions through an audio processor 140, a speaker 140A, a receiver 140B, a microphone 140C, an earphone interface 140D, an application processor, and the like. Such as music playing, recording, etc.
The audio processor 140 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio processor 140 may also be used for encoding and decoding audio signals. In some embodiments, the audio processor 140 may be disposed in the processor 110, or some functional modules of the audio processor 140 may be disposed in the processor 110.
Speaker 140A, also known as a "horn," is used to convert audio electrical signals into sound signals. In the embodiment of the present application, the electronic device may output the processed audio data through the speaker 140A.
The receiver 140B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device is used to answer a call or listen to a voice message, the voice can be heard by placing the receiver 140B close to the ear.
The microphone 140C, also referred to as a "mic" or "mike", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 140C to input a sound signal into it. The electronic device may be provided with at least one microphone 140C. In other embodiments, the electronic device may be provided with two microphones 140C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device may be provided with three, four, or more microphones 140C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The earphone interface 140D is used to connect a wired earphone. The earphone interface 140D may be a USB interface, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The power amplifier 140E is configured to power amplify the audio data, so that the speaker 140A outputs the power amplified audio data.
The hard decoder 140F is configured to hard decode the audio data to obtain hard-decoded audio data.
The basic sound effect processor 140G is configured to perform basic sound effect processing on the processed audio data, for example, perform Equalization (EQ) on the processed audio data, to obtain optimized audio data, before outputting the processed audio data. The basic sound effect processor 140G transmits the optimized audio data to the power amplifier, and the power amplifier power-amplifies the optimized audio data and outputs the amplified optimized audio data through the speaker.
The touch sensor 150A is also referred to as a "touch panel". The touch sensor 150A may be disposed on the display 170, and the touch sensor 150A and the display 170 form a touchscreen, also referred to as a "touch screen". The touch sensor 150A is configured to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display 170. In other embodiments, the touch sensor 150A may also be disposed on the surface of the electronic device at a position different from that of the display 170.
In the embodiment of the present application, based on the touch sensor 150A, the user may select, in the sound effect setting interface, an option corresponding to the target sound effect through a touch operation.
The bone conduction sensor 150B may acquire a vibration signal. In some embodiments, the bone conduction sensor 150B may acquire a vibration signal of a vibrating bone of the human vocal-cord part. The bone conduction sensor 150B may also be in contact with the human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 150B may also be disposed in a headset to form a bone conduction headset. The audio processor 140 may parse out a voice signal based on the vibration signal, acquired by the bone conduction sensor 150B, of the vibrating bone of the vocal-cord part, so as to implement a voice function.
The keys 160 include a power key, volume keys, and the like. The keys 160 may be mechanical keys or touch keys. The electronic device may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device.
In addition, an operating system runs on the above components, for example, the iOS operating system developed by Apple Inc., the Android open-source operating system developed by Google Inc., or the Windows operating system developed by Microsoft Corporation. Application programs may be installed and run on the operating system.
The operating system of the electronic device may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of an electronic device is illustrated.
The layered architecture divides the operating system of the electronic device into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the operating system of the electronic device is the Android system. The Android system may be divided into four layers, which are, from top to bottom, an application (APP) layer, an application framework layer (FWK), a system library, and a kernel layer. Fig. 6 and Fig. 7 show part of the layers in the software structure of the electronic device.
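To make the placement of the sound effect post-processing algorithm module at the hardware abstraction layer more concrete, the following minimal C++ sketch models a playback path in which decoded audio data passes through a post-processing module before being handed to the output stage. The PostProcessor interface, the HalEffectModule class, and the decode/output stubs are hypothetical names introduced only for illustration and do not correspond to real Android interfaces.

// Hypothetical sketch: a post-processing module that conceptually sits at the
// hardware abstraction layer and is invoked between decoding and output.
// None of these types correspond to real AOSP interfaces.
#include <cstdint>
#include <cstdio>
#include <memory>
#include <utility>
#include <vector>

using Pcm = std::vector<float>;  // decoded PCM samples

// Interface that each sound effect post-processing algorithm implements.
class PostProcessor {
public:
    virtual ~PostProcessor() = default;
    virtual void process(Pcm& pcm) = 0;
};

// Example algorithm: a simple fixed gain (illustrative placeholder only).
class GainEffect : public PostProcessor {
public:
    explicit GainEffect(float gain) : gain_(gain) {}
    void process(Pcm& pcm) override {
        for (float& s : pcm) s *= gain_;
    }
private:
    float gain_;
};

// Stand-in for the sound effect post-processing algorithm module at the HAL.
class HalEffectModule {
public:
    void setAlgorithm(std::unique_ptr<PostProcessor> algo) { algo_ = std::move(algo); }
    void process(Pcm& pcm) { if (algo_) algo_->process(pcm); }
private:
    std::unique_ptr<PostProcessor> algo_;
};

// Stubs for the rest of the playback path.
Pcm decode(const std::vector<std::uint8_t>&) { return Pcm(480, 0.25f); }         // soft or hard decoding
void output(const Pcm& pcm) { std::printf("output %zu samples\n", pcm.size()); } // power amplifier / speaker

int main() {
    HalEffectModule halModule;
    halModule.setAlgorithm(std::make_unique<GainEffect>(1.2f));

    std::vector<std::uint8_t> encoded(1024, 0);  // pretend this came from a media source
    Pcm pcm = decode(encoded);                   // obtain decoded audio data
    halModule.process(pcm);                      // sound effect post-processing at the HAL
    output(pcm);                                 // output the processed audio data
    return 0;
}

Keeping the algorithm behind a small software interface of this kind is what allows it to be replaced or extended without integrating it on the audio processor of the system-level chip.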
Although the Android system is taken as an example for description, the basic principle of the embodiment of the present application is equally applicable to electronic devices based on iOS, Windows, and other operating systems.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A sound effect processing method, characterized by comprising:
the electronic equipment acquires audio data;
the electronic equipment calls an audio post-processing algorithm in an audio post-processing algorithm module, performs audio processing on the audio data to obtain processed audio data, and the audio post-processing algorithm module is located at a hardware abstraction layer of an operating system of the electronic equipment;
the electronic device outputs the processed audio data.
2. The method of claim 1, wherein the electronic device invokes an audio post-processing algorithm in an audio post-processing algorithm module to perform audio processing on the audio data to obtain processed audio data, comprising:
the electronic equipment decodes the audio data to obtain decoded audio data;
and calling an audio post-processing algorithm in the audio post-processing algorithm module, and performing audio processing on the decoded audio data to obtain processed audio data.
3. The method of claim 2, wherein the electronic device decoding the audio data to obtain decoded audio data, comprising:
the electronic equipment performs hard decoding on the audio data in a hard decoder of a system-level chip to obtain hard decoded audio data;
the calling an audio post-processing algorithm in the audio post-processing algorithm module to perform audio processing on the decoded audio data to obtain processed audio data comprises:
and calling an audio post-processing algorithm in the audio post-processing algorithm module, and performing audio processing on the hard decoded audio data to obtain processed audio data.
4. The method of claim 2, wherein the electronic device decoding the audio data to obtain decoded audio data, comprising:
the electronic equipment performs soft decoding on the audio data by using a soft decoding module at the application framework layer of the operating system of the electronic equipment, to obtain soft decoded audio data;
and the calling an audio post-processing algorithm in the audio post-processing algorithm module to perform audio processing on the decoded audio data to obtain processed audio data comprises:
and calling an audio post-processing algorithm in the audio post-processing algorithm module, and performing audio processing on the soft decoded audio data to obtain processed audio data.
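Claims 3 and 4 differ only in where the decoding takes place. The sketch below, in which every function name is a hypothetical stub, shows how a hard-decoding path and a soft-decoding path can both produce PCM data that is handed to the same hardware-abstraction-layer post-processing step.

// Hypothetical sketch: hard decoding (SoC hard decoder) and soft decoding
// (framework-layer software decoder) both yield PCM data handed to the same
// HAL-layer post-processing call. All names are illustrative stubs.
#include <cstdint>
#include <cstdio>
#include <vector>

using Pcm = std::vector<float>;

Pcm hardDecode(const std::vector<std::uint8_t>&) { return Pcm(480, 0.1f); }  // stands in for the SoC hard decoder
Pcm softDecode(const std::vector<std::uint8_t>&) { return Pcm(480, 0.1f); }  // stands in for a framework-layer decoder
void halPostProcess(Pcm& pcm) { for (float& s : pcm) s *= 1.1f; }            // stands in for the HAL module

int main(int argc, char**) {
    std::vector<std::uint8_t> encoded(1024, 0);
    const bool useHardDecoder = (argc > 1);  // in practice chosen from format / SoC capability
    Pcm pcm = useHardDecoder ? hardDecode(encoded) : softDecode(encoded);
    halPostProcess(pcm);  // identical post-processing step for either decoding path
    std::printf("processed %zu samples\n", pcm.size());
    return 0;
}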
5. The method of claim 1, wherein the electronic device invokes an audio post-processing algorithm in an audio post-processing algorithm module to perform audio processing on the audio data to obtain processed audio data, comprising:
the electronic equipment invokes a target sound effect post-processing algorithm from a plurality of sound effect post-processing algorithms in the sound effect post-processing algorithm module based on a control instruction, wherein the control instruction is used for indicating a target sound effect selected by a user, and the target sound effect post-processing algorithm corresponds to the target sound effect;
and performing sound effect processing on the audio data based on the target sound effect post-processing algorithm to obtain processed audio data.
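The selection described in claim 5 can be pictured as a lookup keyed by the control instruction. The following sketch is purely illustrative: the effect names, the registry, and the placeholder processing bodies are assumptions rather than part of the claimed method.

// Hypothetical sketch: a control instruction carrying the user's target sound
// effect selects one of several post-processing algorithms.
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

using Pcm = std::vector<float>;
using EffectAlgorithm = std::function<void(Pcm&)>;

int main() {
    // Registry of sound effect post-processing algorithms held by the module.
    std::map<std::string, EffectAlgorithm> algorithms = {
        {"standard",   [](Pcm&) { /* pass-through */ }},
        {"bass_boost", [](Pcm& p) { for (float& s : p) s *= 1.3f; }},  // placeholder processing
        {"voice",      [](Pcm& p) { for (float& s : p) s *= 0.9f; }},  // placeholder processing
    };

    // Control instruction indicating the target sound effect selected by the user.
    const std::string controlInstruction = "bass_boost";

    Pcm audio(480, 0.2f);
    auto it = algorithms.find(controlInstruction);
    if (it != algorithms.end()) {
        it->second(audio);  // run the target sound effect post-processing algorithm
    }
    std::printf("first sample after '%s': %f\n", controlInstruction.c_str(), audio[0]);
    return 0;
}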
6. The method of claim 1, wherein the electronic device outputting the processed audio data comprises:
the electronic equipment performs basic sound effect processing on the processed audio data to obtain optimized audio data, wherein the basic sound effect processing comprises at least one of equalization processing, tone adjustment and volume adjustment;
and outputting the optimized audio data.
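As an illustration of the volume-adjustment part of basic sound effect processing in claim 6, the sketch below applies a linear gain and clamps the samples to the valid range; the gain value and the buffer contents are arbitrary examples.

// Hypothetical sketch: volume adjustment as a linear gain with clamping to [-1, 1].
#include <algorithm>
#include <cstdio>
#include <vector>

void adjustVolume(std::vector<float>& pcm, float gain) {
    for (float& s : pcm) {
        s = std::clamp(s * gain, -1.0f, 1.0f);  // prevent overflow after the boost
    }
}

int main() {
    std::vector<float> processedAudio(480, 0.6f);  // stands in for the processed audio data
    adjustVolume(processedAudio, 1.5f);            // volume adjustment before output
    std::printf("sample after volume adjustment: %f\n", processedAudio[0]);
    return 0;
}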
7. The method of claim 1, wherein the electronic device obtains audio data, comprising:
the electronic device obtains the audio data from at least one of a television, a device connected through a high definition multimedia interface, a device connected through a coaxial output interface, and a multimedia processor.
8. The method of any one of claims 1-7, further comprising:
performing power amplification on the processed audio data to obtain power amplified audio data;
and outputting the audio data after power amplification.
9. An electronic device, the electronic device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the steps of a sound effect processing method according to any one of claims 1-8 according to instructions in the program code.
10. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of a sound effect processing method according to any one of claims 1-8.
CN202310849596.1A 2023-07-11 2023-07-11 Sound effect processing method, device and storage medium Pending CN117714969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310849596.1A CN117714969A (en) 2023-07-11 2023-07-11 Sound effect processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN117714969A true CN117714969A (en) 2024-03-15

Family

ID=90148608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310849596.1A Pending CN117714969A (en) 2023-07-11 2023-07-11 Sound effect processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117714969A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106782578A (en) * 2016-12-06 2017-05-31 努比亚技术有限公司 A kind of distributed solution code controller, distributed coding/decoding method and voice frequency terminal
CN108182930A (en) * 2017-12-18 2018-06-19 福建星网视易信息系统有限公司 Sound effect treatment method, equipment and audio/video on-demand system
US20190028803A1 (en) * 2014-12-05 2019-01-24 Stages Llc Active noise control and customized audio system
CN111713141A (en) * 2018-04-04 2020-09-25 华为技术有限公司 Bluetooth playing method and electronic equipment
CN113286280A (en) * 2021-04-12 2021-08-20 沈阳中科创达软件有限公司 Audio data processing method and device, electronic equipment and computer readable medium

Similar Documents

Publication Publication Date Title
EP4060475A1 (en) Multi-screen cooperation method and system, and electronic device
CN117063461A (en) Image processing method and electronic equipment
US10045079B2 (en) Exposing media processing features
WO2022148319A1 (en) Video switching method and apparatus, storage medium, and device
CN114327312B (en) Screen throwing control method and device
CN111598919B (en) Motion estimation method, motion estimation device, storage medium and electronic equipment
CN114996168A (en) Multi-device cooperative test method, test device and readable storage medium
CN113971969B (en) Recording method, device, terminal, medium and product
US20230370774A1 (en) Bluetooth speaker control method and system, storage medium, and mobile terminal
TWI619383B (en) Widi cloud mode
CN115396520A (en) Control method, control device, electronic equipment and readable storage medium
CN115167802A (en) Audio switching playing method and electronic equipment
WO2023273845A1 (en) Multi-application screen recording method and apparatus
WO2022161120A1 (en) Method for turning on screen, and electronic device
US20230319217A1 (en) Recording Method and Device
CN117714969A (en) Sound effect processing method, device and storage medium
CN115185441A (en) Control method, control device, electronic equipment and readable storage medium
CN114494546A (en) Data processing method and device and electronic equipment
CN116567489B (en) Audio data processing method and related device
CN116546126B (en) Noise suppression method and electronic equipment
WO2023174322A1 (en) Layer processing method and electronic device
WO2023071730A1 (en) Voiceprint registration method and electronic devices
WO2024061138A1 (en) Data coding and data decoding method and apparatus, and device
CN111626929B (en) Depth image generation method and device, computer readable medium and electronic equipment
US12019942B2 (en) Multi-screen collaboration method and system, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination