WO2020211766A1 - Audio processing method, circuit and terminal - Google Patents

Audio processing method, circuit and terminal

Info

Publication number
WO2020211766A1
WO2020211766A1 (PCT/CN2020/084866)
Authority
WO
WIPO (PCT)
Prior art keywords
speaker
power amplifier
sound
audio
screen
Prior art date
Application number
PCT/CN2020/084866
Other languages
English (en)
French (fr)
Inventor
彭功良
Original Assignee
深圳市万普拉斯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市万普拉斯科技有限公司
Publication of WO2020211766A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/60 Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M 1/6033 Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/60 Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M 1/62 Constructional arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Definitions

  • This application relates to the field of communication technology, and in particular to an audio processing method, circuit and terminal.
  • the embodiments of the present application are intended to provide an audio processing method, circuit, and terminal that can improve the bass effect during calls.
  • an embodiment of the present application provides an audio processing method, the method including: obtaining a current audio mode, and determining an audio processing mode according to the audio mode;
  • the determining of the audio processing mode according to the audio mode includes:
  • when the audio mode is the multimedia mode, controlling the first speaker and the second speaker to sound at the same time;
  • when the audio mode is the call mode, controlling the first speaker and the screen to sound at the same time.
  • the controlling of the first speaker and the screen to sound at the same time includes: when the audio frequency is in a first frequency range, using a driving motor to drive the screen to produce sound; when the audio frequency is in a second frequency range, using the first speaker to produce sound; the frequencies of the first frequency range are lower than the frequencies of the second frequency range.
  • controlling the first speaker to produce sound includes: controlling the first speaker to produce sound through a first power amplifier;
  • controlling the second speaker to produce sound includes: controlling the second speaker to produce sound through a second power amplifier;
  • controlling the screen to produce sound includes: controlling a driving motor through a third power amplifier, and controlling the screen to produce sound through the driving motor.
  • the method further includes: generating a clock synchronization signal, and controlling the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization according to the clock synchronization signal.
  • an embodiment of the present application also provides an audio processing circuit, the circuit including: a first speaker, a second speaker, a driving motor, a screen, and a digital signal processor (DSP);
  • the first speaker, the second speaker and the drive motor are respectively electrically connected to the DSP, and are configured to obtain a control signal output by the DSP;
  • the driving motor is electrically connected to the screen and is configured to drive the screen to produce sound
  • the first speaker is configured to work simultaneously with the second speaker under the control of the control signal in a multimedia mode
  • the first speaker is also configured to work simultaneously with the driving motor under the control of the control signal in the case of the call mode.
  • the circuit further includes a first power amplifier, a second power amplifier, and a third power amplifier;
  • the first power amplifier is electrically connected to the first speaker, and is configured to drive the first speaker to produce sound;
  • the second power amplifier is electrically connected to the second speaker, and is configured to drive the second speaker to produce sound;
  • the third power amplifier is electrically connected to the drive motor, and is configured to drive the drive motor to work.
  • the first power amplifier, the second power amplifier, and the third power amplifier are power amplifiers of the same model.
  • the driving motor is a Z-axis motor.
  • the DSP is further configured to send clock synchronization signals to the first power amplifier, the second power amplifier, and the third power amplifier, respectively, and the clock synchronization signals are used to control the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization.
  • an embodiment of the present application also provides a terminal, which includes the audio processing circuit described in the foregoing embodiment.
  • an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the audio processing method described in the embodiment of the present application are implemented.
  • the audio processing method, circuit, and terminal provided by the embodiments of the application obtain the current audio mode and determine the audio processing mode according to the audio mode;
  • when the audio mode is the multimedia mode, the terminal can control the first speaker and the second speaker to sound at the same time, to meet the audio performance requirements of multimedia playback;
  • when the audio mode is the call mode, the terminal can control the first speaker and the screen to sound at the same time, which avoids the problem in the traditional technology that sounding through a speaker yields a low-frequency response that does not meet the requirements and a poor bass effect;
  • because the screen has a lower cut-off frequency than the speaker, the screen's frequency response to low frequencies can meet the requirements, and by controlling the first speaker and the screen to sound at the same time, the bass effect in the call mode can be greatly improved.
  • Figure 1 is an internal structure diagram of a computer device in an embodiment of the application
  • FIG. 2 is a schematic flowchart of an audio processing method provided by an embodiment of the application
  • FIG. 3 is a schematic flowchart of an audio processing method provided by another embodiment of this application.
  • FIG. 4 is a schematic structural diagram of an audio processing circuit provided by an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of an audio processing circuit provided by another embodiment of the application.
  • first speaker: 401; second speaker: 402; driving motor: 403; screen: 404; DSP: 405; first power amplifier: 406; second power amplifier: 407; third power amplifier: 408.
  • the audio processing method provided by the embodiments of this application can be applied to the computer device shown in FIG. 1.
  • the computer device can be a server, a desktop computer, a personal digital assistant, or another terminal such as a tablet computer or a mobile phone; it can also be a cloud or remote server, and the embodiments of the present application do not limit the specific form of the computer device.
  • the computer device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus; the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer device is used to store data.
  • the network interface of the computer device can be used to communicate with other external devices through a network connection.
  • the display screen of the computer device may be a liquid crystal display or an electronic ink display, and the type of the display screen is not limited in this embodiment;
  • the input device of the computer device may be a touch layer covering the display screen, or a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse; in other examples, the input device and the display screen may not be part of the computer device, that is, they may be external devices of the computer device.
  • FIG. 1 is only a block diagram of part of the structure related to the embodiments of the present application and does not constitute a limitation on the computer device to which the embodiments of the present application are applied;
  • a specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components, which is not limited in the embodiments of the present application.
  • the execution subject of the following method embodiments may be an audio processing device, which may be implemented as part or all of the foregoing computer device through software, hardware, or a combination of software and hardware;
  • in the following method embodiments, the description takes a terminal (for example, the aforementioned terminal device) as the execution subject by way of example.
  • Fig. 2 is a schematic flowchart of an audio processing method provided by an embodiment; this embodiment relates to the specific process in which the terminal determines the audio processing mode according to the current audio mode; as shown in Figure 2, the method includes:
  • S102: Obtain the current audio mode.
  • the terminal may obtain its current audio mode by checking its own working status.
  • the audio mode may include a multimedia mode, such as a mode of playing music or playing a video with music.
  • the audio mode may also include a call mode, that is, the terminal is in a call state.
  • S104: Determine an audio processing mode according to the audio mode.
  • each audio mode can correspond to a matching audio processing mode, and the matching audio processing mode can make the audio performance in the corresponding audio mode the best, that is, it gives the best "sounding" effect.
  • the terminal may determine an audio processing mode that matches the audio mode according to the audio mode it is in.
  • different audio processing methods may include selecting different elements or combinations of elements for sound.
  • the determining the audio processing mode according to the audio mode may include the following several implementation modes:
  • S106A: When the audio mode is the multimedia mode, control the first speaker and the second speaker to sound at the same time.
  • the audio processing mode determined by the terminal is to control the first speaker and the second speaker to sound at the same time to meet the audio performance requirements of multimedia playback.
  • S106B: When the audio mode is the call mode, control the first speaker and the screen to sound at the same time.
  • the audio processing mode determined by the terminal is to control the first speaker and the screen to sound at the same time, so as to meet the audio performance requirements during the call; since the screen itself has a lower cut-off frequency than the speaker, its frequency response in the bass can meet the requirements, so producing sound through the screen can make the bass effect in the call mode better.
  • the screen sound in this embodiment can be understood as sound emitted through the screen; for example, the sound can be produced by driving the screen to vibrate, or by outputting the drive frame sound in the screen;
  • the specific implementation of the screen sound is not limited in this embodiment.
  • in this embodiment, the terminal can obtain the current audio mode and determine the audio processing mode according to the audio mode;
  • when the audio mode is the multimedia mode, the terminal controls the first speaker and the second speaker to sound at the same time, so as to meet the audio performance requirements of multimedia playback; and when the audio mode is the call mode, the terminal controls the first speaker and the screen to sound at the same time, which avoids the problem in the traditional technology that sounding through a speaker yields a low-frequency response that does not meet the requirements and a poor bass effect; since the screen has a lower cut-off frequency than the speaker, its frequency response to low frequencies can meet the requirements, and by controlling the first speaker and the screen to sound at the same time, the bass effect in the call mode can be greatly improved.
  • controlling the first speaker and the screen to sound at the same time in step S106B in the above embodiment may include: when the audio frequency is in the first frequency range, using a driving motor to drive the screen to produce sound; when the audio frequency is in the second frequency range, using the first speaker to produce sound; the frequency of the first frequency range is lower than the frequency of the second frequency range.
  • in this embodiment, since the screen has a lower cut-off frequency, its frequency response to low frequencies can meet the audio performance requirements; therefore, when the currently played audio frequency is in the lower first frequency range, the driving motor drives the screen to vibrate and produce sound, meeting the audio performance requirements of the bass and thereby greatly improving the bass effect;
  • when the currently played audio frequency is in the higher second frequency range, the first speaker, which has a higher cut-off frequency, produces sound, meeting the audio performance requirements of the treble and thereby ensuring the effect of the mid-to-high range;
  • by controlling the screen to produce sound for the bass and the first speaker for the treble, the audio performance requirements for both high and low frequencies can be met at the same time, and the audio effect of a call is greatly improved.
  • controlling the first speaker and the screen to sound at the same time means that, in the call mode, the sound functions of the first speaker and the screen are enabled and the first speaker and the screen can be controlled to sound; it does not mean that the first speaker and the screen are necessarily sounding at the same moment.
  • likewise, controlling the first speaker and the second speaker to sound at the same time means that, in the multimedia mode, the sound functions of the first speaker and the second speaker are enabled and the first speaker and the second speaker can be controlled to sound; it does not mean that the first speaker and the second speaker are necessarily sounding at the same moment.
  • FIG. 3 is a schematic flowchart of an audio processing method provided by another embodiment of this application.
  • This embodiment relates to a specific method for a terminal to drive an audio component through a power amplifier.
  • the method further includes:
  • S202: Control the first speaker to produce sound through the first power amplifier; control the second speaker to produce sound through the second power amplifier; control the drive motor through the third power amplifier, and control the screen to produce sound through the drive motor.
  • the terminal sends the amplified control signal to the first speaker through the first power amplifier, thereby controlling the first speaker to emit sound.
  • the terminal sends the amplified control signal to the second speaker through the second power amplifier, thereby controlling the second speaker to emit sound.
  • the terminal sends the amplified control signal to the driving motor through the third power amplifier, thereby controlling the operation of the motor, and driving the screen to sound through the operation of the motor.
  • in this embodiment, the terminal can control the first speaker to sound through the first power amplifier, control the second speaker to sound through the second power amplifier, and control the drive motor through the third power amplifier, with the drive motor in turn controlling the screen to sound;
  • in this way, the first speaker and the second speaker, or the first speaker and the driving motor, can work under the drive of a higher-power control signal, which avoids abnormal sound or a poor sound effect caused by the low power of the control signal output by the DSP, ensures the performance of the audio processing circuit, and further improves its sound effect.
  • in an embodiment, the method may further include: S204: generating a clock synchronization signal, and controlling the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization according to the clock synchronization signal.
  • specifically, the terminal may generate a clock synchronization signal through the DSP and send it to the first power amplifier, the second power amplifier, and the third power amplifier, and use the clock synchronization signal to control the three power amplifiers to perform clock synchronization, thereby realizing timing control among the first power amplifier, the second power amplifier, and the third power amplifier, further improving the audio control capability and thus the audio performance.
  • the method may further include: controlling the first speaker, the second speaker, and the screen to sound at the same time.
  • for controlling the first speaker, the second speaker, and the screen to sound at the same time, reference may be made to the description above: the first power amplifier is used to control the first speaker to sound, the second power amplifier is used to control the second speaker to sound, and the third power amplifier controls the driving motor, with the screen controlled to produce sound through the driving motor.
  • FIG. 4 is an audio processing circuit provided by an embodiment of the application.
  • the circuit includes a first speaker 401, a second speaker 402, a driving motor 403, a screen 404, and a DSP 405; the first speaker 401, the second speaker 402, and the driving motor 403 are each electrically connected to the DSP 405 and configured to obtain the control signal output by the DSP 405;
  • the driving motor 403 is electrically connected to the screen 404 and configured to drive the screen 404 to produce sound;
  • the first speaker 401 is configured to work simultaneously with the second speaker 402 under the control of the control signal in the multimedia mode, and is also configured to work simultaneously with the driving motor 403 under the control of the control signal in the call mode.
  • the audio processing circuit includes a first speaker 401, a second speaker 402, a driving motor 403, a screen 404, and a DSP 405;
  • the first speaker 401, the second speaker 402, and the driving motor 403 are respectively electrically connected to the DSP 405 and configured to obtain the control signal output by the DSP 405;
  • the driving motor 403 is electrically connected to the screen 404 and configured to drive the screen 404 to vibrate along the Z axis to produce sound.
  • in the multimedia mode, the DSP 405 outputs control signals to the first speaker 401 and the second speaker 402, and the output control signals control the first speaker 401 and the second speaker 402 to emit sound simultaneously;
  • in the call mode, the DSP 405 outputs control signals to the first speaker 401 and the driving motor 403, and the output control signals control the first speaker 401 and the driving motor 403 to work at the same time; since the driving motor 403 can drive the screen to vibrate and produce sound, when the DSP 405 outputs control signals to the first speaker 401 and the driving motor 403 respectively, it can drive the first speaker 401 and the screen 404 to sound simultaneously.
  • the audio processing circuit provided in this embodiment uses the control signal output by the DSP 405 so that, in the multimedia mode, the first speaker works simultaneously with the second speaker under the control of the control signal, meeting the audio performance requirements of multimedia playback; or, in the call mode, the first speaker works simultaneously with the driving motor under the control of the control signal, and the driving motor drives the screen to produce sound;
  • since the screen has a lower cut-off frequency than the speaker, its frequency response to the bass can meet the audio performance requirements, so having the first speaker and the motor-driven screen sound at the same time can greatly improve the bass effect in the call mode.
  • FIG. 5 is an audio processing circuit provided by another embodiment of this application.
  • the circuit further includes a first power amplifier 406, a second power amplifier 407, and a third power amplifier 408.
  • the first power amplifier 406 is electrically connected to the first speaker 401 and is configured to drive the first speaker 401 to produce sound;
  • the second power amplifier 407 is electrically connected to the second speaker 402 and is configured to drive the second speaker 402 to produce sound;
  • the third power amplifier 408 is electrically connected to the driving motor 403, and is configured to drive the driving motor 403 to work.
  • the audio processing circuit may further include a first power amplifier 406, a second power amplifier 407, and a third power amplifier 408.
  • the input end of the first power amplifier 406 is electrically connected to the DSP 405, and the output end of the first power amplifier 406 is electrically connected to the first speaker 401, so as to receive the control signal sent by the DSP 405, amplify the control signal, and send the amplified control signal to the first speaker 401, thereby driving the first speaker 401 to sound.
  • the input end of the second power amplifier 407 is electrically connected to the DSP 405, and the output end of the second power amplifier 407 is electrically connected to the second speaker 402, so as to receive the control signal sent by the DSP 405, amplify the control signal, and send the amplified control signal to the second speaker 402, thereby driving the second speaker 402 to emit sound.
  • the input end of the third power amplifier 408 is electrically connected to the DSP 405, the output end of the third power amplifier 408 is electrically connected to the driving motor 403, and the driving motor 403 is electrically connected to the screen 404; the third power amplifier 408 receives the control signal sent by the DSP 405, amplifies it, and sends the amplified control signal to the driving motor 403 to drive the motor to work, and the driving motor 403 in turn drives the screen 404 to produce sound.
  • the audio processing circuit provided in this embodiment enables the first speaker, the second speaker, and the driving motor to work under the drive of a higher-power control signal, which avoids abnormal sound or a poor sound effect caused by the low power of the control signal output by the DSP, ensures the performance of the audio processing circuit, and further improves its sound effect.
  • in an embodiment, the first power amplifier 406, the second power amplifier 407, and the third power amplifier 408 are power amplifiers of the same model; since their models are the same, their performance indicators are consistent, and power amplifiers with consistent indicators make it easier to synchronize the control signals and the clock synchronization signal, further improving the audio control capability and thus the audio performance.
  • the drive motor 403 is a Z-axis motor.
  • the Z-axis motor is a motor that vibrates along the Z-axis direction, where the Z-axis direction is a direction perpendicular to the screen plane.
  • the Z-axis motor can drive the screen to vibrate in the direction perpendicular to its plane, so that the frequency response of the audio processing circuit in bass meets the audio performance requirements, thereby enhancing the bass sound effect.
  • in an embodiment, the DSP 405 is further configured to send clock synchronization signals to the first power amplifier 406, the second power amplifier 407, and the third power amplifier 408, respectively; the clock synchronization signals are used to control the first power amplifier 406, the second power amplifier 407, and the third power amplifier 408 to perform clock synchronization, which better realizes timing control, further improving the audio control capability and thus the audio performance.
  • an embodiment of the present application further provides a terminal, and the terminal includes the audio processing circuit described in any of the foregoing embodiments.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the audio processing method described in the embodiment of the present application are implemented.
  • a person of ordinary skill in the art can understand that all or part of the processes in the above-mentioned method embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, it may include the procedures of the above-mentioned method embodiments.
  • any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory.
  • non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory; volatile memory may include random access memory (RAM) or external cache memory.
  • by way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

An embodiment of the present application discloses an audio processing method, circuit, and terminal. The audio processing method includes: obtaining a current audio mode; and determining an audio processing mode according to the audio mode, where determining the audio processing mode according to the audio mode includes: when the audio mode is a multimedia mode, controlling a first speaker and a second speaker to sound at the same time; and when the audio mode is a call mode, controlling the first speaker and a screen to sound at the same time.

Description

Audio processing method, circuit and terminal
Cross-reference to related applications
This application is based on, and claims priority to, the Chinese patent application with application number 201910297900.X filed on April 15, 2019, the entire contents of which are incorporated into this application by reference.
Technical field
This application relates to the field of communication technology, and in particular to an audio processing method, circuit, and terminal.
Background
With the development of communication technology, mobile phone terminals have become indispensable in people's daily life and work.
As people's quality of life improves, their requirements for the quality and performance of mobile phone terminals also keep rising. Since the mobile phone terminal is a primary communication tool, its call quality is one of the key indicators of concern. When a traditional mobile phone terminal produces sound, a power amplifier directly drives a speaker.
However, because the cut-off frequency of the speaker is high, its low-frequency performance is poor, resulting in a poor bass effect during calls.
Summary
The embodiments of the present application are intended to provide an audio processing method, circuit, and terminal that can improve the bass effect during calls.
In a first aspect, an embodiment of the present application provides an audio processing method, the method including:
obtaining a current audio mode;
determining an audio processing mode according to the audio mode;
where the determining of the audio processing mode according to the audio mode includes:
when the audio mode is a multimedia mode, controlling a first speaker and a second speaker to sound at the same time;
when the audio mode is a call mode, controlling the first speaker and a screen to sound at the same time.
In an optional embodiment of the present application, controlling the first speaker and the screen to sound at the same time includes: when the audio frequency is in a first frequency range, using a driving motor to drive the screen to produce sound; when the audio frequency is in a second frequency range, using the first speaker to produce sound; the frequencies of the first frequency range are lower than the frequencies of the second frequency range.
In an optional embodiment of the present application, controlling the first speaker to produce sound includes:
controlling the first speaker to produce sound through a first power amplifier;
controlling the second speaker to produce sound includes:
controlling the second speaker to produce sound through a second power amplifier;
controlling the screen to produce sound includes:
controlling a driving motor through a third power amplifier, and controlling the screen to produce sound through the driving motor.
In an optional embodiment of the present application, the method further includes:
generating a clock synchronization signal, and controlling the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization according to the clock synchronization signal.
In a second aspect, an embodiment of the present application also provides an audio processing circuit, the circuit including: a first speaker, a second speaker, a driving motor, a screen, and a digital signal processor (DSP);
the first speaker, the second speaker, and the driving motor are each electrically connected to the DSP and configured to obtain a control signal output by the DSP;
the driving motor is electrically connected to the screen and configured to drive the screen to produce sound;
the first speaker is configured to work simultaneously with the second speaker under the control of the control signal in a multimedia mode;
the first speaker is further configured to work simultaneously with the driving motor under the control of the control signal in a call mode.
In an optional embodiment of the present application, the circuit further includes a first power amplifier, a second power amplifier, and a third power amplifier;
the first power amplifier is electrically connected to the first speaker and configured to drive the first speaker to produce sound;
the second power amplifier is electrically connected to the second speaker and configured to drive the second speaker to produce sound;
the third power amplifier is electrically connected to the driving motor and configured to drive the driving motor to work.
In an optional embodiment of the present application, the first power amplifier, the second power amplifier, and the third power amplifier are power amplifiers of the same model.
In an optional embodiment of the present application, the driving motor is a Z-axis motor.
In an optional embodiment of the present application, the DSP is further configured to send clock synchronization signals to the first power amplifier, the second power amplifier, and the third power amplifier, respectively, the clock synchronization signals being used to control the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization.
In a third aspect, an embodiment of the present application also provides a terminal including the audio processing circuit described in the above embodiments.
In a fourth aspect, an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the audio processing method described in the embodiments of the present application are implemented.
With the audio processing method, circuit, and terminal provided by the embodiments of the present application, the current audio mode is obtained and the audio processing mode is determined according to the audio mode. When the audio mode is the multimedia mode, the terminal can control the first speaker and the second speaker to sound at the same time, so as to meet the audio performance requirements of multimedia playback; when the audio mode is the call mode, the terminal can control the first speaker and the screen to sound at the same time, which avoids the problem in the traditional technology that sounding through a speaker yields a low-frequency response that does not meet the requirements and a poor bass effect. Since the screen has a lower cut-off frequency than the speaker, the screen's frequency response to low frequencies can meet the requirements, and by controlling the first speaker and the screen to sound at the same time, the bass effect in the call mode can be greatly improved.
Brief description of the drawings
FIG. 1 is an internal structure diagram of a computer device in an embodiment of the present application;
FIG. 2 is a schematic flowchart of an audio processing method provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of an audio processing method provided by another embodiment of the present application;
FIG. 4 is a schematic structural diagram of an audio processing circuit provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an audio processing circuit provided by another embodiment of the present application.
Description of reference numerals:
first speaker: 401;              second speaker: 402;
driving motor: 403;              screen: 404;
DSP: 405;                        first power amplifier: 406;
second power amplifier: 407;     third power amplifier: 408.
Detailed description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
The audio processing method provided by the embodiments of the present application can be applied to the computer device shown in FIG. 1. The computer device can be a server, a desktop computer, a personal digital assistant, or another terminal such as a tablet computer or a mobile phone; it can also be a cloud or remote server, and the embodiments of the present application do not limit the specific form of the computer device. As shown in FIG. 1, the computer device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device may include a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data. The network interface of the computer device can be used to communicate with other external devices through a network connection. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and this embodiment does not limit the type of the display screen. The input device of the computer device may be a touch layer covering the display screen, or a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse. In other examples, the input device and the display screen may not be part of the computer device, that is, they may be external devices of the computer device.
Those skilled in the art can understand that the structure shown in FIG. 1 is only a block diagram of part of the structure related to the embodiments of the present application and does not constitute a limitation on the computer device to which the embodiments of the present application are applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components, which is not limited in the embodiments of the present application.
The technical solutions of the present application, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application are described below with reference to the accompanying drawings.
It should be noted that the execution subject of the following method embodiments may be an audio processing device, which may be implemented as part or all of the above computer device through software, hardware, or a combination of software and hardware. In the following method embodiments, the description takes a terminal (for example, the aforementioned terminal device) as the execution subject by way of example.
FIG. 2 is a schematic flowchart of an audio processing method provided by an embodiment. This embodiment relates to the specific process in which the terminal determines the audio processing mode according to the current audio mode. As shown in FIG. 2, the method includes:
S102: Obtain the current audio mode.
In some optional embodiments, the terminal may obtain its current audio mode by checking its own working status. In one example, the audio mode may include a multimedia mode, such as a mode of playing music or playing a video with music. In another example, the audio mode may also include a call mode, that is, the terminal is in a call state.
S104: Determine an audio processing mode according to the audio mode.
It should be noted that each audio mode can correspond to a matching audio processing mode, and the matching audio processing mode can make the audio performance in the corresponding audio mode the best, that is, it gives the best "sounding" effect. Specifically, the terminal can determine, according to the audio mode it is in, the audio processing mode that matches that audio mode. Different audio processing modes may include selecting different elements, or combinations of elements, to produce sound.
Determining the audio processing mode according to the audio mode may include the following implementations:
S106A: When the audio mode is the multimedia mode, control the first speaker and the second speaker to sound at the same time.
In this embodiment, if the current audio mode is obtained as the multimedia mode, the audio processing mode determined by the terminal is to control the first speaker and the second speaker to sound at the same time, so as to meet the audio performance requirements of multimedia playback.
S106B: When the audio mode is the call mode, control the first speaker and the screen to sound at the same time.
In this embodiment, if the current audio mode is obtained as the call mode, the audio processing mode determined by the terminal is to control the first speaker and the screen to sound at the same time, so as to meet the audio performance requirements during a call. Since the screen itself has a lower cut-off frequency than the speaker, its frequency response in the bass can meet the requirements, so producing sound through the screen can make the bass effect in the call mode better.
The screen sound in this embodiment can be understood as sound emitted through the screen; for example, the sound can be produced by driving the screen to vibrate, or by outputting the drive frame sound in the screen. This embodiment does not limit the specific implementation of the screen sound.
In this embodiment, the terminal can obtain the current audio mode and determine the audio processing mode according to the audio mode. When the audio mode is the multimedia mode, the terminal controls the first speaker and the second speaker to sound at the same time, so as to meet the audio performance requirements of multimedia playback; and when the audio mode is the call mode, the terminal controls the first speaker and the screen to sound at the same time, which avoids the problem in the traditional technology that sounding through a speaker yields a low-frequency response that does not meet the requirements and a poor bass effect. Since the screen has a lower cut-off frequency than the speaker, its frequency response to low frequencies can meet the requirements, and by controlling the first speaker and the screen to sound at the same time, the bass effect in the call mode can be greatly improved.
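Purely as an illustration of the mode-based routing described above, and not as code from the application itself, the following C sketch shows how a terminal driver might enable the sounding elements for each mode; the type and function names are hypothetical, and the stub "set" helpers stand in for whatever the DSP would actually do.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { AUDIO_MODE_MULTIMEDIA, AUDIO_MODE_CALL } audio_mode_t;

/* Stub "enable" helpers; a real driver would toggle the corresponding
 * power-amplifier enable lines through the DSP instead of printing. */
static void set_first_speaker(bool on)  { printf("first speaker:  %s\n", on ? "on" : "off"); }
static void set_second_speaker(bool on) { printf("second speaker: %s\n", on ? "on" : "off"); }
static void set_screen_exciter(bool on) { printf("screen motor:   %s\n", on ? "on" : "off"); }

/* Enable the sounding elements that match the current audio mode. */
static void apply_audio_processing_mode(audio_mode_t mode)
{
    if (mode == AUDIO_MODE_MULTIMEDIA) {
        /* Multimedia mode: first and second speakers sound together. */
        set_first_speaker(true);
        set_second_speaker(true);
        set_screen_exciter(false);
    } else {
        /* Call mode: first speaker plus the screen, driven by the motor. */
        set_first_speaker(true);
        set_second_speaker(false);
        set_screen_exciter(true);
    }
}

int main(void)
{
    apply_audio_processing_mode(AUDIO_MODE_CALL);
    return 0;
}
```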
In an optional embodiment, controlling the first speaker and the screen to sound at the same time in step S106B of the above embodiment may include: when the audio frequency is in a first frequency range, using the driving motor to drive the screen to produce sound; when the audio frequency is in a second frequency range, using the first speaker to produce sound; the frequencies of the first frequency range are lower than the frequencies of the second frequency range.
In this embodiment, since the screen has a lower cut-off frequency, its frequency response to low frequencies can meet the audio performance requirements. Therefore, when the currently played audio frequency is in the lower first frequency range, the driving motor drives the screen to vibrate and produce sound, meeting the audio performance requirements of the bass and thereby greatly improving the bass effect; when the currently played audio frequency is in the higher second frequency range, the first speaker, which has a higher cut-off frequency, produces sound, meeting the audio performance requirements of the treble and thereby ensuring the effect of the mid-to-high range. In this embodiment, by controlling the screen to produce sound for the bass and the first speaker to produce sound for the treble, the audio performance requirements of both the treble and the bass can be met at the same time, greatly improving the audio effect of a call.
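The split between the lower first frequency range (routed to the screen) and the higher second frequency range (routed to the first speaker) can be pictured as a simple crossover. The sketch below is only illustrative: the application does not specify the frequency ranges, so the 500 Hz cutoff, the 48 kHz sample rate, and the one-pole filter are assumptions.

```c
#include <stdio.h>

/* One-pole crossover sketch: the low band would go to the third power
 * amplifier / drive motor (screen), the high band to the first power
 * amplifier / first speaker. Cutoff and sample rate are illustrative. */
typedef struct {
    float alpha;  /* smoothing coefficient derived from the cutoff */
    float low;    /* low-pass filter state                         */
} crossover_t;

static void crossover_init(crossover_t *x, float cutoff_hz, float sample_rate_hz)
{
    const float two_pi = 6.2831853f;
    float rc = 1.0f / (two_pi * cutoff_hz);
    float dt = 1.0f / sample_rate_hz;
    x->alpha = dt / (rc + dt);
    x->low = 0.0f;
}

/* Split one input sample into a low band (screen) and a high band (speaker). */
static void crossover_process(crossover_t *x, float in, float *to_screen, float *to_speaker)
{
    x->low += x->alpha * (in - x->low);  /* one-pole low-pass               */
    *to_screen  = x->low;                /* first (lower) frequency range   */
    *to_speaker = in - x->low;           /* second (higher) frequency range */
}

int main(void)
{
    crossover_t x;
    crossover_init(&x, 500.0f, 48000.0f);  /* assumed 500 Hz split at 48 kHz */

    float lo, hi;
    for (int i = 0; i < 4; ++i) {
        crossover_process(&x, (i % 2) ? 1.0f : -1.0f, &lo, &hi);
        printf("screen=%+.3f speaker=%+.3f\n", lo, hi);
    }
    return 0;
}
```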
It can be understood that, in this embodiment, controlling the first speaker and the screen to sound at the same time means that, in the call mode, the sound functions of the first speaker and the screen are enabled and the first speaker and the screen can be controlled to sound; it does not mean that the first speaker and the screen are necessarily sounding at the same moment. Similarly, controlling the first speaker and the second speaker to sound at the same time means that, in the multimedia mode, the sound functions of the first speaker and the second speaker are enabled and the first speaker and the second speaker can be controlled to sound; it does not mean that the first speaker and the second speaker are necessarily sounding at the same moment.
Based on the above embodiments, FIG. 3 is a schematic flowchart of an audio processing method provided by another embodiment of the present application. This embodiment relates to a specific method for the terminal to drive the audio elements through power amplifiers. Optionally, on the basis of the above embodiments, as shown in FIG. 3, the method further includes:
S202: Control the first speaker to produce sound through the first power amplifier; control the second speaker to produce sound through the second power amplifier; and control the driving motor through the third power amplifier, with the screen controlled to produce sound through the driving motor.
In this embodiment, the terminal sends an amplified control signal to the first speaker through the first power amplifier, thereby controlling the first speaker to sound. The terminal sends an amplified control signal to the second speaker through the second power amplifier, thereby controlling the second speaker to sound. The terminal sends an amplified control signal to the driving motor through the third power amplifier, thereby controlling the motor to work, and the working motor drives the screen to sound.
In this embodiment, the terminal can control the first speaker to sound through the first power amplifier, control the second speaker to sound through the second power amplifier, and control the driving motor through the third power amplifier, with the driving motor controlling the screen to sound. In this way, the first speaker and the second speaker, or the first speaker and the driving motor, can work under the drive of a higher-power control signal, which avoids abnormal sound or a poor sound effect caused by the low power of the control signal output by the DSP, ensures the performance of the audio processing circuit, and further improves its sound effect.
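As a rough sketch of the amplification chain in S202 (again, not taken from the application), each power amplifier can be modeled as a gain stage between the DSP control signal and its transducer; the gain values and names below are placeholders.

```c
#include <stdio.h>

/* Each transducer has its own power amplifier between the DSP output and
 * the element itself; the linear gains below are placeholder values. */
typedef struct {
    const char *path;  /* e.g. "PA1 -> first speaker"  */
    float       gain;  /* linear gain of the amplifier */
} power_amp_t;

/* Amplify one DSP control-signal sample and "send" it down the path. */
static float amplify(const power_amp_t *pa, float dsp_sample)
{
    float out = pa->gain * dsp_sample;
    printf("%-28s %.3f -> %.3f\n", pa->path, dsp_sample, out);
    return out;
}

int main(void)
{
    power_amp_t pa1 = { "PA1 -> first speaker",        8.0f };
    power_amp_t pa2 = { "PA2 -> second speaker",       8.0f };
    power_amp_t pa3 = { "PA3 -> drive motor (screen)", 8.0f };

    float dsp_out = 0.1f;    /* low-power control signal from the DSP    */
    amplify(&pa1, dsp_out);  /* call mode: first speaker ...             */
    amplify(&pa3, dsp_out);  /* ... and the screen via the drive motor   */
    (void)pa2;               /* second amplifier stays idle in call mode */
    return 0;
}
```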
In an embodiment, still referring to FIG. 3, the method may further include: S204: Generate a clock synchronization signal, and control the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization according to the clock synchronization signal. Specifically, the terminal may generate the clock synchronization signal through the DSP and send it to the first power amplifier, the second power amplifier, and the third power amplifier, and use the clock synchronization signal to control the three power amplifiers to perform clock synchronization, thereby realizing timing control among the first power amplifier, the second power amplifier, and the third power amplifier, further improving the audio control capability and thus the audio performance.
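The clock synchronization step can be sketched as the DSP broadcasting one reference count that each power amplifier uses to correct its local counter. The struct fields and tick unit below are assumptions made only for illustration, not details from the application.

```c
#include <stdint.h>
#include <stdio.h>

/* Each power amplifier keeps a local sample counter; the DSP's clock
 * synchronization signal carries a shared reference that all three
 * amplifiers align to. Field names and units are illustrative. */
typedef struct {
    const char *name;
    uint64_t    local_ticks;  /* amplifier's local sample counter */
    int64_t     offset;       /* correction applied after sync    */
} amp_clock_t;

static void sync_to_dsp(amp_clock_t *amp, uint64_t dsp_reference_ticks)
{
    amp->offset = (int64_t)dsp_reference_ticks - (int64_t)amp->local_ticks;
    printf("%s: corrected by %lld ticks\n", amp->name, (long long)amp->offset);
}

int main(void)
{
    amp_clock_t amps[3] = {
        { "first power amplifier",  1000005, 0 },
        { "second power amplifier", 1000002, 0 },
        { "third power amplifier",   999998, 0 },
    };

    uint64_t dsp_reference = 1000000;         /* payload of the sync signal   */
    for (int i = 0; i < 3; ++i)
        sync_to_dsp(&amps[i], dsp_reference); /* same reference for all three */
    return 0;
}
```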
In some optional embodiments of the present application, the method may further include: controlling the first speaker, the second speaker, and the screen to sound at the same time.
For controlling the first speaker, the second speaker, and the screen to sound at the same time, reference may be made to the description above; for example, the first speaker is controlled to sound through the first power amplifier, the second speaker is controlled to sound through the second power amplifier, and the driving motor is controlled through the third power amplifier, with the screen controlled to sound through the driving motor.
Based on the above embodiments, an embodiment of the present application also provides an audio processing circuit. FIG. 4 shows an audio processing circuit provided by an embodiment of the present application. The circuit includes a first speaker 401, a second speaker 402, a driving motor 403, a screen 404, and a DSP 405. The first speaker 401, the second speaker 402, and the driving motor 403 are each electrically connected to the DSP 405 and configured to obtain a control signal output by the DSP 405; the driving motor 403 is electrically connected to the screen 404 and configured to drive the screen 404 to produce sound; the first speaker 401 is configured to work simultaneously with the second speaker 402 under the control of the control signal in the multimedia mode, and is also configured to work simultaneously with the driving motor 403 under the control of the control signal in the call mode.
In this embodiment, the audio processing circuit includes the first speaker 401, the second speaker 402, the driving motor 403, the screen 404, and the DSP 405. The first speaker 401, the second speaker 402, and the driving motor 403 are each electrically connected to the DSP 405 and configured to obtain the control signal output by the DSP 405; the driving motor 403 is electrically connected to the screen 404 and configured to drive the screen 404 to vibrate along the Z-axis direction to produce sound. In the multimedia mode, the DSP 405 outputs control signals to the first speaker 401 and the second speaker 402, and the output control signals control the first speaker 401 and the second speaker 402 to produce sound at the same time. In the call mode, the DSP 405 outputs control signals to the first speaker 401 and the driving motor 403, and the output control signals control the first speaker 401 and the driving motor 403 to work at the same time; since the driving motor 403 can drive the screen to vibrate and produce sound, when the DSP 405 outputs control signals to the first speaker 401 and the driving motor 403 respectively, it can drive the first speaker 401 and the screen 404 to sound at the same time.
With the audio processing circuit provided in this embodiment, the control signal output by the DSP 405 enables the first speaker to work simultaneously with the second speaker under the control of the control signal in the multimedia mode, so as to meet the audio performance requirements of multimedia playback; or enables the first speaker to work simultaneously with the driving motor under the control of the control signal in the call mode, with the driving motor driving the screen to produce sound. Since the screen has a lower cut-off frequency than the speaker, its frequency response to the bass can meet the audio performance requirements, so having the first speaker and the motor-driven screen sound at the same time can greatly improve the bass effect in the call mode.
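The topology of FIG. 4 can also be summarized in a small data structure: the DSP fans out to the first speaker, the second speaker, and the drive motor, and the screen is reached only through the drive motor. The sketch below is a hypothetical representation of those connections, not firmware from the application.

```c
#include <stdio.h>

/* Connection sketch of the FIG. 4 circuit: 405 (DSP) drives 401, 402 and
 * 403 directly; 404 (screen) is reached only through the drive motor 403. */
typedef struct {
    int         id;         /* reference numeral used in the figures    */
    const char *name;
    int         driven_by;  /* reference numeral of the driving element */
} element_t;

int main(void)
{
    const element_t circuit[] = {
        { 401, "first speaker",  405 },  /* electrically connected to DSP  */
        { 402, "second speaker", 405 },  /* electrically connected to DSP  */
        { 403, "drive motor",    405 },  /* electrically connected to DSP  */
        { 404, "screen",         403 },  /* driven (vibrated) by the motor */
    };

    for (unsigned i = 0; i < sizeof circuit / sizeof circuit[0]; ++i)
        printf("%d %-14s <- %d\n", circuit[i].id, circuit[i].name, circuit[i].driven_by);
    return 0;
}
```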
Based on the above embodiment, FIG. 5 shows an audio processing circuit provided by another embodiment of the present application. On the basis of the embodiment shown in FIG. 4, the circuit further includes a first power amplifier 406, a second power amplifier 407, and a third power amplifier 408. The first power amplifier 406 is electrically connected to the first speaker 401 and configured to drive the first speaker 401 to produce sound; the second power amplifier 407 is electrically connected to the second speaker 402 and configured to drive the second speaker 402 to produce sound; the third power amplifier 408 is electrically connected to the driving motor 403 and configured to drive the driving motor 403 to work.
In this embodiment, the audio processing circuit may further include the first power amplifier 406, the second power amplifier 407, and the third power amplifier 408. The input end of the first power amplifier 406 is electrically connected to the DSP 405, and the output end of the first power amplifier 406 is electrically connected to the first speaker 401, so as to receive the control signal sent by the DSP 405, amplify the control signal, and send the amplified control signal to the first speaker 401, thereby driving the first speaker 401 to sound. The input end of the second power amplifier 407 is electrically connected to the DSP 405, and the output end of the second power amplifier 407 is electrically connected to the second speaker 402, so as to receive the control signal sent by the DSP 405, amplify the control signal, and send the amplified control signal to the second speaker 402, thereby driving the second speaker 402 to sound. The input end of the third power amplifier 408 is electrically connected to the DSP 405, the output end of the third power amplifier 408 is electrically connected to the driving motor 403, and the driving motor 403 is electrically connected to the screen 404; the third power amplifier 408 receives the control signal sent by the DSP 405, amplifies it, and sends the amplified control signal to the driving motor 403 to drive the motor to work, and the driving motor 403 in turn drives the screen 404 to produce sound.
The audio processing circuit provided in this embodiment enables the first speaker, the second speaker, and the driving motor to work under the drive of a higher-power control signal, which avoids abnormal sound or a poor sound effect caused by the low power of the control signal output by the DSP, ensures the performance of the audio processing circuit, and further improves its sound effect.
In an embodiment, the first power amplifier 406, the second power amplifier 407, and the third power amplifier 408 are power amplifiers of the same model. Since their models are the same, their performance indicators are consistent, and power amplifiers with consistent indicators make it easier to synchronize the control signals and the clock synchronization signal, further improving the audio control capability and thus the audio performance.
In an embodiment, the driving motor 403 is a Z-axis motor. The Z-axis motor is a motor that vibrates along the Z-axis direction, where the Z-axis direction is the direction perpendicular to the screen plane. The Z-axis motor can drive the screen to vibrate in the direction perpendicular to its plane, so that the frequency response of the audio processing circuit in the bass meets the audio performance requirements, thereby enhancing the bass sound effect.
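To make the Z-axis behavior concrete, the sketch below maps a band-limited bass sample to a displacement command along the axis perpendicular to the screen plane; the ±1 sample range and the full-scale excursion value are purely illustrative assumptions, not figures from the application.

```c
#include <stdio.h>

/* Map a bass sample in [-1, 1] to a Z-axis displacement command in
 * micrometers, perpendicular to the screen plane. The full-scale
 * excursion is an assumed value used only for this illustration. */
static float z_axis_displacement_um(float bass_sample)
{
    const float full_scale_um = 50.0f;  /* assumed maximum excursion */
    if (bass_sample >  1.0f) bass_sample =  1.0f;
    if (bass_sample < -1.0f) bass_sample = -1.0f;
    return bass_sample * full_scale_um;
}

int main(void)
{
    const float samples[] = { 0.0f, 0.5f, -0.8f, 1.2f };
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; ++i)
        printf("sample %+.2f -> %+.1f um along Z\n",
               samples[i], z_axis_displacement_um(samples[i]));
    return 0;
}
```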
In an embodiment, the DSP 405 is further configured to send clock synchronization signals to the first power amplifier 406, the second power amplifier 407, and the third power amplifier 408, respectively. The clock synchronization signals are used to control the first power amplifier 406, the second power amplifier 407, and the third power amplifier 408 to perform clock synchronization, which better realizes timing control, further improving the audio control capability and thus the audio performance.
Based on the above embodiments, an embodiment of the present application also provides a terminal, the terminal including the audio processing circuit described in any of the above embodiments.
An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the audio processing method described in the embodiments of the present application are implemented.
A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, it may include the procedures of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, it should be considered within the scope of this specification.
The above embodiments only express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the embodiments of the present application. It should be noted that, for a person of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the embodiments of the present application, and these all fall within the protection scope of the embodiments of the present application.

Claims (11)

  1. An audio processing method, the method comprising:
    obtaining a current audio mode;
    determining an audio processing mode according to the audio mode;
    wherein the determining of the audio processing mode according to the audio mode comprises:
    when the audio mode is a multimedia mode, controlling a first speaker and a second speaker to sound at the same time;
    when the audio mode is a call mode, controlling the first speaker and a screen to sound at the same time.
  2. The method according to claim 1, wherein controlling the first speaker and the screen to sound at the same time comprises: when the audio frequency is in a first frequency range, using a driving motor to drive the screen to produce sound; when the audio frequency is in a second frequency range, using the first speaker to produce sound; the frequencies of the first frequency range being lower than the frequencies of the second frequency range.
  3. The method according to claim 1, wherein controlling the first speaker to produce sound comprises:
    controlling the first speaker to produce sound through a first power amplifier;
    controlling the second speaker to produce sound comprises:
    controlling the second speaker to produce sound through a second power amplifier;
    controlling the screen to produce sound comprises:
    controlling a driving motor through a third power amplifier, and controlling the screen to produce sound through the driving motor.
  4. The method according to claim 3, wherein the method further comprises:
    generating a clock synchronization signal, and controlling the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization according to the clock synchronization signal.
  5. An audio processing circuit, comprising a first speaker, a second speaker, a driving motor, a screen, and a digital signal processor (DSP);
    the first speaker, the second speaker, and the driving motor each being electrically connected to the DSP and configured to obtain a control signal output by the DSP;
    the driving motor being electrically connected to the screen and configured to drive the screen to produce sound;
    the first speaker being configured to work simultaneously with the second speaker under the control of the control signal in a multimedia mode;
    the first speaker being further configured to work simultaneously with the driving motor under the control of the control signal in a call mode.
  6. The circuit according to claim 5, wherein the circuit further comprises a first power amplifier, a second power amplifier, and a third power amplifier;
    the first power amplifier being electrically connected to the first speaker and configured to drive the first speaker to produce sound;
    the second power amplifier being electrically connected to the second speaker and configured to drive the second speaker to produce sound;
    the third power amplifier being electrically connected to the driving motor and configured to drive the driving motor to work.
  7. The circuit according to claim 6, wherein the first power amplifier, the second power amplifier, and the third power amplifier are power amplifiers of the same model.
  8. The circuit according to any one of claims 5 to 7, wherein the driving motor is a Z-axis motor.
  9. The circuit according to claim 6, wherein the DSP is further configured to send clock synchronization signals to the first power amplifier, the second power amplifier, and the third power amplifier, respectively, the clock synchronization signals being used to control the first power amplifier, the second power amplifier, and the third power amplifier to perform clock synchronization.
  10. A terminal, comprising the audio processing circuit according to any one of claims 5 to 9.
  11. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 4 are implemented.
PCT/CN2020/084866 2019-04-15 2020-04-15 Audio processing method, circuit and terminal WO2020211766A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910297900.XA CN110191221A (zh) 2019-04-15 2019-04-15 Audio processing method, circuit and terminal
CN201910297900.X 2019-04-15

Publications (1)

Publication Number Publication Date
WO2020211766A1 true WO2020211766A1 (zh) 2020-10-22

Family

ID=67714515

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/084866 WO2020211766A1 (zh) 2019-04-15 2020-04-15 音频处理方法、电路和终端

Country Status (2)

Country Link
CN (1) CN110191221A (zh)
WO (1) WO2020211766A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110191221A (zh) * 2019-04-15 2019-08-30 深圳市万普拉斯科技有限公司 Audio processing method, circuit and terminal
CN112543250B (zh) * 2019-09-04 2023-06-09 中兴通讯股份有限公司 Audio playback control method, smartphone, device and readable storage medium
CN112492095B (zh) * 2019-09-11 2022-02-15 北京小米移动软件有限公司 System, terminal, method, device and storage medium for controlling a terminal
CN116112600A (zh) * 2021-11-10 2023-05-12 荣耀终端有限公司 Call volume adjustment method, electronic device and storage medium
CN116744204A (zh) * 2022-02-24 2023-09-12 北京荣耀终端有限公司 Device detection method and terminal
CN115379347A (zh) * 2022-08-16 2022-11-22 Oppo广东移动通信有限公司 Audio output method, electronic device and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103259903A (zh) * 2013-04-15 2013-08-21 瑞声科技(南京)有限公司 Multifunctional sound-producing system, sound-producing method and communication terminal using the system
US20140241542A1 (en) * 2009-06-09 2014-08-28 Samsung Electronics Co., Ltd. Apparatus and method for outputting audio signal in portable terminal
CN106126184A (zh) * 2016-06-30 2016-11-16 维沃移动通信有限公司 Audio signal playing method and mobile terminal
CN106126181A (zh) * 2016-06-28 2016-11-16 宇龙计算机通信科技(深圳)有限公司 Screen sound control device and method, and terminal
CN108156562A (zh) * 2017-12-28 2018-06-12 上海传英信息技术有限公司 Display screen sound-producing device and intelligent terminal
CN110191221A (zh) * 2019-04-15 2019-08-30 深圳市万普拉斯科技有限公司 Audio processing method, circuit and terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101481426B1 (ko) * 2008-01-08 2015-01-13 삼성전자주식회사 Mobile communication terminal having a flat-panel sound output device and vibration output method thereof
CN204681595U (zh) * 2015-01-30 2015-09-30 瑞声光电科技(常州)有限公司 Sound-producing system and mobile communication device using the same
CN104935742B (zh) * 2015-06-10 2017-11-24 瑞声科技(南京)有限公司 Mobile communication terminal and method for improving its sound quality in receiver mode
CN204810369U (zh) * 2015-07-02 2015-11-25 瑞声光电科技(常州)有限公司 Mobile communication terminal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140241542A1 (en) * 2009-06-09 2014-08-28 Samsung Electronics Co., Ltd. Apparatus and method for outputting audio signal in portable terminal
CN103259903A (zh) * 2013-04-15 2013-08-21 瑞声科技(南京)有限公司 Multifunctional sound-producing system, sound-producing method and communication terminal using the system
CN106126181A (zh) * 2016-06-28 2016-11-16 宇龙计算机通信科技(深圳)有限公司 Screen sound control device and method, and terminal
CN106126184A (zh) * 2016-06-30 2016-11-16 维沃移动通信有限公司 Audio signal playing method and mobile terminal
CN108156562A (zh) * 2017-12-28 2018-06-12 上海传英信息技术有限公司 Display screen sound-producing device and intelligent terminal
CN110191221A (zh) * 2019-04-15 2019-08-30 深圳市万普拉斯科技有限公司 Audio processing method, circuit and terminal

Also Published As

Publication number Publication date
CN110191221A (zh) 2019-08-30

Similar Documents

Publication Publication Date Title
WO2020211766A1 (zh) Audio processing method, circuit and terminal
US10262650B2 (en) Earphone active noise control
US20190051289A1 (en) Voice assistant system, server apparatus, device, voice assistant method therefor, and program to be executed by copmuter
CN109150677B (zh) Cross-domain access processing method and device, and electronic device
WO2020107290A1 (zh) Audio output control method and device, computer-readable storage medium, and electronic device
WO2017101325A1 (zh) Vehicle-mounted display control method and device
CN110784804B (zh) Wireless earphone noise reduction calibration method and device, earphone case, and storage medium
WO2021203906A1 (zh) Automatic volume adjustment method and device, medium, and equipment
US20200028970A1 (en) Pre-distortion system for cancellation of nonlinear distortion in mobile devices
US20130178964A1 (en) Audio system with adaptable audio output
US9307334B2 (en) Method for calculating audio latency in real-time audio processing system
WO2020118496A1 (zh) Audio path switching method and device, readable storage medium, and electronic device
US20130178963A1 (en) Audio system with adaptable equalization
WO2021169585A1 (zh) Arbitration method between devices in a local area network environment, electronic device, and local area network system
CN112565976B (zh) Speaker driving circuit, temperature protection method, terminal device, and storage medium
CN103489452B (zh) Call noise cancellation method and device, and terminal device
WO2024037183A1 (zh) Audio output method, electronic device, and computer-readable storage medium
WO2019061292A1 (zh) Terminal noise reduction method and terminal
US20110135113A1 (en) Apparatus and method for increasing volumn in portable terminal
KR102324063B1 (ko) Method for determining whether a microphone error has occurred based on the magnitude of an audio signal obtained through the microphone, and electronic device therefor
CN111683331B (zh) Audio calibration method and device
CN111083605A (zh) Audio information playing method and device, electronic device, and storage medium
WO2021120247A1 (zh) Hearing compensation method and device, and computer-readable storage medium
US8054948B1 (en) Audio experience for a communications device user
CN111741422B (zh) Neckband earphone audio calibration method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20790439

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/02/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20790439

Country of ref document: EP

Kind code of ref document: A1