WO2023103503A1 - Calibration method for frequency response consistency and electronic device - Google Patents

Calibration method for frequency response consistency and electronic device

Info

Publication number: WO2023103503A1
Application number: PCT/CN2022/118558 (CN2022118558W)
Authority: WO (WIPO (PCT))
Prior art keywords: frequency, calibrated, audio signal, calibration, electronic device
Other languages: English (en), French (fr)
Inventors: 杨枭, 许剑峰, 叶千峰, 吴琪
Original Assignee: 荣耀终端有限公司
Application filed by 荣耀终端有限公司
Publication of WO2023103503A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R3/04: Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • H04R29/00: Monitoring arrangements; Testing arrangements
    • H04R29/001: Monitoring arrangements; Testing arrangements for loudspeakers

Definitions

  • the present application relates to the field of audio technology, and in particular to a calibration method for frequency response consistency and electronic equipment.
  • in order to realize the function of voice communication or audio playback, the electronic device needs to be equipped with a sound generating device so that the user can hear the other party's voice during a call or the audio played by the electronic device.
  • a screen sound emitting device (such as a piezoelectric ceramic capacitive device) can be used as the speaker or earpiece of the electronic equipment.
  • the embodiment of the present application provides a frequency response consistency calibration method and electronic equipment, which are used to solve the problem that the frequency response curve of the sound signal will fluctuate greatly when the screen sound emitting device of the electronic equipment plays the sound signal.
  • a method for calibrating frequency response consistency is provided.
  • the method is applied to electronic equipment.
  • the electronic device includes an equalizer calibration module.
  • the electronic device is communicatively connected to the calibration device.
  • the method includes: the electronic device plays a test audio signal.
  • the electronic device saves the calibration parameters.
  • the calibration parameters are operating parameters of the equalizer calibration module.
  • Calibration parameters are determined from the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the first audio signal is obtained by the calibration device collecting the test audio signal played by the electronic device.
  • the calibration parameters are used to adjust the frequency response of the second audio signal played by the electronic device through the equalizer calibration module when the electronic device plays the second audio signal.
  • the first audio signal is obtained by collecting the sound signal played before the electronic equipment is calibrated, and the calibration parameters of the equalizer calibration module are determined according to the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • This calibration parameter is used to adjust the smoothness of the frequency response curve when the electronic device plays a normal audio signal, so as to reduce the difference between the frequency response curve of the actually played audio signal and the standard frequency response curve, improve the quality of the audio signal played by the electronic device, and enhance the user's listening experience.
  • the above method may further include: the electronic device receives the first audio signal.
  • the electronic device determines calibration parameters according to the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the calibration parameter can be determined by the electronic device.
  • after the calibration device collects the first audio signal, it can send the first audio signal to the electronic device, so that the electronic device can calculate and determine the calibration parameters. In this way, the calibration equipment can be simplified and its development facilitated.
  • the above method may further include: the electronic device receives the calibration parameter.
  • the calibration parameter is determined by the calibration device according to the frequency response of the first audio signal and the standard frequency response of the test audio signal. That is to say, the calibration parameter can also be determined by the calibration device. Since the calibration parameters can be determined before the electronic equipment leaves the factory, having the calibration equipment determine them can prevent the program instructions for determining the calibration parameters from occupying the storage space of the electronic equipment.
  • the electronic device playing the test audio signal includes: in response to the electronic device receiving a detection instruction sent by the calibration device, the electronic device plays the test audio signal. Or, in response to the electronic device sending a detection instruction to the calibration device, the electronic device plays a test audio signal.
  • the test audio signal is a frequency sweep signal in the full frequency domain.
  • the frequency response of the first audio signal is obtained by performing time-frequency transformation on the first audio signal.
  • the equalizer calibration module includes multiple sub-band filters; the calibration parameter is a parameter of the multiple sub-band filters.
  • the calibration parameters include the number of frequency bands to be calibrated, the filter type corresponding to each frequency band to be calibrated, the center frequency of the filter corresponding to each frequency band to be calibrated, and the frequency response gain corresponding to each frequency band to be calibrated. Due to the nonlinear characteristics of capacitive screen sound emitting devices, the calibration of the frequency response of the audio signal played by the electronic device can be realized through a multi-segment equalization filter (that is, multiple sub-band filters), so that the frequency response consistency is better and the user's listening experience is improved.
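  • For concreteness, these calibration parameters can be pictured as a small per-band record, as in the sketch below. The field names are illustrative assumptions and are not identifiers defined by this application.

```python
# Illustrative representation of the calibration parameters described above:
# the number of frequency bands to be calibrated and, per band, the filter
# type, the center frequency of the filter and the frequency response gain.
# Field names are placeholders, not the application's identifiers.
from dataclasses import dataclass
from typing import List

@dataclass
class BandCalibration:
    filter_type: str        # "peak", "low_shelf" or "high_shelf"
    center_freq_hz: float   # center frequency of the sub-band filter
    gain_db: float          # frequency response gain applied in this band

@dataclass
class CalibrationParameters:
    bands: List[BandCalibration]

    @property
    def num_bands(self) -> int:
        # "number of frequency bands to be calibrated"
        return len(self.bands)
```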
  • the electronic device further includes an equalizer parameter calculation module.
  • the electronic device determines, through the equalizer parameter calculation module and according to the frequency response of the first audio signal and the standard frequency response of the test audio signal, the frequency points to be calibrated, the number of frequency points to be calibrated, and the frequency response gains corresponding to the frequency points to be calibrated.
  • the electronic device determines the calibration parameters through the equalizer parameter calculation module according to the frequency points to be calibrated, the number of frequency points to be calibrated, and the frequency response gains corresponding to the frequency points to be calibrated.
  • the frequency point to be calibrated is the frequency point at which the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal exceeds a preset frequency response gain.
  • the frequency response gain corresponding to the frequency point to be calibrated is: at the frequency point to be calibrated, the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal.
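  • A minimal sketch of this selection rule is given below, assuming the measured frequency response of the first audio signal and the standard frequency response are sampled on the same frequency grid; the array names and the 3 dB default threshold are illustrative only.

```python
# Select the frequency points to be calibrated: points where the measured
# frequency response of the first audio signal deviates from the standard
# frequency response of the test audio signal by more than a preset gain.
import numpy as np

def find_points_to_calibrate(freqs_hz, measured_db, standard_db, preset_gain_db=3.0):
    diff_db = measured_db - standard_db         # deviation at each frequency point
    mask = np.abs(diff_db) > preset_gain_db     # exceeds the preset frequency response gain
    points = freqs_hz[mask]                     # frequency points to be calibrated
    gains = diff_db[mask]                       # gain = difference at each such point
    return points, gains, int(mask.sum())       # also return the number of points
```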
  • if the number of frequency points to be calibrated is less than or equal to N, the number of frequency bands to be calibrated is N.
  • the center frequency points of the N frequency bands to be calibrated are determined from the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the filter type corresponding to each frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to each frequency band to be calibrated is: the frequency corresponding to the center frequency point of the frequency band to be calibrated.
  • the frequency response gain of the filter corresponding to each frequency band to be calibrated is: the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal at the center frequency point of the frequency band to be calibrated.
  • N is the preset minimum number of supported calibration subbands.
  • the number of frequency bands to be calibrated is the number of frequency points to be calibrated.
  • the filter type corresponding to each frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to each frequency band to be calibrated is: the frequency corresponding to each frequency point to be calibrated.
  • the frequency response gain of the filter corresponding to each frequency band to be calibrated is: the frequency response gain corresponding to each frequency point to be calibrated.
  • if the number of frequency points to be calibrated is greater than M, the number of frequency bands to be calibrated is N; the frequency bands to be calibrated are obtained by combining the frequency points to be calibrated.
  • the filter type corresponding to each frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to each frequency band to be calibrated is: the frequency corresponding to the center frequency point of the frequency band to be calibrated.
  • the frequency response gain of the filter corresponding to each frequency band to be calibrated is: an average gain of frequency response gains corresponding to all frequency points to be calibrated in each frequency band to be calibrated.
  • the number of frequency bands to be calibrated is N.
  • the frequency band to be calibrated is obtained by combining the frequency points that need to be calibrated.
  • for the lowest frequency band to be calibrated among the N frequency bands to be calibrated, if the lowest frequency point of the lowest frequency band to be calibrated is f1, and the number of frequency points to be calibrated in the lowest frequency band to be calibrated is greater than or equal to the first threshold, then:
  • the filter type corresponding to the lowest frequency band to be calibrated is: low frequency shelving filter.
  • the center frequency of the filter corresponding to the lowest frequency band to be calibrated is: the frequency corresponding to the highest frequency point in the lowest frequency band to be calibrated.
  • the frequency response gain corresponding to the lowest frequency band to be calibrated is: an average gain of frequency response gains corresponding to multiple frequency points to be calibrated in the lowest frequency band to be calibrated.
  • for the lowest frequency band to be calibrated among the N frequency bands to be calibrated, if the lowest frequency point of the lowest frequency band to be calibrated is greater than f1, or the number of frequency points to be calibrated in the lowest frequency band to be calibrated is less than the first threshold, then the filter type corresponding to the lowest frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to the lowest frequency band to be calibrated is: the frequency corresponding to the center frequency point in the lowest frequency band to be calibrated.
  • the frequency response gain corresponding to the lowest frequency band to be calibrated is: an average gain of frequency response gains corresponding to all frequency points to be calibrated in the lowest frequency band to be calibrated.
  • f1 is the preset minimum frequency that needs to be calibrated.
  • if the number of frequency points to be calibrated is greater than M, the number of filters is N.
  • the frequency band to be calibrated is obtained by combining the frequency points that need to be calibrated.
  • for the highest frequency band to be calibrated among the N frequency bands to be calibrated, if the highest frequency point of the highest frequency band to be calibrated is f2, and the number of frequency points to be calibrated in the highest frequency band to be calibrated is greater than or equal to the first threshold, then the filter type corresponding to the highest frequency band to be calibrated is: high frequency shelving filter.
  • the center frequency of the filter corresponding to the highest frequency band to be calibrated is: the frequency corresponding to the lowest frequency point in the highest frequency band to be calibrated.
  • the frequency response gain corresponding to the highest frequency band to be calibrated is: an average gain of frequency response gains corresponding to multiple frequency points to be calibrated in the highest frequency band to be calibrated.
  • for the highest frequency band to be calibrated among the N frequency bands to be calibrated, if the highest frequency point of the highest frequency band to be calibrated is less than f2, or the number of frequency points to be calibrated in the highest frequency band to be calibrated is less than the first threshold, then the filter type corresponding to the highest frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to the highest frequency band to be calibrated is: the frequency corresponding to the center frequency point in the highest frequency band to be calibrated.
  • the frequency response gain corresponding to the highest frequency band to be calibrated is: an average gain of frequency response gains corresponding to all frequency points to be calibrated in the highest frequency band to be calibrated.
  • f2 is the preset highest frequency to be calibrated.
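  • The case analysis above can be summarized as follows. The sketch below merges the frequency points to be calibrated into N bands and chooses a filter type per band; the grouping into equal-sized contiguous groups, the use of the median as the band's center frequency point, and all function and variable names are assumptions made for illustration, since the application does not fix these details here.

```python
# Illustrative sketch of deriving per-band filter settings from the frequency
# points to be calibrated (the "greater than M" case, i.e. more points than
# bands). The way points are combined into N bands is an assumption; the
# application only states that bands are obtained by combining points.
import numpy as np

def build_band_filters(points_hz, gains_db, N, first_threshold, f1, f2):
    points_hz = np.asarray(points_hz, dtype=float)
    gains_db = np.asarray(gains_db, dtype=float)
    order = np.argsort(points_hz)
    points_hz, gains_db = points_hz[order], gains_db[order]

    # Combine the frequency points to be calibrated into N frequency bands.
    groups = np.array_split(np.arange(len(points_hz)), N)
    bands = []
    for i, idx in enumerate(groups):
        if len(idx) == 0:          # guard: fewer points than bands
            continue
        band_pts, band_gains = points_hz[idx], gains_db[idx]
        filter_type = "peak"
        center = float(np.median(band_pts))   # band's center frequency point (approximation)
        gain = float(np.mean(band_gains))     # average gain of the band's points

        if i == 0 and band_pts.min() <= f1 and len(band_pts) >= first_threshold:
            # Lowest band reaching the preset minimum calibration frequency f1:
            # low shelving filter, centered at the band's highest frequency point.
            filter_type, center = "low_shelf", float(band_pts.max())
        elif i == N - 1 and band_pts.max() >= f2 and len(band_pts) >= first_threshold:
            # Highest band reaching the preset maximum calibration frequency f2:
            # high shelving filter, centered at the band's lowest frequency point.
            filter_type, center = "high_shelf", float(band_pts.min())

        bands.append({"type": filter_type, "center_hz": center, "gain_db": gain})
    return bands
```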
  • another calibration method for frequency response consistency is provided. This method is applied to a calibration device.
  • the calibration device is communicatively coupled to the electronic device. The method includes: while the electronic device plays the test audio signal, collecting the sound signal played by the electronic device to obtain the first audio signal. Calibration parameters are determined according to the frequency response of the first audio signal and the standard frequency response of the test audio signal. The calibration parameters are used to adjust the frequency response of the sound signal played by the electronic device when the electronic device plays the second audio signal. The calibration parameters are sent to the electronic device.
  • the calibration device includes an artificial ear.
  • Collecting the sound signal played by the electronic device includes: collecting the sound signal played by the electronic device through the artificial ear in response to receiving a detection instruction sent by the electronic device. Or, in response to sending a detection instruction to the electronic device, the artificial ear collects the sound signal played by the electronic device.
  • the detection instruction is used to instruct the electronic device to play a test audio signal.
  • the test audio signal is a frequency sweep signal in the full frequency domain.
  • the frequency response of the first audio signal is obtained by performing time-frequency transformation on the first audio signal.
  • the equalizer calibration module includes a plurality of sub-band filters; the calibration parameters include the number of frequency bands to be calibrated, the filter type corresponding to each frequency band to be calibrated, the center frequency of the filter corresponding to each frequency band to be calibrated, and the frequency response gain corresponding to each frequency band to be calibrated.
  • determining the calibration parameters according to the frequency response of the first audio signal and the standard frequency response of the test audio signal includes: determining, according to the frequency response of the first audio signal and the standard frequency response of the test audio signal, the frequency points to be calibrated, the number of frequency points to be calibrated, and the frequency response gains corresponding to the frequency points to be calibrated.
  • the calibration parameters are determined according to the frequency points to be calibrated, the number of frequency points to be calibrated, and the frequency response gains corresponding to the frequency points to be calibrated.
  • the frequency point to be calibrated is the frequency point at which the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal exceeds a preset frequency response gain.
  • the frequency response gain corresponding to the frequency point to be calibrated is: at the frequency point to be calibrated, the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • if the number of frequency points to be calibrated is less than or equal to N, the number of frequency bands to be calibrated is N.
  • the center frequency points of the N frequency bands to be calibrated are determined from the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the filter type corresponding to each frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to each frequency band to be calibrated is: the frequency corresponding to the center frequency point of the frequency band to be calibrated.
  • the frequency response gain of the filter corresponding to each frequency band to be calibrated is: the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal at the center frequency point of the frequency band to be calibrated.
  • N is the preset minimum number of supported calibration subbands.
  • the number of frequency bands to be calibrated is the number of frequency points to be calibrated.
  • the filter type corresponding to each frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to each frequency band to be calibrated is: the frequency corresponding to each frequency point to be calibrated.
  • the frequency response gain of the filter corresponding to each frequency band to be calibrated is: the frequency response gain corresponding to each frequency point to be calibrated.
  • if the number of frequency points to be calibrated is greater than M, the number of frequency bands to be calibrated is N; the frequency bands to be calibrated are obtained by combining the frequency points to be calibrated.
  • the filter types corresponding to each frequency band to be calibrated are: peak filter.
  • the center frequency of the filter corresponding to each frequency band to be calibrated is: the frequency corresponding to the center frequency point of the frequency band to be calibrated.
  • the frequency response gain of the filter corresponding to each frequency band to be calibrated is: an average gain of frequency response gains corresponding to all frequency points to be calibrated in each frequency band to be calibrated.
  • the number of frequency bands to be calibrated is N.
  • the frequency band to be calibrated is obtained by combining the frequency points that need to be calibrated.
  • for the lowest frequency band to be calibrated among the N frequency bands to be calibrated, if the lowest frequency point of the lowest frequency band to be calibrated is f1, and the number of frequency points to be calibrated in the lowest frequency band to be calibrated is greater than or equal to the first threshold, then:
  • the filter type corresponding to the lowest frequency band to be calibrated is: low frequency shelving filter.
  • the center frequency of the filter corresponding to the lowest frequency band to be calibrated is: the frequency corresponding to the highest frequency point in the lowest frequency band to be calibrated.
  • the frequency response gain corresponding to the lowest frequency band to be calibrated is: an average gain of frequency response gains corresponding to multiple frequency points to be calibrated in the lowest frequency band to be calibrated.
  • for the lowest frequency band to be calibrated among the N frequency bands to be calibrated, if the lowest frequency point of the lowest frequency band to be calibrated is greater than f1, or the number of frequency points to be calibrated in the lowest frequency band to be calibrated is less than the first threshold, then the filter type corresponding to the lowest frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to the lowest frequency band to be calibrated is: the frequency corresponding to the center frequency point in the lowest frequency band to be calibrated.
  • the frequency response gain corresponding to the lowest frequency band to be calibrated is: an average gain of frequency response gains corresponding to all frequency points to be calibrated in the lowest frequency band to be calibrated.
  • f1 is the preset minimum frequency that needs to be calibrated.
  • if the number of frequency points to be calibrated is greater than M, the number of filters is N.
  • the frequency band to be calibrated is obtained by combining the frequency points that need to be calibrated.
  • for the highest frequency band to be calibrated among the N frequency bands to be calibrated, if the highest frequency point of the highest frequency band to be calibrated is f2, and the number of frequency points to be calibrated in the highest frequency band to be calibrated is greater than or equal to the first threshold, then the filter type corresponding to the highest frequency band to be calibrated is: high frequency shelving filter.
  • the center frequency of the filter corresponding to the highest frequency band to be calibrated is: the frequency corresponding to the lowest frequency point in the highest frequency band to be calibrated.
  • the frequency response gain corresponding to the highest frequency band to be calibrated is: an average gain of frequency response gains corresponding to multiple frequency points to be calibrated in the highest frequency band to be calibrated.
  • for the highest frequency band to be calibrated among the N frequency bands to be calibrated, if the highest frequency point of the highest frequency band to be calibrated is less than f2, or the number of frequency points to be calibrated in the highest frequency band to be calibrated is less than the first threshold, then the filter type corresponding to the highest frequency band to be calibrated is: peak filter.
  • the center frequency of the filter corresponding to the highest frequency band to be calibrated is: the frequency corresponding to the center frequency point in the highest frequency band to be calibrated.
  • the frequency response gain corresponding to the highest frequency band to be calibrated is: an average gain of frequency response gains corresponding to all frequency points to be calibrated in the highest frequency band to be calibrated.
  • f2 is the preset highest frequency to be calibrated.
  • an audio playing method is provided.
  • the audio playing method is applied to electronic equipment; the electronic equipment includes an equalizer calibration module and a screen sound emitting device.
  • the method includes: receiving an audio playing instruction.
  • the audio playing instruction is used to instruct the electronic device to play the second audio signal.
  • calibration parameters are acquired.
  • the calibration parameter is a calibration parameter stored in the electronic device according to the method in any possible implementation manner of the first aspect above.
  • the frequency response of the second audio signal is adjusted through the equalizer calibration module and the calibration parameters to obtain the third audio signal.
  • the third audio signal is played through the screen sound generating device.
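  • As an illustration of this playback path, the sketch below fetches stored per-band calibration parameters and runs the second audio signal through a cascade of biquad filters standing in for the equalizer calibration module; the peaking-filter design follows the common Audio EQ Cookbook form, and the function names, parameter dictionary keys and default Q are assumptions rather than interfaces defined by the application.

```python
# Sketch of the audio playing method in this aspect: on an audio playing
# instruction, fetch the stored calibration parameters, run the second audio
# signal through the equalizer calibration module (a cascade of sub-band
# biquads), and hand the resulting third audio signal to the screen sound
# emitting device.
import numpy as np
from scipy.signal import lfilter

def peaking_coeffs(f0, gain_db, Q, fs):
    # Standard Audio EQ Cookbook peaking filter (used here as a stand-in).
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def play_second_audio(second_audio, calibration_params, fs=48000):
    # Equalizer calibration module: apply each sub-band filter in series.
    third_audio = np.asarray(second_audio, dtype=float)
    for band in calibration_params:  # e.g. {"center_hz": ..., "gain_db": ..., "q": ...}
        b, a = peaking_coeffs(band["center_hz"], band["gain_db"], band.get("q", 2.0), fs)
        third_audio = lfilter(b, a, third_audio)
    # The third audio signal would then be played through the screen sound
    # emitting device (the playback hardware call is omitted in this sketch).
    return third_audio
```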
  • in a fourth aspect, a calibration system is provided, which includes an electronic device and a calibration device.
  • the electronic device is communicatively connected to the calibration device.
  • the electronic device is configured to execute the method in any possible implementation manner of the above first aspect.
  • a calibration device is provided.
  • the calibration device is communicatively connected to the electronic device.
  • the calibration equipment includes: a calibration control module, an equalizer parameter calculation module and an artificial ear.
  • the calibration control module is used to send a detection instruction to the electronic device; the detection instruction is used to instruct the electronic device to play a test audio signal.
  • the artificial ear is used for collecting the test audio signal played by the electronic device during the process of playing the test audio signal by the electronic device to obtain the first audio signal.
  • the equalizer parameter calculation module is used to determine the calibration parameters according to the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the calibration parameter is used to adjust the frequency response of the second audio signal played by the electronic device when the electronic device plays the second audio signal.
  • the calibration control module is also used to send calibration parameters to the electronic equipment.
  • the test audio signal is a frequency sweep signal in the full frequency domain.
  • the frequency response of the first audio signal is obtained by performing time-frequency transformation on the first audio signal.
  • the calibration parameters include the number of frequency bands to be calibrated, the filter type corresponding to each frequency band to be calibrated, the center frequency of the filter corresponding to each frequency band to be calibrated, and the frequency response gain corresponding to each frequency band to be calibrated.
  • the equalizer parameter calculation module is specifically configured to: determine, according to the frequency response of the first audio signal and the standard frequency response of the test audio signal, the frequency points that need to be calibrated, the number of frequency points that need to be calibrated, and the frequency response gains corresponding to the frequency points that need to be calibrated.
  • the calibration parameters are determined according to the frequency points to be calibrated, the number of frequency points to be calibrated, and the frequency response gains corresponding to the frequency points to be calibrated.
  • the frequency point to be calibrated is the frequency point at which the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal exceeds a preset frequency response gain.
  • the frequency response gain corresponding to the frequency point to be calibrated is: at the frequency point to be calibrated, the difference between the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • in a sixth aspect, a calibration device is provided, which includes a processor and a memory.
  • One or more computer programs are stored in the memory, and the one or more computer programs include instructions.
  • when the instructions are executed by the processor, the calibration device executes the method in any possible implementation manner of the second aspect above.
  • in a seventh aspect, an electronic device is provided, which includes: one or more processors; a memory; and a communication module. The communication module is used for communicating with the calibration equipment.
  • One or more computer programs are stored in the memory, and the one or more computer programs include instructions. When the instructions are executed by the processor, the electronic device executes the method in any possible implementation manner of the first aspect and the third aspect above.
  • in an eighth aspect, a chip system is provided, which includes one or more interface circuits and one or more processors.
  • the interface circuit and the processor are interconnected by wires.
  • the chip system can be applied to an electronic device including a communication module and a memory.
  • the interface circuit can read instructions stored in the memory of the electronic device, and send the instructions to the processor.
  • when the processor executes the instructions, the electronic device can be made to execute the method described in any possible implementation of the first aspect above, or the calibration device can be made to execute the method described in any possible implementation of the second aspect above.
  • a computer-readable storage medium is provided. Instructions are stored in the computer-readable storage medium, and when the instructions are run on the calibration equipment, the calibration equipment executes the method in any possible implementation of the second aspect above.
  • a computer-readable storage medium is provided. Instructions are stored in the computer-readable storage medium. When the instructions are run on the electronic device, the electronic device performs the method in any possible implementation of the first aspect and the third aspect above.
  • FIG. 1 is a schematic diagram of a scene where a user performs voice communication through an electronic device provided in an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of sound production by the screen sound emitting device provided by the embodiment of the present application.
  • Fig. 5 is the frequency-impedance characteristic curve of the 4.2 microfarad piezoelectric ceramic provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of the functions of different types of filters with different parameters provided by the embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a calibration system provided in an embodiment of the present application.
  • FIG. 8A is a system structural block diagram of electronic equipment and calibration equipment in a calibration system provided by an embodiment of the present application.
  • Fig. 8B is a system structural block diagram of electronic equipment and calibration equipment in another calibration system provided by the embodiment of the present application;
  • FIG. 9 is a software structural block diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 10A is a flow chart of a frequency response consistency calibration method provided by an embodiment of the present application.
  • FIG. 10B is a flowchart of step S1004 in FIG. 10A;
  • FIG. 11 is a comparison diagram of a frequency response curve of a first audio signal provided in an embodiment of the present application and a standard frequency response curve of a test audio signal;
  • FIG. 12 is a flow chart of another frequency response consistency calibration method provided by the embodiment of the present application.
  • Fig. 13 is a comparison diagram of the frequency response curve of a screen sound emitting device before and after frequency response consistency calibration provided by the embodiment of the present application;
  • FIG. 14 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
  • Capacitive device In general, a load with capacitive parameters (that is, a load that matches the characteristics of voltage lagging current) can be called a capacitive load or a capacitive device.
  • the load of a capacitive device in an electronic system can be equivalent to a capacitor.
  • piezoelectric micro-electro-mechanical system (MEMS) devices, thin film material devices, electrostatic speakers, etc. are all capacitive devices. Capacitive devices do not experience sudden voltage changes during charging/discharging.
  • a sound emitting device needs to be installed in the electronic equipment so that the user can hear the other party's voice during voice communication.
  • electronic equipment also needs to be equipped with sound-generating devices.
  • an earpiece also called a speaker
  • the earpiece is arranged inside the mobile phone, and a hole needs to be drilled on the front panel of the mobile phone to form a sound outlet.
  • the earpiece When the earpiece emits sound, the sound energy emitted by the earpiece can be transmitted through the sound outlet, so that the user can hear the sound emitted by the earpiece.
  • the screen-to-body ratio of mobile phone screens is getting higher and higher. Since the sound output holes arranged on the front panel occupy part of the front panel area of the mobile phone, the width of the frame of the mobile phone will be increased, which will affect the improvement of the screen-to-body ratio of the mobile phone.
  • the sound outlet of the handset of the mobile phone is designed as a long slit, and the sound outlet is located at the connection between the middle frame and the front panel of the mobile phone (also called the side seam of the mobile phone).
  • the sound output hole of the handset of the mobile phone it is also possible to open a hole on the top of the middle frame of the mobile phone as the sound output hole.
  • the user's auricle cannot completely cover and wrap the sound hole, and the sound energy of the handset of the mobile phone cannot be completely transmitted to the user's auricle, resulting in sound leakage.
  • the earpiece of the mobile phone is used to play the voice signal of the peer user during voice communication.
  • the sound output hole 201 of the handset is close to the user's ear (or auricle).
  • since the sound outlet 201 of the handset of the mobile phone (such as the sound outlet at the side seam of the mobile phone and the sound outlet at the top of the middle frame) cannot be completely covered by the user's ear, the sound signal emitted from the sound outlet 201 can not only be heard by the user, but can also be heard by other users in a quiet environment, resulting in sound leakage.
  • FIG. 2 is a schematic structural diagram of an electronic device.
  • the electronic device includes a housing structure 100 .
  • the housing structure 100 is enclosed by a front panel (including a screen and a frame), a rear panel for supporting internal circuits, and a middle frame.
  • the housing structure 100 of the electronic device is provided with an earpiece 101 and a screen sound emitting device 104 .
  • the earpiece 101 is a loudspeaker used for speaking in voice communication, also called a receiver, and is usually arranged at the top of the shell structure.
  • the screen sound generating device 104 may be a vibration source connected under the screen.
  • the electronic device is provided with two sound outlets, namely the sound outlet 102 and the sound outlet 103 .
  • the sound outlet 102 is located at the connection between the front panel and the middle frame of the electronic device (ie at the side seam).
  • the sound hole 103 is located on the middle frame of the electronic device at a position closer to the earpiece (that is, at the top of the middle frame of the electronic device). In this way, the electronic device shown in FIG. 2 can produce sound through the earpiece, or through the screen, or simultaneously through the earpiece and the screen, so as to avoid the sound leakage phenomenon that occurs only when the earpiece is used.
  • the screen sound emitting device may be a vibration source (such as piezoelectric ceramics, motor vibrator, exciter or other vibration units) connected to the back of the screen.
  • the vibration source can vibrate under the control of the current signal to drive the screen to vibrate, so as to realize the sound from the screen.
  • the screen sound emitting device may also be a piezoelectric ceramic fixed on the middle frame of the electronic device through a cantilever beam structure.
  • the piezoelectric ceramic can vibrate under the control of the current signal, and use the middle frame of the mobile phone to transmit the vibration to the screen to drive the screen to vibrate, so as to realize the sound of the screen.
  • the screen sound emitting device may also be an exciter fixed on the middle frame of the electronic device. The exciter can vibrate under the control of the current signal, and use the middle frame of the mobile phone to transmit the vibration to the screen to drive the screen to vibrate, so as to realize the sound of the screen.
  • the screen sound emitting device can also be a split-type magnetic levitation vibrator. One of the vibrators in the split-type magnetic levitation vibrator is fixed on the middle frame of the electronic device, and the other vibrator is fixed on the screen. Under the control of the current signal, the vibrator fixed on the screen vibrates relative to the vibrator on the middle frame, thereby pushing the screen to vibrate, so as to realize sound production from the screen.
  • the embodiment of the present application provides a calibration method for the frequency response consistency of screen sound emitting devices (such as capacitive devices like piezoelectric ceramics).
  • An equalizer calibration module can also be added in the audio playback path of the electronic device. Before the audio signal to be played is played through the screen sound emitting device, the key frequency points of the audio signal are first boosted or attenuated by the equalizer calibration module to adjust the smoothness of the frequency response curve when the screen sound emitting device produces sound, so that the frequency response curve of the audio signal output through the screen sound emitting device is within the preset factory threshold range, improving the reliability and stability of the electronic equipment and enhancing the user experience.
  • the electronic equipment in the embodiment of the present application may be a mobile phone, a tablet computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), etc.
  • the embodiment of the present application does not specifically limit the specific form of the electronic device.
  • FIG. 3 shows a schematic structural diagram of another electronic device provided by an embodiment of the present application. For example, the electronic device shown in FIG. 3 may be a mobile phone.
  • mobile phone can comprise: processor 310, external memory interface 320, internal memory 321, universal serial bus (universal serial bus, USB) interface 330, charging management module 340, power management module 341, battery 342, Antenna 1, antenna 2, mobile communication module 350, wireless communication module 360, audio module 370, speaker 370A, receiver (i.e. handset) 370B, microphone 370C, earphone jack 370D, sensor module 380, button 390, motor 391, indicator 392 , a camera 393, a display screen 394, a subscriber identification module (subscriber identification module, SIM) card interface 395, a screen sound emitting device 396, etc.
  • the above-mentioned sensor module may include sensors such as pressure sensor, gyroscope sensor, air pressure sensor, magnetic sensor, acceleration sensor, distance sensor, proximity light sensor, fingerprint sensor, temperature sensor, touch sensor, ambient light sensor and bone conduction sensor.
  • the structure shown in this embodiment does not constitute a specific limitation on the mobile phone.
  • the mobile phone may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 310 may include one or more processing units, for example: the processor 310 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can be the nerve center and command center of the phone.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • DSP can include smart power amplifier (smart PA) hardware circuit, smart PA algorithm module, audio algorithm module.
  • the smart PA hardware circuit can be connected with the application processor and the screen sound emitting device (such as piezoelectric ceramics) respectively, and is used to control the sound of the screen sound emitting device according to the instruction of the application processor.
  • the smart PA algorithm module includes an equalizer calibration module, in which multiple filters can be set, and the frequency response curve of the screen sound-emitting device can be adjusted through the joint action of different parameters and different types of filters.
  • the smart PA hardware circuit may also be arranged outside the DSP chip, which is not specifically limited in this embodiment of the present application.
  • a memory may also be provided in the processor 310 for storing instructions and data.
  • the memory in processor 310 is a cache memory.
  • the memory may hold instructions or data that the processor 310 has just used or used cyclically. If the processor 310 needs to use the instructions or data again, they can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 310 is reduced, thereby improving the efficiency of the system.
  • the memory may be used to store calibration parameters (such as filter parameters) for adjusting the consistency of the frequency response curve of the sound emitting device on the screen.
  • processor 310 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between modules shown in this embodiment is only a schematic illustration, and does not constitute a structural limitation of the mobile phone.
  • the mobile phone may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 340 is configured to receive a charging input from a charger (such as a wireless charger or a wired charger) to charge the battery 342 .
  • the power management module 341 is used for connecting the battery 342 , the charging management module 340 and the processor 310 .
  • the power management module 341 receives the input of the battery 342 and/or the charging management module 340 to supply power to various components of the electronic device.
  • the wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in a mobile phone can be used to cover single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna for a WLAN.
  • the antenna may be used in conjunction with a tuning switch.
  • the antenna 1 of the mobile phone is coupled to the mobile communication module 350, and the antenna 2 is coupled to the wireless communication module 360, so that the mobile phone can communicate with the network and other devices through wireless communication technology.
  • the above-mentioned mobile communication module 350 can provide wireless communication solutions including 2G/3G/4G/5G applied to mobile phones.
  • the mobile communication module 350 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 350 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 350 can also amplify the signal modulated by the modem processor, convert it into electromagnetic wave and radiate it through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 350 may be set in the processor 310 .
  • at least part of the functional modules of the mobile communication module 350 and at least part of the modules of the processor 310 may be set in the same device.
  • the wireless communication module 360 can provide wireless communication solutions applied to mobile phones, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared technology (IR), and other wireless communication solutions.
  • the wireless communication module 360 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 360 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 310 .
  • the wireless communication module 360 can also receive the signal to be sent from the processor 310 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the above wireless communication module 360 can also support the mobile phone to perform voice communication.
  • the mobile phone can access the Wi-Fi network through the wireless communication module 360, and then use any application program that can provide voice communication services to interact with other devices to provide users with voice communication services.
  • the above-mentioned application program that can provide voice communication service may be an instant messaging application.
  • the mobile phone can realize the display function through the GPU, the display screen 394, and the application processor.
  • the GPU is a microprocessor for image processing, connected to the display screen 394 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 394 is used to display images, videos and the like.
  • the mobile phone can realize shooting function through ISP, camera 393 , video codec, GPU, display screen 394 and application processor.
  • the ISP is used for processing the data fed back by the camera 393 .
  • the ISP may be located in the camera 393 .
  • Camera 393 is used to capture still images or video.
  • the mobile phone may include 1 or N cameras 393, where N is a positive integer greater than 1.
  • the external memory interface 320 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone.
  • the internal memory 321 may be used to store computer-executable program code, which includes instructions.
  • the processor 310 executes various functional applications and data processing of the mobile phone by executing instructions stored in the internal memory 321 .
  • the processor 310 may execute instructions stored in the internal memory 321, and the internal memory 321 may include a program storage area and a data storage area.
  • the mobile phone can implement audio functions through an audio module 370, a speaker 370A, a receiver (i.e., a handset) 370B, a microphone 370C, an earphone interface 370D, and an application processor, such as music playback, recording, etc.
  • the audio module 370 is used to convert digital audio signals into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 370 may also be used to encode and decode audio signals.
  • the audio module 370 can be set in the processor 310 , or some functional modules of the audio module 370 can be set in the processor 310 .
  • Speaker 370A, also called a "horn", is used to convert audio electrical signals into sound signals.
  • Receiver 370B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone 370C, also called a "mic", is used to convert sound signals into electrical signals.
  • the earphone interface 370D is used to connect wired earphones.
  • the earphone interface 370D may be a USB interface 330, or a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the receiver 370B (that is, the "earpiece") may be the earpiece 101 shown in (a) of FIG. 2 .
  • the audio module 370 may convert audio electrical signals received by the mobile communication module 350 and the wireless communication module 360 into sound signals.
  • the sound signal is played by the receiver 370B of the audio module 370 (ie, the "earpiece"), and at the same time, the screen sound generator 396 drives the screen (ie, the display screen) to produce sound on the screen to play the sound signal.
  • the keys 390 include a power key, a volume key and the like.
  • the motor 391 can generate a vibrating prompt.
  • the indicator 392 can be an indicator light, which can be used to indicate the charging status, the change of the battery capacity, and can also be used to indicate messages, missed calls, notifications and the like.
  • the SIM card interface 395 is used for connecting a SIM card.
  • the mobile phone can support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • FIG. 3 is only an exemplary description when the electronic device is in the form of a mobile phone. If the electronic device is a tablet computer, a handheld computer, a PDA, a wearable device (such as a smart watch or a smart bracelet) or another device form, the structure of the electronic device may include fewer or more structures than those shown in FIG. 3, without limitation.
  • FIG. 4 is a schematic diagram of sound production by the screen sound emitting device.
  • the screen sound emitting device includes multilayer piezoelectric ceramics.
  • the multilayer piezoelectric ceramic forms a vibrating film, and after an AC driving signal is applied, it can undergo bending deformation under the piezoelectric effect to push the vibrating film to produce sound.
  • the impedance of the piezoelectric ceramic (that is, the screen sound emitting device) satisfies the following relationship: z = 1/(2πfC), where z is the impedance of the piezoelectric ceramic, C is its capacitance, and f is the frequency of the AC signal. It can be seen that the equivalent impedance of the piezoelectric ceramic decreases sharply as the frequency of the input AC signal increases.
  • as shown in the frequency-impedance characteristic curve of the 4.2 microfarad (uF) piezoelectric ceramic in FIG. 5, when the frequency of the AC signal is 200 hertz (Hz), the equivalent impedance of the piezoelectric ceramic is about 160 ohms (Ohm); when the frequency of the AC signal is 10 kHz, the equivalent impedance is about 3.7 ohms (Ohm).
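  • As a quick check of the relation z = 1/(2πfC), the ideal capacitive impedance of a 4.2 uF device can be evaluated at the two frequencies mentioned; the measured curve in FIG. 5 (about 160 Ohm at 200 Hz) deviates somewhat from the ideal value, which is consistent with the nonlinear behaviour described below.

```python
# Evaluate the ideal capacitive impedance z = 1/(2*pi*f*C) for a 4.2 uF
# piezoelectric ceramic at the two frequencies discussed above. A real screen
# sound emitting device is not a pure capacitor, so the measured curve differs
# somewhat from these ideal values, especially at low frequency.
import math

C = 4.2e-6  # farads
for f in (200.0, 10_000.0):
    z = 1.0 / (2 * math.pi * f * C)
    print(f"{f:>8.0f} Hz -> {z:6.1f} Ohm")
# prints approx. 189.5 Ohm at 200 Hz and 3.8 Ohm at 10 kHz
```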
  • the screen sound emitting device formed of multilayer piezoelectric ceramics does not only include piezoelectric ceramics, but may also include electrode leads, dielectric substances, and other components. Therefore, the equivalent impedance of the screen sound emitting device formed of multilayer piezoelectric ceramics is a nonlinear curve, which is related to temperature, frequency and materials.
  • an equalizer calibration module is added in the audio playback path of the electronic equipment, and the equalizer calibration module is realized by using a multi-segment equalization filter. That is to say, the equalizer calibration module is usually formed by N subband filters. For example, N may be a positive integer between 6 and 12.
  • the parameters for each subband filter can include:
  • System sampling rate fs The system sampling rate is determined by the processor model adopted by the electronic device.
  • the system sampling rate may be 48000 Hz, 44100 Hz, 16000 Hz, etc.
  • Center frequency f0 refers to the frequency f0 of the passband of the filter.
  • f1 and f2 can be the side frequency points with 1dB or 3dB relative drop from the left and right of the bandpass filter.
  • the center frequency of the filter may be determined according to the frequency point to be calibrated in the frequency response curve of the audio signal.
  • Peak bandwidth Q also known as the quality factor Q of the filter.
  • the quality factor Q of the filter is the center frequency f0 of the filter divided by the filter bandwidth BW.
  • Peak gain: in this embodiment of the application, the peak gain can be determined according to the frequency point to be calibrated (or the center frequency point of the frequency band to be calibrated) and the difference between the frequency response curve of the audio signal and the standard frequency response curve.
  • Filter type: for example, peak filter, low-shelf filter (LS), high-shelf filter (HS), low-pass filter (LP), high-pass filter (HP), or band-pass filter (BP).
  • Fig. 6 shows a schematic diagram of the functions of different types of filters with different parameters.
  • the Q value of the filter determines the bandwidth of the filter.
  • the larger the Q value is, the narrower the bandwidth and the smaller the affected frequency range. As shown in FIG. 6, the Q value of the innermost curve is the largest, and the Q value of the outermost curve is the smallest.
  • the gains of the curves from outside to inside are 12dB, 10dB, 8dB, 6dB, 4dB, 2dB.
  • the low-frequency shelving filter is mainly used to boost or attenuate the amplitude of the low-frequency part.
  • the high-frequency shelving filter is mainly used to boost or attenuate the amplitude of the high-frequency part.
  • Filter order: for example, a first-order, second-order or third-order filter.
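  • purely for illustration, the per-band parameters listed above can be grouped into a simple record; the following Python sketch uses hypothetical names that are not taken from the embodiment:
      from dataclasses import dataclass

      @dataclass
      class BandSetting:           # hypothetical container for one sub-band filter
          fs: float                # system sampling rate, e.g. 48000 Hz
          f0: float                # center frequency of the filter passband
          q: float                 # peak bandwidth / quality factor, q = f0 / BW
          gain_db: float           # peak gain applied around f0, in dB
          kind: str                # "peak", "low_shelf", "high_shelf", ...
          order: int = 2           # filter order, e.g. a second-order IIR section

      example = BandSetting(fs=48000.0, f0=1000.0, q=2.0, gain_db=-4.0, kind="peak")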
  • the filter coefficients of each sub-band can be calculated and obtained.
  • the coefficients of the second-order IIR filter are a(0), a(1), a(2), b(0), b(1), and b(2).
  • the center frequency, peak bandwidth, peak gain, and filter type of the filter can be obtained by comparing the uncalibrated frequency response curve with the standard frequency response curve.
  • the coefficients of the filters can be calculated according to different orders and different types of filters.
  • the calculation methods of filter coefficients in different types of filters are illustrated below with examples.
  • for each type of filter, the known parameters are: center frequency f0, peak gain gain, peak bandwidth Q, and system sampling rate fs; from these, intermediate quantities A, w0 and alpha are derived.
  • for example, two of the coefficients take the form a0 = (A+1)+(A-1)*cos(w0)+2*sqrt(A)*alpha and a2 = (A+1)+(A-1)*cos(w0)-2*sqrt(A)*alpha; the remaining coefficients are computed analogously for each order and type of filter.
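  • the embodiment does not spell out every coefficient formula; one widely used formulation that produces coefficients of exactly the form quoted above is the "Audio EQ Cookbook" biquad design. The following Python sketch is based on that formulation and is given here only as an assumption-labelled illustration of how a(0)..a(2) and b(0)..b(2) can be derived from f0, gain, Q and fs:
      import math

      def peak_coeffs(f0, gain_db, q, fs):
          """Peaking filter coefficients (assumed Audio EQ Cookbook formulation)."""
          A = 10.0 ** (gain_db / 40.0)
          w0 = 2.0 * math.pi * f0 / fs
          alpha = math.sin(w0) / (2.0 * q)
          b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
          a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
          return b, a

      def low_shelf_coeffs(f0, gain_db, q, fs):
          """Low-shelf filter; a[0] and a[2] have exactly the form quoted above."""
          A = 10.0 ** (gain_db / 40.0)
          w0 = 2.0 * math.pi * f0 / fs
          alpha = math.sin(w0) / (2.0 * q)
          cosw, sqA = math.cos(w0), math.sqrt(A)
          b = [A * ((A + 1) - (A - 1) * cosw + 2 * sqA * alpha),
               2 * A * ((A - 1) - (A + 1) * cosw),
               A * ((A + 1) - (A - 1) * cosw - 2 * sqA * alpha)]
          a = [(A + 1) + (A - 1) * cosw + 2 * sqA * alpha,
               -2 * ((A - 1) + (A + 1) * cosw),
               (A + 1) + (A - 1) * cosw - 2 * sqA * alpha]
          return b, a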
  • the output signal of the filter can be obtained from the input signal of each subband filter and the filter coefficients by the following formula (1): y(n) = (b(0)*x(n) + b(1)*x(n-1) + b(2)*x(n-2) - a(1)*y(n-1) - a(2)*y(n-2)) / a(0).
  • y(n) is the output of the signal of the nth sampling point after passing through the filter;
  • x(n) is the signal of the nth sampling point input to the filter.
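  • a minimal Python sketch of how formula (1) can be applied sample by sample (the function and variable names are illustrative, not from the embodiment):
      def biquad_process(samples, b, a):
          """Apply y(n) = (b0*x(n) + b1*x(n-1) + b2*x(n-2) - a1*y(n-1) - a2*y(n-2)) / a0."""
          x1 = x2 = y1 = y2 = 0.0
          out = []
          for x0 in samples:
              y0 = (b[0] * x0 + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2) / a[0]
              out.append(y0)
              x2, x1 = x1, x0
              y2, y1 = y1, y0
          return out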
  • FIG. 7 is a schematic structural diagram of a calibration system provided by an embodiment of the present application.
  • the calibration system may include electronics and calibration equipment.
  • the electronic device can be communicatively connected to the calibration device.
  • the electronic device to be calibrated communicates with the calibration device; the electronic device is used to play a test audio signal through its screen sound emitting device, and the calibration device is used to collect the test audio signal while the electronic device plays it, to obtain the first audio signal.
  • the electronic device to be calibrated or the calibration device may analyze the collected first audio signal to obtain a frequency response corresponding to the first audio signal, and compare that frequency response with the standard frequency response of the test audio signal to obtain the frequency points to be calibrated, the frequency response gain corresponding to each frequency point to be calibrated, and the number of frequency points to be calibrated.
  • the electronic device to be calibrated or the calibration device can also determine calibration parameters such as the number of filters, the filter types, the center frequency of each filter, the filter gain, and the filter Q value according to the frequency points that need to be calibrated, the frequency response gains corresponding to those frequency points, and the number of frequency points that need to be calibrated.
  • the above-mentioned calibration parameters can be stored in the non-volatile storage device of the electronic device, so that when the electronic device plays an audio signal during normal use, the consistency of the frequency response curve of the screen sound output can be adjusted; this keeps the frequency response of the electronic device consistent, gives the user a better sense of hearing, and improves the user experience.
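  • the flow described above (play the test signal, collect it with the artificial ear, compare frequency responses, derive and store the calibration parameters) can be summarised in a short sketch; this is an interpretation of the text rather than an implementation from the embodiment, and every callable passed in is a hypothetical placeholder.
      from typing import Callable, Dict, List

      def calibrate_device(play_test_signal: Callable[[], None],
                           record_first_audio: Callable[[], List[float]],
                           frequency_response: Callable[[List[float]], Dict[float, float]],
                           derive_parameters: Callable[[Dict[float, float], Dict[float, float]], dict],
                           store_parameters: Callable[[dict], None],
                           standard_response: Dict[float, float]) -> dict:
          """Hypothetical orchestration of the calibration steps described above."""
          play_test_signal()                      # the device plays the sweep test signal
          first_audio = record_first_audio()      # the artificial ear captures the first audio signal
          measured = frequency_response(first_audio)
          params = derive_parameters(measured, standard_response)
          store_parameters(params)                # saved to the non-volatile storage medium
          return params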
  • FIG. 8A and 8B are system block diagrams of electronic equipment and calibration equipment in the calibration system provided by the embodiment of the present application.
  • FIG. 9 is a block diagram of a software structure of an electronic device provided by an embodiment of the present application. The system structures of the electronic equipment and the calibration equipment are introduced respectively below.
  • the system structure of the above-mentioned electronic device can adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the Android system with layered architecture is taken as an example to illustrate the software structure of the mobile phone.
  • the functions implemented by each functional module are similar to the embodiments of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into five layers, which are, from top to bottom: the application layer, the application framework layer (framework), the Android runtime and system libraries (libraries), the hardware abstraction layer (HAL), and the kernel layer (kernel).
  • the application layer can consist of a series of application packages. As shown in FIG. 9 , applications such as call, memo, browser, contact, camera, gallery, calendar, map, bluetooth, music, video, and short message can be installed in the application layer. In this embodiment of the application, the application layer may also install a calibration application.
  • the calibration application can receive a calibration signal (also referred to as a test instruction) from the calibration device, so that the electronic device plays a test audio signal after receiving the calibration signal; in this way, the frequency response of the audio signal played by the screen sound emitting device can be calibrated.
  • the calibration application can send a calibration signal (also called a test command) to the calibration device and, after sending the calibration signal, control the electronic device to play a test audio signal, so that the calibration device, after receiving the calibration signal, can start the collection device (such as an artificial ear) to collect the test audio signal played by the electronic device.
  • when the calibration application receives the calibration signal from the calibration device, or after the calibration application sends the calibration signal to the calibration device, the calibration application sends a test command to the corresponding HAL in the HAL layer (such as the smart PA control HAL), so that the smart PA control HAL controls the electronic device to play the test audio signal.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • an audio playback management service is set in the application framework layer.
  • the audio playback management service can be used to initialize the audio and video player, obtain the volume of the current audio, adjust the volume of the audio playback, add sound effects, etc.
  • the application framework layer may also include window management services, content provision services, view systems, resource management services, notification management services, etc., which are not limited in this embodiment of the present application.
  • the above-mentioned window management service is used to manage window programs.
  • the window management service can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the above-mentioned content provision service is used to store and retrieve data and make these data accessible to applications. Such data may include video, images, audio, calls made and received, browsing history and bookmarks, the phonebook, and so on.
  • the above view system can be used to build the display interface of the application.
  • Each display interface can consist of one or more controls.
  • controls may include interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets (Widgets).
  • the resource management service above provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the above-mentioned notification management service enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification management service is used to notify the download completion, message reminder and so on.
  • a notification can also appear on the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or appear on the screen in the form of a dialog window; for example, text information is prompted in the status bar, a prompt sound is emitted, the device vibrates, or an indicator light flashes.
  • the HAL of the mobile phone provides HALs corresponding to different hardware modules of the mobile phone, such as Audio HAL, Camera HAL, Wi-Fi HAL, and smart PA control HAL and information storage HAL.
  • Audio HAL can correspond to audio output devices (such as speakers, screen sound devices) through the audio driver of the kernel layer.
  • these multiple audio output devices correspond to multiple audio drivers in the kernel layer respectively.
  • the smart PA control HAL corresponds to the smart PA hardware circuit through the smart PA algorithm in the DSP.
  • the smart PA control HAL can control the operation of the smart PA algorithm to configure the smart PA algorithm to play the test audio signal.
  • the smart PA control HAL receives the call command issued by the call application in the application layer or the music play command issued by the music application, the smart PA control HAL can control the operation of the smart PA algorithm to configure the smart PA algorithm to play the voice of the other party signal or an audio signal corresponding to music.
  • the smart PA control HAL can also control the smart PA hardware circuit (such as the hardware circuit (smart PA0) of the screen sound device) to open through the I2C signal, so as to play the test audio signal through the screen sound device.
  • the information storage HAL corresponds to the non-volatile storage medium (such as a memory) of the electronic device and is used to store the equalizer parameters (that is, the calibration parameters) calculated by the electronic device or the calibration device into the non-volatile storage medium of the electronic device.
  • the information storage HAL can store the equalizer parameters in the non-volatile storage medium of the electronic device.
  • the equalizer parameters are used to adjust the smoothness of the frequency response curve when the electronic device plays a normal audio signal, so as to reduce the difference between the frequency response curve of the actually played audio signal and the standard frequency response curve and to improve the quality of the audio signal played by the electronic device, thereby enhancing the user's listening experience.
  • Android runtime includes core library and virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem, and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is located below the HAL and is the layer between hardware and software.
  • the kernel layer may also include a display driver, a camera driver, a sensor driver, etc., which are not limited in this embodiment of the present application.
  • in the embodiment of the present application, the electronic device includes a digital signal processing (DSP) chip, and a smart PA algorithm module, an audio algorithm module, and the like run in the DSP chip.
  • the smart PA algorithm module includes an equalizer calibration module.
  • when the smart PA control HAL receives the test command issued by the calibration application, the information storage HAL will detect whether equalizer parameters (that is, calibration parameters) are stored in the non-volatile storage device. If the information storage HAL does not detect equalizer parameters, it cannot send equalizer parameters to the equalizer calibration module (or it sends empty equalizer parameters), the equalizer calibration module does not run, and at this point the smart PA algorithm module directly plays the original test audio signal.
  • when the electronic device needs to play an audio signal, an audio playback instruction is sent to the smart PA control HAL.
  • when the smart PA control HAL receives the audio playback instruction, the information storage HAL will detect whether equalizer parameters (that is, calibration parameters) are stored in the non-volatile storage device. If the information storage HAL detects that equalizer parameters exist, it obtains the calibration parameters from the non-volatile storage medium and sends the equalizer parameters to the equalizer calibration module.
  • the equalizer calibration module receives the calibration parameters, and determines parameters such as the type of the equalizer, the center frequency, and the frequency response gain according to the calibration parameters. Moreover, the equalizer calibration module can also convert the audio signal to be played into a frequency domain signal, and use the multi-band equalizer in the equalizer calibration module to calibrate the frequency domain gain, and send the calibrated audio signal to the smart PA hardware circuit, so that the screen sound device plays the calibrated audio signal.
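  • as a sketch of what the equalizer calibration module does with the stored parameters at playback time: the calibrated signal can be produced by running the audio through each sub-band filter in turn. The example below uses scipy.signal.lfilter and a time-domain cascade of second-order sections as one possible realisation; the embodiment also mentions a frequency-domain implementation, and all names here are illustrative.
      import numpy as np
      from scipy.signal import lfilter

      def apply_equalizer(audio, bands):
          """Run the signal through each sub-band biquad (b, a) in series."""
          out = np.asarray(audio, dtype=np.float64)
          for b, a in bands:                 # one (b, a) pair per calibrated sub-band
              out = lfilter(b, a, out)
          return out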
  • the DSP chip also runs an equalizer parameter calculation module.
  • the calibration device collects the test audio signal played by the electronic device, the calibration device may send the collected sound signal (that is, the first audio signal) to the calibration application.
  • the calibration application then sends the first audio signal to the equalizer parameter calculation module through the HAL layer (for example, the smart PA controls the HAL).
  • after the equalizer parameter calculation module receives the first audio signal, it can compare the frequency response of the first audio signal with the standard frequency response of the test audio signal to determine the equalizer parameters (that is, the calibration parameters).
  • the above-mentioned equalizer parameter calculation module may also be a module in a calibration application, which is not specifically limited in this application.
  • the calibration device when the electronic device does not include an equalizer parameter calculation module, the calibration device includes a calibration control module, an equalizer parameter calculation module, and an artificial ear.
  • the calibration control module is used for communicating with the electronic equipment.
  • the calibration control module can be used to send test instructions (i.e. calibration signals) to the electronic device to instruct the electronic device to play the test audio signal through the smart PA, and after sending the test command, control the artificial ear to start the collection of the sound signal.
  • the calibration control module can also be used to send the calibration parameters output by the equalizer parameter calculation module to the electronic device, and to instruct the electronic device to store the calibration parameters.
  • the calibration control module can also be used to receive a test instruction (ie, a calibration signal) sent by the electronic device, and when receiving the test instruction, control the artificial ear to start the collection of sound signals.
  • the artificial ear is used to collect the sound signal (that is, the first audio signal) output by the electronic device when playing the test audio signal, and convert the collected sound signal into an electrical signal and transmit it to the equalizer parameter calculation module.
  • the equalizer parameter calculation module is used to receive the first audio signal and, after receiving the first audio signal, compare the frequency response of the first audio signal with the standard frequency response of the test audio signal to determine the equalizer parameters (that is, the calibration parameters).
  • when the electronic device includes the equalizer parameter calculation module, the calibration device may only include a calibration control module and an artificial ear.
  • the calibration control module is used for communicating with the electronic equipment.
  • the calibration control module can be used to send a test instruction (ie, a calibration signal) to the electronic device to instruct the electronic device to play a test audio signal through the smart PA, and after sending the test instruction, control the artificial ear to start the collection of the sound signal.
  • the calibration control module can also be used to receive a test instruction (ie, a calibration signal) sent by the electronic device, and when receiving the test instruction, control the artificial ear to start the collection of sound signals.
  • the calibration control module can also be used to send the first audio signal collected by the artificial ear to the electronic device, so that the equalizer parameter calculation module in the electronic device compares the frequency response of the first audio signal with the standard frequency response of the test audio signal to determine the equalizer parameters (that is, the calibration parameters).
  • the frequency response consistency calibration method is applied to the calibration system shown in FIG. 8A .
  • the electronic device under test communicates with the calibration device.
  • the method includes:
  • the calibration device sends a test instruction to the electronic device under test.
  • test instruction is used to instruct the electronic device under test to play a test audio signal.
  • the test audio signal may be a frequency sweep signal of the whole frequency domain, or a frequency sweep signal of a specific frequency range (for example, a frequency sweep signal of a frequency range audible to human ears, such as 20 Hz-20000 Hz).
  • the user can press the test button of the calibration device, so that the calibration device sends a test instruction to the electronic device under test.
  • the electronic device under test can also send a test instruction to the calibration device; for example, when the electronic device under test receives the user's click operation on the calibration control in the calibration application, it sends a test instruction to the calibration device.
  • the electronic device under test plays a test audio signal.
  • the calibration application of the electronic device under test may send the test instruction to the smart PA control HAL in the HAL layer.
  • after the smart PA control HAL receives the test instruction, it controls the operation of the smart PA algorithm module so as to control the smart PA hardware circuit to play the test audio signal.
  • the smart PA control HAL also controls the smart PA hardware circuit (such as the hardware circuit (smart PA0) of the screen sound device) to open through the I2C signal, and plays the test audio signal through the screen sound device.
  • the test audio signal played by the electronic device under test at this time is a signal that has not been calibrated by the equalizer calibration module.
  • the uncalibrated test audio signal can characterize the frequency response curve of the electronic device under test when playing audio, so the uncalibrated frequency response curve can be compared with the standard frequency response curve of the test audio signal to determine the frequency points with larger frequency response deviations and the corresponding frequency response gains; the parameters (that is, the calibration parameters) of the multi-band equalizer in the equalizer calibration module can then be determined according to those frequency points and gains.
  • if the electronic device under test sends the test instruction to the calibration device, the electronic device may play the test audio signal after sending the test instruction to the calibration device.
  • the calibration device collects the test audio signal played by the electronic device under test to obtain a first audio signal.
  • the calibration device may activate the artificial ear.
  • the calibration device can activate the artificial ear after receiving the test instruction sent by the electronic device.
  • the calibration device can collect the test audio signal played by the electronic device under test through the artificial ear to obtain the first audio signal.
  • the artificial ear can convert the received first audio signal into an electrical signal and send it to the equalizer parameter calculation module.
  • the calibration device determines calibration parameters according to the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the calibration parameters include: the number of frequency bands to be calibrated (that is, the number of filters), the filter type corresponding to each frequency band to be calibrated, the center frequency of the filter corresponding to each frequency band to be calibrated, and the frequency response gain corresponding to each frequency band to be calibrated.
  • the range of the number of subbands that can support calibration can be preset.
  • the equalizer calibration module can support calibration of n to m subbands (that is, it can support calibration of n to m frequency bands to be calibrated).
  • the frequency range to be calibrated may be preset, for example, the frequency range of the frequency response to be calibrated may be preset to be the first frequency point f1 to the second frequency point f2.
  • the first frequency point f1 is the lowest frequency to be calibrated.
  • the second frequency point f2 is the highest frequency to be calibrated.
  • the expected calibration dispersion range of the electronic device can be preset to ±x dB. That is to say, when the frequency response of an electronic device at a certain frequency deviates from the standard frequency response by more than x dB, that frequency needs to be calibrated.
  • the above S1004 may include S1004A to S1004D.
  • in S1004A, the calibration device can determine, within the range from the first frequency point f1 to the second frequency point f2, the frequency points that need to be calibrated, the frequency response gains corresponding to the frequency points that need to be calibrated, and the number of frequency points that need to be calibrated.
  • a frequency point that needs to be calibrated refers to: a frequency point at which the difference between the frequency response of the first audio signal and the frequency response of the test audio signal is greater than the preset expected calibration dispersion range (that is, x dB).
  • the frequency response gain corresponding to the frequency point to be calibrated refers to the difference between the frequency response of the first audio signal and the frequency response of the test audio signal at the frequency point to be calibrated.
  • the number of frequency points that need to be calibrated refers to the sum of the numbers of all frequency points that need to be calibrated.
  • FIG. 11 is a graph comparing a frequency response curve of a first audio signal provided in an embodiment of the present application with a standard frequency response curve of a test audio signal. As shown in FIG. 11, the comparison shows that the frequency response curve of the first audio signal differs considerably from the standard frequency response curve of the test audio signal.
  • the frequency response curve of the first audio signal has relatively large fluctuations.
  • the 100Hz, 200Hz, 500Hz, 1000Hz, 2000Hz, 3000Hz, 3500Hz, 4000Hz, and 5000Hz frequency points in the curve are the frequency points that may need to be calibrated.
  • the frequency response of the first audio signal at the above-mentioned frequency points can be compared with the frequency response at the corresponding frequency points in the standard frequency response curve of the test audio signal. For example, at the frequency point 100 Hz, if the frequency response of the first audio signal differs from the frequency response in the standard frequency response curve of the test audio signal by 5 dB, and the preset expected calibration dispersion range is ±2 dB, then since 5 dB is greater than 2 dB it can be determined that the frequency point 100 Hz needs to be calibrated and that the frequency response gain at the frequency point 100 Hz is 5 dB.
  • each frequency point that may need to be calibrated can be calculated, and the frequency point that needs to be calibrated, the frequency response gain corresponding to the frequency point that needs to be calibrated, and the total number of frequency points that need to be calibrated can be determined.
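  • a minimal sketch of this comparison step, assuming both responses are available as dB values at the same frequency points; the threshold and frequency-range defaults are made-up examples and all names are illustrative.
      def find_points_to_calibrate(measured_db, standard_db, x_db=2.0, f_low=100.0, f_high=8000.0):
          """Return [(frequency, deviation_db)] where |measured - standard| exceeds x dB.

          measured_db / standard_db map frequency in Hz to response in dB; the sign
          convention of the eventual correction (boost or cut) is handled later.
          """
          points = []
          for freq in sorted(standard_db):
              if f_low <= freq <= f_high and freq in measured_db:
                  deviation = measured_db[freq] - standard_db[freq]
                  if abs(deviation) > x_db:
                      points.append((freq, deviation))
          return points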
  • the calibration parameters can then be determined, namely the number of filters, the type of each filter, the center frequency of each filter, the gain of each filter, and the Q value of each filter.
  • since the equalizer calibration module can support calibration of n to m sub-bands, the number of filters and the filter types are not exactly the same for different numbers of frequency points to be calibrated.
  • in the first case, the number i of frequency points that need to be calibrated is less than or equal to the lowest supported number n of calibration subbands. At this time, the following S1004B may be performed.
  • a filter is set for each sub-band to be calibrated (ie, frequency band to be calibrated). That is, the number of filters is set to n.
  • each subband to be calibrated includes a frequency point to be calibrated.
  • the center frequency of the filter corresponding to each sub-band to be calibrated is the frequency of the corresponding frequency bin to be calibrated.
  • the frequency points to be calibrated are set as: n frequency points with the largest difference between the frequency response of the first audio signal and the frequency response of the test audio signal.
  • the filter types are all set to peak filter.
  • the peak gain of the filter corresponding to each sub-band to be calibrated is the frequency response gain corresponding to the frequency point to be calibrated.
  • in the second case, the number i of frequency points that need to be calibrated is greater than the lowest supported number n of calibration subbands and less than or equal to the maximum supported number m of calibration subbands. At this time, the following S1004C may be performed.
  • the number of sub-bands to be calibrated may be set to i, that is, the number of filters is set to i.
  • the filter types are all set to peak filter.
  • each subband to be calibrated includes a frequency point to be calibrated.
  • the center frequency of the filter corresponding to each sub-band to be calibrated is the frequency of the corresponding frequency bin to be calibrated.
  • the i frequency points to be calibrated are all configured as frequency points to be calibrated.
  • the peak gain of the filter corresponding to each sub-band to be calibrated is the frequency response gain corresponding to the frequency point to be calibrated.
  • alternatively, frequency points that need to be calibrated may be combined; for example, adjacent frequency points may be combined.
  • the two frequency points of 100 Hz and 200 Hz in FIG. 11 may be combined into one frequency band.
  • when n frequency bands to be calibrated have been obtained by merging adjacent frequency points, the merging of adjacent frequency points is stopped.
  • the number of sub-bands to be calibrated can be set to n, that is, the number of filters can be set to n.
  • the peak gain of the filter corresponding to each sub-band to be calibrated is: the average gain of the frequency response gains corresponding to each frequency point in each frequency band to be calibrated. Both filter types can be set to peak filter.
  • in the third case, the number i of frequency points that need to be calibrated is greater than the maximum supported number m of calibration subbands. At this time, the following S1004D can be executed.
  • S1004D: set the number of sub-bands to be calibrated to n, m or p, where n ≤ p ≤ m; the frequency bands to be calibrated are obtained by merging the frequency points that need to be calibrated, and the number of merged frequency bands to be calibrated is the same as the number of sub-bands to be calibrated.
  • the number i of frequency points to be calibrated exceeds the maximum supported number m of calibration subbands, and adjacent frequency points need to be merged.
  • the number of sub-bands to be calibrated can be set to n, that is, the number of filters can be set to n.
  • for the lowest frequency band to be calibrated, the filter of the corresponding subband is set as a low-frequency shelving filter.
  • the center frequency of the filter is set to the highest frequency point in the frequency band to be calibrated.
  • the peak gain of the filter corresponding to the sub-band to be calibrated is set to be: the average gain of the frequency response gains corresponding to each frequency point in the frequency band to be calibrated.
  • for the highest frequency band to be calibrated, the filter of the corresponding subband is set as a high-frequency shelving filter.
  • the center frequency of the filter is set to the lowest frequency point in the frequency band to be calibrated.
  • the peak gain of the filter corresponding to the sub-band to be calibrated is set to be: the average gain of the frequency response gains corresponding to each frequency point in the frequency band to be calibrated.
  • the filters of subbands corresponding to the remaining n-2 frequency bands to be calibrated among the n frequency bands to be calibrated are all set as peak filters.
  • the center frequencies of the filters are all set to the center frequency points of the corresponding frequency bands to be calibrated.
  • the gain to be calibrated is set to be an average gain of frequency response gains corresponding to each frequency point in the frequency band to be calibrated.
  • the filters of the subbands corresponding to the n frequency bands to be calibrated may all be set as peak filters.
  • the center frequencies of the filters are all set to the center frequency points of the corresponding frequency bands to be calibrated.
  • the peak gain of the filter corresponding to the sub-band to be calibrated is set to be: the average gain of the frequency response gains corresponding to each frequency point in the frequency band to be calibrated.
  • the number of sub-bands to be calibrated can be set to m, that is, the number of filters can be set to m.
  • pairwise combination of frequency points to be calibrated may also be performed. For example, when i is an odd number, (i-1)/2 frequency bands to be calibrated and 1 frequency point to be calibrated are obtained. At this time, the number of sub-bands to be calibrated is set to (i-1)/2+1, that is, the number of filters is set to (i-1)/2+1. When i is an even number, i/2 frequency bands to be calibrated are obtained. At this time, the number of sub-bands to be calibrated is set to i/2, that is, the number of filters is set to i/2.
  • when i is greater than 2m, adjacent frequency points can be combined to obtain p frequency bands to be calibrated, where n ≤ p ≤ m.
  • the number of sub-bands to be calibrated is set to p, that is, the number of filters is set to p.
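  • the three cases above can be read as a band-planning step: with at most m deviating frequency points, one peak filter is assigned per point; with more than m points, adjacent points are merged into frequency bands. The sketch below is a simplified reading of that logic (the n-related bookkeeping and the shelving filters at the band edges are omitted), with illustrative names only.
      def plan_bands(points, m=8):
          """points: [(frequency, deviation_db)] sorted by frequency.
          Returns a list of bands, each band being the list of points it covers."""
          if len(points) <= m:
              return [[p] for p in points]   # one filter per deviating frequency point
          bands = [[p] for p in points]
          while len(bands) > m:              # merge the closest pair of adjacent bands
              gaps = [bands[k + 1][0][0] - bands[k][-1][0] for k in range(len(bands) - 1)]
              k = gaps.index(min(gaps))
              bands[k] = bands[k] + bands.pop(k + 1)
          return bands

      def band_gain(band):
          """Average of the deviations in a band (the average gain described above)."""
          return sum(dev for _, dev in band) / len(band)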
  • the peak bandwidth of the filter is the center frequency of the filter divided by the filter bandwidth.
  • the bandwidth of the filter corresponding to a frequency band to be calibrated can be taken as the frequency difference between the two frequency points, on either side of the center frequency point of the frequency band to be calibrated, at which the difference between the frequency response and the standard frequency response is x dB. The peak bandwidth of the filter can then be calculated from the center frequency and the bandwidth of the filter.
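  • a small numeric illustration with assumed values: if a frequency band to be calibrated is centered at 1000 Hz and the deviation from the standard response falls to x dB at roughly 800 Hz and 1250 Hz, the bandwidth is 450 Hz and Q = f0 / BW ≈ 2.2.
      def quality_factor(f0, f_lower, f_upper):
          """Q = center frequency divided by the bandwidth between the two x dB points."""
          return f0 / (f_upper - f_lower)

      print(quality_factor(1000.0, 800.0, 1250.0))   # about 2.22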
  • the calibration device sends calibration parameters to the electronic device under test.
  • after the equalizer parameter calculation module has calculated the calibration parameters (the number of filters, the center frequency of each filter, the type of each filter, the peak gain of each filter, and the peak bandwidth of each filter), the above calibration parameters can be transmitted to the calibration control module of the screen sound emitting device.
  • after the calibration control module of the screen sound emitting device receives the above-mentioned calibration parameters, it can send them to the electronic device under test.
  • the electronic device under test saves the calibration parameters.
  • the electronic device under test may save the above calibration parameters in a non-volatile storage medium.
  • the consistency of the frequency response curve of the screen sound emitting device in the electronic device under test is calibrated.
  • the frequency response consistency calibration method is applied to the calibration system shown in FIG. 8B .
  • the electronic device under test communicates with the calibration device.
  • the method includes:
  • the calibration device sends a test instruction to the electronic device under test.
  • the electronic device under test plays a test audio signal.
  • the calibration device collects the test audio signal played by the electronic device under test to obtain a first audio signal.
  • the calibration device may activate the artificial ear.
  • the calibration device can activate the artificial ear after receiving the test instruction sent by the electronic device.
  • the calibration device can collect the test audio signal played by the electronic device under test through the artificial ear to obtain the first audio signal.
  • the calibration device sends the first audio signal to the electronic device under test.
  • since the calibration device only performs sound collection while the electronic device under test plays the test audio signal, after the calibration device collects the first audio signal it can send the first audio signal to the electronic device under test, and the electronic device under test then determines the calibration parameters according to the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the electronic device under test determines calibration parameters according to the frequency response of the first audio signal and the standard frequency response of the test audio signal.
  • the electronic device under test saves the calibration parameters.
  • when the electronic device makes a call through the screen sound emitting device, the electronic device detects the user's operation of making a call, and the upper-layer application (such as a call application) sends a call command to the smart PA control HAL.
  • when the electronic device plays music through the screen sound emitting device, the electronic device detects the user's operation of playing music, and the upper-layer application (such as a music application) sends a music playback instruction to the smart PA control HAL.
  • when the smart PA control HAL receives a call command or a music playback command, it can control the operation of the smart PA algorithm to configure the smart PA algorithm to play the other party's voice signal or the audio signal corresponding to the music. In the process of configuring the smart PA algorithm, the smart PA control HAL can control the information storage HAL to obtain the calibration parameters (that is, the equalizer parameters) from the non-volatile storage medium and send them to the equalizer calibration module in the smart PA algorithm.
  • the audio signal to be played by the screen sound device (such as the sound signal of the other party or the audio signal corresponding to the music) will be processed by the equalizer calibration module in the smart PA algorithm, and then played through the screen sound device.
  • the audio signal can be converted into a frequency domain signal first, and the frequency response can then be calibrated through the multi-band filter in the equalizer calibration module to improve the consistency of the frequency response curve of the screen sound emitting device, so as to ensure the consistency of the sound effect of the electronic device and give the user a better sense of hearing.
  • FIG. 13 is a comparison chart of the frequency response curve of a screen sound emitting device provided by the embodiment of the present application before and after frequency response consistency calibration. Part (a) of FIG. 13 shows the distribution of the frequency response curves of the screen sound emitting devices of 100 electronic devices (pcs) before calibration. It can be seen from this figure that, without calibration, the frequency response curves of the sound signals played by the screen sound emitting devices of the 100 electronic devices differ widely from one another and have a high degree of dispersion.
  • the embodiment of the present application also provides an audio playing method applied to an electronic device; the electronic device includes an equalizer calibration module and a screen sound emitting device.
  • the method includes: receiving an audio playing instruction.
  • the audio playing instruction is used to instruct the electronic device to play the second audio signal.
  • in response to receiving the audio playing instruction, calibration parameters are obtained.
  • the calibration parameter is the calibration parameter stored in the electronic device according to the method in the above-mentioned embodiment.
  • the frequency response of the second audio signal is adjusted through the equalizer calibration module and the calibration parameters to obtain the third audio signal.
  • the third audio signal is played through the screen sound generating device.
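  • the playback-side steps can likewise be sketched end to end; all names below are invented for illustration, and the per-band filtering is handed off to a caller-supplied builder rather than being a definitive implementation.
      from typing import Callable, List, Sequence

      def play_with_calibration(second_audio: Sequence[float],
                                load_calibration: Callable[[], list],
                                build_filter: Callable[[object], Callable[[List[float]], List[float]]],
                                screen_speaker_play: Callable[[List[float]], None]) -> None:
          """Hypothetical playback path: equalize the signal before the screen sound device plays it."""
          samples = list(second_audio)
          for band_params in load_calibration():      # one entry per calibrated sub-band
              samples = build_filter(band_params)(samples)
          screen_speaker_play(samples)                # the third audio signal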
  • the embodiment of the present application also provides an electronic device.
  • the electronic device includes: one or more processors; a memory; a communication module. Wherein, the communication module is used for communicating with the calibration equipment.
  • One or more computer programs are stored in the memory, and the one or more computer programs include instructions. When the instructions are executed by the processor, the electronic device executes the method performed by the electronic device under test or by the electronic device in the above embodiments.
  • the electronic device may be the electronic device shown in FIG. 9 above.
  • the embodiment of the present application also provides a calibration device.
  • the calibration device includes a processor and memory.
  • One or more computer programs are stored in the memory, and the one or more computer programs include instructions.
  • when the instructions are executed by the processor, the calibration device executes the method performed by the calibration device in the above embodiments.
  • the embodiment of the present application also provides a chip system.
  • the chip system 1400 can be applied to electronic equipment or calibration equipment, and the chip system 1400 includes at least one processor 1401 and at least one interface circuit 1402 .
  • the processor 1401 and the interface circuit 1402 may be interconnected through wires.
  • interface circuit 1402 may be used to receive signals from other devices (eg, memory of an electronic device).
  • the interface circuit 1402 may be used to send signals to other devices (such as the processor 1401).
  • the interface circuit 1402 can read instructions stored in the memory of the electronic device and send the instructions to the processor 1401. When the instructions are executed by the processor 1401, the electronic device (such as the electronic device shown in FIG. 9) may be made to perform the various functions or steps performed by the electronic device in the foregoing embodiments, or the calibration device may be made to perform the various functions or steps performed by the calibration device in the foregoing embodiments.
  • the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • Another embodiment of the present application provides a computer storage medium; the computer storage medium includes computer instructions, and when the computer instructions are run on the electronic device, the electronic device is made to perform the various functions or steps performed by the electronic device in the above method embodiments.
  • Another embodiment of the present application provides a computer program product.
  • when the computer program product is run on a computer, the computer is made to execute each function or step performed by the electronic device in the above method embodiments.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be other division methods, for example multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component displayed as a unit may be one physical unit or multiple physical units, that is, it may be located in one place, or may be distributed to multiple different places . Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the software product is stored in a program product, such as a computer-readable storage medium, and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: various media capable of storing program codes such as U disk, mobile hard disk, ROM, RAM, magnetic disk or optical disk.
  • the embodiments of the present application may also provide a computer-readable storage medium on which computer program instructions are stored.
  • when the computer program instructions are executed by the electronic device, the electronic device is made to implement the audio processing method described in the foregoing method embodiments.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

本申请实施例提供一种频率响应一致性的校准方法及电子设备,用于解决电子设备的屏幕发声器件在播放声音信号时,该声音信号的频率响应曲线会出现较大的波动较大的问题。该频率响应一致性的校准方法包括:电子设备播放测试音频信号。电子设备保存校准参数。校准参数为均衡器校准模块的运行参数。校准参数由第一音频信号的频率响应与测试音频信号的标准频率响应确定。第一音频信号是由校准设备采集电子设备播放的测试音频信号得到的。校准参数用于当电子设备播放第二音频信号时,通过所述均衡器校准模块调节电子设备播放的第二音频信号的频率响应。

Description

频率响应一致性的校准方法及电子设备
本申请要求于2021年12月10日提交国家知识产权局、申请号为202111509124.9、发明名称为“频率响应一致性的校准方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及音频技术领域,尤其涉及一种频率响应一致性的校准方法及电子设备。
背景技术
目前,较多的电子设备均具有语音通信功能或音频播放功能,如手机、平板等。为了实现语音通信功能或音频播放功能,电子设备中需要安装发声器件才能使用户听到通话过程中对方的声音或者电子设备播放的音频。随着电子设备对屏幕屏占比的要求,需减少电子设备前面板(即屏幕)的开孔,因此电子设备中通常会设置屏幕发声器件(如压电陶瓷类容性器件)作为电子设备的扬声器或听筒。
然而,对于采用屏幕发声器件发声的电子设备,在屏幕发声器件组装过程中,需将屏幕发声器件与电子设备的屏幕相贴合,电子设备的屏幕与屏幕发声器件贴合后会存在误差。此外,由于屏幕的生产厂商不同,屏幕的尺寸和质量也会存在差异。由于这些问题的存在,该电子设备的屏幕发声器件在播放声音信号时,该声音信号的频率响应曲线会出现较大的波动,偏离标准的频率响应曲线较严重,从而造成给用户的听感不一致的问题,影响用户体验。
发明内容
本申请实施例提供一种频率响应一致性的校准方法及电子设备,用于解决电子设备的屏幕发声器件在播放声音信号时,该声音信号的频率响应曲线会出现较大的波动较大的问题。
为达到上述目的,本申请的实施例采用如下技术方案:
第一方面,提供了一种频率响应一致性的校准方法。该方法应用于电子设备。该电子设备包括均衡器校准模块。电子设备与校准设备通信连接。该方法包括:电子设备播放测试音频信号。电子设备保存校准参数。校准参数为均衡器校准模块的运行参数。校准参数由第一音频信号的频率响应与测试音频信号的标准频率响应确定。第一音频信号是由校准设备采集电子设备播放的测试音频信号得到的。校准参数用于当电子设备播放第二音频信号时,通过所述均衡器校准模块调节电子设备播放的第二音频信号的频率响应。
基于上述方法,通过采集电子设备校准前播放的声音信号得到第一音频信号,并根据第一音频信号的频率响应与测试音频信号的标准频率响应确定均衡器校准模块的校准参数。该校准参数用于在电子设备播放正常的音频信号时调节频率响应曲线的平滑度,以降低实际播放的音频信号的频率响应曲线与标准的频率响应曲线的差异,提高电子设备播放音频信号的质量,以提升用户的听感体验。
一种可能的实现方式中,电子设备播放测试音频信号之后,上述方法还可以包括:电子设备接收第一音频信号。电子设备根据第一音频信号的频率响应与测试音频信号 的标准频率响应,确定校准参数。该校准参数可以由电子设备来确定,当校准设备采集到第一音频信号之后,可以将第一音频信号发送至电子设备,以便电子设备计算并确定校准参数。如此,可以简化校准设备,便于校准设备的开发。
一种可能的实现方式中,电子设备播放测试音频信号之后,上述方法还可以包括:电子设备接收校准参数。该校准参数为校准设备根据第一音频信号的频率响应与测试音频信号的标准频率响应。也就是说,该校准参数也可以由校准设备来确定。由于校准参数可以是在电子设备出厂之前检测得到的,因此校准参数由校准设备来确定,可以避免确定校准参数的程序指令占用电子设备的存储空间。
一种可能的实现方式中,电子设备播放测试音频信号,包括:响应于电子设备接收到校准设备发送的检测指令,电子设备播放测试音频信号。或者,响应于电子设备向校准设备发送检测指令,电子设备播放测试音频信号。
一种可能的实现方式中,测试音频信号为全频域的扫频信号。为了保证电子设备的屏幕发声器件播放任意的音频信号时,都具有良好的听感,在频率响应校准时,需要校准全频域的信号。
一种可能的实现方式中,第一音频信号的频率响应由第一音频信号进行时频变换后获得。
一种可能的实现方式中,均衡器校准模块包括多个子带滤波器;该校准参数为多个子带滤波器的参数。校准参数包括待校准的频带数量、每个待校准的频带对应的滤波器类型、每个待校准的频带对应的滤波器的中心频率、每个待校准频带对应的频率响应增益。由于容性器件类屏幕发声器件具有非线性特性,通过多段均衡滤波器(即多个子带滤波器)可以实现电子设备播放的音频信号的频率响应的校准,使得频率响应一致性更好,用户听感更佳。
一种可能的实现方式中,电子设备还包括均衡器参数计算模块。电子设备根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定校准参数,包括:电子设备通过均衡器参数计算模块,根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定需要校准的频点、需要校准的频点数量以及需要校准的频点对应的频率响应增益。电子设备通过均衡器参数计算模块,根据需要校准的频点、需要校准的频点数量以及需要校准的频点对应的频率响应增益,确定校准参数。
一种可能的实现方式中,需要校准的频点为第一音频信号的频率响应和测试音频信号的标准频率响应相差超过预设频响增益的频点。需要校准的频点对应的频率响应增益为:需要校准的频点处,第一音频信号的频率响应和测试音频信号的标准频率响应的差值。
一种可能的实现方式中,若需要校准的频点数量小于或等于N,则待校准的频带数量为N;N个待校准的频带的中心频点为:第一音频信号的频率响应和测试音频信号的频率响应相差最大的N个频点。每个待校准的频带对应的滤波器类型均为:峰值滤波器。每个待校准的频带对应的滤波器的中心频率为:待校准的频带的中心频点对应的频率。每个待校准频带对应的滤波器的频率响应增益为:待校准的频带的中心频点处,第一音频信号的频率响应和测试音频信号的标准频率响应的差值。其中,N为预设的最低支持的校准子带的数量。
一种可能的实现方式中,若需要校准的频点数量大于N,且小于或等于M,则待校准的频带数量为需要校准的频点数量。每个待校准的频带对应的滤波器类型均为:峰值滤波器。每个待校准的频带对应的滤波器的中心频率为:每个需要校准的频点对应的频率。每个待校准频带对应的滤波器的频率响应增益为:每个需要校准的频点对应的频率响应增益。其中,N为预设的最低支持的校准子带的数量;M为预设的最大支持的校准子带的数量。
一种可能的实现方式中,若需要校准的频点数量大于M,则待校准的频带数量的数量为N;待校准的频带由需要校准的频点部分合并得到。在待校准的频带中,除最低待校准的频带和最高待校准的频带外,每个待校准的频带对应的滤波器类型均为:峰值滤波器。每个待校准的频带对应的滤波器的中心频率为:待校准的频带的中心频点对应的频率。每个待校准频带对应的滤波器的频率响应增益为:每个待校准的频带中,所有的需要校准的频点对应的频率响应增益的平均增益。
一种可能的实现方式中,若需要校准的频点数量大于M,则待校准的频带数量的数量为N。待校准的频带由需要校准的频点部分合并得到。在N个待校准的频带中的最低待校准的频带中,若最低待校准的频带的最低频点为f1,且最低待校准的频带中需要校准的频点数量大于或等于第一阈值,则最低待校准的频带对应的滤波器类型为:低频搁架滤波器。最低待校准的频带对应的滤波器的中心频率为:最低待校准的频带中的最高频点对应的频率。最低待校准的频带对应的频率响应增益为:最低待校准的频带中多个需要校准的频点对应的频率响应增益的平均增益。在N个待校准的频带中的最低待校准的频带中,若最低待校准的频带的最低频点大于f1,或者最低待校准的频带中需要校准的频点数量小于第一阈值,则最低待校准的频带对应的滤波器类型为:峰值滤波器。最低待校准的频带对应的滤波器的中心频率为:最低待校准的频带中的中心频点对应的频率。最低待校准的频带对应的频率响应增益为:最低待校准的频带中,所有的需要校准的频点对应的频率响应增益的平均增益。其中,f1为预设的需要校准的最低频率。
一种可能的实现方式中,若需要校准的频点数量大于M,则滤波器的数量为N。待校准的频带由需要校准的频点部分合并得到。在N个待校准的频带中的最高待校准的频带中,若最高待校准的频带的最高频点为f2,且最高待校准的频带中需要校准的频点数量大于或等于第二阈值,则最高待校准的频带对应的滤波器类型为:高频搁架滤波器。最高待校准的频带对应的滤波器的中心频率为:最高待校准的频带中的最低频点对应的频率。最高待校准的频带对应的频率响应增益为:最高待校准的频带中多个需要校准的频点对应的频率响应增益的平均增益。在N个待校准的频带中的最高待校准的频带中,若最高待校准的频带的最高频点小于f2,或者最高待校准的频带中需要校准的频点数量小于第一阈值,则最高待校准的频带对应的滤波器类型为:峰值滤波器。最高待校准的频带对应的滤波器的中心频率为:最高待校准的频带中的的中心频点对应的频率。最高待校准的频带对应的频率响应增益为:最高待校准的频带中,所有的需要校准的频点对应的频率响应增益的平均增益。其中,f2为预设的需要校准的最高频率。
第二方面,提供另一种频率响应一致性的校准方法。该方法应用于校准设备。该 校准设备与电子设备通信连接。该方法包括:在电子设备播放测试音频信号过程中,采集电子设备播放的声音信号,得到第一音频信号。根据第一音频信号的频率响应与测试音频信号的标准频率响应,确定校准参数。校准参数用于当电子设备播放第二音频信号时,调节电子设备播放的声音信号的频率响应。向电子设备发送校准参数。
一种可能的实现方式中,校准设备包括人工耳。采集电子设备播放的声音信号,包括:响应于接收到电子设备发送的检测指令,通过人工耳采集电子设备播放的声音信号。或者,响应于向电子设备发送检测指令,通过人工耳采集电子设备播放的声音信号。检测指令用于指示电子设备播放测试音频信号。
一种可能的实现方式中,测试音频信号为全频域的扫频信号。
一种可能的实现方式中,第一音频信号的频率响应由第一音频信号进行时频变换后获得。
一种可能的实现方式中,均衡器校准模块包括多个子带滤波器;校准参数包括待校准的频带数量、每个待校准的频带对应的滤波器类型、每个待校准的频带对应的滤波器的中心频率、每个待校准频带对应的频率响应增益。
一种可能的实现方式中,根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定校准参数,包括:根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定需要校准的频点、需要校准的频点数量以及需要校准的频点对应的频率响应增益。根据需要校准的频点、需要校准的频点数量以及需要校准的频点对应的频率响应增益,确定校准参数。
一种可能的实现方式中,需要校准的频点为第一音频信号的频率响应和测试音频信号的标准频率响应相差超过预设频响增益的频点。需要校准的频点对应的频率响应增益为:需要校准的频点处,第一音频信号的频率响应和测试音频信号的标准频率响应的差值。
一种可能的实现方式中,若需要校准的频点数量小于或等于N,则待校准的频带数量为N;N个待校准的频带的中心频点为:第一音频信号的频率响应和测试音频信号的频率响应相差最大的N个频点。每个待校准的频带对应的滤波器类型均为:峰值滤波器。每个待校准的频带对应的滤波器的中心频率为:待校准的频带的中心频点对应的频率。每个待校准频带对应的滤波器的频率响应增益为:待校准的频带的中心频点处,第一音频信号的频率响应和测试音频信号的标准频率响应的差值。其中,N为预设的最低支持的校准子带的数量。
一种可能的实现方式中,若需要校准的频点数量大于N,且小于或等于M,则待校准的频带数量为需要校准的频点数量。每个待校准的频带对应的滤波器类型均为:峰值滤波器。每个待校准的频带对应的滤波器的中心频率为:每个需要校准的频点对应的频率。每个待校准频带对应的滤波器的频率响应增益为:每个需要校准的频点对应的频率响应增益。其中,N为预设的最低支持的校准子带的数量;M为预设的最大支持的校准子带的数量。
一种可能的实现方式中,若需要校准的频点数量大于M,则待校准的频带数量的数量为N;待校准的频带由需要校准的频点部分合并得到。在待校准的频带中,除最低待校准的频带和最高待校准的频带外,每个待校准的频带对应的滤波器类型均为: 峰值滤波器。每个待校准的频带对应的滤波器的中心频率为:待校准的频带的中心频点对应的频率。每个待校准频带对应的滤波器的频率响应增益为:每个待校准的频带中,所有的需要校准的频点对应的频率响应增益的平均增益。
一种可能的实现方式中,若需要校准的频点数量大于M,则待校准的频带数量的数量为N。待校准的频带由需要校准的频点部分合并得到。在N个待校准的频带中的最低待校准的频带中,若最低待校准的频带的最低频点为f1,且最低待校准的频带中需要校准的频点数量大于或等于第一阈值,则最低待校准的频带对应的滤波器类型为:低频搁架滤波器。最低待校准的频带对应的滤波器的中心频率为:最低待校准的频带中的最高频点对应的频率。最低待校准的频带对应的频率响应增益为:最低待校准的频带中多个需要校准的频点对应的频率响应增益的平均增益。在N个待校准的频带中的最低待校准的频带中,若最低待校准的频带的最低频点大于f1,或者最低待校准的频带中需要校准的频点数量小于第一阈值,则最低待校准的频带对应的滤波器类型为:峰值滤波器。最低待校准的频带对应的滤波器的中心频率为:最低待校准的频带中的中心频点对应的频率。最低待校准的频带对应的频率响应增益为:最低待校准的频带中,所有的需要校准的频点对应的频率响应增益的平均增益。其中,f1为预设的需要校准的最低频率。
一种可能的实现方式中,若需要校准的频点数量大于M,则滤波器的数量为N。待校准的频带由需要校准的频点部分合并得到。在N个待校准的频带中的最高待校准的频带中,若最高待校准的频带的最高频点为f2,且最高待校准的频带中需要校准的频点数量大于或等于第二阈值,则最高待校准的频带对应的滤波器类型为:高频搁架滤波器。最高待校准的频带对应的滤波器的中心频率为:最高待校准的频带中的最低频点对应的频率。最高待校准的频带对应的频率响应增益为:最高待校准的频带中多个需要校准的频点对应的频率响应增益的平均增益。在N个待校准的频带中的最高待校准的频带中,若最高待校准的频带的最高频点小于f2,或者最高待校准的频带中需要校准的频点数量小于第一阈值,则最高待校准的频带对应的滤波器类型为:峰值滤波器。最高待校准的频带对应的滤波器的中心频率为:最高待校准的频带中的的中心频点对应的频率。最高待校准的频带对应的频率响应增益为:最高待校准的频带中,所有的需要校准的频点对应的频率响应增益的平均增益。其中,f2为预设的需要校准的最高频率。
第三方面,提供一种音频播放方法。该音频播放方法应用于电子设备;电子设备包括均衡器校准模块和屏幕发声器件。该方法包括:接收音频播放指令。音频播放指令用于指示电子设备播放第二音频信号。响应于接收到音频播放指令,获取校准参数。校准参数为根据如上第一方面任一种可能的实现方式中的方法储存在电子设备的校准参数。通过均衡器校准模块并利用校准参数,调节第二音频信号的频率响应,得到第三音频信号。通过屏幕发声器件,播放第三音频信号。
第四方面,提供一种校准系统。该校准系统包括电子设备和校准设备。电子设备与校准设备通信连接。电子设备用于执行如上第一方面任一种可能的实现方式中的方法。
第五方面,提供一种校准设备。该校准设备与电子设备通信连接。该校准设备包 括:校准控制模块、均衡器参数计算模块和人工耳。校准控制模块,用于向电子设备发送检测指令;检测指令用于指示电子设备播放测试音频信号。人工耳,用于在电子设备播放测试音频信号过程中,采集电子设备播放的测试音频信号,得到第一音频信号。均衡器参数计算模块,用于根据第一音频信号的频率响应与测试音频信号的标准频率响应,确定校准参数。校准参数用于当电子设备播放第二音频信号时,调节电子设备播放的第二音频信号的频率响应。校准控制模块,还用于向电子设备发送校准参数。
一种可能的实现方式中,测试音频信号为全频域的扫频信号。
一种可能的实现方式中,第一音频信号的频率响应由第一音频信号进行时频变换后获得。
一种可能的实现方式中,校准参数包括待校准的频带数量、每个待校准的频带对应的滤波器类型、每个待校准的频带对应的滤波器的中心频率、每个待校准频带对应的频率响应增益。
一种可能的实现方式中,均衡器参数计算模块具体用于:根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定需要校准的频点、需要校准的频点数量以及需要校准的频点对应的频率响应增益。根据需要校准的频点、需要校准的频点数量以及需要校准的频点对应的频率响应增益,确定校准参数。
一种可能的实现方式中,需要校准的频点为第一音频信号的频率响应和测试音频信号的标准频率响应相差超过预设频响增益的频点。需要校准的频点对应的频率响应增益为:需要校准的频点处,第一音频信号的频率响应和测试音频信号的标准频率响应的差值。
第六方面,提供一种校准设备。该校准设备包括处理器和存储器。存储器中存储有一个或多个计算机程序,一个或多个计算机程序包括指令,当指令被处理器执行时,使得校准设备执行如上第二方面中任一种可能的实现方式中的方法。
第七方面,提供一种电子设备。该电子设备包括:一个或多个处理器;存储器;通信模块。其中,通信模块用于与校准设备通信。存储器中存储有一个或多个计算机程序,一个或多个计算机程序包括指令,当指令被处理器执行时,使得电子设备执行如上第一方面和第三方面中任一种可能的实现方式中方法。
第八方面,提供一种芯片系统,该芯片系统包括一个或多个接口电路和一个或多个处理器。该接口电路和处理器通过线路互联。该芯片系统可以应用于包括通信模块和存储器的电子设备。该接口电路可以读取电子设备中存储器中存储的指令,并将该指令发送给处理器。当所述指令被处理器执行时,可使得电子设备执行如上第一方面中任一种可能的实现方式中所述的方法,或者可使得校准设备执行如上第二方面中任一种可能的实现方式中所述的方法。
第九方面,提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令在校准设备上运行时,使得校准设备执行如上第二方面中任一种可能的实现方式中的方法。
第十方面,提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令在校准设备上运行时,使得电子设备执行如上第一方面和第三方面中任一种可 能的实现方式中的方法。
可以理解地,上述提供的第二方面至第十方面中任一种可能的实现方式带来的技术效果可参见第一方面中不同实现方式所带来的技术效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种用户通过电子设备进行语音通信的场景示意图;
图2为本申请实施例提供的一种电子设备的结构示意图;
图3为本申请实施例提供的另一种电子设备的结构示意图;
图4为本申请实施例提供的屏幕发声器件发声的原理图;
图5为本申请实施例提供的4.2微法的压电陶瓷的频阻特性曲线图;
图6为本申请实施例提供的不同种类不同参数的滤波器的作用示意图;
图7为本申请实施例提供的一种校准系统的结构示意图;
图8A为本申请实施例提供的一种校准系统中电子设备和校准设备的系统结构框图;
图8B为本申请实施例提供的另一种校准系统中电子设备和校准设备的系统结构框图;
图9为本申请实施例提供的一种电子设备的软件结构框图;
图10A为本申请实施例提供的一种频率响应一致性的校准方法的流程图;
图10B为图10A中的步骤S1004中的流程图;
图11为本申请实施例提供的一种第一音频信号的频率响应曲线与测试音频信号的标准频率响应曲线对比图;
图12为本申请实施例提供的另一种频率响应一致性的校准方法的流程图;
图13为本申请实施例提供的一种屏幕发声器件的频率响应曲线在频率响应一致性校准前后的对比图;
图14为本申请实施例提供的一种芯片系统的结构示意图。
具体实施方式
为了便于理解,示例性的给出了部分与本申请实施例相关概念的说明以供参考。如下所示:
容性器件:通常情况下,可以将具有电容性参数的负载(即与电压滞后电流的特性相匹配的负载)称为容性负载或容性器件。容性器件在电子学系统中的负载可等效为电容。例如压电微机电系统(micro-electro-mechanical system,MEMS)、薄膜材料、静电扬声器等都是容性器件。容性器件在充电/放电期间电压不会突然变化。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请的描述中,除非另有说明,“至少一个”是指一个或多个,“多个”是指两个或多于两个。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
目前,较多的电子设备均具有语音通信功能或音频播放功能。为了实现语音通信功能,电子设备中需要安装发声器件才能使用户在语音通信时听到对方的声音。类似地,为 实现音频播放功能,电子设备也需要安装发声器件。以手机这类电子设备实现语音通信功能为例,手机的顶部会设置听筒(也可称为扬声器)作为用于语音通信的发声器件,实现语音通信功能。通常情况下,听筒是设置在手机的内部,需要在手机的前面板上开孔形成出音孔。当听筒发声时,听筒发出的声音能量能够通过出音孔传出,以使用户听到听筒发出的声音。然而,随着手机的不断发展,为了向用户提供更好的屏幕观看体验,手机屏幕的屏占比越来越高。由于设置在前面板上的出音孔占用了手机的前面板部分区域,会增加手机边框的宽度,因此会对手机的屏占比的提高造成影响。
随着大屏以及全面屏手机的发展,为提高手机的屏占比,需要减小听筒的出音孔在手机前面板上的占用面积。例如,将手机听筒的出音孔设计为长缝形态,并且使该出音孔位于手机中框和前面板的连接处(也可称为手机侧缝处)。在某些情况下,为了确保手机听筒的出音孔具有良好出音效果,还可以在手机中框的顶部开孔作为出音孔。在此情况下,当用户使用手机进行语音通信时,用户的耳廓无法完全覆盖包裹出音孔,手机听筒的声音能量无法完全传递至用户的耳廓内,从而产生漏音现象。
示例性地,以手机为例,在用户手持手机通过听筒进行语音通信的过程中,手机听筒用于在语音通信过程中播放对侧用户的声音信号(即手机听筒为上述用于语音通信中通话发声的扬声器)。如图1所示,手机听筒的出音孔201靠近用户的耳朵(或者耳廓)。在此时,由于手机听筒的出音孔201(如位于手机侧缝处的出音孔和中框顶部的出音孔)无法被用户的耳朵完全包裹覆盖,因此由出音孔201发出的声音信号不仅能够被该用户听到,在安静环境下还能够被其他用户听到,从而产生漏音现象。
为了避免听筒发声时出现漏音现象,一些电子设备会采用屏幕发声来替代听筒发声,或采用屏幕发声和听筒同时发声。例如,如图2所示,为电子设备的结构示意图。其中,电子设备包括壳体结构100。该壳体结构100是由前面板(包括屏幕和边框)、用于支撑内部电路的后面板以及中框围合形成的。如图2中的(a)所示,电子设备的壳体结构100内设置有听筒101和屏幕发声器件104。其中,听筒101即为用于语音通信中通话发声的扬声器,也称为受话器,通常设置于壳体结构的顶部位置处。屏幕发声器件104可以为连接在屏幕下方的振动源。结合图2中的(b)所示,对应于该听筒101,电子设备设置有两处出音孔,分别为出音孔102和出音孔103。其中,出音孔102位于该电子设备前面板和中框的连接处(即侧缝处)。出音孔103位于该电子设备的中框上距离上述听筒较近的位置(即电子设备的中框顶部位置)。如此一来,该图2所示的电子设备能够通过听筒发声,或通过屏幕发声,或通过听筒和屏幕发声同时发声,以避免单纯听筒发声出现的漏音现象。
应理解,针对不同的屏幕发声方案,电子设备中的屏幕发声器件的具体结构有所不同。例如,屏幕发声器件可以是连接在屏幕背面的振动源(如压电陶瓷、马达振子、激励器或其他振动单元)。该振动源可以在电流信号的控制作用下振动以带动屏幕振动,从而实现屏幕发声。又例如,屏幕发声器件还可以是通过悬臂梁结构固定在电子设备的中框上的压电陶瓷。该压电陶瓷可以在电流信号的控制作用下振动,并利用手机中框将振动传递至屏幕以带动屏幕振动,从而实现屏幕发声。又例如,屏幕发声器件还可以是固定在电子设备的中框上的激励器。该激励器可以在电流信号的控制作用下振动,并利用手机中框将振动传递至屏幕以带动屏幕振动,从而实现屏幕发声。再例如,屏幕发声器件还可以是分体式磁悬浮振子。该分体式磁悬浮振子中的其中一个振子固定在电子设备的中框上,另一个振子固定在屏幕上,固定在屏幕上的振子可以在电流信号的控制作用下,相对于固定在电子设备的中框上的振子振动,从而推动屏幕振动,以实现屏幕发声。
然而,采用屏幕发声器件(如压电陶瓷等容性器件)发声的电子设备(如手机),在组装屏幕发声器件时,需要将屏幕发声器件贴合于手机屏幕的下方。通常情况下,屏幕发声器件与手机屏幕贴合后会存在一定的误差范围。并且,由于手机屏幕的生产厂商不同,可能导致屏幕的尺寸、质量存在些许差异,这些都可能导致电子设备在使用的过程中,屏幕发声器件发声时的频率响应曲线的离散程度较高,出现高低音失衡等问题,使用户听感不佳。
为解决上述问题,本申请实施例提供一种针对屏幕发声器件(如压电陶瓷等容性器件)的频率响应一致性的校准方法。在电子设备出厂之前,通过播放扫频信号(即测试音频信号),检测该电子设备的屏幕发声器件发声时的频率响应曲线,并将检测得到的频率响应曲线与标准的频率响应曲线进行对比,得到在该电子设备的屏幕发声器件发声的过程中,该屏幕发声器件的频率响应曲线中需要进行调节的多个关键频点以及调节该关键频点的校准参数。将需要调节的多个关键频点以及调节该关键频点的校准参数存储在存储器中。在电子设备的音频播放通路中还可以增加均衡器校准模块,在待播放的音频信号通过屏幕发声器件播放之前,先通过均衡器校准模块对音频信号的关键频点进行增强和抑制,以调整该屏幕发声器件发声时的频率响应曲线的平滑度,从而使最终通过屏幕发声器件输出的音频信号的频响曲线在预设的出厂阈值范围内,提高电子设备的可靠性和稳定性,以提高用户体验。
以下,将结合附图对本申请实施例提供的频率响应一致性的校准方法进行说明。
示例性的,本申请实施例中的电子设备可以是手机、平板电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、可穿戴式设备(如:智能手表、智能手环)等具备语音通信功能的设备,本申请实施例对该电子设备的具体形态不作特殊限制。
示例地,以电子设备为手机为例,图3示出了本申请实施例提供的另一种电子设备的结构示意图。也即,示例性的,图3所示的电子设备可以是手机。
如图3所示,手机可以包括:处理器310,外部存储器接口320,内部存储器321,通用串行总线(universal serial bus,USB)接口330,充电管理模块340,电源管理模块341,电池342,天线1,天线2,移动通信模块350,无线通信模块360,音频模块370,扬声器370A,受话器(即听筒)370B,麦克风370C,耳机接口370D,传感器模块380,按键390,马达391,指示器392,摄像头393,显示屏394,以及用户标识模块(subscriber identification module,SIM)卡接口395,屏幕发声器件396等。
其中,上述传感器模块可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器和骨传导传感器等传感器。
可以理解的是,本实施例示意的结构并不构成对手机的具体限定。在另一些实施例中,手机可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器310可以包括一个或多个处理单元,例如:处理器310可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是手机的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
DSP可以包括智能功放(smart PA)硬件电路、smart PA算法模块、音频算法模块。其中,smart PA硬件电路可以分别与应用处理器和屏幕发声器件(如压电陶瓷)连接,用于根据应用处理器的指令控制屏幕发声器件发声。smart PA算法模块包括均衡器校准模块,在该均衡器校准模块中可以设置多段滤波器,通过不同参数不同种类的滤波器的共同作用调节屏幕发声器件发声时的频率响应曲线。
应理解,smart PA硬件电路也可以设置在DSP芯片的外部,本申请实施例不做特殊限定。
处理器310中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器310中的存储器为高速缓冲存储器。该存储器可以保存处理器310刚用过或循环使用的指令或数据。如果处理器310需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器310的等待时间,因而提高了系统的效率。在本申请实施例中,存储器可以用于存储调整屏幕发声器件频率响应曲线一致性的校准参数(例如滤波器参数)。
在一些实施例中,处理器310可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对手机的结构限定。在另一些实施例中,手机也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块340用于从充电器(如无线充电器或有线充电器)接收充电输入,为电池342充电。电源管理模块341用于连接电池342,充电管理模块340与处理器310。电源管理模块341接收电池342和/或充电管理模块340的输入,为电子设备的各个器件供电。
手机的无线通信功能可以通过天线1,天线2,移动通信模块350,无线通信模块360,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线 1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
在一些实施例中,手机的天线1和移动通信模块350耦合,天线2和无线通信模块360耦合,使得手机可以通过无线通信技术与网络以及其他设备通信。上述移动通信模块350可以提供应用在手机上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块350可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块350可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。
移动通信模块350还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块350的至少部分功能模块可以被设置于处理器310中。在一些实施例中,移动通信模块350的至少部分功能模块可以与处理器310的至少部分模块被设置在同一个器件中。
无线通信模块360可以提供应用在手机上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。
无线通信模块360可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块360经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器310。无线通信模块360还可以从处理器310接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
当然,上述无线通信模块360也可以支持手机进行语音通信。例如,手机可以通过无线通信模块360接入Wi-Fi网络,然后使用任一种可提供语音通信服务的应用程序与其他设备进行交互,为用户提供语音通信服务。例如,上述可提供语音通信服务的应用程序可以是即时通讯应用。
手机可以通过GPU,显示屏394,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏394和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器310可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。显示屏394用于显示图像,视频等。
手机可以通过ISP,摄像头393,视频编解码器,GPU,显示屏394以及应用处理器等实现拍摄功能。ISP用于处理摄像头393反馈的数据。在一些实施例中,ISP可以设置在摄像头393中。摄像头393用于捕获静态图像或视频。在一些实施例中,手机可以包括1个或N个摄像头393,N为大于1的正整数。
外部存储器接口320可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机的存储能力。内部存储器321可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器310通过运行存储在内部存储器321的指令,从而执行手机的各种功能应用以及数据处理。例如,在本申请实施例中,处理器310可以通过执行存储在内部存储器321中的指令,实现本申请实施例提供的频率响应一致性的校准方法。内部存储器321可以包括存储程序区和存储数据区。
手机可以通过音频模块370,扬声器370A,受话器(即听筒)370B,麦克风370C, 耳机接口370D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块370用于将数字音频信号转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块370还可以用于对音频信号编码和解码。在一些实施例中,音频模块370可以设置于处理器310中,或将音频模块370的部分功能模块设置于处理器310中。扬声器370A,也称“喇叭”,用于将音频电信号转换为声音信号。受话器370B,也称“听筒”,用于将音频电信号转换成声音信号。麦克风370C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。耳机接口370D用于连接有线耳机。耳机接口370D可以是USB接口330,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
其中,该受话器370B(即“听筒”)可以是图2中的(a)所示的听筒101。
示例性的,本申请实施例中,音频模块370可以将移动通信模块350和无线通信模块360接收到的音频电信号转换为声音信号。由音频模块370的受话器370B(即“听筒”)播放该声音信号,同时由屏幕发声器件396来驱动屏幕(即显示屏)进行屏幕发声以播放该声音信号。
按键390包括开机键,音量键等。马达391可以产生振动提示。指示器392可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口395用于连接SIM卡。手机可以支持1个或N个SIM卡接口,N为大于1的正整数。
当然,可以理解的,上述图3所示仅仅为电子设备的设备形态为手机时的示例性说明。若电子设备是平板电脑,手持计算机,PDA,可穿戴式设备(如:智能手表、智能手环)等其他设备形态时,电子设备的结构中可以包括比图3中所示更少的结构,也可以包括比图3中所示更多的结构,在此不作限制。
如图4所示为屏幕发声器件发声的原理图。其中,该屏幕发声器件包括多层压电陶瓷。该多层压电陶瓷形成振动膜,在施加交流驱动信号后,能够在压电效应下发生弯曲形变推动振动膜发声。
通常情况下,压电陶瓷(即屏幕发声器件)的阻抗满足如下关系式:
z=1/(2πfC)
其中,z为压电陶瓷的阻抗,C为电容,f为交流信号频率。由此可见,压电陶瓷的等效阻抗随着输入的交流信号的频率上升而急剧下降。例如,如图5所示为4.2微法的压电陶瓷的频阻特性曲线,4.2微法(uF)的压电陶瓷在交流信号的频率为200赫兹(Hz)时,压电陶瓷的等效阻抗约为160欧姆(Ohm),4.2微法(uF)的压电陶瓷在交流信号的频率为10k赫兹(Hz)时,压电陶瓷的等效阻抗约为3.7欧姆(Ohm)。
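为便于理解上式,下面给出一个示意性的Python计算片段(仅按理想电容模型计算,函数名为示例命名;实际器件的等效阻抗并非理想电容,见下文说明):

```python
import math

def capacitive_impedance(f_hz, c_farad):
    """理想电容在频率f处的阻抗幅值:|z| = 1/(2*pi*f*C)。"""
    return 1.0 / (2.0 * math.pi * f_hz * c_farad)

C = 4.2e-6  # 4.2微法(uF)的压电陶瓷
for f in (200, 1000, 10000):
    print(f, "Hz ->", round(capacitive_impedance(f, C), 1), "Ohm")
# 按理想电容计算,200Hz约为189Ohm,10kHz约为3.8Ohm,
# 与图5中实测的约160Ohm、3.7Ohm处于同一量级;差异来自器件的非理想因素。
```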
需要说明的是,实际上,由多层压电陶瓷形成的屏幕发声器件中并非仅仅包括压电陶瓷,还可能包括电极引线、介电物质以及其他组件等。因而,由多层压电陶瓷形成的屏幕发声器件的等效阻抗为非线性曲线,会与温度、频率和材料等相关。
为了适应屏幕发声器件的非线性特性,在电子设备的音频播放通路中增加均衡器校准模块,并且该均衡器校准模块采用多段均衡滤波器来实现。也就是说,该均衡器校准模块中通常由N个子带滤波器形成。例如,N可以为6至12之间的正整数。
每个子带滤波器的参数可以包括:
系统采样率fs:系统采样率由电子设备采用的处理器型号来确定。例如,系统采样率可以为48000Hz、44100Hz、16000Hz等。
中心频率f0:是指滤波器通带的中心频率f0,一般取f0=(f1+f2)/2,其中,f1、f2可以为带通滤波器左、右相对下降1dB或3dB的边频点。在本申请实施例中,可以根据音频信号的频率响应曲线中待校准的频点来确定滤波器的中心频率。
峰值带宽Q:也称滤波器的品质因子Q。滤波器的品质因子Q为滤波器的中心频率f0除以滤波器带宽BW。滤波器带宽是指滤波器需要通过的频谱宽度,即滤波器带宽BW=(f2-f1),f1、f2为带通滤波器的中心频率f0左、右相对下降1dB或3dB边频点。也就是说,滤波器的品质因子Q=f0/(f2-f1)。
峰值增益gain:在本申请实施例中,可以根据待校准的频点或频带的中心频点,在音频信号的频率响应曲线与标准的频率响应曲线之间的差值确定。
滤波器类型:例如,峰值滤波器(Peak Filter)、低频搁架滤波器(low shelf filter,LS)、高频搁架滤波器(high shelf filter,HS)、低通滤波器(low-pass filter,LP)、高通滤波器(high-pass filter,HP)、带通滤波器(band-pass filter,BP)。
图6示出了不同种类不同参数的滤波器的作用示意图。对于不同Q值的峰值滤波器来说,滤波器的Q值决定了滤波器的带宽,当滤波器的Q值越大时,带宽越窄,影响的频率区间越小,如图6中的(a)所示,从图示的中心区域到左右两侧的曲线中,最内侧的曲线Q值最大,最外侧的曲线Q值最小。对于不同增益的峰值滤波器来说,增益的绝对值越大,提升或衰减的程度越大,如图6中的(b)所示的滤波器曲线中,从外到内的曲线的增益依次为12dB、10dB、8dB、6dB、4dB、2dB。如图6中的(c)所示,对于低频搁架滤波器,主要用于提升或衰减低频部分的振幅。如图6中的(d)所示,对于高频搁架滤波器,主要用于提升或衰减高频部分的振幅。
滤波器阶数:例如一阶滤波器、二阶滤波器、三阶滤波器。通过上述各个子带的滤波器参数,可以计算得到各个子带的滤波器的系数。以二阶IIR滤波器为例,二阶IIR滤波器的系数有a(0)、a(1)、a(2)、b(0)、b(1)、b(2)。
综上所述,上述每个子带滤波器的参数中,滤波器的中心频率、峰值带宽、峰值增益、滤波器类型均可以根据未经过校准的频响曲线和标准的频响曲线对比得出。
得到上述滤波器的参数后,根据不同阶数和不同种类的滤波器,可以计算得到滤波器的系数。下面举例说明不同种类的滤波器中的滤波器系数的计算方法。
二阶IIR峰值滤波器系数的一种计算方法:
其中,已知参数为:中心频率f0、峰值增益gain、峰值带宽Q、系统采样率fs。
中间变量:
A=sqrt(10**(gain/20));
w0=2*pi*f0/fs;
alpha=sin(w0)/(2*Q);
计算得到滤波器系数:
b0=1+alpha*A;
b1=-2*cos(w0);
b2=1-alpha*A;
a0=1+alpha/A;
a1=-2*cos(w0);
a2=1-alpha/A。
二阶IIR低频搁架滤波器系数的一种计算方法:
其中,已知参数为:中心频率f0、峰值增益gain、峰值带宽Q、系统采样率fs。
中间变量:
A=sqrt(10**(gain/20));
w0=2*pi*f0/fs;
alpha=sin(w0)/(2*Q);
计算得到滤波器系数:
b0=A*((A+1)-(A-1)*cos(w0)+2*sqrt(A)*alpha);
b1=2*A*((A-1)-(A+1)*cos(w0));
b2=A*((A+1)-(A-1)*cos(w0)-2*sqrt(A)*alpha);
a0=(A+1)+(A-1)*cos(w0)+2*sqrt(A)*alpha;
a1=-2*((A-1)+(A+1)*cos(w0));
a2=(A+1)+(A-1)*cos(w0)-2*sqrt(A)*alpha。
二阶IIR高频搁架滤波器系数的一种计算方法:
其中,已知参数为:中心频率f0、峰值增益gain、峰值带宽Q、系统采样率fs。
中间变量:
A=sqrt(10**(gain/20));
w0=2*pi*f0/fs;
alpha=sin(w0)/(2*Q);
计算得到滤波器系数:
b0=A*((A+1)+(A-1)*cos(w0)+2*sqrt(A)*alpha);
b1=-2*A*((A-1)+(A+1)*cos(w0));
b2=A*((A+1)+(A-1)*cos(w0)-2*sqrt(A)*alpha);
a0=(A+1)-(A-1)*cos(w0)+2*sqrt(A)*alpha;
a1=2*((A-1)-(A+1)*cos(w0));
a2=(A+1)-(A-1)*cos(w0)-2*sqrt(A)*alpha。
得到上述滤波器的系数后,可以根据每个子带的滤波器的输入信号和滤波器的系数,通过如下公式(一)得到滤波器的输出信号。
y(n)=[b(0)x(n)+b(1)x(n-1)+b(2)x(n-2)-a(1)y(n-1)-a(2)y(n-2)]/a(0)    公式(一)
其中,y(n)为第n个采样点的信号经过滤波器后的输出;x(n)为输入至滤波器的第n个采样点的信号。
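为便于理解上述各类滤波器的系数计算以及公式(一)的使用方式,下面给出一段示意性的Python实现(并非本申请的正式实现,函数名、变量名均为示例;系数公式按上文整理,滤波时先用a0对系数归一化):

```python
import math

def biquad_coeffs(filter_type, f0, gain_db, Q, fs):
    """按上文公式计算二阶IIR滤波器系数,返回(b0, b1, b2, a0, a1, a2)。"""
    A = math.sqrt(10 ** (gain_db / 20))
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    cosw0 = math.cos(w0)
    if filter_type == "peak":            # 峰值滤波器
        b0, b1, b2 = 1 + alpha * A, -2 * cosw0, 1 - alpha * A
        a0, a1, a2 = 1 + alpha / A, -2 * cosw0, 1 - alpha / A
    elif filter_type == "low_shelf":     # 低频搁架滤波器
        b0 = A * ((A + 1) - (A - 1) * cosw0 + 2 * math.sqrt(A) * alpha)
        b1 = 2 * A * ((A - 1) - (A + 1) * cosw0)
        b2 = A * ((A + 1) - (A - 1) * cosw0 - 2 * math.sqrt(A) * alpha)
        a0 = (A + 1) + (A - 1) * cosw0 + 2 * math.sqrt(A) * alpha
        a1 = -2 * ((A - 1) + (A + 1) * cosw0)
        a2 = (A + 1) + (A - 1) * cosw0 - 2 * math.sqrt(A) * alpha
    elif filter_type == "high_shelf":    # 高频搁架滤波器
        b0 = A * ((A + 1) + (A - 1) * cosw0 + 2 * math.sqrt(A) * alpha)
        b1 = -2 * A * ((A - 1) + (A + 1) * cosw0)
        b2 = A * ((A + 1) + (A - 1) * cosw0 - 2 * math.sqrt(A) * alpha)
        a0 = (A + 1) - (A - 1) * cosw0 + 2 * math.sqrt(A) * alpha
        a1 = 2 * ((A - 1) - (A + 1) * cosw0)
        a2 = (A + 1) - (A - 1) * cosw0 - 2 * math.sqrt(A) * alpha
    else:
        raise ValueError("unsupported filter type")
    return b0, b1, b2, a0, a1, a2

def biquad_filter(x, coeffs):
    """按公式(一)逐点计算输出y(n);系数先用a0归一化,x为采样点序列。"""
    b0, b1, b2, a0, a1, a2 = coeffs
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y
```

例如,biquad_coeffs("peak", f0=100, gain_db=-5, Q=4, fs=48000)即可得到一个中心频率为100Hz、衰减5dB、Q值为4的峰值滤波器系数,再交给biquad_filter对采样点序列滤波。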
下面以电子设备为手机为例,并结合系统架构和流程图,对本申请实施例提供的一种频率响应曲线一致性的校准方法进行详细说明。
图7为本申请实施例提供的一种校准系统的结构示意图。如图7所示,该校准系统可以包括电子设备和校准设备。在产线,当需要对电子设备中的屏幕发声器件的频率响应曲线的一致性进行校准时,可以将电子设备与校准设备通信连接。当待校准的电子设备与校准设备通信连接之后,电子设备用于通过其屏幕发声器件播放测试音频信号,校准设备用于在电子设备播放测试音频信号时采集电子设备播放的测试音频信号,得到第一音频信号。
通过校准设备采集完成电子设备播放的测试音频信号之后,待校准的电子设备或校准设备可以对采集得到的第一音频信号进行分析,获得该第一音频信号对应的频率响应。并且可以将该第一音频信号对应的频率响应与测试音频信号的标准频率响应进行对比,获取需要校准的频点、需要校准的频点对应的频率响应增益以及需要校准的频点数量。待校准的电子设备或校准设备还可以根据需要校准的频点、需要校准的频点对应的频率响应增益以及需要校准的频点数量确定滤波器的数量、滤波器的类型、滤波器的中心频率、滤波器的增益以及滤波器的Q值等校准参数。当待校准的电子设备或校准设备获得上述校准参数后,可以将上述校准参数存储至电子设备非易失性存储设备中,以便电子设备在正常使用过程中播放音频信号时,调节屏幕发声器件的频率响应曲线的一致性,从而使电子设备的频率响应具有一致性,使用户的听感更佳,提高用户体验。
图8A和图8B为本申请实施例提供的校准系统中的电子设备和校准设备的系统框图。图9为本申请实施例提供的电子设备的软件结构框图。下面分别介绍电子设备与校准设备的系统结构。
对于电子设备:
请参考图8A、图8B和图9,上述电子设备(如手机)的系统结构可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明手机的软件结构。当然,在其他操作系统中,只要各个功能模块实现的功能与本申请实施例类似,同样可以实现本申请实施例的方案。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为五层,从上至下分别为应用程序层,应用程序框架层(framework),安卓运行时(Android runtime)和系统库(libraries),HAL(hardware abstraction layer,硬件抽象层)以及内核层(kernel)。
应用程序层可以包括一系列应用程序包。如图9所示,应用程序层中可以安装通话,备忘录,浏览器,联系人,相机,图库,日历,地图,蓝牙,音乐,视频,短信息等应用(application)。在本申请实施例中,应用程序层还可以安装校准应用。在一些实施方式中,该校准应用可以接收校准设备的校准信号(也可以称为测试指令),以使电子设备在接收到校准信号后,执行播放测试音频信号的操作,以便对屏幕发声器件播放的音频信号的频率响应进行校准。在另一些实施方式中,该校准应用可以向校准设备发送校准信号(也可以称为测试指令),并在发送校准信号之后,控制电子设备播放测试音频信号,以使校准设备在接收到校准信号后,启动采集设备(如人工耳)采集电子设备播放的测试音频信号。
示例地,当校准应用接收到校准设备的校准信号,或者校准应用向校准设备发送校准信号之后,校准应用会向HAL层中对应的HAL(如smart PA控制HAL)发送测试命令,以使smart PA控制HAL控制电子设备播放测试音频信号。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。如图9所示,应用程序框架层中设置有音频播放管理服务。音频播放管理服务可以用于初始化音视频播放器,获取当前音频的音量大小,调节音频播放的音量大小,增加音效等。
另外,应用程序框架层还可以包括窗口管理服务,内容提供服务,视图系统,资源管理服务,通知管理服务等,本申请实施例对此不做任何限制。
例如,上述窗口管理服务用于管理窗口程序。窗口管理服务可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。上述内容提供服务用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。上述视图系统可用于构建应用程序的显示界面。每个显示界面可以由一个或多个控件组成。一般而言,控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、微件(Widget)等界面元素。上述资源管理服务为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。上述通知管理服务使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理服务被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,振动,指示灯闪烁等。
如图9所示,手机的HAL中提供了与手机的不同硬件模块对应的HAL,例如,Audio HAL、Camera HAL、Wi-Fi HAL以及smart PA控制HAL和信息存储HAL等。
其中,Audio HAL通过内核层的音频驱动可与音频输出器件(如扬声器、屏幕发声器件)对应。当手机设置有多个音频输出器件(如多个扬声器或屏幕发声器件)时,这多个音频输出器件分别与内核层的多个音频驱动对应。
smart PA控制HAL通过DSP中的smart PA算法与smart PA硬件电路相对应。示例性地,当smart PA控制HAL接收到应用程序层中的校准应用下发的测试命令时,smart PA控制HAL可以控制smart PA算法运行,以配置smart PA算法播放测试音频信号。当smart PA控制HAL接收到应用程序层中的通话应用下发的通话指令或者音乐应用下发的音乐播放指令时,smart PA控制HAL可以控制smart PA算法运行,以配置smart PA算法播放对方的声音信号或者音乐对应的音频信号。并且,smart PA控制HAL还可以通过I2C信号控制smart PA硬件电路(如屏幕发声器件的硬件电路(smart PA0))打开,以通过屏幕发声器件播放测试音频信号。
信息存储HAL与电子设备的非易失性存储介质(如存储器)相对应,用于将电子设备或者校准设备计算得出的均衡器参数(即校准参数)存储至电子设备的非易失性存储介质中。示例地,当电子设备计算得出均衡器参数(即校准参数)或者电子设备接收到校准设备计算得出的均衡器参数(即校准参数)时,信息存储HAL可以将该均衡器参数存储至电子设备的非易失性存储介质中。该均衡器参数用于在电子设备播放正常的音频信号时调节频率响应曲线的平滑度,以降低实际播放的音频信号的频率响应曲线与标准的频率响应曲线的差异,提高电子设备播放音频信号的质量,以提升用户的听感体验。
Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程 管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
其中,表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。
内核层位于HAL之下,是硬件和软件之间的层。内核层除了包括上述音频驱动以外,还可以包括显示驱动,摄像头驱动,传感器驱动等,本申请实施例对此不做任何限制。
需要说明的是,在内核层之下便是硬件电路。示例性地,在本申请实施例中包括数字信号处理(digital signal processing,DSP)芯片,在该DSP芯片中运行有smart PA算法模块、音频算法模块等。
其中,smart PA算法模块中包括均衡器校准模块。当smart PA控制HAL接收到校准应用下发的测试命令时,信息存储HAL会检测非易失性存储设备中是否存储有均衡器参数(即校准参数)。若信息存储HAL未检测到有均衡器参数,则信息存储HAL不能向均衡器校准模块发送该均衡器参数或者信息存储HAL向均衡器校准模块发送空的均衡器参数,均衡器校准模块不运行。此时,smart PA算法模块直接播放原始的测试音频信号。
当电子设备处于正常使用的过程中,与音频相关的应用(如通话应用或音乐应用)响应于用户的操作运行时,会向smart PA控制HAL下发音频播放指令。smart PA控制HAL接收到音频播放指令后,信息存储HAL会检测非易失性存储设备中是否存储有均衡器参数(即校准参数)。若信息存储HAL检测到有均衡器参数,则信息存储HAL从非易失性存储介质中获取校准参数,并将该均衡器参数发送至均衡器校准模块中。此时,均衡器校准模块接收到校准参数,根据该校准参数确定均衡器的类型、中心频率以及频响增益等参数。并且,该均衡器校准模块还可以将待播放的音频信号转换为频域信号,并使用均衡器校准模块中的多段均衡器进行频域增益的校准,将经过校准的音频信号发送至smart PA硬件电路,以使屏幕发声器件播放该经过校准的音频信号。
在一种可能的实施方式中,如图8B和图9所示,该DSP芯片中还运行有均衡器参数计算模块。在校准设备采集到电子设备播放的测试音频信号之后,校准设备可以将采集到的声音信号(即第一音频信号)发送至校准应用。校准应用再将第一音频信号通过HAL层(如smart PA控制HAL)发送至均衡器参数计算模块中。该均衡器参数计算模块接收到第一音频信号之后,可以将第一音频信号的频率响应和测试音频信号的标准频率响应进行对比,确定均衡器参数(即校准参数)。
当然,上述均衡器参数计算模块也可以是校准应用中的模块,本申请对此不作特殊限定。
对于校准设备:
在一种实现方式中,请参考图8A,当电子设备不包括均衡器参数计算模块时,该校准设备包括校准控制模块、均衡器参数计算模块以及人工耳。其中,校准控制模块用于与电子设备通信。例如,校准控制模块可以用于向电子设备发送测试指令(即校准信号),以 指示电子设备通过smart PA播放测试音频信号,并在发送测试指令之后,控制人工耳启动声音信号的采集。又例如,校准控制模块还可以用于将均衡器参数计算模块输出的校准参数发送至电子设备,用于指示电子设备存储该校准参数。再例如,校准控制模块还可以用于接收电子设备发送的测试指令(即校准信号),并在接收到测试指令时,控制人工耳启动声音信号的采集。
人工耳用于采集电子设备播放测试音频信号时输出的声音信号(即第一音频信号),并将采集的声音信号转换为电信号传输至均衡器参数计算模块。
均衡器参数计算模块用于接收第一音频信号,并在接收到第一音频信号之后,将第一音频信号的频率响应和测试音频信号的标准频率响应进行对比,确定均衡器参数(即校准参数)。
在另一种实现方式中,请参考图8B,当电子设备包括均衡器参数计算模块时,该校准设备可以仅包括校准控制模块和人工耳。其中,校准控制模块用于与电子设备通信。例如,校准控制模块可以用于向电子设备发送测试指令(即校准信号),以指示电子设备通过smart PA播放测试音频信号,并在发送测试指令之后,控制人工耳启动声音信号的采集。又例如,校准控制模块还可以用于接收电子设备发送的测试指令(即校准信号),并在接收到测试指令时,控制人工耳启动声音信号的采集。再例如,校准控制模块还可以用于将人工耳采集的第一音频信号发送至电子设备,以使电子设备中的均衡器参数计算模块将第一音频信号的频率响应和测试音频信号的标准频率响应进行对比,确定均衡器参数(即校准参数)。
下面结合流程图,对本申请实施例提供的一种频率响应一致性的校准方法进行详细说明。一些实施例中,如图10A所示,该频率响应一致性的校准方法,应用于图8A所示的校准系统。其中,被测电子设备与校准设备通信连接。该方法包括:
S1001,校准设备向被测电子设备发送测试指令。
其中,该测试指令用于指示被测电子设备播放测试音频信号。该测试音频信号可以是全频域的扫频信号,或者特定频率范围的扫频信号(例如人耳可听的频率范围的扫频信号,如20Hz~20000Hz)。
当校准设备和被测电子设备通信连接之后,用户可以按压校准设备的测试按钮,以实现校准设备向被测电子设备发送测试指令。
需要说明的是,当电子设备需要进行频率响应一致性校准时,也可以由被测电子设备向校准设备发送测试指令,例如被测电子设备接收到用户对校准应用中校准控件的点击操作,向校准设备发送测试指令。
S1002,响应于接收到测试指令,被测电子设备播放测试音频信号。
示例地,当被测电子设备的校准应用接收到测试指令时,可以将该测试指令发送至HAL层中的smart PA控制HAL。当smart PA控制HAL接收到测试指令时,控制smart PA算法模块运行,以控制smart PA硬件电路播放测试音频信号。并且,smart PA控制HAL还通过I2C信号控制smart PA硬件电路(如屏幕发声器件的硬件电路(smart PA0))打开,通过屏幕发声器件播放测试音频信号。
需要说明的是,此时被测电子设备播放的测试音频信号是未经过均衡器校准模块校准的信号。该未经过校准的测试音频信号播放之后能够表征被测电子设备播放音频时的频率响应曲线,以便将该未经过校准的频率响应曲线与测试音频信号的标准频率响应曲线对比,确定频率响应偏差较大的频点以及频率响应增益,从而根据偏差较大的频点以及频率响应增益确定均衡器校准模块中多段均衡器的参数(即校准参数)。
此外,若由被测电子设备向校准设备发送测试指令,则电子设备可以在向校准设备发送测试指令之后,播放测试音频信号。
S1003,校准设备采集被测电子设备播放的测试音频信号,得到第一音频信号。
示例地,当校准设备向被测电子设备发送测试指令之后,校准设备可以启动人工耳。当然,若由被测电子设备向校准设备发送测试指令,则校准设备可以在接收到电子设备发送的测试指令后启动人工耳。
当被测电子设备通过屏幕发声器件播放测试音频信号时,校准设备可以通过人工耳采集被测电子设备播放的测试音频信号,得到第一音频信号。人工耳可以将接收到的第一音频信号再转换为电信号发送至均衡器参数计算模块。
S1004,校准设备根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定校准参数。其中,校准参数包括:待校准的频带的数量(即滤波器数量)、每个待校准的频带对应的滤波器类型、每个待校准的频带对应的滤波器的中心频率、每个待校准的频带对应的滤波器的峰值增益以及每个待校准的频带对应的滤波器的Q值。
通常情况下,在校准设备的均衡器参数计算模块中可以预设能够支持校准的子带的个数范围,例如该均衡器校准模块可支持校准n到m个子带(即可支持校准n到m个待校准的频带)。
由于在电子设备播放音频信号时,部分频率范围的音频信号属于人耳不可听的信号,因此这类信号不会影响用户的听感,可以不进行频率响应校准。因此为提高频率响应校准的效率,可以预先设置需要校准的频率范围,例如可以预先设置需要校准的频率响应的频率范围为第一频点f1至第二频点f2。其中,第一频点f1为最低需要校准的频率。第二频点f2为最高需要校准的频率。
此外,还可以预先设置电子设备期望的校准离散范围为±x dB。也就是说,当电子设备在某一频点的频率响应偏离正常频率响应超过x dB,则表示该频点需要校准。
在本申请实施例中,如图10B所示,上述S1004可以包括S1004A至S1004D。
S1004A,校准设备根据第一音频信号的频率响应和测试音频信号的频率响应,可以确定在第一频点f1至第二频点f2的范围内的需要校准的频点、需要校准的频点对应的频率响应增益以及需要校准的频点数量。其中,需要校准的频点是指:第一音频信号的频率响应和测试音频信号的频率响应的差值大于预先设置的期望的校准离散范围(即x dB)的频点。需要校准的频点对应的频率响应增益是指:在需要校准的频点处,第一音频信号的频率响应与测试音频信号的频率响应的差值。需要校准的频点数量是指所有需要校准的频点的数量总和。
示例地,图11为本申请实施例提供的一种第一音频信号的频率响应曲线与测试音频信号的标准频率响应曲线对比图。如图11所示,根据第一音频信号的频率响应曲线和测试音频信号的标准频率响应曲线进行对比,可以看出第一音频信号的频率响应曲线与测试音频信号的标准频率响应曲线具有较大的差异。
如图11所示,根据第一音频信号的频率响应曲线的特点,第一音频信号的频率响应曲线具有较大的波动。在此情况下,可以根据第一音频信号的频率响应曲线确定该频率响应 曲线的拐点位置处(即波峰波谷位置处)的频率为可能需要校准的频点,例如在图11所示的频率响应曲线中100Hz、200Hz、500Hz、1000Hz、2000Hz、3000Hz、3500Hz、4000Hz、5000Hz频点为可能需要校准的频点。
得到上述可能需要校准的频点之后,可以将第一音频信号在上述频点的频率响应与测试音频信号的标准频率响应曲线中对应频点的频率响应进行对比计算。例如,在频点100Hz处,第一音频信号的频率响应与测试音频信号的标准频率响应曲线中的频率响应相差5dB时,若预先设置的期望的校准离散范围为±2dB,此时5dB大于2dB,可以确定频点100Hz为需要校准的频点,并且可以确定频点100Hz处的频率响应增益为5dB。
通过上述方式可以对各个可能需要校准的频点进行计算,确定需要校准的频点、需要校准的频点对应的频率响应增益以及需要校准的频点总数。
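上述"筛选需要校准的频点"的过程可以用如下Python片段示意(仅为简化示例,假设已在各候选频点上取得了实测频响与标准频响,单位为dB,变量名均为示例):

```python
def find_points_to_calibrate(candidate_freqs, measured_db, standard_db, x_db=2.0):
    """返回[(需要校准的频点, 对应的频率响应增益), ...];
    增益取实测频响与标准频响的差值,偏差超过±x dB的频点才需要校准。"""
    points = []
    for f, m, s in zip(candidate_freqs, measured_db, standard_db):
        diff = m - s
        if abs(diff) > x_db:
            points.append((f, diff))
    return points

# 示例:100Hz处实测与标准相差5dB,超过±2dB,故100Hz为需要校准的频点
pts = find_points_to_calibrate([100, 200, 500], [-5.0, -1.0, 2.0],
                               [-10.0, -2.0, 0.5], x_db=2.0)
# pts == [(100, 5.0)]
```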
此时,根据在第一频点f1至第二频点f2的范围内的需要校准的频点、需要校准的频点对应的频率响应增益以及需要校准的频点数量,可以确定校准参数,即可以确定滤波器数量、滤波器类型、滤波器的中心频率、滤波器的增益以及滤波器的Q值等。
由于均衡器校准模块可支持校准n到m个子带,因此根据需要校准的频点数量的不同,滤波器数量、滤波器类型也不完全相同。
下面分情况说明确定校准参数的方法。
第一种情况:需要校准的频点数量i小于或等于最低支持校准的子带数量n。此时,可执行如下S1004B。
S1004B,将待校准的子带个数设置为n。
每个待校准的子带(即待校准的频带)会设置一个滤波器。也就是说,滤波器数量设置为n。在此情况下,每个待校准的子带包括一个待校准的频点。每个待校准的子带对应的滤波器的中心频率为对应的待校准的频点的频率。待校准的频点设置为:第一音频信号的频率响应和测试音频信号的频率响应的差值最大的n个频点。滤波器的类型均设置为峰值滤波器。每个待校准的子带对应的滤波器的峰值增益为待校准的频点对应的频率响应增益。
第二种情况:需要校准的频点数量i大于最低支持的校准子带数量n,且小于或等于最大支持的校准子带数量m。此时,可执行如下S1004C。
S1004C,将待校准的子带个数设置为i或者n。
一些实施例中,由于n<i≤m,因此可以将待校准的子带个数设置为i,即滤波器数量设置为i。此时,滤波器的类型均设置为峰值滤波器。在此情况下,每个待校准的子带包括一个待校准的频点。每个待校准的子带对应的滤波器的中心频率为对应的待校准的频点的频率。i个需要校准的频点均配置为待校准的频点。每个待校准的子带对应的滤波器的峰值增益为待校准的频点对应的频率响应增益。
在一些实施例中,可以对需要校准的频点进行合并,例如合并相邻的频点。示例地,可以将图11中的100Hz和200Hz两个频点合并为一个频带。当合并相邻的频点之后得到n个待校准的频带后,停止合并相邻的频点。此时,可以将待校准的子带个数设置为n,即滤波器数量设置为n。每个待校准的子带对应的滤波器的峰值增益为:每个待校准的频带中的各个频点对应的频率响应增益的平均增益。滤波器类型均可以设置为峰值滤波器。
第三种情况:需要校准的频点数量i大于最大支持的校准子带数量m。此时,可执行如下S1004D。
S1004D,将待校准的子带个数设置为n、m或者p;其中n≤p≤m;待校准的频带由需要校准的频点部分合并得到,且合并后的待校准的频带的数量与待校准的子带个数一致。
在此情况下,需要校准的频点数量i超过了最大支持的校准子带数量m,需要合并相邻的频点。
一些实施例中,可以将待校准的子带个数设置为n,即滤波器数量设置为n。此时,合并相邻的频点后得到的待校准的频带为n个。示例地,假设最低支持校准的子带数量n=6,最大支持校准的子带数量m=12。若需要校准的频点数量i大于12,如i=13时,需要校准的频点两两合并可得到6个待校准的频带和1个单频点。此时,可以再将该1个单频点与其相邻的待校准频带合并,最终得到6个待校准的频带(其中一个待校准的频带包括3个待校准的频点)。
经过上述合并相邻的频点得到n个待校准的频带后,若最低待校准频带的最低频点为f1(其中,f1为预设的需要校准的最低频率),并且该最低待校准频带中的待校准的频点数量大于或等于2,则将该待校准的频带对应的子带的滤波器设置为低频搁架滤波器。滤波器的中心频率设置为该待校准的频带中的最高频点。待校准的子带对应的滤波器的峰值增益设置为:该待校准的频带中的各个频点对应的频率响应增益的平均增益。
若最高待校准频带的最高频点为f2(其中,f2为预设的需要校准的最高频率),并且该最高待校准频带中的待校准的频点数量大于或等于2,则将该待校准的频带对应的子带的滤波器设置为高频搁架滤波器。滤波器的中心频率设置为该待校准的频带中的最低频点。待校准的子带对应的滤波器的峰值增益设置为:该待校准的频带中的各个频点对应的频率响应增益的平均增益。
当满足上述情况时,n个待校准的频带中剩余的n-2个待校准的频带对应的子带的滤波器均设置为峰值滤波器。滤波器的中心频率均设置为对应的待校准的频带的中心频点。待校准增益设置为该待校准的频带中的各个频点对应的频率响应增益的平均增益。
当不满足上述情况时,可以将n个待校准的频带对应的子带的滤波器均设置为峰值滤波器。滤波器的中心频率均设置为对应的待校准的频带的中心频点。待校准的子带对应的滤波器的峰值增益设置为:该待校准的频带中的各个频点对应的频率响应增益的平均增益。
一些实施例中,可以将待校准的子带个数设置为m,即滤波器数量设置为m。此时,合并相邻的频点后得到的待校准的频带为m个。示例地,假设最低支持校准的子带数量n=6,最大支持校准的子带数量m=12。若需要校准的频点数量i大于12,如i=13时,可以仅合并其中两个相邻的频点为1个待校准的频带,最终得到1个待校准的频带和11个待校准的频点。
一些实施例中,在i小于或等于2m的情况下,也可以对需要校准的频点进行两两合并。例如当i为奇数时,得到(i-1)/2个待校准的频带和1个待校准的频点。此时,待校准的子带个数设置为(i-1)/2+1,即滤波器的数量设置为(i-1)/2+1。当i为偶数时,得到i/2个待校准的频带。此时,待校准的子带个数设置为i/2,即滤波器的数量设置为i/2。
在i大于2m的情况下,可以将相邻的频点合并得到p个待校准的频带,n≤p≤m。此时,待校准的子带个数设置为p,即滤波器的数量设置为p。
经过上述合并相邻的频点得到m个、(i-1)/2+1个、i/2个或p个待校准的频带后,确定滤波器的类型、滤波器的中心频率以及滤波器的峰值增益,可以采用上述得到n个待校准的频带的方法,此处不再赘述。
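上述"两两合并相邻频点"的处理可以用如下Python片段示意(仅为一种最简化的写法,变量名为示例;频带对应滤波器的中心频点与平均增益的取法与上文描述一致):

```python
def merge_pairwise(points):
    """points为按频率升序排列的[(频点, 频响增益), ...];
    相邻频点两两合并为待校准的频带,若剩余1个单频点,则并入相邻的频带。"""
    bands = [points[i:i + 2] for i in range(0, len(points), 2)]
    if len(bands) > 1 and len(bands[-1]) == 1:
        last = bands.pop()
        bands[-1] = bands[-1] + last
    return bands

def band_filter_params(band):
    """频带对应滤波器的中心频率取频带的中心频点,峰值增益取各频点增益的平均值。"""
    freqs = [f for f, _ in band]
    gains = [g for _, g in band]
    return freqs[len(freqs) // 2], sum(gains) / len(gains)

# 示例:13个需要校准的频点两两合并,得到6个待校准的频带(最后一个频带含3个频点)
bands = merge_pairwise([(f, 3.0) for f in (100, 200, 500, 1000, 2000, 3000, 3500,
                                           4000, 5000, 6000, 7000, 8000, 9000)])
```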
确定了滤波器的数量、滤波器的类型、滤波器的中心频点以及滤波器的峰值增益后,还需要确定滤波器的峰值带宽(也称为品质因子、Q值)。其中,滤波器的峰值带宽为滤波器的中心频率除以滤波器带宽。在本申请实施例中,滤波器带宽可以由待校准频点或频带确定。例如,如图11所示,在待校准频点100Hz处对应的滤波器的带宽可以为待校准频点100Hz两侧与标准频率响应相差的频响增益为x dB的两个频点之间的频率差,如25Hz。由于待校准频点100Hz处对应的滤波器的中心频率为100Hz,因此待校准频点100Hz处对应的滤波器的峰值带宽为100/25=4。
同理,对于经过频点合并得到的待校准的频带,该待校准的频带对应的滤波器的带宽可以为待校准频带的中心频点两侧与标准频率响应相差的频响增益为x dB的两个频点之间的频率差。根据滤波器的中心频率和带宽可以计算得出滤波器的峰值带宽。
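峰值带宽(Q值)的计算可以用如下Python片段示意(f_low、f_high为待校准频点或频带中心频点两侧与标准频响相差x dB的两个频点,取值仅为示例):

```python
def peak_q(center_freq, f_low, f_high):
    """Q = 滤波器的中心频率 / 滤波器带宽,带宽BW = f_high - f_low。"""
    return center_freq / (f_high - f_low)

# 图11中100Hz待校准频点的示例:两侧边频点间隔约25Hz,则Q = 100 / 25 = 4
q = peak_q(100, 90, 115)  # 90Hz、115Hz仅为示意取值,对应带宽25Hz
```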
S1005,校准设备向被测电子设备发送校准参数。
当均衡器参数计算模块计算得到滤波器的数量、滤波器的中心频率、滤波器的类型、滤波器的峰值增益以及滤波器的峰值带宽后,可以将上述校准参数传输至校准控制模块。当校准控制模块接收到上述校准参数后,可以将上述校准参数发送至被测电子设备中。
S1006,响应于接收到校准参数,被测电子设备保存校准参数。
当被测电子设备接收到上述校准参数后,被测电子设备可以将上述校准参数保存至非易失性存储介质中。
如下表1所示,为滤波器的参数定义和功能描述。
表1 滤波器参数定义和功能描述
系统采样率fs:由电子设备采用的处理器型号确定,例如48000Hz、44100Hz、16000Hz等;
中心频率f0:滤波器通带的中心频率,根据待校准的频点或待校准的频带的中心频点确定;
峰值带宽Q:也称品质因子,Q=f0/BW,BW为滤波器带宽(f2-f1);
峰值增益gain:待校准的频点(或频带)处,第一音频信号的频率响应与标准频率响应的差值(频带取平均增益);
滤波器类型:峰值滤波器、低频搁架滤波器、高频搁架滤波器、低通滤波器、高通滤波器、带通滤波器;
滤波器阶数:例如一阶、二阶、三阶滤波器。
在被测电子设备保存上述校准参数之后,该被测电子设备中屏幕发声器件的频率响应曲线的一致性校准便完成了。
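保存的校准参数在逻辑上与表1中的各滤波器参数大体对应,可以示意为如下结构(仅为一种示例表示,滤波器类型名称与各取值均为假设,实际存储格式由具体实现决定):

```python
# 每个待校准的频带对应一组滤波器参数:(滤波器类型, 中心频率Hz, 峰值增益dB, Q值)
calib_params = [
    ("low_shelf", 200, 4.0, 0.7),   # 低频搁架滤波器,示例取值
    ("peak", 1000, -5.0, 4.0),      # 峰值滤波器,示例取值
    ("peak", 3000, 3.0, 2.0),
    ("high_shelf", 8000, -3.0, 0.7),
]
```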
在另一些实施例中,如图12所示,该频率响应一致性的校准方法,应用于图8B所示的校准系统。其中,被测电子设备与校准设备通信连接。该方法包括:
S1201,校准设备向被测电子设备发送测试指令。
具体请参考上述S1001,此处不再赘述。
S1202,响应于接收到测试指令,被测电子设备播放测试音频信号。
具体请参考上述S1002,此处不再赘述。
S1203,校准设备采集被测电子设备播放的测试音频信号,得到第一音频信号。
示例地,当校准设备向被测电子设备发送测试指令之后,校准设备可以启动人工耳。当然,若由被测电子设备向校准设备发送测试指令,则校准设备可以在接收到电子设备发送的测试指令后启动人工耳。
当被测电子设备通过屏幕发声器件播放测试音频信号时,校准设备可以通过人工耳采集被测电子设备播放的测试音频信号,得到第一音频信号。
S1204,校准设备向被测电子设备发送第一音频信号。
由于校准设备仅仅执行被测电子设备播放测试音频信号时的声音采集,当校准设备采集完成第一音频信号之后,可以将第一音频信号发送至被测电子设备,然后由被测电子设备根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定校准参数。
S1205,被测电子设备根据第一音频信号的频率响应和测试音频信号的标准频率响应,确定校准参数。
具体请参考上述S1004中的确定校准参数的过程,此处不再赘述。
S1206,被测电子设备保存校准参数。
具体请参考上述S1006,此处不再赘述。
应理解,该待测电子设备在正常使用的过程中,例如该电子设备通过屏幕发声器件通话时,电子设备检测到用户拨打电话的操作,上层应用(如通话应用)会向smart PA控制HAL下发通话指令。又例如该电子设备通过屏幕发声器件播放音乐时,电子设备检测到用户播放音乐的操作,上层应用(如音乐应用)会向smart PA控制HAL下发音乐播放指令。
当smart PA控制HAL接收到通话指令或者音乐播放指令时,smart PA控制HAL可以控制smart PA算法运行,以配置smart PA算法播放对方的声音信号或者音乐对应的音频信号。在配置smart PA算法的过程中,smart PA控制HAL可以控制信息存储HAL从非易失性存储介质中获取校准参数(即均衡器参数),并下发至smart PA算法中的均衡器校准模块中。
在smart PA算法中,待屏幕发声器件播放的音频信号(如对方的声音信号或者音乐对应的音频信号)会经过smart PA算法中的均衡器校准模块的处理,再通过屏幕发声器件播放。例如,对待屏幕发声器件播放的音频信号,可以先将该音频信号变换为频域信号,再通过均衡器校准模块中的多段滤波器进行频率响应的校准,以提升屏幕发声器件的频率响应曲线的一致性,从而保证电子设备发声效果的一致性,使用户的听感更佳。
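结合前文biquad_coeffs、biquad_filter与calib_params的示例,这一"取出校准参数、多段滤波、再播放"的流程可以示意如下(仅为流程示意,这里按公式(一)的时域级联写法给出;实际DSP中的smart PA算法也可以如上文所述在频域完成对应的增益校准):

```python
def apply_equalizer(audio_samples, calib_params, fs=48000):
    """将待播放的音频信号依次通过各待校准频带对应的滤波器,返回经过校准的音频信号。"""
    y = list(audio_samples)
    for f_type, f0, gain_db, Q in calib_params:
        coeffs = biquad_coeffs(f_type, f0, gain_db, Q, fs)
        y = biquad_filter(y, coeffs)
    return y  # 经过校准的音频信号,交由smart PA硬件电路驱动屏幕发声器件播放
```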
综上所述,图13为本申请实施例提供的一种屏幕发声器件的频率响应曲线在频率响应一致性校准前后的对比图。图13中的(a)示出了校准前100个电子设备(pcs)中屏幕发声器件的频率响应曲线的分布,从该图中可以看出,未经过校准时,该100个电子设备的屏幕发声器件播放的声音信号的频率响应曲线的差异较大,离散程度较高。然而,经过校准之后,如图13中的(b)所示,该屏幕发声器件播放的声音信号的频率响应曲线趋于一致,频率响应曲线也更加平滑,屏幕发声器件的发声效果更好,用户的听感更佳。
此外,本申请实施例还提供一种音频播放方法,应用于电子设备,该音频播放方法应用于电子设备;电子设备包括均衡器校准模块和屏幕发声器件。该方法包括:接收音频播放指令。音频播放指令用于指示电子设备播放第二音频信号。响应于接收到音频播放指令,获取校准参数。校准参数为根据上述实施例中的方法储存在电子设备的校准参数。通过均衡器校准模块并利用校准参数,调节第二音频信号的频率响应,得到第三音频信号。通过屏幕发声器件,播放第三音频信号。
本申请实施例还提供一种电子设备。该电子设备包括:一个或多个处理器;存储器;通信模块。其中,通信模块用于与校准设备通信。存储器中存储有一个或多个计算机程序,一个或多个计算机程序包括指令,当指令被处理器执行时,使得电子设备执行上述实施例中被测电子设备或者电子设备执行的方法。该电子设备可以是上述图9所示的电子设备。
本申请实施例还提供一种校准设备。该校准设备包括处理器和存储器。存储器中存储有一个或多个计算机程序,一个或多个计算机程序包括指令,当指令被处理器执行时,使得校准设备执行如上实施例中校准设备执行的方法。
本申请实施例还提供一种芯片系统,如图14所示,该芯片系统1400可以应用于电子设备或者校准设备,该芯片系统1400包括至少一个处理器1401和至少一个接口电路1402。处理器1401和接口电路1402可通过线路互联。例如,接口电路1402可用于从其它装置(例如,电子设备的存储器)接收信号。又例如,接口电路1402可用于向其它装置(例如处理器1401)发送信号。
例如,接口电路1402可读取电子设备中存储器中存储的指令,并将该指令发送给处理器1401。当所述指令被处理器1401执行时,可使得电子设备(如图9所示的电子设备)执行上述实施例中被测电子设备所执行的各个功能或者步骤。或者,当所述指令被处理器1401执行时,可使得校准设备执行上述实施例中校准设备所执行的各个功能或者步骤。
当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请另一实施例提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得电子设备执行上述方法实施例中电子设备所执行的各个功能或者步骤。
本申请另一实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中电子设备所执行的各个功能或者步骤。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如 多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,如:程序。该软件产品存储在一个程序产品,如计算机可读存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
例如,本申请实施例还可以提供一种计算机可读存储介质,其上存储有计算机程序指令。当计算机程序指令被电子设备执行时,使得电子设备实现如前述方法实施例中所述的音频处理方法。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (17)

  1. 一种频率响应一致性的校准方法,其特征在于,应用于电子设备;所述电子设备包括均衡器校准模块;所述电子设备与校准设备通信连接,所述方法包括:
    所述电子设备播放测试音频信号;
    所述电子设备保存校准参数;所述校准参数为所述均衡器校准模块的运行参数;所述校准参数由第一音频信号的频率响应与所述测试音频信号的标准频率响应确定;所述第一音频信号是由所述校准设备采集所述电子设备播放的所述测试音频信号得到的;所述校准参数用于当所述电子设备播放第二音频信号时,通过所述均衡器校准模块调节所述电子设备播放的第二音频信号的频率响应。
  2. 根据权利要求1所述的校准方法,其特征在于,所述电子设备播放所述测试音频信号之后,所述方法还包括:
    所述电子设备接收所述第一音频信号;
    所述电子设备根据所述第一音频信号的频率响应与所述测试音频信号的标准频率响应,确定校准参数。
  3. 根据权利要求1所述的校准方法,其特征在于,所述电子设备播放所述测试音频信号之后,所述方法还包括:
    所述电子设备接收所述校准参数;所述校准参数为所述校准设备根据所述第一音频信号的频率响应与所述测试音频信号的标准频率响应确定的。
  4. 根据权利要求1-3任一项所述的校准方法,其特征在于,所述电子设备播放测试音频信号,包括:
    响应于所述电子设备接收到所述校准设备发送的检测指令,所述电子设备播放所述测试音频信号;或者,
    响应于所述电子设备向所述校准设备发送所述检测指令,所述电子设备播放所述测试音频信号。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述测试音频信号为全频域的扫频信号。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述第一音频信号的频率响应由所述第一音频信号进行时频变换后获得。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述均衡器校准模块包括多个子带滤波器;所述校准参数为所述多个子带滤波器的参数;所述校准参数包括待校准的频带数量、每个所述待校准的频带对应的滤波器类型、每个所述待校准的频带对应的滤波器的中心频率、每个所述待校准频带对应的频率响应增益。
  8. 根据权利要求7所述的方法,其特征在于,所述电子设备还包括均衡器参数计算模块;所述电子设备根据所述第一音频信号的频率响应和所述测试音频信号的标准频率响应,确定校准参数,包括:
    所述电子设备通过所述均衡器参数计算模块,根据所述第一音频信号的频率响应和所述测试音频信号的标准频率响应,确定需要校准的频点、需要校准的频点数量以及需要校准的频点对应的频率响应增益;
    所述电子设备通过所述均衡器参数计算模块,根据所述需要校准的频点、需要校 准的频点数量以及需要校准的频点对应的频率响应增益,确定校准参数。
  9. 根据权利要求8所述的方法,其特征在于,所述需要校准的频点为所述第一音频信号的频率响应和所述测试音频信号的标准频率响应相差超过预设频响增益的频点;
    所述需要校准的频点对应的频率响应增益为:所述需要校准的频点处,所述第一音频信号的频率响应和所述测试音频信号的标准频率响应的差值。
  10. 根据权利要求8或9所述的方法,其特征在于,若所述需要校准的频点数量小于或等于N,则
    所述待校准的频带数量为N;N个所述待校准的频带的中心频点为:所述第一音频信号的频率响应和所述测试音频信号的频率响应相差最大的N个频点;
    每个所述待校准的频带对应的滤波器类型均为:峰值滤波器;
    每个所述待校准的频带对应的滤波器的中心频率为:所述待校准的频带的中心频点对应的频率;
    每个所述待校准频带对应的滤波器的频率响应增益为:所述待校准的频带的中心频点处,所述第一音频信号的频率响应和所述测试音频信号的标准频率响应的差值;
    其中,N为预设的最低支持的校准子带的数量。
  11. 根据权利要求8或9所述的方法,其特征在于,若所述需要校准的频点数量大于N,且小于或等于M,则
    所述待校准的频带数量为所述需要校准的频点数量;
    每个所述待校准的频带对应的滤波器类型均为:峰值滤波器;
    每个所述待校准的频带对应的滤波器的中心频率为:每个所述需要校准的频点对应的频率;
    每个所述待校准频带对应的滤波器的频率响应增益为:每个所述需要校准的频点对应的频率响应增益;
    其中,N为预设的最低支持的校准子带的数量;M为预设的最大支持的校准子带的数量。
  12. 根据权利要求8或9所述的方法,其特征在于,若所述需要校准的频点数量大于M,则
    所述待校准的频带数量为N;所述待校准的频带由所述需要校准的频点部分合并得到;
    在所述待校准的频带中,除最低待校准的频带和最高待校准的频带外,
    每个所述待校准的频带对应的滤波器类型均为:峰值滤波器;
    每个所述待校准的频带对应的滤波器的中心频率为:所述待校准的频带的中心频点对应的频率;
    每个所述待校准频带对应的滤波器的频率响应增益为:每个所述待校准的频带中,所有的所述需要校准的频点对应的频率响应增益的平均增益。
  13. 根据权利要求8或9所述的方法,其特征在于,若所述需要校准的频点数量大于M,则
    所述待校准的频带数量为N;所述待校准的频带由所述需要校准的频点部分合并得到;
    在N个所述待校准的频带中的最低待校准的频带中,若所述最低待校准的频带的最低频点为f1,且所述最低待校准的频带中需要校准的频点数量大于或等于第一阈值,则
    所述最低待校准的频带对应的滤波器类型为:低频搁架滤波器;
    所述最低待校准的频带对应的滤波器的中心频率为:所述最低待校准的频带中的最高频点对应的频率;
    所述最低待校准的频带对应的频率响应增益为:所述最低待校准的频带中多个所述需要校准的频点对应的频率响应增益的平均增益;
    在N个所述待校准的频带中的最低待校准的频带中,若所述最低待校准的频带的最低频点大于f1,或者所述最低待校准的频带中需要校准的频点数量小于第一阈值,则
    所述最低待校准的频带对应的滤波器类型为:峰值滤波器;
    所述最低待校准的频带对应的滤波器的中心频率为:所述最低待校准的频带中的中心频点对应的频率;
    所述最低待校准的频带对应的频率响应增益为:所述最低待校准的频带中,所有的所述需要校准的频点对应的频率响应增益的平均增益;
    其中,f1为预设的需要校准的最低频率。
  14. 根据权利要求8或9所述的方法,其特征在于,若所述需要校准的频点数量大于M,则
    所述滤波器的数量为N;所述待校准的频带由所述需要校准的频点部分合并得到;
    在N个所述待校准的频带中的最高待校准的频带中,若所述最高待校准的频带的最高频点为f2,且所述最高待校准的频带中需要校准的频点数量大于或等于第二阈值,则
    所述最高待校准的频带对应的滤波器类型为:高频搁架滤波器;
    所述最高待校准的频带对应的滤波器的中心频率为:所述最高待校准的频带中的最低频点对应的频率;
    所述最高待校准的频带对应的频率响应增益为:所述最高待校准的频带中多个所述需要校准的频点对应的频率响应增益的平均增益;
    在N个所述待校准的频带中的最高待校准的频带中,若所述最高待校准的频带的最高频点小于f2,或者所述最高待校准的频带中需要校准的频点数量小于第二阈值,则
    所述最高待校准的频带对应的滤波器类型为:峰值滤波器;
    所述最高待校准的频带对应的滤波器的中心频率为:所述最高待校准的频带中的中心频点对应的频率;
    所述最高待校准的频带对应的频率响应增益为:所述最高待校准的频带中,所有的所述需要校准的频点对应的频率响应增益的平均增益;
    其中,f2为预设的需要校准的最高频率。
  15. 一种音频播放方法,其特征在于,应用于电子设备;所述电子设备包括均衡器校准模块和屏幕发声器件;所述方法包括:
    接收音频播放指令;所述音频播放指令用于指示所述电子设备播放第二音频信号;
    响应于接收到所述音频播放指令,获取校准参数;所述校准参数为根据如权利要求1-14任一项所述的方法储存在所述电子设备的校准参数;
    通过所述均衡器校准模块并利用所述校准参数,调节所述第二音频信号的频率响应,得到第三音频信号;
    通过所述屏幕发声器件,播放所述第三音频信号。
  16. 一种电子设备,其特征在于,所述电子设备包括:
    一个或多个处理器;
    存储器;
    通信模块;
    其中,所述通信模块用于与校准设备通信;
    所述存储器中存储有一个或多个计算机程序,所述一个或多个计算机程序包括指令,当所述指令被所述处理器执行时,使得所述电子设备执行如权利要求1-14任一项所述频率响应一致性的校准方法,或者使得所述电子设备执行如权利要求15所述的音频播放方法。
  17. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1-14任一项所述频率响应一致性的校准方法,或者使得所述电子设备执行如权利要求15所述的音频播放方法。
PCT/CN2022/118558 2021-12-10 2022-09-13 频率响应一致性的校准方法及电子设备 WO2023103503A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111509124.9A CN116320905A (zh) 2021-12-10 2021-12-10 频率响应一致性的校准方法及电子设备
CN202111509124.9 2021-12-10

Publications (1)

Publication Number Publication Date
WO2023103503A1 true WO2023103503A1 (zh) 2023-06-15

Family

ID=86729586

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/118558 WO2023103503A1 (zh) 2021-12-10 2022-09-13 频率响应一致性的校准方法及电子设备

Country Status (2)

Country Link
CN (1) CN116320905A (zh)
WO (1) WO2023103503A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117241170B (zh) * 2023-11-16 2024-01-19 武汉海微科技有限公司 基于二分频音箱的音频播放方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008294493A (ja) * 2007-05-22 2008-12-04 Nippon Hoso Kyokai <Nhk> フレキシブルスピーカの音質補正装置および音質補正装置を備えたスピーカシステム
US20130315405A1 (en) * 2012-05-24 2013-11-28 Kabushiki Kaisha Toshiba Sound processor, sound processing method, and computer program product
CN109245739A (zh) * 2018-08-28 2019-01-18 南京中感微电子有限公司 数字音频均衡器
CN109274909A (zh) * 2018-09-19 2019-01-25 深圳创维-Rgb电子有限公司 电视机声音调整方法、电视机和存储介质
CN112185324A (zh) * 2020-10-12 2021-01-05 Oppo广东移动通信有限公司 调音方法、装置、存储介质、智能设备及调音系统
CN113076075A (zh) * 2020-01-03 2021-07-06 北京小米移动软件有限公司 音频信号的调整方法、装置、终端及存储介质
CN113282265A (zh) * 2021-04-09 2021-08-20 海能达通信股份有限公司 终端的均衡参数配置方法、电子设备及存储介质

Also Published As

Publication number Publication date
CN116320905A (zh) 2023-06-23

Similar Documents

Publication Publication Date Title
US11496824B2 (en) Acoustic output apparatus with drivers in multiple frequency ranges and bluetooth low energy receiver
WO2022002166A1 (zh) 一种耳机噪声处理方法、装置及耳机
CN103581791B (zh) 移动设备及其控制方法
CN102860043B (zh) 用于控制声学信号的装置、方法和计算机程序
WO2022002110A1 (zh) 一种模式控制方法、装置及终端设备
CN105745942A (zh) 用于提供宽带频率响应的系统和方法
CN115442709B (zh) 音频处理方法、虚拟低音增强系统、设备和存储介质
CN109524016B (zh) 音频处理方法、装置、电子设备及存储介质
WO2017215654A1 (zh) 一种防止音效突变的方法及终端
WO2023103503A1 (zh) 频率响应一致性的校准方法及电子设备
CN114245271A (zh) 音频信号处理方法及电子设备
EP4203447A1 (en) Sound processing method and apparatus thereof
TWM526238U (zh) 可依據使用者年齡調整等化器設定之電子裝置及聲音播放裝置
CN110430511A (zh) 扬声器模块
WO2024037183A1 (zh) 音频输出方法、电子设备及计算机可读存储介质
CN109360582B (zh) 音频处理方法、装置及存储介质
WO2023000795A1 (zh) 音频播放方法、屏幕发声器件的失效检测方法及电子设备
CN107493376A (zh) 一种铃声音量调节方法和装置
CN115567831A (zh) 一种提升扬声器的音质的方法及装置
CN115686425A (zh) 音频播放方法、屏幕发声器件的失效检测方法及电子设备
CN116567489B (zh) 一种音频数据处理方法及相关装置
WO2023284403A1 (zh) 一种音频处理方法及设备
WO2023160204A1 (zh) 音频处理方法及电子设备
CN109144462A (zh) 发声控制方法、装置、电子装置及计算机可读介质
CN116156390B (zh) 一种音频处理方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22902928

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022902928

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022902928

Country of ref document: EP

Effective date: 20240328