CN117896658A - Terminal equipment and space sound field correction method - Google Patents


Info

Publication number
CN117896658A
Authority
CN
China
Prior art keywords
audio
sound
correction
sound effect
gain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311747134.5A
Other languages
Chinese (zh)
Inventor
刘儒茜
陈先义
刘永梅
何营昊
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202311747134.5A priority Critical patent/CN117896658A/en
Publication of CN117896658A publication Critical patent/CN117896658A/en
Pending legal-status Critical Current

Landscapes

  • Circuit For Audible Band Transducer (AREA)

Abstract

The application provides a terminal device and a spatial sound field correction method. In response to a spatial sound field correction instruction, the method acquires sampled audio, i.e., audio data collected by a sound collector while the audio output interface plays preset correction audio; the correction audio is processed only through basic sound effects and comprises audio data of a preset audio frequency band. A gain offset value for the audio frequency band is generated according to a preset target frequency response curve and the sampled audio, and the gain of the audio data in that band is then set based on the gain offset value. By collecting preset audio processed only through basic sound effects and analyzing it to obtain gain offset values, the method solves the problem of poor correction caused by advanced sound effect processing.

Description

Terminal equipment and space sound field correction method
Technical Field
The application relates to the technical field of terminal equipment, in particular to terminal equipment and a space sound field correction method.
Background
A terminal device is an electronic device with a sound playing function, such as a smart television, mobile phone, smart speaker, computer, or robot. Taking the smart television as an example: based on Internet application technology, it has an open operating system and chip and an open application platform, can realize bidirectional human-machine interaction, and integrates multiple functions such as video, entertainment, and data, thereby meeting users' diversified and personalized needs.
The default volume and corresponding sound effect parameters are set when the terminal device leaves the factory. However, the environment in which these defaults are tuned differs from the user's actual home environment in decoration, construction, furniture placement, and the like, so the volume and sound effect of audio played by the terminal device are not optimal when it is used in a home environment.
To improve the played sound effect, the terminal device supports a spatial sound field correction function. However, the sound effects of a terminal device include both the basic sound effects of the device's own chip and third-party advanced sound effects, and some models can support multiple advanced sound effects simultaneously. When the terminal device is in an operation scenario such as switching among sound effect types, switching among advanced sound effects, adjusting the equalizer in the UI, or switching the sound mode, the recorded audio data is affected by the additional advanced sound effect processing, which degrades the correction effect of the spatial sound field correction function.
Disclosure of Invention
The application provides a terminal device and a space sound field correction method, which are used for solving the problem that the correction effect is poor due to advanced sound effect processing.
In a first aspect, the present application provides a terminal device comprising an audio output interface, a sound collector and a controller. Wherein the audio output interface is configured to play audio data; the sound collector is configured to collect audio data; the controller is configured to perform the following program steps:
Responding to a spatial sound field correction instruction by acquiring sampled audio, wherein the sampled audio is audio data collected by the sound collector while the audio output interface plays preset correction audio, the correction audio is audio processed only through basic sound effects, and the correction audio comprises audio data of a preset audio frequency band;
generating a gain offset value of the audio frequency band according to a preset target frequency response curve and the sampled audio, wherein the target frequency response curve is a frequency response curve of an audio signal;
and setting the gain of the audio data in the audio frequency band based on the gain offset value.
In a second aspect, the present application further provides a spatial sound field correction method, which is applied to the terminal device, and the method includes:
responding to a spatial sound field correction instruction by acquiring sampled audio, wherein the sampled audio is audio data collected by the sound collector while the audio output interface plays preset correction audio, the correction audio is audio processed only through basic sound effects, and the correction audio comprises audio data of a preset audio frequency band;
generating a gain offset value of the audio frequency band according to a preset target frequency response curve and the sampled audio, wherein the target frequency response curve is a frequency response curve of an audio signal;
And setting the gain of the audio data in the audio frequency band based on the gain offset value.
According to the above technical solution, the present application provides a terminal device and a spatial sound field correction method. In response to a spatial sound field correction instruction, the method acquires sampled audio, i.e., audio data collected by the sound collector while the audio output interface plays preset correction audio; the correction audio is processed only through basic sound effects and comprises audio data of a preset audio frequency band. A gain offset value for the audio frequency band is generated according to the preset target frequency response curve and the sampled audio, and the gain of the audio data in that band is then set based on the gain offset value. By collecting preset audio processed only through basic sound effects and analyzing it to obtain gain offset values, the method solves the problem of poor correction caused by advanced sound effect processing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an operation scenario between a terminal device and a control device provided in some embodiments of the present application;
fig. 2 is a schematic hardware configuration diagram of a control device according to some embodiments of the present application;
fig. 3 is a schematic hardware configuration diagram of a terminal device according to some embodiments of the present application;
fig. 4 is a schematic software configuration diagram of a terminal device according to some embodiments of the present application;
fig. 5 is a flow chart of a method for correcting a spatial sound field according to some embodiments of the present application;
FIG. 6 is a schematic diagram illustrating a process for implementing spatial sound field correction according to some embodiments of the present application;
FIG. 7 is a schematic diagram of an audio path provided in some embodiments of the present application;
fig. 8 is a schematic flow chart of triggering entry into spatial sound field correction according to some embodiments of the present application;
FIG. 9 is a schematic illustration of corrective environment preparation provided in some embodiments of the present application;
FIG. 10 is a schematic illustration of a corrective analysis provided in some embodiments of the present application;
FIG. 11 is a flow chart of spatial sound field correction for sound effects provided in some embodiments of the present application;
fig. 12 is a flow chart of spatial sound field correction for another sound effect provided in some embodiments of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the exemplary embodiments of the present application more apparent, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is apparent that the described exemplary embodiments are only some embodiments of the present application, but not all embodiments.
All other embodiments obtained by a person of ordinary skill in the art based on the exemplary embodiments shown in the present application without inventive effort fall within the scope of the present application. Furthermore, while the disclosure has been presented in terms of one or more exemplary embodiments, it should be understood that individual aspects of the disclosure can also be implemented separately as a complete technical solution.
It should be understood that the terms "first," "second," "third," and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The terminal device provided in the embodiments of the present application may take various forms, for example, a display device such as a television, smart television, laser projection device, monitor, electronic whiteboard, or electronic table, or an audio output device for playing audio, such as a smart speaker.
Fig. 1 is a schematic diagram of an operation scenario between a terminal device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the terminal device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control device 100 may be a remote controller. Communication between the remote controller and the terminal device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, and the terminal device 200 is controlled wirelessly or by wire. The user may control the terminal device 200 by inputting user instructions through keys on the remote controller, voice input, control panel input, and the like.
In some embodiments, the smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the terminal device 200. For example, the terminal device 200 is controlled using an application running on the smart device.
In some embodiments, the terminal device 200 may also receive the user's control through touch, gestures, or the like, instead of using the above-described smart device or control apparatus.
In some embodiments, the terminal device 200 may also perform control in a manner other than the control apparatus 100 and the smart device 300, for example, the voice command control of the user may be directly received through a module configured inside the terminal device 200 for acquiring a voice command, or the voice command control of the user may be received through a voice control apparatus configured outside the terminal device 200.
In some embodiments, the terminal device 200 is also in data communication with the server 400. The terminal device 200 may be allowed to make a communication connection through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the terminal device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
As shown in fig. 3, the terminal apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, a device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a communication device interface 280.
In some embodiments, the controller 250 includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, a first interface to an nth interface for input/output.
The modem 210 receives broadcast television signals in a wired or wireless manner and demodulates audio and video signals, as well as EPG data signals, from among the plurality of wireless or wired broadcast television signals.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The terminal device 200 may establish transmission and reception of control signals and data signals with the control apparatus 100 or the server 400 through the communicator 220.
The device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used for receiving image signals output from the controller and displaying video content, image content, menu manipulation interfaces, and user manipulation UI interfaces.
The display 260 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
Audio output interface 270 is used to output audio signals to other devices for playing audio. In some embodiments, the terminal device 200 has a speaker built therein, and the audio output interface 270 may input an audio signal into the speaker, and play audio through the speaker. In other embodiments, the terminal device 200 may be externally connected to an external audio device such as a power amplifier, a sound device, a speaker, etc. through the audio output interface 270, so that an audio signal may be input to the external audio device, and the audio may be played through the external audio device.
Audio output interface 270 may include, but is not limited to, the following: any one or more interfaces of an RCA port, a butterfly clip port, an SPDIF port, and a headphone port. The output interface may be a composite output interface formed by a plurality of interfaces.
The communication device interface 280 may be configured to receive control signals from the control device 100 (e.g., an infrared remote control).
The detector 230 is used to collect signals of the external environment or of interaction with the outside. For example, the detector 230 may include a light receiver, a sensor for capturing the intensity of ambient light; an image collector such as a camera, which may be used to collect external environment scenes, user attributes, or user interaction gestures; or a sound collector such as a microphone, which is used to receive external sounds.
The sound collector may be a microphone, which is used to receive the user's sound and convert the sound signal into an electrical signal. The terminal device 200 may be provided with at least one microphone. In other embodiments, the terminal device 200 may be provided with two microphones, implementing a noise reduction function in addition to collecting sound signals. In still other embodiments, the terminal device 200 may be provided with three, four, or more microphones to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on.
Further, the microphone may be built into the terminal device 200, or may be connected to the terminal device 200 by wired or wireless means. The position of the microphone on the terminal device 200 is not limited in the embodiments of the present application. Alternatively, the terminal device 200 may not include a microphone; in that case, the terminal device 200 may be externally connected to a microphone through an interface such as a USB interface, and the external microphone may be secured to the terminal device 200 by external fasteners, such as a camera mount with a clip.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
The controller 250 controls the operation of the terminal device and responds to the user's operations through various software control programs stored in the memory. The controller 250 controls the overall operation of the terminal device 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments the controller includes at least one of a central processing unit (Central Processing Unit, CPU), video processor, audio processor, graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory, RAM), ROM (Read-Only Memory, ROM), first to nth interfaces for input/output, a communication Bus (Bus), etc.
The user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
Fig. 2 exemplarily shows a block diagram of the configuration of the control apparatus 100 according to an exemplary embodiment. The control device 100 includes a controller 110, a communication interface 130, a user input/output interface, a memory 190, and a power supply 180.
The control device 100 is configured to control the terminal device 200; it can receive a user's input operation instruction and convert the operation instruction into an instruction that the terminal device 200 can recognize and respond to, serving as an interaction intermediary between the user and the terminal device 200.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications of the control terminal apparatus 200 according to user demands.
In some embodiments, as shown in fig. 1, a mobile terminal 300 or other intelligent electronic device may serve a similar function as the control device 100 after installing an application that manipulates the terminal device 200.
The controller 110 includes a processor 112, RAM 113, ROM 114, a communication interface 130, and a communication bus. The controller 110 is used to control the running and operation of the control device 100, the communication and collaboration among its internal components, and external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the terminal device 200 under the control of the controller 110. The communication interface 130 may include at least one of a WiFi chip 131, a bluetooth module 132, an NFC module 133, and other near field communication modules.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touchpad 142, a sensor 143, keys 144, and other input interfaces.
In some embodiments, the control device 100 includes at least one of a communication interface 130 and an input-output interface 140. The control device 100 is provided with a communication interface 130 such as: the WiFi, bluetooth, NFC, etc. modules may send the user input instruction to the terminal device 200 through a WiFi protocol, or a bluetooth protocol, or an NFC protocol code.
A memory 190 for storing various operation programs, data and applications for driving and controlling the control device 100 under the control of the controller. The memory 190 may store various control signal instructions input by a user.
A power supply 180 for providing operating power support for the various elements of the control device 100 under the control of the controller.
As shown in fig. 4, in some embodiments, the system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer (Application Framework layer), an Android runtime and system library layer (system runtime layer), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications. The application framework layer includes a number of predefined functions and acts as a processing center that decides the actions of the applications in the application layer. Through the API interface, an application can access system resources and obtain system services during execution.
As shown in fig. 4, the application framework layer in the embodiment of the present application includes a manager (Manager), a content provider (Content Provider), and the like, where the manager includes at least one of the following modules: an activity manager (Activity Manager) used to interact with all activities running in the system; a location manager (Location Manager) used to provide system services or applications with access to the system location service; a package manager (Package Manager) used to retrieve various information about the application packages currently installed on the device; a notification manager (Notification Manager) used to control the display and clearing of notification messages; and a window manager (Window Manager) used to manage all icons, windows, toolbars, wallpaper, and desktop widgets on the user interface.
In some embodiments, the activity manager is used to manage the lifecycle of individual applications and the usual navigation rollback functions, such as controlling the exit, opening, and fallback of applications. The window manager is used to manage all window programs, for example obtaining the display screen size, judging whether a status bar exists, locking the screen, taking screenshots, and controlling changes of the display window (for example, shrinking the display window, dithering the display, or distorting the display).
In some embodiments, the system runtime layer provides support for the upper layer, the framework layer, and when the framework layer is in use, the android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer contains at least one of the following drivers: audio drive, display drive, bluetooth drive, camera drive, WIFI drive, USB drive, HDMI drive, sensor drive (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), and power supply drive, etc.
Audio data can be played by the terminal device 200 described above. To enhance playback, the terminal device 200 may apply various sound effect processing to the audio data to improve the audio effect and provide a clearer and more three-dimensional audio experience for the user. According to their processing functions, the sound effect types in the terminal device 200 are divided into basic sound effects and advanced sound effects, i.e., the basic sound effects of the terminal device's chip (System on a Chip, SoC) and third-party advanced sound effects integrated in the SoC. Basic sound effect processing includes techniques such as volume control, equalizers, and filters. Advanced sound effect processing integrated in the SoC includes techniques such as Dolby, DTS, and DBX, which may provide more advanced functions such as tone tuning, dynamic range control, and multi-channel processing. The user may select and adjust the sound effect processing functions through the sound effect setting interface provided by the terminal device 200, so that the terminal device 200 presents different sound effects accordingly.
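The two-tier effect architecture described above can be sketched as a processing chain in which the basic stage always runs while the advanced stages can be bypassed, which is what the correction audio relies on. This is an illustrative sketch only; all class and function names are hypothetical and not the patent's actual implementation.

```python
# Hypothetical sketch of the effect chain: basic SoC effects always run,
# optional advanced (third-party) stages can be bypassed during correction.
from typing import Callable, List

import numpy as np

AudioStage = Callable[[np.ndarray], np.ndarray]

class EffectChain:
    def __init__(self, basic: List[AudioStage], advanced: List[AudioStage]):
        self.basic = basic          # basic effects: volume, equalizer, filters
        self.advanced = advanced    # advanced effects integrated in the SoC

    def process(self, samples: np.ndarray, bypass_advanced: bool = False) -> np.ndarray:
        for stage in self.basic:
            samples = stage(samples)
        if not bypass_advanced:     # correction audio skips this branch
            for stage in self.advanced:
                samples = stage(samples)
        return samples

# Example stages: a gain stage as a basic effect, a placeholder
# soft-clipper standing in for a third-party advanced processor.
volume = lambda x: 0.8 * x
compressor = lambda x: np.tanh(x)

chain = EffectChain(basic=[volume], advanced=[compressor])
signal = np.array([0.0, 0.5, 1.0])
corrected_path = chain.process(signal, bypass_advanced=True)   # basic only
normal_path = chain.process(signal)                            # basic + advanced
```

With `bypass_advanced=True` the recorded signal reflects only the basic effects, so the measured frequency response is not contaminated by whichever advanced effect happens to be active.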
In some embodiments, the default volume and corresponding sound effect parameters are set when the terminal device 200 leaves the factory. However, the environment in which these defaults are tuned differs from the user's actual home environment in decoration, construction, furniture placement, and the like, so the volume and sound effect of audio played by the terminal device 200 are not the best experience when the user uses it in a home environment.
Therefore, the terminal device 200 supports a spatial sound field correction function, which corrects the sound effect by adjusting the graphic equalizer (Graphic Equalizer, GEQ) in the terminal device 200 for the current environment. The GEQ can improve the balance and clarity of audio by adjusting the spectral distribution of the audio signal. The GEQ is formed by the combined action of a plurality of high-pass filters (HPF) and a plurality of low-pass filters (LPF), and mainly involves three parameters: center frequency, Q value (bandwidth), and gain. The Q value is the quality factor of the filter, which describes the filter's bandwidth and is a fixed value. The center frequency is the center operating frequency of the filter; by adjusting it, the gain or attenuation of a specific frequency range in the audio signal can be controlled. Gain is the degree of amplification or attenuation applied to the audio signal by the filter; by adjusting the gain, the overall loudness or tonal balance of the audio signal can be changed. The terminal device 200 can therefore achieve frequency equalization by adjusting the gain value of the frequency band corresponding to each set center frequency.
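To make the three GEQ parameters concrete, the sketch below computes one equalizer band using the widely known RBJ "Audio EQ Cookbook" peaking-filter design. The patent does not specify this exact filter form; it is used here purely to illustrate how center frequency, Q, and gain jointly determine a band's behavior.

```python
# Illustrative peaking-EQ biquad (RBJ cookbook formulas) for one GEQ band.
import math

def peaking_biquad(fs: float, f0: float, q: float, gain_db: float):
    """Return (b, a) biquad coefficients for one peaking-EQ band.

    fs: sample rate (Hz); f0: center frequency (Hz);
    q: quality factor (fixed per band); gain_db: boost/cut in dB.
    """
    A = 10 ** (gain_db / 40.0)             # linear amplitude from dB gain
    w0 = 2 * math.pi * f0 / fs             # normalized angular center frequency
    alpha = math.sin(w0) / (2 * q)         # bandwidth term set by Q
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    a0 = a[0]
    return [bi / a0 for bi in b], [ai / a0 for ai in a]  # normalize a[0] to 1

# Example: a +3 dB boost centered at 1 kHz with a fixed Q of 1.41.
b, a = peaking_biquad(fs=48000, f0=1000, q=1.41, gain_db=3.0)
```

A useful sanity check of the design: at 0 dB gain the numerator and denominator coincide, so the band is an identity filter, which matches the intuition that a zero gain offset leaves the signal untouched.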
Specifically, the terminal device 200 may record, through the sound collector, preset audio data played by the terminal device 200 in the environment in which the user uses it, and perform calculation and analysis on the audio data based on a correction algorithm. Before use, the correction algorithm presets a target frequency response curve according to the number and values of the center frequencies supported by the equalizer of the current terminal device 200 model. The target frequency response curve is a preset frequency response curve of the audio signal, describing the volume of the terminal device 200 at different frequencies. During spatial sound field correction, the target frequency response curve serves as the reference standard for correcting the frequency response of the terminal device 200, enabling more accurate audio correction. After processing by the correction algorithm, a set of gain offset values corresponding to each GEQ center frequency point is obtained, and superposition adjustment can be performed on the basis of the current sound effect of the terminal device 200 based on these gain offset values, realizing sound effect correction in different usage scenarios.
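The analysis step above can be sketched as a per-band comparison: the level measured from the sampled audio at each center frequency is subtracted from the target curve, and the difference becomes the band's gain offset. This is a minimal sketch under stated assumptions; the band values, the flat target curve, and the clamping limit are hypothetical, not taken from the patent.

```python
# Minimal sketch of the correction analysis: per-band gain offsets are
# the difference between the target curve and the measured response,
# clamped to a (hypothetical) maximum adjustment range.
import numpy as np

def gain_offsets(measured_db: np.ndarray, target_db: np.ndarray,
                 max_offset_db: float = 10.0) -> np.ndarray:
    """Offset (dB) per center frequency: target minus measured, clamped."""
    offsets = target_db - measured_db
    return np.clip(offsets, -max_offset_db, max_offset_db)

# Center frequencies supported by a hypothetical model's equalizer.
centers_hz = [100, 500, 1000, 5000, 10000]
measured = np.array([-3.0, 1.0, 0.0, 4.0, -15.0])   # levels from sampled audio
target = np.zeros(5)                                # flat target curve (0 dB)
offsets = gain_offsets(measured, target)
```

Here the 100 Hz band would be boosted by 3 dB, the 5 kHz band cut by 4 dB, and the 10 kHz band boosted but clamped to the 10 dB limit; the resulting offsets are then superimposed on the current GEQ gains.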
However, the sound effects include both the basic sound effect of the movement (core) of the terminal device 200 itself and advanced sound effects provided by third parties, and some models can support a plurality of advanced sound effects at the same time. When the terminal device 200 is operated to switch among multiple sound effect types, switch among multiple advanced sound effects, adjust the equalizer in the UI, switch the sound mode, and so on, the recorded audio data is affected by the additional advanced sound effect processing, which reduces the correction effect of the spatial sound field correction function.
In order to solve the problem of poor correction effect caused by advanced sound effect processing, some embodiments of the present application provide a spatial sound field correction method for realizing accurate spatial sound field correction under different usage scenarios and improving the sound correction effect. To implement the spatial sound field correction method, the terminal device 200 should include at least an audio output interface 270, a sound collector, and a controller 250. The audio output interface 270 is configured to play audio data, the sound collector is configured to collect audio data, and the controller 250 is configured to execute the program steps corresponding to the spatial sound field correction method, as shown in fig. 5, which is a schematic flow chart of spatial sound field correction provided in an embodiment of the present application. The method specifically comprises the following steps:
S100: In response to the spatial sound field correction instruction, sampled audio is acquired.
The terminal device 200 may turn on the spatial sound field correction function in response to the spatial sound field correction instruction, so as to correct the sound effect in the current scene. The spatial sound field correction instruction can be input in various ways. For example, the user may enter the setting interface of the terminal apparatus 200 through a menu key of the control apparatus 100 and select the control corresponding to spatial sound field correction in the setting interface, so as to input a spatial sound field correction instruction to the terminal apparatus 200. The user may also press a voice key of the control apparatus 100 and speak "turn on spatial sound field correction" to input the instruction.
In response to the spatial sound field correction instruction, the terminal device 200 may obtain preset correction audio, perform basic sound effect processing on the correction audio based on the basic sound effect of the movement of the terminal device 200, and control the audio output interface 270 to play the processed correction audio. It also sends an acquisition instruction to the sound collector, so that the sound collector collects audio data while the correction audio is played by the audio output interface 270, obtaining the sampled audio.
The correction audio comprises audio data of a preset audio frequency band. In order to realize sound correction over multiple frequency bands, the correction audio may include audio data of the full frequency band (20 Hz to 20 kHz), so that the full frequency band can be analyzed and corrected, thereby realizing full-band sound effect correction.
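A full-band test stimulus of the kind described (covering 20 Hz to 20 kHz) is commonly generated as an exponential sine sweep. The following is a minimal illustrative sketch; the patent does not specify the form of the correction audio:

```python
import math

def log_sweep(f_start=20.0, f_end=20000.0, duration=5.0, fs=48000):
    """Exponential sine sweep whose instantaneous frequency rises from f_start to f_end."""
    n = int(duration * fs)
    k = math.log(f_end / f_start)
    out = []
    for i in range(n):
        t = i / fs
        # phase of an exponential sweep: integral of f_start * exp(t / duration * k)
        phase = 2 * math.pi * f_start * duration / k * (math.exp(t / duration * k) - 1)
        out.append(math.sin(phase))
    return out

sweep = log_sweep(duration=0.1)  # a short sweep for illustration
print(len(sweep))  # → 4800
```

An exponential sweep spends equal time per octave, so the low bands receive as much energy as the high bands, which suits per-band analysis.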
It should be understood that the above correction audio is audio processed only by the basic sound effect. That is, the correction audio is processed only by the built-in basic sound effect of the movement (System on a Chip, SOC) of the terminal device 200, and not by any advanced sound effect integrated in the SOC, so that the influence of advanced sound effects on the spatial sound field correction is eliminated when the sampled audio is subsequently analyzed and corrected, improving the correction effect.
S200: and generating a gain offset value of the audio frequency band according to the target frequency response curve and the sampled audio frequency.
After the sampled audio is obtained, it can be processed by the correction algorithm based on the preset target frequency response curve to obtain the gain offset value of the audio signal in each audio frequency band. The target frequency response curve is a frequency response curve of the audio signal, used to describe the volume of the terminal device 200 at different frequencies.
In spatial sound field correction, the target frequency response curve may be used as the reference standard for correcting the frequency response of the terminal device 200, so as to achieve more accurate sound correction. The target frequency response curve can be obtained according to the number of center frequencies and the specific center frequencies of the equalizer (Graphic Equalizer, Geq) supported by the current model of the terminal device 200. Therefore, a set of gain offset values, one for each Geq center frequency point, is obtained after the correction algorithm processing. A gain offset value is the amount by which the terminal device 200 offsets the gain of the corresponding center frequency point when processing an audio signal.
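The per-band gain offsets can be pictured as the difference between the target curve and the measured response at each Geq center frequency. A minimal sketch with hypothetical values, since the actual correction algorithm is not disclosed:

```python
def gain_offsets(target_db, measured_db):
    """Per-band gain offset: how far the measured response deviates from the target."""
    return [round(t - m, 1) for t, m in zip(target_db, measured_db)]

# hypothetical flat target and measured responses at five Geq center frequencies
target = [0.0, 0.0, 0.0, 0.0, 0.0]
measured = [-3.2, 1.5, 0.4, -2.0, 4.1]
print(gain_offsets(target, measured))  # → [3.2, -1.5, -0.4, 2.0, -4.1]
```

A band that measures below the target receives a positive offset (boost), and a band above the target receives a negative offset (cut).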
S300: the gain of the audio data within the audio frequency band is set based on the gain offset value.
After the gain offset values of the audio frequency bands in the current scene are obtained through the correction algorithm, the gain of the audio signal in each audio frequency band can be corrected based on the gain offset values when the terminal device 200 plays audio data, thereby improving the listening experience in the current scene. For example, in sound effect processing, if some frequencies of an audio signal are too low or too high, the frequency response can be modified by increasing or decreasing the gain offset value of the corresponding frequency, thereby improving the sound quality.
In some embodiments, the gain parameters of the equalizer may be updated based on the gain offset values to control the gain or attenuation at different frequencies of the audio signal, changing its spectral characteristics to achieve the desired audio effect. That is, in some embodiments, the terminal device 200 may, in response to a play instruction for playing target audio, obtain the gain offset values and the current gain values of the terminal device 200, and superimpose the gain values and the gain offset values to obtain correction gain values. The correction gain values are then set to the equalizer, the gain of the target audio is corrected based on the equalizer, and finally the audio output interface is controlled to play the corrected target audio, thereby realizing the spatial sound field correction.
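The superposition of the current gain values and the gain offset values can be sketched as follows; the clamping limit is an assumption, since a practical equalizer accepts gains only within a bounded range:

```python
def correction_gains(cur_geq_db, correct_geq_db, limit_db=12.0):
    """Superimpose correction offsets on the current Geq gains, clamped to a filter range."""
    out = []
    for cur, off in zip(cur_geq_db, correct_geq_db):
        g = cur + off
        out.append(max(-limit_db, min(limit_db, g)))
    return out

print(correction_gains([2.0, -1.0, 0.0], [3.0, -1.5, 11.0]))  # → [5.0, -2.5, 11.0]
```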
The implementation of the spatial sound field correction function of the present application is described below with an example. Fig. 6 is a schematic diagram of an implementation process of spatial sound field correction according to an embodiment of the present application. Wherein solid arrows represent audio data flow, and dashed arrows represent control commands.
The user starts the spatial sound field correction function by operating the system application provided by the terminal device 200, and the application sends an instruction 1 to the player to cause the player to play preset correction audio, wherein the correction audio covers audio data of a full frequency band (20 Hz-20 kHz).
The application sends instruction 2 to the Bluetooth module to pair it with the Bluetooth remote controller. The Bluetooth module starts the microphone of the Bluetooth remote controller (gattmic) through instruction 3, pairs the Bluetooth remote controller with the terminal device 200 through instruction 4, and updates the Bluetooth status to connected after the pairing connection succeeds.
After the pairing connection, the Bluetooth remote controller records the correction audio played by the player in the current environment through the instruction 5.
The Bluetooth remote controller decodes the recorded audio data and returns it to the terminal device 200 through instruction 6; the decoded audio data is stored as a pcm file under an agreed path, for example: data/local/MIsc/remote.
The audio digital signal processing module (Audio Video Media Solution, avmw) reads the audio data from the pcm file under the agreed path through instruction 7, and sends the read audio data to the correction algorithm module for analysis and correction through instruction 8.
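Reading the recorded pcm data back into memory, as the avmw module does in instruction 7, can be sketched as follows. The 16-bit little-endian mono sample format and the demo file name are assumptions; the patent only specifies that the data is stored as a pcm file:

```python
import struct

def read_pcm16(path):
    """Read signed 16-bit little-endian mono pcm samples and normalize to [-1.0, 1.0]."""
    with open(path, "rb") as f:
        raw = f.read()
    count = len(raw) // 2
    samples = struct.unpack("<%dh" % count, raw[:count * 2])
    return [s / 32768.0 for s in samples]

# write a tiny demo pcm file, then read it back
with open("demo.pcm", "wb") as f:
    f.write(struct.pack("<4h", 0, 16384, -16384, 32767))
print(read_pcm16("demo.pcm"))  # → [0.0, 0.5, -0.5, 0.999969482421875]
```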
The correction algorithm module analyzes and corrects the audio data, and sends the resulting gain offset values for the set of Geq center frequency points to the avmw module through instruction 9.
The avmw module performs superposition processing on the corresponding Geq center frequency point gain based on the gain offset value, and sets the correction gain value obtained after the superposition processing to the Geq module corresponding to the terminal device 200 through the instruction 10, thereby realizing the spatial sound field correction effect.
Fig. 7 is a schematic diagram of an audio path according to an embodiment of the present application. Dmp, Hdmi, Dtv, and Atv are audio data source terminals, and the source selection module (Source Select) selects among the different audio data source terminals as required. The audio data input from the audio data source terminal first undergoes SOC mixing processing: for example, the volume is turned off or on through the volume control (Volume), and analog audio signals are converted into digital audio signals and mixed with other digital audio signals through the Mixer Pcm module.
Then, the audio after SOC mixing enters the audio processing module of the SOC and is processed in sequence by the automatic volume control (Automatic Volume Control, avc) of the basic sound effect, the third-party advanced sound effects (such as dolby, dtsvx, dbx, self-developed sound effects, and the like), the equalizer of the basic sound effect, sound balance (Balance), and so on. In this process, the spatial sound field correction module superimposes the Geq center frequency point gain offset values obtained by spatial sound field correction on the Geq center frequency point gain values of the current sound effect. Finally, the audio data undergoes gain adjustment, D/A conversion, and other processing by the digital power amplifier (AMP), and is sent to the speaker (Speaker) for sounding.
Fig. 7 shows two equalizers: a basic equalizer and an advanced equalizer. The advanced equalizer adjusts the audio gain of an advanced sound effect, such as the dolby audio processing (Dolby Audio Processing, Dap) shown in fig. 7, i.e., the Geq of the dolby sound effect. The basic equalizer adjusts the audio gain of the basic sound effect as well as that of advanced sound effects, shown in fig. 7 as the Geq of the basic sound effect. When spatial sound field correction is performed, the gain offset value of each Geq center frequency point obtained by the correction algorithm can be superimposed on the gain value of the corresponding Geq center frequency point, and the correction gain values obtained after superposition are updated to the Geq.
As shown in fig. 7, the terminal device 200 includes a plurality of advanced sound effects; some advanced sound effects have their own equalizer, such as the Dap of the dolby sound effect, while others do not, such as Dtsvx and Dbx. Accordingly, the terminal device 200 may, in response to a play instruction for playing target audio, detect the current sound effect and its sound effect type. If the sound effect type does not have its own equalizer, the gain of the target audio is corrected using the basic equalizer. If the sound effect type has its own equalizer, the gain of the target audio is corrected using that advanced equalizer.
When the current sound effect is an advanced sound effect that does not have its own equalizer, the gain of the target audio may also be corrected based on the advanced equalizer of another advanced sound effect. That is, the terminal device 200 may, in response to a play instruction for playing target audio, detect the current sound effect and the equalizers supported by the terminal device 200; if an advanced equalizer exists and the current sound effect is an advanced sound effect, the correction gain values are set to the advanced equalizer and the gain of the target audio is corrected based on the advanced equalizer.
For example, if the current sound effect is the basic sound effect avc, the gain of the target audio is corrected based on the basic equalizer. If the current sound effect is the dolby sound effect, the gain of the target audio is corrected based on the advanced equalizer (Dap). If the current sound effect is the dbx sound effect, the gain of the target audio may be corrected based on either the basic equalizer or the advanced equalizer (Dap).
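The selection between the basic equalizer and an advanced equalizer can be sketched as a simple dispatch; the effect names and the set of effects that have their own Geq are illustrative:

```python
def select_equalizer(cur_effect, effects_with_geq=("dolby",)):
    """Choose which Geq applies the correction gains: an advanced sound effect with its own
    equalizer (e.g. Dolby's Dap) uses it; otherwise fall back to the basic SOC equalizer."""
    if cur_effect in effects_with_geq:
        return "advanced"  # e.g. the Dap Geq of the dolby sound effect
    return "basic"         # the SOC basic Geq (also covers dtsvx, dbx)

print(select_equalizer("dolby"))  # → advanced
print(select_equalizer("dbx"))    # → basic
```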
The specific steps of the spatial sound field correction method of the present application will be described below with reference to the above examples.
The terminal device 200 enters the spatial sound field correction program in response to the spatial sound field correction instruction. Fig. 8 is a schematic flow chart of triggering entry into spatial sound field correction according to an embodiment of the application. After the user inputs a start instruction for spatial sound field correction by operating the system application provided by the terminal device 200, the terminal device 200 may detect whether the current audio output device is the built-in speaker, whether the spatial sound field correction switch is turned on, and whether the Bluetooth remote controller is paired with the terminal device 200; if all of the above conditions are satisfied, it enters the spatial sound field correction program.
To improve the user experience, when the user starts spatial sound field correction for the first time, the terminal device 200 may be triggered to pair with the Bluetooth remote controller, and automatically enter the spatial sound field correction program after the pairing connection succeeds.
After the terminal device 200 enters the spatial sound field correction program, in order to avoid the influence of the recorded audio on the correction effect after the advanced sound effect processing, the terminal device 200 may detect whether the advanced sound effect is currently turned on before playing the corrected audio, and if so, turn off the advanced sound effect.
To this end, the terminal device 200 may detect the sound effect parameter of the advanced sound effect module, which performs advanced sound effect processing on the audio data. The sound effect parameter takes one of two values: a first sound effect parameter indicating that the advanced sound effect module is turned on, and a second sound effect parameter indicating that it is turned off.
If the sound effect parameter of the advanced sound effect module is the first sound effect parameter, the advanced sound effect is currently on, and the parameter is set to the second sound effect parameter to turn it off. If it is already the second sound effect parameter, the advanced sound effect is not on and no processing is performed.
In some embodiments, in order to collect the sampled audio, it is also necessary to ensure that the terminal device 200 is not muted. That is, before playing the correction audio, whether the terminal device 200 is muted must be detected, and if so, it must be unmuted. The sound parameter of the terminal device 200 can be read to determine whether it is muted, where the sound parameter includes a first sound parameter for characterizing mute and a second sound parameter for characterizing non-mute.
If the sound parameter is the first sound parameter, the device is currently muted, and the parameter is set to the second sound parameter to unmute it. If it is the second sound parameter, the device is not muted and no processing is performed.
To improve the quality of the collected audio, the volume value of the terminal device 200 may also be set to a correction volume value. The volume value represents the playback volume level. The correction volume value can be obtained from the accompanying sound curve of the current model of the terminal device 200; the accompanying sound curve describes the relationship between frequency and volume of the audio signal and reflects the degree of distortion of the audio signal at different volumes. Accordingly, after the terminal device 200 is unmuted, its volume value may be detected, the accompanying sound curve of the terminal device 200 acquired, and a correction volume value generated from the curve. If the current volume value is not equal to the correction volume value, the volume value of the terminal device 200 is set to the correction volume value; otherwise, no processing is performed. By playing the correction audio at the correction volume value, the played audio is clear and distinguishable, which facilitates collection and analysis.
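One plausible way to derive a correction volume value from an accompanying sound curve is to pick the loudest volume step whose distortion stays within a limit. This selection rule, the data, and the threshold are all hypothetical; the patent does not disclose how the curve is evaluated:

```python
def correction_volume(sound_curve, max_distortion_db=1.0):
    """Highest volume step whose distortion, per the accompanying sound curve, is in limit."""
    acceptable = [vol for vol, dist in sound_curve.items() if dist <= max_distortion_db]
    return max(acceptable) if acceptable else min(sound_curve)

# hypothetical curve: volume step -> distortion in dB
curve = {10: 0.2, 15: 0.6, 20: 1.4, 25: 2.8}
print(correction_volume(curve))  # → 15
```

A rule of this shape matches the example correction volume Vol = 15 used later in the description, where loudness is traded against distortion so the recording stays clear.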
For example, fig. 9 is a schematic diagram of the correction environment preparation provided in an embodiment of the present application. After the terminal device 200 enters the spatial sound field correction program, the preparation of the correction environment is completed in the avmw module:
First, the avmw module may call the database through its interface to obtain the stored volume parameter CurVolume and the currently set advanced sound effect object CurEffect of the terminal device 200. CurVolume represents the current volume value of the terminal device 200; it is an adjustable parameter, and the volume of the audio played by the terminal device 200 can be controlled by adjusting it. CurEffect is an object or variable representing the current advanced sound effect, containing related information such as its type, parameters, and settings; the properties and behavior of the advanced sound effect can be controlled and managed through CurEffect.
The avmw module determines, from the acquired CurVolume, whether the SOC of the current terminal device 200 is muted; if so, it unmutes the SOC and sets the volume value to the correction volume value Vol, for example Vol = 15.
The avmw module obtains the type of the current advanced sound effect from the acquired CurEffect, and sets an Enable = 0 command to the corresponding advanced sound effect module of the SOC to turn the advanced sound effect off. Enable is a variable or parameter that controls whether the advanced sound effect module is on; setting it to 0 turns the module off or disables it. This prevents the audio recorded by the Bluetooth remote controller from being processed by advanced sound effects and affecting the correction.
At this point, the spatial sound field correction environment is ready, and the avmw module may return a value of 0 to the application, indicating that the correction environment is prepared and spatial sound field correction can be performed.
As shown in fig. 10, in the correction analysis stage of spatial sound field correction, after receiving the return value 0 from the avmw module's correction-environment preparation interface, the application calls the player to play the correction audio preset in the terminal device 200 through the speaker. The application then starts the recording function of the Bluetooth remote controller, records the played audio data through it, decodes the recording into pcm format, and stores the pcm audio data as a file under an agreed readable and writable directory in the terminal device 200.
After detecting that the pcm file exists in the agreed directory, the avmw module reads the pcm data into memory and copies it to the correction algorithm module for analysis and calculation. After all of the recorded audio data has been analyzed, the correction algorithm outputs a group of gain offset values correctGeqdb[bandnum] corresponding to the Geq center frequency points, where bandnum is the number of center frequencies. The avmw module copies this group of data from memory and sets it into the Geq module of the terminal device 200, thereby achieving the spatial sound field correction effect.
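The analysis of the recorded audio must estimate the level at each Geq center frequency. As an illustrative stand-in for the undisclosed correction algorithm, the Goertzel algorithm measures the level of a single frequency component cheaply:

```python
import math

def goertzel_db(samples, fs, f):
    """Level in dB of the component at frequency f, via the Goertzel recurrence."""
    w = 2 * math.pi * f / fs
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared DFT-bin magnitude, scaled so the result reads as sine amplitude in dB
    power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
    return 10 * math.log10(power / len(samples) ** 2 * 4 + 1e-12)

fs = 48000
tone = [0.5 * math.sin(2 * math.pi * 1500 * i / fs) for i in range(4800)]
print(round(goertzel_db(tone, fs, 1500)))   # strong at the tone frequency (about -6 dB)
print(round(goertzel_db(tone, fs, 10000)))  # near the noise floor elsewhere
```

Running one such measurement per Geq center frequency on the recorded sweep yields the measured response that the gain offsets are computed from.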
As shown in fig. 11 and 12, in the correction effect validation stage of spatial sound field correction, the correction effect is realized mainly by logic judgment in the avmw module and calls to the other modules:
First, the avmw module calls the database through its interface to obtain the current sound mode CurSndMode of the terminal device 200 and the current advanced sound effect object CurEffect. The sound mode CurSndMode includes the current sound effect processing, equalizer settings, and so on of the terminal device 200.
If CurEffect is the Dolby sound effect, the gain values curGeqdb[bandnum] of the Geq center frequency points corresponding to the current sound mode of the terminal device 200 are obtained from memory, and the gain offset values correctGeqdb[bandnum] of the Geq center frequency points obtained by analysis and correction are superimposed on them to obtain newGeqdb[bandnum], that is, newGeqdb[bandnum] = curGeqdb[bandnum] + correctGeqdb[bandnum]. The avmw module then sets the tDapGeqEnable parameter in the Dolby algorithm to 1 to make it take effect, and updates newGeqdb[bandnum] to the Geq module of Dolby, i.e., the Dap of the Dolby sound effect shown in fig. 7.
If CurEffect is the Dts sound effect, the number bandnum of Geq frequency points currently supported by the terminal device 200 and the specific center frequencies are obtained from memory, the gain values curGeqdb[bandnum] of the Geq center frequency points are obtained from the database according to CurSndMode, and the gain offset values correctGeqdb[bandnum] obtained by analysis and correction are superimposed on them to obtain newGeqdb[bandnum], that is, newGeqdb[bandnum] = curGeqdb[bandnum] + correctGeqdb[bandnum] × 5, where 5 is the conversion factor between the Dts sound effect and the basic sound effect. The avmw module then sets the stBassicSndGeqEnable parameter of the SOC basic sound effect to 1, sets the tDapGeqEnable parameter in the Dolby algorithm to 0, and updates newGeqdb[bandnum] to the Geq module of the SOC, i.e., the basic equalizer shown in fig. 7.
If CurEffect is the Dts sound effect, the correction can also be implemented based on the Dolby Geq module. The gain values curGeqdb[bandnum] of the Geq center frequency points corresponding to the current sound mode are obtained from the database according to CurSndMode, and the gain offset values correctGeqdb[bandnum] obtained by analysis and correction are superimposed on them to obtain newGeqdb[bandnum]. The avmw module then sets the tDapGeqEnable parameter in the Dolby algorithm to 1 and updates newGeqdb[bandnum] to the Geq module of Dolby.
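The Dolby and Dts branches above can be sketched as one dispatch. The flag name and the factor 5 follow the description, while the module names and the returned data structure are illustrative:

```python
def effectuate(cur_effect, cur_geq_db, correct_geq_db, dts_scale=5.0):
    """Route the correction gains to the proper Geq module, per the Dolby/Dts branches."""
    if cur_effect == "dolby":
        new_db = [c + o for c, o in zip(cur_geq_db, correct_geq_db)]
        return {"module": "dolby_dap", "tDapGeqEnable": 1, "geq_db": new_db}
    # Dts routed through the SOC basic equalizer, with the Dts-to-basic conversion factor
    new_db = [c + o * dts_scale for c, o in zip(cur_geq_db, correct_geq_db)]
    return {"module": "soc_basic", "tDapGeqEnable": 0, "geq_db": new_db}

print(effectuate("dts", [0.0, 1.0], [0.2, -0.4]))
# → {'module': 'soc_basic', 'tDapGeqEnable': 0, 'geq_db': [1.0, -1.0]}
```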
In some embodiments, different audio coding formats have different characteristics, and the terminal device 200 may select a different sound effect processing method according to the audio coding format to ensure the best audio quality and listening experience. Accordingly, the terminal device 200 can, in response to a play instruction for playing target audio, acquire the audio coding format of the target audio and set the sound effect of the terminal device 200 based on that format.
For example, if CurEffect is the automatic sound effect, the avmw module may obtain the coding format (audiotypes) of the currently played audio from the decoder. If the format is an audio coding format such as ac3, eac3, or ac4, the sound effect is set to the Dolby sound effect, so that the target audio is processed according to the spatial sound field correction procedure of the Dolby sound effect. If the format is Dts, Dtshd, Dtsx, or the like, the sound effect is set to the Dts sound effect, so that the target audio is processed according to the spatial sound field correction procedure of the Dts sound effect.
It should be noted that the above spatial sound field correction requires the correction algorithm, the Dap, and the SOC to use a consistent number of frequency bands (bandnum) and the same frequency bands. For example, embodiments of the present application provide two sets of Geq parameter settings. First set: number of frequency bands bandnum = 5; center frequency points: 120 Hz, 500 Hz, 1.5 kHz, 5 kHz, 10 kHz; upper and lower limit frequencies of the center frequency points: 0-235 Hz, 235-845 Hz, 845 Hz-2.5 kHz, 2.5-6.8 kHz, 6.8-20 kHz. Second set: number of frequency bands bandnum = 7; center frequency points: 100 Hz, 250 Hz, 600 Hz, 1 kHz, 2.5 kHz, 6 kHz, 10 kHz; upper and lower limit frequencies of the center frequency points: 0-235 Hz, 235-420 Hz, 420-890 Hz, 890 Hz-1.7 kHz, 1.7-3.4 kHz, 3.4-6.8 kHz, 6.8-20 kHz.
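The two Geq parameter sets can be held in a small lookup table, for example to map an arbitrary frequency to its band index. The table reproduces the values above; the helper function is illustrative:

```python
# The two Geq parameter sets from the description, as a lookup table keyed by bandnum.
GEQ_PRESETS = {
    5: {"centers_hz": [120, 500, 1500, 5000, 10000],
        "edges_hz": [0, 235, 845, 2500, 6800, 20000]},
    7: {"centers_hz": [100, 250, 600, 1000, 2500, 6000, 10000],
        "edges_hz": [0, 235, 420, 890, 1700, 3400, 6800, 20000]},
}

def band_of(freq_hz, bandnum=5):
    """Index of the Geq band whose upper/lower limit frequencies contain freq_hz."""
    edges = GEQ_PRESETS[bandnum]["edges_hz"]
    for i in range(len(edges) - 1):
        if edges[i] <= freq_hz < edges[i + 1]:
            return i
    raise ValueError("frequency outside the Geq range")

print(band_of(1000))     # → 2 (the 845 Hz-2.5 kHz band)
print(band_of(1000, 7))  # → 3 (the 890 Hz-1.7 kHz band)
```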
To improve the user experience, after spatial sound field correction is completed, the user settings of the terminal device 200 from before the correction need to be restored: the avmw module resets the SOC according to the volume CurVolume and the advanced sound effect object CurEffect stored in memory from before the correction, so that the user settings of the terminal device 200 are consistent before and after spatial sound field correction. That is, in some embodiments, the terminal device 200 records the current setting information before performing spatial sound field correction, including the volume value and the advanced sound effect object characterizing the advanced sound effect module currently applied to the audio data.
After the terminal device 200 performs spatial sound field correction, it acquires its current setting information and then the recorded historical setting information, i.e., the setting information recorded before the correction was performed. If the current setting information differs from the historical setting information, the current setting information of the terminal device 200 is set according to the historical setting information to restore the user settings of the terminal device 200.
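The save-and-restore of user settings around the correction can be sketched as follows; a dictionary stands in for the device state, while CurVolume and CurEffect follow the description:

```python
def snapshot(device):
    """Record the user settings that spatial sound field correction temporarily changes."""
    return {"CurVolume": device["CurVolume"], "CurEffect": device["CurEffect"]}

def restore(device, saved):
    """Put the settings back after correction so the user sees no change."""
    for key, value in saved.items():
        if device.get(key) != value:  # only rewrite settings that actually differ
            device[key] = value
    return device

dev = {"CurVolume": 30, "CurEffect": "dolby"}
saved = snapshot(dev)
dev.update({"CurVolume": 15, "CurEffect": None})  # correction temporarily overrides them
print(restore(dev, saved))  # → {'CurVolume': 30, 'CurEffect': 'dolby'}
```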
Based on the above spatial sound field correction method, some embodiments of the present application also provide a terminal device 200, comprising: an audio output interface 270, a sound collector, and a controller 250. The audio output interface 270 is configured to play audio data; the sound collector is configured to collect audio data; and the controller 250 is configured to perform the following program steps:
S100: In response to a spatial sound field correction instruction, sampled audio is acquired.
The sampled audio is audio data collected when the sound collector plays preset correction audio through the audio output interface 270, the correction audio is audio processed through basic audio effects, and the correction audio includes audio data of a preset audio frequency band.
S200: and generating a gain offset value of the audio frequency band according to the preset target frequency response curve and the sampled audio frequency.
The target frequency response curve is a frequency response curve of the audio signal.
S300: the gain of the audio data within the audio frequency band is set based on the gain offset value.
The same and similar parts of the embodiments in this specification are referred to each other, and are not described herein.
As can be seen from the above technical solutions, the terminal device and the spatial sound field correction method provided in the foregoing embodiments may obtain sampled audio in response to a spatial sound field correction instruction, where the sampled audio is audio data collected by the sound collector while preset correction audio is played at the audio output interface, the correction audio is audio processed only through the basic sound effect, and the correction audio includes audio data of a preset audio frequency band. A gain offset value of the audio frequency band is then generated according to the preset target frequency response curve and the sampled audio, and finally the gain of the audio data in the audio frequency band is set based on the gain offset value. By collecting preset audio processed only through the basic sound effect, and analyzing and correcting it to obtain the gain offset values, the method solves the problem of poor correction effect caused by advanced sound effect processing.
It will be apparent to those skilled in the art that the techniques of embodiments of the present invention may be implemented in software plus a necessary general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied essentially or in parts contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments or some parts of the embodiments of the present invention.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description has, for purposes of explanation, been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles and their practical application, thereby enabling others skilled in the art to best utilize the various embodiments, with such modifications as are suited to the particular use contemplated.

Claims (10)

1. A terminal device, comprising:
an audio output interface configured to play audio data;
a sound collector configured to collect audio data;
a controller configured to:
in response to a spatial sound field correction instruction, acquiring sampled audio, wherein the sampled audio is audio data collected by the sound collector while the audio output interface plays preset correction audio, the correction audio is audio processed only through basic sound effects, and the correction audio comprises audio data of a preset audio frequency band;
generating a gain offset value of the audio frequency band according to a preset target frequency response curve and the sampled audio, wherein the target frequency response curve is a frequency response curve of an audio signal;
and setting the gain of the audio data in the audio frequency band based on the gain offset value.
2. The terminal device of claim 1, wherein, to acquire the sampled audio, the controller is further configured to:
acquiring preset correction audio;
performing basic sound effect processing on the corrected audio;
controlling the audio output interface to play the corrected audio subjected to basic sound effect processing;
and sending an acquisition instruction to the sound collector, so that the sound collector collects audio data while the audio output interface plays the correction audio, thereby obtaining the sampled audio.
3. The terminal device of claim 2, wherein the controller is further configured to:
the method comprises the steps of reading sound parameters of the terminal equipment, wherein the sound parameters comprise a first sound parameter and a second sound parameter, the first sound parameter is used for representing a mute state, and the second sound parameter is used for representing a non-mute state;
and if the sound parameter is the first sound parameter, setting the sound parameter as a second sound parameter.
4. The terminal device of claim 3, wherein the controller is further configured to:
detecting a volume value of the terminal device, wherein the volume value indicates a volume level at which audio data is played;
acquiring an accompanying sound curve of the terminal device;
generating a correction volume value according to the accompanying sound curve of the terminal device;
and if the volume value is not equal to the correction volume value, setting the volume value of the terminal device to the correction volume value.
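The correction-volume step in claim 4 (deriving a measurement volume from the device's accompanying sound curve) might be sketched as a nearest-match lookup; the curve data, SPL values, and selection rule below are illustrative assumptions, not taken from the patent:

```python
def correction_volume(curve, target_spl_db):
    """Choose the volume level whose output loudness, per the device's
    accompanying sound curve, is closest to the desired measurement level.
    curve: list of (volume_level, output_spl_db) pairs - hypothetical data."""
    return min(curve, key=lambda point: abs(point[1] - target_spl_db))[0]

# Hypothetical volume-to-SPL curve for the device.
curve = [(10, 55.0), (20, 62.0), (30, 68.0), (40, 73.0), (50, 77.0)]
print(correction_volume(curve, 70.0))  # prints 30 (68 dB is closest to 70 dB)
```

If the detected volume value differs from the result, the device would be set to this correction volume before the calibration audio is played.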
5. The terminal device of claim 1, wherein the controller is further configured to:
the method comprises the steps that the sound effect parameters of a high-level sound effect module are detected, the high-level sound effect module is used for executing high-level sound effect processing on audio data, the sound effect parameters comprise first sound effect parameters and second sound effect parameters, the first sound effect parameters are used for representing that the high-level sound effect module is started, and the second sound effect parameters are used for representing that the high-level sound effect module is closed;
And if the sound effect parameter of the advanced sound effect module is the first sound effect parameter, setting the sound effect parameter as the second sound effect parameter.
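The pre-measurement setup in claims 3 and 5 (unmuting the device and switching the advanced sound effect module off so the correction audio passes through basic processing only) could look roughly like this; the settings dictionary and its field names are hypothetical, not a real device API:

```python
def prepare_for_measurement(settings):
    """Snapshot the current settings, then unmute the device and turn the
    advanced sound effect module off so the correction audio is processed
    through basic sound effects only. `settings` is a hypothetical dict."""
    snapshot = dict(settings)  # retained so settings can be restored later
    settings["muted"] = False
    settings["advanced_effect_enabled"] = False
    return snapshot

settings = {"muted": True, "advanced_effect_enabled": True, "volume": 25}
saved = prepare_for_measurement(settings)
print(settings["muted"], settings["advanced_effect_enabled"])  # False False
```

The snapshot corresponds to the "historical setting information" that claim 9 later uses to restore the device's state.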
6. The terminal device of claim 1, wherein the controller is further configured to:
in response to a play instruction for playing target audio, acquiring the gain offset value and a current gain value of the terminal device;
superposing the gain value and the gain offset value to obtain a correction gain value;
updating the correction gain value to an equalizer;
correcting a gain of the target audio based on the equalizer;
and controlling the audio output interface to play the corrected target audio.
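The gain superposition in claim 6 (current gain plus stored gain offset, with the result written to the equalizer) reduces to a per-band addition; the band count and dB values below are illustrative:

```python
def corrected_gains(current_gains, gain_offsets):
    """Superpose the stored per-band gain offset values onto the current
    equalizer gains to obtain the correction gains pushed to the equalizer.
    Band count and dB values are illustrative."""
    return [gain + offset for gain, offset in zip(current_gains, gain_offsets)]

eq_gains = corrected_gains([0.0, 2.0, -1.0], [3.0, -1.5, 0.5])
print(eq_gains)  # [3.0, 0.5, -0.5]
```

The equalizer then applies these correction gains to the target audio before it reaches the audio output interface.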
7. The terminal device of claim 6, wherein the controller is further configured to:
detecting a current sound effect and the supported equalizers, wherein the equalizers comprise a basic equalizer and an advanced equalizer, the basic equalizer being configured to adjust the audio gain of basic sound effects and the audio gain of advanced sound effects, and the advanced equalizer being configured to adjust the audio gain of advanced sound effects;
if the advanced equalizer is supported and the current sound effect is an advanced sound effect, setting the correction gain value to the advanced equalizer, and correcting the gain of the target audio based on the advanced equalizer.
8. The terminal device of claim 7, wherein the controller is further configured to:
detecting an audio encoding format of the target audio;
and setting a sound effect of the terminal device based on the audio encoding format.
9. The terminal device of claim 1, wherein the controller is further configured to:
acquiring current setting information of the terminal device after the gain offset value of the audio frequency band is obtained;
acquiring recorded historical setting information, wherein the historical setting information is setting information recorded before the spatial sound field correction is executed, the historical setting information comprises a volume value and an advanced sound effect object, and the advanced sound effect object indicates the advanced sound effect module currently applied to the audio data;
and if the current setting information is different from the historical setting information, setting the current setting information of the terminal equipment according to the historical setting information.
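The restoration step in claim 9 (comparing current settings with the recorded historical settings and writing back any differences) can be sketched as follows; the field names are assumptions:

```python
def restore_settings(current, history):
    """Compare the current settings with the settings recorded before the
    spatial sound field correction, and write back any fields that differ.
    Field names (volume, advanced_effect) are hypothetical."""
    changed = {key: value for key, value in history.items()
               if current.get(key) != value}
    current.update(changed)
    return changed

current = {"volume": 40, "advanced_effect": None}
history = {"volume": 25, "advanced_effect": "surround"}
restore_settings(current, history)
print(current)
```

This returns the device to the volume and advanced sound effect configuration the user had before the correction procedure temporarily changed them.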
10. A spatial sound field correction method, applied to a terminal device, wherein the terminal device comprises an audio output interface, a sound collector, and a controller; the audio output interface is configured to play audio data; the sound collector is configured to collect audio data; and the method comprises:
in response to a spatial sound field correction instruction, acquiring sampled audio, wherein the sampled audio is audio data collected by the sound collector while the audio output interface plays preset correction audio, the correction audio is audio processed only through basic sound effects, and the correction audio comprises audio data of a preset audio frequency band;
generating a gain offset value of the audio frequency band according to a preset target frequency response curve and the sampled audio, wherein the target frequency response curve is a frequency response curve of an audio signal;
and setting the gain of the audio data in the audio frequency band based on the gain offset value.
CN202311747134.5A 2023-12-19 2023-12-19 Terminal equipment and space sound field correction method Pending CN117896658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311747134.5A CN117896658A (en) 2023-12-19 2023-12-19 Terminal equipment and space sound field correction method


Publications (1)

Publication Number Publication Date
CN117896658A 2024-04-16

Family

ID=90646429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311747134.5A Pending CN117896658A (en) 2023-12-19 2023-12-19 Terminal equipment and space sound field correction method

Country Status (1)

Country Link
CN (1) CN117896658A (en)

Similar Documents

Publication Publication Date Title
US11527243B1 (en) Signal processing based on audio context
CN109716780B (en) Electronic device and control method thereof
EP2737692B1 (en) Control device, control method and program
US20190098428A1 (en) Playback Device Calibration
US20080070616A1 (en) Mobile Communication Terminal with Improved User Interface
TWI747031B (en) Video playback method, device and multimedia data playback method
CN112995551A (en) Sound control method and display device
CN112612443A (en) Audio playing method, display device and server
CN106453032B (en) Information-pushing method and device, system
WO2022078065A1 (en) Display device resource playing method and display device
US9214914B2 (en) Audio device control program, mobile telephone, recording medium, and control method
CN117896658A (en) Terminal equipment and space sound field correction method
CN111263223A (en) Media volume adjusting method and display device
US20100111320A1 (en) Acoustic system and update method of the acoustic system
CN113096681B (en) Display device, multi-channel echo cancellation circuit and multi-channel echo cancellation method
CN115550825A (en) Display device, hearing aid and volume adjustment method
CN111883152B (en) Audio signal processing method and electronic equipment
CN115359788A (en) Display device and far-field voice recognition method
CN211860471U (en) Intelligent sound box
CN112104950B (en) Volume control method and display device
CN105187757B (en) Method and device for displaying equipment terminal state
CN113938634A (en) Multi-channel video call processing method and display device
CN113115105B (en) Display device and prompt method for configuring WISA speaker
KR102265583B1 (en) Method for standardizing volume of sound source, device, and method of display and operation
CN117075837A (en) Display equipment and volume adjusting method of eARC peripheral equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination