CN117294880A - Display device and sound effect processing method - Google Patents


Publication number
CN117294880A
Authority
CN
China
Prior art keywords
audio data
sound effect
intermediate audio
chip
stage
Prior art date
Legal status (assumed, not a legal conclusion): Pending
Application number
CN202211508371.1A
Other languages
Chinese (zh)
Inventor
王光强
肖兵
刘盛鉴
Current Assignee (listed assignee may be inaccurate)
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd
Priority to CN202211508371.1A
Publication of CN117294880A

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/41 — Structure of client; structure of client peripherals
    • H04N 21/4104 — Peripherals receiving signals from specially adapted client devices
    • H04N 21/422 — Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42203 — Input-only peripherals: sound input device, e.g. microphone
    • H04N 21/47 — End-user applications
    • H04N 21/485 — End-user interface for client configuration
    • H04N 21/4852 — End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo

Abstract

The application discloses a display device and a sound effect processing method. The display device comprises a system-on-chip, a sound effect chip, and a controller. The system-on-chip performs first sound effect processing on initial audio data to determine intermediate audio data. The controller determines the acquisition frequency of the intermediate audio data and, after the system-on-chip transmits the intermediate audio data to the sound effect chip, determines an adjustment stage for the intermediate audio data based on the acquisition frequency and a preset audio delay condition corresponding to the application that supplied the initial audio data. Based on the adjustment stage, the controller determines the function authority for performing second sound effect processing on the intermediate audio data and triggers the sound effect chip to perform the corresponding second sound effect processing according to that authority. The sound effect chip thus processes the audio data according to the specific condition of the intermediate audio data, avoiding delay caused by sound effect processing and improving the user experience.

Description

Display device and sound effect processing method
Technical Field
The present application relates to the field of display technology, and in particular to a display device and a sound effect processing method.
Background
With the development of technology, the functions of display devices have become increasingly diversified, and the functions they can provide to users increasingly rich. Display devices include smart televisions, smartphones, and other products with a display screen. Taking the smart television as an example, it can serve as a device for watching video programs, and can also provide users with functions such as video calls, karaoke, games, and learning.
In the related art, the display device processes the sound effect data of each application mainly through a chip, handling each item of sound effect data to be processed in order within a thread. However, the chip's resources are limited. If there is too much sound effect data to process, or an application has strict real-time requirements on sound effect processing (for example, in a karaoke application, the voice data received by the chip must be processed with low latency so that it stays matched to the accompaniment data provided by the application), the sound effect processing is delayed and the user experience suffers.
Disclosure of Invention
The present application provides a display device and a sound effect processing method, which can alleviate the technical problem that a display device's chip-based sound effect processing of each application's data is delayed.
In a first aspect, some embodiments of the present application provide a display device, comprising a display, a system-on-chip, a sound effect chip, and a controller, wherein:
the system-on-chip is configured to perform first sound effect processing on initial audio data to determine intermediate audio data;
the sound effect chip is configured to perform second sound effect processing on the intermediate audio data;
the controller is in communication with the system-on-chip and the sound effect chip, and the controller is configured to:
determine the acquisition frequency of the intermediate audio data;
after the system-on-chip transmits the intermediate audio data to the sound effect chip, determine an adjustment stage for the intermediate audio data based on the acquisition frequency and a preset audio delay condition corresponding to the application that supplied the initial audio data;
based on the adjustment stage, determine the function authority for performing second sound effect processing on the intermediate audio data, and trigger the sound effect chip to perform the corresponding second sound effect processing on the intermediate audio data according to the function authority.
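As a rough illustration of the controller logic above, the following sketch maps the acquisition interval of intermediate audio data against an application's preset delay budget to pick an adjustment stage, and derives the function authority granted for the second sound effect processing. All names, thresholds, and effect lists are illustrative assumptions, not values taken from this application.

```python
# Hypothetical sketch: stage names, the 50% threshold, and the effect lists
# are assumptions chosen for illustration only.

def determine_adjustment_stage(acquisition_interval_ms: float,
                               delay_budget_ms: float) -> str:
    """Compare how long acquiring intermediate audio takes against the
    application's delay budget, and pick an adjustment stage."""
    headroom = delay_budget_ms - acquisition_interval_ms
    if headroom <= 0:
        return "bypass"        # budget exhausted: skip second sound effect processing
    if headroom < delay_budget_ms * 0.5:
        return "reduced"       # tight budget: essential effects only
    return "full"              # ample budget: full second sound effect processing

def function_authority(stage: str) -> list[str]:
    """Return the effect functions the sound effect chip is permitted to run
    for a given adjustment stage."""
    table = {
        "bypass":  [],
        "reduced": ["echo_cancellation", "noise_reduction"],
        "full":    ["echo_cancellation", "noise_reduction", "sound_enhancement",
                    "automatic_gain", "howling_suppression"],
    }
    return table[stage]
```

Under these assumptions, a delay-sensitive application with little headroom would see the sound effect chip's feature set trimmed rather than the audio output delayed.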
In a second aspect, some embodiments of the present application provide a sound effect processing method applied to a display device, where the display device comprises a system-on-chip, a sound effect chip, and a controller, the method comprising the following steps:
determining the acquisition frequency of intermediate audio data, where the intermediate audio data is determined by the system-on-chip performing first sound effect processing on initial audio data;
after the system-on-chip transmits the intermediate audio data to the sound effect chip, determining an adjustment stage for the intermediate audio data based on the acquisition frequency and a preset audio delay condition corresponding to the application that supplied the initial audio data;
based on the adjustment stage, determining the function authority for performing second sound effect processing on the intermediate audio data, and triggering the sound effect chip to perform the corresponding second sound effect processing on the intermediate audio data according to the function authority.
Some embodiments of the present application provide a display device and a sound effect processing method. The system-on-chip of the display device performs first sound effect processing on initial audio data to determine intermediate audio data; the acquisition frequency of the intermediate audio data is then determined, and after the system-on-chip transmits the intermediate audio data to the sound effect chip, an adjustment stage for the intermediate audio data can be determined based on the acquisition frequency and a preset audio delay condition corresponding to the application that supplied the initial audio data. Based on the adjustment stage, the function authority for performing second sound effect processing on the intermediate audio data is determined, and the sound effect chip is triggered to perform the corresponding second sound effect processing according to that authority, so that the sound effect chip processes the audio data according to the specific condition of the intermediate audio data, avoiding delay caused by sound effect processing and improving the user experience.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is evident that the drawings described below show only some embodiments of the present application, and a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 illustrates an operational scenario between a display device and a control apparatus of some embodiments of the present application;
fig. 2 shows a hardware configuration block diagram of a display device 200 of some embodiments of the present application;
FIG. 3 illustrates a software configuration diagram in a display device according to some embodiments of the present application;
FIG. 4 is a schematic flow chart of a sound effect processing method in a display device according to some embodiments of the present application;
FIG. 5 is a flowchart illustrating another sound effect processing method in a display device according to some embodiments of the present application;
FIG. 6 illustrates a flow diagram of adjustment stage determination in a display device according to some embodiments of the present application;
FIG. 7 illustrates another flow diagram of adjustment stage determination in a display device according to some embodiments of the present application;
FIG. 8 is a flowchart illustrating a sound effect processing method for a karaoke application in a display device according to some embodiments of the present application;
FIG. 9 is a schematic diagram of the software configuration for a karaoke application in a display device according to some embodiments of the present application;
FIG. 10 is a flowchart illustrating yet another sound effect processing method in a display device according to some embodiments of the present application;
FIG. 11 is a flowchart illustrating yet another sound effect processing method in a display device according to some embodiments of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments are described below clearly and completely with reference to the drawings in those embodiments. It is evident that the described exemplary embodiments are only some, not all, of the embodiments of the present application.
All other embodiments obtained by a person of ordinary skill in the art without inventive effort, based on the exemplary embodiments shown in the present application, fall within the scope of the present application. Furthermore, while the disclosure is presented in terms of one or more exemplary embodiments, it should be understood that individual aspects of the disclosure may also constitute a complete technical solution on their own.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the above drawings are used to distinguish between similar objects, and not necessarily to describe a particular sequence or chronological order. Data so termed may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The display device provided in the embodiment of the application may have various implementation forms, for example, may be a television, a smart phone, a tablet computer, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), an electronic desktop (electronic table), a product with a display screen, and the like. Fig. 1 is a specific embodiment of a display device of the present application.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, and the display device 200 may be controlled wirelessly or by wire. The user may control the display device 200 through keys on the remote control, voice input, control panel input, and so on.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device may be controlled not through the smart device or control apparatus described above, but directly by the user's touch, gestures, or the like.
In some embodiments, the display device 200 may also be controlled in ways other than through the control apparatus 100 and the smart device 300. For example, the user's voice commands may be received directly through a voice-acquisition module configured inside the display device 200, or through a voice control apparatus configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
As shown in fig. 2, the display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first to nth interfaces for input/output.
The display device 200 also includes a system-on-chip and a sound effect chip, both of which can process audio data. In some embodiments, the system-on-chip can apply basic sound effect processing to audio data, while the sound effect chip can apply either basic or specific sound effect processing. Basic sound effect processing includes echo cancellation, sound enhancement, noise reduction, and the like; specific sound effect processing includes echo cancellation, sound enhancement, noise reduction, bass enhancement, bass sound effects, automatic gain, howling suppression, and the like.
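The division of labor described above can be sketched as follows. The effect-set groupings follow the paragraph above, while the helper function and its chip identifiers are hypothetical names introduced only for this illustration.

```python
# Illustrative effect sets following the description above; the chip
# identifiers and the chip_supports helper are assumptions, not device APIs.

BASIC_EFFECTS = {"echo_cancellation", "sound_enhancement", "noise_reduction"}
SPECIFIC_EFFECTS = BASIC_EFFECTS | {
    "bass_enhancement", "bass_sound_effect",
    "automatic_gain", "howling_suppression",
}

def chip_supports(chip: str, effect: str) -> bool:
    """The system-on-chip handles basic effects; the sound effect chip
    handles both basic and specific effects."""
    if chip == "system_on_chip":
        return effect in BASIC_EFFECTS
    if chip == "sound_effect_chip":
        return effect in SPECIFIC_EFFECTS
    raise ValueError(f"unknown chip: {chip}")
```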
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display; it receives image signals output from the controller and displays video content, image content, menu manipulation interfaces, and the user manipulation UI.
The display 260 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, another network communication protocol chip or near field communication protocol chip, and an infrared receiver. The display device 200 may send and receive control signals and data signals to and from the control apparatus 100 or the server 400 through the communicator 220.
A user interface, which may be used to receive control signals from the control device 100 (e.g., an infrared remote control, etc.).
The detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector such as a camera, which may be used to collect external environmental scenes, user attributes, or user interaction gestures, or alternatively, the detector 230 includes a sound collector such as a microphone, or the like, which is used to receive external sounds.
The external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The modem 210 receives broadcast television signals through wired or wireless reception and demodulates audio/video signals and data signals, such as EPG data, from among the plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first to nth interfaces for input/output, a communication bus (Bus), and the like.
The user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, as shown in fig. 3, the system is divided into four layers: from top to bottom, an application layer (abbreviated as "application layer"), an application framework layer (Application Framework, abbreviated as "framework layer"), a system runtime library and Android runtime layer (abbreviated as "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (API) and programming framework for applications. The application framework layer includes a number of predefined functions and acts as a processing center that decides how the applications in the application layer should act. Through the API interface, an application can access system resources and obtain system services during execution.
As shown in fig. 3, the application framework layer in the embodiment of the present application includes a manager (manager), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) is used to interact with all activities that are running in the system; a Location Manager (Location Manager) is used to provide system services or applications with access to system Location services; a Package Manager (Package Manager) for retrieving various information about an application Package currently installed on the device; a notification manager (Notification Manager) for controlling the display and clearing of notification messages; a Window Manager (Window Manager) is used to manage icons, windows, toolbars, wallpaper, and desktop components on the user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the individual applications as well as the usual navigation rollback functions, such as controlling the exit, opening, fallback, etc. of the applications. The window manager is used for managing all window programs, such as obtaining the size of the display screen, judging whether a status bar exists or not, locking the screen, intercepting the screen, controlling the change of the display window (for example, reducing the display window to display, dithering display, distorting display, etc.), etc.
In some embodiments, the system runtime layer provides support for the framework layer above it: when the framework layer is in use, the Android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions the framework layer requires.
In some embodiments, the kernel layer is the layer between hardware and software. As shown in fig. 3, the kernel layer contains at least one of the following drivers: audio driver, display driver, Bluetooth driver, camera driver, Wi-Fi driver, USB driver, HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor), power supply driver, and so on.
The functions of display devices are increasingly diversified, the functions they can provide to users increasingly rich, and the experience of these diversified functions keeps improving; with this diversification, the types and amount of sound effect processing required by each application also increase.
For example, in a karaoke application, the display device needs to perform sound effect processing on the received voice data through a chip so that the voice data can be matched with the accompaniment data and output; in a media asset playing application, the display device processes the audio data in the media assets (video, audio, and the like) through a chip to provide users with different audio experiences.
The chip may be a system-on-chip with a sound effect adjustment function in the display device, or a separate sound effect chip; that is, the chip capable of sound effect adjustment may be a system-on-chip, a sound effect chip, or both.
When the chips capable of sound effect processing in the display device include a main control chip and a sound effect chip, sound effect processing requires the cooperation of both, and each chip's resources are limited. If there is too much sound effect data to process, or an application has strict real-time requirements on sound effect processing (for example, a karaoke application requires low-latency processing of the voice data received by the chip so that it matches the accompaniment data provided by the application), poor coordination between the main control chip and the sound effect chip means sound cannot be output in time. Therefore, the sound effect data should be processed dynamically in combination with each application's delay requirements, realizing dynamic adjustment of the sound effects.
It should be appreciated that delay requirements differ between applications: some applications are delay-sensitive with respect to sound effect processing; some have low delay sensitivity, but improving the sound quality within the delay budget the application can accept improves the user experience; still others are insensitive to sound effect processing delay but have high requirements on sound quality; and so on.
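The sensitivity profiles above can be sketched as a simple policy choice. The boolean flags and the policy names are assumptions made purely for illustration; the patent itself does not name these policies.

```python
# Hypothetical policy selection for the three application profiles described
# above; flag and policy names are illustrative assumptions.

def processing_policy(delay_sensitive: bool, quality_critical: bool) -> str:
    """Pick a sound effect processing policy for an application profile."""
    if delay_sensitive:
        return "minimize_latency"       # e.g. karaoke: timely output comes first
    if quality_critical:
        return "maximize_quality"       # delay-insensitive, quality comes first
    return "quality_within_budget"      # improve quality inside the delay budget
```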
To alleviate the delay in a display device's chip-based processing of each application's sound effect data, fig. 4 shows a flow chart of a sound effect processing method in a display device according to some embodiments of the present application. The system-on-chip of the display device performs first sound effect processing on initial audio data to determine intermediate audio data; the acquisition frequency of the intermediate audio data is then determined, and after the system-on-chip transmits the intermediate audio data to the sound effect chip, an adjustment stage for the intermediate audio data can be determined based on the acquisition frequency and a preset audio delay condition corresponding to the application that supplied the initial audio data. Based on the adjustment stage, the function authority for performing second sound effect processing on the intermediate audio data is determined, and the sound effect chip is triggered to perform the corresponding second sound effect processing according to that authority, so that the sound effect chip processes the audio data according to the specific condition of the intermediate audio data, avoiding delay caused by sound effect processing and improving the user experience.
In order to facilitate further understanding of the technical solutions in the embodiments of the present application, the following details of each step of the method for processing sound effects in the display device are described with reference to some embodiments and the accompanying drawings. As shown in fig. 4, the sound effect processing method includes the following steps:
S110, receiving the initial audio data of an application in response to a sound effect processing instruction initiated by the application.
Each application's sound effect processing instruction is triggered during use, and the trigger timing and form are determined by the application's function design. For example, in a media asset playing application, the sound effect processing instruction may be initiated by clicking a media asset to play it; in a karaoke application, it may be initiated by clicking the song-recording control; and so on.
The initial audio data of each application is provided by that application. It should be noted that the initial audio data may be obtained directly from the application or through a buffer. For example, in a media asset playing application, the initial audio data is obtained from the corresponding media asset data provided by the application; in a karaoke application, the initial audio data may be voice data collected through a microphone on the display device or an external microphone, and in that case the initial audio data needs to be stored in an audio buffer.
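The two acquisition paths described above (directly from the application, or through an audio buffer for microphone input) might be sketched as follows; the buffer class, the application-kind strings, and the function names are hypothetical, and the deque-based buffering is an assumption made for the sketch.

```python
# Illustrative sketch of the two initial-audio acquisition paths; all names
# here are assumptions, not the device's actual interfaces.

from collections import deque

class AudioBuffer:
    """Minimal bounded buffer for microphone frames (karaoke path)."""
    def __init__(self, capacity: int) -> None:
        self._frames: deque = deque(maxlen=capacity)  # oldest frames dropped

    def push(self, frame: bytes) -> None:
        self._frames.append(frame)

    def pop(self):
        return self._frames.popleft() if self._frames else None

def initial_audio(app_kind: str, payload: bytes, buf: AudioBuffer):
    """Obtain initial audio data either directly or via the audio buffer."""
    if app_kind == "media_playback":
        return payload        # media asset data comes straight from the application
    if app_kind == "karaoke":
        buf.push(payload)     # microphone frames are staged in the buffer first
        return buf.pop()
    raise ValueError(f"unknown application kind: {app_kind}")
```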
S120, performing first sound effect processing on the initial audio data to determine intermediate audio data.
The first sound effect processing of the initial audio data is implemented by the system-on-chip in the display device and is determined mainly by the application's processing requirements on the initial audio data.
The intermediate audio data is determined through the first sound effect processing. At this point the intermediate audio data could already be output through a loudspeaker, earphone, or other player; however, for display devices with ever richer functions, second sound effect processing can also be performed on the intermediate audio data by the sound effect chip. Before the sound effect chip performs the second sound effect processing, the intermediate audio data is further evaluated through the following steps, so that it is output both in time and with good sound quality.
In some embodiments, delays and similar conditions may arise during the first sound effect processing of the initial audio data. Whether the first sound effect processing is performed on the initial audio data can be decided by judging the sound effect processing buffer resources of the system-on-chip: if they meet the application's sound effect processing requirement, the first sound effect processing is performed, and the initial audio data, once processed, becomes the intermediate audio data; if they do not, the system-on-chip does not perform the first sound effect processing, and the initial audio data itself is used as the intermediate audio data.
Fig. 5 is a flowchart of another sound effect processing method in a display device according to some embodiments of the present application. As shown in Fig. 5, before the system-on-chip performs the first sound effect processing on the initial audio data in step 120, the method further includes evaluating the sound effect processing buffer resources of the system-on-chip, specifically through the following steps:
S1201, determining whether the sound effect processing buffer resources of the system-on-chip meet the application's sound effect processing requirement.
For the system-on-chip, the buffer resources available for sound effect processing are limited. To improve how the chips in the display device cooperate on the sound effect processing of audio data, a judgment of the system-on-chip's audio processing capacity is added, which in turn mitigates problems such as sound effect processing delay.
If the sound effect processing buffer resources of the system-on-chip meet the application's requirement, step 120 is executed: the first sound effect processing is performed on the initial audio data to determine the intermediate audio data.
If the sound effect processing buffer resources of the system-on-chip do not meet the application's requirement, the system-on-chip does not perform the first sound effect processing, and S1202 is executed: the initial audio data is used as the intermediate audio data.
That is, the sound effect chip will then perform the second sound effect processing on intermediate audio data that is simply the initial audio data.
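The buffer-resource decision of S1201/S1202 can be sketched as follows. This is an illustrative sketch only: the patent does not specify how the resource check is implemented, so it is reduced here to a hypothetical free-bytes comparison, and all names are assumptions.

```python
def determine_intermediate_audio(free_buffer_bytes, required_bytes,
                                 initial_audio, first_effect):
    """If the system-on-chip's sound effect buffer resources meet the
    application's requirement, apply the first sound effect processing
    (S120); otherwise pass the initial audio through unchanged (S1202)."""
    if free_buffer_bytes >= required_bytes:
        return first_effect(initial_audio)  # processed data becomes the intermediate data
    return initial_audio                    # initial data used directly as intermediate data

# Example with a trivial gain "effect" standing in for real processing:
processed = determine_intermediate_audio(4096, 1024, [1, 2, 3],
                                         lambda a: [2 * x for x in a])
passthrough = determine_intermediate_audio(512, 1024, [1, 2, 3],
                                           lambda a: [2 * x for x in a])
```

Either way, a well-defined block of intermediate audio data reaches the sound effect chip; only whether it already carries the first-stage processing differs.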
In some embodiments, when the sound effect processing buffer resources of the system-on-chip meet the application's requirement and the system-on-chip performs the first sound effect processing on the initial audio data, the method may further include: determining the processing thread for the initial audio data by judging its priority or its acquisition order. During processing, each thread introduces a different degree of delay into the resulting intermediate audio data depending on how busy it is.
In some embodiments, the intermediate audio data obtained at different usage moments within the same application may carry different degrees of delay; likewise, the delay of the intermediate audio data may be the same or different across different applications.
For example, in the K song application, the intermediate audio data obtained after the system-on-chip performs the first sound effect processing may have a delay of 10 ms at a first moment, 30 ms at a second moment, 2 ms at a third moment, and 30 ms at a fourth moment. That is, for the same application, the delay of the intermediate audio data produced by the thread performing the first sound effect processing may differ or coincide across moments, and the sound effect chip's processing differs accordingly for intermediate audio data in different stages.
As shown in Fig. 4, the method further includes: S130, determining the acquisition frequency of the intermediate audio data.
Before the intermediate audio data is transmitted to the sound effect chip for processing, its acquisition frequency needs to be determined, which can be done in one of the following ways:
If the initial audio data is provided by the K song application, i.e., obtained from the buffer, the acquisition frequency of the intermediate audio data can be derived from the acquisition frequency of the initial audio data: it is determined based on the difference between the acquisition period corresponding to the initial data's acquisition frequency and the application's delay time, and it is greater than or equal to the acquisition frequency of the initial audio data.
In some embodiments, if the intermediate audio data obtained after the first sound effect processing carries a longer delay, then, because the total time the application allows for audio adjustment is limited, the acquisition period can be shortened by raising the acquisition frequency of the intermediate audio data, reserving more time for the sound effect chip to perform the second sound effect processing. If the delay is shorter, adequate or additional processing time can be reserved for the second sound effect processing by keeping the acquisition frequency unchanged or raising it.
If the initial audio data is provided by the media playing application, i.e., obtained directly from the application, the acquisition frequency of the intermediate audio data can be obtained directly.
If the intermediate audio data was not subjected to the first sound effect processing by the system-on-chip but was determined directly from the initial audio data, its acquisition frequency may simply be that of the initial audio data.
In some embodiments, other audio data storage modes are possible, and the way the acquisition frequency is determined differs between storage modes; this is not limited here.
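For the buffered (K song) case, one way to read the rule above — derive the intermediate acquisition period from the initial acquisition period minus the delay already spent, so the resulting frequency is never lower than the initial one — is sketched below. The exact formula is an assumption for illustration; the patent only states the relationship, not the computation.

```python
def intermediate_acquisition_freq(initial_freq_hz, first_stage_delay_s):
    """Shorten the acquisition period by the delay spent in first sound
    effect processing, reserving time for the sound effect chip's second
    processing. The result is always >= the initial frequency."""
    initial_period = 1.0 / initial_freq_hz
    period = max(initial_period - first_stage_delay_s, 1e-6)  # keep the period positive
    return 1.0 / period

# A 100 Hz initial stream (10 ms period) with 5 ms of first-stage delay
# is acquired at a 5 ms period, i.e. roughly 200 Hz.
```

With zero first-stage delay the rule degenerates to the initial frequency, matching the case where the intermediate data is the initial data.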
As shown in Fig. 4, after the system-on-chip transmits the intermediate audio data to the sound effect chip, the method further includes: S140, determining the adjustment stage of the intermediate audio data based on the acquisition frequency and the preset audio delay condition of the application corresponding to the initial audio data.
In some embodiments, the intermediate audio data, carrying its acquisition frequency, may be transferred through middleware and the kernel, so that it passes from the system-on-chip to the sound effect chip.
The preset audio delay condition of each application is determined by the performance of each application.
For example, in a K song application, if there is delay in matching the voice data with the accompaniment data, the mismatch between the two sets of data is easily perceived by the user and degrades the user experience.
If the delay the user can perceive is 40 ms, the maximum time in the corresponding preset audio delay condition must be less than 40 ms.
As another example, when the audio data is provided by the media asset corresponding to the application, delay during sound effect processing is less critical. For instance, if the media asset is a video, both the audio data and the image data come from that asset and their outputs can be synchronized through control, so delay introduced during audio processing does not put the audio and image data out of step. If the user has a high sound quality requirement, sound effect processing can be applied to improve the sound quality and the user experience. Such an application is weakly (or only slightly) sensitive to delay, so the time allowed by its preset audio delay condition is correspondingly longer than that of a delay-sensitive application.
If the delay the user can accept is 60 ms, the maximum time in the corresponding preset audio delay condition must be less than 60 ms.
That is, the preset audio delay conditions of different applications differ, so the fourth duration and the fifth duration within those conditions also differ. In each condition the fourth duration is less than the fifth duration, and the fifth duration is less than or equal to the longest time allowed by the condition.
Fig. 6 is a schematic flowchart of determining the adjustment stage in a display device according to some embodiments of the present application. As shown in Fig. 6, in step 140, determining the adjustment stage of the intermediate audio data based on the acquisition frequency and the preset audio delay condition of the application includes the following steps:
S1401, if the acquisition period corresponding to the acquisition frequency is less than the fourth duration in the preset audio delay condition, the adjustment stage is the first stage.
In the first stage, it can be determined that there is processing time to perform all of the sound effect processing.
For example, if the executable sound effect processing includes A, B, C, and D, then when the adjustment stage is determined to be the first stage, there is time to execute all of A, B, C, and D.
In some embodiments, the first stage also characterizes the delay requirement of the application corresponding to the intermediate audio data as being no greater than a first duration.
It should be noted that the first duration may or may not equal the fourth duration: the first duration expresses the delay requirement, while the fourth duration is the limiting value in the preset audio delay condition.
S1402, if the acquisition period corresponding to the acquisition frequency is greater than or equal to the fourth duration and less than the fifth duration in the preset audio delay condition, the adjustment stage is the second stage.
In the second stage, it can be determined that there is processing time to perform only part of the sound effect processing.
For example, if the executable sound effect processing includes A, B, C, and D, then when the adjustment stage is determined to be the second stage, there may be time to execute only A and C, or only B, and so on.
In some embodiments, the second stage also characterizes the delay requirement of the application as being no greater than a second duration, which is less than the first duration. That is, the time available for sound effect processing is shorter than in the first stage.
It should be noted that the second duration may or may not equal the fifth duration.
S1403, if the acquisition period corresponding to the acquisition frequency is greater than or equal to the fifth duration, the adjustment stage is the third stage.
In the third stage, it can be determined that there is no processing time to perform sound effect processing.
In some embodiments, the third stage also characterizes the delay requirement of the application as being no greater than a third duration, which is less than the second duration.
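Steps S1401–S1403 amount to comparing the acquisition period against the two thresholds of the application's preset audio delay condition. A minimal sketch follows (durations in milliseconds; the function name and units are assumptions):

```python
def adjustment_stage(acquisition_period_ms, fourth_ms, fifth_ms):
    """Map the acquisition period onto an adjustment stage (fourth < fifth).
    Stage 1: time for all second sound effect processing (S1401).
    Stage 2: time for part of it (S1402).
    Stage 3: no time for it (S1403)."""
    if acquisition_period_ms < fourth_ms:
        return 1
    if acquisition_period_ms < fifth_ms:
        return 2
    return 3

# With the 15 ms / 25 ms thresholds used in the text's K song example,
# a 10 ms period falls in stage 1, a 20 ms period in stage 2, a 30 ms period in stage 3.
```

Because the thresholds come from each application's own preset audio delay condition, two applications with the same acquisition period can still land in different stages.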
It should be noted that for different applications whose intermediate audio data share the same acquisition frequency, the corresponding adjustment stages differ, because each application's preset audio delay condition differs. For the same application, different acquisition frequencies also lead to different adjustment stages.
For example, suppose the intermediate audio data of the K song application is acquired at frequency K with fourth duration M and fifth duration N in its preset audio delay condition, while the intermediate audio data of a music application is also acquired at frequency K but with fourth duration O and fifth duration P. Although the two applications' acquisition frequencies are the same, their preset audio delay conditions differ, so their adjustment stages also differ.
As another example, with fourth duration M and fifth duration N: if the acquisition frequency of the K song application's intermediate audio data is X and the corresponding acquisition period is less than M, the adjustment stage of the intermediate audio data is the first stage, which has time to execute all of the sound effect processing. If the acquisition frequency is Y and the corresponding acquisition period is greater than or equal to M and less than N, the adjustment stage is the second stage, which has time to execute part of the sound effect processing. If the acquisition frequency is Z and the corresponding acquisition period is greater than or equal to N, the adjustment stage is the third stage, which has no time to execute sound effect processing.
As another example, suppose the delay a K song user can perceive is 40 ms, so the maximum time in the preset audio delay condition must be less than 40 ms, with a corresponding fourth duration of 15 ms and fifth duration of 25 ms. If the K song application's initial audio data carries a 10 ms delay after the first sound effect processing, then to preserve the user experience the sound effect chip has at most 30 ms for the second sound effect processing; in this case step 140 determines from the acquisition frequency and the preset audio delay condition that the adjustment stage is the first stage, and the acquisition frequency of the intermediate audio data need not be adjusted. If the initial audio data instead carries a 20 ms delay after the first sound effect processing, the sound effect chip has at most 20 ms for the second sound effect processing; in this case the acquisition frequency can be increased so that step 140 moves the adjustment stage of the intermediate audio data from the second stage to the first stage, increasing the degree of second sound effect processing applied to it.
In some embodiments, different functions of the same application may have different preset audio delay conditions. For example, in a music application, preset audio delay conditions corresponding to a music playing function and a recording function may be different.
In some embodiments, since no second sound effect processing is performed when the adjustment stage is the third stage, it can first be judged whether the adjustment stage is the third stage, and only afterwards is the functional authority for performing the second sound effect processing on the intermediate audio data divided based on the adjustment stage, reducing unnecessary work.
In some embodiments, if the adjustment stage is the third stage, this judgment amounts to deciding whether the sound effect chip is needed at all; that is, in the third stage the second sound effect processing does not need to be performed by the sound effect chip.
As shown in Fig. 4, the method further includes: S150, determining, based on the adjustment stage, the functional authority for performing the second sound effect processing on the intermediate audio data.
Different adjustment stages correspond to different functional authorities for the second sound effect processing, and for each adjustment stage the corresponding functions are enabled or disabled accordingly.
Fig. 7 is a schematic flowchart of determining the functional authority in a display device according to some embodiments of the present application. As shown in Fig. 7, in step 150, determining the functional authority for performing the second sound effect processing on the intermediate audio data based on the adjustment stage includes the following steps:
S1501, if the adjustment stage of the intermediate audio data is the first stage, the functional authority is determined as performing all sound effect processing functions on the intermediate audio data.
The first stage can characterize having processing time to perform all of the sound effect processing; it can also characterize the delay requirement of the application corresponding to the intermediate audio data as being no greater than the first duration, which can be understood as insensitivity to delay.
That is, the functional authority at this point may include all sound effect processing: for services with a low delay requirement but a high sound quality requirement (for example, video playing and music playing), all sound effect processing is executed, letting the sound effect chip play its full role and improving the user experience.
S1502, if the adjustment stage of the intermediate audio data is the second stage, the functional authority is determined as performing, on the intermediate audio data, only the sound effect processing functions whose function requirement weight for the application exceeds a preset threshold.
The second stage can characterize having processing time to perform part of the sound effect processing; it characterizes the delay requirement of the application as being no greater than the second duration, which is less than the first duration.
That is, the functional authority at this point may include part of the sound effect processing: for an application that can tolerate a delay of the second duration, or where adding the delay corresponding to the second stage does not affect the application's usability, executing part of the sound effect processing improves the user experience.
The functional authority for partial sound effect processing refers to the sound effect processing functions whose function requirement weight for the application, as applied to the intermediate audio data, exceeds the preset threshold.
For each application, the function requirements eligible for the sound effect chip's second sound effect processing can be determined from that application's function-weight mapping table; Table 1 below shows an example.
Table 1 Function-weight mapping table

Application name                 | Function requirement for second sound effect processing | Weight
K song application               | Bass sound effect requirement                           | 9
K song application               | Automatic gain sound effect requirement                 | 7
K song application               | Stereo sound effect requirement                         | 5
Media asset playing application  | Noise suppression sound effect requirement              | 9
Media asset playing application  | Howling suppression sound effect requirement            | 8
Media asset playing application  | Bass sound effect requirement                           | 7
Media asset playing application  | Heavy bass sound effect requirement                     | 5
For example, when the preset threshold is set to 8, the sound effect processing function within the K song application's functional authority is the bass sound effect function, and that within the media asset playing application's functional authority is the noise suppression sound effect function.
If the function requirement whose weight exceeds the preset threshold is the bass sound effect requirement, the sound effect processing function performs bass processing on the intermediate audio data, improving the fullness of the bass.
If it is the echo cancellation sound effect requirement, the sound effect processing function performs echo cancellation on the intermediate audio data, improving the overall sound effect experience.
If it is the noise suppression sound effect requirement, the sound effect processing function performs noise suppression on the intermediate audio data, improving the clarity of the sound.
The sound effects that the sound effect chip can process are not limited to those described in the above embodiments and may include others (for example, automatic enhancement, active noise reduction, and howling suppression); no limitation is made here.
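The stage-to-authority mapping of S1501–S1503, together with the weight filtering of Table 1, can be sketched as below. The table contents follow Table 1, while the function names and API shape are illustrative assumptions.

```python
# Function-weight mapping table (after Table 1 in the text).
FUNCTION_WEIGHTS = {
    "K song application": [("bass", 9), ("automatic gain", 7), ("stereo", 5)],
    "media asset playing application": [
        ("noise suppression", 9), ("howling suppression", 8),
        ("bass", 7), ("heavy bass", 5),
    ],
}

def second_effect_functions(app_name, stage, threshold=8):
    """Stage 1: all functions are within the functional authority;
    stage 2: only functions whose weight exceeds the preset threshold;
    stage 3: none (the second processing is skipped entirely)."""
    entries = FUNCTION_WEIGHTS[app_name]
    if stage == 1:
        return [name for name, _ in entries]
    if stage == 2:
        return [name for name, weight in entries if weight > threshold]
    return []
```

With the threshold of 8 from the text's example, the second stage grants the K song application only the bass function (weight 9) and the media asset playing application only noise suppression, since howling suppression's weight of 8 does not exceed the threshold.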
As shown in Fig. 7, the method further includes: S1503, if the adjustment stage of the intermediate audio data is the third stage, the functional authority is determined as performing no sound effect processing function on the intermediate audio data.
The third stage characterizes the delay requirement of the application as being no greater than the third duration, which is less than the second duration.
That is, the application corresponding to the intermediate audio data is sensitive to delay; by setting the functional authority to perform no sound effect processing on the intermediate audio data, delay loss is reduced and the intermediate audio data can be played more promptly.
S160, triggering the sound effect chip to perform the corresponding second sound effect processing on the intermediate audio data according to the functional authority.
Through the above process, selectively limiting the sound effect chip's processing ensures that the overall delay of the audio data meets the application's standard; meanwhile, on the basis of meeting the delay standard, the quality of the audio data is improved as much as possible, ensuring fast output of good-sounding audio and improving the user experience.
In a display device where the two chips (the system-on-chip and the sound effect chip) cooperate, the system-on-chip processes the initial audio data and then transmits the intermediate audio data together with its determined acquisition frequency to the sound effect chip. Based on the transmitted acquisition frequency (i.e., the timing information) and each application's preset audio delay condition, the corresponding sound effect processing is applied to the intermediate audio data in stages according to the adjustment stage, so that both the sound effect and the timing are guaranteed when the audio data is output.
Fig. 8 is a flowchart of a sound effect processing method for the K song application in a display device according to some embodiments of the present application. As shown in Fig. 8, the sound effect processing method includes the following steps:
S310, in response to a user's sound effect processing instruction that starts the K song application, receiving the application's initial audio data.
The initial audio data here is human voice data and is stored in the audio buffer.
S320, reading the initial audio data from the audio buffer and performing first sound effect processing on it to determine intermediate audio data.
The K song application needs to read the initial audio data from the audio buffer because it relies on voice data; applications such as music playing do not need to read from the audio buffer, since their initial audio data can be obtained directly from the application.
S330, determining the acquisition frequency of the intermediate audio data.
The acquisition frequency is the frequency at which the initial audio data is read from the audio buffer; each read takes all of the data currently in the audio buffer.
S340, the intermediate audio data, carrying its acquisition frequency, can be transmitted through middleware and the kernel, so that it passes from the system-on-chip to the sound effect chip.
After the system-on-chip transmits the intermediate audio data to the sound effect chip: S350, determining the adjustment stage of the intermediate audio data based on the acquisition frequency and the preset audio delay condition of the application corresponding to the initial audio data.
S360, determining, based on the adjustment stage, the functional authority for performing the second sound effect processing on the intermediate audio data.
After the sound effect chip receives the intermediate audio data, it performs sound effect processing according to the acquisition frequency carried by the intermediate audio data and the preset audio delay condition of the application corresponding to the initial audio data. If the corresponding adjustment stage is the first stage, little delay accumulated while the intermediate audio data was acquired, and the sound effect chip can perform the full second sound effect processing after receiving it, achieving the best effect. If the corresponding adjustment stage is the second stage, part of the time budget was consumed during acquisition, so to keep the overall delay within the standard the sound effect chip executes only part of the functions of the second sound effect processing, improving part of the sound effect. If the corresponding adjustment stage is the third stage, the time budget was consumed during acquisition, so to keep the overall delay within the standard the second sound effect processing is not performed by the sound effect chip.
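The staged behavior of the sound effect chip in this K song flow can be combined into one end-to-end sketch. The effect functions below are trivial placeholders for real DSP, and the 15 ms / 25 ms thresholds are the example values from the text; everything else is assumed for illustration.

```python
def bass(samples):
    """Placeholder bass effect: simple gain."""
    return [1.2 * x for x in samples]

def auto_gain(samples):
    """Placeholder automatic-gain effect: clamp to [-1, 1]."""
    return [min(max(x, -1.0), 1.0) for x in samples]

def karaoke_second_processing(samples, acquisition_period_ms,
                              fourth_ms=15.0, fifth_ms=25.0):
    """Stage 1: full second sound effect processing; stage 2: only the
    high-weight functions; stage 3: pass through unchanged so the overall
    delay still meets the application's standard."""
    if acquisition_period_ms < fourth_ms:
        effects = [bass, auto_gain]   # first stage: all functions
    elif acquisition_period_ms < fifth_ms:
        effects = [bass]              # second stage: high-weight functions only
    else:
        effects = []                  # third stage: no second processing
    for effect in effects:
        samples = effect(samples)
    return samples
```

The pass-through branch is what keeps the overall delay within the standard when acquisition has already consumed the time budget.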
Fig. 9 is a schematic diagram of the software configuration in a display device for the K song application in some embodiments of the present application. As shown in Fig. 9, the display device processes voice data transmitted through an external microphone 410: the voice data undergoes the first sound effect processing by the system-on-chip corresponding to the K song application and is then transmitted to the sound effect chip through middleware and the kernel; the sound effect chip determines the functional authority to apply to the voice data by combining the voice data's acquisition frequency with the preset audio delay condition corresponding to the K song application, and plays the voice data through devices such as a power amplifier and a speaker based on that functional authority.
The embodiments of the present application provide a display device and a sound effect processing method. The display device includes a system-on-chip, a sound effect chip, and a controller. The system-on-chip performs the first sound effect processing on the initial audio data to determine the intermediate audio data and further determines its acquisition frequency. After the system-on-chip transmits the intermediate audio data to the sound effect chip, the adjustment stage of the intermediate audio data can be determined based on the acquisition frequency and the preset audio delay condition of the application corresponding to the initial audio data; the functional authority for performing the second sound effect processing can then be determined based on the adjustment stage, and the sound effect chip is triggered to perform the corresponding second sound effect processing according to that authority. In this way the sound effect chip processes the audio data according to the specific condition of the intermediate audio data, avoiding delay caused by sound effect processing and improving the user experience.
In some embodiments, the display device further comprises a speaker, and the sound effect chip is communicatively connected to the speaker through a power amplifier. The controller determines target audio data after the second sound effect processing is performed on the intermediate audio data based on the function authority by the sound effect chip, and outputs the target audio data through the speaker.
Fig. 10 is a flowchart of another sound effect processing method in a display device according to some embodiments of the present application. As shown in Fig. 10, after step 160, the method further includes the following step:
S170, outputting the target audio data through a speaker.
The target audio data is obtained by the sound effect chip performing the second sound effect processing on the intermediate audio data based on the functional authority.
In some embodiments, the sound effect chip in the display device is communicatively connected to the speaker, and a power amplifier module may also be disposed between the sound effect chip and the speaker to convert the digital signal output by the sound effect chip into an analog signal.
The embodiments of the present application provide a display device and a sound effect processing method. The display device includes a system-on-chip, a sound effect chip, a controller, and a speaker. The system-on-chip performs the first sound effect processing on the initial audio data to determine the intermediate audio data and further determines its acquisition frequency. After the system-on-chip transmits the intermediate audio data to the sound effect chip, the adjustment stage of the intermediate audio data can be determined based on the acquisition frequency and the preset audio delay condition of the application corresponding to the initial audio data; the functional authority for performing the second sound effect processing can then be determined based on the adjustment stage, the sound effect chip is triggered to perform the corresponding second sound effect processing according to that authority, and the target audio data obtained through the second sound effect processing is output through the speaker. In this way the sound effect chip processes the audio data according to the specific condition of the intermediate audio data, avoiding delay caused by sound effect processing and improving the user experience.
Fig. 11 is a flowchart of another sound effect processing method in a display device according to some embodiments of the present application. As shown in Fig. 11, the sound effect processing method includes the following steps:
The following steps are performed by the system-on-chip:

S510: receiving initial audio data of an application in response to a sound effect processing instruction initiated by the application.

S520: performing first sound effect processing on the initial audio data to determine intermediate audio data.

The following steps are performed by the sound effect chip:

S530: receiving the intermediate audio data and determining the acquisition frequency of the intermediate audio data.

S540: determining an adjustment stage of the intermediate audio data based on the acquisition frequency and a preset audio delay condition corresponding to the application of the initial audio data.

S550: determining, based on the adjustment stage, the functional authority for performing second sound effect processing on the intermediate audio data, and performing the corresponding second sound effect processing on the intermediate audio data according to the functional authority.
The implementation principle and technical effects of each step are similar to those of the corresponding steps in Fig. 4 and are not repeated here.
The embodiments of the present application provide a display device and a sound effect processing method. The display device includes a system-on-chip and a sound effect chip. The system-on-chip performs first sound effect processing on initial audio data to determine intermediate audio data. After the system-on-chip transmits the intermediate audio data to the sound effect chip, the sound effect chip determines the acquisition frequency of the intermediate audio data and, based on that frequency and a preset audio delay condition corresponding to the application of the initial audio data, determines the adjustment stage of the intermediate audio data. Based on the adjustment stage, the sound effect chip determines the functional authority for performing second sound effect processing and performs the corresponding second sound effect processing on the intermediate audio data according to that authority. The sound effect chip thus processes the audio data according to the specific conditions of the intermediate audio data, avoiding delay caused by sound effect processing and improving the user experience.
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:

a system-on-chip configured to perform first sound effect processing on initial audio data to determine intermediate audio data;

a sound effect chip configured to perform second sound effect processing on the intermediate audio data; and

a controller in communication with the system-on-chip and the sound effect chip, the controller being configured to:

determine the acquisition frequency of the intermediate audio data;

after the system-on-chip transmits the intermediate audio data to the sound effect chip, determine an adjustment stage of the intermediate audio data based on the acquisition frequency and a preset audio delay condition corresponding to the application of the initial audio data; and

determine, based on the adjustment stage, the functional authority for performing the second sound effect processing on the intermediate audio data, and trigger the sound effect chip to perform the corresponding second sound effect processing on the intermediate audio data according to the functional authority.
2. The display device according to claim 1, wherein the adjustment stage comprises a first stage, a second stage, and a third stage, and in performing the step of determining, based on the adjustment stage, the functional authority for performing the second sound effect processing on the intermediate audio data, the controller is further configured to:

if the adjustment stage of the intermediate audio data is the first stage, determine that the functional authority is to perform all sound effect processing functions on the intermediate audio data, wherein the first stage represents a stage in which the delay requirement of the application corresponding to the intermediate audio data is not greater than a first duration;

if the adjustment stage of the intermediate audio data is the second stage, determine that the functional authority is to perform, on the intermediate audio data, the sound effect processing functions of the application whose functional requirement weight is greater than a preset threshold, wherein the second stage represents a stage in which the delay requirement of the application is not greater than a second duration, the second duration being smaller than the first duration; and

if the adjustment stage of the intermediate audio data is the third stage, determine that the functional authority is to perform no sound effect processing function on the intermediate audio data, wherein the third stage represents a stage in which the delay requirement of the application is not greater than a third duration, the third duration being smaller than the second duration.
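For illustration only, the stage-to-authority mapping recited above can be sketched as follows. The effect inventory, the integer stage encoding, and the idea of passing in the application's above-threshold effects pre-filtered are all assumptions, not limitations taken from the claims.

```python
# Hypothetical sketch of the claim-2 mapping: adjustment stage -> functional
# authority (the set of effects the sound effect chip may apply).

ALL_EFFECTS = ["bass", "noise_suppression", "surround", "equalizer"]  # assumed inventory

def functional_authority(stage, app_weighted_effects):
    """Return the effects permitted for second sound effect processing.

    stage 1: loosest delay requirement -> all effects allowed.
    stage 2: only the application's effects whose requirement weight exceeds
             the preset threshold (here already filtered by the caller).
    stage 3: strictest delay requirement -> no second-stage effects.
    """
    if stage == 1:
        return list(ALL_EFFECTS)
    if stage == 2:
        return [e for e in app_weighted_effects if e in ALL_EFFECTS]
    return []  # stage 3
```

Note how the permitted set shrinks monotonically as the delay requirement tightens from the first stage to the third.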
3. The display device according to claim 2, wherein, in the step of determining that the functional authority is to perform, on the intermediate audio data, the sound effect processing functions whose weight is greater than the preset threshold if the adjustment stage of the intermediate audio data is the second stage, the controller is configured to:

if the functional requirement of the application whose weight is greater than the preset threshold is a bass sound effect requirement, determine that the sound effect processing function is to perform bass sound effect processing on the intermediate audio data; and

if the functional requirement of the application whose weight is greater than the preset threshold is a noise suppression sound effect requirement, determine that the sound effect processing function is to perform noise suppression sound effect processing on the intermediate audio data.
4. The display device according to claim 1, wherein, in performing the step of determining the adjustment stage of the intermediate audio data based on the acquisition frequency and the preset audio delay condition of the application, the controller is further configured to:

if the acquisition period corresponding to the acquisition frequency is smaller than a fourth duration in the preset audio delay condition, determine that the adjustment stage is the first stage;

if the acquisition period corresponding to the acquisition frequency is greater than or equal to the fourth duration and smaller than a fifth duration in the preset audio delay condition, determine that the adjustment stage is the second stage; and

if the acquisition period corresponding to the acquisition frequency is greater than or equal to the fifth duration, determine that the adjustment stage is the third stage.
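The threshold test above is a simple two-cutoff comparison on the acquisition period. In the sketch below the fourth and fifth durations are illustrative millisecond values chosen for the example; the claims do not specify concrete numbers.

```python
# Hypothetical values for the fourth/fifth durations in the preset audio
# delay condition (milliseconds) -- assumptions, not claimed values.
FOURTH_DURATION_MS = 20
FIFTH_DURATION_MS = 40

def adjustment_stage(acquisition_period_ms):
    """Map the acquisition period (the reciprocal of the acquisition
    frequency) to the adjustment stage, per the claim-4 comparisons."""
    if acquisition_period_ms < FOURTH_DURATION_MS:
        return 1  # first stage
    if acquisition_period_ms < FIFTH_DURATION_MS:
        return 2  # second stage: fourth <= period < fifth
    return 3      # third stage: period >= fifth
```

The boundary handling follows the claim text: a period exactly equal to the fourth duration already falls into the second stage, and one equal to the fifth duration falls into the third.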
5. The display device according to claim 1, wherein the determination of the acquisition frequency of the intermediate audio data comprises one of the following:

acquiring the acquisition frequency of the intermediate audio data; or

determining the acquisition frequency of the intermediate audio data based on the difference between the acquisition period corresponding to the acquisition frequency of the initial audio data and the delay time of the application, wherein the acquisition frequency of the intermediate audio data is greater than or equal to the acquisition frequency of the initial audio data.
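The second option above can be read as follows: subtracting the application's delay time from the initial acquisition period yields a shorter intermediate period, and the reciprocal of a shorter period is a higher frequency, which is why the intermediate frequency is greater than or equal to the initial one. A worked sketch, with units (hertz and seconds) assumed for illustration:

```python
# Illustrative reading of claim 5's second option; units are assumed.

def intermediate_acquisition_frequency(initial_freq_hz, app_delay_s):
    """Frequency from the difference between the initial acquisition
    period and the application delay time."""
    initial_period_s = 1.0 / initial_freq_hz
    intermediate_period_s = initial_period_s - app_delay_s
    if intermediate_period_s <= 0:
        raise ValueError("delay must be smaller than the acquisition period")
    return 1.0 / intermediate_period_s

# 50 Hz initial frequency (20 ms period) with a 5 ms application delay
# gives a 15 ms intermediate period, i.e. a frequency above 50 Hz.
f = intermediate_acquisition_frequency(50.0, 0.005)
```

With zero delay the two frequencies coincide, which matches the "greater than or equal to" bound in the claim.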
6. The display device according to claim 1, wherein the controller is further configured to:

use the initial audio data as the intermediate audio data if the sound effect processing cache resources of the system-on-chip do not meet the sound effect processing requirement of the application.
7. The display device according to claim 1, further comprising a loudspeaker, wherein the sound effect chip is communicatively connected to the loudspeaker through a power amplifier, and the controller is further configured to:

output target audio data through the loudspeaker, wherein the target audio data is obtained by the sound effect chip performing the second sound effect processing on the intermediate audio data based on the functional authority.
8. A display device, characterized by comprising:

a system-on-chip configured to perform first sound effect processing on initial audio data to determine intermediate audio data; and

a sound effect chip in communication with the system-on-chip, the sound effect chip being configured to:

receive the intermediate audio data and determine the acquisition frequency of the intermediate audio data;

determine an adjustment stage of the intermediate audio data based on the acquisition frequency and a preset audio delay condition corresponding to the application of the initial audio data; and

determine, based on the adjustment stage, the functional authority for performing second sound effect processing on the intermediate audio data, and perform the corresponding second sound effect processing on the intermediate audio data according to the functional authority.
9. A sound effect processing method, characterized by comprising:

determining the acquisition frequency of intermediate audio data, wherein the intermediate audio data is determined by a system-on-chip performing first sound effect processing on initial audio data;

after the system-on-chip transmits the intermediate audio data to a sound effect chip, determining an adjustment stage of the intermediate audio data based on the acquisition frequency and a preset audio delay condition corresponding to the application of the initial audio data; and

determining, based on the adjustment stage, the functional authority for performing second sound effect processing on the intermediate audio data, and triggering the sound effect chip to perform the corresponding second sound effect processing on the intermediate audio data according to the functional authority.
10. The sound effect processing method according to claim 9, wherein the adjustment stage comprises a first stage, a second stage, and a third stage, and determining, based on the adjustment stage, the functional authority for performing the second sound effect processing on the intermediate audio data comprises:

if the adjustment stage of the intermediate audio data is the first stage, determining that the functional authority is to perform all sound effect processing functions on the intermediate audio data, wherein the first stage represents a stage in which the intermediate audio data is insensitive to delay and the delay requirement of the application is a first duration;

if the adjustment stage of the intermediate audio data is the second stage, determining that the functional authority is to perform, on the intermediate audio data, the sound effect processing functions of the application whose functional requirement weight is greater than a preset threshold, wherein the second stage represents a stage in which the delay requirement of the application is a second duration, the second duration being smaller than the first duration; and

if the adjustment stage of the intermediate audio data is the third stage, determining that the functional authority is to perform no sound effect processing function on the intermediate audio data, wherein the third stage represents that the intermediate audio data is sensitive to delay.
CN202211508371.1A 2022-11-29 2022-11-29 Display device and sound effect processing method Pending CN117294880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211508371.1A CN117294880A (en) 2022-11-29 2022-11-29 Display device and sound effect processing method


Publications (1)

Publication Number Publication Date
CN117294880A true CN117294880A (en) 2023-12-26

Family

ID=89237780



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination