CN116939262A - Display device and sound effect setting method of audio device - Google Patents

Display device and sound effect setting method of audio device

Info

Publication number
CN116939262A
CN116939262A (application CN202210369229.7A)
Authority
CN
China
Prior art keywords
audio
sound effect
parameters
file
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210369229.7A
Other languages
Chinese (zh)
Inventor
李现旗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202210369229.7A priority Critical patent/CN116939262A/en
Priority to PCT/CN2023/084607 priority patent/WO2023193643A1/en
Publication of CN116939262A publication Critical patent/CN116939262A/en
Pending legal-status Critical Current


Classifications

    All classifications fall under H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION > H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD] > H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof:
    • H04N 21/43072: Synchronising the rendering of multiple content streams or additional data on devices, of multiple content streams on the same device
    • H04N 21/43635: HDMI (adapting the video or multiplex stream to a specific wired local network protocol)
    • H04N 21/439: Processing of audio elementary streams
    • H04N 21/4398: Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N 21/4781: Games (supplemental services of end-user applications)

Abstract

The application provides a display device and a method for setting sound effect parameters. The display device stores, in advance, sound effect files corresponding to each audio device, so that when sound effect processing is performed on an audio stream, the sound effect parameters corresponding to the audio device that outputs the stream are used; the processed audio stream therefore matches the corresponding audio device, and the playback quality of the audio stream is ensured. A cloud server stores the sound effect parameters that each audio peripheral uses on the display device, so that when a user needs to adjust an audio peripheral's sound effect parameters, the modification can be requested at the cloud server and synchronized to the display device, allowing the parameters to be adjusted to the user's needs. Likewise, when the sound effect parameters of an audio peripheral are updated, the updated parameters in the cloud server are synchronized to the display device, ensuring that the parameters in use accurately match the audio peripheral and improving the playback quality of the audio data.

Description

Display device and sound effect setting method of audio device
Technical Field
The application relates to the technical field of intelligent display devices, and in particular to a display device and a sound effect setting method for an audio device.
Background
A display device is a terminal device capable of outputting a specific display picture, such as a smart television, a mobile terminal, a smart advertising screen, or a projector. Taking the smart television as an example: built on Internet application technology, it has an open operating system and chip and an open application platform, can realize bidirectional human-machine interaction, and integrates video, entertainment, data, and other functions, thereby meeting users' diversified and personalized needs.
The display device plays an audio stream through an audio device, for example through a built-in audio device such as a speaker, or through an external audio device such as a Bluetooth speaker. The display device improves the playback quality of the audio stream by applying sound effect processing to it. However, the sound effect parameters used in this processing match only the built-in audio device, not an external one. When the processed audio stream is output by an external audio device, its playback quality therefore cannot be guaranteed, the playback capabilities of the external device are not exploited, and the user's listening experience when playing audio through an external audio device suffers.
Disclosure of Invention
The application provides a display device and a sound effect setting method for an audio device, which use the corresponding sound effect parameters when processing audio streams output to the display device's built-in audio device and to external audio devices respectively, so that the processed audio stream matches the corresponding audio device and its playback quality is ensured.
In a first aspect, the present application provides a display apparatus comprising:
a display configured to display a user interface;
the storage is configured to store sound effect files corresponding to the audio devices, wherein the audio devices comprise built-in audio devices and external audio devices of the display device, each sound effect file corresponds to one type of audio device, and the sound effect files comprise sound effect parameters matched with the corresponding type of audio device;
a controller configured to:
acquiring an audio stream;
identifying a target audio device currently in use;
acquiring a target sound effect file corresponding to the target audio equipment;
and performing sound effect processing on the audio stream by using the sound effect parameters in the target sound effect file to obtain a processed audio stream.
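As an illustrative sketch only (not the patent's actual implementation), the controller logic above, acquiring a stream, identifying the audio device in use, fetching its sound effect file, and processing with that file's parameters, could be modeled as follows in Python; all names and parameter values here are hypothetical:

```python
# Hypothetical sketch: one sound effect file per class of audio device,
# each holding parameters matched to that device.
SOUND_EFFECT_FILES = {
    "builtin_speaker": {"eq_gains_db": [2, 0, -1, 1, 3], "loudness": True},
    "bluetooth":       {"eq_gains_db": [0, 1, 0, 0, 1],  "loudness": False},
    "usb":             {"eq_gains_db": [1, 1, 0, -2, 0], "loudness": False},
}

def get_target_sound_effect_file(target_device: str) -> dict:
    """Acquire the sound effect file corresponding to the target audio device."""
    return SOUND_EFFECT_FILES[target_device]

def apply_sound_effect(audio_stream: list, params: dict) -> list:
    """Stand-in for real DSP: scale samples by the first EQ band's gain."""
    gain = 10 ** (params["eq_gains_db"][0] / 20)
    return [sample * gain for sample in audio_stream]

# Controller steps: acquire stream, identify target device, fetch file, process.
stream = [0.1, -0.2, 0.3]                       # acquired audio stream
target = "bluetooth"                            # device identified as in use
params = get_target_sound_effect_file(target)   # target sound effect file
processed = apply_sound_effect(stream, params)  # processed audio stream
```

Because the bluetooth file's first EQ gain is 0 dB, the toy DSP leaves the samples unchanged; a real pipeline would apply the full parameter set (equalization, loudness, surround, and so on) in the audio driver or middleware.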
In some embodiments of the present application, the audio devices are classified according to the source of the audio stream they output, or according to device type, and the controller, in obtaining the target sound effect file corresponding to the target audio device, is configured to:
identifying a target classification corresponding to the target audio device;
and acquiring the target sound effect file corresponding to the target classification.
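A minimal sketch of this classification-based lookup, with entirely hypothetical device names, classes, and file names:

```python
# Hypothetical sketch: audio devices grouped into classifications
# (here by device type), with one sound effect file per classification.
DEVICE_CLASSIFICATION = {
    "bluetooth_headset":  "bluetooth",
    "bluetooth_soundbar": "bluetooth",
    "usb_speaker":        "usb",
}
CLASS_TO_FILE = {
    "bluetooth": "sound_effect_bluetooth.cfg",
    "usb":       "sound_effect_usb.cfg",
}

def get_target_file_by_class(target_device: str) -> str:
    target_class = DEVICE_CLASSIFICATION[target_device]  # identify target classification
    return CLASS_TO_FILE[target_class]                   # file for that classification
```

Grouping devices this way means, for example, that two Bluetooth peripherals share one tuned parameter set instead of each needing its own file.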
In some embodiments of the present application, each sound effect file corresponds to a class of scenes, a scene comprising audio content and/or a use environment, and the controller, in obtaining the target sound effect file corresponding to the target audio device, is configured to:
acquiring all sound effect files corresponding to the target audio equipment;
identifying a target scene corresponding to the audio stream;
and acquiring the target sound effect file corresponding to the target scene from all the sound effect files.
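The scene-aware selection could be sketched as below; the scene names and the trivial recognition heuristic are assumptions for illustration:

```python
# Hypothetical sketch: each device's sound effect files are keyed by scene,
# where a scene covers audio content and/or use environment.
FILES_BY_DEVICE = {
    "bluetooth": {
        "movie": {"surround": True,  "bass_boost_db": 4},
        "music": {"surround": False, "bass_boost_db": 2},
        "game":  {"surround": True,  "bass_boost_db": 0},
    },
}

def identify_target_scene(content_tag: str) -> str:
    """Toy stand-in for recognizing the scene of the current audio stream."""
    return content_tag if content_tag in ("movie", "music", "game") else "music"

def get_target_file_by_scene(target_device: str, content_tag: str) -> dict:
    all_files = FILES_BY_DEVICE[target_device]    # all files for the target device
    scene = identify_target_scene(content_tag)    # target scene of the stream
    return all_files[scene]                       # file for that scene
```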
In some embodiments of the application, the controller is further configured to:
receiving a device switching instruction input by a user, wherein the device switching instruction indicates an audio device to be switched;
acquiring a sound effect file of the audio device to be switched;
and performing sound effect processing on the audio stream by using the sound effect parameters in the sound effect file of the audio equipment to be switched to obtain the processed audio stream.
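A sketch of handling the device-switch instruction above (the class and field names are hypothetical): the controller swaps in the sound effect file of the device being switched to, so subsequent processing uses its parameters:

```python
# Hypothetical sketch of switching the active sound effect file on a
# user-issued device switch instruction.
class AudioPipeline:
    def __init__(self, sound_effect_files: dict, device: str):
        self.files = sound_effect_files
        self.device = device                 # audio device currently in use

    def on_switch_instruction(self, device_to_switch: str) -> dict:
        """Fetch the sound effect file of the device to be switched to and
        make its parameters the ones used for subsequent processing."""
        self.device = device_to_switch
        return self.files[device_to_switch]

pipeline = AudioPipeline(
    {"builtin": {"gain_db": 0}, "bluetooth": {"gain_db": -3}},
    device="builtin",
)
new_params = pipeline.on_switch_instruction("bluetooth")
```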
In some embodiments of the application, the display apparatus further comprises a communicator configured to establish a communication connection with a cloud server, and the controller is further configured to:
receiving a sound effect parameter adjustment instruction input by a user, wherein the sound effect parameter adjustment instruction indicates an adjusted sound effect parameter in the target sound effect file;
responding to the sound effect parameter adjustment instruction, sending a sound effect parameter adjustment request to the cloud server, wherein the sound effect parameter adjustment request comprises the adjusted sound effect parameters, and the cloud server stores cloud sound effect parameters corresponding to the sound effect files, and the cloud sound effect parameters are bound with the display equipment or the user account;
receiving the adjusted cloud sound effect parameters returned by the cloud server, wherein the cloud sound effect parameters are adjusted based on the adjusted sound effect parameters;
and replacing the sound effect parameters in the target sound effect file by using the adjusted cloud sound effect parameters, and performing sound effect processing on the audio stream by using the adjusted cloud sound effect parameters.
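The adjust-then-synchronize round trip might look like the following sketch, where the cloud server is faked in-process and every name is an assumption; a real device would issue network requests instead:

```python
# Hypothetical sketch of cloud-backed sound effect parameter adjustment.
class FakeCloudServer:
    def __init__(self):
        # Cloud sound effect parameters, bound to a device (or user account).
        self.params = {("device-001", "bluetooth"): {"treble_db": 0}}

    def adjust(self, key: tuple, adjusted: dict) -> dict:
        self.params[key].update(adjusted)   # apply requested adjustment in cloud
        return dict(self.params[key])       # return adjusted cloud parameters

def handle_adjust_instruction(cloud, local_files, device_id, device, adjusted):
    # 1. Send a sound effect parameter adjustment request to the cloud server.
    cloud_params = cloud.adjust((device_id, device), adjusted)
    # 2. Replace the local file's parameters with the adjusted cloud parameters.
    local_files[device] = cloud_params
    return cloud_params

cloud = FakeCloudServer()
local_files = {"bluetooth": {"treble_db": 0}}
result = handle_adjust_instruction(cloud, local_files, "device-001",
                                   "bluetooth", {"treble_db": 2})
```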
In some embodiments of the application, the display apparatus further comprises a communicator configured to establish a communication connection with a cloud server, and the controller is further configured to:
sending a first update query request to the cloud server at a designated node, wherein the first update query request comprises device parameters of the display device, and the cloud server stores the latest sound effect parameters corresponding to each audio device in the display device;
receiving a query result returned by the cloud server, wherein the query result is either no-update or update; when the query result is update, the query result further comprises a storage address of the latest sound effect parameters that differ from the current sound effect parameters in the display device;
when the query result is update, acquiring the latest sound effect parameters according to the storage address;
and replacing the sound effect parameters in the corresponding sound effect file by using the latest sound effect parameters.
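The update-query handshake (request, compare, fetch from a storage address, replace locally) can be sketched as follows; the version strings, URL layout, and function names are all hypothetical:

```python
# Hypothetical sketch of synchronizing the latest sound effect parameters.
LATEST_ON_CLOUD = {"bluetooth": ("v2", {"eq_gains_db": [1, 2]})}

def cloud_query(current_versions: dict) -> dict:
    """Cloud side: report 'update' plus storage addresses for stale devices."""
    changed = {dev: f"https://cloud.example/fx/{dev}/{ver}"
               for dev, (ver, _) in LATEST_ON_CLOUD.items()
               if current_versions.get(dev) != ver}
    if changed:
        return {"result": "update", "addresses": changed}
    return {"result": "no_update"}

def fetch(address: str) -> dict:
    """Stand-in for downloading parameters from the storage address."""
    device = address.split("/")[-2]
    return LATEST_ON_CLOUD[device][1]

def sync_sound_effects(local_files: dict, versions: dict) -> str:
    reply = cloud_query(versions)                 # first update query request
    if reply["result"] == "update":
        for device, addr in reply["addresses"].items():
            local_files[device] = fetch(addr)     # replace with latest params
            versions[device] = LATEST_ON_CLOUD[device][0]
    return reply["result"]

local_files = {"bluetooth": {"eq_gains_db": [0, 0]}}
versions = {"bluetooth": "v1"}
status = sync_sound_effects(local_files, versions)
```

Reporting only a storage address in the query result keeps the handshake small; the device downloads the full parameter file only when something actually changed.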
In some embodiments of the present application, the display apparatus further comprises a communicator configured to establish a communication connection with a cloud server, the display device is bound to a user account, and the controller is further configured to:
after logging in to the user account, sending a second update query request to the cloud server, wherein the second update query request comprises the user account and device parameters, and the cloud server stores the latest sound effect parameters corresponding to each audio device under the user account in the display device;
receiving a query result returned by the cloud server, wherein the query result is either no-update or update; when the query result is update, the query result further comprises a storage address of the latest sound effect parameters that differ from the current sound effect parameters in the display device;
when the query result is update, acquiring the latest sound effect parameters according to the storage address;
and replacing the sound effect parameters in the corresponding sound effect file by using the latest sound effect parameters.
In a second aspect, the present application provides a sound effect setting method for an audio device, applied to a display device, wherein the display device stores sound effect files corresponding to each audio device, the audio devices include a built-in audio device and external audio devices of the display device, each sound effect file corresponds to one type of audio device, and the sound effect files include sound effect parameters matched with the corresponding type of audio device, the method including:
acquiring an audio stream;
identifying a target audio device currently in use;
acquiring a target sound effect file corresponding to the target audio equipment;
and performing sound effect processing on the audio stream by using the sound effect parameters in the target sound effect file to obtain a processed audio stream.
In some embodiments of the application, the method further comprises:
receiving a sound effect parameter adjustment instruction input by a user, wherein the sound effect parameter adjustment instruction indicates an adjusted sound effect parameter in the target sound effect file;
responding to the sound effect parameter adjustment instruction by sending a sound effect parameter adjustment request to a cloud server, wherein the sound effect parameter adjustment request comprises the adjusted sound effect parameters, the cloud server stores cloud sound effect parameters corresponding to each sound effect file, and the cloud sound effect parameters are bound with the display device or the user account;
receiving the adjusted cloud sound effect parameters returned by the cloud server, wherein the cloud sound effect parameters are adjusted based on the adjusted sound effect parameters;
and replacing the sound effect parameters in the target sound effect file by using the adjusted cloud sound effect parameters, and performing sound effect processing on the audio stream by using the adjusted cloud sound effect parameters.
In some embodiments of the application, the method further comprises:
sending a first update query request to a cloud server at a designated node, wherein the first update query request comprises device parameters of the display device, and the cloud server stores the latest sound effect parameters corresponding to each audio device in the display device;
receiving a query result returned by the cloud server, wherein the query result is either no-update or update; when the query result is update, the query result further comprises a storage address of the latest sound effect parameters that differ from the current sound effect parameters in the display device;
when the query result is update, acquiring the latest sound effect parameters according to the storage address;
and replacing the sound effect parameters in the corresponding sound effect file by using the latest sound effect parameters.
The display device stores sound effect files corresponding to each audio device in advance, so that when processing the audio streams output to the display device's built-in audio device and to external audio devices, the corresponding sound effect parameters are used respectively; the processed audio stream thus matches the corresponding audio device, and its playback quality is ensured. Meanwhile, the cloud server stores the sound effect parameters that each audio peripheral uses on the display device, so that when a user needs to adjust an audio peripheral's sound effect parameters, the corresponding parameters can be modified at the cloud server and synchronized to the display device. And when the sound effect parameters of an audio peripheral are updated, the updated parameters in the cloud server are synchronized to the display device, ensuring the parameters remain current.
Drawings
In order to more clearly illustrate the technical solution of the present application, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a usage scenario of a display device according to an embodiment of the present application;
fig. 2 is a block diagram of a configuration of a control device in an embodiment of the present application;
fig. 3 is a configuration diagram of a display device in an embodiment of the present application;
FIG. 4 is a diagram illustrating an operating system configuration of a display device according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating setting of sound parameters according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an audio stream processing flow according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a correspondence between audio files and audio devices according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a correspondence between audio files and audio devices according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a correspondence between audio files and audio devices according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a correspondence between audio files and audio devices according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a correspondence between audio files and audio devices according to an embodiment of the present application;
FIG. 12 is a schematic diagram illustrating a correspondence between audio files and audio devices according to an embodiment of the present application;
FIG. 13 is a schematic diagram of an audio device list in an embodiment of the application;
FIG. 14 is a flowchart of a method for obtaining a target sound file according to an embodiment of the present application;
FIG. 15 is a flowchart of a method for obtaining a target sound file according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a scene list in an embodiment of the application;
fig. 17 is a schematic flow chart of switching audio devices according to an embodiment of the present application;
FIG. 18 is a flowchart illustrating the adjustment of sound parameters according to an embodiment of the present application;
fig. 19 is a schematic flow chart of synchronizing a sound effect file update from the cloud server according to an embodiment of the present application;
fig. 20 is a schematic flow chart of synchronizing a sound effect file update from the cloud server according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the application; they are merely examples of systems and methods consistent with aspects of the application as set forth in the claims.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first", "second", "third", and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The display device provided by the embodiment of the application can have various implementation forms, for example, can be an intelligent television, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), an electronic desktop (electronic table) and the like, and can also be a device with a display screen, such as a mobile phone, a tablet personal computer, an intelligent watch and the like. Fig. 1 and 2 are specific embodiments of a display device of the present application.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control device 100.
In some embodiments, the control device 100 may be a remote control, and the communication between the remote control and the display device may include at least one of infrared protocol communication or bluetooth protocol communication, and other short-range communication methods, and the display device 200 may be controlled by a wireless or wired method. The user may control the display device 200 by inputting user instructions through keys on a remote control, voice input, control panel input, etc.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on the smart device 300.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
The server 400 may be a cloud server that provides various services, for example, stores configuration files provided by manufacturers of external audio devices, stores data corresponding to user accounts, and provides support services for data collected by the display device 200.
Fig. 3 shows a block diagram of a configuration of the display device 200 in accordance with an exemplary embodiment.
The display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280.
In some embodiments, the modem 210 receives broadcast television signals via wired or wireless reception, and demodulates audio-video signals and EPG data signals from among a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wi-Fi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control apparatus 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector, such as a camera, that can be used to collect external environmental scenes, user attributes, or user interaction gestures; still alternatively, the detector 230 includes a sound collector, such as a microphone or the like, for receiving external sound.
The sound collector may be a microphone, used to receive the user's voice and convert the sound signal into an electrical signal. The display device 200 may be provided with at least one microphone. In other embodiments, the display device 200 may be provided with two microphones, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the display device 200 may be provided with three, four, or more microphones to enable collection of sound signals, noise reduction, sound source identification, directional recording, and the like.
Further, the microphone may be built into the display device 200 or connected to the display device 200 in a wired or wireless manner. Of course, the position of the microphone on the display device 200 is not limited in the embodiments of the present application. Alternatively, the display device 200 may not include a microphone at all; instead, it may be coupled to an external microphone via an interface such as the USB interface 130. The external microphone may be secured to the display device 200 by an external fastener, such as a camera mount with clips.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the controller 250 includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory), a ROM (Read-Only Memory), a first to nth interface for input/output, a communication Bus (Bus), and the like.
In some embodiments, the display 260 includes a display screen component for presenting a picture, and a driving component for driving an image display, a component for receiving an image signal output from the controller 250, displaying video content, image content, and a menu manipulation interface, and a user manipulation UI interface.
The display 260 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
In some embodiments, a user may input a user command through a graphical user interface (Graphic User Interface, GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of a user interface is a Graphical User Interface (GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include at least one of a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
Referring to FIG. 4, in some embodiments, the system is divided into four layers, from top to bottom: an application layer (referred to as the "application layer"), an application framework layer (Application Framework layer) (referred to as the "framework layer"), an Android runtime (Android Runtime) and system library layer (referred to as the "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the application programs of the application layer, and includes a number of predefined functions. The application framework layer acts as a processing center that decides how the applications in the application layer act. Through the API interface, an application program can, in the course of execution, access the resources in the system runtime library layer and obtain system services, and call the corresponding driver in the kernel layer to drive the corresponding module to execute the corresponding service using the corresponding resources.
The display device 200 plays an audio stream through an audio device. In this embodiment, the audio device includes a built-in audio device, which refers to an audio device configured by the display device 200 itself for playing an audio stream, such as the speaker shown in fig. 3. The audio device further includes an external audio device (hereinafter referred to as an audio peripheral), which is an audio device connected through an interface or communication module provided by the display device 200. For example, as shown in fig. 3, an audio peripheral may be connected to the display device 200 through the communicator 220, such as an audio peripheral connected over a WiFi network through the WiFi module of the display device 200, an audio peripheral connected via Bluetooth communication through the Bluetooth module of the display device 200 (hereinafter simply referred to as a Bluetooth audio peripheral), and an audio peripheral connected via Ethernet communication through the wired Ethernet module of the display device 200. As another example, as shown in fig. 3, an audio peripheral may be connected to the display device 200 through the external device interface 240, such as an audio device connected through a USB interface of the display device 200 (hereinafter abbreviated as a USB audio peripheral), an audio peripheral connected through an I2S digital audio output interface (not shown) (hereinafter abbreviated as an I2S audio peripheral), and an audio peripheral connected through an Audio Return Channel (ARC) interface (not shown) (hereinafter abbreviated as an ARC audio peripheral). As another example, as shown in fig. 3, an audio peripheral (hereinafter simply referred to as a wired audio peripheral) may be wired to the display device 200 through the audio output interface 270, such as an external sound device or a wired earphone. In some embodiments, an audio peripheral connected to the display device 200 through an optical fiber (not shown in the figures) is also included (hereinafter referred to simply as a fiber optic audio peripheral).
As shown in fig. 3, the display device 200 demodulates the received audio stream through the tuner demodulator 210 and inputs the demodulated audio stream to the audio processor for processing. To improve the playing quality of the audio stream, the audio stream is generally subjected to sound effect processing. In this embodiment, sound effect processing means setting corresponding sound effect parameters for the audio stream, where the sound effect parameters include items such as the sound mode (for example, a dynamic mode or a standard mode), surround sound, sound resetting, bass emphasis, equalizer, and Dolby sound effect, together with the value corresponding to each item. The audio stream subjected to sound effect processing is transmitted to the currently used audio device for playing. However, the sound effect parameters used when the display device 200 processes the audio stream are matched only with the built-in audio device and cannot be matched with an audio peripheral; for example, the items in the sound effect parameters do not match the sound effect parameter items corresponding to the audio peripheral, and the values of the sound effect parameters do not match the values corresponding to the audio peripheral. Therefore, when the audio peripheral outputs the audio stream after sound effect processing, the playing quality of the audio stream cannot be ensured, the playing capability of the external audio device cannot be fully exploited, and the hearing experience of the user when the audio peripheral plays the audio stream is affected.
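The sound effect parameters described above (sound mode, surround sound, equalizer values, and so on, each with its own value) can be sketched as a simple data structure. This is only an illustrative sketch; the field names and example values are assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SoundEffectParams:
    """Illustrative container for the sound effect parameter items above."""
    sound_mode: str = "standard"      # e.g. "dynamic", "standard"
    surround_sound: bool = False
    bass_emphasis: bool = False
    dolby: bool = False
    # equalizer gains in dB for the bass / midrange / treble / high bands
    equalizer: dict = field(default_factory=lambda: {
        "bass": 0, "midrange": 0, "treble": 0, "high": 0})

# Parameters matched to the built-in speaker and to an audio peripheral may
# differ both in which items are meaningful and in their values.
speaker_params = SoundEffectParams(sound_mode="standard")
arc_params = SoundEffectParams(sound_mode="dynamic", surround_sound=True,
                               equalizer={"bass": -4, "midrange": -1,
                                          "treble": 2, "high": -3})
```

The mismatch the paragraph describes is then simply that `speaker_params` and `arc_params` disagree in items and values.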
To solve the above problems, an embodiment of the present application provides a method for setting sound effect parameters of a display device, so that the audio streams output to the built-in audio device and to an external audio device are each sound-effect-processed with their own corresponding sound effect parameters. The processed audio streams can then be matched with the corresponding audio devices, ensuring the playing quality of the audio streams. Reference may be made to the flow shown in fig. 5, which includes the following specific steps:
S501, acquiring an audio stream.
The audio stream may be live audio data, such as the audio stream received by the display device 200 via an antenna. The audio stream may be local audio data, such as audio data stored in a memory of the display device 200, from which the display device 200 directly retrieves the audio stream. The audio stream may be audio data provided by an external device, which is a device connected to the display device 200 through an interface or a communication module provided by the display device 200 and transmitting the audio data to the display device 200, as shown in fig. 3, and the external device may be connected to the display device 200 through the communicator 220 or the external device interface 240. Illustratively, the external device is connected with the display device 200 through a WiFi network, and the display device 200 receives an audio stream transmitted by the external device based on the WiFi network; the external device is connected with the display device 200 through Bluetooth, and the display device 200 receives an audio stream transmitted by the external device based on Bluetooth communication; the external device is connected with the display device 200 through the Ethernet, and the display device 200 receives the audio stream transmitted by the external device based on the Ethernet; the external device is connected with the display device 200 through the USB, and the display device 200 receives the audio stream transmitted by the external device based on the USB interface. The audio stream may be audio data collected by the display apparatus 200 from the external environment, as shown in fig. 3, and the display apparatus 200 collects the surrounding audio stream through a detector 230, such as a sound collector.
Referring to the audio stream processing flowchart shown in fig. 6, after the display device 200 acquires an audio stream, format unification processing is first performed on the audio stream. According to the encoding format, audio streams may be divided into pulse code modulation (Pulse Code Modulation, PCM) format audio streams, such as waveform audio format (WAV) audio streams, and non-PCM format audio streams, such as lossless compression audio format (APE) audio streams and Free Lossless Audio Codec (FLAC) audio streams. The non-PCM format audio streams are decoded to obtain PCM format audio streams, and all PCM format audio streams are then mixed, i.e., the sampling rate, bit depth, and the like of each PCM format audio stream are adjusted, to obtain an audio stream in a specified format. In some embodiments, the specified format may be a 48 kHz sampling rate and a 32-bit depth. The mixed audio stream is then pre-processed, i.e., volume gain processing is applied to amplify the volume of the mixed audio stream, which facilitates the subsequent sound effect processing and helps ensure its playing effect. The pre-processed audio stream is then post-processed, i.e., sound effect processing is performed on it using the sound effect parameters in a sound effect file. For example, if the sound effect parameters in the sound effect file are in the standard mode, with equalizer values of -4 dB, -1 dB, 2 dB, and -3 dB corresponding respectively to the bass, midrange, treble, and high-frequency bands, then the sound effect parameters of the audio stream obtained by this sound effect processing correspond to those values.
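The four stages above (format unification, mixing, pre-processing, post-processing) can be sketched as a small pipeline. The stream representation, the gain value, and the stubbed decoding are illustrative assumptions; real code would invoke actual audio decoders and DSP filters:

```python
def unify_format(streams):
    """Format unification: decode non-PCM streams (APE, FLAC, ...) to PCM.
    Decoding is stubbed here; only the format tag is rewritten."""
    return [dict(s, format="PCM") for s in streams]

def mix(streams, rate=48000, bits=32):
    """Mix all PCM streams into one stream in the specified format
    (48 kHz sampling rate, 32-bit depth in this sketch)."""
    samples = [x for s in streams for x in s["samples"]]
    return {"format": "PCM", "rate": rate, "bits": bits, "samples": samples}

def pre_process(stream, gain=2.0):
    """Pre-processing: volume gain applied before sound effect processing."""
    return dict(stream, samples=[x * gain for x in stream["samples"]])

def post_process(stream, effect_file):
    """Post-processing: apply the sound effect parameters from the sound
    effect file. The equalizer is just attached here; a real DSP chain
    would filter each band."""
    return dict(stream, eq=effect_file["equalizer"])

effect_file = {"mode": "standard",
               "equalizer": {"bass": -4, "midrange": -1, "treble": 2, "high": -3}}
streams = [{"format": "WAV/PCM", "samples": [1, 2]},
           {"format": "FLAC", "samples": [3]}]
out = post_process(pre_process(mix(unify_format(streams))), effect_file)
```

The resulting `out` carries the specified format plus the equalizer values from the sound effect file, which is what is handed to the currently used audio device.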
The post-processed audio stream is transmitted to the currently used audio device for playing. For example, the post-processed audio stream may be transmitted to a built-in audio device, such as a speaker, for playback, or to an audio peripheral, such as a Bluetooth speaker designated by the user, for playback.
The playing quality of the audio stream can be ensured only when the sound effect parameters of the sound-effect-processed audio stream match the currently used audio device. According to the processing flow of the audio stream, the sound effect parameters of the processed audio stream correspond to the sound effect file used during post-processing; therefore, to ensure the playing quality of the audio stream, the sound effect file used during sound effect processing must accurately match the currently used audio device.
In this embodiment, to achieve accurate matching between the sound effect file used and the currently used audio device, the sound effect files are reconfigured, that is, different sound effect files are set for different audio devices, where the sound effect parameters in the sound effect file corresponding to each audio device match that device. When sound effect processing is performed, the sound effect file corresponding to the currently used audio device is used, ensuring that the processed audio stream has sound effect parameters matched with that device and thereby ensuring the playing quality of the audio stream.
In the first embodiment, the audio devices are classified according to the source of the audio stream output to them, and each class of audio device corresponds to one sound effect file. The audio streams output by the built-in audio device and the first audio peripheral belong to one source, while the audio stream output by the second audio peripheral belongs to another source. The first audio peripheral is an audio peripheral connected to the display device 200 through a first communication manner, for example, the Bluetooth audio peripheral, the USB audio peripheral, or the wired earphone disclosed above; the second audio peripheral is an audio peripheral connected to the display device 200 through a second communication manner, for example, the fiber optic audio peripheral, the ARC audio peripheral, or the I2S audio peripheral disclosed above.
As shown in fig. 7, the same sound effect file, for example, sound effect file A, is configured for the built-in audio device (such as a speaker) and the first audio peripheral, where sound effect file A includes sound effect parameter a; the same sound effect file, for example, sound effect file B, is configured for the second audio peripheral, where sound effect file B includes sound effect parameter b. Since the sound effect parameters used when processing audio streams belonging to the same source are relatively close, a processed audio stream obtained by performing sound effect processing on audio streams of the same source with the same group of sound effect parameters is suitable for the audio devices that play audio streams of that source, and the playing quality of the audio stream can be ensured. The sound effect file corresponding to each class of audio device is pre-stored in the memory of the display device 200. Configuring the sound effect files in this manner not only ensures the playing quality of the audio stream, but also effectively controls the number of pre-stored sound effect files, reducing the memory space they occupy. In addition, it makes it easy to quickly determine, among a small number of sound effect files, the sound effect file corresponding to the currently used audio device, which improves the efficiency of sound effect processing.
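The fig. 7 classification can be sketched as a simple lookup; the device and file names below are illustrative placeholders, not identifiers from the patent:

```python
# Embodiment 1 (fig. 7): classification by audio-stream source. The built-in
# device and the first audio peripherals share sound effect file A; the
# second audio peripherals share sound effect file B.
FIRST_AUDIO_PERIPHERALS = {"bluetooth", "usb", "wired_earphone"}
SECOND_AUDIO_PERIPHERALS = {"arc", "fiber_optic", "i2s"}

def sound_effect_file_for(device: str) -> str:
    """Return the pre-stored sound effect file for the given audio device."""
    if device == "speaker" or device in FIRST_AUDIO_PERIPHERALS:
        return "sound_effect_file_A"
    if device in SECOND_AUDIO_PERIPHERALS:
        return "sound_effect_file_B"
    raise ValueError(f"unknown audio device: {device}")
```

Only two files need to be pre-stored under this scheme, which is the memory-saving property the paragraph describes.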
In some embodiments, the sound effect parameter a in sound effect file A adopts the sound effect parameters originally matched with the built-in audio device. Thus, sound effect parameter a can directly reuse the sound effect parameters configured for the built-in audio device when the display device 200 leaves the factory, and there is no need to additionally obtain sound effect parameters corresponding to the other first audio peripherals; that is, sound effect file A can directly reuse the original sound effect file in the display device 200. This saves the configuration work for sound effect file A, so that only sound effect file B needs to be configured.
In some embodiments, the sound effect parameter b in sound effect file B may use the sound effect parameters matching any one of the second audio peripherals, such as the sound effect parameters matching the ARC audio peripheral.
In some embodiments, the sound effect parameter b in sound effect file B may be a specified sound effect parameter, where the specified sound effect parameter is calculated based on the sound effect parameters corresponding to each second audio peripheral. The specified sound effect parameter is therefore not precisely matched to any single second audio peripheral, but matches each second audio peripheral reasonably well, so that the playing quality of an audio stream processed with the specified sound effect parameter does not differ greatly across the second audio peripherals, balancing the playing quality among them. For example, this avoids the situation where, when the user uses the various second audio peripherals, the playing quality of some is obviously better while that of others is obviously worse. For another example, when the user switches between the second audio peripherals, abrupt changes in tone quality that would affect the user's hearing experience can be avoided.
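The embodiment only says the specified sound effect parameter is "calculated based on" the parameters of each second audio peripheral, without naming the calculation. One plausible choice is a per-band average of the matched equalizer gains; the following sketch assumes exactly that (the averaging rule and the example values are assumptions):

```python
def specified_sound_effect_params(per_device_eq):
    """Combine the equalizer gains matched to each second audio peripheral
    into one 'specified' parameter set. Per-band averaging is an ASSUMED
    calculation; the patent does not specify the formula."""
    bands = per_device_eq[0].keys()
    n = len(per_device_eq)
    return {band: sum(eq[band] for eq in per_device_eq) / n for band in bands}

specified = specified_sound_effect_params([
    {"bass": -4, "treble": 2},   # e.g. gains matched to the ARC peripheral
    {"bass": -2, "treble": 0},   # e.g. gains matched to the fiber optic peripheral
    {"bass":  0, "treble": 1},   # e.g. gains matched to the I2S peripheral
])
```

An average by construction sits between the per-device extremes, which is one way to realize the balancing property described above: no single second audio peripheral is strongly favored.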
In the second embodiment, based on the first embodiment, the audio devices classified according to the source of the corresponding output audio stream are further classified according to their device types, and each class of audio device corresponds to one sound effect file. In this embodiment, the device type of an audio device corresponds to the communication mode between the audio device and the display device 200; as can be seen from the above description, the device types include the built-in audio device, the Bluetooth audio peripheral, the USB audio peripheral, the wired earphone, the ARC audio peripheral, the fiber optic audio peripheral, and the I2S audio peripheral.
In this embodiment, the audio devices may be further divided between the audio device built into the display device 200 and the external audio devices. The built-in audio device and the first audio peripheral are further classified by device type into two classes, while the second audio peripheral requires no further classification because it consists entirely of audio peripherals. Therefore, the audio devices obtained after this further classification correspond to three classes: the built-in audio device, the first audio peripheral, and the second audio peripheral. The built-in audio device corresponds to one sound effect file, the first audio peripheral corresponds to one sound effect file, and the second audio peripheral corresponds to one sound effect file.
As shown in fig. 8, a sound effect file, such as sound effect file A, is configured for the built-in audio device (such as a speaker), where sound effect file A includes sound effect parameter a; a sound effect file, such as sound effect file C, is configured for the first audio peripheral (such as the Bluetooth audio peripheral, the USB audio peripheral, and the wired earphone), where sound effect file C includes sound effect parameter c; and a sound effect file, such as sound effect file B, is configured for the second audio peripheral (such as the ARC audio peripheral, the fiber optic audio peripheral, and the I2S audio peripheral), where sound effect file B includes sound effect parameter b. Typically, the sound effect parameters matched with the built-in audio device differ considerably from those matched with the audio peripherals; for example, the built-in audio device corresponds to fewer sound effect parameter items, while audio peripherals usually provide richer sound effect parameter items in order to deliver better sound quality. Therefore, even though the audio streams played by the built-in audio device and the first audio peripheral belong to the same source, i.e., the sound effect parameters used in sound effect processing are similar, this similarity mainly refers to similarity in the values of the same sound effect parameter items; it cannot make up for the difference in sound effect parameter items between the built-in audio device and the first audio peripheral. The difference in sound effect parameter items among the audio peripherals themselves is relatively small, so the built-in audio device and the first audio peripheral are further classified and configured with their respective sound effect files.
Because the audio streams played by the first audio peripherals come from the same source, a processed audio stream obtained with the same set of sound effect parameters is suitable for each first audio peripheral; meanwhile, this set of sound effect parameters can be made closer to the items and values of the audio peripherals, improving the degree of matching between the sound-effect-processed audio stream and the first audio peripherals and thereby improving the playing quality of the audio stream.
The setting of the sound effect parameter a and the sound effect parameter b may refer to the sound effect file configuration manner in the first embodiment, which is not described herein.
In some embodiments, the sound effect parameter c may be set with reference to the corresponding portion of the sound effect file configuration manner in the above embodiment. Illustratively, sound effect parameter c may adopt the sound effect parameters matching any one of the first audio peripherals, such as those matching the Bluetooth audio peripheral. As another example, sound effect parameter c may be a specified sound effect parameter calculated based on the sound effect parameters corresponding to each first audio peripheral. The specified sound effect parameter is thus not precisely matched to any single first audio peripheral, but matches each first audio peripheral reasonably well, so that the playing quality of an audio stream processed with it does not differ greatly across the first audio peripherals, balancing the playing quality among the first audio peripherals; this is not described again here.
In some embodiments, the first audio peripherals may be classified in combinations according to device type, and the second audio peripherals may likewise be classified in combinations according to device type, where each class of audio device corresponds to one sound effect file. That is, the built-in audio device forms one class, each device-type combination of the first audio peripherals corresponds to one class, and each device-type combination of the second audio peripherals corresponds to one class. By having audio peripherals of several device types share the same sound effect file, the number of pre-stored sound effect files can be reduced, further reducing the memory space occupied.
For example, the Bluetooth audio peripheral and the USB audio peripheral among the first audio peripherals are combined into one class, the wired earphone among the first audio peripherals corresponds to one class, and the second audio peripherals are not combined into smaller groups, i.e., the second audio peripherals correspond to one class. As shown in fig. 9, a sound effect file, such as sound effect file A, is configured for the built-in audio device (such as a speaker), where sound effect file A includes sound effect parameter a; a sound effect file, such as sound effect file C1, is configured for the Bluetooth audio peripheral and the USB audio peripheral, where sound effect file C1 includes sound effect parameter c1; a sound effect file, such as sound effect file C2, is configured for the wired earphone, where sound effect file C2 includes sound effect parameter c2; and a sound effect file, such as sound effect file B, is configured for the second audio peripheral (such as the ARC audio peripheral, the fiber optic audio peripheral, and the I2S audio peripheral). The setting of sound effect parameter a and sound effect parameter b may refer to the sound effect file configuration manner in the first embodiment, and the setting of sound effect parameter c1 may refer to the setting manner of sound effect parameter b in the first embodiment, which are not described again here. Sound effect parameter c2 is the sound effect parameter accurately matched with the wired earphone.
As another example, the first audio peripherals are not combined into smaller groups, i.e., the first audio peripherals correspond to one class; the ARC audio peripheral and the fiber optic audio peripheral among the second audio peripherals are combined into one class, and the I2S audio peripheral among the second audio peripherals corresponds to one class. As shown in fig. 10, a sound effect file, such as sound effect file A, is configured for the built-in audio device (such as a speaker), where sound effect file A includes sound effect parameter a; a sound effect file, such as sound effect file C, is configured for the first audio peripheral (such as the Bluetooth audio peripheral, the USB audio peripheral, and the wired earphone), where sound effect file C includes sound effect parameter c; a sound effect file, such as sound effect file B1, is configured for the ARC audio peripheral and the fiber optic audio peripheral, where sound effect file B1 includes sound effect parameter b1; and a sound effect file, such as sound effect file B2, is configured for the I2S audio peripheral, where sound effect file B2 includes sound effect parameter b2. The setting of sound effect parameter a and sound effect parameter c may refer to the corresponding portion of the sound effect file configuration manner shown in fig. 8, and the setting of sound effect parameter b1 may refer to the setting manner of sound effect parameter b in the first embodiment, which are not described again here. Sound effect parameter b2 is the sound effect parameter accurately matched with the I2S audio peripheral.
As another example, the Bluetooth audio peripheral and the USB audio peripheral among the first audio peripherals are combined into one class, the wired earphone among the first audio peripherals corresponds to one class, the ARC audio peripheral and the fiber optic audio peripheral among the second audio peripherals are combined into one class, and the I2S audio peripheral among the second audio peripherals corresponds to one class. As shown in fig. 11, a sound effect file, such as sound effect file A, is configured for the built-in audio device (such as a speaker), where sound effect file A includes sound effect parameter a; a sound effect file, such as sound effect file C1, is configured for the Bluetooth audio peripheral and the USB audio peripheral, where sound effect file C1 includes sound effect parameter c1; a sound effect file, such as sound effect file C2, is configured for the wired earphone, where sound effect file C2 includes sound effect parameter c2; a sound effect file, such as sound effect file B1, is configured for the ARC audio peripheral and the fiber optic audio peripheral, where sound effect file B1 includes sound effect parameter b1; and a sound effect file, such as sound effect file B2, is configured for the I2S audio peripheral, where sound effect file B2 includes sound effect parameter b2. The setting of sound effect parameter a may refer to sound effect parameter a in the first embodiment, the setting of sound effect parameters c1 and c2 may refer to the corresponding portion of the sound effect file configuration manner shown in fig. 9, and the setting of sound effect parameters b1 and b2 may refer to the corresponding portion of the sound effect file configuration manner shown in fig. 10, which are not described again here.
In the third embodiment, the audio devices are classified according to their device types, and each class of audio device corresponds to one sound effect file. In this embodiment, each audio device is precisely classified by device type, that is, the built-in audio device, the Bluetooth audio peripheral, the USB audio peripheral, the wired earphone, the ARC audio peripheral, the fiber optic audio peripheral, and the I2S audio peripheral each correspond to one class, so that the sound effect parameters in each sound effect file precisely match the corresponding audio device. Therefore, before an audio stream is transmitted to the currently used audio device, performing sound effect processing with the sound effect parameters in the corresponding sound effect file ensures that the processed audio stream precisely matches the currently used audio device, effectively improving the playing quality of the audio stream. As shown in fig. 12, a sound effect file, such as sound effect file D1, is configured for the built-in audio device (such as a speaker), where sound effect file D1 includes sound effect parameter d1; a sound effect file, such as sound effect file D2, is configured for the Bluetooth audio peripheral, where sound effect file D2 includes sound effect parameter d2; a sound effect file, such as sound effect file D3, is configured for the USB audio peripheral, where sound effect file D3 includes sound effect parameter d3; a sound effect file, such as sound effect file D4, is configured for the wired earphone, where sound effect file D4 includes sound effect parameter d4; a sound effect file, such as sound effect file D5, is configured for the ARC audio peripheral, where sound effect file D5 includes sound effect parameter d5; a sound effect file, such as sound effect file D6, is configured for the fiber optic audio peripheral, where sound effect file D6 includes sound effect parameter d6; and a sound effect file, such as sound effect file D7, is configured for the I2S audio peripheral, where sound effect file D7 includes sound effect parameter d7.
The setting of sound effect parameter d1 may refer to sound effect parameter a in the first embodiment, which is not described again here; the remaining sound effect parameters are each precisely matched with the corresponding audio peripheral.
The sound effect files in the above embodiments are all stored in advance in the memory of the display device 200, so that when an audio device is used to play an audio stream, the pre-stored sound effect files can be used directly, improving the efficiency of sound effect processing, avoiding play delay of the audio stream, and ensuring its playing quality. When the display device 200 is configured at the factory, the sound effect parameters in each sound effect file take initial values, which can be set based on the matched sound effect parameters of various pre-registered audio devices.
S502, identifying the currently used target audio device.
In the present embodiment, the currently used audio device is referred to as the target audio device, such as the built-in audio device that the display device 200 uses by default, or an audio peripheral indicated by the user. The target audio device may be determined by identifying a user instruction. The audio device list shown in fig. 13 includes an option for each audio device, such as the speaker (built-in audio device), Bluetooth headset (Bluetooth audio peripheral), wired headset (wired earphone), USB sound device (USB audio peripheral), ARC audio peripheral, fiber optic audio peripheral, and I2S audio peripheral. The option of an audio device currently connected to the display device 200 is in an active state, i.e., it may be selected; the option of an audio device not currently connected to the display device 200 is in a gray state, i.e., it may not be selected, such as the Bluetooth headset. The user selects the target audio device to be used based on the audio device list; for example, the user moves the focus to the option of the target audio device by manipulating the control apparatus 100, such as a remote controller, and sends a selection instruction to the display device 200 by pressing the "confirm" key to instruct the display device 200 to play the audio stream using the target audio device. In response to the selection instruction, the display device 200 identifies the position where the focus is currently located, i.e., on the option of the target audio device, thereby identifying the currently used target audio device.
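The active/gray option behavior of fig. 13 can be sketched as follows; the device names and the list-building logic are illustrative assumptions rather than the patent's implementation:

```python
ALL_DEVICES = ["speaker", "bluetooth_headset", "wired_headset",
               "usb_sound", "arc", "fiber_optic", "i2s"]

def build_device_list(connected):
    """An option is active (selectable) only when its device is currently
    connected; otherwise it is shown grayed out. The built-in speaker is
    assumed always available."""
    connected = set(connected) | {"speaker"}
    return [{"device": d, "selectable": d in connected} for d in ALL_DEVICES]

def select_target(options, choice):
    """Resolve the user's focus position plus 'confirm' press into the
    target audio device, rejecting grayed-out options."""
    option = next(o for o in options if o["device"] == choice)
    if not option["selectable"]:
        raise ValueError(f"{choice} is not currently connected")
    return choice
```

For example, with only an ARC peripheral connected, `select_target` accepts the speaker or the ARC option, while the Bluetooth headset option stays grayed out.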
S503, obtaining a target sound effect file corresponding to the target audio device.
Based on the corresponding relation between the audio equipment and the sound effect file disclosed in the above embodiments, the sound effect file corresponding to the target audio equipment, that is, the target sound effect file, can be accurately determined.
The target sound effect file is acquired with reference to the flow shown in fig. 14, and specific steps are as follows:
S1401, identifying a target classification corresponding to the target audio device.
According to the classification manner of the audio device in the above embodiments, the classification corresponding to the target audio device, that is, the target classification, may be determined.
S1402, obtaining the target sound effect file corresponding to the target classification.
In a first example, the target sound effect file is determined according to the correspondence between audio devices and sound effect files in the first embodiment. Referring to fig. 7, if the target audio device is the speaker, the target classification is the class corresponding to the built-in audio device and the first audio peripheral, and the target sound effect file is sound effect file A; if the target audio device is a Bluetooth audio peripheral, the target classification is the class corresponding to the built-in audio device and the first audio peripheral, and the target sound effect file is sound effect file A; if the target audio device is an ARC audio peripheral, the target classification is the class corresponding to the second audio peripheral, and the target sound effect file is sound effect file B.
According to the second embodiment, the target sound effect file is determined according to the corresponding relation between the audio device and the sound effect file. Referring to fig. 8, if the target audio device is a speaker, the target is classified into a category corresponding to the built-in audio device, and the target sound effect file is a sound effect file a; if the target audio equipment is a Bluetooth audio peripheral, classifying the target into a category corresponding to the first audio peripheral, wherein the target audio file is an audio file C; if the target audio equipment is an ARC audio peripheral, the target is classified into a category corresponding to the second audio peripheral, and the target audio file is an audio file B.
In an example III, according to the corresponding relation between the audio device and the sound effect file in the second embodiment, the target sound effect file is determined. Referring to fig. 9, if the target audio device is a speaker, the target is classified into a category corresponding to the built-in audio device, and the target sound effect file is a sound effect file a; if the target audio equipment is a Bluetooth audio peripheral, classifying the target into categories corresponding to the Bluetooth audio peripheral and the USB audio peripheral, wherein the target audio file is an audio file C1; if the target audio equipment is a wired earphone, the target is classified into a category corresponding to the wired earphone, and the target sound effect file is a sound effect file C2; if the target audio equipment is an ARC audio peripheral, the target is classified into a category corresponding to the second audio peripheral, and the target audio file is an audio file B.
In an example four, according to the correspondence between the audio device and the audio file in the second embodiment, the target audio file is determined. Referring to fig. 10, if the target audio device is a speaker, the target is classified into a category corresponding to the built-in audio device, and the target sound effect file is a sound effect file a; if the target audio equipment is a Bluetooth audio peripheral, classifying the target into a category corresponding to the first audio peripheral, wherein the target audio file is an audio file C; if the target audio equipment is ARC audio peripheral equipment, classifying the target into categories corresponding to the ARC audio peripheral equipment and the optical fiber audio peripheral equipment, wherein the target audio file is an audio file B1; if the target audio equipment is a 12S audio peripheral, the target is classified into a category corresponding to the 12S audio peripheral, and the target sound effect file is a sound effect file B2.
In the fifth example, according to the correspondence between the audio device and the audio file in the second embodiment, the target audio file is determined. Referring to fig. 11, in the present example, examples three and four may be referred to for a target sound effect file corresponding when a different audio device is used as a target audio device, wherein example three may be referred to when the target audio device is a built-in audio device, a bluetooth audio peripheral, a USB audio peripheral, a wired earphone; when the target audio device is an ARC audio peripheral, a fiber audio peripheral, a 12S audio peripheral, reference may be made to example four.
In an example six, according to the correspondence between the audio device and the audio file in the third embodiment, the target audio file is determined. Referring to fig. 12, different target audio devices correspond to different classifications and to different sound effect files, which are not described herein.
In some embodiments, each type of audio device corresponds to a plurality of sound effect files, and each of the plurality of sound effect files corresponds to a type of scene. In this embodiment, the scene corresponding to a sound effect file refers to the audio content of the audio stream and/or the usage environment of the audio device. Illustratively, music audio content corresponds to one sound effect file, language-class audio content corresponds to another sound effect file, and so on. Similarly, a noisy usage environment corresponds to one sound effect file, and a quiet usage environment corresponds to another. A combination of audio content and usage environment may also correspond to one sound effect file; for example, music in a quiet environment corresponds to one sound effect file, music in a noisy environment corresponds to one sound effect file, language-class content in a quiet environment corresponds to one sound effect file, and language-class content in a noisy environment corresponds to one sound effect file.
That is, the sound parameters in the sound file are matched with the specific audio content being played and the specific usage scenario, in addition to the playback configuration of the audio device itself. Therefore, the audio stream processed by the audio file can be matched with the audio equipment, and can be matched with the played audio content and the use environment, so that the playing quality of the audio stream is effectively improved.
In this embodiment, the correspondence between each type of audio device and the plurality of sound effect files may refer to the correspondence between each type of audio device and the sound effect files described in embodiments one to three, except that the single sound effect file originally corresponding to each type of audio device is replaced by a plurality of sound effect files, each containing sound effect parameters matched with that audio device as well as with different audio content and/or usage scenarios, which are not described herein again.
Based on the correspondence between the sound effect file and the scene, the target sound effect file is determined with reference to the flow shown in fig. 15, and the specific steps are as follows:
S1501, acquiring all sound effect files corresponding to the target audio device.
The target audio device is determined according to the above process of determining the target audio device, and will not be described herein. The target classification corresponding to the target audio device is determined based on the correspondence between the audio device and the classification, and the determining process may refer to the process of determining the target classification above, which is not described herein. Based on the corresponding relation between each type of audio equipment and the sound effect file, all sound effect files corresponding to the target classification of the target audio equipment are obtained, each sound effect file in all sound effect files contains sound effect parameters matched with the target audio equipment, and the audio content and/or the use scene corresponding to the sound effect parameters in each sound effect file are different.
For example, the target audio device is a wired headset, and the corresponding target classification is the wired headset category. All the corresponding sound effect files include: sound effect file E1, whose audio content is music and whose usage environment is a quiet environment, containing sound effect parameter E1; sound effect file E2, whose audio content is language-class and whose usage environment is a quiet environment, containing sound effect parameter E2; sound effect file E3, whose audio content is music and whose usage environment is a noisy environment, containing sound effect parameter E3; and sound effect file E4, whose audio content is language-class and whose usage environment is a noisy environment, containing sound effect parameter E4.
S1502, identifying a target scene corresponding to the audio stream.
The display device 200 determines the target scene corresponding to the audio stream based on an instruction sent by the user. After the user selects the target audio device from the audio device list shown in fig. 13, for example the bluetooth headset, the scene list shown in fig. 16 is displayed. The scene list includes options for audio content and usage environment, and the user may select one or more options from the scene list, i.e., select the target scene. For example, the user manipulates the remote controller to send a selection instruction to the display device 200 indicating that the selected target scene is music in a quiet environment, and the display device 200 determines the corresponding target scene based on the selection instruction.
S1503, acquiring the target sound effect file corresponding to the target scene from all sound effect files.
In the example given in S1502, the target sound effect file corresponding to the target scene, i.e., music in a quiet environment, is sound effect file E1. In this way, the sound effect parameters in the currently used target sound effect file match not only the target audio device but also the audio content of the currently played audio stream and the current usage environment, thereby further improving the playing quality.
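The scene-based selection in S1501 to S1503 amounts to a second lookup keyed by the (audio content, usage environment) pair. The sketch below assumes the wired-headset example above; the tuple-keyed dictionary is an assumed representation, not the disclosed implementation.

```python
# Sketch of S1501-S1503 for the wired-headset example: all sound effect
# files of the target device, keyed by (audio content, usage environment).
# The data layout is an assumption for illustration only.

ALL_WIRED_HEADSET_FILES = {
    ("music", "quiet"): "E1",
    ("language", "quiet"): "E2",
    ("music", "noisy"): "E3",
    ("language", "noisy"): "E4",
}

def get_file_for_scene(all_files: dict, audio_content: str, environment: str) -> str:
    """S1502 supplies the target scene; S1503 picks the matching file."""
    return all_files[(audio_content, environment)]
```

With this layout, selecting "music" and "quiet environment" from the scene list of fig. 16 yields sound effect file E1, as in the example above.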
S504, performing sound effect processing on the audio stream by using the sound effect parameters in the target sound effect file to obtain the processed audio stream.
After the target sound effect file is determined based on the steps, sound effect processing is carried out on the audio stream by using sound effect parameters in the target sound effect file, the processed audio stream can be matched with target audio equipment, and the playing quality of the audio stream can be effectively ensured.
The display device stores sound effect files corresponding to the audio devices in advance, so that when sound effect processing is carried out on the audio streams output to the built-in audio device and the externally connected audio device of the display device, corresponding sound effect parameters are respectively used, the processed audio streams can be matched with the corresponding audio devices, and accordingly playing quality of the audio streams is guaranteed.
In some embodiments, when the user uses the target audio device, a need arises to switch the target audio device to another audio device, so that the audio stream can be continuously played by using the other audio device, and the specific steps of switching the audio devices can be performed with reference to the flow shown in fig. 17 as follows:
S1701, receiving a device switching instruction sent by a user, wherein the device switching instruction indicates the audio device to be switched.
For example, the currently used target audio device is a wired headset, and the user selects the audio device to be switched, such as the USB audio peripheral, from the audio device list shown in fig. 13.
S1702, acquiring a sound effect file of the audio device to be switched.
The process in which the display device 200 obtains the sound effect file of the audio device to be switched may refer to the process of obtaining the target sound effect file above, and will not be described herein again.
S1703, performing sound effect processing on the audio stream by using sound effect parameters in the sound effect file of the audio device to be switched to obtain a processed audio stream.
The process of performing sound effect processing on the audio stream using the sound effect parameters in the target sound effect file may be referred to above, and will not be described herein again. In this way, after the audio device is switched, the audio stream can be accurately processed using the sound effect file corresponding to the switched audio device, thereby ensuring the playing quality of the audio stream played by that device.
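The switching flow of fig. 17 (S1701 to S1703) can be sketched as follows. The `apply_sound_effects` function is a placeholder for the real DSP step, and the dictionaries are assumptions for illustration.

```python
# Sketch of the switching flow of fig. 17 (S1701-S1703). apply_sound_effects
# stands in for the actual sound effect processing; all names are assumptions.

def apply_sound_effects(audio_stream, parameters):
    # Placeholder: a real implementation would run the DSP chain here.
    return {"stream": audio_stream, "parameters": parameters}

def switch_audio_device(audio_stream, device_to_switch, sound_effect_files):
    parameters = sound_effect_files[device_to_switch]    # S1702
    return apply_sound_effects(audio_stream, parameters) # S1703
```

The key point the flow illustrates is that the stream is reprocessed with the new device's parameters, not carried over with the old ones.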
In the pre-configuration mode of the sound effect files disclosed above, the initial values of the sound effect parameters in each sound effect file are based on the sound effect parameters of the various audio devices on the market, and are thus equivalent to general sound effect parameters for each type of audio device. If the values of the sound effect parameters are adjusted directly on the display device 200, the sound effect parameters of all audio devices of that type are adjusted together; that is, the sound effect parameters of the currently used audio device cannot be adjusted independently. Therefore, if the user needs to individually set the currently used sound effect parameters based on personal preference, the sound effect parameters may be independently updated and set through the cloud server. As shown in the scenario diagram of fig. 1, the server 400 may be a cloud server configured for updating the sound effect parameters.
In the fourth embodiment, the sound effect parameters of the various audio devices are configured at the cloud server according to the category of the display device 200. Display devices 200 are classified according to device parameters, such as brand, country, language, model, and device ID, and display devices 200 of different categories have corresponding classification modes for their audio devices (refer to the classification modes of the audio devices above). A sound effect engineer may configure sound effect parameters for the various audio devices on the various display devices 200 according to the categories of the display devices 200, and store the sound effect parameters of the various audio devices in correspondence with the display devices 200 of the respective categories. Illustratively, the category corresponding to a display device 200 is "brand: X; country: China; language: Chinese; model: *; device ID: *", and the classification mode of the audio devices corresponding to this category of display device 200 may refer to the third embodiment. The sound effect parameters configured by the sound effect engineer for each type of audio device are, respectively, "sound effect parameter d1 for the built-in audio device (such as the speaker), sound effect parameter d2 for the bluetooth audio peripheral, sound effect parameter d3 for the USB audio peripheral, sound effect parameter d4 for the wired headset, sound effect parameter d5 for the ARC audio peripheral, sound effect parameter d6 for the fiber optic audio peripheral, and sound effect parameter d7 for the I2S audio peripheral"; the category corresponding to the display device 200 and the sound effect parameters of each type of audio device are then stored in correspondence.
Thus, based on the category of the display device 200, i.e., the device parameters, the sound effect parameters of the various audio devices corresponding to the display device 200 can be accurately determined.
In the fifth embodiment, the sound effect parameters of the various audio devices on the various display devices 200 bound to a user account are stored in the cloud server according to the user account. After the user logs in to the user account on the currently used display device 200, the display device 200 may use the stored data corresponding to that user account, for example, the sound effect parameters of the various audio devices corresponding to a bound display device 200 of the same category as the current display device 200. For example, if user account a is bound to a class-a display device 200 and a class-b display device 200, the cloud server stores user account a together with the sound effect parameters of each audio device on the class-a and class-b display devices 200, such as sound effect parameters m1 of each audio device corresponding to the class-a display device 200 and sound effect parameters m2 of each audio device corresponding to the class-b display device 200. If the display device 200 currently used by the user corresponds to class a, then after the current display device 200 logs in to user account a, the sound effect parameters m1 may be used directly to perform sound effect processing on the audio stream.
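The account binding described above can be sketched as a two-level mapping: user account to device category to sound effect parameters. The names (account_a, class_a, m1, m2) follow the example above; the nested-dictionary structure is an assumption.

```python
# Sketch of the fifth embodiment's account-bound storage: the cloud keeps,
# per user account, the sound effect parameters of each bound device
# category. The structure is an assumption for illustration only.

ACCOUNT_STORE = {
    "account_a": {
        "class_a": "m1",  # sound effect parameters for class-a devices
        "class_b": "m2",  # sound effect parameters for class-b devices
    }
}

def params_for_login(account: str, device_category: str) -> str:
    """A device logging in reuses the parameters of its own category."""
    return ACCOUNT_STORE[account][device_category]
```

A class-a display device logging in to account_a would thus retrieve m1 and process the audio stream with it, as in the example above.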
The sound effect parameters are adjusted by referring to the flowchart shown in fig. 18, and the specific steps are as follows:
S1801, receiving a sound effect parameter adjustment instruction sent by a user, wherein the sound effect parameter adjustment instruction indicates the adjusted sound effect parameters in the target sound effect file.
The user sends a sound effect parameter adjustment instruction to the display device 200 by manipulating the control apparatus 100, indicating the target sound effect file to be adjusted and the adjusted sound effect parameters.
For example, based on the cloud server storage mode of the fourth embodiment, the adjusted sound effect parameter is the user-defined sound effect parameter.
For example, based on the cloud server storage manner of the fifth embodiment, the adjusted sound effect parameter may be a user-defined sound effect parameter, or may be a corresponding sound effect parameter in the user account indicated by the user.
S1802, responding to the sound effect parameter adjustment instruction, and sending a sound effect parameter adjustment request to a cloud server, wherein the sound effect parameter adjustment request comprises the adjusted sound effect parameter.
The display device 200 responds to the sound effect parameter adjustment instruction, detects the current network connection state, and sends a sound effect parameter adjustment request to the cloud server when the network is in the connection state, wherein the sound effect parameter adjustment request carries the adjusted sound effect parameter.
S1803, receiving the adjusted cloud sound effect parameters returned by the cloud server, wherein the cloud sound effect parameters are adjusted based on the adjusted sound effect parameters.
After receiving the sound effect parameter adjustment request, the cloud server identifies the category of the display device 200 that sent the request. If the adjusted sound effect parameters are user-defined parameters, the cloud sound effect parameters stored in the cloud server are found according to the category of the display device 200 and adjusted according to the user-defined parameters, yielding the adjusted cloud sound effect parameters. If the adjusted sound effect parameters are the sound effect parameters in the user account indicated by the user, the cloud sound effect parameters stored under that user account are found according to the user account, and these serve as the adjusted cloud sound effect parameters. The cloud server returns the adjusted cloud sound effect parameters to the display device 200 for the display device 200 to perform configuration updating.
S1804, replacing the sound effect parameters in the target sound effect file by using the adjusted cloud sound effect parameters, and performing sound effect processing on the audio stream by using the adjusted cloud sound effect parameters.
The display device 200 receives the adjusted cloud sound effect parameters returned by the cloud server, and replaces the sound effect parameters in the target sound effect file with the adjusted cloud sound effect parameters so as to complete personalized configuration of the target sound effect file. For example, if the adjusted cloud sound effect parameter is a sound effect parameter adjusted based on a user-defined parameter, the user-defined parameter is adopted after personalized configuration of the target sound effect file; and if the adjusted cloud sound effect parameters are sound effect parameters in the user account indicated by the user, the data sharing with the user account is realized after the personalized configuration of the target sound effect file.
Therefore, based on the cloud server's adjustment of the sound effect parameters in the target sound effect file, independent adjustment of those parameters can be realized. The user may adjust the sound effect parameters of other audio devices in the same way, so as to independently adjust the sound effect parameters in each sound effect file.
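The round trip of fig. 18 (S1801 to S1804) can be sketched as follows. The `CloudServer` class merely simulates the server side; the request format and merge-by-update behavior are assumptions, not the patent's protocol.

```python
# Sketch of fig. 18 (S1801-S1804): user-defined parameters are sent to the
# cloud, which merges them into its stored copy and returns the adjusted
# cloud parameters; the device then replaces the parameters in the target
# sound effect file. All names and the merge behavior are assumptions.

class CloudServer:
    def __init__(self, stored_parameters):
        self.stored = stored_parameters  # cloud copy, keyed by device category

    def adjust(self, category, user_defined):
        # S1803 (server side): apply the user-defined values and return
        # the adjusted cloud sound effect parameters.
        self.stored[category].update(user_defined)
        return dict(self.stored[category])

def adjust_target_file(target_file, server, category, user_defined):
    cloud_parameters = server.adjust(category, user_defined)  # S1802-S1803
    target_file["parameters"] = cloud_parameters              # S1804
    return target_file
```

Because the cloud copy is updated before being returned, the device and the cloud end up with the same personalized parameters, which is what allows later update queries to compare them.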
In some embodiments, as developers continue to develop the various audio devices, the corresponding sound effect parameters are continuously updated to improve the playing quality. The sound effect files on the display device 200 are updated synchronously with reference to the flowchart shown in fig. 19, and the specific steps are as follows:
S1901, the display device sends a first update query request to a cloud server at a designated node, wherein the first update query request comprises device parameters of the display device.
In the present embodiment, the designated node may be a designated period, a designated date, a designated time, or the like set after the display apparatus 200 is turned on. The display device 200 detects a current network connection state at a designated node, and sends a first update query request to the cloud server when the network is in the connection state, so as to request to query whether the audio parameters of various audio devices on the current display device are updated. Based on the storage mode of the cloud end server in the fourth embodiment, that is, according to the category of the display device and the corresponding storage mode of the sound effect parameters of various audio devices, the device parameters of the display device 200 are carried in the first update query request sent by the display device 200 to the cloud end server, so that the cloud end server can query the relevant sound effect parameters.
S1902, the cloud server acquires the corresponding latest sound effect parameters and the current sound effect parameters of the display equipment according to the equipment parameters.
The cloud server obtains the latest sound effect parameter corresponding to the current display device 200 and the current sound effect parameter of the display device 200 based on the correspondence between the category (device parameter) of the display device and the sound effect parameters of various audio devices. The latest sound effect parameters are provided by a developer, and the current sound effect parameters of the display device 200 can be actively uploaded to the cloud server for storage after the sound effect files are configured by the display device 200 each time, so that the cloud server can directly acquire the sound effect parameters stored currently. The current sound parameters of the display device 200 may also be uploaded to the cloud server in real time by the display device 200, for example, after the cloud server receives the first update query request, the cloud server sends an acquisition request of the current sound parameters to the display device 200, and the display device 200 sends the current sound parameters to the cloud server based on the acquisition request.
S1903, the cloud server compares the latest sound effect parameter with the current sound effect parameter of the display device and generates a query result, wherein the query result comprises no update and update.
The cloud server determines whether update occurs by comparing the latest sound effect parameter with the current sound effect parameter of the display device 200, and generates a query result based on the determination result. If the latest sound effect parameter is different from the current sound effect parameter of the display device, the query result is updated, and the query result also comprises a storage address of the latest sound effect parameter; if the latest sound effect parameter is the same as the current sound effect parameter of the display device, the query result is no update.
And S1904, the cloud server returns a query result to the display device.
And S1905, when the query result is updated, the display equipment acquires the latest sound effect parameters according to the storage address.
When the query result is no update, the display device 200 does not need to update and configure the sound effect file. When the query result is updated, the display device 200 obtains a storage address from the query result, and obtains the latest sound effect parameter according to the storage address, where the latest sound effect parameter is the updated sound effect parameter.
S1906, the display device uses the latest sound effect parameter to replace the sound effect parameter in the corresponding sound effect file.
The display device 200 replaces the sound effect parameters in the corresponding sound effect file with the acquired latest sound effect parameters to complete the updating configuration of the sound effect file. And performing sound effect processing on the audio stream by using sound effect parameters in the configured sound effect file, namely the latest sound effect parameters.
Therefore, when the sound effect parameters of the audio peripheral are updated, the updated sound effect parameters in the cloud server can be synchronously updated to the display equipment, and timeliness of the sound effect parameters used in sound effect processing is ensured.
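The update query of fig. 19 (S1901 to S1906) can be sketched as a compare-then-fetch exchange. The in-memory storage dictionary stands in for real cloud storage addressed by the storage address of S1903; both the result shape and the address scheme are assumptions.

```python
# Sketch of fig. 19 (S1901-S1906): the server compares the latest and
# current sound effect parameters and reports whether an update exists;
# the device fetches by storage address and replaces its local copy.
# Result shape and addressing are illustrative assumptions.

def build_query_result(latest, current, storage_address):
    # S1903 (server side): compare and generate the query result.
    if latest != current:
        return {"updated": True, "address": storage_address}
    return {"updated": False}

def sync_sound_effect_file(local_files, name, query_result, cloud_storage):
    # S1905-S1906 (device side): fetch by address and replace locally.
    if query_result["updated"]:
        local_files[name] = cloud_storage[query_result["address"]]
    return local_files
```

Returning an address rather than the parameters themselves matches the flow above: only devices that actually need the update download the new parameters.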
In some embodiments, the sound effect parameters of the various audio devices on the various display devices are updated according to the user account, so that the processed sound effect better matches the user's needs. The sound effect files on the display device 200 are updated synchronously with reference to the flowchart shown in fig. 20, and the specific steps are as follows:
S2001, after logging in to a user account, sending a second update query request to a cloud server, wherein the second update query request comprises the user account and device parameters of the display device.
After the display device 200 logs into the user account, the display device 200 may share the sound parameters in the user account. The display device 200 detects the current network connection state, and when the network is in the connection state, sends a second update query request to the cloud server to request whether the audio parameters of various audio devices on the current display device are updated. Based on the storage mode of the cloud end server in the fifth embodiment, that is, according to the user account number and the corresponding storage mode of the category of the display device and the sound effect parameters of various audio devices under the user account number, the display device 200 carries the currently logged-in user account number and the device parameters of the display device 200 in the second update query request sent to the cloud end server, so that the cloud end server can query the relevant sound effect parameters.
The cloud server obtains the latest sound effect parameters corresponding to the display device 200 and the current sound effect parameters of the display device according to the user account number and the device parameters. The method for obtaining the current sound effect parameter of the display device 200 may refer to S1902, which is not described herein.
The cloud server compares the latest sound effect parameters with the current sound effect parameters of the display device and generates a query result, wherein the query result is either no update or update. If the latest sound effect parameters differ from the current sound effect parameters of the display device, the query result is update; for example, an update field is added to the query result and its value is set to 1 to indicate that an update exists, and the query result also includes the storage address of the latest sound effect parameters under the user account. If the latest sound effect parameters are the same as the current sound effect parameters of the display device, the query result is no update; for example, the value of the update field is set to 0 to indicate no update.
S2002, receiving a query result returned by the cloud server, wherein the query result comprises no update and update, and when the query result is update, the query result further comprises a storage address of the latest sound effect parameter.
And S2003, when the query result is updated, the display equipment acquires the latest sound effect parameters according to the storage address.
When the query result is no update, the display device 200 does not need to update and configure the sound effect file. When the query result is updated, the display device 200 obtains a storage address from the query result, and obtains the latest sound effect parameter according to the storage address, where the latest sound effect parameter is the updated sound effect parameter.
And S2004, replacing the sound effect parameters in the corresponding sound effect file by using the latest sound effect parameters.
Based on the same user account, a plurality of display devices 200 under that user account can update and configure the sound effect files. Moreover, even if the display device 200 currently logged in to the user account is an unfamiliar device, that is, a display device 200 not yet bound to the user account, such as a display device 200 newly purchased by the user, the update configuration of the sound effect files can also be quickly implemented based on the user account.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the above discussion in some examples is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the disclosure and to enable others skilled in the art to best utilize the embodiments.

Claims (10)

1. A display device, characterized by comprising:
a display configured to display a user interface;
the storage is configured to store sound effect files corresponding to the audio devices, wherein the audio devices comprise built-in audio devices and external audio devices of the display device, each sound effect file corresponds to one type of audio device, and the sound effect files comprise sound effect parameters matched with the corresponding type of audio device;
a controller configured to:
acquiring an audio stream;
identifying a target audio device currently in use;
acquiring a target sound effect file corresponding to the target audio equipment;
and performing sound effect processing on the audio stream by using the sound effect parameters in the target sound effect file to obtain a processed audio stream.
2. The display device of claim 1, wherein each audio device is classified by a source of a corresponding output audio stream or each audio device is classified by a device type, and wherein the controller obtains a target sound effect file corresponding to the target audio device, configured to:
identifying a target classification corresponding to the target audio device;
and acquiring the target sound effect file corresponding to the target classification.
3. The display device of claim 1, wherein each of the sound effect files corresponds to a class of scenes, the scenes including audio content and/or usage environment, the controller obtaining a target sound effect file corresponding to the target audio device, configured to:
acquiring all sound effect files corresponding to the target audio equipment;
identifying a target scene corresponding to the audio stream;
and acquiring the target sound effect file corresponding to the target scene from the all sound effect files.
4. The display device of claim 1, wherein the controller is further configured to:
receiving a device switching instruction input by a user, wherein the device switching instruction indicates an audio device to be switched to;
acquiring the sound effect file of the audio device to be switched to;
and performing sound effect processing on the audio stream by using the sound effect parameters in the sound effect file of the audio device to be switched to, to obtain the processed audio stream.
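The device-switching behavior of claim 4 amounts to swapping which sound effect file is active when the user's switch instruction arrives. A hypothetical sketch (the `ActiveEffect` class and parameter values are illustrative):

```python
class ActiveEffect:
    """Tracks the active sound effect file; a switch instruction swaps it."""
    def __init__(self, files, current_device):
        self.files = files          # per-device sound effect parameters
        self.current = current_device

    def switch(self, new_device):
        # the switching instruction names the audio device to switch to
        self.current = new_device
        return self.files[new_device]  # its file is used for later processing

active = ActiveEffect({"speaker": {"gain": 0.5}, "headset": {"gain": 2.0}}, "speaker")
params = active.switch("headset")
```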
5. The display device of claim 1, further comprising a communicator configured to establish a communication connection with a cloud server, the controller further configured to:
receiving a sound effect parameter adjustment instruction input by a user, wherein the sound effect parameter adjustment instruction indicates an adjusted sound effect parameter in the target sound effect file;
in response to the sound effect parameter adjustment instruction, sending a sound effect parameter adjustment request to the cloud server, wherein the sound effect parameter adjustment request comprises the adjusted sound effect parameter, the cloud server stores cloud sound effect parameters corresponding to the sound effect files, and the cloud sound effect parameters are bound to the display device or to a user account;
receiving the adjusted cloud sound effect parameters returned by the cloud server, wherein the cloud sound effect parameters are adjusted based on the adjusted sound effect parameters;
and replacing the sound effect parameters in the target sound effect file by using the adjusted cloud sound effect parameters, and performing sound effect processing on the audio stream by using the adjusted cloud sound effect parameters.
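The round trip in claim 5 (local adjustment, cloud merge, replacement of local parameters) can be sketched with the cloud server simulated by a small in-memory class. `FakeCloud` and the parameter names are stand-ins, not part of the patent.

```python
class FakeCloud:
    """Stand-in for the cloud server; keeps cloud parameters bound to an account."""
    def __init__(self):
        self.by_account = {}

    def adjust(self, account, adjusted_params):
        # merge the user's adjustment into the stored cloud parameters
        merged = {**self.by_account.get(account, {"gain": 1.0}), **adjusted_params}
        self.by_account[account] = merged   # cloud copy stays bound to the account
        return merged                       # adjusted cloud parameters sent back

cloud = FakeCloud()
local_file = {"gain": 1.0, "bass": 0}
returned = cloud.adjust("user-1", {"bass": 3})  # user raises bass on the device
local_file.update(returned)                     # replace local file's parameters
```

Because the merged copy lives on the server, the same adjusted parameters can later be restored on another display device bound to the same account.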
6. The display device of claim 1, further comprising a communicator configured to establish a communication connection with a cloud server, the controller further configured to:
sending a first update query request to the cloud server at a designated node, wherein the first update query request comprises device parameters of the display device, and the cloud server stores the latest sound effect parameters corresponding to each type of audio device in the display device;
receiving a query result returned by the cloud server, wherein the query result indicates either no update or an update, and when the query result indicates an update, the query result further comprises a storage address of the latest sound effect parameters that differ from the current sound effect parameters in the display device;
when the query result indicates an update, acquiring the latest sound effect parameters according to the storage address;
and replacing the sound effect parameters in the corresponding sound effect file by using the latest sound effect parameters.
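The update-query exchange in claim 6 reduces to: the device reports its state, and the server answers either "no update" or "update plus a storage address". A minimal sketch, with the version field and address format assumed for illustration:

```python
def query_update(server, device_version):
    """First update query: returns (has_update, storage_address_or_None)."""
    if device_version == server["latest_version"]:
        return False, None               # query result: no update
    return True, server["address"]       # query result: update, with the address
                                         # where the newer parameters are stored

server = {"latest_version": 3, "address": "/params/v3.bin"}
```

Returning an address rather than the parameters themselves lets the device fetch the (possibly large) parameter file only when it is actually newer.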
7. The display device of claim 1, further comprising a communicator configured to establish a communication connection with a cloud server, the display device being bound to a user account, the controller further configured to:
after logging in to the user account, sending a second update query request to the cloud server, wherein the second update query request comprises the user account and device parameters, and the cloud server stores the latest sound effect parameters corresponding to each type of audio device of the display device under the user account;
receiving a query result returned by the cloud server, wherein the query result indicates either no update or an update, and when the query result indicates an update, the query result further comprises a storage address of the latest sound effect parameters that differ from the current sound effect parameters in the display device;
when the query result indicates an update, acquiring the latest sound effect parameters according to the storage address;
and replacing the sound effect parameters in the corresponding sound effect file by using the latest sound effect parameters.
8. A sound effect setting method for an audio device, applied to a display device, wherein the display device stores sound effect files corresponding to audio devices, the audio devices comprise built-in audio devices and external audio devices of the display device, each sound effect file corresponds to one type of audio device, and each sound effect file comprises sound effect parameters matched with the corresponding type of audio device, the method comprising:
acquiring an audio stream;
identifying a target audio device currently in use;
acquiring a target sound effect file corresponding to the target audio device;
and performing sound effect processing on the audio stream by using the sound effect parameters in the target sound effect file to obtain a processed audio stream.
9. The method of claim 8, wherein the method further comprises:
receiving a sound effect parameter adjustment instruction input by a user, wherein the sound effect parameter adjustment instruction indicates an adjusted sound effect parameter in the target sound effect file;
in response to the sound effect parameter adjustment instruction, sending a sound effect parameter adjustment request to a cloud server, wherein the sound effect parameter adjustment request comprises the adjusted sound effect parameter, the cloud server stores cloud sound effect parameters corresponding to the sound effect files, and the cloud sound effect parameters are bound to the display device or to a user account;
receiving the adjusted cloud sound effect parameters returned by the cloud server, wherein the cloud sound effect parameters are adjusted based on the adjusted sound effect parameters;
and replacing the sound effect parameters in the target sound effect file by using the adjusted cloud sound effect parameters, and performing sound effect processing on the audio stream by using the adjusted cloud sound effect parameters.
10. The method of claim 8, wherein the method further comprises:
sending a first update query request to a cloud server at a designated node, wherein the first update query request comprises device parameters of the display device, and the cloud server stores the latest sound effect parameters corresponding to each type of audio device in the display device;
receiving a query result returned by the cloud server, wherein the query result indicates either no update or an update, and when the query result indicates an update, the query result further comprises a storage address of the latest sound effect parameters that differ from the current sound effect parameters in the display device;
when the query result indicates an update, acquiring the latest sound effect parameters according to the storage address;
and replacing the sound effect parameters in the corresponding sound effect file by using the latest sound effect parameters.
CN202210369229.7A 2022-04-08 2022-04-08 Display device and sound effect setting method of audio device Pending CN116939262A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210369229.7A CN116939262A (en) 2022-04-08 2022-04-08 Display device and sound effect setting method of audio device
PCT/CN2023/084607 WO2023193643A1 (en) 2022-04-08 2023-03-29 Display device, and processing method for display device


Publications (1)

Publication Number Publication Date
CN116939262A true CN116939262A (en) 2023-10-24

Family

ID=88374429


Country Status (1)

Country Link
CN (1) CN116939262A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination