CN113138745A - Display device and audio data writing method - Google Patents

Display device and audio data writing method

Info

Publication number
CN113138745A
CN113138745A
Authority
CN
China
Prior art keywords
audio
audio data
channel
display device
writing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110479263.5A
Other languages
Chinese (zh)
Other versions
CN113138745B (en)
Inventor
李现旗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202110479263.5A priority Critical patent/CN113138745B/en
Publication of CN113138745A publication Critical patent/CN113138745A/en
Priority to PCT/CN2022/090559 priority patent/WO2022228571A1/en
Application granted granted Critical
Publication of CN113138745B publication Critical patent/CN113138745B/en
Priority to US18/138,996 priority patent/US20230262286A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An embodiment of the present application provides a display device and an audio data writing method. The display device comprises a display and a controller configured to perform: reading configuration information, wherein the configuration information at least comprises a recording format, and the recording format is an audio format supported by an external audio output device; and writing the audio data into the audio driver layer according to the recording format. In the technical solution shown in this embodiment, when the audio data is written into the audio driver layer, the first audio hardware interface layer of the controller writes the audio data according to the recording format; in the subsequent recording process, the second audio hardware interface layer records the audio data into the audio hardware interface layer of the external audio output device according to the same recording format. Because the same recording format is used during both writing and recording, abnormal playback (sound that is too fast or too slow) is avoided.

Description

Display device and audio data writing method
Technical Field
The present application relates to the field of file display technologies, and in particular, to a display device and an audio data writing method.
Background
The display device can provide the user with a media asset playing function, such as playing audio, video, pictures, and other resources, which has attracted wide attention from users. With the development of big data and artificial intelligence, users' functional requirements for display devices are increasing day by day. Some users want the display device to provide different sound effects in scenarios where different media assets are played; for example, in an application scenario of playing movies, the display device can provide 3D surround sound.
To satisfy users' advanced requirements for sound effects, the display device supports different external audio output devices. The external audio output devices mainly include the following: Bluetooth devices (hereinafter abbreviated as BT), USB audio devices, and the like.
Current external audio output devices generally have no decoding capability, and the display device uses a customized player application to write the audio stream directly into the audio driver layer, where audio decoding is completed. Therefore, for an external audio output device, the audio stream processing scheme adopted by the display device is a tunnel mode: all audio streams are written into the audio driver layer (hereinafter also simply the driver layer) of the display device, the audio hardware interface layer of the display device (hereinafter also simply the hardware interface layer) records the decoded audio from the audio driver layer according to the audio format of the external audio output device (referred to in this embodiment as the recording format), and the audio finally reaches the external audio output device to be played as sound.
When audio is recorded from it, the audio driver layer usually outputs the audio stream in a fixed format (referred to in this embodiment as the writing format): 16-bit sampling precision, little-endian, 48000 Hz sampling rate, 2 channels. An audio stream in this format is compatible with most mainstream peripherals on the market. With advances in speaker technology, consumers' pursuit of audio detail keeps increasing. Different external audio output devices may use different audio formats, which may result in the writing format of the audio stream being inconsistent with the recording format, causing abnormal (too fast or too slow) playback of the sound.
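As a concrete, purely illustrative example of the fast/slow symptom (the specific rates below are assumptions, not values from this application): if audio written at a 44.1 kHz sampling rate is recorded and forwarded as if it were 48 kHz audio, the peripheral consumes samples roughly 9% too fast.

```python
# Hypothetical mismatch between the writing format and the recording format.
write_rate_hz = 44100    # sampling rate used when writing into the audio driver layer
record_rate_hz = 48000   # sampling rate assumed when recording from the audio driver layer
speedup = record_rate_hz / write_rate_hz
print(f"playback runs about {speedup:.2f}x too fast")  # ~1.09x, heard as abnormally fast sound
```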
Disclosure of Invention
To solve the above technical problems in the prior art, embodiments of the present application provide a display device and an audio data writing method.
A first aspect of embodiments of the present application shows a display device, including:
a display;
a controller configured to perform:
reading configuration information, wherein the configuration information at least comprises a recording format, and the recording format is an audio format supported by external audio output equipment;
and writing the audio data into the audio driver layer according to the recording format.
In the technical solution shown in this embodiment, when the audio data is written into the audio driver layer, the first audio hardware interface layer of the controller writes the audio data according to the recording format; in the subsequent recording process, the second audio hardware interface layer records the audio data into the audio hardware interface layer of the external audio output device according to the same recording format. Because the same recording format is used during both writing and recording, abnormal playback (sound that is too fast or too slow) is avoided.
A second aspect of the embodiments of the present application shows an audio data writing method, including:
reading configuration information, wherein the configuration information at least comprises a recording format, and the recording format is an audio format supported by external audio output equipment;
and writing the audio data into the audio driver layer according to the recording format.
In the technical solution shown in this embodiment, the audio data can be written into the audio driver layer according to the recording format; in the subsequent recording process, the second audio hardware interface layer records the audio data into the audio hardware interface layer of the external audio output device according to the same recording format. Because the same recording format is used during both writing and recording, abnormal playback (sound that is too fast or too slow) is avoided.
Drawings
To describe the embodiments of the present application or the implementations in the related art more clearly, the drawings required for the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them.
FIG. 1 illustrates a usage scenario of a display device according to some embodiments;
FIG. 2 illustrates a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
FIG. 3 illustrates a hardware configuration block diagram of the display apparatus 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in the display device 200 according to some embodiments;
FIG. 5 shows a schematic diagram of the connection of an external audio output device to a display device;
FIG. 6 is a flowchart showing the interaction between various software in the audio data playing process of the display device;
FIG. 7 is a flowchart illustrating operation of the display device according to one possible embodiment;
FIG. 8 is a flowchart illustrating an implementation of writing audio data into the audio driver layer according to the recording format, in a possible embodiment;
FIG. 9 is a flow chart of a method for switching channels according to one possible embodiment;
FIG. 10 is a flow chart illustrating the playing of local audio data according to one possible embodiment;
FIG. 11 is a flow chart illustrating the playing of live audio data in a possible embodiment;
FIG. 12 is a flow chart illustrating playing of network audio data according to one possible embodiment;
FIG. 13 is a flow chart of an audio data writing method according to one possible embodiment;
FIG. 14 is a flowchart illustrating a method for writing audio data according to an embodiment of the present application;
FIG. 15 is a flowchart illustrating a method for writing audio data according to an embodiment of the present application;
FIG. 16 is a flowchart illustrating a method for writing audio data according to an embodiment of the present application.
Detailed Description
To make the purpose and embodiments of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only a part, rather than all, of the embodiments of the present application.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Fig. 1 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 1, the display apparatus 200 is also in data communication with a server 400, and a user can operate the display apparatus 200 through the smart device 300 or the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller, and the communication between the remote controller and the display device includes at least one of an infrared protocol communication or a bluetooth protocol communication, and other short-distance communication methods, and controls the display device 200 in a wireless or wired manner. The user may control the display apparatus 200 by inputting a user instruction through at least one of a key on a remote controller, a voice input, a control panel input, and the like.
In some embodiments, the smart device 300 may include any of a mobile terminal, a tablet, a computer, a laptop, an AR/VR device, and the like.
In some embodiments, the smart device 300 may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on the smart device.
In some embodiments, the smart device 300 and the display device may also be used for communication of data.
In some embodiments, the display device 200 may also be controlled in a manner other than through the control apparatus 100 and the smart device 300. For example, a voice instruction from the user may be received directly by a module configured inside the display device 200, or by a voice control apparatus provided outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers.
In some embodiments, software steps executed by one step execution agent may be migrated on demand to another step execution agent in data communication therewith for execution. Illustratively, software steps performed by the server may be migrated to be performed on a display device in data communication therewith, and vice versa, as desired.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user and convert the operation instruction into an instruction recognizable and responsive by the display device 200, serving as an interaction intermediary between the user and the display device 200.
In some embodiments, the communication interface 130 is used for external communication, and includes at least one of a WIFI chip, a bluetooth module, NFC, or an alternative module.
In some embodiments, the user input/output interface 140 includes at least one of a microphone, a touchpad, a sensor, a key, or an alternative module.
Fig. 3 shows a hardware configuration block diagram of the display apparatus 200 according to an exemplary embodiment.
In some embodiments, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
In some embodiments the controller comprises a central processor, a video processor, an audio processor, a graphics processor, a RAM, a ROM, a first interface to an nth interface for input/output.
In some embodiments, the display 260 includes a display screen component for displaying pictures, and a driving component for driving image display, a component for receiving image signals from the controller output, displaying video content, image content, and menu manipulation interface, and a user manipulation UI interface, etc.
In some embodiments, the display 260 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, the tuner demodulator 210 receives broadcast television signals via wired or wireless reception, and demodulates audio/video signals as well as EPG data signals from a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the controller 250 and the tuner demodulator 210 may be located in different separate devices; that is, the tuner demodulator 210 may also be located in an external device of the main device where the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or other operable operation region. The operations related to the selected object are: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to an icon.
In some embodiments the controller comprises at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), first to nth interfaces for input/output, a communication bus (Bus), and the like.
The CPU processor is used for executing operating system and application instructions stored in the memory, and for executing various applications, data, and content according to the various interactive instructions received from external input, so as to finally display and play various audio-video content. The CPU processor may include a plurality of processors, for example a main processor and one or more sub-processors.
In some embodiments, the graphics processor is used for generating various graphics objects, such as at least one of an icon, an operation menu, and graphics displayed for user input instructions. The graphics processor comprises an arithmetic unit, which performs operations by receiving the various interactive instructions input by the user and displays various objects according to display attributes; it also comprises a renderer for rendering the objects obtained by the arithmetic unit, and the rendered objects are displayed on the display.
In some embodiments, the video processor is configured to receive an external video signal, and perform at least one of video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to the standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image composition module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module is used for demultiplexing the input audio and video data stream. And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like. And the image synthesis module is used for carrying out superposition mixing processing on the GUI signal input by the user or generated by the user and the video image after the zooming processing by the graphic generator so as to generate an image signal for display. And the frame rate conversion module is used for converting the frame rate of the input video. And the display formatting module is used for converting the received video output signal after the frame rate conversion, and changing the signal to be in accordance with the signal of the display format, such as an output RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in the speaker.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on display 260, and the user input interface receives the user input commands through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, an operation area, etc. displayed in the display screen of the electronic device, where the operation area may include at least one of visual interface elements such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, user interface 280 is an interface that may be used to receive control inputs (e.g., physical buttons on the body of the display device, or the like).
As shown in fig. 4, the system of the display device may include a Kernel (Kernel), a command parser (shell), a file system, and an application program. The kernel, shell, and file system together make up the basic operating system structure that allows users to manage files, run programs, and use the system. After power-on, the kernel is started, kernel space is activated, hardware is abstracted, hardware parameters are initialized, and virtual memory, a scheduler, signals and interprocess communication (IPC) are operated and maintained. And after the kernel is started, loading the Shell and the user application program. The application program is compiled into machine code after being started, and a process is formed.
As shown in fig. 4, the system of the display device is divided into three layers, i.e., an application layer, a middleware layer and a hardware layer from top to bottom.
The application layer mainly includes common applications on the television and an Application Framework (Application Framework). The common applications are mainly applications developed based on the browser, such as HTML5 APPs, and Native APPs (Native APPs).
An Application Framework (Application Framework) is a complete program model that has all the basic functions required by standard application software, such as file access and data exchange, and the interfaces for using these functions (toolbars, status bars, menus, dialog boxes).
Native APPs (Native APPs) may support online or offline, message push, or local resource access.
The middleware layer comprises various television protocols, multimedia protocols, system components and other middleware. The middleware can use basic service (function) provided by system software to connect each part of an application system or different applications on a network, and can achieve the purposes of resource sharing and function sharing.
The hardware layer mainly comprises the HAL interface, hardware, and drivers. The HAL interface is a unified interface for adapting all television chips, and the specific logic is implemented by each chip. The drivers mainly include: an audio driver, a display driver, a Bluetooth driver, a camera driver, a WiFi driver, a USB driver, an HDMI driver, sensor drivers (such as a fingerprint sensor, a temperature sensor, a pressure sensor, etc.), and a power driver.
Fig. 6 is a flowchart illustrating the interaction between the software components of the display device during audio data playback. The display device in this embodiment may include: a USB driver layer, a USB host service, a USB host proxy service, a USB audio service, an audio policy service, and an audio hardware interface layer (the first audio hardware interface layer).
When an external audio output device is connected, the USB driver side first recognizes the connection state of the device upon detecting that a USB peripheral has been inserted. The device connection state is then reported to the USB host service in the form of an event, and the callback interface of the USB host proxy is called. After the device is determined to be an audio device, the related information is passed to the USB audio proxy service, which further subdivides the USB audio device according to the information carried by the device, distinguishing between a USB headset and a USB output device. The USB audio proxy service then enables the USB audio service, and the USB audio service notifies the audio service of the connection state of the USB audio device.
The audio service transmits the device connection information to the audio policy service, which records and saves the connection state of the device. The connection information is then transmitted to the audio hardware interface layer. After the audio hardware interface layer receives the connection information of the USB peripheral, two threads are created: a recording thread and a writing thread. The recording thread is used for recording audio data from the audio driver layer, and the writing thread is used for writing the recorded audio data into the USB audio interface layer. In addition to writing audio data, the writing thread also identifies the sound card information of the device and records the relevant configuration information (the audio formats supported by the peripheral) for later use.
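The two-thread structure described above can be illustrated with a short Python sketch. This is only a schematic of the flow, not the native HAL implementation, and every name in it (UsbAudioSession, probe_sound_card, and the callbacks) is a hypothetical placeholder.

```python
import threading


class UsbAudioSession:
    """Illustrative stand-in for the first audio hardware interface layer."""

    def __init__(self, probe_sound_card, record_from_driver, write_to_usb):
        self.probe_sound_card = probe_sound_card      # reads the peripheral's supported audio formats
        self.record_from_driver = record_from_driver  # records decoded audio from the audio driver layer
        self.write_to_usb = write_to_usb              # writes recorded audio to the USB audio interface layer
        self.connected = threading.Event()
        self.recording_format = None

    def on_device_connected(self):
        # Identify the sound card and save its configuration (the recording format) for later use.
        self.recording_format = self.probe_sound_card()
        self.connected.set()
        threading.Thread(target=self._record_loop, daemon=True).start()  # recording thread
        threading.Thread(target=self._write_loop, daemon=True).start()   # writing thread

    def on_device_disconnected(self):
        self.connected.clear()

    def _record_loop(self):
        while self.connected.is_set():
            self.record_from_driver(self.recording_format)

    def _write_loop(self):
        while self.connected.is_set():
            self.write_to_usb(self.recording_format)
```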
Some users want the display device to provide different sound effects in scenes where different media assets are played. To satisfy users' advanced requirements for sound effects, the display device supports different external audio output devices. Current external audio output devices generally have no decoding capability, and the display device uses a customized player application to write the audio stream directly into the audio driver layer, where audio decoding is completed. Therefore, for an external audio output device, the audio stream processing scheme adopted by the display device is a tunnel mode: all audio streams are written into the audio driver layer (hereinafter also simply the driver layer) of the display device, the audio hardware interface layer of the display device (hereinafter the first audio hardware interface layer) records the decoded audio from the audio driver layer according to the audio format of the external audio output device (referred to in this embodiment as the recording format), and the audio finally reaches the external audio output device to be played as sound. When audio is recorded from it, the audio driver layer usually outputs the audio stream in a fixed format (referred to in this embodiment as the writing format): 16-bit sampling precision, little-endian, 48000 Hz sampling rate, 2 channels. An audio stream in this format is compatible with most mainstream peripherals on the market. With advances in speaker technology, consumers' pursuit of audio detail keeps increasing. Different external audio output devices may use different recording formats, which may result in the writing format of the audio stream being inconsistent with the recording format, causing abnormal (too fast or too slow) playback of the sound.
In order to solve the above technical problem, an embodiment of the present application illustrates a display device including at least a controller and a display. The structure of the controller may refer to the above embodiments. The operation of the display device will be described with reference to the accompanying drawings.
Fig. 7 is a flowchart illustrating the operation of the display device according to a possible embodiment, and it can be seen that:
the user inserts or extracts the external audio output device;
in this embodiment, the external audio output device includes a device that can be connected to the display device and can play audio data output by the display device, and may include but is not limited to: bluetooth devices (hereinafter abbreviated as BT), USB audio devices, and the like.
The external audio output device may establish a connection with the display device by being inserted into the display device, and may disconnect from the display device by being pulled out of the display device.
The controller is configured to perform steps S71-S72:
responding to the insertion or the extraction of the external audio output equipment, executing S71 to read configuration information, wherein the configuration information at least comprises a recording format, and the recording format is an audio format supported by the external audio output equipment;
the recording format in the technical solution shown in this embodiment may include, but is not limited to, information such as an audio type supported by the external audio output device, a sampling frequency supported by the external audio output device, a sampling precision supported by the external audio output device, and tail end/little tail end information supported by the external audio output device.
S72, writing the audio data into the audio driver layer according to the recording format.
There are various ways to write audio data into the audio driver layer according to the recording format. FIG. 8 is a flowchart illustrating an implementation of writing audio data into the audio driver layer according to the recording format, in a possible embodiment, wherein the controller is further configured to perform steps S81-S83.
In this embodiment, the recording format includes a target sampling precision and a target audio format, where the target sampling precision is the sampling precision supported by the external audio output device, and the target audio format is an audio format supported by the external audio output device;
S81, sampling the audio data according to the target sampling precision;
S82, converting the sampled audio data into audio data in the target audio format;
S83, writing the converted audio data into the audio driver layer.
In the technical solution shown in this embodiment, the audio hardware interface layer of the controller may sample the audio data according to the target sampling precision and then write the sampled audio data into the audio driver layer. When the second audio hardware interface layer subsequently records the audio data from the audio driver layer, sampling is also performed according to the target sampling precision, so that the same sampling precision is used during both writing and recording of the audio data, and the problem of abnormal (too fast or too slow) playback is avoided.
Further, the technical solution shown in this embodiment converts the sampled audio data into audio data in a target audio format, thereby ensuring that the audio data written in the audio driver layer can be recognized by an external audio output device.
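A minimal sketch of steps S81-S83 on raw PCM data is given below, assuming 16-bit little-endian input and a hypothetical 24-bit little-endian target format; the helper names and the driver_write callback are placeholders, and sample-rate conversion is deliberately omitted since the embodiment leaves the concrete conversion open.

```python
import struct


def requantize_16_to_24_le(pcm16_le: bytes) -> bytes:
    """S81/S82 sketch: raise 16-bit little-endian PCM to 24-bit little-endian PCM
    by shifting each sample left by 8 bits (padding the low byte with zeros)."""
    out = bytearray()
    for (sample,) in struct.iter_unpack("<h", pcm16_le):
        value = sample << 8                              # scale to the 24-bit range
        out += (value & 0xFFFFFF).to_bytes(3, "little")  # two's-complement, little-endian
    return bytes(out)


def write_with_recording_format(pcm16_le: bytes, driver_write) -> None:
    """S83 sketch: convert to the target format, then hand the data to the audio driver layer.
    driver_write stands in for the real write entry point of the audio driver layer."""
    driver_write(requantize_16_to_24_le(pcm16_le))


# Usage: two 16-bit samples become two 24-bit samples (6 bytes in total).
converted = requantize_16_to_24_le(struct.pack("<hh", 1000, -1000))
assert len(converted) == 6
```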
This embodiment is merely an exemplary implementation of writing the audio data into the audio driver layer according to the recording format. In practical applications, the implementation is not limited to the above; any implementation that ensures the audio data uses the same recording format during writing and recording can be applied to the technical solution shown in this embodiment.
In the technical solution shown in this embodiment, when the audio data is written into the audio driver layer, the first audio hardware interface layer writes the audio data according to the recording format; in the subsequent recording process, the second audio hardware interface layer records the audio data into the audio hardware interface layer of the external audio output device according to the same recording format. Because the same recording format is used during both writing and recording, abnormal playback (sound that is too fast or too slow) is avoided.
Typically, different external audio output devices support different numbers of channels. When the channels output by the display device do not match the channels supported by the external audio output device, the audio output by the external audio output device may have problems of noise or no sound.
To avoid the above problems, on the basis of the display device shown in the above embodiments, this embodiment further provides a channel switching method applicable to the controller of the display device shown in the above embodiments. The channel switching method is described below with reference to the drawings. FIG. 9 is a flowchart illustrating a channel switching method according to a possible embodiment. On the basis of the above display device, the controller is further configured to perform steps S91 to S95:
in response to the insertion or extraction of the external audio output device, the controller is configured to perform step S91 of reading the first channel number and the second channel number;
in this application, the configuration information further includes a first channel number, where the first channel number is a channel number supported by the external audio output device.
In this application, the second channel number is the number of channels that the display device has turned on when the external audio output device is inserted or pulled out.
In this application, a Sound Channel (Sound Channel) refers to mutually independent audio signals acquired or played back at different spatial positions when a Sound is recorded or played, so the number of Sound channels is the number of Sound sources when the Sound is recorded or the number of corresponding speakers when the Sound is played back.
S92, judging whether the first channel number and the second channel number are equal;
for example, as a possible embodiment, the difference between the first channel number and the second channel number can be obtained, and if the difference is equal to 0, the first channel number is equal to the second channel number.
It should be noted that this embodiment is merely an exemplary implementation for determining whether the first channel number and the second channel number are equal, and the implementation is not limited.
S93, if the first channel number is equal to the second channel number, writing the audio data into the audio driver layer according to the recording format.
The implementation manner of writing the audio data into the audio driver layer according to the recording format may refer to the foregoing embodiments, and is not described herein again.
S94 discarding the audio data transmitted on the third channel if the first channel number is smaller than the second channel number, where the third channel is a channel that is redundant in the second channel set compared to the first channel set, the first channel set includes channels supported by the external audio output device, and the second channel set includes channels turned on by the display device.
The above process is described below with reference to specific examples.
In a possible embodiment, the first channel set comprises a left channel and a right channel, and the first channel number is equal to 2; the second channel set comprises a left channel, a right channel, a center, a front left surround, a front right surround, and a bass channel, and the second channel number is equal to 6; the third channels are the center, front left surround, front right surround, and bass channels. In this embodiment, the first channel number is smaller than the second channel number, so the controller discards the audio data transmitted on the center, front left surround, front right surround, and bass channels.
It can be seen that in the technical solution shown in this embodiment, if the first channel number is smaller than the second channel number, the controller ensures that the first channel number is equal to the adjusted second channel number by closing the third channel.
S95 not transmitting audio data to the fourth channel if the first channel number is greater than the second channel number, the fourth channel being a channel of the first channel set that is redundant compared to the second channel set.
The above process is described below with reference to specific examples.
In a possible embodiment, the first channel set includes {left channel, right channel, center, front left surround, front right surround, bass, top left channel, top right channel}, and the first channel number is equal to 8; the second channel set comprises a left channel, a right channel, a center, a front left surround, a front right surround, and a bass channel, and the second channel number is equal to 6; the fourth channels are the top left channel and the top right channel. In this embodiment, the first channel number is greater than the second channel number, so in the process of transferring the audio data, the controller does not transmit audio data to the top left channel and the top right channel of the external audio output device, and only transmits audio data to the left channel, the right channel, the center, the front left surround, the front right surround, and the bass channel of the external audio output device.
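The two channel-matching cases (S94 and S95) can be sketched as follows for interleaved PCM frames. The function name, the assumption that the channels kept for the device come first in each frame, and the fixed sample width are illustrative choices, not details taken from this application.

```python
def adjust_channels(frames: bytes, sample_bytes: int,
                    display_channels: int, device_channels: int) -> bytes:
    """Keep only as many channels as the external audio output device supports.

    If the display device outputs more channels than the peripheral supports
    (first channel number < second channel number), the surplus "third channel"
    samples are discarded (S94); if it outputs fewer (first > second), the
    missing "fourth channel" samples are simply never transmitted, so the data
    passes through unchanged (S95).
    """
    if device_channels >= display_channels:
        return frames                      # S95: the extra device channels just receive no data
    keep = device_channels * sample_bytes
    frame = display_channels * sample_bytes
    out = bytearray()
    for offset in range(0, len(frames), frame):
        out += frames[offset:offset + keep]   # S94: drop e.g. center/surround/bass samples
    return bytes(out)


# Usage: one 6-channel 16-bit frame reduced to its first 2 channels (L/R).
one_frame = bytes(range(12))                  # 6 channels x 2 bytes
assert adjust_channels(one_frame, 2, 6, 2) == one_frame[:4]
```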
The audio data related to the present embodiment is roughly divided into three types, and the following describes the transmission flow of the audio data in the playing process with reference to specific drawings.
FIG. 10 is a flow chart illustrating the playing of local audio data according to one possible embodiment;
It can be seen that, for local audio data, the local player writes the audio data directly into the audio driver layer. The second audio hardware interface layer then records the audio data in the audio driver layer to the USB driver layer of the external audio output device according to the recording format.
FIG. 11 is a flow chart illustrating the playing of live audio data in a possible embodiment;
Live audio data may include, but is not limited to, DTV (Digital Television) audio data, ATV (Analog Television) audio data, HDMI (High Definition Multimedia Interface) audio data, and other audio types. Such audio is written directly into the audio driver layer through the TV interface layer of the display device. The second audio hardware interface layer then records the audio data in the audio driver layer to the USB driver layer of the external audio output device according to the recording format.
FIG. 12 is a flow chart illustrating playing of network audio data according to one possible embodiment;
Network audio data may include, but is not limited to, Netflix audio data, YouTube audio data, and the like. Such audio passes in sequence through the multimedia application, the player, and the buffer of the audio framework layer (also referred to as the buffer area in this embodiment), and the audio data in the buffer is then written into the audio driver layer through the first audio hardware interface layer. The second audio hardware interface layer records the audio data in the audio driver layer to the USB driver layer of the external audio output device according to the recording format.
For network audio data, the audio data is buffered by the audio framework layer, and the audio stream is written into the audio driver layer through the first audio hardware interface layer. The first audio hardware interface layer can adjust the process of writing the audio data from the buffer area into the audio driver layer, thereby ensuring that the writing format used by the first audio hardware interface layer when writing the audio data into the audio driver layer is consistent with the recording format used by the second audio hardware interface layer when recording the audio data from the audio driver layer.
To achieve the above object, this embodiment provides an audio data writing method, described below with reference to the drawings. FIG. 13 is a flowchart of an audio data writing method according to a possible embodiment. The method is applied to the display device shown in the above embodiments, and on the basis of that display device the controller is configured to perform steps S131 to S133:
S131, creating a buffer area according to the first channel number and the second channel number.
For example, in one possible embodiment, the second channel set includes {left channel, right channel, center, front left surround, front right surround, bass}, and the third channels are the center, front left surround, front right surround, and bass channels.
One implementation of creating the buffer area according to the first channel number and the second channel number is: the buffer size is the size of the audio data at this time multiplied by (the second channel number / the first channel number). For example, in a possible embodiment, the first channel number is equal to 2, the second channel number is equal to 6, and the buffer size is equal to the size of the audio data.
As a possible embodiment, if the first channel number is greater than the second channel number, the second channel number is equal to the third channel number, because none of the channels is closed.
In this case the buffer area is created in the same way: the buffer size is the size of the audio data at this time multiplied by (the second channel number / the first channel number). For example, in a possible embodiment, the first channel number is equal to 6, the second channel number is equal to 2, and the buffer size is equal to the size of the audio data multiplied by 2/6.
S132, writing the audio data into the buffer area;
The audio data may be written into the buffer area in any data writing manner customary in the art, which is not further limited here.
As a possible embodiment, the size of the data written into the buffer is equal to (the target sampling rate / 1000) × (the third channel number) × (the target sampling precision / 8) × (the predicted playing time).
S133 writes the audio data in the buffer area into the audio driver layer according to the recording format.
In this embodiment, reference may be made to the above embodiments for implementing writing of the audio data in the buffer area into the audio driver layer according to the recording format, and details of the implementation are not described herein.
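Taking the two sizing rules above at face value, they can be written out as in the following sketch; the function names, the integer rounding, and the interpretation of the predicted playing time in milliseconds are assumptions made for illustration.

```python
def buffer_size(audio_data_size: int, first_channels: int, second_channels: int) -> int:
    """S131 sketch: scale the buffer by the ratio of the second channel number
    to the first channel number, as described in the embodiment above."""
    return audio_data_size * second_channels // first_channels


def write_chunk_size(sample_rate_hz: int, channel_count: int,
                     sample_bits: int, play_time_ms: int) -> int:
    """Sketch of the per-write data size:
    (rate / 1000) x channels x (precision / 8) x predicted playing time."""
    return sample_rate_hz // 1000 * channel_count * sample_bits // 8 * play_time_ms


# Usage: 48 kHz, 2 channels, 16-bit precision, 10 ms of audio -> 1920 bytes per write.
assert write_chunk_size(48000, 2, 16, 10) == 1920
```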
In practical applications, if the external audio output device is disconnected from the display device, continuing to write audio data into the audio driver layer would waste the computing resources of the controller. To avoid this problem, this embodiment provides an audio data writing method in which the controller reads the connection state of the external audio output device before writing the audio data into the audio driver layer, and then determines, according to the connection state, whether the audio data needs to be written into the audio driver layer. The audio data writing method provided in this embodiment is described below with reference to the drawings.
Fig. 14 is a flowchart illustrating an audio data writing method provided in an embodiment of the present application, the method being applied to the display device illustrated in the above embodiment, wherein the controller is further configured to perform steps S141 to S143.
S141, in response to the completion of the creation of the buffer area, reading the connection state of the external audio output device;
The connection state can be represented by an identification bit. When the external audio output device is connected to the display device, the identification bit of the external audio output device may be set to a first identification bit; when the external audio output device is disconnected from the display device, the identification bit of the external audio output device may be set to a second identification bit. This embodiment does not limit the forms of the first identification bit and the second identification bit; any form that can distinguish the first identification bit from the second identification bit can be applied to this embodiment.
When the controller reads the first identification bit, the connection state is determined to be connected; when the controller reads the second identification bit, the connection state is determined to be disconnected.
S142, if the connection state is disconnected, the audio data is not written into the buffer area.
S143, if the connection state is connected, writing the audio data in the buffer area into the audio driver layer according to the recording format;
In the technical solution shown in this embodiment, before writing the audio data into the audio driver layer, the controller reads the connection state of the external audio output device and then determines, according to the connection state, whether the audio data needs to be written into the audio driver layer, thereby avoiding wasting the computing resources of the controller.
In practical applications, if the audio data is in an unplayed state, continuing to write the audio data into the audio driver layer would likewise waste the computing resources of the controller. To avoid this problem, this embodiment provides an audio data writing method in which the controller reads the playing state of the audio data before writing the audio data into the audio driver layer, and then determines, according to the playing state, whether the audio data needs to be written into the audio driver layer. The audio data writing method provided in this embodiment is described below with reference to the drawings.
Fig. 15 is a flowchart illustrating an audio data writing method provided in an embodiment of the present application, the method being applicable to the display device illustrated in the above embodiment, wherein the controller is further configured to perform steps S151 to S154.
S151, in response to the completion of the creation of the buffer area, reading the connection state of the external audio output device;
S152, if the connection state is disconnected, the audio data is not written into the buffer area.
S153, if the connection state is connected, reading the playing state of the audio data;
The playing state can also be represented by an identification bit. When the audio data is being played, the identification bit of the audio data is a third identification bit; when the audio data is not being played, the identification bit of the audio data is a fourth identification bit. This embodiment does not limit the forms of the third identification bit and the fourth identification bit; any form that can distinguish the third identification bit from the fourth identification bit can be applied to this embodiment. The controller may determine the playing state of the audio data by reading the identification bit of the audio data.
When the controller reads the third identification bit, the playing state is determined to be playing; when the controller reads the fourth identification bit, the playing state is determined to be not playing.
If the playing status is not playing, continuing to execute step S151 to read the connection status of the external audio output device;
If the playing state is playing, step S154 is executed to write the audio data in the buffer area into the audio driver layer according to the recording format.
In the technical solution shown in this embodiment, before writing the audio data into the audio driver layer, the controller reads the playing state of the audio data and then determines, according to the playing state, whether the audio data needs to be written into the audio driver layer, thereby avoiding wasting the computing resources of the controller.
In practical applications, if the buffer area contains no audio data, continuing to write audio data into the audio driver layer would likewise waste the computing resources of the controller. To avoid this problem, this embodiment provides an audio data writing method in which the controller traverses the buffer area before writing the audio data into the audio driver layer, and then determines, according to whether audio data is stored in the buffer area, whether the audio data needs to be written into the audio driver layer. The audio data writing method provided in this embodiment is described below with reference to the drawings.
Fig. 16 is a flowchart illustrating an audio data writing method provided in an embodiment of the present application, the method being applied to the display device illustrated in the above embodiment, wherein the controller is further configured to execute steps S161 to S165.
S161, in response to the creation of the buffer area being completed, reading the connection state of the external audio output device;
S162, if the connection state is disconnected, not writing the audio data into the buffer area.
S163, if the connection state is connected, reading the playing state of the audio data;
if the playing status is not playing, continuing to execute step S161 to read the connection status of the external audio output device;
If the playing state is playing, step S164 is executed to traverse the buffer area;
The buffer area may be traversed in any data traversal manner customary in the art, which is not further limited here.
If the buffer has no audio data, the process continues to step S161 to read the connection status of the external audio output device.
If the audio data is stored in the buffer area, step S165 is executed to write the audio data in the buffer area into an audio driver layer according to the recording format;
according to the technical scheme shown in the embodiment, the controller needs to traverse the buffer area before writing the audio data into the audio driver layer, and then determines whether the audio data needs to be written into the audio driver layer according to whether the audio data is stored in the buffer area, so that the purpose of avoiding the waste of the calculation resources of the controller is achieved.
A second aspect of the embodiments of the present application shows an audio data writing method, including:
reading configuration information, wherein the configuration information at least comprises a recording format, and the recording format is an audio format supported by external audio output equipment;
and writing the audio data into the audio driver layer according to the recording format.
In the technical solution shown in this embodiment, the audio data can be written into the audio driver layer according to the recording format; in the subsequent recording process, the second audio hardware interface layer records the audio data into the audio hardware interface layer of the external audio output device according to the same recording format. Because the same recording format is used during both writing and recording, abnormal playback (sound that is too fast or too slow) is avoided.
In a specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and when executed the program may include some or all of the steps in the embodiments of the audio data writing method provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of software products, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method in the embodiments or some parts of the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, comprising:
a display;
a controller configured to perform:
reading configuration information, wherein the configuration information at least comprises a recording format, and the recording format is an audio format supported by external audio output equipment;
and writing the audio data into the audio drive layer according to the recording format.
2. The display device of claim 1, wherein the configuration information further comprises a first channel number, the first channel number being a number of channels supported by the external audio output device, the controller being further configured to:
and if the first channel number is equal to the second channel number, writing the audio data into an audio drive layer according to the recording format, wherein the second channel number is the number of channels enabled by the display device.
3. The display device of claim 2, wherein the controller is further configured to:
and if the first channel number is less than the second channel number, discarding the audio data transmitted by the third channel, wherein the third channel is a channel present in the second channel set but not in the first channel set, the first channel set comprises the channels supported by the external audio output device, and the second channel set comprises the channels enabled by the display device.
4. The display device of claim 3, wherein the controller is further configured to:
and if the first channel number is greater than the second channel number, not transmitting audio data to the fourth channel, wherein the fourth channel is a channel present in the first channel set but not in the second channel set.
5. The display device of claim 3, wherein if the first channel number is not equal to the second channel number, the controller is further configured to:
creating a buffer area according to the first channel number and the second channel number;
writing the audio data into the buffer area;
and writing the audio data in the buffer area into an audio drive layer according to a recording format.
6. The display device of claim 5, wherein the first audio hardware interface layer is further configured to:
reading a connection state of the external audio output device in response to completion of the creation of the buffer area;
if the connection state is connected, writing the audio data in the cache area into an audio drive layer according to the recording format;
and if the connection state is disconnected, not writing the audio data into the buffer area.
7. The display device according to claim 6, wherein if the connection state is connected, the controller is further configured to:
reading the playing state of the audio data;
if the playing state is not playing, continuing to read the connection state of the external audio output device;
and if the playing state is playing, writing the audio data in the buffer area into an audio drive layer according to the recording format.
8. The display device according to claim 7, wherein if the playing state is playing, the controller is further configured to:
traversing the cache region;
if the cache area stores audio data, writing the audio data in the cache area into an audio drive layer according to the recording format;
and if the buffer area has no audio data, continuing to read the connection state of the external audio output equipment.
9. The display device of any of claims 6-8, wherein the recording format comprises a target sampling precision and a target audio format, wherein the target sampling precision is a sampling precision supported by the external audio output device, and the target audio format is an audio format supported by the external audio output device;
the controller is further configured to:
sampling the audio data in the cache region according to the target sampling precision;
and converting the sampled audio data into audio data in a target audio format.
10. An audio data writing method, comprising:
reading configuration information, wherein the configuration information at least comprises a recording format, and the recording format is an audio format supported by external audio output equipment;
and writing the audio data into the audio drive layer according to the recording format.
CN202110479263.5A 2021-04-30 2021-04-30 Display device and audio data writing method Active CN113138745B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110479263.5A CN113138745B (en) 2021-04-30 2021-04-30 Display device and audio data writing method
PCT/CN2022/090559 WO2022228571A1 (en) 2021-04-30 2022-04-29 Display device and audio data processing method
US18/138,996 US20230262286A1 (en) 2021-04-30 2023-04-25 Display device and audio data processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110479263.5A CN113138745B (en) 2021-04-30 2021-04-30 Display device and audio data writing method

Publications (2)

Publication Number Publication Date
CN113138745A true CN113138745A (en) 2021-07-20
CN113138745B CN113138745B (en) 2022-12-09

Family

ID=76816527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110479263.5A Active CN113138745B (en) 2021-04-30 2021-04-30 Display device and audio data writing method

Country Status (1)

Country Link
CN (1) CN113138745B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022228571A1 (en) * 2021-04-30 2022-11-03 海信视像科技股份有限公司 Display device and audio data processing method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060009985A1 (en) * 2004-06-16 2006-01-12 Samsung Electronics Co., Ltd. Multi-channel audio system
CN105632541A (en) * 2015-12-23 2016-06-01 惠州Tcl移动通信有限公司 Method and system for recording audio output by mobile phone, and mobile phone
CN107483993A (en) * 2017-07-14 2017-12-15 深圳Tcl新技术有限公司 Pronunciation inputting method, TV and the computer-readable recording medium of TV
CN110097897A (en) * 2019-04-02 2019-08-06 烽火通信科技股份有限公司 A kind of Android device recording multiplexing method and system
CN111385621A (en) * 2020-03-18 2020-07-07 海信视像科技股份有限公司 Display device and Bluetooth audio transmission method
US20200326907A1 (en) * 2019-04-09 2020-10-15 Hisense Visual Technology Co., Ltd. Method for outputting audio data of applications and display device

Also Published As

Publication number Publication date
CN113138745B (en) 2022-12-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant