CN111654743B - Audio playing method and display device

Info

Publication number
CN111654743B
CN111654743B
Authority
CN
China
Prior art keywords
audio data
data
audio
mute
format
Prior art date
Legal status
Active
Application number
CN202010459723.3A
Other languages
Chinese (zh)
Other versions
CN111654743A (en)
Inventor
孙永瑞
张安祺
马斌义
李森
齐消消
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Application filed by Hisense Visual Technology Co Ltd
Priority to CN202010459723.3A
Publication of CN111654743A
Application granted
Publication of CN111654743B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4122 Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H04N21/4396 Processing of audio elementary streams by muting the audio signal
    • H04N21/4398 Processing of audio elementary streams involving reformatting operations of audio signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Embodiments of this application relate to the technical field of audio playback, and in particular to an audio playing method and a display device, which use Tinyalsa in the HAL to acquire audio data directly from the SOC chip when the audio data are played through an external playback device, so as to reduce the playback delay of the audio data. The method comprises the following steps: a first AudioTrack receives audio data in a first format input by an audio data control module, copies the audio data, and outputs the original audio data and the copied audio data to a first AudioFlinger and a second AudioFlinger, respectively; the first AudioFlinger receives the audio data, mixes the audio data to obtain first audio data, and outputs the first audio data to the SOC chip; the second AudioFlinger receives the audio data, mixes the audio data to obtain second audio data, and writes the second audio data into the HAL; and when the HAL writes the second audio data, the Tinyalsa acquires the first audio data from the SOC chip and outputs the first audio data to the external playback device for playing.

Description

Audio playing method and display device
Technical Field
The present application relates to the field of audio playing technologies, and in particular, to an audio playing method and a display device.
Background
Currently, most smart televisions run the Android operating system. Under Android, when a smart television plays audio data through an external playback device, the audio data is first decoded and mixed by an SOC (System on a Chip) chip, then processed by modules such as Tinyalsa (a lightweight Advanced Linux Sound Architecture library), AudioRecord (audio recording module), the application module, AudioTrack (audio track) and AudioFlinger (audio data service), and finally transmitted to the external playback device interface in the HAL (Hardware Abstraction Layer), so that the audio data is played by the external playback device.
However, on this path from the SOC chip to the external playback device interface in the HAL, many modules participate in the transmission and processing of the audio data, which results in a high delay when the external playback device plays the audio data.
Disclosure of Invention
The application provides an audio playing method and a display device, which use Tinyalsa in the HAL to acquire audio data directly from the SOC chip when playing audio data through an external playback device, and transmit the audio data to the external playback device for playing, so as to reduce the playback delay of the audio data.
In a first aspect, the present application provides a display device comprising:
a display;
a controller coupled to the display, comprising at least: a first audio track AudioTrack, a first audio data entity AudioFlinger and a second AudioFlinger which are arranged between an audio data control module and a system-on-a-chip (SOC) chip, and a lightweight advanced Linux sound architecture Tinyalsa which is created when an external playback device is started; the Tinyalsa runs in the HAL and is located between the SOC chip and the external playback device;
the first AudioTrack is configured to receive the audio data in the first format input from the audio data control module, copy the audio data, and output the original audio data and the copied audio data to the first AudioFlinger and the second AudioFlinger, respectively;
the first AudioFlinger is configured to receive the audio data, mix the audio data to obtain first audio data, and output the first audio data to the SOC chip;
the second AudioFlinger is configured to receive the audio data, mix the audio data to obtain second audio data, and write the second audio data into the HAL;
and the Tinyalsa is configured to acquire the first audio data from the SOC chip and output the first audio data to the external playback device for playing when the HAL writes the second audio data.
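The first-aspect data path above can be sketched as a toy simulation (the class and function names are illustrative, not the actual Android or Tinyalsa APIs): the first AudioTrack duplicates each buffer to two mixers, one mix lands in the SOC chip, and the HAL write of the other mix triggers the direct SOC read that shortens the playback path.

```python
# Illustrative sketch of the first aspect; all names are hypothetical.

class SocChip:
    """Holds the first audio data written by the first AudioFlinger."""
    def __init__(self):
        self.buffer = None
    def write(self, data):
        self.buffer = data
    def read(self):          # Tinyalsa pulls directly from the chip
        return self.buffer

class Hal:
    """Writing the second audio data triggers the direct SOC fetch."""
    def __init__(self, soc, player):
        self.soc, self.player = soc, player
    def write(self, _second_audio):
        first_audio = self.soc.read()    # short path: SOC -> Tinyalsa
        self.player.append(first_audio)  # "external playback device"

def mix(buffers):
    """Toy mono mix: sample-wise sum, clipped to the signed 16-bit range."""
    mixed = [sum(samples) for samples in zip(*buffers)]
    return [max(-32768, min(32767, s)) for s in mixed]

def audio_track_write(pcm, first_flinger, second_flinger):
    """First AudioTrack: copy the buffer and feed both AudioFlingers."""
    first_flinger(list(pcm))    # original audio data
    second_flinger(list(pcm))   # copied audio data

soc, player = SocChip(), []
hal = Hal(soc, player)
audio_track_write(
    [100, -200, 300],
    first_flinger=lambda pcm: soc.write(mix([pcm])),   # first audio data
    second_flinger=lambda pcm: hal.write(mix([pcm])),  # second audio data
)
```

Because the first AudioFlinger runs before the second, the SOC chip already holds the first audio data by the time the HAL write fires, so the fetch never races ahead of the mix.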
In a second aspect, the present application provides a display device comprising:
a display;
a controller coupled to the display, comprising at least: a mute data control module; a second AudioTrack, a first audio data entity AudioFlinger and a second AudioFlinger which are arranged between the mute data control module and the SOC chip and are associated with the mute data control module; and a Tinyalsa created when the external playback device is started; the Tinyalsa runs in the HAL and is located between the SOC chip and the external playback device;
the mute data control module is configured to construct mute data in a first format and output the mute data to the second AudioTrack when the SOC chip receives audio data in a second format; the playing duration of the mute data is equal to that of the audio data in the second format;
the second AudioTrack is configured to receive the mute data input from the mute data control module, copy the mute data, and output the original mute data and the copied mute data to the first AudioFlinger and the second AudioFlinger, respectively;
the first AudioFlinger is configured to receive the mute data, mix the mute data to obtain third audio data, and output the third audio data to the SOC chip;
the second AudioFlinger is configured to receive the mute data, mix the mute data to obtain fourth audio data, and write the fourth audio data into the HAL;
the SOC chip is configured to receive the audio data in the second format, convert the audio data in the second format into audio data in the first format, and, when receiving the third audio data, mix the converted audio data with the received audio data in the first format to obtain fifth audio data;
and the Tinyalsa is configured to acquire the fifth audio data from the SOC chip and output the fifth audio data to the external playback device for playing when the HAL writes the fourth audio data.
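The key invariant in the second aspect is that the mute data must play for exactly as long as the second-format stream it shadows, so the mute path keeps the pipeline timing while the SOC chip decodes the real audio. A minimal sketch, assuming 16-bit stereo PCM at 48 kHz (the function names and parameters are illustrative, not from the patent):

```python
# Illustrative mute-data construction; parameter values are assumptions.

def build_mute_data(duration_s, sample_rate=48000, channels=2,
                    bytes_per_sample=2):
    """Return a zero-filled first-format (PCM) buffer covering duration_s."""
    n_bytes = int(duration_s * sample_rate) * channels * bytes_per_sample
    return bytes(n_bytes)

def playing_duration(pcm, sample_rate=48000, channels=2, bytes_per_sample=2):
    """Playing duration in seconds of a PCM buffer with these parameters."""
    return len(pcm) / (sample_rate * channels * bytes_per_sample)

# Match a 0.5 s chunk of second-format (e.g. compressed) audio.
mute = build_mute_data(0.5)
```

Because the buffer is all zeros, mixing it into any path contributes nothing audible; its only job is to occupy the same duration as the second-format data.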
In a third aspect, the present application provides a display device comprising:
a display;
a controller coupled to the display, comprising at least: a third AudioTrack and a third AudioFlinger which are arranged between the audio data control module and the SOC chip;
the third AudioTrack is configured to receive the audio data in the first format input by the audio data control module and output the audio data to the third AudioFlinger;
and the third AudioFlinger is configured to receive the audio data, mix the audio data to obtain sixth audio data, and output the sixth audio data to the external playback device for playing.
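The third aspect is the simplest path: a single AudioTrack/AudioFlinger pair mixes first-format audio and sends it straight to the external playback device, with no SOC round trip. A toy sketch (names illustrative):

```python
# Illustrative sketch of the third aspect; all names are hypothetical.

def third_audio_track(pcm, flinger):
    """Third AudioTrack: pass the first-format buffer to the AudioFlinger."""
    return flinger(pcm)

def third_audio_flinger(pcm, player):
    """Mix (here: clip to 16-bit) into sixth audio data and output it."""
    sixth_audio = [max(-32768, min(32767, s)) for s in pcm]
    player.append(sixth_audio)   # "external playback device"
    return sixth_audio

player = []
third_audio_track([500, 40000, -40000],
                  lambda pcm: third_audio_flinger(pcm, player))
```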
In a fourth aspect, the present application provides a display device, comprising:
a display;
a controller coupled to the display, comprising at least: a fourth AudioTrack and a third AudioFlinger which are arranged between the mute data control module and the SOC chip and are associated with the mute data control module, and an audio mixer AudioMixer and a Tinyalsa which are created when an external playback device is started; the AudioMixer and the Tinyalsa run in the HAL between the SOC chip and the external playback device;
the mute data control module is configured to construct mute data in a first format and output the mute data to the fourth AudioTrack when the SOC chip receives audio data in a second format; the playing duration of the mute data is equal to that of the audio data in the second format;
the fourth AudioTrack is configured to receive the mute data input from the mute data control module and output the mute data to the third AudioFlinger;
the third AudioFlinger is configured to receive the mute data, mix the mute data to obtain seventh audio data, and output the seventh audio data to the AudioMixer;
the SOC chip is configured to receive the audio data in the second format and convert the audio data in the second format into audio data in the first format;
the Tinyalsa is configured to acquire the audio data in the first format from the SOC chip and output the audio data to the AudioMixer when the AudioMixer receives the seventh audio data;
and the AudioMixer is configured to receive the audio data in the first format output by the Tinyalsa and the seventh audio data, mix the audio data in the first format with the seventh audio data to obtain eighth audio data, and output the eighth audio data to the external playback device for playing.
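The AudioMixer step above can be sketched as a sample-wise sum with 16-bit saturation (an illustrative sketch, not the patent's implementation): because the seventh audio data is derived from mute data, the mix preserves the decoded first-format audio while keeping the two paths time-aligned.

```python
# Illustrative AudioMixer; function and variable names are hypothetical.

def audio_mixer(first_format_pcm, seventh_pcm):
    """Sample-wise mix of two equal-length 16-bit PCM sample lists,
    clipped to the signed 16-bit range."""
    assert len(first_format_pcm) == len(seventh_pcm)
    mixed = (a + b for a, b in zip(first_format_pcm, seventh_pcm))
    return [max(-32768, min(32767, s)) for s in mixed]

decoded = [1000, -32000, 20000]   # first-format audio pulled by Tinyalsa
silence = [0, 0, 0]               # seventh audio data (mixed mute data)
eighth_audio = audio_mixer(decoded, silence)
```

Mixing with silence returns the decoded samples unchanged; only when both inputs carry signal does the saturation clamp matter.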
In a fifth aspect, an audio playing method is provided, which includes:
when the system is started, starting a first audio track AudioTrack, a first audio data entity AudioFlinger and a second AudioFlinger which are arranged between an audio data control module and an SOC chip;
when the external playback device is started, creating a Tinyalsa; the Tinyalsa runs in the HAL and is located between the SOC chip and the external playback device;
the first AudioTrack receives audio data in a first format input by the audio data control module, copies the audio data, and outputs the original audio data and the copied audio data to the first AudioFlinger and the second AudioFlinger, respectively;
the first AudioFlinger receives the audio data, mixes the audio data to obtain first audio data, and outputs the first audio data to the SOC chip;
the second AudioFlinger receives the audio data, mixes the audio data to obtain second audio data, and writes the second audio data into the HAL;
and when the HAL writes the second audio data, the Tinyalsa acquires the first audio data from the SOC chip and outputs the first audio data to the external playback device for playing.
In a sixth aspect, the present application provides an audio playing method, including:
when the system is started, starting a mute data control module, and a second AudioTrack, a first audio data entity AudioFlinger and a second AudioFlinger which are arranged between the mute data control module and an SOC chip and are associated with the mute data control module;
when the external playback device is started, creating a Tinyalsa; the Tinyalsa runs in the HAL and is located between the SOC chip and the external playback device;
when the SOC chip receives audio data in a second format, the mute data control module constructs mute data in a first format and outputs the mute data to the second AudioTrack; the playing duration of the mute data is equal to that of the audio data in the second format;
the second AudioTrack receives the mute data input from the mute data control module, copies the mute data, and outputs the original mute data and the copied mute data to the first AudioFlinger and the second AudioFlinger, respectively;
the first AudioFlinger receives the mute data, mixes the mute data to obtain third audio data, and outputs the third audio data to the SOC chip;
the second AudioFlinger receives the mute data, mixes the mute data to obtain fourth audio data, and writes the fourth audio data into the HAL;
the SOC chip receives the audio data in the second format, converts the audio data in the second format into audio data in the first format, and, when receiving the third audio data, mixes the converted audio data with the received audio data in the first format to obtain fifth audio data;
and when the HAL writes the fourth audio data, the Tinyalsa acquires the fifth audio data from the SOC chip and outputs the fifth audio data to the external playback device for playing.
In a seventh aspect, the present application provides an audio playing method, including:
when the system is started, starting a third AudioTrack and a third AudioFlinger which are arranged between the audio data control module and the SOC chip;
the third AudioTrack receives the audio data in the first format input by the audio data control module and outputs the audio data to the third AudioFlinger;
and the third AudioFlinger receives the audio data, mixes the audio data to obtain sixth audio data, and outputs the sixth audio data to the external playback device for playing.
In an eighth aspect, the present application provides an audio playing method, including:
when the system is started, starting a fourth AudioTrack and a third AudioFlinger which are arranged between a mute data control module and an SOC chip and are associated with the mute data control module, and an audio mixer AudioMixer;
when the external playback device is started, creating a Tinyalsa; the AudioMixer and the Tinyalsa run in the HAL between the SOC chip and the external playback device;
when the SOC chip receives the audio data in the second format, the mute data control module constructs mute data in the first format and outputs the mute data to the fourth AudioTrack; the playing duration of the mute data is equal to that of the audio data in the second format;
the fourth AudioTrack receives the mute data input by the mute data control module and outputs the mute data to the third AudioFlinger;
the third AudioFlinger receives the mute data, mixes the mute data to obtain seventh audio data, and outputs the seventh audio data to the AudioMixer;
the SOC chip receives the audio data in the second format and converts the audio data in the second format into audio data in the first format;
when the AudioMixer receives the seventh audio data, the Tinyalsa acquires the audio data in the first format from the SOC chip and outputs the audio data to the AudioMixer;
and the AudioMixer receives the audio data in the first format output by the Tinyalsa and the seventh audio data, mixes the audio data in the first format with the seventh audio data to obtain eighth audio data, and outputs the eighth audio data to the external playback device for playing.
In the above embodiments, when audio data is played through an external playback device, two AudioFlingers are created locally: one AudioFlinger mixes the audio data transmitted by the AudioTrack into first audio data and inputs it to the SOC chip, and the other mixes the audio data transmitted by the AudioTrack into second audio data and inputs it to the HAL, thereby triggering the Tinyalsa in the HAL to acquire the first audio data directly from the SOC chip and transmit it to the external playback device for playing.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1A is a schematic diagram illustrating an operation scenario between a display device and a control apparatus;
fig. 1B is a block diagram schematically illustrating a configuration of the control apparatus 100 in fig. 1A;
fig. 1C is a block diagram schematically illustrating a configuration of the display device 200 in fig. 1A;
FIG. 1D is a block diagram illustrating an architectural configuration of an operating system in memory of display device 200;
fig. 2 exemplarily shows an audio play frame configuration diagram of the display device 200;
3A-C illustrate an interaction flow diagram of a method of audio playback;
FIGS. 4A-D illustrate another audio playback frame configuration diagram for display device 200;
FIGS. 5A-C illustrate an interaction flow diagram of another audio playback method;
fig. 6A-D illustrate another audio playback frame configuration diagram for display device 200.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The term "user interface" in this application is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A common presentation form of a user interface is a Graphical User Interface (GUI), which refers to a user interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the display device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
The audio playing method provided by the embodiment of the application can be applied to display equipment. The user can control the display device to execute the audio playing method provided by the embodiment of the application by operating the control device for controlling the display device.
Fig. 1A is a schematic diagram illustrating an operation scenario between a display device and a control apparatus. As shown in fig. 1A, the control apparatus 100 and the display device 200 may communicate with each other in a wired or wireless manner.
The control apparatus 100 is configured to control the display device 200: it receives operation instructions input by a user and converts them into instructions that the display device 200 can recognize and respond to, serving as an intermediary between the user and the display device 200. For example, when the user operates the channel up/down key on the control apparatus 100, the display device 200 responds to the channel up/down operation.
The control apparatus 100 may be a remote controller 100A, which controls the display device 200 wirelessly through infrared protocol communication, Bluetooth protocol communication or other short-distance communication methods. The user may input user instructions through keys on the remote controller, voice input, control panel input and the like to control the display device 200. For example, the user may input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key and power key on the remote controller to control the display device 200.
The control apparatus 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer or a notebook computer. For example, the display device 200 may be controlled using an application program running on the smart device. Through configuration, the application program may provide various controls to the user on an intuitive User Interface (UI) displayed on the screen of the smart device.
For example, the mobile terminal 100B and the display device 200 may each install a software application, so as to implement connection and communication through a network communication protocol for the purpose of one-to-one control operation and data communication. For instance, the mobile terminal 100B may establish a control instruction protocol with the display device 200, so that the function keys or virtual buttons of the user interface provided on the mobile terminal 100B implement the functions of the physical keys arranged on the remote controller 100A. The audio and video content displayed on the mobile terminal 100B may also be transmitted to the display device 200 to implement a synchronous display function.
The display device 200 may provide a smart network television function that combines a broadcast receiving function with a computer support function, and may be implemented as a digital television, a web television, an Internet Protocol Television (IPTV), or the like.
The display device 200 may be a liquid crystal display, an organic light-emitting display, or a projection device; the specific display device type, size and resolution are not limited.
The display device 200 also performs data communication with the server 300 through various communication means; for example, the display device 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN) or other networks. The server 300 may provide various contents and interactions to the display device 200. By way of example, the display device 200 may send and receive information, such as receiving Electronic Program Guide (EPG) data, receiving software program updates, or accessing a remotely stored digital media library. The server 300 may be one group or multiple groups of servers, and may be one or more types of servers. The server 300 also provides other web service contents such as video on demand and advertisement services.
Fig. 1B is a block diagram illustrating the configuration of the control device 100. As shown in fig. 1B, the control device 100 includes a controller 110, a memory 120, a communicator 130, a user input interface 140, an output interface 150, and a power supply 160.
The controller 110 includes a Random Access Memory (RAM) 111, a Read-Only Memory (ROM) 112, a processor 113, a communication interface and a communication bus. The controller 110 controls the operation of the control apparatus 100, the communication and cooperation among internal components, and external and internal data processing functions.
Illustratively, when an interaction of a user pressing a key disposed on the remote controller 100A or an interaction of touching a touch panel disposed on the remote controller 100A is detected, the controller 110 may control to generate a signal corresponding to the detected interaction and transmit the signal to the display device 200.
The memory 120 stores various operation programs, data and applications for driving and controlling the control apparatus 100 under the control of the controller 110. The memory 120 may also store various control signal commands input by the user.
The communicator 130 enables communication of control signals and data signals with the display device 200 under the control of the controller 110. For example, the control apparatus 100 transmits a control signal (e.g., a touch signal or a button signal) to the display device 200 via the communicator 130, and may receive signals transmitted by the display device 200 via the communicator 130. The communicator 130 may include an infrared signal interface 131 and a radio frequency signal interface 132. For example, when the infrared signal interface is used, a user input instruction is converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 through the infrared sending module. For another example, when the radio frequency signal interface is used, a user input instruction is converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmitting terminal.
The user input interface 140 may include at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like, so that a user can input a user instruction regarding controlling the display apparatus 200 to the control apparatus 100 through voice, touch, gesture, press, and the like.
The output interface 150 outputs a user instruction received by the user input interface 140 to the display device 200, or outputs an image or voice signal received from the display device 200. The output interface 150 may include an LED interface 151, a vibration interface 152 generating vibration, a sound output interface 153 outputting sound, a display 154 outputting an image, and the like. For example, the remote controller 100A may receive an output signal such as audio, video or data from the output interface 150, and present it as an image on the display 154, as audio on the sound output interface 153, or as vibration on the vibration interface 152.
The power supply 160 provides operating power support for the elements of the control apparatus 100 under the control of the controller 110, and may take the form of a battery and associated control circuitry.
A hardware configuration block diagram of the display device 200 is exemplarily illustrated in fig. 1C. As shown in fig. 1C, the display apparatus 200 may include a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, an audio processor 280, an audio output interface 285, and a power supply 290.
The tuner demodulator 210 receives broadcast television signals in a wired or wireless manner, may perform processing such as amplification, mixing and resonance, and demodulates, from the plurality of wireless or wired broadcast television signals, the audio/video signal carried in the frequency of the television channel selected by the user, as well as additional information (e.g., EPG data).
The tuner demodulator 210 responds to the frequency of the television channel selected by the user and the television signal carried by that frequency, under the control of the controller 250.
The tuner demodulator 210 can receive a television signal in various ways according to the broadcasting system of the television signal, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; and according to different modulation types, a digital modulation mode or an analog modulation mode can be adopted; and can demodulate the analog signal and the digital signal according to the different kinds of the received television signals.
In other exemplary embodiments, the tuner demodulator 210 may also be in an external device, such as an external set-top box. In this case, the set-top box outputs a television signal after modulation and demodulation, and inputs it into the display device 200 through the external device interface 240.
The communicator 220 is a component for communicating with an external device or an external server according to various communication protocol types. For example, the display apparatus 200 may transmit content data to an external apparatus connected via the communicator 220, or browse and download content data from such an apparatus. The communicator 220 may include a network communication protocol module or a near-field communication protocol module, such as a WiFi module 221, a Bluetooth communication protocol module 222, and a wired Ethernet communication protocol module 223, so that under the control of the controller 250, the communicator 220 may receive control signals from the control device 100 in the form of WiFi signals, Bluetooth signals, radio-frequency signals, and the like.
The detector 230 is a component of the display apparatus 200 for collecting signals from the external environment or from interaction with the outside. The detector 230 may include a sound collector 231, such as a microphone, which may be used to receive the user's sound, for example a voice signal carrying a control instruction for the display device 200; alternatively, ambient sounds may be collected to identify the type of ambient scene, enabling the display device 200 to adapt to ambient noise.
In some other exemplary embodiments, the detector 230 may further include an image collector 232, such as a camera or video camera, which may be configured to collect external environment scenes to adaptively change the display parameters of the display device 200, and to acquire user attributes or gestures so as to enable interaction between the display device and the user.
In some other exemplary embodiments, the detector 230 may further include a light receiver for collecting the intensity of the ambient light to adapt to the display parameter variation of the display device 200.
In some other exemplary embodiments, the detector 230 may further include a temperature sensor; by sensing the ambient temperature, the display device 200 may adaptively adjust the display color temperature of the image. For example, when the temperature is higher, the display apparatus 200 may be adjusted to display images at a cooler color temperature; when the temperature is lower, it may be adjusted to display images at a warmer color temperature.
The external device interface 240 is a component that enables data transmission between the display apparatus 200 and an external apparatus under the control of the controller 250. The external device interface 240 may be connected in a wired or wireless manner to an external apparatus such as a set-top box, a game device, or a notebook computer, and may receive data from the external apparatus such as a video signal (e.g., moving images), an audio signal (e.g., music), and additional information (e.g., EPG).
The external device interface 240 may include: a High Definition Multimedia Interface (HDMI) terminal 241, a Composite Video Blanking Sync (CVBS) terminal 242, an analog or digital Component terminal 243, a Universal Serial Bus (USB) terminal 244, a Component terminal (not shown), a red, green, blue (RGB) terminal (not shown), and the like.
The controller 250 controls the operation of the display device 200 and responds to the operation of the user by running various software control programs (such as an operating system and various application programs) stored on the memory 260. For example, the controller may be implemented as a System-on-a-Chip (SOC Chip).
As shown in fig. 1C, the controller 250 includes a Random Access Memory (RAM) 251, a Read Only Memory (ROM) 252, a graphics processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. The RAM 251, the ROM 252, the graphics processor 253, and the CPU processor 254 are connected to one another via the communication interface 255 and the communication bus 256.
The ROM 252 stores various system boot instructions. When the display apparatus 200 is powered on upon receiving the power-on signal, the CPU processor 254 executes the system boot instructions in the ROM 252, copies the operating system stored in the memory 260 to the RAM 251, and starts running the operating system. After the operating system has started, the CPU processor 254 copies the various application programs in the memory 260 to the RAM 251 and then launches them.
The graphics processor 253 generates various graphic objects such as icons, operation menus, and graphics displayed in response to user input instructions. The graphics processor 253 may include an operator, which performs operations on the various interactive instructions input by the user and derives the display attributes of the various objects, and a renderer, which generates the various objects based on the operator's results and displays the rendered result on the display 275.
The CPU processor 254 executes operating system and application program instructions stored in the memory 260, and, according to the received user input instructions, processes the various application programs, data, and contents so as to finally display and play various audio-video contents.
In some example embodiments, the CPU processor 254 may comprise a plurality of processors. The plurality of processors may include one main processor and one or more sub-processors. The main processor performs some initialization operations of the display apparatus 200 in the preload mode and/or the operation of displaying a screen in the normal mode. The sub-processors perform operations while the display apparatus is in a standby mode or a similar state.
The communication interface 255 may include a first interface to an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a user input command for selecting a GUI object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user input command. For example, the controller may be implemented as an SOC Chip (System on Chip) or an MCU (Micro Control Unit).
Where the object may be any one of the selectable objects, such as a hyperlink or an icon. The operation related to the selected object is, for example, an operation of displaying a link to a hyperlink page, document, image, or the like, or an operation of executing a program corresponding to the object. The user input command for selecting the GUI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch panel, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
The memory 260 stores various types of data, software programs, or applications for driving and controlling the operation of the display device 200. The memory 260 may include volatile and/or nonvolatile memory, and the term "memory" here covers the memory 260, the RAM 251 and the ROM 252 of the controller 250, and any memory card in the display device 200.
In some embodiments, the memory 260 is specifically used for storing an operating program for driving the controller 250 of the display device 200; storing various application programs built in the display apparatus 200 and downloaded by a user from an external apparatus; data such as visual effect images for configuring various GUIs provided by the display 275, various objects related to the GUIs, and selectors for selecting GUI objects are stored.
In some embodiments, memory 260 is specifically configured to store drivers for tuner demodulator 210, communicator 220, detector 230, external device interface 240, video processor 270, display 275, audio processor 280, etc., and related data, such as external data (e.g., audio-visual data) received from the external device interface or user data (e.g., key information, voice information, touch information, etc.) received by the user interface.
In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. Illustratively, the kernel may control or manage system resources, as well as functions implemented by other programs (e.g., the middleware, APIs, or applications); at the same time, the kernel may provide an interface to allow middleware, APIs, or applications to access the controller to enable control or management of system resources.
A block diagram of the architectural configuration of the operating system in the memory of the display device 200 is illustrated in fig. 1D. The operating system architecture comprises an application layer, a middleware layer and a kernel layer from top to bottom.
Both the application programs built into the system and non-system-level application programs belong to the application layer, which is responsible for direct interaction with the user. The application layer may include a plurality of applications, such as a setup application, a post application, a media center application, and the like. These applications may be implemented as Web applications executed on a WebKit engine, and in particular may be developed and executed based on HTML5, Cascading Style Sheets (CSS), and JavaScript.
Here, HTML (HyperText Markup Language) is the standard markup language for creating web pages: it describes web pages by markup tags, where the tags are used to describe text, graphics, animation, sound, tables, links, and so on. A browser reads an HTML document, interprets the content of the tags in the document, and displays the content in the form of a web page.
CSS (Cascading Style Sheets) is a computer language used to express the style of HTML documents, and may be used to define style structures such as fonts, colors, and positions. CSS styles can be stored directly in the HTML web page or in a separate style file, allowing the styles in the web page to be controlled.
JavaScript is a language for web-page programming that can be inserted into an HTML page and interpreted and executed by the browser. The interaction logic of a Web application is implemented in JavaScript. JavaScript can wrap a JavaScript extension interface through the browser to communicate with the kernel layer.
The middleware layer may provide some standardized interfaces to support the operation of various environments and systems. For example, the middleware layer may be implemented as Multimedia and Hypermedia information coding Experts Group (MHEG) middleware related to data broadcasting, DLNA middleware related to communication with external devices, middleware providing the browser environment in which each application program in the display device runs, and the like.
The kernel layer provides core system services, such as: file management, memory management, process management, network management, system security authority management and the like. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, supplying device driver services for various hardware: a display driver for the display, a camera driver for the camera, a button driver for the remote controller, a WiFi driver for the WiFi module, an audio driver for the audio output interface, a power management driver for the Power Management (PM) module, and so on.
A user interface 265 receives various user interactions. Specifically, it is used to transmit an input signal of a user to the controller 250 or transmit an output signal from the controller 250 to the user. For example, the remote controller 100A may transmit an input signal, such as a power switch signal, a channel selection signal, a volume adjustment signal, etc., input by the user to the user interface 265, and then the input signal is transferred to the controller 250 through the user interface 265; alternatively, the remote controller 100A may receive an output signal such as audio, video, or data output from the user interface 265 via the controller 250, and display the received output signal or output the received output signal in audio or vibration form.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on the display 275, and the user interface 265 receives the user input commands through the GUI. Specifically, the user interface 265 may receive user input commands for controlling the position of a selector in the GUI to select different objects or items.
Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user interface 265 receives the user input command by recognizing the sound or gesture through the sensor.
The video processor 270 is configured to receive an external video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a video signal that is directly displayed or played on the display 275.
Illustratively, the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is configured to demultiplex an input audio/video data stream; for example, for an input MPEG-2 stream (based on the compression standard for digital storage media moving images and audio), the demultiplexing module splits it into a video signal and an audio signal.
The video decoding module processes the demultiplexed video signal, including decoding, scaling, and the like.
The image synthesis module, such as a graphics generator, superimposes and mixes the GUI signal input by the user or generated internally with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting an input 60Hz video to a frame rate of 120Hz or 240Hz; a common implementation uses, for example, a frame interpolation method.
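The frame interpolation mentioned above can be illustrated with a toy sketch. Real TVs typically use motion estimation and compensation; the simple linear blend below is only a hypothetical stand-in showing how intermediate frames raise, say, a 60Hz sequence to 120Hz with a factor of 2.

```python
def interpolate_frames(frames, factor=2):
    """Raise the frame rate by `factor` by inserting linearly blended
    intermediate frames between each adjacent pair. Frames are modelled as
    flat lists of pixel intensities; this is an illustrative stand-in, not
    the frame rate conversion module's actual algorithm."""
    if factor < 2 or len(frames) < 2:
        return [list(f) for f in frames]
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(list(a))
        for k in range(1, factor):
            t = k / factor
            # Blend corresponding pixels of the two neighbouring frames.
            out.append([(1 - t) * pa + t * pb for pa, pb in zip(a, b)])
    out.append(list(frames[-1]))
    return out
```

For n input frames the output has (n - 1) * factor + 1 frames, so a factor of 2 doubles the effective frame rate.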
The display formatting module converts the signal output by the frame rate conversion module into a signal conforming to the display format of the display, for example converting it to output an RGB data signal.
The display 275 receives the image signal from the video processor 270 and displays video content, images, and the menu manipulation interface. The displayed video content may come from the broadcast signal received by the tuner demodulator 210, or from content input via the communicator 220 or the external device interface 240. The display 275 also presents the user manipulation interface (UI) generated in the display apparatus 200 and used to control the display apparatus 200.
The display 275 may include a display screen assembly for presenting a picture and a driving assembly for driving the display of images. Alternatively, when the display 275 is a projection display, it may include a projection device and a projection screen.
The audio processor 280 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing to obtain an audio signal that can be played by the speaker 286.
Illustratively, the audio processor 280 may support various audio formats, such as MPEG-2, MPEG-4, Advanced Audio Coding (AAC), and High-Efficiency AAC (HE-AAC).
The audio output interface 285 receives the audio signal output by the audio processor 280 under the control of the controller 250, and may include a speaker 286 or an external sound output terminal 287, such as an earphone output terminal, for output to the sound-generating device of an external apparatus.
In other exemplary embodiments, video processor 270 may comprise one or more chips. Audio processor 280 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 270 and the audio processor 280 may be separate chips or may be integrated with the controller 250 in one or more chips.
The power supply 290 supplies power to the display apparatus 200 from the input of an external power source under the control of the controller 250. The power supply 290 may be a power supply circuit built into the display apparatus 200 or a power supply installed outside it.
In some embodiments, when the display device is externally connected to an external playing device, the display device outputs audio data to the external playing device for playing according to the process shown in fig. 2. Referring to the flow shown in fig. 2, when there is audio data to be played in the display device, for example when an APP (Application program) in the display device generates PCM (Pulse Code Modulation) audio data, the PCM audio data is first transmitted to the SOC chip through the AudioTrack and the AudioFlinger. As another example, when the set-top box transmits Physic physical audio data to the display device, that data is transmitted directly to the SOC chip. The SOC chip then decodes the Physic physical audio and mixes the decoded audio data with the PCM audio data to obtain the mixed audio data, namely the audio data that finally needs to be played. After this final audio data is obtained, it is processed and transmitted by modules such as Tinyalsa, AudioRecord, hippk, AudioTrack, and AudioFlinger, so that the mixed audio data is finally transmitted to the external playing device for playing. However, in this audio playing process many modules participate in the transmission and processing of the audio data, which results in higher delay when the external playing device plays the audio data.
Therefore, this embodiment provides an audio playing method in which, when playing audio data through an external playing device, the Tinyalsa in the HAL is used to obtain the audio data directly from the SOC chip and transmit it to the external playing device for playing, thereby reducing the playing delay of the audio data.
The audio playing method provided by this embodiment is divided into three different situations according to different formats of audio data to be played. Hereinafter, the different cases will be described separately.
As the first situation, the display device needs to output audio data in the first format to the external playing device for playing. With reference to the flow shown in fig. 3A, the audio playing method provided in this embodiment may be applied to a display device equipped with an Android operating system. In conjunction with fig. 4A and 4B, the display device may include a display and a controller coupled to the display. The controller is provided with at least: a first audio track (AudioTrack), together with a first audio data entity (AudioFlinger) and a second AudioFlinger, arranged between the audio data control module and the system-on-a-chip (SOC) chip; and Tinyalsa, a simplified version of the Advanced Linux Sound Architecture, created when the external playing device is started. Tinyalsa runs in the HAL between the SOC chip and the external playback device.
As shown in fig. 3A, the process may include the following steps:
Step 101, the audio data control module outputs audio data in a first format to the first AudioTrack.
As one example, the audio data in the first format may be audio data in a PCM format.
As an example, the first AudioTrack is started automatically when the Android operating system loaded on the display device in this embodiment starts. The first AudioTrack is used for receiving the audio data transmitted by the audio data control module. The audio data control module may be implemented in software or hardware; it corresponds to an application program running in the display device and outputs the audio data to be played to the first AudioTrack. Which specific application program the audio data control module corresponds to is not limited by this embodiment of the present application.
As one example, the audio data may be generated by any application in the display device, such as audio data produced by a video player playing a video media file or by an audio player playing an audio media file. The audio data may be either mute data or non-mute data. Besides the case where the user adjusts the playback volume of an application program to 0, so that the volume of the audio data it outputs is 0, mute data may also be output without any such user operation; the implementation of this, and the specific scenarios in which it is useful, are described in detail later and not repeated here.
As an example, when there are multiple applications in the display device that need to output audio data, multiple audio data control modules corresponding to the multiple applications are created in the display device, and each audio data control module corresponds to one AudioTrack.
Step 102, the first AudioTrack receives the audio data in the first format input by the audio data control module, copies the audio data, and outputs the original audio data and the copied audio data to the first AudioFlinger and the second AudioFlinger, respectively.
As an example, when the Android operating system loaded on the display device in this embodiment is started, two audioflingers, namely, a first AudioFlinger and a second AudioFlinger, are automatically started. The first and second are used only for the distinction between different audioflingers and are not specifically limited.
As an example, when the first AudioTrack is created, it may establish connections with the first AudioFlinger and the second AudioFlinger through the binder mechanism defined in the Android operating system. Since the binder mechanism is not the focus of the improvement of the present application, it will not be described in detail here.
As an example, since the first AudioTrack created in the display device is associated with two AudioFlingers, the interaction mode of the first AudioTrack with each AudioFlinger needs to be configured as the Duplicating mode. Specifically, when the first AudioTrack operates in the Duplicating mode, after receiving the audio data it duplicates the data according to the number of AudioFlingers associated with it, so that each associated AudioFlinger receives one copy of the audio data. In connection with step 101, since the first AudioTrack is associated with the first AudioFlinger and the second AudioFlinger, when it operates in the Duplicating mode it copies the received audio data once, obtaining two copies. The first AudioTrack may then send one copy to the first AudioFlinger and the other to the second AudioFlinger.
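The Duplicating mode described above can be sketched as follows. This is a hypothetical Python illustration of the copying behaviour only, not the Android AudioTrack implementation (which is native code); the class names are invented for the example.

```python
class DuplicatingTrack:
    """Sketch of the Duplicating interaction mode: each buffer received by
    the track is copied once per attached sink, so every associated
    AudioFlinger gets its own independent copy of the audio data."""
    def __init__(self):
        self.sinks = []            # e.g. the first and second AudioFlinger

    def attach(self, sink):
        self.sinks.append(sink)

    def write(self, pcm_buffer):
        # One independent copy per associated sink.
        for sink in self.sinks:
            sink.receive(list(pcm_buffer))

class RecordingSink:
    """Stand-in for an AudioFlinger that just records what it receives."""
    def __init__(self):
        self.received = []

    def receive(self, buf):
        self.received.append(buf)
```

Because each sink gets a fresh copy, downstream processing in one AudioFlinger (mixing for the SOC chip path) cannot corrupt the data seen by the other (the HAL path).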
As one example, the audio data received by the first AudioTrack may be PCM audio data.
Step 103, the first AudioFlinger receives the audio data and performs mixing processing on it to obtain the first audio data.
As an example, the first AudioFlinger may be created based on the SOC chip in the display device, so that it can output the audio data after mixing to the SOC chip.
As an example, the first AudioFlinger is configured to mix the received multiple channels of audio data into one channel of mixed audio data, namely the first audio data. Since the first AudioFlinger is created based on the SOC chip in the display device, the first audio data can be transmitted to the SOC chip once obtained. How the SOC chip processes the received first audio data is described in detail later and not repeated here.
As an example, when the first AudioFlinger mixes audio data, a number of mixing strategies may be adopted. For example, to improve the transmission efficiency of the audio data, multiple channels of audio data with sampling rates of 48 kHz or more, bit depths of 16 bits or more, and bit rates of 32 kbit/s or more may be mixed into one channel of audio data with a sampling rate of 48 kHz, a bit depth of 16 bits, and a bit rate of 32 kbit/s. Specifically, assume that the first AudioFlinger receives two channels of audio data simultaneously: one with a playing duration of 3 minutes, 15 seconds, and 13 frames, a sampling rate of 48 kHz, a bit depth of 16 bits, and a bit rate of 32 kbit/s; the other with a playing duration of 2 minutes, 10 seconds, and 13 frames, a sampling rate of 96 kHz, a bit depth of 24 bits, and a bit rate of 96 kbit/s. The first AudioFlinger sets the playing duration of the mixed audio data according to the channel with the longer playing duration, here 3 minutes, 15 seconds, and 13 frames. Then the first AudioFlinger adjusts the sampling rate, bit depth, and bit rate of the two channels so that they are 48 kHz, 16 bits, and 32 kbit/s respectively. After all three parameters of both channels have been adjusted to the preset values, the two channels can be superimposed into one channel, completing the mixing process.
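The superposition step at the end of that strategy can be sketched as below. The sketch assumes all channels have already been converted upstream to the common format (48 kHz, 16-bit); it only shows the padding of shorter channels to the longest duration and the per-sample summation with clipping to the 16-bit range. It is an illustration of the principle, not AudioFlinger's actual mixer.

```python
INT16_MIN, INT16_MAX = -32768, 32767

def mix_streams(streams):
    """Mix several channels of 16-bit PCM samples (already at the common
    sampling rate and bit depth) into one channel: output length follows
    the longest stream, shorter streams are zero-padded, and each summed
    sample is clipped to the 16-bit range."""
    length = max(len(s) for s in streams)
    mixed = []
    for i in range(length):
        # Treat exhausted streams as silence (zero-padding).
        total = sum(s[i] if i < len(s) else 0 for s in streams)
        mixed.append(max(INT16_MIN, min(INT16_MAX, total)))
    return mixed
```

Clipping is the simplest overflow policy; real mixers often attenuate or apply a limiter instead, which is a design choice outside the scope of this sketch.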
It should be noted that the mixing strategy adopted by the first AudioFlinger may be set according to actual situations, and the present application does not specifically limit the strategy.
As an example, the SOC chip may store the first audio data output by the first AudioFlinger after receiving it. How the first audio data stored in the SOC chip is output is described in detail later and not repeated here.
Step 104, the first AudioFlinger outputs the first audio data to the SOC chip.
As an example, after the first AudioFlinger completes the audio mixing process on the audio data to obtain the first audio data, the first audio data may be output to the SOC chip.
Step 105, the second AudioFlinger receives the audio data and performs mixing processing on it to obtain the second audio data.
As an example, when being created, the second AudioFlinger may be created based on an external playing device externally connected to the display device, so that the second AudioFlinger can output the processed audio data to a HAL in an Android operating system, and then the HAL outputs the audio data to the external playing device. How the audio data is output from the HAL to the external playback device will be described in detail later, and will not be described herein again.
As an example, the second AudioFlinger is used to mix the received multiple channels of audio data into one channel of mixed audio data, namely the second audio data. Here, to guarantee normal output of the subsequent audio data, the mixing strategy of the second AudioFlinger may be set to be the same as that of the first AudioFlinger.
It should be noted that the mixing strategy adopted by the second AudioFlinger may be set according to actual situations, and the present application does not specifically limit the strategy.
Step 106, the second AudioFlinger writes the second audio data to the HAL.
As an example, after the second AudioFlinger completes the mixing process and obtains the second audio data, it may call the out_write function to write the second audio data into the HAL.
Step 107, Tinyalsa acquires the first audio data from the SOC chip when the second audio data is written to the HAL.
As an example, Tinyalsa, located at the HAL, may be created by the display device upon detecting that a powered-up external playback device is connected, and released by the display device upon detecting that the external playback device is disconnected or turned off.
As an example, when the second audio data is written to the HAL, the Tinyalsa at the HAL may be driven to actively acquire the first audio data from the SOC chip. Since this acquisition operation is driven by the second audio data written into the HAL, the playing duration of the first audio data that Tinyalsa acquires from the SOC chip is the same as that of the second audio data written to the HAL by the second AudioFlinger. Specifically, after the second audio data is written into the HAL, Tinyalsa may send the SOC chip a read request for reading the first audio data, where the read request carries the playing duration of the audio data to be read, namely the playing duration of the second audio data. After receiving the read request from Tinyalsa, the SOC chip may send the first audio data it stores to Tinyalsa.
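The write-driven read just described can be sketched as follows. These are hypothetical Python stand-ins for the HAL's out_write path, Tinyalsa's read request, and the SOC chip's buffer; the real components are native code, and the class and method names are invented for illustration.

```python
from collections import deque

class SocChipBuffer:
    """Stand-in for the SOC chip's store of first audio data."""
    def __init__(self):
        self._frames = deque()

    def store(self, frames):
        self._frames.extend(frames)

    def read(self, n_frames):
        # Serve a read request for n_frames (the playing duration
        # carried in Tinyalsa's request), bounded by what is stored.
        n = min(n_frames, len(self._frames))
        return [self._frames.popleft() for _ in range(n)]

class HalTinyalsa:
    """When second audio data is written to the HAL, Tinyalsa is driven to
    fetch the same playing duration of first audio data from the SOC chip
    and hand it to the external player."""
    def __init__(self, soc, player):
        self.soc = soc
        self.player = player    # list standing in for the playback interface

    def out_write(self, second_audio_frames):
        n = len(second_audio_frames)     # duration of the trigger data
        first_audio = self.soc.read(n)   # read request to the SOC chip
        self.player.extend(first_audio)  # output toward the external device
```

The second audio data itself is discarded here; in this design it serves only as the clock that paces the reads of the first audio data.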
Step 108, Tinyalsa outputs the first audio data to the external playing device for playing.
As an example, after Tinyalsa acquires the first audio data from the SOC chip, it may output the first audio data to the audio playing interface of the external playing device, so that the external playing device plays audio according to the first audio data received at that interface. Referring to fig. 4B, a Data Pool is set between Tinyalsa and the audio playing interface of the external playing device to buffer the first audio data, so as to prevent the first audio data from being lost if blocking occurs during transmission.
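A minimal sketch of such a Data Pool is a bounded FIFO between the producer (Tinyalsa) and the consumer (the device's audio playing interface). This is an assumption-laden illustration, not the actual structure in fig. 4B; in particular, the drop-oldest overflow policy here is chosen only to keep the sketch simple, where a real implementation would more likely block or apply back-pressure.

```python
from collections import deque

class DataPool:
    """Bounded FIFO buffering first audio data between Tinyalsa and the
    external device's audio playing interface, smoothing short stalls."""
    def __init__(self, capacity_frames):
        self.capacity = capacity_frames
        self._pool = deque()

    def push(self, frame):
        if len(self._pool) >= self.capacity:
            self._pool.popleft()   # drop oldest rather than block (sketch-only policy)
        self._pool.append(frame)

    def pop(self):
        # Consumer side: returns None when the pool has run dry.
        return self._pool.popleft() if self._pool else None
```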
To this end, the description of how the display device plays the audio data in the first format is completed.
In the above embodiment, in the process of playing audio data through the external playing device, two AudioFlingers are created locally: one mixes the audio data transmitted by the AudioTrack into the first audio data and inputs it to the SOC chip, so that the SOC chip's processing of the first audio data yields the audio data that finally needs to be played; the other mixes the audio data transmitted by the AudioTrack into the second audio data and writes it to the HAL, so as to trigger the Tinyalsa in the HAL to obtain the final audio data directly from the SOC chip and transmit it to the external playing device for playing.
As a second case, recall from the first case that the operation of Tinyalsa actively acquiring audio data from the SOC chip needs to be driven by second audio data written into the HAL. Therefore, when the display device has no audio data in the first format to output but only audio data in the second format, no second audio data is written into the HAL to drive Tinyalsa, so the audio data decoded by the SOC chip cannot be read out and played. Based on this, the embodiment of the present application additionally provides a mute data control module in the display device for constructing mute data, so that the mute data drives Tinyalsa in the above case, as described in detail below:
With reference to the flow shown in fig. 3B, in this case, the audio playing method provided in this embodiment may be applied to the display device, and the display device may be equipped with an Android operating system. In conjunction with fig. 4C and 4B, the display device may include a display and a controller coupled to the display. The controller may be provided with at least: a mute data control module; a second AudioTrack arranged between the mute data control module and the SOC chip and associated with the mute data control module; a first AudioFlinger; a second AudioFlinger; and Tinyalsa created when the external playing device is started. Tinyalsa runs in the HAL, between the SOC chip and the external playing device.
As shown in fig. 3B, the process may include the following steps:
in step 201, the SOC chip receives the audio data in the second format and converts the audio data in the second format into the audio data in the first format.
As an example, here the first format is the PCM format, and the second format is a format other than the first format, for example, physical (non-PCM) audio data. Since the Android operating system specifies that an AudioTrack can only transmit audio data in the PCM format, audio data in other formats is first transmitted to the SOC chip, decoded into PCM audio data by the SOC chip, and then processed accordingly.
In step 202, the mute data control module constructs mute data in the first format when the SOC chip receives the audio data in the second format.
As one example, the mute data control module may correspond to a mute application installed in the display device and may be used to construct mute data. The mute data may be audio data in PCM format.
As an example, since constructing the mute data consumes processing resources of the display device and puts pressure on the audio output process, the mute data control module may be triggered to construct and output mute data only while the SOC chip is receiving audio data to be decoded. Specifically, the interface of the SOC chip that receives the audio data in the second format may be monitored; once the interface is observed to receive audio data in the second format, an instruction to construct mute data is sent to the mute data control module, so that the module starts constructing and outputting mute data. When the interface is observed to no longer receive audio data in the second format, an instruction to stop constructing mute data is sent, so that the module stops. On this basis, the playing time length of the mute data is equal to that of the audio data in the second format received by the SOC chip.
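As a sketch of this behaviour, the following hypothetical Python models a mute data control module that emits fixed-size chunks of PCM silence only while the decoder interface reports activity; the 10 ms chunk size, format parameters, and names are assumptions for illustration:

```python
# Illustrative sketch of the mute data control module: PCM silence is
# constructed only while the SOC chip's decode interface is active, so the
# total playing time length of the mute data tracks that of the
# second-format audio being decoded.

def make_mute_chunk(duration_s=0.010, sample_rate=48000, channels=2):
    """One chunk of 16-bit PCM silence (all-zero samples)."""
    frames = int(duration_s * sample_rate)
    return bytes(frames * channels * 2)  # bytes(n) is n zero bytes

def mute_stream(decoder_active):
    """Yield silence chunks for as long as the decoder interface is busy."""
    while decoder_active():
        yield make_mute_chunk()
```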
In step 203, the mute data control module outputs the mute data to the second AudioTrack.
Step 204, the second AudioTrack receives the mute data input from the mute data control module, copies the mute data, and outputs the original mute data and the copied mute data to the first AudioFlinger and the second AudioFlinger, respectively.
As an example, the specific operation performed in this step is similar to that of step 102, and reference may be made to the foregoing description of step 102, which is not repeated herein.
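The duplex behaviour shared by steps 102 and 204 — one buffer copied and fanned out to two AudioFlinger sinks — can be sketched as follows. This is illustrative only; the sinks are modelled as plain lists:

```python
# Illustrative sketch of a duplex AudioTrack write: the original buffer goes
# to the first AudioFlinger, a copy goes to the second AudioFlinger.

def duplex_write(data, first_sink, second_sink):
    first_sink.append(data)              # original -> first AudioFlinger
    second_sink.append(bytearray(data))  # copy     -> second AudioFlinger
```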
Step 205, the first AudioFlinger receives the mute data, and performs sound mixing processing on the mute data to obtain third audio data.
As an example, in this step, the mixing processing performed by the first AudioFlinger on the mute data is similar to the step 103, and reference may be made to the foregoing description of the step 103, which is not described herein again.
It should be noted that the mixing strategy adopted by the first AudioFlinger may be set according to actual situations, and the present application does not specifically limit the strategy.
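Since the application leaves the mixing strategy open, the following is merely one plausible strategy, sketched in Python: per-sample saturating addition of 16-bit PCM streams. Note that mixing a stream with all-zero mute data leaves that stream unchanged, which is why mute data can drive the pipeline without audibly altering the output:

```python
# Illustrative mixing strategy (an assumption; the application does not fix
# one): mix two equal-length 16-bit PCM buffers by saturating addition.
import array

def mix_pcm16(a, b):
    sa, sb = array.array('h', a), array.array('h', b)
    mixed = array.array('h', (max(-32768, min(32767, x + y))
                              for x, y in zip(sa, sb)))
    return mixed.tobytes()
```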
In step 206, the first AudioFlinger outputs the third audio data to the SOC chip.
As an example, after the first AudioFlinger completes the mixing process of the mute data to obtain the third audio data, the third audio data may be output to the SOC chip.
In step 207, the second AudioFlinger receives the mute data and performs sound mixing processing on the mute data to obtain fourth audio data.
As an example, in this step, the mixing processing performed by the second AudioFlinger on the mute data is similar to the step 105, and reference may be made to the foregoing description on the step 105, which is not repeated herein.
It should be noted that the mixing strategy adopted by the second AudioFlinger may be set according to actual situations, and the present application does not specifically limit the strategy.
Step 208, the second AudioFlinger writes the fourth audio data into the HAL.
As an example, after the second AudioFlinger completes the mixing process of the mute data to obtain the fourth audio data, it may call the out_write function to write the fourth audio data into the HAL.
Step 209, when receiving the third audio data, the SOC chip performs audio mixing processing on the audio data converted into the first format and the third audio data to obtain fifth audio data.
As an example, after the SOC chip completes decoding the audio data in the second format that needs to be decoded, if the third audio data transmitted by the first AudioFlinger has been received, the SOC chip performs audio mixing processing on the decoded audio data and the third audio data to obtain the fifth audio data.
In step 210, Tinyalsa acquires the fifth audio data from the SOC chip when the fourth audio data is written into the HAL.
In step 211, Tinyalsa outputs the fifth audio data to the external playing device for playing.
As an example, after Tinyalsa obtains the fifth audio data from the SOC chip, it may output the fifth audio data to the audio playing interface of the external playing device, so that the external playing device performs audio playing according to the fifth audio data received on that interface. In specific implementation, a data pool may be set between Tinyalsa and the audio playing interface of the external playing device to buffer the fifth audio data, preventing the fifth audio data from being lost if transmission blocks.
This completes the description of how the display device plays audio data in the second format.
As a third situation, the display device needs to output the PCM audio data and the physical audio data to the external playing device for playing at the same time. With reference to the flow shown in fig. 3C, in this case, the audio playing method provided in this embodiment may be applied to the display device, and the display device may be equipped with an Android operating system. In conjunction with fig. 4D and 4B, the display device may include a display and a controller coupled to the display. The controller may be provided with at least: a first AudioTrack arranged between the audio data control module and the SOC chip; a second AudioTrack arranged between the mute data control module and the SOC chip and associated with the mute data control module; a first AudioFlinger; a second AudioFlinger; and Tinyalsa created when the external playing device is started. Tinyalsa runs in the HAL, between the SOC chip and the external playing device.
Since the related contents in this embodiment have been described in the foregoing two embodiments, each step in this embodiment may refer to the description of the foregoing two embodiments, and will not be described again below.
As shown in fig. 3C, the process may include the following steps:
step 301, the SOC chip receives the audio data in the second format and converts the audio data in the second format into the audio data in the first format.
Step 302, when the SOC chip receives the audio data in the second format, the mute data control module constructs mute data in the first format.
Step 303, the mute data control module outputs the mute data to the second AudioTrack.
Step 304, the second AudioTrack receives the mute data in the first format from the mute data control module, copies the mute data, and outputs the original mute data and the copied mute data to the first AudioFlinger and the second AudioFlinger, respectively.
At step 305, the audio data control module outputs audio data in a first format to the first AudioTrack.
Step 306, the first AudioTrack receives the audio data in the first format from the audio data control module, copies the audio data, and outputs the original audio data and the copied audio data to the first AudioFlinger and the second AudioFlinger, respectively.
Step 307, the first AudioFlinger receives the audio data and the mute data, and performs sound mixing processing on the audio data and the mute data to obtain ninth audio data.
Step 308, the second AudioFlinger receives the audio data and the mute data, and performs sound mixing processing on the audio data and the mute data to obtain tenth audio data.
In step 309, the first AudioFlinger outputs the ninth audio data to the SOC chip.
Step 310, the second AudioFlinger writes the tenth audio data into the HAL.
In step 311, the SOC chip receives the ninth audio data, and performs audio mixing processing on the ninth audio data and the audio data in the first format to obtain eleventh audio data.
In step 312, Tinyalsa acquires the eleventh audio data from the SOC chip when the tenth audio data is written into the HAL.
Step 313, Tinyalsa outputs the eleventh audio data to the external playing device for playing.
This completes the description of how the display device simultaneously plays audio data in the first and second formats.
Besides the audio playing method described above, another audio playing method is provided in the embodiments of the present application. The audio playing method provided in this embodiment may also be divided into three different cases according to different formats of audio data to be played. Hereinafter, the different cases will be described separately.
As a first situation, the display device needs to output the audio data in the first format to the external playing device for playing. With reference to the flow shown in fig. 5A, in this case, the audio playing method provided in this embodiment may be applied to the display device, and the display device may be equipped with an Android operating system. In connection with fig. 6A and 6B, the display device may include a display and a controller coupled to the display. The controller may be provided with at least: a third AudioTrack and a third AudioFlinger arranged between the audio data control module and the SOC chip.
As shown in fig. 5A, the process may include the following steps:
in step 401, the third AudioTrack receives audio data in the first format from the audio data control module.
As an example, how the third AudioTrack is associated with the third AudioFlinger and how the AudioTrack transmits the audio data in the first format to the third AudioFlinger may refer to the foregoing description of step 101 and step 102, and will not be described herein again.
Step 402, the third AudioTrack outputs the audio data to the third AudioFlinger.
As an example, since the third AudioTrack in this embodiment only needs to transmit the audio data to the third AudioFlinger, it does not need to operate in a duplexing mode; that is, it does not need to copy the received audio data and can transmit the audio data to the third AudioFlinger directly.
In step 403, the third AudioFlinger receives the audio data and performs audio mixing processing on the audio data to obtain sixth audio data.
As an example, in the present embodiment, the mixing process of the third AudioFlinger may refer to the description of step 103, which is not described herein again.
Step 404, the third AudioFlinger outputs the sixth audio data to the external playing device for playing.
As an example, after the third AudioFlinger completes the audio mixing process to obtain the sixth audio data, the sixth audio data may be directly output to an audio playing interface of the external playing device, so that the external playing device performs audio playing according to the sixth audio data received by the audio playing interface.
This completes the description of how the display device plays audio data in the first format.
As a second situation, the display device needs to output the audio data in the second format to the external playing device for playing. With reference to the flow shown in fig. 5B, in this case, the audio playing method provided in this embodiment may be applied to the display device, and the display device may be equipped with an Android operating system. In connection with fig. 6C and 6B, the display device may include a display and a controller coupled to the display. The controller may be provided with at least: a fourth AudioTrack and a third AudioFlinger arranged between the mute data control module and the SOC chip and associated with the mute data control module, and an audio mixer (AudioMixer) and Tinyalsa created when the external playing device is started; the AudioMixer and Tinyalsa run in the HAL, between the SOC chip and the external playing device.
As shown in fig. 5B, the process may include the following steps:
in step 501, the SOC chip receives audio data in the second format and converts the audio data in the second format into audio data in the first format.
As an example, how the SOC chip receives the audio data in the second format and how to convert the audio data in the second format into the audio data in the first format in this step is already described in detail in step 201, and the related contents may refer to the description of step 201, which is not repeated herein.
Step 502, the mute data control module constructs mute data in the first format when the SOC chip receives the audio data in the second format.
As an example, how the mute data control module constructs the mute data in the first format is described in detail in the foregoing step 202, and the related content may refer to the description of the foregoing step 202, which is not described herein again.
In step 503, the mute data control module outputs the mute data to the fourth AudioTrack.
As one example, the fourth AudioTrack is the AudioTrack associated with the mute data control module.
In step 504, the fourth AudioTrack receives the mute data from the mute data control module, and outputs the mute data to the third AudioFlinger.
Step 505, the third AudioFlinger receives the mute data and performs sound mixing processing on the mute data to obtain seventh audio data.
Step 506, the third AudioFlinger outputs the seventh audio data to the AudioMixer.
As an example, the third AudioFlinger may be created based on the external playing device connected to the display device, in which case it would ordinarily output the mixed audio data to the HAL of the Android operating system. Since this embodiment additionally deploys an audio mixer in the HAL, the third AudioFlinger instead outputs the mixed audio data to the AudioMixer in the HAL, which processes the audio and transmits it to the external playing device.
In step 507, Tinyalsa obtains the audio data in the first format from the SOC chip when the AudioMixer receives the seventh audio data.
As an example, Tinyalsa at the HAL may be driven to actively acquire audio data in the first format from the SOC chip when the seventh audio data is output to the AudioMixer. Specifically, after the seventh audio data is written into the AudioMixer, Tinyalsa may send a read request for reading the audio data in the first format to the SOC chip, where the read request carries a playing time length of the audio data to be read, and the playing time length is a playing time length of the seventh audio data, so as to read the audio data in the first format stored in the SOC chip.
Step 508, Tinyalsa outputs the audio data in the first format to the AudioMixer.
In step 509, the AudioMixer receives the audio data in the first format and the seventh audio data output by Tinyalsa, and performs audio mixing processing on the audio data in the first format and the seventh audio data to obtain eighth audio data.
As an example, this embodiment additionally provides an AudioMixer in the HAL for mixing audio data written into the HAL. The AudioMixer may be created based on the external playing device connected to the display device, so that it can output the mixed audio data to that device.
As an example, the mixing process performed by the AudioMixer may refer to the foregoing description of AudioFlinger, and will not be described herein again.
Step 510, the AudioMixer outputs the eighth audio data to the external playing device for playing.
This completes the description of how the display device plays audio data in the second format.
As a third situation, the display device needs to output the audio data in the first format and the audio data in the second format to the external playing device for playing at the same time. With reference to the flow shown in fig. 5C, in this case, the audio playing method provided in this embodiment may be applied to the display device, and the display device may be equipped with an Android operating system. In connection with fig. 6D and 6B, the display device may include a display and a controller coupled to the display. The controller may be provided with at least: a third AudioTrack arranged between the audio data control module and the SOC chip; a fourth AudioTrack arranged between the mute data control module and the SOC chip and associated with the mute data control module; a third AudioFlinger; and an audio mixer (AudioMixer) and Tinyalsa created when the external playing device is started. The AudioMixer and Tinyalsa run in the HAL, between the SOC chip and the external playing device.
Since the related contents in this embodiment have been described in the foregoing two embodiments, each step in this embodiment may refer to the description of the foregoing two embodiments, and will not be described again below.
Referring to fig. 5C, the process may include the following steps:
step 601, the SOC chip receives the audio data in the second format and converts the audio data in the second format into the audio data in the first format.
In step 602, the mute data control module constructs mute data in the first format when the SOC chip receives audio data in the second format.
Step 603, the mute data control module outputs the mute data to the fourth AudioTrack.
In step 604, the fourth AudioTrack receives the mute data from the mute data control module, and outputs the mute data to the third AudioFlinger.
Step 605, the third AudioTrack receives the audio data in the first format from the audio data control module.
Step 606, the third AudioTrack outputs the audio data to the third AudioFlinger.
Step 607, the third AudioFlinger receives the audio data and the mute data, and performs audio mixing processing on the audio data and the mute data to obtain twelfth audio data.
Step 608, the third AudioFlinger outputs the twelfth audio data to the AudioMixer.
In step 609, Tinyalsa acquires the audio data in the first format from the SOC chip when the AudioMixer receives the twelfth audio data.
Step 610, Tinyalsa outputs the audio data in the first format to the AudioMixer.
Step 611, the AudioMixer receives the audio data in the first format and the twelfth audio data output by Tinyalsa, and performs audio mixing processing on the audio data in the first format and the twelfth audio data to obtain thirteenth audio data.
Step 612, the AudioMixer outputs the thirteenth audio data to the external playing device for playing.
This completes the description of how the display device plays audio data in the first format and the second format.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (12)

1. A display device, comprising:
a display;
a controller coupled to the display, comprising at least: a first AudioTrack, a first AudioFlinger and a second AudioFlinger arranged between an audio data control module and a system on a chip (SOC) chip, and a simplified advanced Linux audio framework Tinyalsa created when an external playing device is started; the Tinyalsa runs in the HAL and is positioned between the SOC chip and the external playing device;
the first AudioTrack is configured to receive the audio data in the first format input from the audio data control module, copy the audio data, and output the original audio data and the copied audio data to the first AudioFlinger and the second AudioFlinger, respectively;
the first audioFlinger is used for receiving the audio data, performing sound mixing processing on the audio data to obtain first audio data, and outputting the first audio data to the SOC chip;
the second audioFlinger is used for receiving the audio data, performing sound mixing processing on the audio data to obtain second audio data, and writing the second audio data into the HAL;
and the Tinyalsa is used for acquiring the first audio data from the SOC chip and outputting the first audio data to the external playing device for playing when the second audio data is written into the HAL.
2. The display device of claim 1, wherein the first format is a Pulse Code Modulation (PCM) format.
3. The display device of claim 1, wherein the Tinyalsa is released when the external playing device changes from being turned on to being turned off or disconnected.
4. A display device, comprising:
a display;
a controller coupled to the display, comprising at least: a mute data control module, a second AudioTrack arranged between the mute data control module and the SOC chip and associated with the mute data control module, a first AudioFlinger, a second AudioFlinger, and Tinyalsa created when the external playing device is started; the Tinyalsa runs in the HAL and is positioned between the SOC chip and the external playing device;
the mute data control module is used for constructing mute data in a first format and outputting the mute data to the second AudioTrack when the SOC chip receives audio data in a second format; the playing time length of the mute data is equal to that of the audio data in the second format;
the second AudioTrack is configured to receive the mute data input from the mute data control module, copy the mute data, and output the original mute data and the copied mute data to the first AudioFlinger and the second AudioFlinger, respectively;
the first audioFlinger is used for receiving the mute data, performing sound mixing processing on the mute data to obtain third audio data, and outputting the third audio data to the SOC chip;
the second audioFlinger is used for receiving the mute data, performing sound mixing processing on the mute data to obtain fourth audio data, and writing the fourth audio data into the HAL;
the SOC chip is used for receiving the audio data in the second format, converting the audio data in the second format into audio data in the first format, and performing sound mixing processing on the audio data converted into the first format and the third audio data to obtain fifth audio data when receiving the third audio data;
and the Tinyalsa is used for acquiring the fifth audio data from the SOC chip and outputting the fifth audio data to the external playing device for playing when the fourth audio data is written into the HAL.
5. A display device, comprising:
a display;
a controller coupled to the display, comprising at least: a third AudioTrack and a third audioFlinger which are arranged between the audio data control module and the SOC chip, a fourth AudioTrack which is arranged between the mute data control module and the SOC chip and is associated with the mute data control module, and an audio mixer;
when the external playing equipment is started, Tinyalsa is created; the AudioMixer and the Tinyalsa run in a HAL between the SOC chip and the external playback device;
the SOC chip receives audio data in a second format and converts the audio data in the second format into audio data in a first format;
when the SOC chip receives the audio data in the second format, the mute data control module constructs mute data in the first format and outputs the mute data to the fourth Audio track; the playing time length of the mute data is equal to that of the audio data in the second format;
the fourth AudioTrack receives the mute data input by the mute data control module and outputs the mute data to the third audioFlinger;
the third AudioTrack receives the audio data in the first format input by the audio data control module and outputs the audio data to the third audioFlinger;
the third audioFlinger receives the audio data and the mute data, performs sound mixing processing on the audio data and the mute data to obtain twelfth audio data, and outputs the twelfth audio data to the audioMixer;
the Tinyalsa acquires the audio data in the first format from the SOC chip and outputs the audio data to the AudioMixer when the AudioMixer receives the twelfth audio data;
and the AudioMixer receives the audio data in the first format and the twelfth audio data output by the Tinyalsa, performs audio mixing processing on the audio data in the first format and the twelfth audio data to obtain thirteenth audio data, and outputs the thirteenth audio data to the external playing device for playing.
6. A display device, comprising:
a display;
a controller coupled to the display, comprising at least: a fourth AudioTrack and a third audioFlinger which are arranged between the mute data control module and the SOC chip and are associated with the mute data control module, and an audio mixer audioMixer and Tinyalsa which are created when an external playing device is started; the AudioMixer and the Tinyalsa run in a HAL between the SOC chip and the external playback device;
the mute data control module is used for constructing mute data in a first format and outputting the mute data to the fourth AudioTrack when the SOC chip receives audio data in a second format; the playing time length of the mute data is equal to that of the audio data in the second format;
the fourth AudioTrack is configured to receive the mute data input from the mute data control module, and output the mute data to the third audioFlinger;
the third audioFlinger is used for receiving the mute data, performing sound mixing processing on the mute data to obtain seventh audio data, and outputting the seventh audio data to the audioMixer;
the SOC chip is used for receiving the audio data in the second format and converting the audio data in the second format into the audio data in the first format;
the Tinyalsa is configured to obtain the audio data in the first format from the SOC chip and output the audio data to the AudioMixer when the AudioMixer receives the seventh audio data;
the AudioMixer is configured to receive the audio data in the first format and the seventh audio data output by Tinyalsa, perform audio mixing processing on the audio data in the first format and the seventh audio data to obtain eighth audio data, and output the eighth audio data to the external playing device for playing.
7. An audio playing method, comprising:
when the system is started, starting a first AudioTrack, a first AudioFlinger and a second AudioFlinger which are arranged between an audio data control module and an SOC chip;
when the external playing equipment is started, Tinyalsa is created; the Tinyalsa runs in the HAL and is positioned between the SOC chip and the external playing equipment;
the first AudioTrack receives audio data in a first format input by the audio data control module, copies the audio data, and outputs original audio data and copied audio data to the first audioFlinger and the second audioFlinger respectively;
the first AudioFlinger receives the audio data, performs sound mixing processing on the audio data to obtain first audio data, and outputs the first audio data to the SOC chip;
the second audioFlinger receives the audio data, performs sound mixing processing on the audio data to obtain second audio data, and writes the second audio data into the HAL;
and when the second audio data is written into the HAL, the Tinyalsa acquires the first audio data from the SOC chip and outputs the first audio data to the external playing device for playing.
8. The method of claim 7, wherein the first format is a Pulse Code Modulation (PCM) format.
9. The method of claim 7, wherein the Tinyalsa is released when the external playing device changes from being turned on to being turned off or disconnected.
10. An audio playing method, comprising:
when the system is started, starting a mute data control module, and a second AudioTrack, a first AudioFlinger and a second AudioFlinger which are arranged between the mute data control module and an SOC chip and are associated with the mute data control module;
when the external playing device is started, creating a Tinyalsa; the Tinyalsa runs in the HAL and is positioned between the SOC chip and the external playing device;
when the SOC chip receives audio data in a second format, the mute data control module constructs mute data in a first format and outputs the mute data to the second AudioTrack; the playing duration of the mute data is equal to that of the audio data in the second format;
the second AudioTrack receives the mute data input by the mute data control module, copies the mute data, and outputs the original mute data and the copied mute data to the first AudioFlinger and the second AudioFlinger respectively;
the first AudioFlinger receives the mute data, performs audio mixing processing on the mute data to obtain third audio data, and outputs the third audio data to the SOC chip;
the second AudioFlinger receives the mute data, performs audio mixing processing on the mute data to obtain fourth audio data, and writes the fourth audio data into the HAL;
the SOC chip receives the audio data in the second format, converts the audio data in the second format into audio data in the first format, and, upon receiving the third audio data, performs audio mixing processing on the converted audio data and the third audio data to obtain fifth audio data;
and when the HAL writes the fourth audio data, the Tinyalsa acquires the fifth audio data from the SOC chip and outputs the fifth audio data to the external playing device for playing.
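Claim 10 hinges on constructing first-format mute data whose playback duration matches the second-format audio. For signed linear PCM this is simply an all-zero buffer sized from duration, sample rate, channel count, and sample width. The sketch below is illustrative; the function name and the 48 kHz stereo 16-bit defaults are assumptions, not values from the patent.

```python
def make_mute_pcm(duration_s, sample_rate=48000, channels=2, bytes_per_sample=2):
    """Build a silent PCM buffer whose playback length equals duration_s.

    For signed linear PCM, silence is all-zero bytes, so the buffer is
    fully determined by frame count * channels * sample width.
    """
    n_frames = int(duration_s * sample_rate)
    return bytes(n_frames * channels * bytes_per_sample)

# Half a second of 48 kHz stereo 16-bit silence:
mute = make_mute_pcm(0.5)
assert len(mute) == 24000 * 2 * 2   # frames * channels * bytes per sample
assert set(mute) == {0}             # every byte is zero
```

Because the mute buffer's duration equals the second-format audio's duration, the silence-only path and the decoded-audio path finish in step, which is what keeps the two playback outputs synchronized.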
11. An audio playing method, comprising:
when the system is started, starting a third AudioTrack and a third AudioFlinger which are arranged between an audio data control module and an SOC chip, a fourth AudioTrack which is arranged between a mute data control module and the SOC chip and is associated with the mute data control module, and an AudioMixer;
when the external playing device is started, creating a Tinyalsa; the AudioMixer and the Tinyalsa run in a HAL between the SOC chip and the external playing device;
the SOC chip receives audio data in a second format and converts the audio data in the second format into audio data in a first format;
when the SOC chip receives the audio data in the second format, the mute data control module constructs mute data in the first format and outputs the mute data to the fourth AudioTrack; the playing duration of the mute data is equal to that of the audio data in the second format;
the fourth AudioTrack receives the mute data input by the mute data control module and outputs the mute data to the third AudioFlinger;
the third AudioTrack receives the audio data in the first format input by the audio data control module and outputs the audio data to the third AudioFlinger;
the third AudioFlinger receives the audio data and the mute data, performs audio mixing processing on the audio data and the mute data to obtain twelfth audio data, and outputs the twelfth audio data to the AudioMixer;
when the AudioMixer receives the twelfth audio data, the Tinyalsa acquires the audio data in the first format from the SOC chip and outputs the audio data to the AudioMixer;
and the AudioMixer receives the audio data in the first format output by the Tinyalsa and the twelfth audio data, performs audio mixing processing on them to obtain thirteenth audio data, and outputs the thirteenth audio data to the external playing device for playing.
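The mixing steps in claim 11 rely on a basic property of linear PCM mixing: adding all-zero mute samples leaves the real audio unchanged, so the mute stream contributes timing but no sound. A minimal sketch of such a saturating mixer (illustrative only; `mix_pcm` is not a name from the patent or Android):

```python
def mix_pcm(a, b):
    """Saturating add of two equal-length 16-bit PCM sample sequences."""
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

audio = [1000, -2000, 30000, -30000]
silence = [0, 0, 0, 0]

# Mixing decoded audio with the silence-derived stream reproduces the audio:
assert mix_pcm(audio, silence) == audio

# Mixing two loud streams saturates (clips) instead of integer-wrapping:
assert mix_pcm([30000, -30000], [10000, -10000]) == [32767, -32768]
```

This is why the claims can route mute data through the AudioFlinger/AudioMixer chain purely as a pacing signal: after mixing, the samples delivered to the external playing device are still the decoded audio.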
12. An audio playing method, comprising:
when the system is started, starting a fourth AudioTrack, a third AudioFlinger and an AudioMixer, which are arranged between a mute data control module and an SOC chip and are associated with the mute data control module;
when the external playing device is started, creating a Tinyalsa; the AudioMixer and the Tinyalsa run in a HAL between the SOC chip and the external playing device;
when the SOC chip receives audio data in a second format, the mute data control module constructs mute data in a first format and outputs the mute data to the fourth AudioTrack; the playing duration of the mute data is equal to that of the audio data in the second format;
the fourth AudioTrack receives the mute data input by the mute data control module and outputs the mute data to the third AudioFlinger;
the third AudioFlinger receives the mute data, performs audio mixing processing on the mute data to obtain seventh audio data, and outputs the seventh audio data to the AudioMixer;
the SOC chip receives the audio data in the second format and converts the audio data in the second format into audio data in the first format;
when the AudioMixer receives the seventh audio data, the Tinyalsa acquires the audio data in the first format from the SOC chip and outputs the audio data to the AudioMixer;
and the AudioMixer receives the audio data in the first format output by the Tinyalsa and the seventh audio data, performs audio mixing processing on them to obtain eighth audio data, and outputs the eighth audio data to the external playing device for playing.
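Claims 10 and 12 both gate the Tinyalsa read on the arrival of the mute-path data: decoded PCM is pulled from the SOC chip only once the silence-derived write has landed, which keeps the external device's output in step with the internal path. A toy model of that gating (illustrative; the queue names stand in for the HAL write path and the SOC's decoded buffer, and are not from the patent):

```python
import queue

silence_path = queue.Queue()  # stands in for the AudioFlinger -> HAL writes
soc_buffer = queue.Queue()    # stands in for the SOC chip's decoded PCM

def tinyalsa_read():
    """Block until the silence path delivers, then fetch the SOC data.

    Models the claim's ordering: the HAL-side reader is paced by the
    mute-data write, not by the SOC decoder alone.
    """
    silence_path.get()        # wait for the pacing (mute-data) write
    return soc_buffer.get()   # only then pull the decoded frame

soc_buffer.put(b"decoded-pcm-frame")
silence_path.put(b"\x00" * 4)  # the mute write unblocks the reader
assert tinyalsa_read() == b"decoded-pcm-frame"
```

In a real HAL this pacing would come from the audio pipeline's own write callbacks rather than an explicit queue, but the ordering constraint is the same: no decoded frame reaches the external playing device ahead of its mute-path counterpart.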
CN202010459723.3A 2020-05-27 2020-05-27 Audio playing method and display device Active CN111654743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010459723.3A CN111654743B (en) 2020-05-27 2020-05-27 Audio playing method and display device


Publications (2)

Publication Number Publication Date
CN111654743A CN111654743A (en) 2020-09-11
CN111654743B true CN111654743B (en) 2022-04-22

Family

ID=72348625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010459723.3A Active CN111654743B (en) 2020-05-27 2020-05-27 Audio playing method and display device

Country Status (1)

Country Link
CN (1) CN111654743B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113518258B (en) * 2021-05-14 2023-06-30 北京天籁传音数字技术有限公司 Low-delay full-scene audio implementation method and device and electronic equipment
CN113515406B (en) * 2021-09-15 2021-12-21 北京麟卓信息科技有限公司 Audio playing fault processing method and device in android operating environment
CN116665625A (en) * 2023-07-28 2023-08-29 成都赛力斯科技有限公司 Audio signal processing method, device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102932567A (en) * 2012-11-19 2013-02-13 东莞宇龙通信科技有限公司 Terminal and audio processing method
CN106325804A (en) * 2015-07-03 2017-01-11 深圳市中兴微电子技术有限公司 Audio processing method and system
CN106504759A (en) * 2016-11-04 2017-03-15 维沃移动通信有限公司 Mixed audio processing method and terminal device
CN107301035A (en) * 2016-04-15 2017-10-27 中兴通讯股份有限公司 Audio synchronous recording and playback system and method based on the Android system
CN107329726A (en) * 2017-06-09 2017-11-07 青岛海信电器股份有限公司 Method and apparatus for processing input audio data in the Android system
CN109144464A (en) * 2018-08-27 2019-01-04 歌尔科技有限公司 Audio output method, apparatus and Android device
CN110175081A (en) * 2019-05-30 2019-08-27 睿云联(厦门)网络通讯技术有限公司 Optimization system and method for Android audio playback
WO2020080867A1 (en) * 2018-10-18 2020-04-23 Samsung Electronics Co., Ltd. Display device and control method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103379232B (en) * 2012-04-13 2015-07-08 展讯通信(上海)有限公司 Communication server, communication terminal and voice communication method
CN102881305A (en) * 2012-09-21 2013-01-16 北京君正集成电路股份有限公司 Method and device for playing audio file
CN109119100A (en) * 2017-06-26 2019-01-01 北京嘀嘀无限科技发展有限公司 Storage method, storage system and computer device for audio data or video data
CN107562405B (en) * 2017-08-18 2020-09-22 Oppo广东移动通信有限公司 Audio playing control method and device, storage medium and mobile terminal
CN108347529B (en) * 2018-01-31 2021-02-23 维沃移动通信有限公司 Audio playing method and mobile terminal


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Android Audio System, Part 2: AudioFlinger; 七夜_雪; https://blog.csdn.net/louiswangbing/article/details/6620158; 2011-07-20; full text *
Android's 10-millisecond problem: an analysis of the audio-path latency defect in the Android system; 郭风朴; https://blog.csdn.net/Guofengpu/article/details/78135728; 2017-09-29; full text *
audiotrackplayer demo; agargenta; https://github.com/twitter-university/AudioTrackPlayerDemo; 2012-05-12; full text *
An in-depth look at the Android audio framework AudioFlinger and the mix process; 码农突围 (public account); https://blog.csdn.net/Ch97CKd/article/details/78641457; 2017-11-27; main text pp. 1-11 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant