CN112601109A - Audio playing method and display device - Google Patents


Info

Publication number
CN112601109A
CN112601109A
Authority
CN
China
Prior art keywords: data, decoded, audio data, buffer area, audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011378391.2A
Other languages
Chinese (zh)
Inventor
李现旗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd

Classifications

    All classifications fall under H (Electricity) → H04 (Electric communication technique) → H04N (Pictorial communication, e.g. television) → H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):
    • H04N21/233 — Processing of audio elementary streams (via H04N21/20, Servers specifically adapted for the distribution of content; H04N21/23, Processing of content or additional data; elementary server operations; server middleware)
    • H04N21/4307 — Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen (via H04N21/40, Client devices; H04N21/43, Processing of content or additional data; H04N21/4302, Content synchronisation processes)
    • H04N21/439 — Processing of audio elementary streams (via H04N21/40, Client devices; H04N21/43, Processing of content or additional data)

Abstract

The application provides an audio playing method and a display device. When the display device plays a video, audio data must first be written into a data buffer; when the amount of data remaining in the to-be-decoded buffer falls below a preset data threshold, the audio data in the data buffer is written into the to-be-decoded buffer. The audio data in the to-be-decoded buffer is then decoded, and the decoded data is written into the audio data area so that a peripheral can play the decoded data from the audio data area. The technical scheme of the application detects the storage state of the audio data area in time and thereby controls the writing of audio data in time, ensuring that the data buffer and the to-be-decoded buffer are replenished as their audio data is consumed. This prevents the video picture from stuttering when audio data in the data buffer and the to-be-decoded buffer is consumed too quickly and cannot be replenished in time.

Description

Audio playing method and display device
Technical Field
The present application relates to the field of display technologies, and in particular, to an audio playing method and a display device.
Background
When the display device plays a video, the audio in the video can be played through a peripheral, such as a speaker built into the display device, a wired earphone connected to the display device, or a Bluetooth sound box connected to the display device; it may also be played through a power amplifier or similar equipment. The player in the display device can perform software decoding or hardware decoding on the audio stream according to the type of the audio stream in the video, the decoding capability of the player, and the audio stream types supported by the peripheral or the power amplifier. With software decoding, the player directly outputs a PCM (Pulse Code Modulation) audio stream; with hardware decoding, the player outputs a RAW audio stream, which is then decoded into a PCM audio stream by the hardware decoder.
Generally, when a display device plays a video, the audio data is first placed into a data buffer; an audio hardware interface then transfers the audio data from the data buffer to a to-be-decoded buffer, and an audio driver decodes the audio data in the to-be-decoded buffer and places the decoded data into an audio data area. The peripheral consumes the audio data in the audio data area and thereby plays the audio content of the video. When the data buffer is full, the display device transfers the audio data to the to-be-decoded buffer. When the display device is connected to a peripheral, it needs to decode the RAW data into PCM data, apply processing such as equalization and sound effects, and output the PCM data to the peripheral. When the display device is connected to a power amplifier, it decodes the RAW data into PCM data and outputs the PCM data directly to the power amplifier.
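The three-stage path just described (data buffer → to-be-decoded buffer → audio data area) can be sketched as follows. This is a minimal, hypothetical Python illustration of the data flow only; the class name, buffer capacities, and queue-based transfer are assumptions made for clarity, not the patent's implementation:

```python
from collections import deque

class AudioPipeline:
    """Illustrative three-stage audio path:
    data buffer -> to-be-decoded buffer -> audio data area."""

    def __init__(self, data_buf_cap=64, decode_buf_cap=16):
        self.data_buffer = deque(maxlen=data_buf_cap)      # filled by the player
        self.decode_buffer = deque(maxlen=decode_buf_cap)  # filled by the audio hardware interface
        self.audio_data_area = deque()                     # decoded PCM, consumed by the peripheral

    def write_frame(self, frame):
        """Player writes one frame of (possibly RAW) audio into the data buffer."""
        self.data_buffer.append(frame)

    def transfer(self):
        """Audio hardware interface moves frames from the data buffer
        to the to-be-decoded buffer while there is room."""
        while self.data_buffer and len(self.decode_buffer) < self.decode_buffer.maxlen:
            self.decode_buffer.append(self.data_buffer.popleft())

    def decode(self):
        """Audio driver decodes the queued frames and places the
        resulting PCM into the audio data area."""
        while self.decode_buffer:
            raw = self.decode_buffer.popleft()
            self.audio_data_area.append(("PCM", raw))
```

For example, writing two frames, transferring, and decoding leaves both upstream buffers empty and two PCM frames in the audio data area ready for the peripheral.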
When the display device plays RAW data while connected to a power amplifier, the PCM data in the audio data area is consumed quickly because no equalization or sound-effect processing is applied to it, so the data level in the audio data area stays at a low watermark. When writing audio data into the to-be-decoded buffer, if the audio data area is detected to be at a low watermark, the display device keeps consuming the data in the data buffer and the to-be-decoded buffer, and once the data buffer is empty, the display device pauses audio playback. Because of the audio-video synchronization constraint in the display device, stopping the audio also stops the picture. The display device resumes playing the audio and the picture only after waiting until the data buffer again holds sufficient data. If the audio data in the audio data area is consumed too quickly and stays at a low watermark, the display device repeatedly pauses and resumes playback, the played video picture stutters, and the user's viewing experience suffers.
Disclosure of Invention
The application provides an audio playing method and a display device, aiming to solve the problem that, with existing display devices, the video picture stutters during playback because the audio data in the audio data area is consumed too quickly and cannot be replenished.
In a first aspect, the present application provides a display device comprising:
a display;
a controller configured to:
writing the audio data to be written into the data buffer, wherein the audio data to be written represents all the audio data that needs to be written into the data buffer when the display device plays audio;
writing the written audio data in the data buffer into the to-be-decoded buffer when the amount of data currently remaining in the to-be-decoded buffer is smaller than a preset data threshold;
decoding the audio data in the to-be-decoded buffer and writing the decoded data into an audio data area;
and controlling the display device to play the decoded data in the audio data area through the peripheral.
In some embodiments, the controller is further configured to:
detecting whether the playable duration of the decoded data in the audio data area is greater than or equal to a preset duration;
and stopping writing audio data into the to-be-decoded buffer when the playable duration of the decoded data in the audio data area is greater than or equal to the preset duration.
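The check above compares the playable duration of the decoded PCM in the audio data area against a preset duration. A minimal sketch, assuming 16-bit stereo PCM at 48 kHz (illustrative defaults; the patent does not fix a sample format), could compute it like this:

```python
def playable_duration_s(buffered_bytes, sample_rate=48_000, channels=2,
                        bytes_per_sample=2):
    """Seconds of audio that the decoded PCM currently in the audio data
    area can sustain. Format parameters are illustrative defaults, not
    values taken from the patent."""
    return buffered_bytes / (sample_rate * channels * bytes_per_sample)

def should_pause_writing(buffered_bytes, preset_duration_s, **fmt):
    """Stop writing into the to-be-decoded buffer once the audio data
    area already holds at least the preset duration of audio."""
    return playable_duration_s(buffered_bytes, **fmt) >= preset_duration_s
```

With these defaults, one second of audio corresponds to 192,000 bytes, so a buffer holding 384,000 bytes against a 1.5-second preset duration would pause writing.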
In some embodiments, the controller is further configured to:
setting a preset data threshold according to the type of the audio data to be written when the playable duration of the decoded data in the audio data area is less than a preset duration;
detecting, when the amount of audio data in the data buffer is greater than 0, whether the amount of data currently remaining in the to-be-decoded buffer is greater than or equal to the preset data threshold;
and writing the audio data in the data buffer into the to-be-decoded buffer when the amount of data currently remaining in the to-be-decoded buffer is smaller than the preset data threshold.
In some embodiments, the controller is further configured to:
calculating a waiting time according to the amount of data currently remaining in the to-be-decoded buffer, the preset data threshold, and the decoding rate, when the amount of data currently remaining in the to-be-decoded buffer is greater than or equal to the preset data threshold;
and detecting again, after the waiting time elapses, whether the amount of audio data in the data buffer is greater than 0.
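The text above names only the three inputs to the waiting-time calculation (remaining amount, threshold, decoding rate) without giving the formula. One plausible formula, offered purely as an assumption, is the time the decoder needs to drain the excess back down to the threshold:

```python
def waiting_time_s(remaining_bytes, threshold_bytes, decode_rate_bytes_per_s):
    """Hypothetical waiting time before re-checking the data buffer:
    how long the decoder takes to consume the data above the preset
    threshold. The formula is an assumed interpretation; the patent
    only states the three inputs."""
    excess = remaining_bytes - threshold_bytes
    return max(excess, 0) / decode_rate_bytes_per_s
```

For instance, with 1,000 bytes remaining, a 400-byte threshold, and a decode rate of 200 bytes/s, the controller would wait 3 seconds before re-checking.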
In some embodiments, the controller is further configured to:
and stopping writing audio data into the to-be-decoded buffer when the amount of audio data in the data buffer is less than or equal to 0 and all the audio data to be written has been written into the data buffer.
In some embodiments, the controller is further configured to:
acquiring the minimum frame number of the audio data to be written;
and determining, according to the minimum frame number, the minimum number of times the audio data to be written is written into the data buffer.
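The patent does not give the formula relating the minimum frame number to the minimum number of writes. One natural reading, offered purely as an assumption, is a ceiling division of the total frames by the number of frames written per call:

```python
import math

def min_write_count(total_frames, frames_per_write):
    """Hypothetical minimum number of write operations needed to push
    all frames into the data buffer when each write carries at most
    `frames_per_write` frames. Ceiling division is an assumed
    interpretation of the patent's wording, not its stated method."""
    return math.ceil(total_frames / frames_per_write)
```

For example, 10 frames written 3 at a time require at least 4 writes.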
In some embodiments, the controller is further configured to:
and continuing to write the audio data to be written into the data buffer when the amount of data in the data buffer is less than or equal to 0 and the audio data to be written has not all been written into the data buffer, until the data buffer is full or all the data to be written has been written into the data buffer.
In some embodiments, the controller is further configured to:
and setting, when the audio data to be written is original RAW data, the preset data threshold to S = F × 3, where S denotes the preset data threshold and F denotes the size of each frame of data in the original RAW data.
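The embodiment above fixes the RAW-data threshold at three frames' worth of data. A trivial helper makes the relation S = F × 3 concrete (the frame size used in the usage note is an arbitrary example, not a value from the patent):

```python
def raw_threshold_bytes(frame_size_bytes):
    """Preset data threshold for RAW input per the embodiment above:
    three frames' worth of data, i.e. S = F * 3."""
    return frame_size_bytes * 3
```

For a hypothetical 1,536-byte RAW frame, the threshold would be 4,608 bytes.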
In some embodiments, the controller is further configured to:
decoding, when the audio data to be written is original RAW data, the original RAW data in the to-be-decoded buffer into Pulse Code Modulation (PCM) data;
and writing the decoded PCM data into the audio data area.
In a second aspect, the present application further provides an audio playing method, including:
writing the audio data to be written into the data buffer, wherein the audio data to be written represents all the audio data that needs to be written into the data buffer when the display device plays audio;
writing the audio data in the data buffer into the to-be-decoded buffer when the amount of data currently remaining in the to-be-decoded buffer is smaller than a preset data threshold;
decoding the audio data in the to-be-decoded buffer and writing the decoded data into an audio data area;
and controlling the display device to play the decoded data in the audio data area through the peripheral.
As can be seen from the above, the present application provides an audio playing method and a display device. When the display device plays a video, the audio data in the video must be written into a data buffer, and when the amount of data currently remaining in the to-be-decoded buffer is smaller than a preset data threshold, the audio data in the data buffer is written into the to-be-decoded buffer; the audio data in the to-be-decoded buffer is decoded and the decoded data is written into the audio data area; finally, the display device is controlled to play the decoded data in the audio data area through the peripheral, so that the display device plays the audio content while playing the video picture. The technical scheme of the application detects the storage state of the audio data area and of the to-be-decoded buffer in time, and thereby controls the writing of the audio data to be written in time, ensuring that the data buffer and the to-be-decoded buffer are replenished as their audio data is consumed. This prevents the video picture from stuttering when the display device plays a video and the audio data in the data buffer and the to-be-decoded buffer would otherwise be consumed too quickly and replenished insufficiently.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 illustrates a schematic diagram of a usage scenario of a display device according to some embodiments;
fig. 2 illustrates a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of the display apparatus 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in the display device 200 according to some embodiments;
FIG. 5 illustrates a schematic diagram of a process of storing audio data in the display device 200 according to some embodiments;
FIG. 6 illustrates a process flow diagram for controller 250 according to some embodiments;
FIG. 7 illustrates a second process flow diagram of controller 250 according to some embodiments;
FIG. 8 illustrates a third process flow diagram for controller 250 according to some embodiments;
FIG. 9 illustrates a fourth process flow diagram of controller 250 according to some embodiments;
FIG. 10 illustrates a fifth process flow diagram of controller 250 according to some embodiments;
FIG. 11 shows a schematic diagram of an audio playback process in a display device 200 according to some embodiments;
FIG. 12 illustrates a flow diagram of an audio playback method according to some embodiments.
Detailed Description
To make the purpose and embodiments of the present application clearer, exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only a part of the embodiments of the present application, not all of them.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
FIG. 1 illustrates a schematic diagram of a usage scenario of a display device according to some embodiments. As shown in fig. 1, the display apparatus 200 is in data communication with a server 400, and a user can operate the display apparatus 200 through the smart device 300 or the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller that communicates with the display device through at least one of infrared protocol communication, Bluetooth protocol communication, or other short-distance communication methods, and controls the display device 200 wirelessly or by wire. The user may control the display apparatus 200 by inputting user instructions through at least one of keys on the remote controller, voice input, control panel input, and the like.
In some embodiments, the smart device 300 may include any of a mobile terminal, a tablet, a computer, a laptop, an AR/VR device, and the like.
In some embodiments, the smart device 300 may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device.
In some embodiments, the smart device 300 and the display device may also be used for communication of data.
In some embodiments, the display device 200 may also be controlled in ways other than through the control apparatus 100 and the smart device 300; for example, the user's voice instruction may be received directly by a module configured inside the display device 200, or by a voice control apparatus provided outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various content and interactions to the display apparatus 200. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers.
In some embodiments, software steps executed by one step execution agent may be migrated on demand to another step execution agent in data communication therewith for execution. Illustratively, software steps performed by the server may be migrated to be performed on a display device in data communication therewith, and vice versa, as desired.
Fig. 2 illustrates a block diagram of a hardware configuration of the control apparatus 100 according to some embodiments. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user and convert the operation instruction into an instruction recognizable and responsive by the display device 200, serving as an interaction intermediary between the user and the display device 200.
In some embodiments, the communication interface 130 is used for external communication, and includes at least one of a WIFI chip, a bluetooth module, NFC, or an alternative module.
In some embodiments, the user input/output interface 140 includes at least one of a microphone, a touchpad, a sensor, a key, or an alternative module.
Fig. 3 illustrates a hardware configuration block diagram of a display device 200 according to some embodiments.
In some embodiments, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
In some embodiments the controller comprises a central processor, a video processor, an audio processor, a graphics processor, a RAM, a ROM, a first interface to an nth interface for input/output.
In some embodiments, the display 260 includes a display screen component for displaying pictures and a driving component for driving image display; it receives image signals output from the controller and displays video content, image content, menu manipulation interfaces, user manipulation UI interfaces, and the like.
In some embodiments, the display 260 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, the tuner demodulator 210 receives broadcast television signals by wired or wireless reception, and demodulates audio/video signals and EPG data signals from a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the controller 250 and the tuner demodulator 210 may be located in separate devices; that is, the tuner demodulator 210 may also be located in a device external to the main device containing the controller 250, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or other actionable control. The operations related to the selected object are: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon.
In some embodiments, the controller comprises at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), first to nth interfaces for input/output, a communication bus, and the like.
The CPU executes the operating system and application program instructions stored in the memory, and executes various applications, data, and content according to interactive instructions received from external input, so as to finally display and play various audio and video content. The CPU may include a plurality of processors, for example a main processor and one or more sub-processors.
In some embodiments, the graphics processor generates various graphics objects, such as at least one of icons, operation menus, and graphics displayed in response to user input instructions. The graphics processor includes an arithmetic unit that performs operations on the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer that renders the objects obtained from the arithmetic unit for display on the display.
In some embodiments, the video processor receives an external video signal and performs at least one of decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, and other video processing according to the standard codec protocol of the input signal, to obtain a signal that can be displayed or played directly on the display device 200.
In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module demultiplexes the input audio/video data stream. The video decoding module processes the demultiplexed video signal, including decoding and scaling. The image synthesis module superimposes and mixes the GUI signal, input by the user or generated by the graphics generator, with the scaled video image to generate an image signal for display. The frame rate conversion module converts the frame rate of the input video. The display formatting module converts the frame-rate-converted video output signal into a signal conforming to the display format, such as an output RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in the speaker.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on display 260, and the user input interface receives the user input commands through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the electronic device, where the control may include at least one of an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc. visual interface elements.
In some embodiments, user interface 280 is an interface that may be used to receive control inputs (e.g., physical buttons on the body of the display device, or the like).
In some embodiments, the system of a display device may include a kernel (Kernel), a command parser (shell), a file system, and application programs. The kernel, shell, and file system together make up the basic operating system structure that allows users to manage files, run programs, and use the system. After power-on, the kernel is started, the kernel space is activated, the hardware is abstracted, hardware parameters are initialized, and virtual memory, the scheduler, signals, and inter-process communication (IPC) are run and maintained. After the kernel is started, the shell and the user applications are loaded. After an application is started, it is loaded and forms a process.
Referring to fig. 4, in some embodiments, the system is divided into four layers, which are an Application (Applications) layer (abbreviated as "Application layer"), an Application Framework (Application Framework) layer (abbreviated as "Framework layer"), an Android runtime (Android runtime) and system library layer (abbreviated as "system runtime library layer"), and a kernel layer from top to bottom.
In some embodiments, at least one application program runs in the application program layer, and the application programs may be windows (windows) programs carried by an operating system, system setting programs, clock programs or the like; or an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an Application Programming Interface (API) and a programming framework for the applications in the application layer. The application framework layer includes a number of predefined functions and acts as a processing center that decides how the applications in the application layer act. Through the API interface, an application can access the resources in the system and obtain the services of the system during execution.
As shown in fig. 4, in the embodiment of the present application, the application framework layer includes a manager (Managers), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) is used for interacting with all activities running in the system; the Location Manager (Location Manager) is used for providing the system service or application with the access of the system Location service; a Package Manager (Package Manager) for retrieving various information related to an application Package currently installed on the device; a Notification Manager (Notification Manager) for controlling display and clearing of Notification messages; a Window Manager (Window Manager) is used to manage the icons, windows, toolbars, wallpapers, and desktop components on a user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the various applications as well as general navigation fallback functions, such as controlling exit, opening, and fallback of applications. The window manager is used to manage all window programs, for example obtaining the size of the display screen, determining whether a status bar exists, locking the screen, taking screenshots, and controlling changes to the display window (for example, shrinking the window, displaying a shake, displaying a distortion, and the like).
In some embodiments, the system runtime library layer provides support for the layer above it, i.e., the framework layer. When the framework layer is used, the Android operating system runs the C/C++ libraries included in the system runtime library layer to implement the functions required by the framework layer.
In some embodiments, the kernel layer is the layer between hardware and software. As shown in fig. 4, the kernel layer includes at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a WiFi driver, a USB driver, an HDMI driver, sensor drivers (such as fingerprint sensor, temperature sensor, pressure sensor, etc.), and a power driver.
In some embodiments, after being started, the display device may directly enter the display interface of the signal source selected last time, or a signal source selection interface. The signal source may be a preset video-on-demand program, or at least one of an HDMI interface, a live TV interface, and the like. After the user selects a signal source, the display shows the content obtained from that source.
When the display device 200 plays a video, the audio information in the video may be played through a peripheral device, such as a speaker built into the display device 200, a wired earphone connected to the display device 200, a Bluetooth speaker connected to the display device 200, or a power amplifier. The player in the display device 200 may perform software decoding or hardware decoding of the audio stream, depending on the type of the audio stream in the video, the decoding capability of the player, the audio stream types supported by the peripheral or the power amplifier, and so on. With software decoding, the player directly outputs a PCM (Pulse Code Modulation) audio stream; with hardware decoding, the player outputs a RAW audio stream, which is then decoded into a PCM audio stream by the hardware decoder.
Generally, when playing a video, the display device 200 first puts audio data into a data buffer; the audio hardware interface then transports the audio data from the data buffer to a buffer to be decoded; and the audio driver decodes the audio data in the buffer to be decoded and puts the decoded data into an audio data area. The peripheral consumes the audio data in the audio data area, thereby playing the audio content of the video. When the data buffer is full, the display device 200 transports the audio data in it to the buffer to be decoded. When the display device 200 is connected to a peripheral device, it needs to decode the RAW data into PCM data, apply equalization, sound-effect processing, and the like, and output the PCM data to the peripheral. When the display device 200 is connected to a power amplifier, it decodes the RAW data into PCM data and outputs the PCM data to the power amplifier directly.
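The three-stage flow above can be sketched as a simple producer/consumer pipeline. This is an illustrative model only, not the patent's implementation: the buffer names follow fig. 5, and the transport and decode steps stand in for the audio hardware interface and the audio driver.

```python
from collections import deque

# Illustrative model of the three storage areas: databuffer,
# decoderbuffer and pcmbuffer (names follow fig. 5).
data_buffer = deque()     # filled by the player with audio data
decoder_buffer = deque()  # fed by the audio hardware interface
pcm_buffer = deque()      # decoded PCM, consumed by the peripheral

def transport(src, dst, n):
    """Move up to n frames from src to dst, as the hardware interface does."""
    moved = 0
    while src and moved < n:
        dst.append(src.popleft())
        moved += 1
    return moved

# Stage 1: the player writes frames into the data buffer.
data_buffer.extend("frame%d" % i for i in range(8))
# Stage 2: once full, data is carried to the buffer to be decoded.
transport(data_buffer, decoder_buffer, 4)
# Stage 3: the audio driver decodes and fills the audio data area
# (the decode itself is elided in this sketch).
while decoder_buffer:
    pcm_buffer.append(decoder_buffer.popleft())

print(len(data_buffer), len(pcm_buffer))  # 4 frames left, 4 decoded
```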
When the display device 200 plays RAW data and is connected to a power amplifier, the PCM data in the audio data area is not processed with equalization or sound effects, so it is consumed quickly, i.e., the amount of data in the audio data area stays at a low water level. When writing audio data into the buffer to be decoded, if the audio data area is detected to be at a low level, the display device 200 continuously consumes the data in the data buffer and the buffer to be decoded; once there is no data in the data buffer, the display device 200 pauses audio playback. Because of the audio-video synchronization constraint in the display device 200, stopping the audio also stops the image. The display device 200 resumes playing the audio and images only after the data in the data buffer is sufficient again. If the audio data in the audio data area is consumed too fast and stays at a low water level, the display device 200 repeatedly pauses and resumes, and the video picture stutters, which affects the viewing experience of the user.
In view of the above, the embodiments of the present application provide an audio playing method and a display device. The audio playing method can be applied to the display device 200. The display device 200 can detect, in time, the storage condition of the audio data area and of the buffer to be decoded, and thereby control the writing of the audio data to be written in time, ensuring that the audio data in the data buffer and in the buffer to be decoded is replenished as fast as it is consumed. This avoids video stuttering caused by the audio data in the data buffer and the buffer to be decoded being consumed too fast to be replenished in time when the display device 200 plays a video.
Fig. 5 shows a schematic diagram of a storage process of audio data in the display device 200 according to some embodiments. As shown in fig. 5, the display device 200 of the embodiment of the present disclosure may include three regions for storing audio data: a data buffer (databuffer), a buffer to be decoded (decoderbuffer), and an audio data region (pcmbuffer).
Generally, when the display device 200 plays a video, the controller 250 first writes the audio data of the video resource into the data buffer. In this embodiment, audio data that needs to be written into the data buffer but has not yet been written may be referred to as audio data to be written, and data that has been written into the data buffer may be referred to as audio data. Because the storage space of the data buffer is limited, it usually cannot store all of the audio data to be written at once.
Fig. 6 illustrates a process flow diagram of the controller 250 according to some embodiments. After the data buffer is full, the controller 250 may be triggered to write the audio data in the data buffer into the buffer to be decoded. In the embodiment of the present application, as shown in fig. 6, before the controller 250 writes audio data into the buffer to be decoded, it must ensure that the amount of data currently remaining in the buffer to be decoded is smaller than a preset data threshold. The audio data stored in the buffer to be decoded must be decoded before it can be played by the peripheral, so the audio data in the buffer to be decoded is continuously being consumed. After a certain amount of data is consumed, if the remaining audio data cannot meet the current playing requirement, audio data from the data buffer is written into the buffer to be decoded so that the amount of data there again meets the playing requirement. To determine whether the playing requirement can be met, the requirement is quantified as the preset data threshold: the controller detects whether the remaining amount of data in the buffer to be decoded is smaller than the preset data threshold, and if so, determines that the remaining amount cannot meet the playing requirement.
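A minimal sketch of this rule follows; the threshold value is a placeholder, not a figure from the patent.

```python
PRESET_THRESHOLD = 3  # placeholder threshold, in frames

def may_refill(decoder_remaining):
    """Write from the data buffer into the buffer to be decoded only
    while its remaining amount is below the preset data threshold."""
    return decoder_remaining < PRESET_THRESHOLD

# The remaining amount falls as the driver consumes frames; the refill
# is triggered exactly when it drops below the threshold.
states = [5, 4, 3, 2]
decisions = [may_refill(r) for r in states]
print(decisions)  # [False, False, False, True]
```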
Then, when the amount of data in the buffer to be decoded meets the playing requirement, the controller 250 decodes the audio data in the buffer to be decoded and writes the decoded data into the audio data area. Finally, the controller 250 outputs the decoded data in the audio data area to the peripheral and controls the display device 200 to play it through the peripheral, thereby playing the audio content.
Therefore, the display device 200 of the embodiment of the present application can detect the amount of data remaining in the buffer to be decoded, and on that basis control whether audio data is written into it. This ensures that the amount of audio data in the buffer to be decoded always meets the playing requirement, which in turn reduces the stuttering of audio playback and video images caused by the buffer to be decoded holding too little data.
Fig. 7 illustrates a second process flow diagram of the controller 250 according to some embodiments. In some embodiments, as shown in fig. 7, the controller 250 may further control whether to continue writing audio data into the buffer to be decoded by detecting whether the playing duration of the decoded data in the audio data area is greater than or equal to a preset duration. If it is, the current amount of data in the audio data area meets the playing requirement; for now, no decoded data needs to be written into the audio data area, the buffer to be decoded is not being drained, and the controller 250 may stop writing audio data into the buffer to be decoded. When the amount of data in the audio data area is too small to meet the playing requirement, decoded data is written into the audio data area again; this consumes the audio data in the buffer to be decoded, so the controller 250 must continue writing audio data into it. In addition, the controller 250 needs to detect whether the amount of data in the data buffer is greater than 0: a positive amount indicates that audio data currently exists in the data buffer, and that data may then continue to be written into the buffer to be decoded.
In the embodiment of the present application, the preset duration may be set according to the type of the audio data actually being played, for example 100 milliseconds. Moreover, upon detecting that the playing duration of the decoded data in the audio data area is greater than or equal to the preset duration, the controller 250 may wait for a period of time, for example 5 milliseconds, and then stop writing audio data into the buffer to be decoded.
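The play-duration gate can be sketched with the example figure above (a 100 ms preset duration); the PCM format parameters are assumptions used only to convert bytes to milliseconds.

```python
PRESET_PLAY_MS = 100  # example preset duration from the text

def play_duration_ms(pcm_bytes, rate=48000, channels=2, sample_bytes=2):
    """Playable duration of the decoded data in the audio data area,
    assuming 48 kHz stereo 16-bit PCM (an illustrative assumption)."""
    bytes_per_ms = rate * channels * sample_bytes / 1000  # 192 B/ms here
    return pcm_bytes / bytes_per_ms

def keep_writing(pcm_bytes):
    """Stop feeding the buffer to be decoded once the audio data area
    already holds at least the preset duration of decoded data."""
    return play_duration_ms(pcm_bytes) < PRESET_PLAY_MS

print(keep_writing(9600), keep_writing(19200))  # True False (50 ms vs 100 ms)
```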
Fig. 8 illustrates a third process flow diagram of the controller 250 according to some embodiments. In some embodiments, as shown in fig. 8, when the playing duration of the decoded data in the audio data area is less than the preset duration, the controller 250 may set the preset data threshold according to the type of the audio data to be written. For example, when the audio data to be written is original RAW data, the preset data threshold is set to S = F × 3, where S denotes the preset data threshold and F denotes the size of each frame of data in the original RAW data.
Then, when the amount of audio data in the data buffer is greater than 0, the controller detects whether the amount of data currently remaining in the buffer to be decoded is greater than or equal to the preset data threshold. In the embodiment of the present application, an amount of audio data in the data buffer greater than 0 indicates that the data buffer still holds audio data that has not been written into the buffer to be decoded. If, at this point, the amount of data in the buffer to be decoded cannot meet the playing requirement, i.e., the amount currently remaining is smaller than the preset data threshold, the remaining audio data in the data buffer may continue to be written into the buffer to be decoded.
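The two conditions of fig. 8 — data left in the data buffer, and the remaining decoder-side amount below S = F × 3 — can be sketched as follows; the frame size and byte counts are arbitrary example values.

```python
def raw_threshold(frame_size):
    """Preset data threshold for RAW input: S = F * 3."""
    return frame_size * 3

def may_write_raw(data_buffer_bytes, decoder_remaining, frame_size):
    """Write from the data buffer into the buffer to be decoded only if
    the data buffer is non-empty and the decoder side is below S."""
    return data_buffer_bytes > 0 and decoder_remaining < raw_threshold(frame_size)

# With 1536-byte RAW frames the threshold is 4608 bytes.
print(may_write_raw(8192, 4000, 1536))  # True: 4000 < 4608
print(may_write_raw(8192, 4608, 1536))  # False: threshold reached
print(may_write_raw(0, 4000, 1536))     # False: data buffer empty
```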
Fig. 9 illustrates a fourth process flow diagram of the controller 250 according to some embodiments. In some embodiments, as shown in fig. 9, if it is detected that the amount of data currently remaining in the buffer to be decoded is greater than or equal to the preset data threshold, the controller 250 may calculate a waiting time from the amount currently remaining in the buffer to be decoded, the preset data threshold, and the decoding rate, where the decoding rate is the rate supported by the decoding structure of the display device 200 in actual use. The calculated waiting time is T = (E − S) / R, where E denotes the amount of data currently remaining in the buffer to be decoded, S denotes the preset data threshold, and R denotes the decoding rate.
During the waiting time T, the controller 250 does not write into the buffer to be decoded. After the waiting time T elapses, the controller 250 again detects whether the amount of audio data in the data buffer is greater than 0.
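A worked example of the waiting-time formula, with illustrative byte counts and a hypothetical decoding rate:

```python
def waiting_time(remaining, threshold, decode_rate):
    """T = (E - S) / R: time until the buffer to be decoded drains back
    to the preset threshold at decoding rate R."""
    if decode_rate <= 0:
        raise ValueError("decode rate must be positive")
    return max(remaining - threshold, 0) / decode_rate

# E = 4608 bytes remaining, S = 1536 bytes, R = 192 bytes/ms (all
# hypothetical figures): wait 16 ms before re-checking the data buffer.
print(waiting_time(4608, 1536, 192))  # 16.0
```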
Fig. 10 illustrates a fifth process flow diagram of the controller 250 according to some embodiments. In some embodiments, as shown in fig. 10, in all of the foregoing embodiments of the present application, if the amount of audio data in the data buffer is less than or equal to 0, all of the audio data in the data buffer has been written into the buffer to be decoded. If, in addition, no audio data remains to be written into the data buffer, i.e., the amount of audio data to be written is less than or equal to 0, the controller 250 should stop writing audio data into the buffer to be decoded.
When the amount of data in the data buffer is less than or equal to 0, if there is audio data to be written that has not yet been written into the data buffer, i.e., the amount of audio data to be written is greater than 0, the controller 250 may continue writing the audio data to be written into the data buffer until the amount of data to be written is 0 or the data buffer is full.
In addition, different types of audio data to be written correspond to different numbers of data frames, and thus to different amounts of audio data. In this embodiment, the controller 250 may further obtain the minimum frame number of the audio data to be written, and determine from it the minimum frequency at which the audio data to be written is written into the data buffer. Generally, the smaller the minimum frame number, the smaller the minimum write frequency.
As described in the foregoing embodiments, the controller 250 needs to decode the audio data in the buffer to be decoded. With software decoding, the audio data to be written is PCM data, and the controller 250 can write the PCM data in the buffer to be decoded directly into the audio data area. With hardware decoding, the audio data to be written is original RAW data, so the controller 250 needs to decode the original RAW data in the buffer to be decoded into pulse code modulation (PCM) data; the decoded data is PCM data, which the controller 250 then writes into the audio data area. The PCM data in the audio data area can be played by the peripheral, so that the user hears the specific content of the audio.
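The two decode paths above can be sketched as a single routing step; `decode_raw` is a stand-in for the platform's hardware decoder, not an API named in this application.

```python
def to_audio_data_area(audio_type, data, decode_raw):
    """PCM (software decoding) passes through unchanged; RAW (hardware
    decoding) is decoded to PCM before entering the audio data area."""
    if audio_type == "PCM":
        return data
    if audio_type == "RAW":
        return decode_raw(data)
    raise ValueError("unsupported audio type: %r" % audio_type)

# A trivial stand-in decoder that tags the data as decoded.
fake_decode = lambda raw: b"pcm:" + raw
print(to_audio_data_area("PCM", b"samples", fake_decode))  # b'samples'
print(to_audio_data_area("RAW", b"bits", fake_decode))     # b'pcm:bits'
```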
The decoded data in the audio data area can be output directly to the peripheral for playback, but different peripherals handle it differently. For example, if the peripheral connected to the display device 200 is a speaker, the PCM data in the audio data area is processed with equalization, sound effects, and the like before the speaker plays it; if the peripheral is a power amplifier, the power amplifier plays the PCM data directly. Therefore, in some embodiments, as shown in fig. 11, the controller 250 may further determine the type of the peripheral, decide from that type whether the audio data in the audio data area needs processing, and then output the audio data to the corresponding peripheral.
It should be noted that the speaker is only one example of a peripheral of a type different from the power amplifier shown in the embodiment of the present application. For other peripherals connected to the display device 200, such as wired earphones and Bluetooth speakers, the controller 250 likewise needs to apply equalization, sound effects, and the like to the pulse code modulation PCM data in the audio data area before outputting it to the wired earphone, Bluetooth speaker, etc. for playback.
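The peripheral-dependent output step reads, in sketch form, as follows; `apply_effects` is a hypothetical hook standing in for the equalization/sound-effect chain.

```python
def output(pcm, peripheral, apply_effects):
    """A power amplifier receives PCM directly; speakers, wired earphones
    and Bluetooth speakers get equalization/sound-effect processing first."""
    if peripheral == "power_amplifier":
        return pcm
    if peripheral in ("speaker", "wired_earphone", "bluetooth_speaker"):
        return apply_effects(pcm)
    raise ValueError("unknown peripheral: %r" % peripheral)

eq = lambda pcm: b"fx:" + pcm  # stand-in for the real processing chain
print(output(b"audio", "power_amplifier", eq))  # b'audio'
print(output(b"audio", "speaker", eq))          # b'fx:audio'
```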
As all of the foregoing embodiments show, the display device 200 of the embodiments of the present application can detect, in time, the storage condition of the audio data area and of the buffer to be decoded, and thereby control the writing of the audio data to be written in time, ensuring that the audio data in the data buffer and in the buffer to be decoded is replenished as it is consumed. As a result, when the display device 200 plays a video, video stuttering caused by the audio data in the data buffer and the buffer to be decoded being consumed too fast without timely replenishment does not occur.
Fig. 12 shows a flow chart of an audio playing method according to some embodiments, which may be applied to the display device 200 of the foregoing embodiments. As shown in fig. 12, the method may include the following steps:
step S101, writing audio data to be written into a data buffer area; the audio data to be written is used for representing all audio data which need to be written into the data buffer area when the display equipment plays audio.
Step S102, writing the audio data in the data buffer area into the buffer area to be decoded under the condition that the current residual data amount in the buffer area to be decoded is smaller than a preset data threshold value.
Step S103, decoding the audio data in the buffer area to be decoded and writing the decoded data into an audio data area.
And step S104, controlling the display equipment to play the decoded data in the audio data area through the peripheral equipment.
In some embodiments, the audio playing method further includes: detecting whether the playing time length of the decoded data in the audio data area is greater than or equal to a preset time length; and under the condition that the playing time length of the decoded data in the audio data area is greater than or equal to the preset time length, stopping writing the audio data into the buffer area to be decoded.
In some embodiments, the audio playing method further includes: setting a preset data threshold value according to the type of the audio data to be written under the condition that the playing time length of the decoded data in the audio data area is less than a preset time length; under the condition that the data volume of the audio data in the data buffer area is larger than 0, detecting whether the current residual data volume of the buffer area to be decoded is larger than or equal to the preset data threshold value; and writing the audio data in the data buffer area into the buffer area to be decoded under the condition that the current residual data amount of the buffer area to be decoded is smaller than the preset data threshold value.
In some embodiments, the audio playing method further includes: under the condition that the current residual data amount of the buffer area to be decoded is greater than or equal to the preset data threshold, calculating waiting time according to the current residual data amount of the buffer area to be decoded, the preset data threshold and the decoding rate; after the waiting time elapses, it is detected again whether the data amount of the audio data in the data buffer is larger than 0.
In some embodiments, the audio playing method further includes: and under the condition that the data volume of the audio data in the data buffer area is less than or equal to 0 and all the audio data to be written are written into the data buffer area, stopping writing the audio data into the buffer area to be decoded.
In some embodiments, the audio playing method further includes: acquiring the minimum frame number of the audio data to be written; and determining the minimum frequency of writing the audio data to be written into the data buffer according to the minimum frame number.
In some embodiments, the audio playing method further includes: and under the condition that the data volume in the data buffer area is less than or equal to 0 and the audio data to be written are not completely written into the data buffer area, continuing to write the audio data to be written into the data buffer area until the data buffer area is full or the data to be written are completely written into the data buffer area.
In some embodiments, the audio playing method further includes: in the case that the audio data to be written is original RAW data, setting the preset data threshold to S = F × 3, where S denotes the preset data threshold and F denotes the size of each frame of data in the original RAW data.
In some embodiments, the audio playing method further includes: under the condition that the audio data to be written is original RAW data, decoding the original RAW data in the buffer area to be decoded into Pulse Code Modulation (PCM) data; writing the decoded pulse code modulation PCM data into the audio data area.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, comprising:
a display;
a controller configured to:
writing the audio data to be written into the data buffer area; the audio data to be written is used for representing all audio data which need to be written into the data buffer area when the display equipment plays audio;
under the condition that the current residual data amount in a buffer area to be decoded is smaller than a preset data threshold value, writing the audio data in the data buffer area into the buffer area to be decoded;
decoding the audio data in the buffer area to be decoded and writing the decoded data into an audio data area;
and controlling the display equipment to play the decoded data in the audio data area through the peripheral equipment.
2. The display device of claim 1, wherein the controller is further configured to:
detecting whether the playing time length of the decoded data in the audio data area is greater than or equal to a preset time length;
and under the condition that the playing time length of the decoded data in the audio data area is greater than or equal to the preset time length, stopping writing the audio data into the buffer area to be decoded.
3. The display device of claim 2, wherein the controller is further configured to:
setting a preset data threshold value according to the type of the audio data to be written under the condition that the playing time length of the decoded data in the audio data area is less than a preset time length;
under the condition that the data volume of the audio data in the data buffer area is larger than 0, detecting whether the current residual data volume of the buffer area to be decoded is larger than or equal to the preset data threshold value;
and writing the audio data in the data buffer area into the buffer area to be decoded under the condition that the current residual data amount of the buffer area to be decoded is smaller than the preset data threshold value.
4. The display device of claim 3, wherein the controller is further configured to:
under the condition that the current residual data amount of the buffer area to be decoded is greater than or equal to the preset data threshold, calculating waiting time according to the current residual data amount of the buffer area to be decoded, the preset data threshold and the decoding rate;
after the waiting time elapses, it is detected again whether the data amount of the audio data in the data buffer is larger than 0.
5. The display device according to any one of claims 3 to 4, wherein the controller is further configured to:
and under the condition that the data volume of the audio data in the data buffer area is less than or equal to 0 and all the audio data to be written are written into the data buffer area, stopping writing the audio data into the buffer area to be decoded.
6. The display device of claim 5, wherein the controller is further configured to:
and under the condition that the data volume in the data buffer area is less than or equal to 0 and the audio data to be written are not completely written into the data buffer area, continuing to write the audio data to be written into the data buffer area until the data buffer area is full or the data to be written are completely written into the data buffer area.
7. The display device of claim 1, wherein the controller is further configured to:
acquiring the minimum frame number of the audio data to be written;
and determining the minimum frequency of writing the audio data to be written into the data buffer according to the minimum frame number.
8. The display device according to any one of claims 1-4, wherein the controller is further configured to:
and in the case that the audio data to be written is original RAW data, setting the preset data threshold to S = F × 3, where S denotes the preset data threshold and F denotes the size of each frame of data in the original RAW data.
9. The display device of claim 8, wherein the controller is further configured to:
under the condition that the audio data to be written is original RAW data, decoding the original RAW data in the buffer area to be decoded into Pulse Code Modulation (PCM) data;
writing the decoded pulse code modulation PCM data into the audio data area.
10. An audio playing method, comprising:
writing the audio data to be written into the data buffer area; the audio data to be written is used for representing all audio data which need to be written into the data buffer area when the display equipment plays audio;
writing the audio data in the data buffer area into the buffer area to be decoded under the condition that the current residual data amount in the buffer area to be decoded is smaller than a preset data threshold value;
decoding the audio data in the buffer area to be decoded and writing the decoded data into an audio data area;
and controlling the display equipment to play the decoded data in the audio data area through the peripheral equipment.
CN202011378391.2A 2020-11-30 2020-11-30 Audio playing method and display device Pending CN112601109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011378391.2A CN112601109A (en) 2020-11-30 2020-11-30 Audio playing method and display device

Publications (1)

Publication Number Publication Date
CN112601109A true CN112601109A (en) 2021-04-02

Family

ID=75187460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011378391.2A Pending CN112601109A (en) 2020-11-30 2020-11-30 Audio playing method and display device

Country Status (1)

Country Link
CN (1) CN112601109A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003101962A (en) * 2001-09-26 2003-04-04 Sony Corp Synchronous reproducing device and method
CN101416510A (en) * 2006-03-29 2009-04-22 索尼爱立信移动通讯有限公司 Method and system for managing audio data
US20090171675A1 (en) * 2007-12-28 2009-07-02 Kabushiki Kaisha Toshiba Decoding reproduction apparatus and method and receiver
CN103139638A (en) * 2011-11-21 2013-06-05 索尼公司 Reproduction apparatus, reproduction method, and program
CN105704554A (en) * 2016-01-22 2016-06-22 广州视睿电子科技有限公司 Audio play method and device
CN105916058A (en) * 2016-05-05 2016-08-31 青岛海信宽带多媒体技术有限公司 Streaming media buffer play method and device and display device
CN107517400A (en) * 2016-06-15 2017-12-26 成都鼎桥通信技术有限公司 Flow media playing method and DST PLAYER
CN110634511A (en) * 2019-09-27 2019-12-31 北京西山居互动娱乐科技有限公司 Audio data processing method and device
US20200059504A1 (en) * 2018-08-19 2020-02-20 Pixart Imaging Inc. Schemes capable of synchronizing native clocks and audio codec clocks of audio playing for bluetooth wireless devices
CN111246284A (en) * 2020-03-09 2020-06-05 深圳创维-Rgb电子有限公司 Video stream playing method, system, terminal and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023015436A1 (en) * 2021-08-10 2023-02-16 Shenzhen TCL New Technology Co., Ltd. Streaming media data transmission method and apparatus, and terminal device
CN114727128A (en) * 2022-03-30 2022-07-08 Bestechnic (Shanghai) Co., Ltd. Data transmission method and device of playing terminal, playing terminal and storage medium
CN114727128B (en) * 2022-03-30 2024-04-12 Bestechnic (Shanghai) Co., Ltd. Data transmission method and device of playing terminal, playing terminal and storage medium

Similar Documents

Publication Publication Date Title
CN114302219B (en) Display equipment and variable frame rate display method
CN112672195A (en) Remote controller key setting method and display equipment
CN112667184A (en) Display device
CN111836104B (en) Display apparatus and display method
CN112752156A (en) Subtitle adjusting method and display device
CN114302238A (en) Method for displaying prompt message in loudspeaker box mode and display device
CN114040254B (en) Display equipment and high concurrency message display method
CN112601109A (en) Audio playing method and display device
CN113781957B (en) Method for preventing screen burn of display device and display device
CN112860331B (en) Display equipment and voice interaction prompting method
CN114302204B (en) Split-screen playing method and display device
CN113064645B (en) Startup interface control method and display device
CN113556609B (en) Display device and startup picture display method
CN112911381B (en) Display device, mode adjustment method, device and medium
CN112616090B (en) Display equipment system upgrading method and display equipment
CN114007119A (en) Video playing method and display equipment
CN112668546A (en) Video thumbnail display method and display equipment
CN112492393A (en) Method for realizing MIC switch associated energy-saving mode and display equipment
CN112637683A (en) Display equipment system optimization method and display equipment
CN112752152B (en) Delivery video playing method and display equipment
CN113350781B (en) Display device and game mode switching method
CN113064515B (en) Touch display device and USB device switching method
CN113593613B (en) Automatic registration and de-registration method for recording disk
CN114302131A (en) Display device and black screen detection method
CN112631796A (en) Display device and file copying progress display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210402