CN116939263A - Display device, media asset playing device and media asset playing method - Google Patents


Info

Publication number
CN116939263A
CN116939263A (application CN202210370079.1A)
Authority
CN
China
Prior art keywords
media
file
tag
description
media asset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210370079.1A
Other languages
Chinese (zh)
Inventor
Chen Yaozong (陈耀宗)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Electronic Technology Shenzhen Co ltd
Original Assignee
Hisense Electronic Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Electronic Technology Shenzhen Co ltd filed Critical Hisense Electronic Technology Shenzhen Co ltd
Priority to CN202210370079.1A priority Critical patent/CN116939263A/en
Publication of CN116939263A publication Critical patent/CN116939263A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a display device, a media asset playing device, and a media asset playing method, comprising the following steps: acquiring a media presentation description file of a media asset file; acquiring a first description tag and a first channel tag from the media presentation description file, where the first description tag indicates that the media asset file contains enhancement layer data, i.e., data in a video coding format that can be decoded to obtain the media asset file at ultra-high-definition resolution; inserting the first description tag into the first channel tag to derive a second channel tag; establishing a first media asset track according to the first channel tag and a second media asset track according to the second channel tag; and synchronously decoding the first and second media asset tracks to play the media asset file. The application enables the display device to run media asset files in the SHVC format and to demultiplex the base layer data and the enhancement layer data simultaneously, so that the display device can play media asset files at ultra-high-definition resolution, improving the user experience.

Description

Display device, media asset playing device and media asset playing method
Technical Field
The present application relates to the field of display devices, and in particular, to a display device, a media asset playing apparatus, and a media asset playing method.
Background
With the increasingly diverse development of display devices, display devices need to adapt to the needs of different users and cope with more scenarios. In a scenario where multiple people hold a video conference at the same time, different users have different requirements on the code stream and resolution of the video files that the same video conference terminal sends to multiple user terminals.
Scalable High-efficiency Video Coding (SHVC, i.e., SVC for HEVC) is a video coding format built on the H.265/HEVC coding standard that achieves temporal, spatial, and quality scalability, and is a mainstream trend of future video coding.
However, display devices in the related art do not support ultra-high-definition resolution for video files in the SHVC format, so such a display device cannot play an ultra-high-definition video file even when the network conditions would allow it.
Disclosure of Invention
Embodiments of the present application provide a display device, a media asset playing apparatus, and a media asset playing method that enable the display device to run media asset files in the SHVC format and to demultiplex the base layer data and the enhancement layer data simultaneously, so that the display device can play media asset files at ultra-high-definition resolution, improving the user experience.
In a first aspect, the present application provides a display device comprising: a display; and a controller configured to: receive a control instruction sent by a user for acquiring a media asset file; in response to the control instruction, acquire a media presentation description file of the media asset file, the media presentation description file being used to determine the video coding formats contained in the media asset file; acquire a first description tag from the media presentation description file, the first description tag indicating that the media asset file contains enhancement layer data, i.e., data in a video coding format that can be decoded to obtain the media asset file at ultra-high-definition resolution; insert the first description tag into the first channel tag to derive a second channel tag; establish a first media asset track according to a first media stream corresponding to the first channel tag, and a second media asset track according to a second media stream corresponding to the second channel tag, the first media stream and the second media stream encoding different versions of the media asset file; and synchronously activate the first media asset track and the second media asset track, so that the first media stream and the second media stream are decoded synchronously to play the media asset file.
With this embodiment, the display device can modify the media presentation description file, establish the first and second media asset tracks according to the modified file, demultiplex the two tracks separately, and finally fuse the first and second media streams of the two tracks and decode them simultaneously, so that the display device can play the media asset file at ultra-high-definition resolution, improving the user experience.
In a second aspect, the present application provides a media asset playing apparatus, applied to the display device of the first aspect and its implementations, comprising: a data source element for acquiring a media presentation description file of the media asset file, the media presentation description file being used to determine the video coding formats contained in the media asset file; a first demultiplexing element for acquiring a first description tag from the media presentation description file, the first description tag indicating that the media asset file contains enhancement layer data, i.e., data in a video coding format that can be decoded to obtain the media asset file at ultra-high-definition resolution, and for inserting the first description tag into the first channel tag to derive a second channel tag; a second demultiplexing element for establishing a first media asset track according to a first media stream corresponding to the first channel tag and a second media asset track according to a second media stream corresponding to the second channel tag, the first media stream and the second media stream encoding different versions of the media asset file; an input selection element for synchronously activating the first media asset track and the second media asset track; a receiver element for fusing the first media stream with the second media stream according to a predetermined time-base setting principle and transmitting them to a decoding element; and a decoding element for receiving the first media stream and the second media stream and decoding them synchronously.
With this embodiment, the media asset playing apparatus can modify the media presentation description file, establish the first and second media asset tracks according to the modified file, demultiplex the two tracks separately, and finally fuse the first and second media streams of the two tracks and decode them simultaneously, so that the display device can play the media asset file at ultra-high-definition resolution, improving the user experience.
In a third aspect, the present application further provides a media asset playing method, comprising: acquiring a media presentation description file of the media asset file, the media presentation description file being used to determine the video coding formats contained in the media asset file; acquiring a first description tag from the media presentation description file, the first description tag indicating that the media asset file contains enhancement layer data, i.e., data in a video coding format that can be decoded to obtain the media asset file at ultra-high-definition resolution; inserting the first description tag into the first channel tag to derive a second channel tag; establishing a first media asset track according to a first media stream corresponding to the first channel tag, and a second media asset track according to a second media stream corresponding to the second channel tag, the first media stream and the second media stream encoding different versions of the media asset file; and synchronously activating the first media asset track and the second media asset track, so that the first media stream and the second media stream are decoded synchronously to play the media asset file. With this embodiment, the media presentation description file can be modified, the first and second media asset tracks established according to the modified file, the two tracks demultiplexed separately, and finally the first and second media streams of the two tracks fused and decoded simultaneously, so that the display device can play the media asset file at ultra-high-definition resolution, improving the user experience.
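The tag manipulation at the heart of the method can be sketched concretely. The snippet below is a minimal illustration only, not the patented implementation: it assumes the media presentation description is a DASH-style XML MPD, and the element names used here (`AdaptationSet` standing in for the "channel tag", `SupplementalProperty` standing in for the "description tag") are hypothetical choices for the example. It derives a second channel tag from the first by cloning the set that carries the enhancement-layer description, so base and enhancement layers can each be demultiplexed on their own track.

```python
import copy
import xml.etree.ElementTree as ET

# Hypothetical MPD fragment: one AdaptationSet ("first channel tag") whose
# SupplementalProperty ("first description tag") flags enhancement-layer data.
MPD = """<MPD>
  <Period>
    <AdaptationSet id="1" mimeType="video/mp4">
      <SupplementalProperty schemeIdUri="urn:example:shvc:enhancement-layer" value="2160p"/>
      <Representation id="v0" codecs="hvc1" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
</MPD>"""

def split_enhancement_channel(mpd_text):
    """Return the MPD root with a second AdaptationSet derived from the
    enhancement-layer description tag, so base and enhancement layers can
    be demultiplexed as separate tracks."""
    root = ET.fromstring(mpd_text)
    period = root.find("Period")
    for aset in list(period.findall("AdaptationSet")):
        desc = aset.find("SupplementalProperty")
        if desc is None:
            continue
        # Derive the "second channel tag": clone the set, keep the
        # enhancement description only in the clone, and give it a new id.
        second = copy.deepcopy(aset)
        second.set("id", aset.get("id") + "-el")
        aset.remove(desc)   # the base-layer channel keeps no description tag
        period.append(second)
    return root

root = split_enhancement_channel(MPD)
print(len(root.find("Period").findall("AdaptationSet")))  # 2
```

After the rewrite there are two channel tags, one per layer, matching the step "establish a first media asset track ... and a second media asset track" in the claim.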
According to the above technical solutions, the display device can run media asset files in the SHVC format and demultiplex the base layer data and the enhancement layer data simultaneously, so that the display device can play media asset files at ultra-high-definition resolution, improving the user experience.
Drawings
Fig. 1 schematically illustrates an operation scenario between a display device and a control apparatus according to an embodiment of the present application;
fig. 2 exemplarily shows a block diagram of a configuration of a control apparatus 100 of an embodiment of the present application;
fig. 3 is a block diagram exemplarily showing a hardware configuration of a display apparatus 200 of an embodiment of the present application;
fig. 4 is a software configuration block diagram exemplarily showing a display device 200 of an embodiment of the present application;
FIG. 5 illustrates a usage scenario diagram of an embodiment of the present application;
FIG. 6 illustrates a GStreamer playback pipeline schematic;
fig. 7 is a schematic flow chart of playing a media asset file in SHVC format by a display device;
FIG. 8 illustrates a display device configuration flow diagram of an embodiment of the present application;
fig. 9 illustrates a schematic structural diagram of an MPD file;
FIG. 10 illustrates a display device configuration flow diagram of an embodiment of the present application;
FIG. 11 illustrates a display device configuration flow diagram of an embodiment of the present application;
Fig. 12 is a schematic diagram of a media playback apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application.
In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. Furthermore, "at least one" means one or more, and "a plurality" means two or more. The terms "first," "second," and the like do not limit number or order of execution, and objects labeled "first" and "second" are not necessarily different.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In order to facilitate the technical solution of the embodiments of the present application to be understood by the skilled person, technical terms related to the embodiments of the present application are explained below.
1. Scalable Video Coding (SVC) addresses the problem that, facing different network conditions, different terminal processing capabilities, and different user quality requirements, the encoding end would otherwise need to encode multiple times, or a server would need to transcode. SVC achieves temporal, spatial, and quality scalability: a single encoding pass produces video compression code streams with different frame rates, resolutions, and image qualities, and the decoding end adapts to these streams, reducing the load on the encoding end and the server.
2. High Efficiency Video Coding (HEVC) is a video compression standard succeeding the H.264/AVC coding standard. It allows pictures to be coded at larger sizes, for example moving from 2K to 4K, or from 4K to 8K resolution, enabling smooth playback of full-high-definition and ultra-high-definition video.
3. Scalable High-efficiency Video Coding (SHVC, i.e., SVC for HEVC) is a video coding format built on the H.265/HEVC coding standard that achieves temporal, spatial, and quality scalability. For convenience of description, H.265/HEVC is hereinafter referred to as HEVC.
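The layered adaptation that SVC and SHVC provide can be illustrated with a toy model. The snippet below is not a codec; it simply represents a once-encoded scalable bitstream as an ordered list of layers (the layer names and cumulative bitrates are made-up figures for illustration) and shows decoder-side adaptation: each client keeps only the prefix of layers its bandwidth can sustain.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    width: int
    height: int
    fps: int
    kbps: int  # cumulative bitrate needed up to and including this layer

# Encoded once by the sender; figures are illustrative, not real codec data.
LAYERS = [
    Layer("base",       640,  360, 15,  400),
    Layer("spatial-1", 1280,  720, 30, 1500),
    Layer("spatial-2", 1920, 1080, 30, 4000),
]

def select_layers(bandwidth_kbps):
    """Decoder-side adaptation: keep the layers the client's bandwidth can
    sustain; the base layer is always kept so playback never fails outright."""
    chosen = [LAYERS[0]]
    for layer in LAYERS[1:]:
        if layer.kbps <= bandwidth_kbps:
            chosen.append(layer)
    return chosen

print(select_layers(2000)[-1].name)  # spatial-1
```

This mirrors the video-conference example later in the text: a constrained terminal decodes only the base layer at 360p, while a well-connected terminal also decodes the spatial enhancement layers.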
The display device provided by the embodiments of the present application can take various forms, for example, a television, a smart television, a laser projection device, a monitor, an electronic whiteboard (electronic bulletin board), an electronic table, and the like. Figs. 1 and 2 show specific embodiments of the display device of the present application.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller that communicates with the display device through infrared protocol communication, Bluetooth protocol communication, or other short-range communication modes, controlling the display device 200 wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and the like.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device 200 may also be controlled in ways other than through the control apparatus 100 and the smart device 300; for example, user voice commands may be received directly through a module for acquiring voice commands configured inside the display device 200, or through a voice control device configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user and convert it into an instruction the display device 200 can recognize and respond to, acting as an intermediary between the user and the display device 200.
Fig. 3 shows a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment.
In some embodiments, display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, memory, a power supply, a user interface.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through nth input/output interfaces.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display; it receives image signals output from the controller and displays video content, image content, and menu manipulation interfaces, as well as the UI interface the user manipulates.
In some embodiments, the display 260 may be a liquid crystal display, an OLED display, a projection device, and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display device 200 may establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220.
In some embodiments, the user interface may be configured to receive control signals from the control device 100 (e.g., an infrared remote control, etc.).
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector such as a camera, which may be used to collect external environmental scenes, user attributes, or user interaction gestures, or alternatively, the detector 230 includes a sound collector such as a microphone, or the like, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the modem 210 receives broadcast television signals by wired or wireless reception and demodulates audio/video signals and associated data, such as EPG data signals, from among a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any selectable object, such as a hyperlink, an icon, or another operable control. The operation related to the selected object is, for example, displaying the page, document, or image to which a hyperlink connects, or running the program corresponding to the icon.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first through nth input/output interfaces, a communication bus (Bus), and the like.
The CPU processor executes operating system and application program instructions stored in the memory, and runs various applications, data, and content according to interactive instructions received from the outside, finally displaying and playing various audio and video content. The CPU processor may comprise a plurality of processors, for example, one main processor and one or more sub-processors.
In some embodiments, a graphics processor is used to generate various graphical objects, such as icons, operation menus, and graphics displayed in response to user input instructions. The graphics processor comprises an arithmetic unit, which receives the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the objects produced by the arithmetic unit for display on the display.
In some embodiments, the video processor is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image composition according to the standard codec protocol of the input signal, obtaining a signal that can be displayed or played directly on the display device 200.
In some embodiments, the video processor includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module demultiplexes the input audio/video data stream. The video decoding module processes the demultiplexed video signal, including decoding and scaling. The image synthesis module, such as an image synthesizer, superimposes and mixes the GUI signal, input by the user or generated by a graphics generator, with the scaled video image to generate an image signal for display. The frame rate conversion module converts the frame rate of the input video. The display formatting module converts the frame-rate-converted video signal into an output signal conforming to the display format, such as an RGB data signal.
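The module chain just described can be modeled as a sequence of stages. The following toy sketch is purely illustrative: the stage names mirror the text, the "frame" is just a dict of metadata, and all parameter values (target height, frame rate, overlay) are placeholders, not real video-processor behavior.

```python
# Toy model of the video processor's module chain described above.

def demultiplex(stream):
    """Demultiplexing module: split the A/V stream; keep the video part here."""
    return {"codec": stream["codec"], "height": stream["height"], "fps": stream["fps"]}

def decode_and_scale(frame, target_height):
    """Video decoding module: decode, then scale to the target height."""
    return {**frame, "codec": "raw", "height": target_height}

def compose(frame, overlay):
    """Image synthesis module: superimpose the GUI signal on the video."""
    return {**frame, "overlay": overlay}

def convert_frame_rate(frame, target_fps):
    """Frame rate conversion module."""
    return {**frame, "fps": target_fps}

def format_for_display(frame):
    """Display formatting module: emit a display-format (e.g. RGB) signal."""
    return {**frame, "pixel_format": "RGB"}

def video_pipeline(stream):
    frame = demultiplex(stream)
    frame = decode_and_scale(frame, target_height=2160)
    frame = compose(frame, overlay="menu")
    frame = convert_frame_rate(frame, target_fps=60)
    return format_for_display(frame)

out = video_pipeline({"codec": "hevc", "height": 1080, "fps": 30})
print(out["pixel_format"], out["height"], out["fps"])  # RGB 2160 60
```

The fixed ordering (demultiplex, decode/scale, compose, frame-rate convert, format) is the point of the sketch: each module consumes exactly what the previous one produces.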
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the audio signal according to a standard codec protocol of an input signal, and perform noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in a speaker.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
As shown in fig. 4, a system of display devices may include a Kernel (Kernel), a command parser (shell), a file system, and an application program. The kernel, shell, and file system together form the basic operating system architecture that allows users to manage files, run programs, and use the system. After power-up, the kernel is started, the kernel space is activated, hardware is abstracted, hardware parameters are initialized, virtual memory, a scheduler, signal and inter-process communication (IPC) are operated and maintained. After the kernel is started, shell and user application programs are loaded again. The application program is compiled into machine code after being started to form a process.
As shown in fig. 4, the system of the display device is divided into three layers, an application layer, a middleware layer, and a hardware layer, from top to bottom.
The application layer mainly comprises common applications on the television, and an application framework (Application Framework), wherein the common applications are mainly applications developed based on Browser, such as: HTML5 APPs; native applications (Native APPs);
the application framework (Application Framework) is a complete program model with all the basic functions required by standard application software, such as: file access, data exchange, and the interface for the use of these functions (toolbar, status column, menu, dialog box).
Native applications (Native APPs) may support online or offline operation, message pushing, or local resource access.
The middleware layer includes middleware such as various television protocols, multimedia protocols, and system components. The middleware can use basic services (functions) provided by the system software to connect various parts of the application system or different applications on the network, so that the purposes of resource sharing and function sharing can be achieved.
The hardware layer mainly comprises a HAL interface, hardware, and drivers. The HAL interface is a unified interface to which all television chips are docked, with the specific logic implemented by each chip. The drivers mainly comprise: the audio driver, display driver, Bluetooth driver, camera driver, Wi-Fi driver, USB driver, HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor), and power supply driver.
As shown in fig. 5, SVC has wide application. In an example where multiple users hold a video conference simultaneously, a video conference terminal performs one encoding pass using SVC to generate video compression code streams with different frame rates, resolutions, and image qualities, and distributes them through a multipoint control unit (Multi Control Unit, MCU) that realizes signal tandem and switching between the user terminals, so that each user terminal decodes the SVC-encoded code stream and obtains video at a different resolution. For example, the first user terminal has a poor network environment, so 360p resolution may be selected for decoding to keep the video conference smooth; the second user terminal's network environment is better, so 1080p resolution may be selected for decoding to improve the video conference experience.
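The per-terminal resolution choice in this scenario can be sketched as a bandwidth threshold lookup. This is a minimal illustration only: the threshold values and function name are assumptions, not part of the SVC standard or the patent.

```python
# Hypothetical bandwidth thresholds (kbps) for choosing which layer to decode.
# The numbers are illustrative assumptions, not taken from any specification.
LAYER_THRESHOLDS = [
    (8000, "1080p"),
    (4000, "720p"),
    (0, "360p"),
]

def select_decode_resolution(bandwidth_kbps):
    """Pick the highest resolution whose threshold the measured bandwidth meets."""
    for threshold, resolution in LAYER_THRESHOLDS:
        if bandwidth_kbps >= threshold:
            return resolution
    return "360p"
```

A terminal on a poor link would thus fall through to the base 360p layer, while a well-connected one selects 1080p from the same encoded stream.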
In the related art, video at 360p, 720p, and 1080p resolutions can be encoded and decoded using the H.264/AVC video coding technology. However, H.264/AVC has low compression efficiency and occupies high bandwidth, so it can hardly support ultra-high-definition video such as 4k and 8k, and the user side cannot watch ultra-high-definition video. The HEVC video coding technology can solve this problem: it improves compression efficiency, reduces occupied bandwidth, and supports resolution changes from 2k to 4k, or from 4k to 8k, so it is necessary to configure the display device to support HEVC encoding and decoding. On this basis, because different users have different requirements on video quality, in the scenario shown in fig. 5 the display device should also be configured to support operating on media asset files in SHVC format, so that when decoding the same video compression code stream, different users can select the required resolution according to their own situation, choosing ultra-high-definition video at 4k, 8k, and other resolutions larger than 1080p.
FIG. 6 illustrates a GStreamer playback pipeline diagram. GStreamer is an open-source multimedia framework for constructing streaming media applications; it simplifies the development of audio/video application programs and can process multimedia data in various formats such as MP3, Ogg, MPEG-1, MPEG-2, and AVI. GStreamer is plug-in based: some plug-ins provide various codecs, other plug-ins provide other functionality, and any plug-in may be linked into a defined data stream pipeline. As shown in FIG. 6, the GStreamer playback pipeline includes a data source (Source) element for reading data into the pipeline; a Dash demultiplexing (Dash Demux) element for performing a preliminary split of the data; a buffer (Buffer) element for buffering the data processed by the Dash demultiplexing element; a media segment demultiplexing (Media Segment Demux) element for performing further demultiplexing; an input selection (input-selector) element for processing the data output by the media segment demultiplexing element, selecting the input data, and discarding useless data; and a receiver (Sink) element for receiving the input video data and audio data and transmitting them to a decoding (Decode) element, respectively, to complete the decoding function.
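The plug-in linking idea above — elements composed into a fixed data-flow chain — can be modeled in plain Python. This is a toy stand-in for illustration only, not the GStreamer API; the element names echo the pipeline described, and the data is just a trace list.

```python
def link(*elements):
    """Compose elements so each one's output feeds the next one's input."""
    def pipeline(data):
        for element in elements:
            data = element(data)
        return data
    return pipeline

def make_stage(name):
    """A stand-in element that just records its name on the data it passes on."""
    return lambda trace: trace + [name]

# The chain described above: source -> dash demux -> buffer -> segment demux
# -> input select -> sink -> decode.
playback = link(make_stage("source"), make_stage("dash-demux"),
                make_stage("buffer"), make_stage("segment-demux"),
                make_stage("input-select"), make_stage("sink"),
                make_stage("decode"))
```

Running `playback([])` traces the data through each element in order, mirroring how linked plug-ins pass buffers downstream.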
Fig. 7 is a schematic flow chart of a display device playing a media asset file in SHVC format through a GStreamer playback pipeline. As shown in fig. 7, when the display device receives a control instruction sent by a user to obtain a media asset file in SHVC format, the data source element, in response to the control instruction, obtains the media asset file in SHVC format from local storage or from a server. The media asset file in SHVC format includes multiple layers of data to meet users' different quality requirements: base layer (Base Layer) data and the data of one or more enhancement layers (Enhancement Layers). The base layer data carries video data at a base quality level, for example for playing video at resolutions such as 360p, 720p, and 1080p, while the one or more enhancement layers carry additional video data to support higher spatial, temporal, or signal-to-noise-ratio levels, for example for playing video at ultra-high resolutions such as 4k and 8k. It should be noted that the enhancement layer data cannot be decoded independently; it must rely on the base layer data to be decoded. When the data source element acquires the media asset file in SHVC format, the demultiplexing element demultiplexes the data. However, the existing demultiplexing element can only demultiplex data at one level, so only the base layer data can be demultiplexed; the extension layer data, which depends on the base layer data, cannot be demultiplexed at the same time. As a result, the display device can only provide the resolution of the base layer data while playing the media asset file, and even when network conditions permit, it cannot provide the ultra-high-definition resolution supported by the extension layer data.
To solve the above problems, the technical solution of the present application enables the display device to support operating on media asset files in SHVC format and to demultiplex the base layer data and the extension layer data at the same time, so that the display device can play media asset files at ultra-high-definition resolution, improving the user experience.
The present application shows a display device comprising a display and a controller, where the controller is configured to perform the steps shown in fig. 8.
The display device receives a control instruction sent by a user for acquiring a media asset file, and, in response to the control instruction, acquires the media presentation description file of the media asset file. In a specific implementation, the display device may obtain the media asset file from local storage or from a server through the control device. The media asset file is typically a plurality of slices of the same content with different code streams and different resolutions, and each slice corresponds to a media presentation description (Media Presentation Description, MPD) file; the MPD file is a file in XML format used to describe the code stream of the corresponding slice. For example, the display device may obtain multiple slices of a media asset file in SHVC format, where a first slice may correspond to an MPD file in advanced video coding (Advanced Video Coding, AVC) format with a resolution of 1080p, and a second slice may correspond to an MPD file in HEVC format with a resolution of 4k.
The code stream represents the amount of data used by the media asset file per unit time. Media assets in different data formats have different code streams, so the display device can judge the data format of the media asset file according to the code stream. After the display device acquires the media asset file, it acquires the plurality of MPD files and traverses them to judge whether any MPD file contains an HEVC-format code stream. If no MPD file contains an HEVC-format code stream, the media asset file is not one capable of providing ultra-high-definition resolution; the display device can be backward compatible with such a media asset file, decoding its data format directly and then playing the media asset. If an MPD file contains an HEVC-format code stream, the media asset file is one capable of providing ultra-high-definition resolution.
In a specific implementation, the display device may judge whether the MPD file contains an HEVC-format code stream according to the structure of the code stream. The structural order of an HEVC-format code stream is generally: start code, video parameter set (VPS), start code, sequence parameter set (SPS), start code, picture parameter set (PPS), supplemental enhancement information (SEI), start code, difference frame, start code, reference frame, and so on. An AVC-format code stream structure usually has no VPS, so whether the MPD file contains an HEVC-format code stream can be determined from the order of the code stream structure. It should be noted that in the HEVC-format code stream structure, the VPS, SPS, and PPS are fixed, the SEI is not necessarily present, and video frames such as difference frames and reference frames are then arranged in sequence; when the display device reads an HEVC-format code stream, it determines that the media asset file is one capable of providing ultra-high-definition resolution.
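The VPS-based check above can be sketched as a simple heuristic: scan an Annex-B byte stream for a NAL unit whose type field equals the HEVC VPS type (32 in H.265, where the type occupies bits 1-6 of the first header byte). This is a minimal illustration under those assumptions, not the device's actual implementation, and a real detector would guard against false matches in AVC payload bytes.

```python
def find_nal_units(stream):
    """Yield the offset just past each Annex-B start code (00 00 01)."""
    i = stream.find(b"\x00\x00\x01")
    while i >= 0:
        yield i + 3
        i = stream.find(b"\x00\x00\x01", i + 3)

HEVC_VPS_NUT = 32  # nal_unit_type of a Video Parameter Set in H.265/HEVC

def looks_like_hevc(stream):
    """Heuristic from the text above: an HEVC stream carries a VPS NAL unit,
    while an AVC stream normally does not."""
    for offset in find_nal_units(stream):
        if offset < len(stream):
            # HEVC NAL header: forbidden bit, then 6-bit nal_unit_type.
            nal_unit_type = (stream[offset] >> 1) & 0x3F
            if nal_unit_type == HEVC_VPS_NUT:
                return True
    return False
```

A stream whose first NAL unit after the start code is a VPS (header byte 0x40) is flagged as HEVC; an AVC-style SPS header (0x67) is not.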
Fig. 9 illustrates the schematic structure of an MPD file. As shown in fig. 9, the MPD file is composed of a single period or multiple periods (Period); each period is composed of one or more channel (Adaptation) tags; each channel tag includes one or more media asset content components; each media asset content component may include multiple encoded versions, each encoded version being called a media stream; and each media stream corresponds to encoding parameter attributes including the code stream, resolution, and encoder type. Illustratively, the media asset content components in one channel tag include at least one video component and at least one audio component, where one video component may include media streams in SHVC format with resolutions of 360p, 720p, 1080p, 4k, and 8k. Each media stream corresponds to a description tag, and the display device can switch media streams according to the description tags.
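The Period / channel tag / description tag hierarchy can be illustrated with a toy MPD fragment parsed via Python's standard XML library. The element names follow DASH conventions (`AdaptationSet` for a channel tag, `Representation` for a description tag); the attribute values below are illustrative only.

```python
import xml.etree.ElementTree as ET

# A minimal MPD fragment reflecting the hierarchy described above.
MPD_XML = """<MPD>
  <Period>
    <AdaptationSet contentType="video">
      <Representation id="v360" codecs="hev1" width="640" height="360"/>
      <Representation id="v4k" codecs="lhe1" width="3840" height="2160"/>
    </AdaptationSet>
    <AdaptationSet contentType="audio">
      <Representation id="a1" codecs="mp4a"/>
    </AdaptationSet>
  </Period>
</MPD>"""

def list_media_streams(mpd_text):
    """Return one attribute dict per media stream (Representation) found
    under every channel tag (AdaptationSet) of every Period."""
    root = ET.fromstring(mpd_text)
    return [dict(rep.attrib) for rep in root.iter("Representation")]
```

Traversing the fragment yields one entry per media stream, each carrying the encoding parameter attributes the display device uses for switching.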
It should be noted that the description tag includes a uniform resource locator (Uniform Resource Locator, URL); the display device may obtain the media stream from a server according to the URL, or may obtain the media stream from a broadcast according to the service list table (Service List Table, SLT). In the related art, the GStreamer playback pipeline cannot simultaneously demultiplex the media stream data corresponding to multiple description tags, but it can simultaneously demultiplex the media stream data corresponding to multiple channel tags. This is because media streams of different code rates and different resolutions can be supported in the same channel tag, while one description tag can only support a media stream of one code rate and one resolution. Therefore, for a media asset file in SHVC format, the base layer data and the extension layer data each have their own description tags, and the display device cannot demultiplex the base layer data and the extension layer data at the same time.
When the display device determines that the MPD file contains an HEVC-format code stream, it acquires at least one candidate description tag in the MPD file; the candidate description tag is used to describe the code stream, resolution, and encoder type of the MPD file. By way of example, the display device may obtain candidate description tags for media streams in SHVC format with resolutions of 360p, 720p, 1080p, 4k, and 8k, respectively.
After the display device acquires at least one candidate description tag in the MPD file, it traverses all the candidate description tags and judges whether each candidate description tag includes a first identifier; the first identifier identifies that the candidate description tag includes extension layer data. If a candidate description tag includes the first identifier, that candidate description tag is determined to be a first description tag. For example, the first identifier may be "lhe1". For example, the candidate description tag of the 4k SHVC media stream and the candidate description tag of the 8k SHVC media stream may each include the first identifier, and the display device determines such candidate description tags having the first identifier to be first description tags.
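The traversal above can be sketched as a simple filter. Modeling each candidate description tag as a dict with a `codecs` attribute is an assumption for illustration (DASH description tags carry a codecs string), not the device's actual data structure.

```python
FIRST_IDENTIFIER = "lhe1"  # marks extension-layer data, per the description above

def find_first_description_tags(candidate_tags):
    """Return every candidate description tag whose codecs string carries
    the first identifier, i.e. the tags describing extension-layer data."""
    return [tag for tag in candidate_tags
            if FIRST_IDENTIFIER in tag.get("codecs", "")]
```

Candidate tags without the identifier (the base-layer descriptions) are left untouched for the compatibility path.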
After the display device determines a candidate description tag to be the first description tag, it stores the first description tag into a storage space (Buffer) used for caching the first description tag. When the display device stores the first description tag into the storage space, a modification instruction for modifying the MPD file is triggered; in response to the modification instruction, the first description tag is inserted into the first channel tag so as to convert the first description tag into a second channel tag, thereby generating the modified MPD file.
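The modification step — converting each first description tag into a second channel tag of its own — can be sketched on a toy MPD. The element names follow DASH conventions (`AdaptationSet`, `Representation`); this is a minimal illustration of the transformation, not the device's actual implementation.

```python
import xml.etree.ElementTree as ET

def promote_extension_layer(mpd_text):
    """Move every Representation whose codecs string contains the first
    identifier ("lhe1") out of its AdaptationSet and into a new sibling
    AdaptationSet, mirroring the conversion of the first description tag
    into a second channel tag."""
    root = ET.fromstring(mpd_text)
    for period in list(root.iter("Period")):
        for aset in list(period.findall("AdaptationSet")):
            moved = [rep for rep in aset.findall("Representation")
                     if "lhe1" in rep.get("codecs", "")]
            if moved:
                new_aset = ET.SubElement(period, "AdaptationSet")
                for rep in moved:
                    aset.remove(rep)
                    new_aset.append(rep)
    return ET.tostring(root, encoding="unicode")
```

After the rewrite, the base-layer and extension-layer description tags sit under separate channel tags, which is what lets a channel-tag-level demultiplexer handle both at once.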
It should be noted that when the display device stores the first description tag into the storage space, it also stores the other candidate description tags into the storage space, because the extension layer data corresponding to the first description tag cannot be decoded independently and must rely on the base layer data corresponding to the other candidate description tags to be decoded; the embodiment of the present application only describes the operation process for the first description tag.
When the display device plays media assets at 4k resolution by default, the method of the embodiment of the present application is executed without obtaining any other instruction. When the display device by default plays media assets at a resolution obtained by parsing the base layer data (such as 360p, 720p, or 1080p), it needs to receive a switching instruction for switching resolution from the user before executing the method of the embodiment of the present application. In a specific implementation, the configuration flow of the display device may be as shown in fig. 10: after the display device determines a candidate description tag to be the first description tag, it stores the first description tag into the storage space. In one example, after the display device obtains the first description tag of the 4k SHVC media stream and the first description tag of the 8k SHVC media stream, the two first description tags are stored into the storage space at the same time and the corresponding first channel tags are inserted at the same time, so as to generate, from the MPD file, one modified MPD file according to the first description tag describing the 4k resolution and another modified MPD file according to the first description tag describing the 8k resolution. In response to the switching instruction, the modified MPD file corresponding to the switching instruction is selected; for example, if the user switches to 4k resolution, the modified MPD file generated according to the first description tag describing the 4k resolution is selected for the subsequent operations.
In a specific implementation, as shown in fig. 11, the configuration flow of the display device may also proceed as follows: after determining the candidate description tag to be the first description tag, the display device may, in response to the switching instruction, select the first description tag corresponding to the switching instruction and store it into the storage space. For example, if the user switches the resolution to 4k, the display device selects the first description tag describing the 4k resolution, stores it into the storage space, and inserts it into the first channel tag so as to execute the subsequent operations.
The display device parses the modified MPD file to obtain the first channel tag and the second channel tag; it establishes a first media asset track according to the first media stream corresponding to the first channel tag, and establishes a second media asset track according to the second media stream corresponding to the second channel tag, where the first media stream and the second media stream may each be retrieved from local storage or from a server.
It should be noted that the base layer data and the extension layer data in SHVC format are distinguished by description tags, so the GStreamer playback pipeline in the related art cannot process the base layer data and the extension layer data at the same time. Moreover, the sources of the first media stream in the base layer data and the second media stream in the extension layer data may differ, and media streams from different sources need to be processed in the same GStreamer playback pipeline.
The display equipment establishes a first media resource track according to a first media stream corresponding to the first channel tag, and establishes a second media resource track according to a second media stream corresponding to the second channel tag; the first media stream and the second media stream are respectively used for encoding media files of different versions. In the embodiment of the application, the first media stream is used for encoding the base layer data, and the second media stream is used for encoding the extension layer data.
After the display equipment establishes a second media resource track according to a second media stream corresponding to a second channel label, registering a calling function in the second media resource track; and calling a second media stream according to the calling function so as to enable the second media resource track to be synchronously activated with the first media resource track.
It should be noted that, in the embodiment of the present application, the second media track is set up to play the extension layer data, and the extension layer data needs to be played on the basis of the base layer data, so after the second media track is set up, the second media track and the first media track need to be synchronized first.
In the related art, if the second media track is not activated under the condition that the first media track and the second media track in the display device are synchronous, the display device directly discards the second media stream corresponding to the second media track, so in the related art, even if the second media track is established, the display device cannot directly play the media according to the second media track.
In some embodiments, the display device registers a call function in the second media asset track to call the discarded second media stream, and the call function may be, for example: shvc_customer_data_event.
The display device is generally only capable of activating the first media asset track. After the display device calls the second media stream according to the calling function, the first media asset track and the second media asset track can be activated simultaneously; after the two tracks are synchronously activated, the first media stream and the second media stream are fused according to the presentation time stamp (Presentation Time Stamp, PTS), so that the first media stream and the second media stream are synchronously decoded to play the media asset file.
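The PTS-based fusion can be sketched as pairing access units that share a timestamp. The dict-based unit representation is an assumption for illustration, and units are matched on exact PTS values rather than within a tolerance window as a real player might; enhancement units are attached to their base units because they cannot be decoded on their own.

```python
def fuse_by_pts(base_units, enh_units):
    """Pair base-layer and extension-layer access units that share a PTS,
    so both can be handed to the decoder together. A base unit with no
    matching extension unit is paired with None and decoded alone."""
    enh_by_pts = {unit["pts"]: unit for unit in enh_units}
    fused = []
    for base in sorted(base_units, key=lambda unit: unit["pts"]):
        fused.append((base, enh_by_pts.get(base["pts"])))
    return fused
```

The decoder then consumes each pair in presentation order, using the extension half only when it is present.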
Fig. 12 is a schematic diagram of a media playback apparatus according to an embodiment of the present application, where the media playback apparatus according to the embodiment of the present application is applicable to the above-mentioned display device, and is a player apparatus based on a GStreamer playback pipeline, and includes:
the data source component is used for acquiring a media presentation description file of the media asset file, and the media presentation description file is used for determining a video coding format contained in the media asset file.
The first demultiplexing element is used for acquiring a first description tag and a first channel tag according to the media presentation description file, wherein the first description tag is used for representing that the media asset file comprises extension layer data, and the extension layer data is used for representing that the video coding format can be decoded to obtain the media asset file with ultra-high-definition resolution; the first description tag is inserted into the first channel tag to convert the first description tag into a second channel tag. The first demultiplexing element of the embodiment of the present application may be, for example, a Dash demultiplexing element.
In some embodiments, the first demultiplexing element is further configured to: judging whether the media asset display description file contains a high-efficiency video coding format or not; if the media asset display description file contains the high-efficiency video coding format, acquiring at least one candidate description tag in the media asset display description file; the candidate description tag is used for describing the code stream, the resolution and the encoder type of the media asset display description file; if the media asset display description file does not contain the high-efficiency video coding format, decoding the media asset display description file to play the media asset file; traversing candidate description tags; judging whether the candidate description label comprises a first identifier or not; the first identifier is used for identifying that the candidate description tag comprises extension layer data; if the candidate descriptive label includes the first identification, the candidate descriptive label is determined to be the first descriptive label.
In some embodiments, the media asset playing device further comprises: a Buffer (Buffer) element; after the first demultiplexing element determines the candidate descriptive label as a first descriptive label, storing the first descriptive label to a buffer element; when the first description tag is stored in the buffer element, the buffer element triggers a modification instruction for modifying the media presentation description file, and the first description tag is inserted into the first channel tag in response to the modification instruction so as to convert the first description tag into the second channel tag.
It should be noted that, the media playback device stores the first description tag in the buffer element and stores other candidate description tags in different areas of the buffer element, because the extension layer data corresponding to the first description tag cannot be decoded independently, but needs to rely on the base layer data corresponding to the other candidate description tags to decode, and the embodiment of the application only describes the operation process of the first description tag.
The second demultiplexing element is used for establishing a first media asset track according to the first media stream corresponding to the first channel tag, and establishing a second media asset track according to the second media stream corresponding to the second channel tag; the first media stream and the second media stream are respectively used for encoding media asset files of different versions. By way of example, the second demultiplexing element in the present application may be a media segment demultiplexing (Media Segment Demux) element. It should be noted that establishing the first media asset track according to the first media stream corresponding to the first channel tag requires one second demultiplexing element to demultiplex it separately, and establishing the second media asset track according to the second media stream corresponding to the second channel tag requires another second demultiplexing element to demultiplex it separately.
And the input selection element is used for synchronously activating the first media resource track and the second media resource track.
In some embodiments, the input selection element is further configured to register the calling function and send the calling function to the receiver element; it should be noted that, when the two second demultiplexing elements demultiplex the first media stream and the second media stream respectively, the first media stream and the second media stream are input to the input selecting element at the same time, at this time, the display device activates the first media track where the first media stream is located, and invokes the second media stream by calling a function to activate the second media track where the second media stream is located.
A receiver element for fusing the first media stream with the second media stream according to the presentation time stamp (PTS), and transmitting the first media stream and the second media stream to the decoding element. In some embodiments, the receiver element is further configured to, after receiving the calling function, call the second media stream according to the calling function so that the second media asset track is activated in synchronization with the first media asset track; and, after the first media asset track and the second media asset track are synchronously activated, fuse the first media stream and the second media stream according to the presentation time stamp (PTS).
And a decoding element for receiving the first media stream and the second media stream to synchronously decode the first media stream and the second media stream.
The specific implementation of each element in the embodiment of the present application corresponds to the specific implementation in the display device, and will not be described herein.
The embodiment of the application also discloses a media asset playing method, which comprises the following steps:
acquiring a media presentation description file of the media asset file, wherein the media presentation description file is used for determining a video coding format contained in the media asset file; acquiring a first description tag and a first channel tag according to the media presentation description file, wherein the first description tag is used for representing that the media asset file comprises extension layer data, and the extension layer data is used for representing that the video coding format can be decoded to obtain the media asset file with ultra-high-definition resolution; inserting the first description tag into the first channel tag to convert the first description tag into a second channel tag; establishing a first media asset track according to a first media stream corresponding to the first channel tag, and establishing a second media asset track according to a second media stream corresponding to the second channel tag, the first media stream and the second media stream being respectively used for encoding media asset files of different versions; and synchronously activating the first media asset track and the second media asset track so as to synchronously decode the first media stream and the second media stream to play the media asset file.
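The steps above can be condensed into one sketch that parses a media presentation description, splits base-layer and extension-layer description tags between the two channel tags, and reports one media asset track per channel tag. The `lhe1` test comes from the description above; the element names, sample data, and function name are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

SAMPLE_MPD = """<MPD><Period><AdaptationSet>
  <Representation id="base" codecs="hev1"/>
  <Representation id="enh" codecs="lhe1"/>
</AdaptationSet></Period></MPD>"""

def media_playback_plan(mpd_text):
    """Group description tags into the two channel tags of the method:
    base-layer tags feed the first media asset track, extension-layer
    tags (codecs containing "lhe1") feed the second."""
    root = ET.fromstring(mpd_text)
    first_track, second_track = [], []
    for rep in root.iter("Representation"):
        target = second_track if "lhe1" in rep.get("codecs", "") else first_track
        target.append(rep.get("id"))
    # Both tracks must then be activated together for synchronous decoding.
    return {"first_track": first_track, "second_track": second_track}
```

The resulting plan is what the demultiplexing and activation steps operate on: one track per channel tag, decoded in lockstep.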
The display device, the media asset playing device and the media asset playing method can modify the media presentation description file, establish the first media asset track and the second media asset track according to the modified media presentation description file, respectively de-multiplex the first media asset track and the second media asset track, and finally fuse the first media stream and the second media stream in the first media asset track and the second media asset track and decode at the same time, so that the display device can play the media asset file with ultra-high definition and improve user experience.
The technical solution shown in the embodiments of the present application can, in specific implementations, be backward compatible to play media assets in other video coding formats, and thus has good compatibility.
It should be noted that the technical solution shown above is applicable to the ROUTE (Real-time Object delivery over Unidirectional Transport)/DASH protocol in the ATSC 3.0 standard, but the technical solution of the present application is not limited to that protocol. In specific applications, the technical solution provided by the present application is universal and can also be applied to the field of real-time communication; for example, by integrating the media asset playback apparatus of the present application into a video conferencing application, a user can watch media assets at ultra-high-definition resolution over a relatively low network bandwidth.
The foregoing detailed description of the embodiments of the present application further illustrates the purposes, technical solutions and advantageous effects of the embodiments of the present application, and it should be understood that the foregoing is merely a specific implementation of the embodiments of the present application, and is not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. A display device, characterized by comprising:
a display;
a controller configured to:
receiving a control instruction sent by a user and used for acquiring a media file;
responding to the control instruction, acquiring a media presentation description file of the media asset file, wherein the media presentation description file is used for determining a video coding format contained in the media asset file;
acquiring a first description tag and a first channel tag according to the media presentation description file, wherein the first description tag is used for representing that the media asset file comprises extension layer data, and the extension layer data is used for representing that the video coding format can be decoded to obtain the media asset file with ultra-high-definition resolution;
inserting the first description tag into the first channel tag to convert the first description tag into a second channel tag;
establishing a first media asset track according to a first media stream corresponding to the first channel tag, and establishing a second media asset track according to a second media stream corresponding to the second channel tag; wherein the first media stream and the second media stream respectively carry different encoded versions of the media asset file;
and synchronously activating the first media asset track and the second media asset track so as to synchronously decode the first media stream and the second media stream to play the media asset file.
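As a purely illustrative aid (not part of the claimed implementation), the tag-conversion step of claim 1 can be sketched by modeling the "description tag" as a DASH EssentialProperty element and the "channel tag" as an AdaptationSet; the element names, the sample MPD, and the scalability scheme URI below are assumptions, not taken from the application:

```python
# Hedged sketch: the claim's "description tag" / "channel tag" are modeled
# here as DASH EssentialProperty / AdaptationSet elements; the tag names and
# the scalability scheme URI are illustrative assumptions.
import xml.etree.ElementTree as ET

SCALABILITY_SCHEME = "urn:example:scalability:2022"  # hypothetical identifier

MPD = """<MPD>
  <Period>
    <AdaptationSet id="base">
      <Representation id="v0" codecs="hvc1"/>
      <Representation id="v1" codecs="lhv1">
        <EssentialProperty schemeIdUri="urn:example:scalability:2022"/>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>"""

def promote_extension_layer(mpd_text):
    """Move any Representation carrying the extension-layer descriptor out of
    the first AdaptationSet into a new, second AdaptationSet (the "second
    channel tag")."""
    root = ET.fromstring(mpd_text)
    period = root.find("Period")
    first_set = period.find("AdaptationSet")
    enhanced = []
    for rep in list(first_set.findall("Representation")):
        prop = rep.find("EssentialProperty")
        if prop is not None and prop.get("schemeIdUri") == SCALABILITY_SCHEME:
            first_set.remove(rep)
            enhanced.append(rep)
    if enhanced:
        second_set = ET.SubElement(period, "AdaptationSet", {"id": "enhancement"})
        second_set.extend(enhanced)
    return root

root = promote_extension_layer(MPD)
sets = root.find("Period").findall("AdaptationSet")
```

After the promotion, the modified description file exposes two adaptation sets, from which the two media asset tracks of claim 1 can be established.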
2. The display device of claim 1, wherein the controller performs the step of obtaining a first description tag and a first channel tag from the media presentation description file, and is further configured to:
judging whether the media presentation description file contains a high-efficiency video coding format;
if the media presentation description file contains the high-efficiency video coding format, acquiring at least one candidate description tag in the media presentation description file; the candidate description tag is used for describing the code stream, the resolution and the encoder type of the media presentation description file;
and if the media presentation description file does not contain the high-efficiency video coding format, decoding the media presentation description file to play the media asset file.
3. The display device of claim 2, wherein the controller performs the step of obtaining at least one candidate description tag in the media presentation description file, and is further configured to:
traversing the candidate description tags;
judging whether a candidate description tag comprises a first identifier; the first identifier is used for identifying that the candidate description tag comprises extension layer data;
and if the candidate description tag comprises the first identifier, determining the candidate description tag as the first description tag.
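The codec check of claim 2 and the traversal of claim 3 can be illustrated, purely schematically, as follows; the HEVC-family codec strings follow RFC 6381-style sample entries, and the "extension-layer" identifier value is a placeholder assumption, not a value defined by the application:

```python
# Illustrative sketch of claims 2-3 (not the claimed implementation).
# Codec strings and the "first identifier" value are assumptions.
HEVC_PREFIXES = ("hvc1", "hev1", "lhv1", "lhe1")

def find_first_description_tag(candidates):
    """Traverse the candidate description tags and return the first one whose
    identifier marks extension-layer data, or None if there is none."""
    for tag in candidates:
        if tag.get("identifier") == "extension-layer":
            return tag
    return None

mpd_info = {
    "codecs": ["hvc1.1.6.L120", "lhv1.2.4.L120"],   # base + extension layer
    "candidates": [
        {"name": "base", "identifier": "base-layer"},
        {"name": "enhancement", "identifier": "extension-layer"},
    ],
}

# Claim 2: only collect candidate tags when an HEVC-family codec is present;
# otherwise the file would be decoded and played directly.
uses_hevc = any(c.split(".")[0] in HEVC_PREFIXES for c in mpd_info["codecs"])
first_description_tag = (
    find_first_description_tag(mpd_info["candidates"]) if uses_hevc else None
)
```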
4. The display device of claim 3, wherein the controller is further configured to:
after the candidate description tag is determined to be the first description tag, storing the first description tag into a storage space; the storage space is used for caching the first description tag;
triggering a modification instruction for modifying the media presentation description file when the first description tag is stored in the storage space;
and responding to the modification instruction, executing the step of inserting the first description tag into the first channel tag so as to convert the first description tag into a second channel tag, and generating a modified media presentation description file.
5. The display device of claim 4, wherein the controller performs the steps of establishing a first media asset track from a first media stream corresponding to the first channel tag and establishing a second media asset track from a second media stream corresponding to the second channel tag, and is further configured to:
analyzing the modified media presentation description file to obtain the first channel tag and the second channel tag;
establishing a first media asset track according to a first media stream corresponding to the first channel tag, and establishing a second media asset track according to a second media stream corresponding to the second channel tag; wherein the first media stream may be retrieved from a local store or from a server, and the second media stream may be retrieved from a local store or from a server.
6. The display device of claim 5, wherein the controller performs the step of synchronously activating the first media asset track and the second media asset track to synchronously decode the first media stream and the second media stream to play the media asset file, further configured to:
after the second media asset track is established according to the second media stream corresponding to the second channel tag, registering a calling function in the second media asset track;
invoking the second media stream according to the calling function so as to enable the second media asset track and the first media asset track to be synchronously activated;
and after the first media asset track and the second media asset track are synchronously activated, fusing the first media stream and the second media stream according to a preset time reference so that the first media stream and the second media stream are synchronously decoded to play the media asset file.
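The fusing step of claim 6 can be illustrated schematically as interleaving base-layer and extension-layer samples by presentation timestamp, so that both tracks reach the decoder as one ordered sequence; the `(pts_ms, payload)` sample shape and the sample data are assumptions for illustration only:

```python
# Illustrative sketch of "fusing according to a preset time reference":
# two PTS-ordered sample streams are merged into one decode-ordered stream.
import heapq

base_samples = [(0, "base-0"), (40, "base-1"), (80, "base-2")]  # (pts_ms, data)
enh_samples = [(0, "enh-0"), (40, "enh-1"), (80, "enh-2")]

def fuse_tracks(base, enhancement):
    """Merge two PTS-ordered sample streams; heapq.merge keeps samples with
    equal timestamps in input order (base layer first)."""
    return list(heapq.merge(base, enhancement, key=lambda sample: sample[0]))

fused = fuse_tracks(base_samples, enh_samples)
```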
7. A media asset playing device, applied to the display device of any one of claims 1 to 6, characterized by comprising:
a data source element, used for acquiring a media presentation description file of a media asset file, wherein the media presentation description file is used for determining a video coding format contained in the media asset file;
a first demultiplexing element, used for acquiring a first description tag and a first channel tag according to the media presentation description file, wherein the first description tag is used for representing that the media asset file comprises extension layer data, and the extension layer data is used for representing that the video coding format can be decoded to obtain the media asset file with ultra-high-definition resolution; and inserting the first description tag into the first channel tag to convert the first description tag into a second channel tag;
a second demultiplexing element, used for establishing a first media asset track according to a first media stream corresponding to the first channel tag and establishing a second media asset track according to a second media stream corresponding to the second channel tag; wherein the first media stream and the second media stream respectively carry different encoded versions of the media asset file;
an input selection element for synchronously activating the first and second media asset tracks;
a receiver element, used for fusing the first media stream with the second media stream according to a preset time reference, and transmitting the first media stream and the second media stream to a decoding element;
and a decoding element for receiving the first media stream and the second media stream to synchronously decode the first media stream and the second media stream.
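The element chain of claim 7 (data source → first demultiplexer → second demultiplexer → input selection → receiver → decoder) can be depicted, purely schematically, as a pipeline of callables; every function name and payload below is a placeholder assumption, not an interface defined by the application:

```python
# Schematic pipeline mirroring the claim-7 element chain; all payloads are
# placeholders used only to show how data flows between the elements.
def data_source(url):
    return {"mpd": f"<mpd for {url}>"}

def first_demux(ctx):
    ctx["first_description_tag"] = "desc-ext"       # found in the MPD
    ctx["second_channel_tag"] = "channel+desc-ext"  # after tag conversion
    return ctx

def second_demux(ctx):
    ctx["tracks"] = ["base-track", "enhancement-track"]
    return ctx

def input_selection(ctx):
    ctx["activated"] = tuple(ctx["tracks"])  # both tracks activated together
    return ctx

def receiver(ctx):
    ctx["fused"] = "+".join(ctx["activated"])  # fuse by shared time reference
    return ctx

def decoder(ctx):
    return f"decoded({ctx['fused']})"

ctx = data_source("http://example/asset")
for element in (first_demux, second_demux, input_selection, receiver):
    ctx = element(ctx)
result = decoder(ctx)
```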
8. The media asset playing device of claim 7, wherein,
the first demultiplexing element is further configured to: judging whether the media presentation description file contains a high-efficiency video coding format; if the media presentation description file contains the high-efficiency video coding format, acquiring at least one candidate description tag in the media presentation description file; the candidate description tag is used for describing the code stream, the resolution and the encoder type of the media presentation description file; if the media presentation description file does not contain the high-efficiency video coding format, decoding the media presentation description file to play the media asset file; traversing the candidate description tags; judging whether a candidate description tag comprises a first identifier; the first identifier is used for identifying that the candidate description tag comprises extension layer data; and if the candidate description tag comprises the first identifier, determining the candidate description tag as the first description tag.
9. The media asset playing device of claim 7, wherein,
the input selection element is further configured to register a calling function and send the calling function to the receiver element;
the receiver element is further configured to call the second media stream according to the calling function after receiving the calling function, so as to enable the second media asset track to be activated synchronously with the first media asset track; and after the first media asset track and the second media asset track are synchronously activated, fusing the first media stream and the second media stream according to a preset time reference.
10. A media asset playing method, characterized by comprising the following steps:
acquiring a media presentation description file of a media asset file, wherein the media presentation description file is used for determining a video coding format contained in the media asset file;
acquiring a first description tag and a first channel tag according to the media presentation description file, wherein the first description tag is used for representing that the media asset file comprises extension layer data, and the extension layer data is used for representing that the video coding format can be decoded to obtain the media asset file with ultra-high-definition resolution;
inserting the first description tag into the first channel tag to convert the first description tag into a second channel tag;
establishing a first media asset track according to a first media stream corresponding to the first channel tag, and establishing a second media asset track according to a second media stream corresponding to the second channel tag; wherein the first media stream and the second media stream respectively carry different encoded versions of the media asset file;
and synchronously activating the first media asset track and the second media asset track so as to synchronously decode the first media stream and the second media stream to play the media asset file.
CN202210370079.1A 2022-04-08 2022-04-08 Display device, media asset playing device and media asset playing method Pending CN116939263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210370079.1A CN116939263A (en) 2022-04-08 2022-04-08 Display device, media asset playing device and media asset playing method

Publications (1)

Publication Number Publication Date
CN116939263A 2023-10-24

Family

ID=88374406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210370079.1A Pending CN116939263A (en) 2022-04-08 2022-04-08 Display device, media asset playing device and media asset playing method

Country Status (1)

Country Link
CN (1) CN116939263A (en)

Similar Documents

Publication Publication Date Title
CN114302190B (en) Display equipment and image quality adjusting method
WO2021169141A1 (en) Method for displaying audio track language on display device and display device
KR20100127240A (en) Using triggers with video for interactive content identification
WO2020098504A1 (en) Video switching control method and display device
CN113630654B (en) Display equipment and media resource pushing method
WO2021109354A1 (en) Media stream data playback method and device
CN112601117B (en) Display device and content presentation method
CN112153406A (en) Live broadcast data generation method, display equipment and server
CN114095778B (en) Audio hard decoding method of application-level player and display device
CN113453052B (en) Sound and picture synchronization method and display device
CN114095769B (en) Live broadcast low-delay processing method of application-level player and display device
CN115209208B (en) Video cyclic playing processing method and device
CN116939263A (en) Display device, media asset playing device and media asset playing method
CN114630101A (en) Display device, VR device and display control method of virtual reality application content
CN111629250A (en) Display device and video playing method
CN112911371A (en) Double-channel video resource playing method and display equipment
CN115174991B (en) Display equipment and video playing method
CN113038221B (en) Double-channel video playing method and display equipment
CN113038193B (en) Method for automatically repairing asynchronous audio and video and display equipment
CN114339344B (en) Intelligent device and video recording method
CN113099308B (en) Content display method, display equipment and image collector
CN113490013B (en) Server and data request method
CN112887769B (en) Display equipment
CN116939295A (en) Display equipment and method for dynamically adjusting utilization rate of controller
CN117119234A (en) Display equipment and media asset playing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination