CN111526414B - Subtitle display method and display equipment - Google Patents

Info

Publication number
CN111526414B
CN202010362553.7A (application) · CN111526414B (grant publication)
Authority
CN
China
Prior art keywords
subtitle
processing
time
display
caption
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202010362553.7A
Other languages
Chinese (zh)
Other versions
CN111526414A (en)
Inventor
商潮
Current Assignee (the listed assignee may be inaccurate)
Vidaa Netherlands International Holdings BV
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd filed Critical Qingdao Hisense Media Network Technology Co Ltd
Priority to CN202010362553.7A priority Critical patent/CN111526414B/en
Publication of CN111526414A publication Critical patent/CN111526414A/en
Application granted granted Critical
Publication of CN111526414B publication Critical patent/CN111526414B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses a subtitle display method and a display device that estimate the processing duration of a subtitle to be processed from the processing durations of already-processed subtitles and, using that estimate as a time offset, begin processing the subtitle before its display start time, thereby reducing the subtitle display error. The method comprises the following steps: in the video playing process, acquiring the data of a subtitle from a queue storing subtitle data, wherein the data of the subtitle comprises its display start time; acquiring the processing durations of the first M subtitles from a time list recording the processing durations of processed subtitles, and calculating the duration required for processing the subtitle; subtracting the calculated duration from the display start time of the subtitle to obtain the processing start time of the subtitle; and when the video playing time reaches the processing start time, rendering and drawing the subtitle to display it on the display.

Description

Subtitle display method and display equipment
Technical Field
The present application relates to the field of display technologies, and in particular, to a subtitle display method and a display device.
Background
With the development of multimedia technology, the display device can not only play audio and video, but also display subtitles.
In the conventional subtitle processing flow, when the video playing time reaches the subtitle's display start time, the subtitle is rendered and drawn; only after drawing completes can the user see the subtitle on the display. Because rendering and drawing take a relatively long time, subtitle display is delayed and the display error is large.
Disclosure of Invention
The application provides a subtitle display method and display equipment that estimate the processing duration of a subtitle to be processed from the processing durations of already-processed subtitles and, using that estimate as a time offset, begin processing the subtitle before its display start time, thereby reducing the subtitle display error.
To achieve the above object of the invention, the present application provides a display device comprising:
a display;
a controller for performing:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
Acquiring the processing time lengths of the first M subtitles from a time list for recording the processing time lengths of the processed subtitles, wherein the processing time lengths comprise the time lengths for rendering and drawing the subtitles;
calculating the time length required for processing the subtitle according to the processing time lengths of the first M subtitles;
subtracting the calculated time length required for processing the subtitle from the display start time of the subtitle to obtain the processing start time of the subtitle;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
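The scheduling in the steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the window size `M`, the use of a simple average, and taking the most recent M durations are all assumptions, since the claims do not fix how the M processing durations are combined.

```python
from collections import deque

M = 5  # hypothetical window size; the application leaves M unspecified

def estimate_processing_ms(time_list, m=M):
    """Average the durations (ms) of the last m processed subtitles."""
    recent = list(time_list)[-m:]
    if not recent:
        return 0  # no history yet: fall back to no offset
    return sum(recent) // len(recent)

def processing_start_time(display_start_ms, time_list, m=M):
    """Shift the display start time earlier by the estimated duration."""
    return display_start_ms - estimate_processing_ms(time_list, m)

# Example: the last processed subtitles took 40, 60, and 50 ms,
# and the next subtitle is due on screen at t = 10 s.
history = deque([40, 60, 50], maxlen=100)
start = processing_start_time(10_000, history)  # begin rendering early
```

When the video's playing clock reaches `start`, rendering and drawing begin, so the finished subtitle is ready at its nominal display start time.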
The present application also provides a display device, including:
a display;
a controller for performing:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
acquiring processing durations of the first M subtitles from a time list for recording the processing durations of the processed subtitles, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating the time length required for processing the subtitle according to the processing time lengths of the first M subtitles;
subtracting the calculated time length required for processing the subtitle from the display start time of the subtitle to obtain the processing start time of the subtitle;
when the playing time of the video reaches the processing start time of the subtitle, rendering the subtitle and recording the rendering start time of the subtitle;
drawing the rendered subtitle to display the subtitle on a display, and recording the drawing end time of the subtitle;
subtracting the rendering start time from the drawing end time to obtain the actual processing duration of the subtitle;
and adding the actual processing duration of the subtitle into the time list.
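The render-and-record feedback loop above might be sketched like this; `render` and `draw` are hypothetical stand-ins for the device's actual rendering and drawing routines, and the millisecond bookkeeping is illustrative.

```python
import time
from collections import deque

def process_subtitle(subtitle, render, draw, time_list):
    """Render and draw one subtitle, then feed the measured duration
    back into the history used to estimate future subtitles."""
    render_start = time.monotonic()   # rendering start time
    bitmap = render(subtitle)         # render the subtitle text
    draw(bitmap)                      # draw it on the display
    draw_end = time.monotonic()       # drawing end time
    actual_ms = int((draw_end - render_start) * 1000)
    time_list.append(actual_ms)       # actual processing duration
    return actual_ms

# Usage with trivial stand-in render/draw callables:
history = deque(maxlen=100)
process_subtitle("Hello", render=lambda s: s.upper(),
                 draw=lambda b: None, time_list=history)
```

Because each processed subtitle appends its measured duration, the estimate tracks the current load of the platform rather than a fixed constant.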
The present application also provides a display device, including:
a display;
a controller for performing:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises display starting time of the subtitle and a style identification of the subtitle;
Searching a time list corresponding to the style identification according to the style identification of the subtitle;
acquiring the processing durations of the first M subtitles from the searched time list, wherein the processing durations comprise the durations for rendering and drawing the subtitles;
calculating the time length required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the time length required for processing the subtitle from the display start time of the subtitle to obtain the processing start time of the subtitle;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
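Keeping one time list per display style, as described above, could look like the following sketch. The string style identifiers and the per-style estimate are assumptions for illustration; the translation also leaves ambiguous whether "the first M subtitles" means the earliest or the most recent entries, and this sketch uses the most recent M.

```python
from collections import defaultdict, deque

# One duration history per style identification (assumed to be keyed
# by a hypothetical string identifier such as "default" or "karaoke").
style_time_lists = defaultdict(lambda: deque(maxlen=100))

def record_duration(style_id, duration_ms):
    """Add a processed subtitle's duration to its style's time list."""
    style_time_lists[style_id].append(duration_ms)

def estimate_for_style(style_id, m=5):
    """Estimate the processing duration from the style's own list."""
    recent = list(style_time_lists[style_id])[-m:]
    return sum(recent) // len(recent) if recent else 0
```

Separating histories by style keeps a heavyweight style (e.g. outlined or animated text) from inflating the estimate for a lightweight one.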
The application also provides a subtitle display method, which comprises the following steps:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
acquiring processing durations of the first M subtitles from a time list for recording the processing durations of the processed subtitles, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating the time length required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the calculated time length required for processing the subtitle from the display start time of the subtitle to obtain the processing start time of the subtitle;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
The application also provides a subtitle display method, which comprises the following steps:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
acquiring processing durations of the first M subtitles from a time list for recording the processing durations of the processed subtitles, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating the time length required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the calculated time length required for processing the subtitle from the display start time of the subtitle to obtain the processing start time of the subtitle;
when the playing time of the video reaches the processing start time of the subtitle, rendering the subtitle and recording the rendering start time of the subtitle;
drawing the rendered subtitle to display the subtitle on a display, and recording the drawing end time of the subtitle;
subtracting the rendering start time from the drawing end time to obtain the actual processing duration of the subtitle;
and adding the actual processing duration of the subtitle into the time list.
The application also provides a subtitle display method, which comprises the following steps:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises display starting time of the subtitle and a style identification of the subtitle;
searching a time list corresponding to the style identification according to the style identification of the subtitle;
acquiring the processing durations of the first M subtitles from the searched time list, wherein the processing durations comprise the durations for rendering and drawing the subtitles;
calculating the time length required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the time length required for processing the subtitle from the display start time of the subtitle to obtain the processing start time of the subtitle;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
In the above embodiments, the display device estimates the processing duration of the current subtitle to be processed from the processing durations of already-processed subtitles, and starts processing the subtitle earlier than its display start time by the estimated duration, so that the subtitle can be displayed on time when its display start time is reached, reducing the subtitle display error. In addition, because the processing duration is estimated dynamically from historical data (the processing durations of processed subtitles), the method is not affected by factors such as subtitle format and platform performance, and therefore has a wider application range.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1A is a schematic diagram illustrating an operation scenario between a display device and a control apparatus;
fig. 1B is a block diagram schematically illustrating a configuration of the control apparatus 100 in fig. 1A;
fig. 1C is a block diagram schematically illustrating a configuration of the display device 200 in fig. 1A;
FIG. 1D is a block diagram illustrating an architectural configuration of an operating system in memory of display device 200;
fig. 2 exemplarily illustrates a conventional subtitle processing flow;
fig. 3 illustrates an example subtitle processing flow;
fig. 4 illustrates an example subtitle processing flow;
fig. 5 exemplarily illustrates the time lists corresponding to different display styles.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the exemplary embodiments described are only a part of the embodiments of the present application, and not all the embodiments.
All other embodiments obtained by a person of ordinary skill in the art from the exemplary embodiments shown in the present application without inventive effort fall within the scope of protection of the present application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can also be utilized independently and separately from the other aspects to constitute a complete solution.
The terms "comprises" and "comprising," and any variations thereof, as used herein, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The term "gesture" as used in this application refers to a user's behavior through a change in hand shape or an action such as hand motion to convey a desired idea, action, purpose, or result.
Fig. 1A is a schematic diagram illustrating an operation scenario between a display device and a control apparatus. As shown in fig. 1A, the control apparatus 100 and the display device 200 may communicate with each other in a wired or wireless manner.
The control apparatus 100 is configured to control the display device 200: it receives an operation instruction input by the user and converts the instruction into one that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200. For example: the user operates a channel up/down key on the control device 100, and the display device 200 responds to the channel up/down operation.
The control device 100 may be a remote controller 100A, which includes infrared protocol communication or bluetooth protocol communication, and other short-distance communication methods, etc. to control the display apparatus 200 in a wireless or other wired manner. The user may input a user instruction through a key on a remote controller, voice input, control panel input, etc., to control the display apparatus 200. Such as: the user can input a corresponding control command through a volume up/down key, a channel control key, up/down/left/right moving keys, a voice input key, a menu key, a power on/off key, etc. on the remote controller, to implement the function of controlling the display device 200.
The control device 100 may also be an intelligent device, such as a mobile terminal 100B, a tablet computer, a notebook computer, and the like. For example, the display device 200 is controlled using an application program running on the smart device. The application program may provide various controls to a user through an intuitive User Interface (UI) on a screen associated with the smart device through configuration.
For example, the mobile terminal 100B may install a software application with the display device 200, implement connection communication through a network communication protocol, and implement the purpose of one-to-one control operation and data communication. Such as: the mobile terminal 100B may be caused to establish a control instruction protocol with the display device 200 to implement functions of physical keys as arranged in the remote control 100A by operating various function keys or virtual buttons of a user interface provided on the mobile terminal 100B. The audio and video content displayed on the mobile terminal 100B may also be transmitted to the display device 200, so as to implement a synchronous display function.
The display apparatus 200 may provide a smart network television function that combines a broadcast receiving function with computer support functions. The display device may be implemented as a digital television, a web television, an Internet Protocol Television (IPTV), or the like. The display device 200 may be a liquid crystal display, an organic light emitting display, or a projection device; the specific display device type, size, and resolution are not limited.
The display apparatus 200 also performs data communication with the server 300 through various communication means. The display apparatus 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 300 may provide various contents and interactions to the display apparatus 200. By way of example, the display device 200 may send and receive information such as: receiving Electronic Program Guide (EPG) data, receiving software program updates, or accessing a remotely stored digital media library. The server 300 may be one group or multiple groups, and may be one or more types of servers. The server 300 also provides other web service contents such as video on demand and advertisement services.
Fig. 1B is a block diagram illustrating the configuration of the control device 100. As shown in fig. 1B, the control device 100 includes a controller 110, a memory 120, a communicator 130, a user input interface 140, an output interface 150, and a power supply 160.
The controller 110 includes a Random Access Memory (RAM)111, a Read Only Memory (ROM)112, a processor 113, a communication interface, and a communication bus. The controller 110 is used to control the operation of the control device 100, as well as the internal components of the communication cooperation, external and internal data processing functions.
Illustratively, when an interaction of a user pressing a key disposed on the remote controller 100A or an interaction of touching a touch panel disposed on the remote controller 100A is detected, the controller 110 may control to generate a signal corresponding to the detected interaction and transmit the signal to the display device 200.
And a memory 120 for storing various operation programs, data and applications for driving and controlling the control apparatus 100 under the control of the controller 110. The memory 120 may store various control signal commands input by a user.
The communicator 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. For example: the control apparatus 100 transmits a control signal (e.g., a touch signal or a button signal) to the display device 200 via the communicator 130, and may likewise receive signals transmitted by the display device 200 via the communicator 130. The communicator 130 may include an infrared signal interface 131 and a radio frequency signal interface 132. For example: when the infrared signal interface is used, a user input instruction is converted into an infrared control signal according to an infrared control protocol and sent to the display device 200 through the infrared sending module. As another example: when the radio frequency signal interface is used, a user input instruction is converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmitting terminal.
The user input interface 140 may include at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like, so that a user can input a user instruction regarding controlling the display apparatus 200 to the control apparatus 100 through voice, touch, gesture, press, and the like.
The output interface 150 outputs a user instruction received by the user input interface 140 to the display apparatus 200, or outputs an image or voice signal received by the display apparatus 200. Here, the output interface 150 may include an LED interface 151, a vibration interface 152 generating vibration, a sound output interface 153 outputting sound, a display 154 outputting an image, and the like. For example, the remote controller 100A may receive an output signal such as audio, video, or data from the output interface 150, and display the output signal in the form of an image on the display 154, in the form of audio on the sound output interface 153, or in the form of vibration on the vibration interface 152.
The power supply 160 provides operating power to each element of the control device 100 under the control of the controller 110, and may take the form of a battery and associated control circuitry.
A hardware configuration block diagram of the display device 200 is exemplarily illustrated in fig. 1C. As shown in fig. 1C, the display apparatus 200 may include a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, an audio processor 280, an audio output interface 285, and a power supply 290.
The tuner demodulator 210 receives the broadcast television signal in a wired or wireless manner, may perform processing such as amplification, mixing, and resonance, and demodulates, from the plurality of wireless or wired broadcast television signals, the audio/video signal carried on the frequency of the television channel selected by the user, as well as additional information (e.g., EPG data).
The tuner demodulator 210 responds, under the control of the controller 250, to the television channel frequency selected by the user and the television signal carried on that frequency.
The tuner demodulator 210 can receive a television signal in various ways according to the broadcasting system of the television signal, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; and according to different modulation types, a digital modulation mode or an analog modulation mode can be adopted; and can demodulate the analog signal and the digital signal according to the different kinds of the received television signals.
In other exemplary embodiments, the tuning demodulator 210 may also be in an external device, such as an external set-top box. In this way, the set-top box outputs a television signal after modulation and demodulation, and inputs the television signal into the display apparatus 200 through the external device interface 240.
The communicator 220 is a component for communicating with an external device or an external server according to various communication protocol types. For example, the display apparatus 200 may transmit content data to an external apparatus connected via the communicator 220, or browse and download content data from an external apparatus connected via the communicator 220. The communicator 220 may include a network communication protocol module or a near field communication protocol module, such as a WIFI module 221, a bluetooth communication protocol module 222, and a wired ethernet communication protocol module 223, so that the communicator 220 may receive a control signal of the control device 100 according to the control of the controller 250 and implement the control signal as a WIFI signal, a bluetooth signal, a radio frequency signal, and the like.
The detector 230 is a component of the display apparatus 200 for collecting signals of an external environment or interaction with the outside. The detector 230 may include a sound collector 231, such as a microphone, which may be used to receive a user's sound, such as a voice signal of a control instruction of the user to control the display device 200; alternatively, ambient sounds may be collected that identify the type of ambient scene, enabling the display device 200 to adapt to ambient noise.
In some other exemplary embodiments, the detector 230, which may further include an image collector 232, such as a camera, a video camera, etc., may be configured to collect external environment scenes to adaptively change the display parameters of the display device 200; and the function of acquiring the attribute of the user or interacting gestures with the user so as to realize the interaction between the display equipment and the user.
In some other exemplary embodiments, the detector 230 may further include a light receiver for collecting the intensity of the ambient light to adapt to the display parameter variation of the display device 200.
In other exemplary embodiments, the detector 230 may further include a temperature sensor, such as by sensing an ambient temperature, and the display device 200 may adaptively adjust a display color temperature of the image. For example, when the temperature is higher, the display device 200 may be adjusted to display a color temperature of the image that is colder; when the temperature is lower, the display device 200 can be adjusted to display the image with a warmer color temperature.
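As a rough illustration of the adaptation just described (a higher ambient temperature maps to a colder, higher-Kelvin image and vice versa): the thresholds and Kelvin values below are illustrative assumptions, not values from the application.

```python
def adapt_color_temperature(ambient_celsius,
                            warm_k=6500, neutral_k=8000, cool_k=9300,
                            low=18, high=28):
    """Map ambient temperature (Celsius) to a display color temperature
    (Kelvin). Hotter room -> colder image; cooler room -> warmer image.
    All thresholds and Kelvin values are hypothetical."""
    if ambient_celsius >= high:
        return cool_k      # hot room: cold (bluish) image
    if ambient_celsius <= low:
        return warm_k      # cold room: warm (reddish) image
    return neutral_k       # comfortable range: neutral white point
```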
The external device interface 240 is a component for providing the controller 250 to control data transmission between the display apparatus 200 and an external apparatus. The external device interface 240 may be connected to an external apparatus such as a set-top box, a game device, a notebook computer, etc. in a wired/wireless manner, and may receive data such as a video signal (e.g., moving image), an audio signal (e.g., music), additional information (e.g., EPG), etc. of the external apparatus.
The external device interface 240 may include: a High Definition Multimedia Interface (HDMI) terminal 241, a Composite Video Blanking Sync (CVBS) terminal 242, an analog or digital Component terminal 243, a Universal Serial Bus (USB) terminal 244, a Component terminal (not shown), a red, green, blue (RGB) terminal (not shown), and the like.
The controller 250 controls the operation of the display device 200 and responds to the operation of the user by running various software control programs (such as an operating system and various application programs) stored on the memory 260.
As shown in fig. 1C, the controller 250 includes a Random Access Memory (RAM)251, a Read Only Memory (ROM)252, a graphics processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. The RAM251, the ROM252, the graphic processor 253, and the CPU processor 254 are connected to each other through a communication bus 256.
The ROM 252 stores various system startup instructions. When the display apparatus 200 receives the power-on signal and starts up, the CPU processor 254 executes the system boot instructions in the ROM 252, copies the operating system stored in the memory 260 to the RAM 251, and begins running the operating system. After the operating system has started, the CPU processor 254 copies the various application programs in the memory 260 to the RAM 251 and then launches them.
And a graphic processor 253 for generating various graphic objects such as icons, operation menus, and user input instruction display graphics, etc. The graphic processor 253 may include an operator for performing an operation by receiving various interactive instructions input by a user, and further displaying various objects according to display attributes; and a renderer for generating various objects based on the operator and displaying the rendered result on the display 275.
A CPU processor 254 for executing operating system and application program instructions stored in memory 260. And according to the received user input instruction, processing of various application programs, data and contents is executed so as to finally display and play various audio-video contents.
In some example embodiments, the CPU processor 254 may comprise a plurality of processors. The plurality of processors may include one main processor and a plurality of or one sub-processor. A main processor for performing some initialization operations of the display apparatus 200 in the display apparatus preload mode and/or operations of displaying a screen in the normal mode. A plurality of or one sub-processor for performing an operation in a state of a standby mode or the like of the display apparatus.
The communication interface 255 may include a first interface to an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a User input command for selecting a Graphical User Interface (GUI) object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the User input command.
The GUI refers to a user interface related to computer operations displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the display device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
Where the object may be any one of the selectable objects, such as a hyperlink or an icon. The operation related to the selected object is, for example, an operation of displaying a link to a hyperlink page, document, image, or the like, or an operation of executing a program corresponding to the object. The user input command for selecting the GUI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch panel, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
The memory 260 stores various types of data, software programs, or applications for driving and controlling the operation of the display device 200. The memory 260 may include volatile and/or nonvolatile memory. Here, the term "memory" covers the memory 260, the RAM 251 and the ROM 252 of the controller 250, and any memory card in the display device 200.
In some embodiments, the memory 260 is specifically used for storing an operating program for driving the controller 250 of the display device 200; storing various application programs built in the display apparatus 200 and downloaded by a user from an external apparatus; data such as a visual effect image for configuring a GUI provided by the display 275, various objects related to the GUI, and a selector for selecting the GUI object is stored.
In some embodiments, memory 260 is specifically configured to store drivers for tuner demodulator 210, communicator 220, detector 230, external device interface 240, video processor 270, display 275, audio processor 280, etc., and related data, such as external data (e.g., audio-visual data) received from the external device interface or user data (e.g., key information, voice information, touch information, etc.) received by the user interface.
In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. Illustratively, the kernel may control or manage system resources, as well as functions implemented by other programs (e.g., the middleware, APIs, or applications); at the same time, the kernel may provide an interface to allow middleware, APIs, or applications to access the controller to enable control or management of system resources.
A block diagram of the architectural configuration of the operating system in the memory of the display device 200 is illustrated in fig. 1D. The operating system architecture comprises an application layer, a middleware layer and a kernel layer from top to bottom.
The application layer contains the applications built into the system as well as non-system-level applications, and is responsible for direct interaction with the user. The application layer may include a plurality of applications, such as a setup application, a post application, a media center application, and the like. These applications may be implemented as Web applications executed on a WebKit engine, and in particular may be developed and executed based on HTML5, Cascading Style Sheets (CSS), and JavaScript.
Here, HTML (HyperText Markup Language) is the standard markup language for creating web pages. It describes web pages by means of markup tags, which are used to describe characters, graphics, animation, sound, tables, links, and the like; a browser reads an HTML document, interprets the content of the tags in the document, and displays it in the form of a web page.
CSS (Cascading Style Sheets) is a computer language used to express the style of HTML documents, and may be used to define style structures such as fonts, colors, and positions. A CSS style can be stored directly in the HTML web page or in a separate style file, enabling control over the styles in the web page.
JavaScript is a language for web page programming that can be inserted into an HTML page and interpreted and executed by a browser. The interaction logic of a Web application is implemented in JavaScript. Through the browser, JavaScript can wrap a JavaScript extension interface to communicate with the kernel layer.
the middleware layer may provide some standardized interfaces to support the operation of various environments and systems. For example, the middleware layer may be implemented as multimedia and hypermedia information coding experts group (MHEG) middleware related to data broadcasting, DLNA middleware which is middleware related to communication with an external device, middleware which provides a browser environment in which each application program in the display device operates, and the like.
The kernel layer provides core system services, such as: file management, memory management, process management, network management, system security authority management and the like. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, supplying device driver services for various hardware, such as: a display driver for the display, a camera driver for the camera, a key driver for the remote controller, a Wi-Fi driver for the Wi-Fi module, an audio driver for the audio output interface, a power management driver for the power management (PM) module, and the like.
A user interface 265 receives various user interactions. Specifically, it is used to transmit an input signal of a user to the controller 250 or transmit an output signal from the controller 250 to the user. For example, the remote controller 100A may transmit an input signal, such as a power switch signal, a channel selection signal, a volume adjustment signal, etc., input by the user to the user interface 265, and then the input signal is transferred to the controller 250 through the user interface 265; alternatively, the remote controller 100A may receive an output signal such as audio, video, or data output from the user interface 265 via the controller 250, and display the received output signal or output the received output signal in audio or vibration form.
In some embodiments, the user may enter user commands in a Graphical User Interface (GUI) displayed on the display 275, and the user interface 265 receives the user input commands through the GUI. Specifically, the user interface 265 may receive user input commands for controlling the position of a selector in the GUI to select different objects or items.
Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user interface 265 receives the user input command by recognizing the sound or gesture through the sensor.
The video processor 270 is configured to receive an external video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a video signal that is directly displayed or played on the display 275.
Illustratively, the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module demultiplexes an input audio/video data stream; for example, for an input MPEG-2 stream (based on the compression standard for digital storage media moving images and audio), the demultiplexing module demultiplexes it into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
And the image synthesis module is used for carrying out superposition mixing processing on the GUI signal input by the user or generated by the user and the video image after the zooming processing by the graphic generator so as to generate an image signal for display.
The frame rate conversion module converts the frame rate of the input video, for example converting a 60 Hz input to 120 Hz or 240 Hz; this is commonly implemented by, for example, frame interpolation.
And a display formatting module for converting the signal output by the frame rate conversion module into a signal conforming to a display format of a display, such as converting the format of the signal output by the frame rate conversion module to output an RGB data signal.
The display 275 receives the image signal from the video processor 270 and displays video content, images, and the menu manipulation interface. The displayed video content may come from the broadcast signal received by the tuner-demodulator 210, or from video content input via the communicator 220 or the external device interface 240. The display 275 also displays a user manipulation interface (UI) generated in the display apparatus 200 and used to control the display apparatus 200.
The display 275 may include a display screen assembly for presenting a picture and a driving assembly for driving the display of an image. Alternatively, if the display 275 is a projection display, it may include a projection device and a projection screen.
The audio processor 280 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing to obtain an audio signal that can be played by the speaker 286.
Illustratively, audio processor 280 may support various audio formats. Such as MPEG-2, MPEG-4, Advanced Audio Coding (AAC), high efficiency AAC (HE-AAC), and the like.
The audio output interface 285 receives the audio signal output by the audio processor 280 under the control of the controller 250. The audio output interface 285 may include a speaker 286, or an external sound output terminal 287, such as an earphone output terminal, for output to an external sound-producing device.
In other exemplary embodiments, video processor 270 may comprise one or more chips. Audio processor 280 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 270 and the audio processor 280 may be separate chips or may be integrated with the controller 250 in one or more chips.
The power supply 290, under the control of the controller 250, supplies power to the display apparatus 200 from an external power input. The power supply 290 may be a power supply circuit built into the display apparatus 200 or a power supply installed outside the display apparatus 200.
On the basis of fig. 1A to 1D described above, the controller 250 of the display apparatus 200 receives streaming media data provided by an external apparatus or an external server through the communicator 220. The streaming media data generally includes video data, audio data, and subtitle data. Wherein the subtitle data is transmitted in the form of a subtitle file.
As an embodiment, the start part of the streaming media data may carry a download address of the subtitle file. The controller 250 downloads a corresponding subtitle file from an external device or an external server according to the download address of the subtitle file, and stores the downloaded subtitle file in the memory 260 for subsequent subtitle processing.
Meanwhile, the controller 250 feeds the received streaming media data to the video processor 270. The demultiplexing module included in the video processor 270 demultiplexes the streaming media data into a video signal and an audio signal.
The demultiplexed video signal is decoded, scaled, and the like by a video decoding module included in the video processor 270, and then the controller 250 controls to send the processed video signal to the display 275 for display; the demultiplexed audio signal is decompressed, decoded, etc. by the audio processor 280, and then the controller 250 controls the processed audio signal to be sent to the audio output interface 285 (e.g., the speaker 286).
As another embodiment, the subtitle file may be carried directly in the streaming media data (with the streaming media data segment being sent). Controller 250 feeds the received streaming media data to video processor 270. The streaming media data is demultiplexed by a demultiplexing module included in the video processor 270 to obtain a video signal, an audio signal, and a subtitle file. The controller 250 stores the demultiplexed subtitle file in the memory 260 for use in subsequent subtitle processing.
The demultiplexed video signal is decoded, scaled, etc. by the video decoding module included in the video processor 270, and then the controller 250 controls to send the processed video signal to the display 275 for display; the demultiplexed audio signal is decompressed, decoded, etc. by the audio processor 280, and then the controller 250 controls the processed audio signal to be sent to the audio output interface 285 (e.g., the speaker 286).
After acquiring the subtitle file in the above embodiment, the controller 250 performs the following processing on the subtitle file. Referring to fig. 2, a conventional subtitle processing flow is shown.
As shown in fig. 2, the process includes the following steps:
in step S31, the controller 250 parses a piece of subtitle data from the subtitle file.
Each piece of subtitle data at least comprises the display starting time, the display ending time and the subtitle content of the subtitle. Wherein, the display starting time to the display ending time are used for representing the display time interval of the caption. For example, a piece of subtitle data includes a display start time of 5 seconds and a display end time of 7 seconds, which indicates that the piece of subtitle should be displayed at the 5 th to 7 th seconds of video playback.
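The piece of subtitle data just described can be sketched as a small record type. This is a minimal Python sketch; the field names (`start_ms`, `end_ms`, `text`) are illustrative assumptions, not names from the embodiment:

```python
from dataclasses import dataclass

@dataclass
class SubtitleEntry:
    """One parsed piece of subtitle data (field names are illustrative)."""
    start_ms: int   # display start time, in milliseconds
    end_ms: int     # display end time, in milliseconds
    text: str       # subtitle content

# The example from the text: shown from second 5 to second 7 of playback.
entry = SubtitleEntry(start_ms=5000, end_ms=7000, text="example line")
assert entry.end_ms - entry.start_ms == 2000   # on screen for 2 seconds
```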
In step S32, the controller 250 stores the parsed piece of subtitle data in the subtitle data queue.
In step S33, the controller 250 retrieves a piece of subtitle data from the subtitle data queue.
After the controller 250 starts playing the video, it takes out a piece of caption data from the caption data queue corresponding to the video for processing.
In step S34, the controller 250 determines whether the current video playing time reaches the display start time of the subtitle.
The controller 250 acquires the display start time of the subtitle from the subtitle data and acquires the play time of the current video, and compares the two times.
If the playing time of the current video reaches the display starting time of the subtitles, turning to step S35; if the display start time of the subtitle is not reached, the method continues to wait until the video playing time reaches the display start time of the subtitle, and goes to step S35.
In step S35, the controller 250 renders and draws the subtitle.
That is, when it is determined in step S34 that the subtitle display time has been reached, the subtitle is rendered and drawn. After drawing, the subtitle is displayed on the display 275.
The controller 250 processes each subtitle data in the subtitle data queue to complete the display of each subtitle by circularly performing steps S33 through S35.
Thus, the flow shown in fig. 2 is completed.
As can be seen from the flow shown in fig. 2, rendering and drawing of a subtitle begin only when the subtitle display time is reached. Rendering and drawing consume system time and therefore inevitably delay the subtitle display. Especially for complex subtitles, the resulting delay is unacceptable to users.
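The delay inherent in the conventional flow can be illustrated with a toy calculation. `naive_schedule` is a hypothetical helper and the render cost is an assumed figure:

```python
def naive_schedule(display_start_ms, render_cost_ms):
    """In the conventional flow (fig. 2), rendering begins only at the
    display start time, so the subtitle actually appears later by the
    time rendering and drawing take."""
    appear_ms = display_start_ms + render_cost_ms
    return appear_ms - display_start_ms   # the resulting display delay

# A subtitle due at 5 s whose processing takes 120 ms appears 120 ms late.
assert naive_schedule(5000, 120) == 120
```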
In view of the foregoing problems, an embodiment of the present application provides a method for reducing subtitle display errors. Referring to fig. 3, a subtitle processing flow according to an embodiment of the present application is shown.
As shown in fig. 3, the process may include the following steps:
in step S41, a piece of subtitle data is obtained from the subtitle data queue.
The source of the caption data in the caption data queue is referred to the aforementioned steps S31 and S32, which are not described herein again.
After the controller 250 starts playing the video, a piece of caption data is fetched from the caption data queue corresponding to the video. The piece of subtitle data includes a display start time of the piece of subtitle.
In step S42, the controller 250 obtains a preset time offset, for example, 120 msec.
In step S43, the controller 250 calculates the processing start time of the subtitle according to the time offset and the display start time of the subtitle.
Specifically: processing start time = display start time − time offset.
For example, if the display start time is 5 seconds and the time offset is 120 milliseconds, the processing start time of the subtitle is 5 seconds − 120 milliseconds = 4.88 seconds.
It can be seen that the processing start time of the subtitle is earlier than the display start time of the subtitle.
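The fixed-advance calculation of steps S42–S43 can be sketched as follows; `processing_start` is a hypothetical name, and the 120 ms default comes from the example above:

```python
def processing_start(display_start_ms, offset_ms=120):
    """Fixed-advance scheme (steps S42-S43): begin rendering a fixed
    offset before the subtitle's display start time."""
    return display_start_ms - offset_ms

# Display start 5 s, offset 120 ms -> processing starts at 4.88 s.
assert processing_start(5000) == 4880
```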
In step S44, the controller 250 determines whether the current video playing time reaches the processing start time of the subtitle.
The controller 250 acquires the playing time of the current video and compares it with the processing start time calculated by step S43.
If the playing time of the current video reaches the processing starting time of the subtitles, turning to step S45; if the processing start time of the subtitle is not reached, the process continues to wait until the video playing time reaches the processing start time of the subtitle, and the process goes to step S45.
In step S45, the controller 250 renders and draws the subtitle.
That is, when the subtitle processing start time is reached, the subtitle is rendered and drawn. After drawing, the subtitle is displayed on the display 275.
The controller 250 processes each subtitle data in the subtitle data queue to complete the display of each subtitle by circularly performing steps S41 through S45.
The flow shown in fig. 3 is completed.
As can be seen from the flow shown in fig. 3, in the embodiment of the present application, rendering and drawing of a subtitle begin before the subtitle display start time, so that as little of the processing as possible spills past the display start time; subtitle display errors can therefore be reduced.
However, as can also be seen from the above procedure, this method uses a fixed-duration advance, usually obtained by measuring the time the display device takes to process subtitles. If a different display platform is used (different display platforms differ in processing performance), or the same platform must process subtitles in different formats, for example subtitles in the Timed Text Markup Language (TTML) format and subtitles in the Web Video Text Tracks (WebVTT) format, a fixed-duration advance cannot be applied across display platforms and cannot meet the display requirements of multiple subtitle formats. Furthermore, even within the same subtitle format there may be many subtitle styles, such as different font sizes, colors, and display positions (centered, left-justified, right-justified). Subtitles of different styles take different times to process, so a fixed-duration advance cannot guarantee that every subtitle meets the display error requirement, for example keeping the display error within 40 milliseconds.
In view of the problems in the process shown in fig. 3, an embodiment of the present application provides a method for reducing a subtitle display error by dynamically determining a time advance. Referring to fig. 4, a subtitle processing flow according to an embodiment of the present application is shown.
As shown in fig. 4, the process may include the following steps:
in step S51, a piece of subtitle data is obtained from the subtitle data queue.
The origin of the caption data in the caption data queue is referred to the aforementioned steps S31 and S32, which are not described herein again.
After the controller 250 starts playing the video, a piece of caption data is fetched from the caption data queue corresponding to the video. The piece of subtitle data includes a display start time of the subtitle and a style identification of the subtitle.
The style identifier is used to identify a display style of the subtitle, for example, the style identifier is 1, and indicates that the display style of the corresponding subtitle is centered display and the font is bolded.
In step S52, the controller 250 searches for a time list corresponding to the style identifier according to the style identifier included in the subtitle data.
The time list records the processing time length of each processed subtitle in sequence. Here, the processing time length refers to a time length from rendering to completion of drawing of the subtitle.
Because the processing complexity of the subtitles with different display styles is different and the processing time lengths required by the subtitles are different, the embodiment of the application respectively establishes a corresponding time list for each display style so as to record the processing time lengths of the processed subtitles belonging to different styles.
Referring to fig. 5, a time list corresponding to different display styles is shown in the embodiment of the present application. Fig. 5 takes 3 display styles as an example, and 3 time lists are respectively established. The style 1 time list is used for recording the processing duration of the subtitle with the processed display style of style 1; the style 2 time list is used for recording the processing duration of the subtitle with the processed display style of style 2; the style 3 time list is used to record the processing duration of the subtitle whose processed display style is style 3.
Of course, the number of time lists is not limited in the embodiments of the present application, and the number of time lists depends on the number of corresponding subtitle styles of the video.
Further, as can be seen from the time lists shown in fig. 5, the processing duration of subtitles with display style 1 is relatively short (around 60 ms); that of subtitles with display style 2 is relatively long (around 100 ms); and that of subtitles with display style 3 is in between (around 80 ms). That is, subtitles of different display styles have different processing durations.
In this step, the controller 250 searches for a time list corresponding to the style identifier according to the style identifier included in the currently extracted subtitle data. For example, if the style identifier included in the subtitle data is the identifier of style 1, the controller 250 may find the time list corresponding to style 1, i.e., the first time list on the left side, from the time lists shown in fig. 5.
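The per-style time lists and the lookup of step S52 might be sketched as follows; the use of a bounded `deque` per style identifier, the bound of 20 entries, and the sample durations are all assumptions for illustration:

```python
from collections import defaultdict, deque

# One bounded history per style identifier; maxlen keeps only the most
# recently processed subtitles (the bound itself is an assumption).
time_lists = defaultdict(lambda: deque(maxlen=20))

time_lists[1].extend([70, 68, 66, 64, 62])   # style 1: shorter durations (ms)
time_lists[2].extend([104, 100, 98])         # style 2: longer durations (ms)

def lookup(style_id):
    # Step S52: find the time list matching the subtitle's style identifier.
    return time_lists[style_id]

# The last three recorded durations for style 1:
assert list(lookup(1))[-3:] == [66, 64, 62]
```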
In step S53, the controller 250 obtains the processing durations of the previous M subtitles from the found time list, where M ≥ 1.
Still taking the time list of style 1 as an example, the current time list records the processing durations of 5 subtitles. When M is 3, the processing time lengths of the most recently processed 3 subtitles can be obtained from the end of the time list, which are 68ms, 66ms, and 64ms, respectively.
In step S54, the controller 250 estimates the duration required for processing the current subtitle according to the acquired processing duration of the first M subtitles.
Specifically, the controller 250 may use an average value of processing durations of the first M subtitles as an estimated duration required for processing the current subtitle, which is hereinafter referred to as an estimated processing duration.
It can be seen that, in the embodiment of the present application, the estimated processing time of the to-be-processed subtitle is dynamically determined according to the historical data (the processing time of the processed subtitle), and therefore, the processing time of each to-be-processed subtitle can be estimated more accurately. The estimation mode is not influenced by the subtitle format and the performance of the display platform, and the estimation processing time length can be adaptively adjusted according to different subtitle formats and different display platforms.
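Steps S53–S54, averaging the last M recorded durations, can be sketched as follows; `estimate_processing_ms` is a hypothetical name:

```python
def estimate_processing_ms(history, m=3):
    """Estimate the next subtitle's processing duration as the average of
    the last m recorded durations (steps S53-S54)."""
    recent = list(history)[-m:]
    return sum(recent) / len(recent)

# With the last three durations 68, 66 and 64 ms, the estimate is 66 ms,
# matching the worked example in the text.
assert estimate_processing_ms([70, 68, 66, 64], m=3) == 66.0
```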
In step S55, the controller 250 calculates the processing start time of the current subtitle according to the display start time and the estimated processing duration of the current subtitle.
Specifically: processing start time = display start time − estimated processing duration.
That is, the estimated processing time period is used as the time advance.
In step S56, the controller 250 determines whether the current video playing time reaches the processing start time of the subtitle.
The controller 250 acquires the playing time of the current video and compares it with the processing start time of the subtitle calculated through step S55.
If the playing time of the current video reaches the processing starting time of the subtitles, turning to step S57; if the processing start time of the subtitle is not reached, the process continues to wait until the video playing time reaches the processing start time of the subtitle, and the process goes to step S57.
In step S57, the controller 250 renders and draws the subtitle.
In the embodiment of the present application, the controller 250 records the rendering start time when it begins rendering the subtitle, and records the drawing end time when drawing finishes. After drawing completes, the subtitle is displayed on the display 275.
Meanwhile, the controller 250 may calculate the actual processing duration of the subtitle from the drawing end time and the rendering start time. That is: actual processing duration = drawing end time − rendering start time.
Here, the drawing end time and the rendering start time may be UTC time, or may be the system time of the display device; the present application does not limit this.
After calculating the actual processing time length of the subtitle, the controller 250 may further calculate a difference between the actual processing time length and the estimated processing time length.
If the absolute value of the difference does not exceed a preset difference threshold, the actual processing duration does not differ much from the duration estimated from historical data and can serve as a basis for estimating the processing duration of subsequent subtitles; the actual processing duration of the subtitle is therefore added to the corresponding time list.
If the absolute value of the difference exceeds the preset difference threshold, the actual processing duration may have been affected by external factors and differs greatly from the duration estimated from historical data; the actual processing duration of the current subtitle is therefore not added to the time list, so as not to affect the display of subsequent subtitles.
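The acceptance test described in these two paragraphs can be sketched as follows; the function name and the 30 ms threshold are illustrative assumptions (the embodiment does not specify a threshold value):

```python
def update_time_list(history, actual_ms, estimated_ms, threshold_ms=30):
    """Append the actual duration to the time list only if it is close to
    the estimate, so a one-off stall does not skew later estimates."""
    if abs(actual_ms - estimated_ms) <= threshold_ms:
        history.append(actual_ms)
        return True
    return False

hist = [66, 64, 62]
assert update_time_list(hist, 65, 64.0) is True     # close to estimate: kept
assert update_time_list(hist, 300, 64.0) is False   # outlier: discarded
assert hist == [66, 64, 62, 65]
```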
The controller 250 processes each subtitle data in the subtitle data queue to complete the display of each subtitle by circularly performing steps S51 through S57.
The flow shown in fig. 4 is completed.
As can be seen from the flow shown in fig. 4, in the embodiment of the present application, the display device may dynamically estimate the time advance required by each subtitle according to the historical data, thereby ensuring that each subtitle can be displayed in time when the display time is reached, and reducing the display error. The embodiment of the application is not influenced by the subtitle format and the performance of the display platform, and can be adaptive to various subtitle formats and display platforms.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (12)

1. A display device, comprising:
a display;
a controller for performing:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
acquiring processing durations of the first M subtitles from a time list for recording the processing durations of the processed subtitles, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating the time length required for processing the caption according to the processing time length of the first M captions;
subtracting the calculated time length required for processing the caption from the display starting time of the caption to obtain the processing starting time of the caption;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
2. The display device according to claim 1, wherein the calculating a duration required for processing the one subtitle according to the processing duration of the first M subtitles comprises:
and taking the average value of the processing time lengths of the first M subtitles as the time length required by processing the subtitle.
3. A display device, comprising:
a display;
a controller for performing:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
acquiring processing durations of the first M subtitles from a time list for recording the processing durations of the processed subtitles, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating the time length required for processing the caption according to the processing time length of the first M captions;
subtracting the calculated time length required for processing the caption from the display starting time of the caption to obtain the processing starting time of the caption;
when the playing time of the video reaches the processing starting time of the subtitle, rendering the subtitle and recording the rendering starting time of the subtitle;
drawing the rendered caption to display the caption on a display, and recording the drawing end time of the caption;
subtracting the rendering starting time from the rendering ending time to obtain the actual processing duration of the caption;
and if the actual processing duration of the subtitle meets the condition of adding the subtitle into the time list, adding the actual processing duration of the subtitle into the time list.
4. The display device according to claim 3, wherein the adding the actual processing duration of the one subtitle to the time list if the actual processing duration of the one subtitle satisfies a condition for adding to the time list comprises:
and if the absolute value of the difference between the actual processing duration of the one subtitle and the calculated duration required for processing the one subtitle does not exceed a preset difference threshold, adding the actual processing duration of the one subtitle to the time list.
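The update rule of claims 3–4 can be sketched as follows; the threshold value, function name, and millisecond units are illustrative assumptions, since the claims only require "a preset difference threshold":

```python
DIFF_THRESHOLD_MS = 20  # hypothetical preset difference threshold

def maybe_record(actual_ms, predicted_ms, time_list):
    """Append the actual processing duration to the time list only if it
    does not deviate too far from the predicted duration (claims 3-4)."""
    if abs(actual_ms - predicted_ms) <= DIFF_THRESHOLD_MS:
        time_list.append(actual_ms)
        return True
    return False  # outlier (e.g. a momentary system stall) is discarded

durations = []
maybe_record(55, 50, durations)   # |55 - 50| <= 20 -> recorded
maybe_record(500, 50, durations)  # transient spike  -> discarded
print(durations)  # [55]
```

Filtering out outliers keeps one anomalously slow subtitle from skewing the average used for all subsequent predictions.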
5. The display device according to claim 3, wherein the rendering start time and the drawing end time are both UTC time or the system time of the display device.
6. A display device, comprising:
a display;
a controller for performing:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises display starting time of the subtitle and a style identification of the subtitle;
searching a time list corresponding to the style identification according to the style identification of the subtitle;
acquiring the processing durations of the first M subtitles from the found time list, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating a duration required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the duration required for processing the subtitle from the display starting time of the subtitle to obtain the processing starting time of the subtitle;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
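Claim 6 maintains a separate time list per subtitle style. A minimal sketch, with hypothetical style identifiers and a fixed window size of 5 chosen purely for illustration:

```python
from collections import defaultdict, deque

# one time list per subtitle style: an elaborately styled subtitle tends
# to take longer to render/draw than a plain one
style_time_lists = defaultdict(lambda: deque(maxlen=5))

def predict_for_style(style_id):
    """Average processing duration (ms) recorded for this style; 0 if none."""
    durations = style_time_lists[style_id]
    return sum(durations) / len(durations) if durations else 0.0

style_time_lists["bold_large"].extend([80, 90, 100])
style_time_lists["plain"].extend([20, 25, 30])

print(predict_for_style("bold_large"))  # 90.0
print(predict_for_style("plain"))       # 25.0
```

Keying the history on the style identification keeps slow, heavily styled subtitles from inflating the prediction for fast, plain ones, and vice versa.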
7. A method for displaying subtitles, the method comprising:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
acquiring processing durations of the first M subtitles from a time list for recording the processing durations of the processed subtitles, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating a duration required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the calculated duration required for processing the subtitle from the display starting time of the subtitle to obtain the processing starting time of the subtitle;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
8. The method of claim 7, wherein the calculating the duration required for processing the one subtitle according to the processing duration of the first M subtitles comprises:
taking the average value of the processing durations of the first M subtitles as the duration required for processing the one subtitle.
9. A method for displaying subtitles, the method comprising:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises the display starting time of the subtitle;
acquiring processing durations of the first M subtitles from a time list for recording the processing durations of the processed subtitles, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating a duration required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the calculated duration required for processing the subtitle from the display starting time of the subtitle to obtain the processing starting time of the subtitle;
when the playing time of the video reaches the processing starting time of the subtitle, rendering the subtitle and recording the rendering start time of the subtitle;
drawing the rendered subtitle to display the subtitle on a display, and recording the drawing end time of the subtitle;
subtracting the rendering start time from the drawing end time to obtain the actual processing duration of the subtitle;
and if the actual processing duration of the subtitle meets the condition of adding the subtitle into the time list, adding the actual processing duration of the subtitle into the time list.
10. The method of claim 9, wherein adding the actual processing duration of the one subtitle to the time list if the actual processing duration of the one subtitle satisfies a condition for adding to the time list comprises:
and if the absolute value of the difference between the actual processing duration of the one subtitle and the calculated duration required for processing the one subtitle does not exceed a preset difference threshold, adding the actual processing duration of the one subtitle to the time list.
11. The method of claim 9, wherein the rendering start time and the drawing end time are both UTC time or the system time of a display device.
12. A method for displaying subtitles, the method comprising:
in the process of playing a video, acquiring data of a subtitle from a queue for storing subtitle data corresponding to the video, wherein the data of the subtitle comprises display starting time of the subtitle and a style identification of the subtitle;
searching for a time list corresponding to the style identification according to the style identification of the subtitle;
acquiring the processing durations of the first M subtitles from the found time list, wherein the processing durations comprise durations for rendering and drawing the subtitles;
calculating a duration required for processing the subtitle according to the processing durations of the first M subtitles;
subtracting the duration required for processing the subtitle from the display starting time of the subtitle to obtain the processing starting time of the subtitle;
and when the playing time of the video reaches the processing starting time of the subtitle, rendering and drawing the subtitle to display the subtitle on a display.
CN202010362553.7A 2020-04-30 2020-04-30 Subtitle display method and display equipment Active CN111526414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010362553.7A CN111526414B (en) 2020-04-30 2020-04-30 Subtitle display method and display equipment


Publications (2)

Publication Number Publication Date
CN111526414A CN111526414A (en) 2020-08-11
CN111526414B true CN111526414B (en) 2022-06-07

Family

ID=71908571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010362553.7A Active CN111526414B (en) 2020-04-30 2020-04-30 Subtitle display method and display equipment

Country Status (1)

Country Link
CN (1) CN111526414B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640874A (en) * 2022-03-09 2022-06-17 Hunan Goke Microelectronics Co., Ltd. Subtitle synchronization method and device, set top box and computer readable storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1787630A (en) * 2004-12-10 2006-06-14 Sunplus Technology Co., Ltd. Method for controlling audio-visual playback using subtitle-related time, and audio-visual playback apparatus thereof
CN101093703A (en) * 2003-10-04 2007-12-26 Samsung Electronics Co., Ltd. Information storage medium storing text-based subtitle, and apparatus and method for processing text-based subtitle
CN104795083A (en) * 2015-04-30 2015-07-22 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20050196146A1 (en) * 2004-02-10 2005-09-08 Yoo Jea Y. Method for reproducing text subtitle and text subtitle decoding system


Also Published As

Publication number Publication date
CN111526414A (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN111200746B (en) Method for awakening display equipment in standby state and display equipment
CN111314789B (en) Display device and channel positioning method
CN111601134B (en) Time display method in display equipment and display equipment
CN111601135A (en) Method for synchronously injecting audio and video elementary streams and display equipment
CN111726673B (en) Channel switching method and display device
CN111601142B (en) Subtitle display method and display equipment
CN111629249B (en) Method for playing startup picture and display device
CN111639281A (en) Page resource display method and display equipment
CN114073098A (en) Streaming media synchronization method and display device
CN111277891B (en) Program recording prompting method and display equipment
CN109922364B (en) Display device
CN111757181B (en) Method for reducing network media definition jitter and display device
CN111526401B (en) Video playing control method and display equipment
CN111526414B (en) Subtitle display method and display equipment
CN113115092A (en) Display device and detail page display method
CN112004127B (en) Signal state display method and display equipment
CN112040285B (en) Interface display method and display equipment
CN111405329B (en) Display device and control method for EPG user interface display
CN113115093B (en) Display device and detail page display method
CN113329246A (en) Display device and shutdown method
CN113382291A (en) Display device and streaming media playing method
CN111757160A (en) Method for starting sports mode and display equipment
CN111601401B (en) Network connection control method and display device
CN111901686B (en) Method for keeping normal display of user interface stack and display equipment
CN113094140B (en) System update display method and display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221026

Address after: 83 Intekte Street, Devon, Netherlands

Patentee after: VIDAA (Netherlands) International Holdings Ltd.

Address before: 266061 room 131, 248 Hong Kong East Road, Laoshan District, Qingdao City, Shandong Province

Patentee before: QINGDAO HISENSE MEDIA NETWORKS Ltd.