WO2024051030A1 - Display device and subtitle display method - Google Patents

Display device and subtitle display method

Info

Publication number
WO2024051030A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
content
subtitle
characteristic
reached
Prior art date
Application number
PCT/CN2022/140799
Other languages
English (en)
French (fr)
Inventor
金程贵
陆华色
Original Assignee
海信电子科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 海信电子科技(深圳)有限公司
Publication of WO2024051030A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H04N21/488 Data services, e.g. news ticker

Definitions

  • the present application relates to the technical field of display devices, and in particular, to a display device and a subtitle display method.
  • TTML (Timed Text Markup Language) has good compatibility, simple configuration and convenient synchronization; for these reasons, the ATSC 3.0 broadcast system has introduced TTML as its subtitle standard.
  • Some embodiments of the present application provide a display device, including: a display; and a controller configured to: obtain the first content, the first display characteristic, the first display start time and the first display end time of the first subtitle fragment; when the first display start time is reached, control the display to display the first content according to the first display characteristic; obtain the second content, the second display characteristic and the second display start time of the second subtitle fragment; calculate the difference between the second display start time and the first display end time; and, if the difference is less than a preset value, when the first display end time is reached, control the display to display the second content according to the second display characteristic.
  • In some embodiments, when performing the step of controlling the display to display the second content according to the second display characteristic when the first display end time is reached, the controller is further configured to: determine whether the first content is the same as the second content; and, if the first content is the same as the second content, when the first display end time is reached, control the display to display the second content according to the second display characteristic.
  • Figure 1 shows an operation scenario between a display device and a control device according to some embodiments
  • FIG. 2 shows a hardware configuration block diagram of the control device 100 according to some embodiments
  • Figure 3 shows a hardware configuration block diagram of a display device 200 according to some embodiments
  • Figure 4 shows a software configuration diagram in the display device 200 according to some embodiments
  • Figure 5 shows a flow chart inside a TTML module according to some embodiments
  • Figure 6 shows a schematic diagram of a first subtitle display effect according to some embodiments
  • Figure 7 shows a schematic diagram of a second subtitle display effect according to some embodiments.
  • Figure 8 shows a system architecture diagram of a display device 200 according to some embodiments.
  • Figure 9 shows an interaction diagram related to a TTML module according to some embodiments.
  • Figure 10 shows a schematic diagram of a live broadcast application setting interface according to some embodiments
  • Figure 11 shows a schematic diagram of a live broadcast application subtitle setting interface according to some embodiments.
  • Figure 12 shows a flow diagram inside another TTML module according to some embodiments.
  • Figure 13 shows a flow chart inside yet another TTML module according to some embodiments.
  • Figure 14 shows a flow chart of steps performed by a subtitle display control module according to some embodiments
  • Figure 15 shows a schematic diagram of a subtitle display personalization setting user interface according to some embodiments
  • Figure 16 shows a schematic diagram of a third subtitle display effect according to some embodiments.
  • Figure 17 shows a schematic diagram of a fourth subtitle display effect according to some embodiments.
  • Figure 18 shows a schematic diagram of a fifth subtitle display effect according to some embodiments.
  • Figure 19 shows a schematic diagram of a sixth subtitle display effect according to some embodiments.
  • Figure 20 shows a schematic diagram of a seventh subtitle display effect according to some embodiments.
  • the display device provided by the embodiment of the present application can have a variety of implementation forms, for example, it can be a TV, a smart TV, a laser projection device, a monitor, an electronic bulletin board, an electronic table, etc.
  • Figures 1 and 2 illustrate a specific implementation of the display device of the present application.
  • FIG. 1 is a schematic diagram of an operation scenario between a display device and a control device according to an embodiment. As shown in FIG. 1 , the user can operate the display device 200 through the smart device 300 or the control device 100 .
  • control device 100 may be a remote controller.
  • the communication between the remote controller and the display device includes infrared protocol communication or Bluetooth protocol communication, and other short-distance communication methods to control the display device 200 through wireless or wired methods.
  • the user can control the display device 200 by inputting user instructions through buttons on the remote control, voice input, control panel input, etc.
  • a smart device 300 (such as a mobile terminal, a tablet, a computer, a laptop, etc.) can also be used to control the display device 200 .
  • the display device 200 is controlled using an application running on the smart device.
  • the display device may not use the above-mentioned smart device or control device to receive instructions, but may receive user control through touch or gestures.
  • the display device 200 can also be controlled in a manner other than the control device 100 and the smart device 300 .
  • the display device 200 can directly receive the user's voice command control through a module configured inside the display device 200 to obtain voice commands.
  • the user's voice command control can also be received through a voice control device provided outside the display device 200 .
  • display device 200 also communicates data with server 400.
  • the display device 200 may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN), and other networks.
  • Server 400 can provide various content and interactions to display device 200.
  • the server 400 may be a cluster or multiple clusters, and may include one or more types of servers.
  • FIG. 2 schematically shows a configuration block diagram of the control device 100 according to an exemplary embodiment.
  • the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply.
  • the control device 100 can receive input operation instructions from the user, and convert the operation instructions into instructions that the display device 200 can recognize and respond to, thereby mediating the interaction between the user and the display device 200 .
  • the display device 200 includes at least one of a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface.
  • the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, first to nth interfaces for input/output.
  • the display 260 includes a display screen component for presenting images and a driving component for driving image display, and is used for receiving image signals output from the controller and displaying video content, image content, menu control interface components and a user control UI interface.
  • the display 260 can be a liquid crystal display, an OLED display, a projection display, or a projection device and a projection screen.
  • the display 260 also includes a touch screen.
  • the touch screen is used to receive input control instructions from actions such as sliding or clicking of the user's fingers on the touch screen.
  • the communicator 220 is a component for communicating with external devices or servers according to various communication protocol types.
  • the communicator may include at least one of a Wifi module, a Bluetooth module, a wired Ethernet module, other network communication protocol chips or near field communication protocol chips, and an infrared receiver.
  • the display device 200 can establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220 .
  • the user interface can be used to receive control signals from the control device 100 (such as an infrared remote control, etc.).
  • the detector 230 is used to collect signals from the external environment or interactions with the outside.
  • the detector 230 includes a light receiver, a sensor used to collect ambient light intensity; or the detector 230 includes an image collector, such as a camera, which can be used to collect external environment scenes, user attributes or user interaction gestures, or , the detector 230 includes a sound collector, such as a microphone, etc., for receiving external sounds.
  • the external device interface 240 may include, but is not limited to, any one or more of the following interfaces: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface (component), a composite video input interface (CVBS), a USB input interface (USB), an RGB port, etc. It can also be a composite input/output interface formed by the above-mentioned multiple interfaces.
  • the tuner-demodulator 210 receives broadcast television signals through wired or wireless reception methods, and demodulates audio-video signals, as well as EPG data signals, from multiple wireless or wired broadcast television signals.
  • the controller 250 and the tuner-demodulator 210 may be located in different separate devices, that is, the tuner-demodulator 210 may also be located in a device external to the main device where the controller 250 is located, such as an external set-top box.
  • the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in the memory.
  • the controller 250 controls the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on display 260, controller 250 may perform operations related to the object selected by the user command.
  • the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), a random access memory (RAM), a read-only memory (ROM), first to n-th interfaces for input/output, a communication bus (Bus), and the like.
  • the user may input a user command into a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the graphical user interface (GUI).
  • the user can input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
  • the display device system is divided into three layers, from top to bottom: application layer, middleware layer and hardware layer.
  • the application layer mainly includes commonly used applications on the TV, as well as the application framework (Application Framework).
  • the commonly used applications are mainly applications developed based on the browser, such as: HTML5 APPs; and native applications (Native APPs);
  • Application Framework is a complete program model that has all the basic functions required by standard application software, such as: file access, data exchange..., as well as the usage interfaces of these functions (toolbar, status bar, menu , dialog box).
  • Native APPs can support online or offline, message push or local resource access.
  • the middleware layer includes middleware such as services of various television protocols, services based on multimedia protocols (for example, MPEG, etc.), and system components.
  • Middleware can use the basic services (functions) provided by system software to connect various parts of the application system or different applications on the network, and can achieve the purpose of resource sharing and function sharing.
  • the hardware layer mainly includes the HAL interface, hardware and drivers.
  • the HAL interface is a unified interface for all TV chips to connect, and the specific logic is implemented by each chip.
  • Drivers mainly include: audio driver, display driver, Bluetooth driver, camera driver, WIFI driver, USB driver, HDMI driver, sensor driver (such as fingerprint sensor, temperature sensor, pressure sensor, etc.), and power driver, etc.
  • TTML is an XML-based timed text markup language. It is intended for use in caption and subtitle delivery applications worldwide, simplifying interoperability and maintaining consistency and compatibility with other subtitle file formats. Due to its good compatibility, simple configuration and convenient synchronization, TTML has been introduced into the ATSC 3.0 broadcast system as its subtitle standard.
  • the flow chart inside the TTML module is as shown in Figure 5.
  • the TTML module includes CC_Interface (Closed Caption_Interface, subtitle interface), CC_Parser (subtitle syntax analysis program), CC_Process (subtitle process) and CC_Display_Control (subtitle display control module).
  • After the subtitle syntax analysis program parses the first subtitle fragment data, it obtains relevant information about the first subtitle fragment.
  • the relevant information includes the subtitle content, display start time, display end time, display characteristics, etc. of the first subtitle fragment.
  • the display characteristics include subtitle font type, font size, font transparency, font background, subtitle position, etc. Display characteristics can be set by default or can be personalized by the user.
  • the subtitle syntax analysis program preprocesses the parsed first subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed first subtitle fragment data to the subtitle display control module.
  • the purpose of preprocessing is to organize the relevant information of the first subtitle fragment into a unified data structure and put it into a queue to facilitate the transmission and acquisition of subtitle data.
  • After obtaining the preprocessed first subtitle fragment data, the subtitle display control module performs operations related to the display of the first subtitle fragment content.
  • the data management module will send the obtained second subtitle fragment data to the subtitle syntax analysis program
  • the subtitle syntax analysis program preprocesses the parsed second subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed second subtitle fragment data to the subtitle display control module.
  • When the display start time of the second subtitle segment is reached, the subtitle display control module performs operations related to the content display of the second subtitle segment, and when the display end time of the second subtitle segment is reached, it ends displaying the content of the second subtitle segment.
  • Because TTML subtitle fragments are small, many fragments are transmitted, the fragment interval is too short and the hardware graphics performance cannot keep up, subtitle flickering, insufficiently smooth content and other poor experiences occur during the rapid transmission and display of multiple fragments. That is, due to hardware performance limitations, there must be at least 40 ms between displaying blank content and displaying the second subtitle segment content again. If the difference between the display end time of the first subtitle fragment and the display start time of the second subtitle fragment is greater than or equal to 40 ms, the display of the first subtitle fragment, the blank, and the second subtitle fragment can be completed normally within the specified time.
  • the content of the first subtitle segment is "Beijing" and the display time period is 1000ms-3000ms.
  • the content of the second subtitle segment is "Shanghai" and the display time period is 3100ms-5100ms, that is, the blank display time is 100ms, so the effect of displaying "Beijing" → displaying blank → displaying "Shanghai" can be completed according to the original time.
  • If the difference between the display end time of the first subtitle fragment and the display start time of the second subtitle fragment is less than 40ms, then as shown in Figure 7, the content of the first subtitle fragment is "Beijing" with a display time period of 1000ms-3000ms, and the content of the second subtitle fragment is "Shanghai" with a display time period of 3001ms-5000ms, that is, the blank display time is 1ms. The original intention is to achieve the effect shown as Effect 1 in Figure 7, but limited by hardware performance, blank content cannot be displayed for only 1ms, and only the effect shown as Effect 2 in Figure 7 can be achieved. That is, the second subtitle segment should be displayed at 3001ms but can only be displayed at 3040ms, and displaying blank content between 3001ms and 3040ms causes subtitle pauses and an unsmooth display experience for the user.
  • If the first subtitle fragment and the second subtitle fragment are contextually related, for example they form one sentence, the display effect is that a small blank suddenly appears between the first half of the sentence and the second half of the sentence, that is, the sentence pauses.
  • If the first subtitle fragment and the second subtitle fragment have the same content, the display effect is that the subtitle flashes once on the screen; if several consecutive fragments all carry the same content, the same content will flash repeatedly on the screen.
  • TTML subtitles will have some problems affecting the subtitle display effect due to hardware performance limitations and unreasonable subtitle sentence segmentation design, which will lead to poor user experience.
  • embodiments of the present application provide a display device 200.
  • the structure and functions of each part of the display device 200 can be referred to the above embodiments.
  • this embodiment further improves some functions of the display device 200.
  • FIG. 8 is a system architecture diagram of the display device 200.
  • the system architecture of the display device 200 can also be expressed as including an application (APP) layer, various services (Service), and a hardware (HAL) layer.
  • the application layer includes multiple applications, such as live broadcast applications (Live Tv) and TV input channels (Tv Input).
  • TTML subtitles are displayed on the Live Tv screen.
  • Applications in the application layer are used to interact with the TTML module to complete TTML display, stop display, setting and switching of display characteristics, etc.
  • the middleware layer in Figure 4 can include various services that can be obtained and used by applications. Services can be viewed as the logical layer of TTML.
  • the DFB (DIRECT_FB, drawing engine module) of the same layer serves as a drawing service and is used to draw TTML subtitles.
  • Modules used for TTML services include DM (DataManager, data management module), GFXHAL (GRAPHICS Hardware abstract layer, graphics hardware abstraction module) and TTML modules.
  • the data management module is used for TTML data aggregation and distribution of TTML original data.
  • the graphics hardware abstraction module serves as the interface for TTML subtitle display features and is used to manage the drawing of various TTML lines, text, and rectangles.
  • the TTML module serves as the implementation of TTML subtitle data parsing, data analysis, prediction, data processing and display features.
  • the hardware layer is used to provide the source of TTML display data.
  • the TTML subtitle fragment data comes from modules in the hardware layer, such as the modulation, demodulation, encoding and decoding modules mentioned above for processing broadcast data, which can be implemented on the chip of the display device (for example, a television).
  • the hardware layer includes Demux (demultiplexing module).
  • the demultiplexing module is a hardware module of the main chip. It is used to filter various types of data; it can filter out TTML subtitle fragment data from the code stream and send the TTML subtitle fragment data to the data management module.
  • FIG. 9 is an interaction diagram related to the TTML module.
  • the TTML module includes subtitle interface, subtitle syntax analysis program, subtitle process and subtitle display control module.
  • the system interaction logic is as follows: LiveTv serves as the system's command setting source to control the display and display attributes of TTML subtitles.
  • the data management module serves as the source of TTML subtitle data and is used to input subtitle fragment data to the TTML module.
  • the Player, as the source of the TTML synchronization clock, is responsible for providing the TTML synchronization time.
  • the graphics hardware abstraction module interfaces with the drawing service to realize TTML content drawing and display, which is performed by the drawing engine module. The performance of this part is greatly affected by the hardware.
  • the user inputs an instruction to open a live broadcast application, where the live broadcast application is used to play live television programs, especially ATSC3.0 programs.
  • the live broadcast application is opened, the TTML subtitle function is turned on by default.
  • the display device 200 will receive audio and video data and subtitle fragment data, play the audio and video data after processing, and display the content of the subtitle fragments on the corresponding video data.
  • the TTML subtitle function is not turned on by default, but the TTML subtitle function switch is set so that the user can choose to turn on or off the TTML subtitle function.
  • the user opens the live broadcast application and presses the menu key of the control device 100 to display the user interface as shown in FIG. 10 , which includes a subtitle control 101 .
  • a user interface as shown in Figure 11 is displayed.
  • the user interface in Figure 11 includes a subtitle function control 111.
  • the user can control the opening or closing of the TTML subtitle function by selecting the subtitle function control 111.
  • the subtitle interface starts the display of TTML subtitles.
  • the demultiplexing module will send the first subtitle fragment data filtered from the code stream to the data management module, and the data management module will send the first subtitle fragment data to the subtitle syntax analysis program;
  • After parsing the first subtitle fragment data, the subtitle syntax analysis program obtains relevant information of the first subtitle fragment.
  • the subtitle syntax analysis program preprocesses the parsed first subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed first subtitle fragment data to the subtitle display control module.
  • the demultiplexing module will send the second subtitle fragment data filtered out from the code stream to the data management module, and the data management module will send the second subtitle fragment data to Subtitle parser;
  • the subtitle syntax analysis program preprocesses the parsed second subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed second subtitle fragment data to the subtitle display control module.
  • the demultiplexing module will send the first subtitle fragment data and the second subtitle fragment data filtered out from the code stream to the data management module.
  • the data management module will send the first subtitle fragment data and the second subtitle fragment data to the subtitle parser;
  • the subtitle syntax analysis program preprocesses the parsed first subtitle fragment data and second subtitle fragment data and sends them to the subtitle process;
  • the subtitle process sends the preprocessed first subtitle fragment data and second subtitle fragment data to the subtitle display control module.
  • the controller includes a subtitle display control module, as shown in Figure 14.
  • the subtitle display control module performs the following steps:
  • Step S1401 Obtain the first content, first display characteristic, first display start time and first display end time of the first subtitle segment;
  • the subtitle display control module may obtain the first content of the first subtitle fragment, the first display characteristic, the first display start time and the first display end time from the pre-processed first subtitle fragment data sent by the subtitle process.
  • the first display characteristics include the display position, font type, font size, transparency, subtitle background, etc. of the first content.
  • the display position of the first content can be obtained by parsing the first subtitle fragment data
  • the font type, font size, transparency and subtitle background of the first content can be obtained from the Live Tv user setting file. If the display characteristics obtained by parsing the first subtitle fragment data are different from the display characteristics set by the user, one of them may prevail. For example, if the font type obtained by parsing the first subtitle fragment data is different from the font type set by the user, the font type obtained by parsing the first subtitle fragment data may prevail, or the font type set by the user may prevail.
  • the user interface in Figure 11 also includes a subtitle display personalized control 112.
  • the user selects the subtitle display personalized control 112 and presses the confirmation key of the control device 100 to display the user interface as shown in Figure 15.
  • the user interface of Figure 15 includes a font type control 151, a font size control 152, a transparency control 153, and a subtitle background color control 154. Users can modify the display characteristics of subtitles by selecting the corresponding controls.
  • Step S1402 When the first display startup time is reached, control the display to display the first content according to the first display characteristic
  • the subtitle display control module sends the first content and the first display characteristic of the first subtitle fragment to the graphics hardware abstraction module, the graphics hardware abstraction module sends the first content and the first display characteristic of the first subtitle fragment to the drawing engine module, and the drawing engine module draws the first content based on the first display characteristic.
  • the display 260 is controlled to display the first content drawn by the drawing engine module according to the first display characteristic at a position corresponding to the current playback interface.
  • the player will periodically synchronize time with the subtitle display control module. That is, at every preset time interval, the player sends the current time to the subtitle display control module, and the subtitle display control module calculates the difference between the current time recorded by itself and the current time sent by the player. If the difference is greater than the preset synchronization value, the module adjusts its own recorded time according to the time sent by the player to ensure time synchronization between the player and the subtitle display control module, so that the audio and video data played by the player corresponds to the subtitles. If the difference is less than or equal to the preset synchronization value, no operation related to time synchronization is performed.
  • Step S1403 Obtain the second content, the second display characteristic, the second display start time and the second display end time of the second subtitle segment;
  • the subtitle display control module may obtain the second content of the second subtitle fragment, the second display characteristic, the second display start time and the second display end time from the preprocessed second subtitle fragment data.
  • the second display characteristics include the display position, font type, font size, transparency, subtitle background, etc. of the second content.
  • Step S1404 Calculate the difference between the second display start time and the first display end time
  • Step S1405 Determine whether the difference is less than a preset value
  • the preset value is the time required by the hardware for displaying blank content and then displaying the next subtitle.
  • step S1406 when the display end time of the first subtitle segment is reached, clear the first content
  • the subtitle display control module sends an instruction to clear the first content to the graphics engine module through the graphics hardware abstraction module, and the graphics engine module clears the first content, that is, displays blank content.
  • Step S1407 Control the display to display the second content according to the second display characteristic.
  • the subtitle display control module sends the second content and the second display characteristics of the second subtitle fragment to the graphics hardware abstraction module.
  • the graphics hardware abstraction module sends the second content and the second display characteristic of the second subtitle fragment to the graphics engine module, and drawn by the drawing engine module.
  • the display 260 is controlled to display the second content drawn by the drawing engine module according to the second display characteristic at a position corresponding to the current playback interface.
  • the display is controlled to display the second content according to the second display characteristic.
  • the subtitle display control module sends the second content and the second display characteristics of the second subtitle fragment to the graphics hardware abstraction module.
  • the graphics hardware abstraction module sends the second content and the second display characteristic of the second subtitle fragment to the graphics engine module, and drawn by the drawing engine module.
  • the display 260 is controlled to display the second content drawn by the drawing engine module according to the second display characteristic at a position corresponding to the current playback interface.
  • the subtitle display effect is: display "Beijing" → display blank → display "Shanghai"
  • the display time period is 1000ms-3000ms
  • the content of the second subtitle segment is "Shanghai”
  • the subtitle display effect is: display "Beijing" → display "Shanghai".
  • step S1408 determine whether the first content and the second content are the same;
  • the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
  • the subtitle display control module sends an instruction to clear the first content to the drawing engine module through the graphics hardware abstraction module, and the drawing engine module clears the first content, that is, displays blank content;
  • the display 260 is controlled to display the second content drawn by the drawing engine module according to the second display characteristic at a position corresponding to the current playback interface.
  • the content of the first subtitle segment is “Beijing” and the display time period is 1000ms-3000ms.
  • the content of the second subtitle segment is "Beijing” and the display time period is 3001ms-5000ms.
  • the subtitle display effect is: display "Beijing" → display "Beijing".
  • the content of the first subtitle segment is "Beijing” and the display time period is 1000ms-3000ms.
  • the subtitle display effect is: display "Beijing" → display blank → display "Shanghai".
  • the embodiment of this application uses the hardware-performance-limited time as the dividing point. When it is determined that the difference between adjacent subtitle fragments is less than the preset value, a judgment of whether the displayed content is the same is also added. Only when the content of adjacent subtitle fragments is the same is there no need to insert an extra blank display; when the content of adjacent subtitle fragments is different, the effect of displaying blank is still retained, to ensure that effects such as TTML sentence-segmentation display are not destroyed.
  • step S1409 determine whether the first display characteristic and the second display characteristic are the same;
  • the display characteristics include display position, font type, font size, transparency, subtitle background, etc.
  • the first display characteristic and the second display characteristic being the same may refer to the situation where the display position, font type, font size, transparency, subtitle background and other information are the same.
  • alternatively, the first display characteristic and the second display characteristic may be considered the same when at least one designated item of information, such as the display position, font type, font size, transparency or subtitle background, is the same. For example, as long as the display positions in the first display characteristic and the second display characteristic are the same, it is tentatively determined that the first display characteristic and the second display characteristic are the same.
  • step S1410 is performed: when the first display end time is reached, control the display to display the second content according to the second display characteristic.
  • step S1406 is executed, that is, when the display end time of the first subtitle segment is reached, the subtitle display control module sends an instruction to clear the first content to the drawing engine module through the graphics hardware abstraction module, and the drawing engine module clears the first content, that is, displays blank content;
  • the display 260 is controlled to display the second content drawn by the drawing engine module according to the second display characteristic at a position corresponding to the current playback interface.
  • the content of the first subtitle segment is "Beijing” and the display time period is 1000ms-3000ms.
  • the content of the second subtitle segment is "Beijing” and the display time period is 3001ms-5000ms.
  • the subtitle display effect is: display "Beijing" → display "Beijing".
  • the content of the first subtitle segment is Beijing, and the display time period is 1000ms-3000ms.
  • the content of the second subtitle segment is Beijing, and the display time period is 3001ms-5000ms.
  • the subtitle display effect is: display "Beijing" → display blank → display "Beijing".
  • Some embodiments of this application, when the content of adjacent subtitle fragments is the same, also add a judgment of whether the display characteristics are the same. Only when the display characteristics of adjacent subtitle fragments are the same is there no need to insert an extra blank display; when the display characteristics of adjacent subtitle fragments are different, the effect of displaying blank is still retained, to ensure that effects such as TTML panning display are not destroyed.
  • step S1411 determine whether the first content and the second content are associated;
  • the step of determining whether the first content and the second content are associated includes:
  • if the first content ends with a sentence-final punctuation mark, it is determined that the first content and the second content are not related;
  • if the first content does not end with a sentence-final punctuation mark, it is determined that the first content and the second content have an association relationship.
  • punctuation marks at the end of sentences include periods, question marks, exclamation points, and ellipses, etc.
  • Mid-sentence punctuation marks include commas, enumeration commas, dashes, semicolons, colons, double quotation marks, single quotation marks, etc.
  • the first content obtained is "The weather is so nice today!", "Where are you going?", "I have completed my task today." or "There are pandas, tigers, lions in the zoo…".
  • when it is recognized that the first content ends with ".", "?", "!" or "…", it is determined that the first content does not have an association relationship with the second content.
  • if the first content and the second content have an association relationship, the association flag bit of the first subtitle fragment is set to 1;
  • if the first content and the second content do not have an association relationship, the association flag bit of the first subtitle fragment is set to 0.
  • the step of determining whether the first content and the second content are associated includes:
  • if the association flag bit of the first subtitle fragment is 1, it is determined that the first content and the second content have an association relationship;
  • if the association flag bit of the first subtitle fragment is not 1, it is determined that the first content and the second content do not have an association relationship.
  • step S1410 is executed, that is, when the display end time of the first subtitle segment is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
  • step S1406 is executed, that is, when the display end time of the first subtitle segment is reached, the subtitle display control module sends an instruction to clear the first content to the drawing engine module through the graphics hardware abstraction module, and the drawing engine module clears the first content, that is, displays blank content;
  • the display 260 is controlled to display the second content drawn by the drawing engine module according to the second display characteristic at a position corresponding to the current playback interface.
  • the content of the first subtitle segment is "I just saw a child," and the display time period is 1000ms-3000ms.
  • the content of the second subtitle segment is "She was leading a cute puppy." and the display time period is 3001ms-5000ms.
  • the subtitle display effect is: display "I just saw a child," → display "She was leading a cute puppy.".
  • the content of the first subtitle segment is "It will be sunny today." and the display time period is 1000ms-3000ms; the content of the second subtitle segment is "It will be sunny tomorrow." and the display time period is 3001ms-5000ms.
  • the subtitle display effect is: display "It will be sunny today." → display blank → display "It will be sunny tomorrow.".
  • a judgment of the association of the contents of adjacent fragments is also added. Only when the contents of adjacent subtitle fragments have an association relationship is there no need to insert an extra blank display; when the contents of adjacent subtitle fragments are not associated, the effect of displaying blank is still retained, to ensure that effects such as TTML sentence-segmentation display are not destroyed.
  • "when the first display end time is reached, control the display to display the second content according to the second display characteristic" can be replaced by "when the second display start time is reached, control the display to display the second content according to the second display characteristic".
  • the content of the first subtitle segment is "Beijing”
  • the display time period is 1000ms-3000ms
  • the content of the second subtitle segment is "Shanghai”
  • the display time period is 3001ms-5000ms
  • the default value is 40ms.
  • the subtitle display effect is: display "Beijing" → display "Shanghai".
  • Some embodiments of the present application provide a subtitle display method. The method is suitable for a display device, the display device includes a display and a controller, and the controller is configured to: obtain the first content, the first display characteristic and the display end time of the first subtitle fragment; control the display to display the first content according to the first display characteristic; obtain the second content, the second display characteristic and the display start time of the second subtitle fragment; calculate the difference between the display start time and the display end time; and, if the difference is less than the preset value, when the display end time is reached, control the display to display the second content according to the second display characteristic.
  • When the display time interval between two adjacent fragments is short, the blank content may not be displayed and the content of the latter fragment may be displayed directly, which avoids subtitle flickering and insufficiently smooth content, and improves user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Some embodiments of the present application show a display device and a subtitle display method. The method includes: obtaining first content, a first display characteristic, a first display start time and a first display end time of a first subtitle fragment; when the first display start time is reached, controlling the display to display the first content according to the first display characteristic; obtaining second content, a second display characteristic and a second display start time of a second subtitle fragment; calculating the difference between the second display start time and the first display end time; and, if the difference is less than a preset value, controlling the display to display the second content according to the second display characteristic when the first display end time is reached.

Description

Display device and subtitle display method
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. 202211100434.X, filed on September 8, 2022, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the technical field of display devices, and in particular to a display device and a subtitle display method.
BACKGROUND
Because TTML (Timed Text Markup Language) has good compatibility, simple configuration and convenient synchronization, the ATSC 3.0 broadcast system has introduced TTML as its subtitle standard.
SUMMARY
Some embodiments of the present application provide a display device, including: a display; and a controller configured to: obtain first content, a first display characteristic, a first display start time and a first display end time of a first subtitle fragment; when the first display start time is reached, control the display to display the first content according to the first display characteristic; obtain second content, a second display characteristic and a second display start time of a second subtitle fragment; calculate the difference between the second display start time and the first display end time; and, if the difference is less than a preset value, control the display to display the second content according to the second display characteristic when the first display end time is reached.
In some embodiments, when performing the step of controlling the display to display the second content according to the second display characteristic when the first display end time is reached, the controller is further configured to: determine whether the first content is the same as the second content; and, if the first content is the same as the second content, control the display to display the second content according to the second display characteristic when the first display end time is reached.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows an operation scenario between a display device and a control apparatus according to some embodiments;
Figure 2 shows a block diagram of a hardware configuration of the control apparatus 100 according to some embodiments;
Figure 3 shows a block diagram of a hardware configuration of the display device 200 according to some embodiments;
Figure 4 shows a software configuration diagram of the display device 200 according to some embodiments;
Figure 5 shows a flow chart inside a TTML module according to some embodiments;
Figure 6 shows a schematic diagram of a first subtitle display effect according to some embodiments;
Figure 7 shows a schematic diagram of a second subtitle display effect according to some embodiments;
Figure 8 shows a system architecture diagram of a display device 200 according to some embodiments;
Figure 9 shows an interaction diagram related to the TTML module according to some embodiments;
Figure 10 shows a schematic diagram of a live broadcast application settings interface according to some embodiments;
Figure 11 shows a schematic diagram of a live broadcast application subtitle settings interface according to some embodiments;
Figure 12 shows a flow chart inside another TTML module according to some embodiments;
Figure 13 shows a flow chart inside yet another TTML module according to some embodiments;
Figure 14 shows a flow chart of steps performed by a subtitle display control module according to some embodiments;
Figure 15 shows a schematic diagram of a user interface for personalized subtitle display settings according to some embodiments;
Figure 16 shows a schematic diagram of a third subtitle display effect according to some embodiments;
Figure 17 shows a schematic diagram of a fourth subtitle display effect according to some embodiments;
Figure 18 shows a schematic diagram of a fifth subtitle display effect according to some embodiments;
Figure 19 shows a schematic diagram of a sixth subtitle display effect according to some embodiments;
Figure 20 shows a schematic diagram of a seventh subtitle display effect according to some embodiments.
DETAILED DESCRIPTION
To make the purposes and implementations of the present application clearer, exemplary implementations of the present application are described clearly and completely below with reference to the accompanying drawings of the exemplary embodiments. Obviously, the described exemplary embodiments are only a part of the embodiments of the present application, not all of them.
It should be noted that the brief explanations of terms in the present application are only intended to facilitate understanding of the implementations described below, and are not intended to limit the implementations of the present application. Unless otherwise stated, these terms should be understood according to their ordinary and usual meanings.
The terms "first", "second", "third" and the like in the specification, the claims and the above drawings of the present application are used to distinguish similar or same-kind objects or entities, and do not necessarily imply a specific order or sequence unless otherwise noted. It should be understood that terms used in this way are interchangeable where appropriate.
The display device provided by the embodiments of the present application may take a variety of implementation forms; for example, it may be a television, a smart television, a laser projection device, a monitor, an electronic bulletin board, an electronic table, and so on. Figures 1 and 2 show a specific implementation of the display device of the present application.
Figure 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in Figure 1, a user may operate the display device 200 through a smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication or Bluetooth protocol communication, as well as other short-distance communication methods, and the display device 200 is controlled in a wireless or wired manner. The user may control the display device 200 by inputting user instructions through buttons on the remote controller, voice input, control panel input, and the like.
In some embodiments, a smart device 300 (such as a mobile terminal, a tablet, a computer, a laptop, etc.) may also be used to control the display device 200, for example by using an application running on the smart device.
In some embodiments, the display device may receive the user's control through touch or gestures instead of using the above smart device or control apparatus to receive instructions.
In some embodiments, the display device 200 may also be controlled in a manner other than the control apparatus 100 and the smart device 300; for example, the user's voice instruction control may be received directly through a module configured inside the display device 200 for obtaining voice instructions, or through a voice control device provided outside the display device 200.
In some embodiments, the display device 200 also performs data communication with a server 400. The display device 200 may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN) and other networks. The server 400 may provide various content and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Figure 2 exemplarily shows a block diagram of the configuration of the control apparatus 100 according to an exemplary embodiment. As shown in Figure 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory and a power supply. The control apparatus 100 may receive input operation instructions from the user and convert the operation instructions into instructions that the display device 200 can recognize and respond to, thereby mediating the interaction between the user and the display device 200.
As shown in Figure 3, the display device 200 includes at least one of a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply and a user interface.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, a RAM, a ROM, and first to n-th interfaces for input/output.
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used to receive image signals output from the controller and display video content, image content, menu control interface components and a user control UI interface.
The display 260 may be a liquid crystal display, an OLED display or a projection display, and may also be a projection device and a projection screen.
The display 260 also includes a touch screen, which is used to receive input control instructions through actions such as sliding or clicking of the user's fingers on the touch screen.
The communicator 220 is a component for communicating with external devices or servers according to various types of communication protocols. For example, the communicator may include at least one of a Wifi module, a Bluetooth module, a wired Ethernet module, other network communication protocol chips or near-field communication protocol chips, and an infrared receiver. The display device 200 may establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220.
The user interface may be used to receive control signals from the control apparatus 100 (such as an infrared remote controller, etc.).
The detector 230 is used to collect signals from the external environment or signals of interaction with the outside. For example, the detector 230 includes a light receiver, a sensor used to collect ambient light intensity; or the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, user attributes or user interaction gestures; or the detector 230 includes a sound collector, such as a microphone, for receiving external sounds.
The external device interface 240 may include, but is not limited to, any one or more of the following interfaces: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface (component), a composite video input interface (CVBS), a USB input interface (USB), an RGB port, and so on. It may also be a composite input/output interface formed by the above multiple interfaces.
The tuner-demodulator 210 receives broadcast television signals in a wired or wireless manner, and demodulates audio-video signals as well as EPG data signals from multiple wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the tuner-demodulator 210 may be located in different separate devices; that is, the tuner-demodulator 210 may also be located in a device external to the main device where the controller 250 is located, such as an external set-top box.
The controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in the memory. The controller 250 controls the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 260, the controller 250 may perform operations related to the object selected by the user command.
In some embodiments, the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), a random access memory (RAM), a read-only memory (ROM), first to n-th interfaces for input/output, a communication bus (Bus), and the like.
The user may input a user command into a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the graphical user interface (GUI). Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through a sensor to receive the user input command.
As shown in Figure 4, the system of the display device is divided into three layers, from top to bottom: an application layer, a middleware layer and a hardware layer.
The application layer mainly contains the commonly used applications on the television as well as the application framework (Application Framework). The commonly used applications are mainly applications developed based on the browser, for example HTML5 APPs, as well as native applications (Native APPs);
The application framework (Application Framework) is a complete program model that has all the basic functions required by standard application software, such as file access and data exchange, as well as the usage interfaces of these functions (toolbar, status bar, menu, dialog box).
Native APPs may support online or offline operation, message push or local resource access.
The middleware layer includes middleware such as services for various television protocols, services based on multimedia protocols (for example, MPEG, etc.) and system components. Middleware can use the basic services (functions) provided by the system software to connect various parts of the application system or different applications on the network, achieving the purpose of resource sharing and function sharing.
The hardware layer mainly includes the HAL interface, hardware and drivers. The HAL interface is a unified interface to which all television chips connect, and the specific logic is implemented by each chip. The drivers mainly include: an audio driver, a display driver, a Bluetooth driver, a camera driver, a WIFI driver, a USB driver, an HDMI driver, sensor drivers (such as a fingerprint sensor, a temperature sensor, a pressure sensor, etc.), a power driver, and so on.
TTML is an XML-based timed text markup language. It is intended for use in caption and subtitle delivery applications worldwide, simplifying interoperability and maintaining consistency and compatibility with other subtitle file formats. Because of its good compatibility, simple configuration and convenient synchronization, the ATSC 3.0 broadcast system has introduced TTML as its subtitle standard.
In some embodiments, the flow inside the TTML module is shown in Figure 5, where the TTML module includes CC_Interface (Closed Caption Interface, subtitle interface), CC_Parser (subtitle syntax analysis program, i.e. subtitle parser), CC_Process (subtitle process) and CC_Display_Control (subtitle display control module).
S501: After the subtitle interface receives an instruction to start TTML display, TTML display is started;
S502: The DM (DataManager, data management module) sends the obtained first subtitle fragment data to the subtitle parser;
S503: After parsing the first subtitle fragment data, the subtitle parser obtains relevant information of the first subtitle fragment, including the subtitle content, display start time, display end time and display characteristics of the first subtitle fragment. The display characteristics include the subtitle font type, font size, font transparency, font background, subtitle position, etc. The display characteristics may be set by default or personalized by the user.
S504: The subtitle parser preprocesses the parsed first subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed first subtitle fragment data to the subtitle display control module. The purpose of preprocessing is to organize the relevant information of the first subtitle fragment into a unified data structure and put it into a queue, so as to facilitate the transmission and acquisition of subtitle data.
S505: After obtaining the preprocessed first subtitle fragment data, the subtitle display control module performs operations related to displaying the content of the first subtitle fragment.
S506: When the display end time of the first subtitle fragment is reached, displaying the content of the first subtitle fragment is ended, i.e., blank content is displayed.
S507: The data management module sends the obtained second subtitle fragment data to the subtitle parser;
S508: After parsing the second subtitle fragment data, the subtitle parser obtains relevant information of the second subtitle fragment.
S509: The subtitle parser preprocesses the parsed second subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed second subtitle fragment data to the subtitle display control module.
S510: When the display start time of the second subtitle fragment is reached, the subtitle display control module performs operations related to displaying the content of the second subtitle fragment, and when the display end time of the second subtitle fragment is reached, displaying the content of the second subtitle fragment is ended.
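As an illustration of the unified data structure and queue mentioned in S504, a preprocessing step could collect each fragment's content, timing and display characteristics roughly as in the following sketch; the type and field names (SubtitleFragment, DisplayStyle, SubtitleQueue) are hypothetical and are not taken from the application.

```cpp
#include <cstdint>
#include <queue>
#include <string>

// Hypothetical display characteristics carried by each fragment.
struct DisplayStyle {
    int posX = 0, posY = 0;        // subtitle position on screen
    std::string fontType = "sans";
    int fontSize = 32;
    int transparency = 0;          // 0 = opaque
    std::string background = "none";
};

// Unified per-fragment structure produced by preprocessing.
struct SubtitleFragment {
    std::string content;      // subtitle text
    int64_t startMs = 0;       // display start time
    int64_t endMs = 0;         // display end time
    DisplayStyle style;        // display characteristics
};

// Queue handed from the subtitle process to the subtitle display control module.
using SubtitleQueue = std::queue<SubtitleFragment>;
```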
However, because TTML subtitle fragments are small, many fragments are transmitted, the fragment interval is too short, and the hardware drawing performance cannot keep up, subtitle flickering, insufficiently smooth content and other poor experiences occur during the rapid transmission and display of multiple fragments. That is, limited by hardware performance, there must be an interval of at least 40 ms between displaying blank content and then displaying the content of the second subtitle fragment. If the difference between the display end time of the first subtitle fragment and the display start time of the second subtitle fragment is greater than or equal to 40 ms, the display of the first subtitle fragment, the blank, and the second subtitle fragment can be completed normally at the specified times. As shown in Figure 6, the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, and the content of the second subtitle fragment is "Shanghai" with a display period of 3100 ms-5100 ms, i.e., the blank display time is 100 ms, so the effect of displaying "Beijing" → displaying blank → displaying "Shanghai" can be completed according to the original schedule.
If the difference between the display end time of the first subtitle fragment and the display start time of the second subtitle fragment is less than 40 ms, then as shown in Figure 7, the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, and the content of the second subtitle fragment is "Shanghai" with a display period of 3001 ms-5000 ms, i.e., the blank display time is 1 ms. The original intention is to achieve the effect shown as Effect 1 in Figure 7, but limited by hardware performance, blank content cannot be displayed for only 1 ms, and only the effect shown as Effect 2 in Figure 7 can be achieved. That is, the second subtitle fragment should be displayed at 3001 ms but can only be displayed at 3040 ms, and displaying blank content between 3001 ms and 3040 ms causes the user to experience subtitle pauses and an unsmooth display.
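The 40 ms constraint described above reduces to a simple comparison. The following is a minimal sketch, assuming a hypothetical kMinBlankMs constant standing in for the hardware-limited interval; it reproduces the Beijing/Shanghai timings used in the example.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical minimum interval the drawing hardware needs between clearing
// the screen (blank) and drawing the next fragment, per the 40 ms figure above.
constexpr int64_t kMinBlankMs = 40;

// Returns true when the gap between two fragments is long enough for the
// blank to be displayed at the originally scheduled times.
bool blankCanBeHonored(int64_t firstEndMs, int64_t secondStartMs) {
    return (secondStartMs - firstEndMs) >= kMinBlankMs;
}

int main() {
    // "Beijing" 1000-3000 ms, "Shanghai" 3100-5100 ms: 100 ms gap, blank is fine.
    std::cout << blankCanBeHonored(3000, 3100) << '\n';  // prints 1
    // "Beijing" 1000-3000 ms, "Shanghai" 3001-5000 ms: 1 ms gap, the blank would
    // stretch to about 40 ms and delay the second fragment.
    std::cout << blankCanBeHonored(3000, 3001) << '\n';  // prints 0
}
```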
If the first subtitle fragment and the second subtitle fragment are contextually related, for example the content of the first fragment and the content of the second fragment form one sentence, the display effect is that a small blank suddenly appears between the first half of the sentence and the second half, i.e., the sentence pauses.
If the first subtitle fragment and the second subtitle fragment have the same content, the display effect is that the subtitle flashes once on the screen; if several consecutive fragments all carry the same content, the same content keeps flashing on the screen.
In summary, because of hardware performance limitations and unreasonable sentence-segmentation design of the subtitles, TTML subtitles can run into situations that affect the subtitle display effect and thus lead to a poor user experience.
In view of the above technical problems, embodiments of the present application provide a display device 200. For the structure and the functions of each part of the display device 200, reference may be made to the above embodiments. In addition, on the basis of the display device 200 shown in the above embodiments, this embodiment further improves some functions of the display device 200.
Figure 8 is a system architecture diagram of the display device 200. In some implementations, different from Figure 4, the system architecture of the display device 200 can also be expressed as including an application (APP) layer, various services (Service) and a hardware (HAL) layer. The application layer includes multiple applications, for example a live broadcast application (Live Tv) and a TV input channel (Tv Input). TTML subtitles are displayed on top of the Live Tv picture. Applications in the application layer interact with the TTML module to complete TTML display, stopping of display, and setting and switching of display characteristics.
As mentioned above, the middleware layer in Figure 4 may include various services that can be obtained and used by applications; the services can be regarded as the logical layer of TTML. The DFB (DIRECT_FB, drawing engine module) in the same layer serves as the drawing service and is used to draw TTML subtitles. The modules used for the TTML service include the DM (DataManager, data management module), GFXHAL (graphics hardware abstraction layer, graphics hardware abstraction module) and the TTML module. The data management module is used for TTML data aggregation and distributes the TTML raw data. The graphics hardware abstraction module serves as the interface for TTML subtitle display characteristics and manages the drawing of various TTML lines, text and rectangles. The TTML module implements TTML subtitle data parsing, data analysis, prediction, data processing and display characteristics.
The hardware layer provides the source of the TTML display data. The TTML subtitle fragment data all comes from modules in the hardware layer, such as the aforementioned modulation, demodulation, encoding and decoding modules for processing broadcast data, which can be implemented on the chip of the display device (for example, a television). The hardware layer includes Demux (demultiplexing module). The demultiplexing module is a hardware module of the main chip, used to filter various types of data; it can filter out TTML subtitle fragment data from the code stream and send the TTML subtitle fragment data to the data management module.
Figure 9 is an interaction diagram related to the TTML module. The TTML module includes the subtitle interface, the subtitle parser, the subtitle process and the subtitle display control module. The system interaction logic is as follows: LiveTv serves as the system's command setting source and controls the display of TTML subtitles and the display attributes. The data management module serves as the source of TTML subtitle data and inputs subtitle fragment data to the TTML module. The Player serves as the source of the TTML synchronization clock and is responsible for providing the TTML synchronization time. The graphics hardware abstraction module interfaces with the drawing service to realize drawing and display of TTML content, which is drawn by the drawing engine module; the performance of this part is greatly affected by the hardware.
In some embodiments, the user inputs an instruction to open the live broadcast application, where the live broadcast application is used to play live television programs, in particular ATSC 3.0 programs. After the live broadcast application is opened, the TTML subtitle function is turned on by default; the display device 200 receives audio-video data and subtitle fragment data, plays the audio-video data after processing, and displays the content of the subtitle fragments on the corresponding video data.
In some embodiments, after the user opens the live broadcast application, the TTML subtitle function is not turned on by default; instead, a TTML subtitle function switch is provided so that the user can choose to turn the TTML subtitle function on or off. For example, the user opens the live broadcast application and presses the menu key of the control apparatus 100 to display the user interface shown in Figure 10, which includes a subtitle control 101. Upon receiving a user instruction to select the subtitle control 101, the user interface shown in Figure 11 is displayed. The user interface of Figure 11 includes a subtitle function control 111, through which the user can turn the TTML subtitle function on or off. When a user instruction to enable the TTML subtitle function is received, the subtitle interface starts TTML subtitle display.
In some embodiments, as shown in Figure 12,
S1201: After the subtitle interface receives an instruction indicating that the TTML subtitle function is enabled, TTML display is started;
S1202: The demultiplexing module sends the first subtitle fragment data filtered out from the code stream to the data management module, and the data management module sends the first subtitle fragment data to the subtitle parser;
S1203: After parsing the first subtitle fragment data, the subtitle parser obtains relevant information of the first subtitle fragment.
S1204: The subtitle parser preprocesses the parsed first subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed first subtitle fragment data to the subtitle display control module.
S1205: The subtitle display control module performs the subsequent operations.
S1206: While the first subtitle fragment is being displayed, the demultiplexing module sends the second subtitle fragment data filtered out from the code stream to the data management module, and the data management module sends the second subtitle fragment data to the subtitle parser;
S1207: After parsing the second subtitle fragment data, the subtitle parser obtains relevant information of the second subtitle fragment.
S1208: The subtitle parser preprocesses the parsed second subtitle fragment data and sends it to the subtitle process, and the subtitle process sends the preprocessed second subtitle fragment data to the subtitle display control module.
S1209: The subtitle display control module performs the subsequent operations.
In some embodiments, as shown in Figure 13,
S1301: After the subtitle interface receives an instruction indicating that the TTML subtitle function is enabled, TTML display is started;
S1302: The demultiplexing module sends the first subtitle fragment data and the second subtitle fragment data filtered out from the code stream together to the data management module, and the data management module sends the first subtitle fragment data and the second subtitle fragment data to the subtitle parser;
S1303: After parsing the first subtitle fragment data and the second subtitle fragment data, the subtitle parser obtains relevant information of the first subtitle fragment and the second subtitle fragment.
S1304: The subtitle parser preprocesses the parsed first subtitle fragment data and second subtitle fragment data and sends them to the subtitle process;
S1305: The subtitle process sends the preprocessed first subtitle fragment data and second subtitle fragment data to the subtitle display control module.
S1306: The subtitle display control module performs the subsequent operations.
The controller includes a subtitle display control module. As shown in Figure 14, the subtitle display control module performs the following steps:
Step S1401: Obtain the first content, the first display characteristic, the first display start time and the first display end time of the first subtitle fragment;
The subtitle display control module may obtain the first content, the first display characteristic, the first display start time and the first display end time of the first subtitle fragment from the preprocessed first subtitle fragment data sent by the subtitle process. The first display characteristic includes the display position, font type, font size, transparency, subtitle background, etc. of the first content. The display position of the first content can be obtained by parsing the first subtitle fragment data, while the font type, font size, transparency and subtitle background of the first content can be obtained from the Live Tv user setting file. If the display characteristics obtained by parsing the first subtitle fragment data differ from the display characteristics set by the user, one of them may prevail. For example, if the font type obtained by parsing the first subtitle fragment data differs from the font type set by the user, the font type obtained by parsing the first subtitle fragment data may prevail, or the font type set by the user may prevail.
The user may open the display characteristic settings interface from the live broadcast application settings interface, as shown in Figure 11. The user interface of Figure 11 also includes a personalized subtitle display control 112; the user selects the personalized subtitle display control 112 and presses the confirmation key of the control apparatus 100, and the user interface shown in Figure 15 is displayed. The user interface of Figure 15 includes a font type control 151, a font size control 152, a transparency control 153 and a subtitle background color control 154. The user can modify the display characteristics of the subtitles by selecting the corresponding controls.
Step S1402: When the first display start time is reached, control the display to display the first content according to the first display characteristic;
The subtitle display control module sends the first content and the first display characteristic of the first subtitle fragment to the graphics hardware abstraction module, the graphics hardware abstraction module sends the first content and the first display characteristic of the first subtitle fragment to the drawing engine module, and the drawing engine module draws the first content based on the first display characteristic.
When the first display start time is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the first content drawn by the drawing engine module according to the first display characteristic.
It should be noted that the player periodically synchronizes time with the subtitle display control module. That is, at every preset time interval, the player sends the current time to the subtitle display control module, and the subtitle display control module calculates the difference between the current time it has recorded and the current time sent by the player. If the difference is greater than a preset synchronization value, the module adjusts its own recorded time according to the time sent by the player to ensure time synchronization between the player and the subtitle display control module, so that the audio-video data played by the player corresponds to the subtitles. If the difference is less than or equal to the preset synchronization value, no time-synchronization-related operation is performed.
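A minimal sketch of this periodic synchronization is given below; the SubtitleClock class and the kSyncThresholdMs constant are hypothetical stand-ins for the unspecified preset synchronization value, not the application's actual interfaces.

```cpp
#include <cstdint>

// Hypothetical threshold standing in for the "preset synchronization value".
constexpr int64_t kSyncThresholdMs = 100;

class SubtitleClock {
public:
    // Called at every preset interval with the player's current time.
    void onPlayerTime(int64_t playerNowMs) {
        int64_t diff = localNowMs_ - playerNowMs;
        if (diff < 0) diff = -diff;
        if (diff > kSyncThresholdMs) {
            localNowMs_ = playerNowMs;  // adopt the player's clock
        }
        // Otherwise the locally recorded time is left unchanged.
    }

    void tick(int64_t elapsedMs) { localNowMs_ += elapsedMs; }
    int64_t nowMs() const { return localNowMs_; }

private:
    int64_t localNowMs_ = 0;  // time recorded by the subtitle display control module
};
```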
Step S1403: Obtain the second content, the second display characteristic, the second display start time and the second display end time of the second subtitle fragment;
The subtitle display control module may obtain the second content, the second display characteristic, the second display start time and the second display end time of the second subtitle fragment from the preprocessed second subtitle fragment data. The second display characteristic includes the display position, font type, font size, transparency, subtitle background, etc. of the second content.
Step S1404: Calculate the difference between the second display start time and the first display end time;
Step S1405: Determine whether the difference is less than a preset value;
The preset value is the time required by the hardware in the process of displaying blank content and displaying the next subtitle.
If the difference is greater than or equal to the preset value, step S1406 is performed: when the display end time of the first subtitle fragment is reached, clear the first content;
The subtitle display control module sends an instruction to clear the first content to the drawing engine module through the graphics hardware abstraction module, and the drawing engine module clears the first content, i.e., displays blank content.
Step S1407: Control the display to display the second content according to the second display characteristic.
The subtitle display control module sends the second content and the second display characteristic of the second subtitle fragment to the graphics hardware abstraction module, the graphics hardware abstraction module sends the second content and the second display characteristic of the second subtitle fragment to the drawing engine module, and the drawing engine module draws the content. When the second display start time is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
In some embodiments, the third content, the third display characteristic, the third display start time and the third display end time of a third subtitle fragment are obtained;
the difference between the third display start time and the second display end time is calculated, and the time at which the content of the third subtitle fragment is displayed is determined according to the difference.
In some embodiments, if the difference is less than the preset value, when the display end time of the first subtitle fragment is reached, the display is controlled to display the second content according to the second display characteristic.
The subtitle display control module sends the second content and the second display characteristic of the second subtitle fragment to the graphics hardware abstraction module, the graphics hardware abstraction module sends the second content and the second display characteristic of the second subtitle fragment to the drawing engine module, and the drawing engine module draws the content. When the display end time of the first subtitle fragment is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
For example, as shown in Figure 16, if the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, and the content of the second subtitle fragment is "Shanghai" with a display period of 3100 ms-5100 ms, the difference = 3100 - 3000 = 100 ms. Since 100 ms > 40 ms, the content of the second subtitle fragment is displayed at 3100 ms. As shown as Effect 2 in Figure 16, the subtitle display effect is: display "Beijing" → display blank → display "Shanghai".
If the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "Shanghai" with a display period of 3001 ms-5000 ms, and the preset value is 40 ms, the difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms, the content of the second subtitle fragment is displayed at 3000 ms. As shown as Effect 1 in Figure 16, the subtitle display effect is: display "Beijing" → display "Shanghai".
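Steps S1404 to S1407 and the alternative branch described above amount to a single timing decision. The following sketch, using assumed names (kPresetMs, Fragment, secondFragmentDrawTime), illustrates one way this decision could be expressed; it is not the application's implementation.

```cpp
#include <cstdint>
#include <string>

// Hypothetical preset value: the time the hardware needs to display blank and
// then the next subtitle (40 ms in the examples above).
constexpr int64_t kPresetMs = 40;

struct Fragment {
    std::string content;
    int64_t startMs;
    int64_t endMs;
};

// Returns the time at which the second fragment should be drawn. If the gap is
// shorter than the preset value, the second fragment is drawn directly at the
// first fragment's end time with no blank; otherwise the first content is
// cleared at its end time and the second fragment is drawn at its start time.
int64_t secondFragmentDrawTime(const Fragment& first, const Fragment& second,
                               bool& insertBlank) {
    const int64_t diff = second.startMs - first.endMs;
    if (diff < kPresetMs) {
        insertBlank = false;
        return first.endMs;    // e.g. "Beijing" 1000-3000 ms -> "Shanghai" at 3000 ms
    }
    insertBlank = true;        // clear the first content at first.endMs
    return second.startMs;     // e.g. blank at 3000 ms, "Shanghai" at 3100 ms
}
```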
In the above embodiment, when the difference is less than the preset value, simply removing the display of the blank content would destroy some TTML display characteristics, for example panning display and sentence-segmented display.
In some embodiments, if the difference is less than the preset value, step S1408 is performed: determine whether the first content is the same as the second content;
In some embodiments, if the first content is the same as the second content, when the first display end time is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
If the first content is not the same as the second content, when the display end time of the first subtitle fragment is reached, the subtitle display control module sends an instruction to clear the first content to the drawing engine module through the graphics hardware abstraction module, and the drawing engine module clears the first content, i.e., displays blank content;
when the display start time of the second subtitle fragment is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
For example, as shown in Figure 17, the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "Beijing" with a display period of 3001 ms-5000 ms, and the preset value is 40 ms. The difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms and the content of the first and second subtitle fragments is both "Beijing", the content of the second subtitle fragment is displayed at 3000 ms. As shown as Effect 1 in Figure 17, the subtitle display effect is: display "Beijing" → display "Beijing".
If the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "Shanghai" with a display period of 3001 ms-5000 ms, and the preset value is 40 ms, the difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms but the contents of the first and second subtitle fragments are different, blank is displayed at 3000 ms, and the content of the second subtitle fragment is displayed at 3040 ms. As shown as Effect 2 in Figure 17, the subtitle display effect is: display "Beijing" → display blank → display "Shanghai".
The embodiments of the present application take the hardware-performance-limited time as the dividing point. When the difference between adjacent subtitle fragments is determined to be less than the preset value, a further judgment of whether the displayed content is the same is added: only when the content of adjacent subtitle fragments is the same is it unnecessary to insert an extra blank display, and when the content of adjacent subtitle fragments is different, the effect of displaying blank is still retained, so as to ensure that effects such as TTML sentence-segmented display are not destroyed.
In some embodiments, if the first content is the same as the second content, step S1409 is performed: determine whether the first display characteristic is the same as the second display characteristic;
The display characteristics include display position, font type, font size, transparency, subtitle background, etc. The first display characteristic being the same as the second display characteristic may mean that the display position, font type, font size, transparency, subtitle background and other information are all the same. The first display characteristic being the same as the second display characteristic may also mean that at least one designated item of information among display position, font type, font size, transparency and subtitle background is the same. For example, as long as the display positions in the first display characteristic and the second display characteristic are the same, the first display characteristic and the second display characteristic are tentatively determined to be the same.
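The two readings of "the same display characteristic" described above (all items equal, or only a designated item such as the display position equal) can be expressed as two comparison functions. This is an illustrative sketch that repeats the hypothetical DisplayStyle fields from the earlier sketch; it is not the application's data structure.

```cpp
#include <string>

// Hypothetical display-characteristic record; field names are illustrative.
struct DisplayStyle {
    int posX = 0, posY = 0;
    std::string fontType;
    int fontSize = 0;
    int transparency = 0;
    std::string background;
};

// Strict interpretation: every item of information must match.
bool sameStyleStrict(const DisplayStyle& a, const DisplayStyle& b) {
    return a.posX == b.posX && a.posY == b.posY && a.fontType == b.fontType &&
           a.fontSize == b.fontSize && a.transparency == b.transparency &&
           a.background == b.background;
}

// Relaxed interpretation: only the designated item (here, the display position)
// needs to match for the characteristics to be treated as the same.
bool sameStyleByPosition(const DisplayStyle& a, const DisplayStyle& b) {
    return a.posX == b.posX && a.posY == b.posY;
}
```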
If the first display characteristic is the same as the second display characteristic, step S1410 is performed: when the first display end time is reached, control the display to display the second content according to the second display characteristic.
If the first display characteristic is not the same as the second display characteristic, step S1406 is performed; that is, when the display end time of the first subtitle fragment is reached, the subtitle display control module sends an instruction to clear the first content to the drawing engine module through the graphics hardware abstraction module, and the drawing engine module clears the first content, i.e., displays blank content;
when the display start time of the second subtitle fragment is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
For example, as shown in Figure 18, the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "Beijing" with a display period of 3001 ms-5000 ms, and the preset value is 40 ms. The difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms and the display positions of the first and second subtitle fragments are the same, the content of the second subtitle fragment is displayed at 3000 ms. As shown as Effect 1 in Figure 18, the subtitle display effect is: display "Beijing" → display "Beijing".
If the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "Beijing" with a display period of 3001 ms-5000 ms, and the preset value is 40 ms, the difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms but the display positions of the first and second subtitle fragments are different, blank is displayed at 3000 ms, and the content of the second subtitle fragment is displayed at 3040 ms. As shown as Effect 2 in Figure 18, the subtitle display effect is: display "Beijing" → display blank → display "Beijing".
When the content of adjacent fragments is the same, some embodiments of the present application further add a judgment of whether the display characteristics are the same: only when the display characteristics of adjacent subtitle fragments are the same is it unnecessary to insert an extra blank display, and when the display characteristics of adjacent subtitle fragments are different, the effect of displaying blank is still retained, so as to ensure that effects such as TTML panning display are not destroyed.
In some embodiments, if the first content is not the same as the second content, step S1411 is performed: determine whether the first content and the second content have an association relationship;
In some embodiments, the step of determining whether the first content and the second content have an association relationship includes:
determining whether the first content ends with a sentence-final punctuation mark;
if the first content ends with a sentence-final punctuation mark, determining that the first content and the second content do not have an association relationship;
if the first content does not end with a sentence-final punctuation mark, determining that the first content and the second content have an association relationship.
Sentence-final punctuation marks include the period, question mark, exclamation mark, ellipsis, etc. Mid-sentence punctuation marks include the comma, enumeration comma, dash, semicolon, colon, double quotation marks, single quotation marks, etc.
For example, if the obtained first content is "The weather is so nice today!", "Where are you going?", "I have finished my task today." or "There are pandas, tigers, lions in the zoo…", then when it is recognized that the first content ends with "。", "？", "！" or "……" (a period, question mark, exclamation mark or ellipsis), it is determined that the first content and the second content do not have an association relationship. If the obtained first content is "I just saw a child,", "What you are doing is simply throwing an egg against a rock—", "Xiao Hong said: "", "Today we mainly cover the following three points:", "The first point is to pay attention to hygiene;" or "the key to building a civilized city", then when it is recognized that the first content ends with "，", "—", "：", "；" or an ordinary character, it is determined that the first content and the second content have an association relationship.
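As an illustration of the punctuation-based determination above, the following sketch checks whether the first content ends with a sentence-final punctuation mark; the helper names and the exact punctuation set are assumptions made for the example.

```cpp
#include <string>

// Returns true when text ends with the given UTF-8 suffix.
bool endsWith(const std::string& text, const std::string& suffix) {
    return text.size() >= suffix.size() &&
           text.compare(text.size() - suffix.size(), suffix.size(), suffix) == 0;
}

// Sentence-final punctuation: period, question mark, exclamation mark, ellipsis
// (full-width Chinese forms plus their ASCII counterparts, as an assumption).
bool endsWithSentenceFinalPunct(const std::string& utf8Text) {
    for (const std::string mark : {"。", "？", "！", "……", ".", "?", "!", "..."}) {
        if (endsWith(utf8Text, mark)) return true;
    }
    return false;
}

// The first and second contents are associated when the first content does not
// close a sentence.
bool contentsAssociated(const std::string& firstContent) {
    return !endsWithSentenceFinalPunct(firstContent);
}
```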
In some embodiments, if the first content and the second content have an association relationship, the association flag bit of the first subtitle fragment is set to 1; if the first content and the second content do not have an association relationship, the association flag bit of the first subtitle fragment is set to 0.
The step of determining whether the first content and the second content have an association relationship includes:
determining whether the association flag bit of the first subtitle fragment is 1;
if the association flag bit of the first subtitle fragment is 1, determining that the first content and the second content have an association relationship;
if the association flag bit of the first subtitle fragment is not 1, determining that the first content and the second content do not have an association relationship.
If the first content and the second content have an association relationship, step S1410 is performed; that is, when the display end time of the first subtitle fragment is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
If the first content and the second content do not have an association relationship, step S1406 is performed; that is, when the display end time of the first subtitle fragment is reached, the subtitle display control module sends an instruction to clear the first content to the drawing engine module through the graphics hardware abstraction module, and the drawing engine module clears the first content, i.e., displays blank content;
when the display start time of the second subtitle fragment is reached, the display 260 is controlled to display, at the position corresponding to the current playback interface, the second content drawn by the drawing engine module according to the second display characteristic.
For example, as shown in Figure 19, the content of the first subtitle fragment is "I just saw a child," with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "She was leading a cute puppy." with a display period of 3001 ms-5000 ms, and the preset value is 40 ms. The difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms and the contents of the first and second subtitle fragments have an association relationship, the content of the second subtitle fragment is displayed at 3000 ms. As shown as Effect 1 in Figure 19, the subtitle display effect is: display "I just saw a child," → display "She was leading a cute puppy.".
If the content of the first subtitle fragment is "It will be sunny today." with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "It will be sunny tomorrow." with a display period of 3001 ms-5000 ms, and the preset value is 40 ms, the difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms but the contents of the first and second subtitle fragments do not have an association relationship, blank is displayed at 3000 ms, and the content of the second subtitle fragment is displayed at 3040 ms. As shown as Effect 2 in Figure 19, the subtitle display effect is: display "It will be sunny today." → display blank → display "It will be sunny tomorrow.".
When the content of adjacent fragments is different, some embodiments of the present application further add a judgment of whether the contents of the adjacent fragments are associated: only when the contents of adjacent subtitle fragments have an association relationship is it unnecessary to insert an extra blank display, and when the contents of adjacent subtitle fragments are not associated, the effect of displaying blank is still retained, so as to ensure that effects such as TTML sentence-segmented display are not destroyed.
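Putting the judgments of Figure 14 together (steps S1404, S1405, S1408, S1409, S1410, S1411 and S1406), the overall decision could be sketched as follows; all names are illustrative assumptions rather than the application's implementation.

```cpp
#include <cstdint>
#include <string>

constexpr int64_t kPresetMs = 40;  // hypothetical hardware-limited interval

struct Style { int posX = 0, posY = 0; std::string font; int size = 0; };

static bool sameStyle(const Style& a, const Style& b) {
    return a.posX == b.posX && a.posY == b.posY && a.font == b.font && a.size == b.size;
}

// Associated when the first content does not end with sentence-final punctuation.
static bool associated(const std::string& first) {
    for (const std::string p : {"。", "？", "！", "……", ".", "?", "!"}) {
        if (first.size() >= p.size() &&
            first.compare(first.size() - p.size(), p.size(), p) == 0)
            return false;
    }
    return true;
}

struct Plan { bool insertBlank; int64_t secondDrawMs; };

// Decision flow of Figure 14 (steps S1404-S1411), sketched with assumed names.
Plan planTransition(const std::string& c1, const Style& s1, int64_t end1Ms,
                    const std::string& c2, const Style& s2, int64_t start2Ms) {
    if (start2Ms - end1Ms >= kPresetMs) return {true, start2Ms};   // S1406 + S1407
    if (c1 == c2) {                                                // S1408
        if (sameStyle(s1, s2)) return {false, end1Ms};             // S1409 -> S1410
        return {true, start2Ms};                                   // S1406
    }
    if (associated(c1)) return {false, end1Ms};                    // S1411 -> S1410
    return {true, start2Ms};                                       // S1406
}
```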
In some embodiments, "when the first display end time is reached, control the display to display the second content according to the second display characteristic" may be replaced by "when the second display start time is reached, control the display to display the second content according to the second display characteristic".
For example, the content of the first subtitle fragment is "Beijing" with a display period of 1000 ms-3000 ms, the content of the second subtitle fragment is "Shanghai" with a display period of 3001 ms-5000 ms, and the preset value is 40 ms. The difference = 3001 - 3000 = 1 ms. Since 1 ms < 40 ms, the content of the second subtitle fragment is displayed at 3001 ms. As shown in Figure 20, the subtitle display effect is: display "Beijing" → display "Shanghai".
Some embodiments of the present application provide a subtitle display method applicable to a display device, the display device including a display and a controller, the controller being configured to: obtain first content, a first display characteristic and a display end time of a first subtitle fragment; control the display to display the first content according to the first display characteristic; obtain second content, a second display characteristic and a display start time of a second subtitle fragment; calculate the difference between the display start time and the display end time; and, if the difference is less than a preset value, control the display to display the second content according to the second display characteristic when the display end time is reached. When the display time interval between two adjacent fragments is short, the blank content may not be displayed and the content of the latter fragment may be displayed directly, which avoids subtitle flickering and insufficiently smooth content, and improves user experience.
For convenience of explanation, the above description has been made in conjunction with specific implementations. However, the above exemplary discussion is not intended to be exhaustive or to limit the implementations to the specific forms disclosed above. Various modifications and variations can be obtained in light of the above teachings. The above implementations were selected and described in order to better explain the principles and practical applications, so that those skilled in the art can better use the described implementations as well as various modified implementations suited to the particular use contemplated.

Claims (10)

  1. A display device, comprising:
    a display;
    a controller configured to:
    obtain first content, a first display characteristic, a first display start time and a first display end time of a first subtitle fragment;
    when the first display start time is reached, control the display to display the first content according to the first display characteristic;
    obtain second content, a second display characteristic and a second display start time of a second subtitle fragment;
    calculate a difference between the second display start time and the first display end time;
    if the difference is less than a preset value, when the first display end time is reached, control the display to display the second content according to the second display characteristic.
  2. The display device according to claim 1, wherein, when executing controlling the display to display the second content according to the second display characteristic when the first display end time is reached, the controller is further configured to:
    determine whether the first content is the same as the second content;
    if the first content is the same as the second content, when the first display end time is reached, control the display to display the second content according to the second display characteristic.
  3. The display device according to claim 2, wherein the controller is configured to:
    if the first content is not the same as the second content, when the first display end time is reached, clear the first content, and control the display to display the second content according to the second display characteristic.
  4. The display device according to claim 2, wherein the controller is configured to:
    if the first content is not the same as the second content, determine whether the first content and the second content have an association relationship;
    if the first content and the second content have an association relationship, when the first display end time is reached, control the display to display the second content according to the second display characteristic;
    if the first content and the second content do not have an association relationship, when the first display end time is reached, clear the first content, and control the display to display the second content according to the second display characteristic.
  5. The display device according to claim 4, wherein, when executing determining whether the first content and the second content have an association relationship, the controller is further configured to:
    determine whether the first content ends with a sentence-final punctuation mark;
    if the first content ends with a sentence-final punctuation mark, determine that the first content and the second content do not have an association relationship;
    if the first content does not end with a sentence-final punctuation mark, determine that the first content and the second content have an association relationship.
  6. The display device according to claim 1, wherein, when executing controlling the display to display the second content according to the second display characteristic when the first display end time is reached, the controller is further configured to:
    determine whether the first content is the same as the second content;
    if the first content is the same as the second content, determine whether the first display characteristic is the same as the second display characteristic;
    if the first display characteristic is the same as the second display characteristic, when the first display end time is reached, control the display to display the second content according to the second display characteristic;
    if the first display characteristic is not the same as the second display characteristic, when the first display end time is reached, clear the first content, and control the display to display the second content according to the second display characteristic.
  7. The display device according to claim 1, wherein the controller is configured to:
    if the difference is greater than or equal to the preset value, when the first display end time is reached, clear the first content;
    when the second display start time is reached, control the display to display the second content according to the second display characteristic.
  8. A subtitle display method, comprising:
    obtaining first content, a first display characteristic, a first display start time and a first display end time of a first subtitle fragment;
    when the first display start time is reached, controlling the display to display the first content according to the first display characteristic;
    obtaining second content, a second display characteristic and a second display start time of a second subtitle fragment;
    calculating a difference between the second display start time and the first display end time;
    if the difference is less than a preset value, when the first display end time is reached, controlling the display to display the second content according to the second display characteristic.
  9. The method according to claim 8, wherein the step of controlling the display to display the second content according to the second display characteristic when the first display end time is reached comprises:
    determining whether the first content is the same as the second content;
    if the first content is the same as the second content, when the first display end time is reached, controlling the display to display the second content according to the second display characteristic.
  10. The method according to claim 8, further comprising:
    if the first content is not the same as the second content, when the first display end time is reached, clearing the first content, and controlling the display to display the second content according to the second display characteristic.
PCT/CN2022/140799 2022-09-08 2022-12-21 Display device and subtitle display method WO2024051030A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211100434.XA CN117714805A (zh) 2022-09-08 2022-09-08 Display device and subtitle display method
CN202211100434.X 2022-09-08

Publications (1)

Publication Number Publication Date
WO2024051030A1 true WO2024051030A1 (zh) 2024-03-14

Family

ID=90155779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/140799 WO2024051030A1 (zh) 2022-09-08 2022-12-21 Display device and subtitle display method

Country Status (2)

Country Link
CN (1) CN117714805A (zh)
WO (1) WO2024051030A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103370929A (zh) * 2011-02-15 2013-10-23 索尼公司 显示控制方法、记录介质、和显示控制装置
CN103988520A (zh) * 2011-12-16 2014-08-13 索尼公司 接收装置、控制接收装置的方法、分发装置、分发方法、程序以及分发系统
CN107005733A (zh) * 2014-12-19 2017-08-01 索尼公司 发送装置、发送方法、接收装置以及接收方法
CN107852517A (zh) * 2015-07-16 2018-03-27 索尼公司 传输装置、传输方法、接收装置和接收方法
CN108702530A (zh) * 2016-12-27 2018-10-23 索尼公司 发送装置、发送方法、接收装置及接收方法
JP2018186566A (ja) * 2018-07-27 2018-11-22 ソニー株式会社 受信装置、および送信装置、並びにデータ処理方法

Also Published As

Publication number Publication date
CN117714805A (zh) 2024-03-15

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22957988

Country of ref document: EP

Kind code of ref document: A1