CN116684674A - Subtitle display method and display equipment - Google Patents

Info

Publication number
CN116684674A
CN116684674A
Authority
CN
China
Prior art keywords
value
display
caption
color
subtitle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210064547.2A
Other languages
Chinese (zh)
Inventor
邵肖明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202210064547.2A
Publication of CN116684674A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4856End-user interface for client configuration for language selection, e.g. for the menu or subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Abstract

The application provides a subtitle display method and a display device. Region data of a subtitle display region is acquired, the region data comprising the value of at least one pixel in the region. The region data is analyzed to obtain the region color corresponding to the pixel values. A target region color that contrasts with the region color is calculated. The target region color is set as the color of the subtitle text, and the subtitle text is displayed in the user interface in that color. This solves the problem of a poor viewing experience caused by subtitle information with a black background persistently occluding the video picture at a fixed position while the user watches a video resource.

Description

Subtitle display method and display equipment
Technical Field
The present application relates to the field of display devices, and in particular, to a subtitle display method and a display device.
Background
A display device is a terminal device capable of outputting a specific display picture, such as a smart television, a mobile terminal, a smart advertising screen, or a projector. With the popularization of display devices and the continuous evolution of multimedia smart televisions, the content available to users is becoming increasingly rich. A display device can play local and networked multimedia video resources, and while a video resource is playing, subtitle information is generally displayed in the video picture, synchronized with the speech and dialogue of the people in it. Displaying subtitle information can also provide a language translation function, converting speech into the written language used by the user, making it easier for users to understand the content of the video resource.
However, conventional subtitle information is displayed sentence by sentence at a fixed position (e.g., the bottom) of the video picture, with white text on a black background. Because the colors of the video picture change frequently, continuously displaying black-background subtitle information at a fixed position can excessively occlude the video picture and degrade the user's viewing experience.
Disclosure of Invention
The application provides a subtitle display method and a subtitle display device, which solve the problem of a poor viewing experience caused by subtitle information with a black background persistently occluding the video picture at a fixed position while the user watches a video resource.
In a first aspect, the present application provides a display apparatus comprising:
a display configured to display a user interface, the user interface displaying a video picture that includes a subtitle display area, with subtitle text displayed in that area;
a controller coupled to the display, the controller configured to:
acquire region data of the subtitle display region, the region data comprising the value of at least one pixel in the subtitle display region;
analyze the region data of the subtitle display region to obtain the region color corresponding to the pixel values;
calculate a target region color corresponding to the region color, the target region color contrasting with the region color;
set the target region color as the color of the subtitle text, and display the subtitle text in the target region color in the user interface.
In a second aspect, the present application provides a subtitle display method, which specifically includes the following steps:
acquiring region data of the subtitle display region, wherein the region data comprises the value of at least one pixel in the subtitle display region;
analyzing the region data of the subtitle display region to obtain the region color corresponding to the pixel values;
calculating a target region color corresponding to the region color, wherein the target region color contrasts with the region color;
setting the target region color as the color of the subtitle text, and displaying the subtitle text in the target region color in the user interface.
According to the above technical solution, the application provides a subtitle display method and a display device. Region data of the subtitle display region is acquired, the region data comprising the value of at least one pixel in the region. The region data is analyzed to obtain the region color corresponding to the pixel values. A target region color that contrasts with the region color is calculated. The target region color is set as the color of the subtitle text, and the subtitle text is displayed in the user interface in that color. This solves the problem of a poor viewing experience caused by subtitle information with a black background persistently occluding the video picture at a fixed position while the user watches a video resource.
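The steps above (sample the pixels of the subtitle display region, derive a region color, compute a contrasting target color, apply it to the subtitle text) can be sketched as follows. The patent does not disclose its exact contrast formula; this sketch uses an averaged region color and a perceived-luminance threshold (ITU-R BT.601 weights) as illustrative assumptions.

```python
def average_region_color(pixels):
    """Average the sampled pixel values of the subtitle display region.

    `pixels` is a list of (r, g, b) tuples, each channel in 0-255.
    """
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)


def target_region_color(region_color):
    """Pick a subtitle text color that contrasts with the region color.

    Hypothetical rule: compute perceived luminance with BT.601 weights
    and choose white text over dark regions, black text over light ones.
    """
    r, g, b = region_color
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    return (255, 255, 255) if luminance < 128 else (0, 0, 0)


# Sample the region, derive its color, and pick the subtitle text color.
pixels = [(10, 20, 30), (12, 22, 28), (8, 18, 32)]
region = average_region_color(pixels)   # -> (10, 20, 30), a dark region
text_color = target_region_color(region)  # -> white text
```

In a real implementation the sampling, averaging, and threshold strategy would depend on the rendering pipeline; the point of the sketch is only that the text color is recomputed from the region underneath it rather than fixed to white-on-black.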
Drawings
In order to more clearly illustrate the technical solution of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 illustrates a usage scenario of a display device according to some embodiments;
fig. 2 shows a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of a display device 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in a display device 200 according to some embodiments;
FIG. 5 is a schematic diagram of an icon control interface display for an application in a display device according to some embodiments of the present application;
FIG. 6 is a video asset page according to some embodiments of the application;
FIG. 7 is a video asset detail page according to some embodiments of the application;
FIG. 8 is a schematic diagram of an interface for playing a target video according to some embodiments of the present application;
fig. 9 is a flowchart illustrating a subtitle display method according to some embodiments of the present application;
FIG. 10 is a flow chart of a region data acquisition method according to some embodiments of the application;
FIG. 11 is a flow chart illustrating the calculation of target region colors according to some embodiments of the application;
FIG. 12 is a flow chart illustrating the calculation of target region colors according to some embodiments of the application;
fig. 13 is a schematic diagram of an interface for playing a target video according to some embodiments of the present application.
Detailed Description
For the purposes of making the objects and embodiments of the present application more apparent, an exemplary embodiment of the present application will be described in detail below with reference to the accompanying drawings in which exemplary embodiments of the present application are illustrated, it being apparent that the exemplary embodiments described are only some, but not all, of the embodiments of the present application.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second, third and the like in the description, in the claims, and in the above figures are used to distinguish between similar objects or entities and do not necessarily describe a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
Fig. 1 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 1, the display device 200 is also in data communication with a server 400, and a user can operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller, and the communication between the remote controller and the display device includes at least one of infrared protocol communication or bluetooth protocol communication, and other short-range communication modes, and the display device 200 is controlled by a wireless or wired mode. The user may control the display apparatus 200 by inputting a user instruction through at least one of a key on a remote controller, a voice input, a control panel input, and the like.
In some embodiments, the smart device 300 may include any one of a mobile terminal, tablet, computer, notebook, AR/VR device, etc.
In some embodiments, the smart device 300 may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the smart device 300 and the display device may also be used for communication of data.
In some embodiments, the display device 200 may also be controlled in ways other than through the control apparatus 100 and the smart device 300. For example, a voice command from the user may be received directly through a voice-acquisition module configured inside the display device 200, or through a voice control apparatus configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
In some embodiments, software steps performed by one step execution body may migrate on demand to be performed on another step execution body in data communication therewith. For example, software steps executed by the server may migrate to be executed on demand on a display device in data communication therewith, and vice versa.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction of a user and convert the operation instruction into an instruction recognizable and responsive to the display device 200, and function as an interaction between the user and the display device 200.
In some embodiments, the communication interface 130 is configured to communicate with the outside, including at least one of a WIFI chip, a bluetooth module, NFC, or an alternative module.
In some embodiments, the user input/output interface 140 includes at least one of a microphone, a touchpad, a sensor, keys, or an alternative module.
Fig. 3 shows a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment.
In some embodiments, display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, memory, a power supply, a user interface.
In some embodiments, the controller comprises a central processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through nth input/output interfaces.
In some embodiments, the display 260 includes a display screen component for presenting a picture, and a driving component for driving an image display, for receiving an image signal from the controller output, for displaying video content, image content, and components of a menu manipulation interface, and a user manipulation UI interface, etc.
In some embodiments, the display 260 may be at least one of a liquid crystal display, an OLED display, a projection device, and a projection screen.
In some embodiments, the modem 210 receives broadcast television signals via wired or wireless reception and demodulates audio/video signals and ancillary data signals, such as EPG data signals, from among a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a WIFI module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector such as a camera, which may be used to collect external environmental scenes, user attributes, or user interaction gestures, or alternatively, the detector 230 includes a sound collector such as a microphone, or the like, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or other operable control. The operations related to the selected object are: displaying an operation of connecting to a hyperlink page, a document, an image, or the like, or executing an operation of a program corresponding to the icon.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first through nth input/output interfaces, a communication bus (Bus), and the like.
The CPU processor executes operating system and application program instructions stored in the memory, and runs various applications, data, and content according to the interactive instructions received from the outside, so as to ultimately display and play various audio and video content. The CPU processor may include a plurality of processors, such as one main processor and one or more sub-processors.
In some embodiments, a graphics processor is used to generate various graphical objects, such as: at least one of icons, operation menus, user input instruction display graphics, and the like. The graphic processor comprises an arithmetic unit, which is used for receiving various interactive instructions input by a user to operate and displaying various objects according to display attributes; the device also comprises a renderer for rendering various objects obtained based on the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some embodiments, the video processor is configured to receive an external video signal and perform at least one of decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image composition, and the like according to the standard codec protocol of the input signal, to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module is used for demultiplexing the input audio and video data stream. And the video decoding module is used for processing the demultiplexed video signal, including decoding, scaling and the like. And an image synthesis module, such as an image synthesizer, for performing superposition mixing processing on the graphic generator and the video image after the scaling processing according to the GUI signal input by the user or generated by the graphic generator, so as to generate an image signal for display. And the frame rate conversion module is used for converting the frame rate of the input video. And the display formatting module is used for converting the received frame rate into a video output signal and changing the video output signal to be in accordance with a display format, such as outputting RGB data signals.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode according to a standard codec protocol of an input signal, and at least one of noise reduction, digital-to-analog conversion, and amplification, to obtain a sound signal that can be played in the speaker.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include at least one of a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, the user interface 280 is an interface (e.g., physical keys on a display device body, or the like) that may be used to receive control inputs.
In some embodiments, a system of display devices may include a Kernel (Kernel), a command parser (shell), a file system, and an application program. The kernel, shell, and file system together form the basic operating system architecture that allows users to manage files, run programs, and use the system. After power-up, the kernel is started, the kernel space is activated, hardware is abstracted, hardware parameters are initialized, virtual memory, a scheduler, signal and inter-process communication (IPC) are operated and maintained. After the kernel is started, shell and user application programs are loaded again. The application program is compiled into machine code after being started to form a process.
Referring to FIG. 4, in some embodiments, the system is divided into four layers, from top to bottom: an application layer (referred to as the "application layer"), an application framework layer (Application Framework layer, referred to as the "framework layer"), an Android runtime (Android runtime) and system library layer (referred to as the "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. The application framework layer corresponds to a processing center that decides to let the applications in the application layer act. Through the API interface, the application program can access the resources in the system and acquire the services of the system in the execution.
As shown in fig. 4, the application framework layer in the embodiment of the present application includes a manager (manager), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) is used to interact with all activities that are running in the system; a Location Manager (Location Manager) is used to provide system services or applications with access to system Location services; a Package Manager (Package Manager) for retrieving various information about an application Package currently installed on the device; a notification manager (Notification Manager) for controlling the display and clearing of notification messages; a Window Manager (Window Manager) is used to manage icons, windows, toolbars, wallpaper, and desktop components on the user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the individual applications as well as the usual navigation rollback functions, such as controlling the exit, opening, fallback, etc. of the applications. The window manager is used for managing all window programs, such as obtaining the size of the display screen, judging whether a status bar exists or not, locking the screen, intercepting the screen, controlling the change of the display window (for example, reducing the display window to display, dithering display, distorting display, etc.), etc.
In some embodiments, the system runtime layer provides support for the upper layer, the framework layer, and when the framework layer is in use, the android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer contains at least one of the following drivers: audio drive, display drive, bluetooth drive, camera drive, WIFI drive, USB drive, HDMI drive, sensor drive (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), power drive, etc.
In some embodiments, the kernel layer further includes a power driver module for power management.
The above embodiments introduced the hardware/software architecture and functional implementation of the display device. The display device may, for example, obtain a video resource from an external signal source (such as a set-top box), from the network, or from local storage, and then load and play it. When playing a video resource, the display device generally plays subtitle information synchronously in addition to the video data. The subtitle information is text converted from the speech of sound-producing objects in the video resource, and it can be displayed or translated according to the user's language habits: for example, if a film's original audio is English and the display device serves Chinese users, the audio can be translated into simplified Chinese text, and the subtitle information is then displayed in simplified Chinese. In addition, converting the audio content into visual subtitle information also makes it easier for hearing-impaired users to understand the plot and content conveyed by the video resource.
In some embodiments, the operator may provide, in addition to the video resource, subtitle data of the video resource, where the subtitle data includes a plurality of pieces of subtitle information, and configures corresponding time information for each piece of subtitle information, where the time information is used to indicate a time node where the subtitle information is displayed, for example, a total playing duration of a certain video resource is 30 minutes, and the subtitle information 1 is configured to be displayed at a time node where a video playing progress is 50 seconds.
In some embodiments, each video resource may be associated with a play time axis whose length equals the total duration of the video. Display nodes of the pieces of subtitle information included in the video resource are marked on this axis, and at each marked node the IDs of the subtitle information to be displayed may be recorded, so that the display device knows which piece or pieces of subtitle information should be displayed at that node. Each marked node on the play time axis can map to one or more pieces of subtitle information; a one-to-many mapping between a node and subtitle information indicates that multiple objects produce sound at that moment. The same piece of subtitle information, however, cannot be mapped to multiple nodes.
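The play-time-axis mapping described above can be sketched as a small data structure: each marked node may carry several subtitle IDs (several speakers at once), while a given subtitle ID belongs to exactly one node. The class and field names here are illustrative assumptions, not the patent's implementation.

```python
from collections import defaultdict


class SubtitleTimeline:
    """Map display nodes (seconds on the play time axis) to subtitle IDs."""

    def __init__(self, total_duration_s):
        self.total_duration_s = total_duration_s
        self._nodes = defaultdict(list)   # node time -> [subtitle IDs]
        self._owner = {}                  # subtitle ID -> its single node

    def add(self, node_s, subtitle_id):
        """Mark a display node; a subtitle ID may be added only once."""
        if subtitle_id in self._owner:
            raise ValueError(f"{subtitle_id} is already mapped to a node")
        self._nodes[node_s].append(subtitle_id)
        self._owner[subtitle_id] = node_s

    def at(self, node_s):
        """Subtitle IDs to display when playback reaches this node."""
        return list(self._nodes.get(node_s, []))


# A 30-minute video: subtitle 1 is shown at the 50-second node, and two
# speakers overlap at the 120-second node (a one-to-many node mapping).
tl = SubtitleTimeline(total_duration_s=30 * 60)
tl.add(50, "subtitle-1")
tl.add(120, "subtitle-2")
tl.add(120, "subtitle-3")
```

With such a structure, the display device can look up the current playback position on the axis and render whichever subtitle entries are recorded at that node.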
In some embodiments, the display device receives the caption data synchronously while receiving the video data, and controls the caption display in the user interface according to the current time and the time information preset by the operator.
The synchronized display of subtitle information when the display device is loading video data is further described below in conjunction with fig. 5-8.
In some implementations, a user may open a corresponding application by touching an application icon on the user interface, or may open a corresponding folder by touching a folder icon on the user interface, such as opening an application folder that includes at least one application.
Fig. 5 is a schematic diagram illustrating an icon control interface of an application in a display device according to some embodiments of the present application. As shown in fig. 5, the application folder contains at least one icon control that an application can display in a display, such as: a live television application icon control, a video on demand application icon control, a media center application icon control, an application center icon control, a game application icon control, and the like.
In some embodiments, the live television application may provide live television via different signal sources. For example, a live television application may provide television signals using inputs from cable television, radio broadcast, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device.
In some embodiments, the video on demand application may provide video from different storage sources. Unlike live television applications, video on demand provides video playback from certain storage sources. For example, video on demand content may come from the server side of cloud storage or from local hard disk storage containing stored video programs.
In some embodiments, the media center application may provide various multimedia content playing applications. For example, a media center may be a different service than live television or video on demand, and a user may access various images or audio through a media center application.
In some embodiments, an application center may be provided to store various applications. A stored application may be a game, an application proper, or some other application that is associated with a computer system or other device but can be run on a smart television. The application center may obtain these applications from different sources and store them in local storage, and they are then run on the display device.
In some embodiments, the display device receives a user-entered selection of the video on demand application icon control and displays a video asset page. Fig. 6 is a video asset page according to some embodiments of the application, which may specifically be the page entered after the user inputs a selection operation by clicking the video on demand application icon control of fig. 5. As shown in fig. 6, the video asset page includes a navigation bar 600 and a content display area 610 located below the navigation bar, the content display area 610 including a plurality of video asset controls, such as "movie A", "movie B", and the like. The content displayed in the content display area 610 changes as the selected control in the navigation bar 600 changes. When the video resource page is displayed, the user can click any video resource control to input a video detail page display instruction for that video resource and trigger entry into the corresponding video detail page. Illustratively, when the user clicks "movie A" in fig. 6, the video detail page of "movie A" will be entered. For convenience of explanation, in the following embodiments the video asset clicked by the user is referred to as the target video asset. It should be noted that the user may also input the selection operation on a video resource control in other manners to trigger entry into the detail page of the target video. For example, the user may enter the video detail page corresponding to the target video resource by using a voice control function or a search function.
Fig. 7 is a detail page of a video resource according to some embodiments of the present application, specifically the detail page of "movie A" displayed after the user clicks "movie A" in fig. 6. As shown in fig. 7, the media asset detail page interface includes a number of display areas and other operational controls. Specifically, from top to bottom it includes a first content area, a second content area, a third content area and a fourth content area. The first content area is used for displaying the cover of the target video; the second content area is used for displaying the introduction of the target video, for example, the video name is "movie A", the video score is 9.2 points, the video type is military, and the like; the third content area is used for displaying the plot synopsis of the target video; the fourth content area is used for displaying other related media assets. The other operational controls include a play control, a pause control, a double-speed play control, a progress bar control, and the like. The operational controls are used by the display device to control the target video.
In some embodiments, the display device controls the display to play the target video "movie A" in response to user input of a selection operation on the play control. Referring to the example of fig. 8, the scene of the target video includes a dialogue between a plurality of persons. For example, the time node corresponding to fig. 8 is 19:30:31, at which the lead performer among the plurality of persons in the video picture is speaking, so caption text corresponding to the target video is displayed in a preset area, the caption text corresponding to the lead performer's speech, for example, "how do we have a length?". It can be seen that the subtitle text 810 is white, the background of the subtitle text is black (that is, the color of the preset area is black), and the video picture is green. For convenience of subsequent description, the color of the video picture is treated as the color of the caption display area, the caption display area being a partial area of the video picture. By default, the background color of the caption text is the same as the color of the caption display area and changes in real time; however, the background color of the caption text is not particularly limited and may also be transparent. Meanwhile, it should be noted that the background area of the caption text is the area overlapping the video picture, that is, the caption display area in the present application.
In this scene, subtitle information is displayed synchronously in the display of the display device. The caption information includes caption text, which is typically displayed with a white font 810 on a black background 800; these colors typically cannot be changed, so the caption looks monotonous when displayed. Further, while a caption with a black background is displayed, the display content of the video changes in real time and the color of the video picture is updated in real time, so the video picture is very easily blocked excessively, which affects the viewing mood and reduces the user experience. If a caption with a black background is displayed at a fixed position for too long, the hardware of the display device may suffer screen burn-in, which easily damages the display device, accelerates its aging, and shortens its service life.
Therefore, in order to reduce the blocking of the video picture by the caption and to prolong the service life of the display device during caption display, an embodiment of the present application provides a display device that can obtain the color of the overlapping portion of the caption and the video picture according to the caption display position and the length of the caption text. A color having a larger contrast with the video picture is calculated by a color algorithm, and the subtitle color is set to that color, so that the font color of the subtitle text has a larger contrast with the video picture and the user can read the subtitle content more clearly. The present application displays only the caption characters and no longer displays a caption background; since the color of the caption characters changes with the color of the video picture, blocking of the video picture is reduced and the viewing effect can be improved, while damage to the display device hardware is avoided.
When realizing that the color of caption characters changes and is displayed along with the color change of a video picture, the application provides display equipment, which comprises: and a display configured to display a user interface, the user interface displaying a video picture and subtitle text thereof, the video picture including a subtitle text display area in which the subtitle text is displayed. To facilitate the user's reading of the subtitle text, the present application extracts the contrasting color to the color of the subtitle text display area and sets the contrasting color to the color of the subtitle text. To reduce occlusion of the video picture by the subtitle text display area.
The following specifically describes a display device and a subtitle display method provided by the present application.
Fig. 9 is a flowchart illustrating a subtitle display method according to some embodiments of the present application. Referring to fig. 9, in the display device provided by an embodiment of the present application, the controller is configured to perform the following steps when executing the subtitle display method:
s1, acquiring area data of a caption display area, wherein the area data comprises at least one pixel value in the caption display area.
When changing the display color of the caption text, the color displayed in the current caption display area needs to be acquired, the color can be represented by pixel values, and different pixel values correspond to different colors. Therefore, the color of the caption display area, that is, the area data of the caption display area, needs to be acquired in the user interface, wherein the area data includes at least one pixel value in the caption display area.
In some embodiments, fig. 10 is a flow chart of a region data acquisition method according to some embodiments of the present application.
S11, acquiring caption information, wherein the caption information comprises font sizes, the number of caption texts and caption display reference positions corresponding to the caption texts.
In some embodiments, the display device receives the target video data and simultaneously receives the subtitle information corresponding to the target video data. The caption information includes the caption text, together with the font size, number of subtitle texts and caption display reference position corresponding to the caption text. The font size corresponding to the subtitle text is the length and width of each character in the text. The number of subtitle texts is the total number of characters in the text. The caption display reference position is used to characterize the display position of each character in the text; for example, the caption display reference position is a position 1 cm above the bottom edge of the video picture. In order not to affect the user's viewing of the target video, the subtitle display reference position is typically located in an edge region of the video frame.
In some embodiments, the subtitle information needs to be obtained from the code stream of the played target video file. Specifically, the subtitle data packet identifier (pid) corresponding to the subtitle information is obtained from the code stream of the target video file; the pid may be obtained by parsing the PMT table. Because caption information may exist in different display forms, such as Chinese captions, English captions or Korean captions, in order for the user interface in the display to present caption information the user can understand, the user may determine the target caption display form to be used when displaying the caption information. That is, the caption data packets corresponding to the caption information acquired by the controller include caption data packets corresponding to different caption display forms. Therefore, according to the target caption display form determined by the user, the target caption data packet corresponding to that form can be filtered out from the caption data packets corresponding to the caption information.
When filtering, the controller can pass the caption pid to the demultiplexer (demux) for splitting, that is, extract the caption data packets corresponding to the target caption display form to obtain the target caption data packets. The target caption data packets are used to display the caption information in the target caption display form.
When the controller acquires the caption data packets of the caption information, there may be blank packets, that is, padding packets; such packets need not be displayed and contain no pixel values. Therefore, after the target caption data packet is determined, it is also necessary to determine whether the caption data in the target caption data packet is valid data. Subtitle information is displayed sentence by sentence or page by page; when one caption sentence or page jumps to another, a blank sentence or blank page may appear, in which case the corresponding caption data packet contains no byte data, that is, it is a blank packet. If the subtitle information is to be displayed, byte data, i.e., pixel values, exist in the corresponding subtitle data packet, and that byte data is valid data.
In some embodiments, after receiving the target video data and synchronously receiving the subtitle information, the controller loads the subtitle information according to the playing progress of the target video in real time, so as to realize subsequent processing operation on the subtitle information.
And S12, sequentially calculating the positions corresponding to the subtitle display areas and the areas of the subtitle display areas based on the font size, the number of the subtitle texts and the subtitle display reference positions.
In some embodiments, the font size includes the length and width of each character in the subtitle text. When calculating the subtitle display area according to the font size, the number of subtitle texts and the subtitle display reference position, the controller is further configured to: multiply the length of a character by the number of subtitle texts to obtain the length of the caption display area, and multiply the length of the caption display area by the width of a character to obtain the area of the caption display area. For example, assume that a character has a length of 1 cm and a width of 2 cm, and the number of subtitle texts is 10; then the length of the caption display area is 10 cm and the area of the caption display area is 20 cm².
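The two multiplications above can be sketched as follows; the function name and units are illustrative:

```python
def caption_area(char_length_cm, char_width_cm, num_chars):
    """Length and area of the caption display area, per the rule above:
    length = character length x number of subtitle texts,
    area   = area length x character width."""
    length = char_length_cm * num_chars
    area = length * char_width_cm
    return length, area
```

For the worked example in the text, `caption_area(1, 2, 10)` gives a length of 10 cm and an area of 20 cm².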
In some embodiments, when sequentially calculating the position corresponding to the subtitle display area and the area of the subtitle display area based on the font size, the number of subtitle texts and the subtitle display reference position, the controller is further configured to: acquire the width of the target video picture, the direction of the width being consistent with the direction of the character length, and calculate the coordinates of the caption display area in an image coordinate system according to the width of the video picture and the caption display reference position, so as to determine the position corresponding to the caption display area. The video picture has a width and a height: the height of the video picture corresponds to the vertical (up-down) direction of the display, and the width of the video picture corresponds to the horizontal (left-right) direction of the display.
In particular implementations, an image coordinate system may be constructed in the target video frame and the coordinates of the edge points of the subtitle display area in that coordinate system may be calculated. Assume the image coordinate system takes the bottom-left corner of the target video frame as the origin, with the x-axis positive in the width direction and the y-axis positive in the height direction. The edge points of the subtitle display area are the corner points of the quadrilateral region, i.e., A1(x1, y1), A2(x1, y2), B1(x2, y1) and B2(x2, y2). A1 and A2 have the same abscissa value, and B1 and B2 have the same abscissa value; A1 and B1 have the same ordinate value, and A2 and B2 have the same ordinate value.
In some embodiments, from the length of the caption display area and the width of the target video frame: abscissa of A1 and A2 = width of target video picture / 2 - length of subtitle display area / 2; abscissa of B1 and B2 = width of target video picture / 2 + length of subtitle display area / 2. Illustratively, if the width of the target video picture is 20 cm and the length of the caption display area is 10 cm, the abscissa of A1 and A2 is 5 and the abscissa of B1 and B2 is 15.
In some embodiments, following the caption display reference position introduced above (illustratively, a position 1 cm above the bottom edge of the video frame), and with the width of the caption display area being 1 cm: the ordinate of A2 and B2 is 1, and the ordinate of A1 and B1 = caption display reference position + width of caption display area, i.e., the ordinate of A1 and B1 is 2. Thus, the area and position of the subtitle display area can be determined. Note that the position of the caption display area is a position within the target video picture, not a position within the entire user interface of the display. The area of the target video picture is smaller than or equal to the entire user interface, and the size and position of the target video picture can be adjusted by the user as needed.
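The corner-point formulas above can be sketched as follows; the function and parameter names are illustrative, with the origin at the bottom-left of the target video picture as assumed in the text:

```python
def caption_corners(frame_width, area_length, area_width, ref_y):
    """Corner coordinates A1, A2, B1, B2 of the caption display area.
    frame_width: width of the target video picture
    area_length / area_width: size of the caption display area
    ref_y: caption display reference position (height above the bottom edge)."""
    x1 = frame_width / 2 - area_length / 2   # shared abscissa of A1, A2
    x2 = frame_width / 2 + area_length / 2   # shared abscissa of B1, B2
    y2 = ref_y                               # shared ordinate of A2, B2
    y1 = ref_y + area_width                  # shared ordinate of A1, B1
    return {"A1": (x1, y1), "A2": (x1, y2), "B1": (x2, y1), "B2": (x2, y2)}
```

For the worked example (picture width 20 cm, area length 10 cm, area width 1 cm, reference position 1 cm), this yields abscissas 5 and 15 and ordinates 2 and 1.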
And S13, controlling the display to display the caption text on the video picture according to the position corresponding to the caption display area and the caption display area.
In some embodiments, the position corresponding to the caption display area and the area of the caption display area are determined, that is, caption information may be displayed in the target video picture so that the user views the caption synchronized with the target video.
S2, analyzing the area data of the subtitle display area to obtain the area color corresponding to the pixel value.
In some embodiments, the principle of the display displaying colors is the principle of additive color mixing using R (Red), G (Green), B (Blue). By emitting three electron beams with different intensities, the red, green and blue phosphorescent materials covered on the inner side of the screen emit light to generate color. This method of representation of colors is called RGB color space representation.
In other embodiments, parameters of the display colors are Hue (H, hue), saturation (S, saturation), brightness (V, value), respectively. The hue is measured by an angle, the value range is 0-360 degrees, the red is 0 degrees, the green is 120 degrees, and the blue is 240 degrees, calculated from the red in the anticlockwise direction. The saturation is in the range of 0 to 1.0, and the larger the value is, the more saturated the color is. The brightness ranges from 0 (black) to 255 (white). This method of representation of color is known as HSV color space representation.
In some embodiments, parsing the region data of the subtitle display region to obtain the region color corresponding to the pixel values includes: acquiring the number of all pixel points in the caption display area; accumulating the R, G and B values of each pixel point separately and dividing each sum by the number of pixel points to obtain the average R value, average G value and average B value; and determining the region color of the subtitle display region based on the average R, G and B values.
In specific implementation, the subtitle information may further include: the number of frames corresponding to the caption, the number of all pixels of the caption display area of the frame, and the pixel values of the respective pixels. And decomposing the pixel values of the pixel points to obtain corresponding R values, G values and B values.
For example, each subtitle may be displayed for some period of time, and the frames played during that period are the frames corresponding to the subtitle. If the caption is "HX" and is displayed continuously for 1 minute while the video plays 10 frames per minute, 10 frames will be displayed, and those 10 frames are the frames corresponding to the caption "HX". Then, after decomposing the pixel values of the pixel points, the R, G and B values of each pixel are accumulated separately and divided in turn by the number of pixels and the number of frames to obtain the average R, G and B values. Further, the region color of the caption display area is determined based on the average R, G and B values obtained over the pixel points in the caption display area.
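The averaging step can be sketched minimally for a single frame as follows; the function name is illustrative, and averaging over multiple frames would simply pool the pixels of all frames:

```python
def region_average_color(pixels):
    """Average R, G, B over all pixel points in the caption display area.
    `pixels` is a flat list of (r, g, b) tuples; each channel is summed
    separately and divided by the number of pixel points."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b
```

For instance, averaging the two pixels (120, 210, 100) and (130, 190, 110) gives the region color (125, 200, 105).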
S3, calculating a target area color corresponding to the area color, wherein the target area color has contrast with the area color.
In some embodiments, a contrasting color in the present application is a color that makes the subtitle text clearly distinguishable from the subtitle display area. For example, if the subtitle display area is white, red and black are colors having contrast with white. The target region color can be obtained through either of the following two calculation methods.
Two embodiments of calculating the color of the target area provided by the present application will be described below.
FIG. 11 is a flow chart illustrating the calculation of target region colors according to some embodiments of the application. Referring to fig. 11, in performing calculation of a target region color having a contrast with the region color, it is configured to:
s310, separating each pixel point in the color of the region to obtain an initial R value, an initial G value and an initial B value.
For example, each pixel point in the color of the area is separated, and the pixel value of any one pixel point is selected as follows: r value 120, G value 210, B value 100.
S311, converting the initial R value, G value and B value into binary form.
Illustratively, for the pixel value with R value 120, G value 210 and B value 100, the 8-bit binary forms are 01111000, 11010010 and 01100100 respectively.
S312, performing bitwise inversion on the initial R value, G value and B value based on the binary form to obtain a target R value, G value and B value, wherein the target R, G and B values are the bitwise complements of the initial R, G and B values. The target R, G and B values are combined to generate the target region color.
Illustratively, inverting each bit of the binary forms 01111000, 11010010 and 01100100 yields 10000111, 00101101 and 10011011. Converting these back from binary gives the target R value 135, G value 45 and B value 155 (that is, 255 minus each initial value). A target region color having contrast with the region color is thereby obtained. In this way, contrast between the color of the caption text and the color of the caption display area is realized, and the recognizability of the caption text is improved.
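The bitwise inversion of steps S310 to S312 amounts to subtracting each 8-bit channel value from 255; a minimal sketch (function name illustrative):

```python
def invert_rgb(r, g, b):
    """Bitwise complement of each 8-bit channel; for an 8-bit value v,
    flipping every bit gives 255 - v."""
    return 255 - r, 255 - g, 255 - b
```

For the worked example, `invert_rgb(120, 210, 100)` gives the target color (135, 45, 155).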
FIG. 12 is a flow chart illustrating the calculation of target region colors according to some embodiments of the application. Referring to fig. 12, in performing calculation of a target region color having a contrast with the region color, it is configured to:
s320, converting the RGB space of the regional color into HSV space to obtain an initial tone component, an initial saturation component and an initial brightness component.
S321, performing preset processing on the initial hue component while keeping the initial saturation component and luminance component unchanged, to generate a target hue component, saturation component and luminance component. Performing the preset processing on the initial hue component H includes: adding 180 degrees to the initial hue component to obtain the current hue component; determining whether the current hue component is greater than 360 degrees; if the current hue component is less than or equal to 360 degrees, determining it as the target hue component; if the current hue component is greater than 360 degrees, subtracting 360 degrees from it to obtain the target hue component.
S322, merging the target hue component, saturation component, and luminance component, and inversely converting into RGB space to generate a target region color.
By the conversion of the RGB space and the HSV space, the application determines the color of the target area by adjusting the tone of the color of the caption display area so as to enhance the contrast ratio of the color of the caption text and the color of the caption display area. Therefore, the contrast between the color of the caption text and the color of the caption display area is realized, and the identifiability of the caption text is improved.
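Steps S320 to S322 can be sketched with Python's standard `colorsys` module; note `colorsys` represents hue in [0, 1], so the 180-degree rotation becomes adding 0.5 modulo 1 (the function name is illustrative):

```python
import colorsys

def contrast_color_hsv(r, g, b):
    """Rotate the hue by 180 degrees with saturation and value unchanged,
    then convert back to RGB (steps S320-S322)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    h = (h + 0.5) % 1.0   # +180 degrees, wrapping past 360 degrees
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```

For example, pure red (255, 0, 0) maps to cyan (0, 255, 255). One design note: because saturation and value are preserved, this variant keeps the subtitle brightness matched to the region while opposing its hue, unlike the bitwise-inversion variant, which also inverts brightness.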
S4, setting the color of the target area as the color of the subtitle text, and displaying the subtitle text presenting the color of the target area in a user interface.
Fig. 13 is a schematic diagram of an interface for playing a target video according to some embodiments of the present application. Referring to fig. 13, the subtitle text is set to the target region color having contrast with the subtitle display area color, and the subtitle text presenting the target region color is displayed in the target video picture. For example, if the video picture is white, the subtitle display area is white and the target region color is black, which has contrast with white; as shown in the figure, the subtitle text is black and its background is transparent. The present application only limits the color of the subtitle text and does not specifically limit the color of the subtitle text background, which can be set according to actual conditions; for example, the background of the subtitle text may be transparent or the same color as the subtitle display area.
In some embodiments, the present application calculates the pixel value of a color having contrast with the color of the caption display area, and sets that pixel value as the subtitle pixel value in synchronization with the progress of the currently played target video, so that the pixel value of the subtitle text equals the calculated color pixel value and is displayed in the user interface. Therefore, the display device provided by the embodiment of the present application can adjust the color of the caption text based on real-time changes in the video picture of the playing target video, thereby changing the display color of the caption in real time. That is, the captions can present different colors when displayed, which avoids the caption color being similar to the video background or display content and avoids excessive blocking of the video picture, so that the user can view the caption content more clearly.
The UI drawings provided by the application are only schematic for describing the scheme, and do not represent actual product design, and the caption format and the display effect should be based on actual application and design.
In one embodiment, the present application further provides a subtitle display method, which is executed by a controller of a display device, and includes the following program steps:
Acquiring area data of a caption display area, wherein the area data includes at least one pixel point value in the caption display area. Parsing the region data of the subtitle display region to obtain the region color corresponding to the pixel values. Calculating a target region color corresponding to the region color, wherein the target region color has contrast with the region color. Setting the target region color as the color of the subtitle text, and displaying the subtitle text presenting the target region color in the user interface.
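The four steps above can be sketched end to end as follows, assuming the region pixels are already available as (r, g, b) tuples and using the bitwise-inversion variant for contrast; all names are illustrative:

```python
def subtitle_color(region_pixels):
    """Minimal sketch of the method's pipeline:
    parse region data -> region color -> contrasting target color."""
    n = len(region_pixels)
    # Parse: average the region pixels to get the region color
    avg = tuple(sum(p[i] for p in region_pixels) // n for i in range(3))
    # Calculate: take the 8-bit complement as the contrasting target color
    target = tuple(255 - c for c in avg)
    # The caller then sets this color on the subtitle text for display
    return target
```

A display device controller would call this each time the caption display area changes, so the subtitle color tracks the video picture in real time.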
The caption display method provided by the present application can obtain the color of the overlapping portion of the caption and the video picture according to the caption display position and the length of the caption text. A color having a larger contrast with the video picture is calculated by a color algorithm, and the subtitle color is set to that color, so that the font color of the subtitle text has a larger contrast with the video picture and the user can read the subtitle content more clearly. The present application displays only the caption characters and no longer displays a caption background; since the color of the caption characters changes with the color of the video picture, blocking of the video picture is reduced and the viewing effect can be improved, while damage to the display device hardware is avoided.
In some embodiments, calculating a target region color corresponding to the region color includes: separating each pixel point in the region color to obtain an initial R value, G value and B value; converting the initial R, G and B values to binary form; performing bitwise inversion on the initial R, G and B values based on the binary form to obtain target R, G and B values, wherein the target R, G and B values are the bitwise complements of the initial R, G and B values; and combining the target R, G and B values to generate the target region color.
In some embodiments, calculating a target region color corresponding to the region color includes: and converting the RGB space of the regional color into HSV space to obtain an initial tone component, an initial saturation component and an initial brightness component. And carrying out preset processing on the initial tone component, keeping the initial saturation component and the brightness component unchanged, and generating a target tone component, a saturation component and a brightness component. The target hue component, saturation component, and luminance component are combined and inverse-converted into RGB space to generate a target region color.
In some embodiments, performing the preset processing on the initial hue component H includes: 180 degrees are added on the basis of the initial tone component, and the current tone component is obtained. It is determined whether the current hue component is greater than 360 degrees. In the case where the current tone component is 360 degrees or less, the current tone component is determined as the target tone component.
In some embodiments, performing the preset processing on the initial hue component H includes: in the case where the current tone component is greater than 360 degrees, the current tone component is subtracted by 360 degrees to obtain the target tone component.
In some embodiments, parsing the region data of the subtitle display region to obtain a region color corresponding to the pixel value includes: the number of all pixel points in the caption display area is acquired. And respectively accumulating the R value, the G value and the B value corresponding to each pixel point value, and respectively dividing the R value, the G value and the B value by the number of the pixel points to obtain an R value average value, a G value average value and a B value average value. The region color of the subtitle display region is determined based on the R value average, the G value average, and the B value average.
In some embodiments, acquiring region data of a subtitle display region includes: and acquiring subtitle information, wherein the subtitle information comprises font sizes, the number of the subtitle texts and the subtitle display reference positions corresponding to the subtitle texts. And sequentially calculating the position corresponding to the caption display area and the caption display area based on the font size, the caption text quantity and the caption display reference position. And controlling the display to display the caption text on the video picture according to the position corresponding to the caption display area and the caption display area.
In some embodiments, the font size includes the length and width of each character in the subtitle text. Calculating the subtitle display region from the font size, the number of characters and the subtitle display reference position includes: multiplying the character length by the number of characters to obtain the length of the subtitle display region; and multiplying the length of the subtitle display region by the character width to obtain the area of the subtitle display region.
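The two multiplications above amount to the following (the function name is illustrative; all quantities are in pixels):

```python
def caption_region_size(char_length, char_width, num_chars):
    """Region length = character length x number of characters;
    region area = region length x character width."""
    region_length = char_length * num_chars
    region_area = region_length * char_width
    return (region_length, region_area)
```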
In some embodiments, sequentially calculating the subtitle display region and its corresponding position based on the font size, the number of characters and the subtitle display reference position includes: acquiring the width of the video picture, where the direction of the width is consistent with the direction of the character length; and calculating the coordinates of the subtitle display region in an image coordinate system from the width of the video picture and the subtitle display reference position, so as to determine the position corresponding to the subtitle display region.
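The patent does not give the exact coordinate formula. The sketch below assumes, for illustration only, that the region is centered horizontally in the video picture and anchored at the given reference row:

```python
def caption_region_position(video_width, region_length, reference_y):
    """Hypothetical placement rule: center the subtitle display
    region horizontally and place it at the reference row.
    Returns the (x, y) of the region's top-left corner in the
    image coordinate system."""
    x = (video_width - region_length) // 2
    return (x, reference_y)
```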
As can be seen from the above technical solution, the present application provides a subtitle display method and a display device. Region data of a subtitle display region is acquired, the region data including at least one pixel point value within the subtitle display region. The region data is parsed to obtain the region color corresponding to the pixel values. A target region color that contrasts with the region color is then calculated. The target region color is set as the color of the subtitle text, and the subtitle text in the target region color is displayed in the user interface. This solves the problem that, while a user watches a video resource, subtitle information on a black background continuously displayed at a fixed position of the video picture blocks the picture and degrades the viewing experience.
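Combining the pixel-averaging step with the channel-inversion variant of claim 2 below, the whole pipeline can be sketched as follows. The function names are illustrative, and claim 2's "inverse code" is read here as the ones' complement of each 8-bit channel (255 minus the value):

```python
def invert_channels(r, g, b):
    """Bitwise-complement each 8-bit channel: value ^ 0xFF == 255 - value."""
    return (r ^ 0xFF, g ^ 0xFF, b ^ 0xFF)

def subtitle_color_for_region(pixels):
    """End-to-end sketch: average the region's pixel values, then
    invert the average to obtain a contrasting subtitle color."""
    n = len(pixels)
    avg = tuple(sum(p[i] for p in pixels) // n for i in range(3))
    return invert_channels(*avg)
```

A dark region thus yields a light subtitle color and vice versa, e.g. a near-black region averages to (15, 15, 15) and produces (240, 240, 240).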
For identical and similar parts among the embodiments in this specification, reference may be made to one another; they are not described again here.
In a specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program. When executed, the program may perform some or all of the steps in each embodiment of the subtitle display method provided by the present application. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:
a display configured to display a user interface, wherein the user interface displays a video picture and caption text thereof, the video picture including a caption text display area in which the caption text is displayed;
a controller coupled to the display, the controller configured to:
acquiring region data of a caption display region, wherein the region data comprises at least one pixel point value in the caption display region;
analyzing the region data of the caption display region to obtain the region color corresponding to the pixel value;
calculating a target region color corresponding to the region color, wherein the target region color has contrast with the region color;
setting the target area color to the color of the subtitle text, and displaying the subtitle text presenting the target area color in the user interface.
2. The display device of claim 1, wherein the controller, when performing the calculation of the target region color corresponding to the region color, is further configured to:
separating each pixel point of the region color into an initial R value, an initial G value and an initial B value;
converting the initial R value, the initial G value and the initial B value into binary form;
performing inverse processing on the initial R value, the initial G value and the initial B value based on the binary form to obtain a target R value, a target G value and a target B value, wherein the target R value, G value and B value are the bitwise complements of the initial R value, the initial G value and the initial B value, respectively;
and combining the target R value, G value and B value to generate the target region color.
3. The display device of claim 1, wherein the controller, when performing the calculation of the target region color corresponding to the region color, is further configured to:
converting the region color from RGB space to HSV space to obtain an initial hue component, an initial saturation component and an initial brightness component;
performing preset processing on the initial hue component while keeping the initial saturation component and the initial brightness component unchanged, to generate a target hue component, saturation component and brightness component;
and combining the target hue component, saturation component and brightness component and converting them back into RGB space to generate the target region color.
4. A display device according to claim 3, wherein the controller, when performing the preset processing of the initial hue component H, is further configured to:
adding 180 degrees to the initial hue component to obtain a current hue component;
judging whether the current hue component is greater than 360 degrees;
and determining the current hue component as the target hue component in the case that the current hue component is 360 degrees or less.
5. The display device according to claim 4, wherein the controller, when performing the preset processing of the initial hue component H, is further configured to:
and subtracting 360 degrees from the current hue component to obtain the target hue component in the case that the current hue component is greater than 360 degrees.
6. The display device according to claim 1, wherein the controller, when performing parsing of the region data of the subtitle display region to obtain a region color corresponding to the pixel value, is further configured to:
acquiring the number of all pixel points in the subtitle display area;
accumulating the R value, the G value and the B value of each pixel point separately, and dividing each sum by the number of pixel points to obtain an average R value, an average G value and an average B value;
and determining the region color of the subtitle display area based on the average R value, average G value and average B value.
7. The display device according to claim 1, wherein the controller, when executing the acquisition of the area data of the subtitle display area, is further configured to:
acquiring subtitle information, wherein the subtitle information includes the font size, the number of characters in the subtitle text, and the subtitle display reference position corresponding to the subtitle text;
sequentially calculating the subtitle display area and its corresponding position based on the font size, the number of characters and the subtitle display reference position;
and controlling the display to display the subtitle text on the video picture according to the subtitle display area and its corresponding position.
8. The display device of claim 7, wherein the font size includes the length and width of each character in the subtitle text; the controller, when performing the calculation of the subtitle display area according to the font size, the number of characters and the subtitle display reference position, is further configured to:
multiplying the character length by the number of characters to obtain the length of the subtitle display area;
and multiplying the length of the subtitle display area by the character width to obtain the area of the subtitle display area.
9. The display device according to claim 7, wherein the controller, when performing the sequential calculation of the subtitle display area and its corresponding position based on the font size, the number of characters and the subtitle display reference position, is further configured to:
acquiring the width of the video picture, wherein the direction corresponding to the width is consistent with the direction corresponding to the character length;
and calculating the coordinates of the subtitle display area in an image coordinate system according to the width of the video picture and the subtitle display reference position, so as to determine the position corresponding to the subtitle display area.
10. A subtitle display method, applied to a display device, characterized in that the method specifically comprises the following steps:
acquiring region data of a caption display region, wherein the region data comprises at least one pixel point value in the caption display region;
analyzing the region data of the caption display region to obtain the region color corresponding to the pixel value;
calculating a target region color corresponding to the region color, wherein the target region color has contrast with the region color;
setting the target area color as the color of a subtitle text, and displaying the subtitle text presenting the target area color in a user interface.
CN202210064547.2A 2022-01-20 2022-01-20 Subtitle display method and display equipment Pending CN116684674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210064547.2A CN116684674A (en) 2022-01-20 2022-01-20 Subtitle display method and display equipment


Publications (1)

Publication Number Publication Date
CN116684674A true CN116684674A (en) 2023-09-01

Family

ID=87777455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210064547.2A Pending CN116684674A (en) 2022-01-20 2022-01-20 Subtitle display method and display equipment

Country Status (1)

Country Link
CN (1) CN116684674A (en)

Similar Documents

Publication Publication Date Title
CN114302190B (en) Display equipment and image quality adjusting method
CN113630655B (en) Method for changing color of peripheral equipment along with picture color and display equipment
CN112580302B (en) Subtitle correction method and display equipment
CN112118468A (en) Method for changing color of peripheral equipment along with color of picture and display equipment
CN113014939A (en) Display device and playing method
CN112601117A (en) Display device and content presentation method
CN112055245B (en) Color subtitle realization method and display device
CN113111214A (en) Display method and display equipment for playing records
CN112752156A (en) Subtitle adjusting method and display device
CN112580625A (en) Display device and image content identification method
CN111954043A (en) Information bar display method and display equipment
CN112087671A (en) Display method and display equipment for control prompt information of input method control
CN113453069B (en) Display device and thumbnail generation method
CN112911381B (en) Display device, mode adjustment method, device and medium
CN113992960A (en) Subtitle previewing method on display device and display device
CN111787350B (en) Display device and screenshot method in video call
CN116684674A (en) Subtitle display method and display equipment
CN112363683A (en) Method for supporting multi-layer display of webpage application and display equipment
CN112199560A (en) Setting item searching method and display device
CN113766164B (en) Display equipment and signal source interface display method
CN113436564B (en) EPOS display method and display equipment
US11960674B2 (en) Display method and display apparatus for operation prompt information of input control
CN114415864B (en) Touch area determining method and display device
CN115150667B (en) Display device and advertisement playing method
CN113596563B (en) Background color display method and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination