CN110351605B - Subtitle processing method and device - Google Patents

Subtitle processing method and device

Info

Publication number
CN110351605B
CN110351605B (application CN201910755712.7A / CN201910755712A; published as CN110351605A, granted as CN110351605B)
Authority
CN
China
Prior art keywords
subtitle
size
caption
target
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910755712.7A
Other languages
Chinese (zh)
Other versions
CN110351605A (en)
Inventor
吴青
柏效净
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vidaa Netherlands International Holdings BV
Original Assignee
Hisense Electronic Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Electronic Technology Shenzhen Co ltd filed Critical Hisense Electronic Technology Shenzhen Co ltd
Priority to CN201910755712.7A priority Critical patent/CN110351605B/en
Publication of CN110351605A publication Critical patent/CN110351605A/en
Application granted granted Critical
Publication of CN110351605B publication Critical patent/CN110351605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278Subtitling

Abstract

The invention relates to a subtitle processing method and device, belonging to the technical field of displays. The method comprises the following steps: acquiring a subtitle size threshold; determining whether the size of the scaled subtitle in a first direction is smaller than the subtitle size threshold, wherein the first direction is any direction; and, when the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, prohibiting the subtitle from being displayed. The invention avoids the waste of resources caused by a terminal displaying invalid subtitles.

Description

Subtitle processing method and device
Technical Field
The present invention relates to the field of display technologies, and in particular, to a method and an apparatus for processing subtitles.
Background
With the development of technology, terminals supporting a subtitle function have become increasingly popular. The subtitle function means that, while the terminal plays a video in a video window, the size of the subtitles changes in direct proportion to the size of the video window. For example, when the video window is maximized (i.e., displayed full screen), the subtitles may be correspondingly enlarged to the maximum size at which they can be displayed; when the video window is reduced (displayed in a small window), the subtitles may be correspondingly reduced to the minimum size at which they can be displayed.
However, when the video window is reduced past a certain size, the subtitles in the video played in the window are reduced correspondingly and may no longer be recognizable to the user. Such unrecognizable subtitles are invalid, and the terminal ultimately wastes resources by displaying them.
Disclosure of Invention
The invention provides a subtitle processing method and device, which can solve the problem of resource waste caused by the display of invalid subtitles by a terminal. The technical scheme is as follows:
in a first aspect, a method for processing subtitles is provided, and is applied to a terminal, and the method includes:
acquiring a subtitle size threshold;
determining whether the size of the scaled subtitle in a first direction is smaller than the subtitle size threshold, wherein the first direction is any direction;
and when the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, prohibiting the subtitle from being displayed.
In a second aspect, a subtitle processing apparatus is provided, which is applied to a terminal and includes modules for executing the subtitle processing method according to any one of the above first aspects.
In a third aspect, a subtitle processing apparatus is provided, including:
a processor;
a memory for storing executable instructions of the processor;
when the processor executes the executable instructions, the subtitle processing method according to any one of the first aspect may be implemented.
In a fourth aspect, a readable storage medium having instructions stored therein is provided;
when the instructions are executed on a processing component, the processing component is caused to execute the subtitle processing method according to any one of the first aspect.
The technical scheme provided by the invention can have the following beneficial effects:
according to the subtitle processing method and device provided by the embodiment of the invention, whether the size of the zoomed subtitle in the first direction is smaller than the subtitle size threshold value or not can be judged, so that the subtitle is forbidden to be displayed when the size of the zoomed subtitle in the first direction is smaller than the subtitle size threshold value. Therefore, when the video window is reduced to a certain size, which results in the scaled subtitles being too small in size in the first direction, the subtitles can be prohibited from being displayed, so as to avoid the resource waste of the terminal caused by displaying invalid subtitles.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to illustrate the embodiments of the present invention more clearly, the drawings that are needed in the description of the embodiments will be briefly described below, it being apparent that the drawings in the following description are only some embodiments of the invention, and that other drawings may be derived from those drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an interface where a video window is located in a display area of a terminal;
FIG. 2 is a schematic view of an interface where a video window is located in a display area of another terminal;
fig. 3 is a schematic flowchart of a subtitle processing method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another subtitle processing method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of obtaining a subtitle size threshold according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an interface for a user to input target vision data according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an interface for a user to input a target viewing distance according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a method for calculating a viewing coefficient according to an embodiment of the present invention;
fig. 9 is a block diagram of a subtitle processing apparatus according to an embodiment of the present invention;
FIG. 10 is a block diagram of an acquisition module provided by embodiments of the present invention;
fig. 11 is a block diagram of another subtitle processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a subtitle processing apparatus according to an embodiment of the present invention.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A terminal such as a television or a set-top box supporting Digital Video Broadcasting (DVB) standard generally supports a subtitle function. When a user plays a video through an Electronic Program Guide (EPG) or other video window, the size of the subtitles in the video window may be scaled according to the current position and size of the video displayed in the video window. As shown in fig. 1, in the display area of the terminal 01 (i.e. the display area of the display screen of the terminal 01), when the video window a is maximized (i.e. the video window a is displayed in full screen), the size of the subtitle S may be correspondingly enlarged to the maximum size of the subtitle S that can be displayed. As shown in fig. 2, when the video window a is reduced to a corner of the display area of the terminal 01 in the same display area of the terminal 01 as in fig. 1, the size of the subtitles S is also reduced accordingly.
However, when the video window is reduced past a certain size, the subtitles in the video played in the window are reduced correspondingly and may become too small for the user to recognize, thereby producing invalid subtitles and ultimately causing the terminal to waste resources by displaying them.
Referring to fig. 3, a flowchart of a subtitle processing method according to an embodiment of the present invention is shown, which may solve the above problem to some extent. The subtitle processing method may be applied to a terminal, which may include the following steps.
Step 101, acquiring a subtitle size threshold.
Step 102, determining whether the size of the scaled subtitle in a first direction is smaller than the subtitle size threshold, wherein the first direction is any direction.
Step 103, when the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, prohibiting the subtitle from being displayed.
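The three steps above reduce to a single comparison; a minimal sketch (the function name is illustrative, not from the patent):

```python
def should_display_subtitle(scaled_size_mm: float, threshold_mm: float) -> bool:
    """Steps 102-103: display the subtitle only if its scaled size in the
    first direction reaches the subtitle size threshold obtained in step 101
    (either user-entered or computed by the terminal)."""
    return scaled_size_mm >= threshold_mm
```

For instance, with a 6.09 mm threshold, a subtitle scaled down to 5.65 mm in the first direction would be suppressed.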
In summary, the subtitle processing method provided by the embodiment of the present invention determines whether the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, and prohibits displaying the subtitle when it is. Therefore, when the video window is reduced to a size at which the scaled subtitle becomes too small in the first direction, the subtitle can be prohibited from being displayed, avoiding the waste of terminal resources on displaying invalid subtitles.
Referring to fig. 4, another subtitle processing method according to an embodiment of the present invention is shown. As shown in fig. 4, the subtitle processing method may be applied to a terminal, which may include the following steps.
Step 201, acquiring a subtitle size threshold.
The subtitle size threshold may reflect the minimum length in a second direction at which objects in a set of objects can be seen with a definition greater than or equal to a definition threshold when the user views the set with his or her eyesight from a certain distance. Illustratively, the second direction may be any direction. When the second direction is the direction of gravity, the subtitle size threshold may be regarded as the line height (recommend_height) of subtitles that the user can see clearly at that distance. The subtitle size threshold may be stored in the terminal as data directly input by the user, in which case the terminal can obtain it directly. Alternatively, the subtitle size threshold may be determined by the terminal through calculation.
Step 202, determining the first pixel number corresponding to the subtitle size threshold.
The subtitle size threshold is equal to the product of the first pixel number and the size of a terminal pixel in the first direction. That is, the first pixel number is the ratio of the subtitle size threshold to the size of a terminal pixel in the first direction. The first direction may be any direction, for example the row direction or the column direction of the pixels in the terminal.
Alternatively, the size of a terminal pixel in the first direction may be determined from the size of the terminal in the first direction and the resolution of the terminal. For example, assume the terminal is a 70-inch 4K (a resolution standard) television. The width of the display area of this terminal is 1550.6 mm, the height is 872.2 mm, and the resolution is 4096 × 2160. Assuming the first direction is the column direction of the pixels in the terminal (i.e., the direction of the terminal's height), the size of the terminal in the first direction is 872.2 mm, and the size of a terminal pixel in the first direction is 872.2 ÷ 2160 ≈ 0.4038 mm. In a first case, if the subtitle size threshold is 6.09 mm, the first pixel number corresponding to it is 6.09 ÷ 0.4038 ≈ 15. In a second case, if the subtitle size threshold is 12.18 mm, the corresponding first pixel number is 12.18 ÷ 0.4038 ≈ 30.
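The arithmetic of step 202 can be reproduced directly; a sketch under the 70-inch 4K assumptions above (function names are illustrative):

```python
def pixel_size_mm(panel_extent_mm: float, pixels: int) -> float:
    # Size of one terminal pixel in the first direction: panel extent in
    # that direction divided by the pixel count along it.
    return panel_extent_mm / pixels

def first_pixel_count(threshold_mm: float, pixel_mm: float) -> int:
    # Ratio of the subtitle size threshold to the pixel size, rounded to a
    # whole number of pixels as in the worked examples.
    return round(threshold_mm / pixel_mm)

px = pixel_size_mm(872.2, 2160)        # ≈ 0.4038 mm
n_first = first_pixel_count(6.09, px)  # 15 in the first case
```

The second case (12.18 mm threshold) yields 30 pixels with the same helper.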
And step 203, acquiring a second pixel number corresponding to the size of the scaled caption in the first direction.
Wherein the scaled caption has a size in the first direction equal to the product of the number of the second pixels and the size of the terminal pixel in the first direction. That is, the second number of pixels is a ratio of a size of the scaled subtitle in the first direction to a size of the terminal pixel in the first direction.
Continuing the example in step 202 above: in the first case, if the scaled subtitle has a size of 5.6532 mm in the first direction, the second pixel number is 5.6532 ÷ 0.4038 = 14. In the second case, if the size of the scaled subtitle in the first direction is 11.7102 mm, the second pixel number is 11.7102 ÷ 0.4038 = 29.
It should be noted that when the terminal measures the size of the subtitle in the first direction in pixels, it may obtain the second pixel number directly from that measurement, without computing it from the scaled subtitle's size in the first direction. This reduces the amount of computation at the terminal.
And step 204, judging whether the size of the scaled caption in the first direction is smaller than a caption size threshold value or not according to the first pixel number and the second pixel number. If the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, go to step 205; if the size of the scaled subtitle in the first direction is greater than or equal to the subtitle size threshold, step 206 is performed.
The terminal may compare the first pixel number with the second pixel number. When the second pixel number is smaller than the first pixel number, the size of the scaled subtitle in the first direction is determined to be smaller than the subtitle size threshold; when the second pixel number is greater than or equal to the first pixel number, the size of the scaled subtitle in the first direction is determined to be greater than or equal to the subtitle size threshold. Continuing the example in step 203 above: in the first case, since the second pixel number (14) is smaller than the first pixel number (15), the size of the scaled subtitle in the first direction is determined to be smaller than the subtitle size threshold. In the second case, since the second pixel number (29) is smaller than the first pixel number (30), the same determination is made.
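Steps 203 and 204 then reduce to an integer comparison; a minimal sketch covering both cases from the text (names are illustrative):

```python
def second_pixel_count(scaled_size_mm: float, pixel_mm: float) -> int:
    # Step 203: ratio of the scaled subtitle's size in the first direction
    # to the size of a terminal pixel in that direction.
    return round(scaled_size_mm / pixel_mm)

def below_threshold(n_second: int, n_first: int) -> bool:
    # Step 204: the scaled subtitle is too small when the second pixel
    # number is smaller than the first pixel number.
    return n_second < n_first

pixel_mm = 0.4038  # from the 70-inch 4K example
case1 = below_threshold(second_pixel_count(5.6532, pixel_mm), 15)   # True
case2 = below_threshold(second_pixel_count(11.7102, pixel_mm), 30)  # True
```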
Step 205, prohibiting displaying the subtitles.
If the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, the scaled subtitle is too small and the user cannot recognize it. By prohibiting such subtitles from being displayed, the terminal avoids the waste of resources caused by displaying invalid subtitles.
And step 206, displaying the zoomed subtitles.
If the size of the scaled subtitle in the first direction is greater than or equal to the subtitle size threshold, the user can recognize the scaled subtitle, and it can be displayed normally, providing the subtitle function to the user as usual.
It should be noted that steps 202 to 204 above are one implementation by which the terminal determines whether the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold. In embodiments of the present invention, the terminal may also make this determination in other ways, executing step 205 when the size is smaller than the threshold and step 206 when it is greater than or equal to the threshold. For example, a direct comparison may be used, i.e., directly comparing the size of the scaled subtitle in the first direction with the subtitle size threshold. When the terminal uses direct comparison, steps 202 to 204 need not be performed, and step 205 or step 206 is executed based on the result of the direct comparison.
In summary, the subtitle processing method provided by the embodiment of the present invention, taking into account factors such as the user's eyesight data and viewing distance, determines whether the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, and prohibits displaying the subtitle when it is. Therefore, when the video window is reduced to a size at which the scaled subtitle becomes too small in the first direction, the subtitle can be prohibited from being displayed, avoiding the waste of terminal resources on displaying invalid subtitles. Furthermore, prohibiting invalid subtitles keeps them from blocking the video, improving the user experience and enhancing the viewing effect.
In this embodiment of the present invention, when the caption size threshold needs to be calculated by the terminal, as shown in fig. 5, the process of obtaining the caption size threshold in step 201 may include:
step 2011, receiving the target vision data and the target viewing distance.
The target vision data may be the eyesight data of a user of the terminal, and the target viewing distance may be the distance from the user's eyes to the terminal. The target vision data may be in five-point notation, with a value range of [4.0, 5.3]; alternatively, it may be in decimal notation, with a value range of [0.1, 2.0].
In the embodiment of the invention, the terminal can directly receive the target vision data and the target viewing distance input by the user. Or, the terminal may perform a vision test on the user, so as to obtain target vision data and a target viewing distance according to a result of the vision test.
For example, fig. 6 shows a schematic diagram of a terminal interface for the user to input target vision data according to an embodiment of the present invention. The target vision data interface Z1 in fig. 6 may include a "vision" label, selection keys X each corresponding to one of a plurality of vision data values reflecting different eyesights, and a screen scroll bar J. In step 2011, the user may input their vision data by selecting the corresponding selection key X in the interface Z1 displayed on the terminal. When the user's vision data is not shown in the current interface Z1, the user can reveal hidden vision data by dragging the scroll bar J. The terminal's default target vision data may be 5.0 (i.e., 1.0).
The selection key X may be labeled only with vision data in five-point notation, only with vision data in decimal notation, or with both. When the selection keys X carry both notations, the labels accommodate different users' habits of recording vision data, improving the user experience.
As another example, as shown in fig. 7, it shows an interface schematic diagram of a terminal provided by an embodiment of the present invention, where a user inputs a target viewing distance. The interface Z2 for the target viewing distance in fig. 7 may include an input box Y for the target viewing distance, and a prompt message "visual distance" corresponding to the input box Y for the target viewing distance. The input box Y of the target viewing distance may include a first input bit, a decimal point, a second input bit, and a unit (e.g., m) of the target viewing distance, which are sequentially arranged. It is to be understood that the user may also input the target viewing distance of the user in the input box Y of the target viewing distance interface Z2 displayed by the terminal in step 2011, and the target viewing distance may be a decimal. Wherein, the default target viewing distance of the terminal may be the farthest viewing distance stored in the terminal.
Step 2012, in the correspondence between the vision data and the viewing coefficient, a target viewing coefficient corresponding to the target vision data is queried.
Wherein the vision data is positively correlated with vision and negatively correlated with the viewing coefficient.
Optionally, the viewing coefficient corresponding to a vision datum in the correspondence may be: the minimum length in the second direction of an object in the object set whose definition is greater than or equal to the definition threshold when a human eye with the eyesight reflected by that vision datum views the set. The unit of the viewing coefficient may be mm. The object set may include objects whose distance from the human eye is greater than a distance threshold; for example, a plurality of optotypes (which may be E-shaped) in the logarithmic visual acuity chart. The distance threshold may be 0.5 m, 1 m, or 2 m.
Since the subtitle size threshold needs to be determined subsequently based on the viewing factor, the second direction needs to be consistent with the first direction. Illustratively, when the first direction is a row direction of pixels in the terminal, the second direction is a horizontal direction. Alternatively, when the first direction is a column direction of the pixels in the terminal, the second direction is a gravity direction.
Further optionally, the terminal may store a subtitle adjustment table in which a correspondence between the eyesight data and the viewing coefficient is recorded. The terminal may query a target viewing coefficient corresponding to the target vision data in a subtitle adjustment table for recording the correspondence.
For example, assume that the set of objects are a plurality of E-shaped optotypes in a logarithmic visual chart, and the second direction is a direction of gravity. Accordingly, the minimum length of the object in the second direction may be the minimum height of the object. The subtitle adjustment table may be as shown in table 1 below.
TABLE 1
Eyesight data Viewing coefficient
4.0 14.54
4.1 11.55
4.2 9.18
4.3 7.29
4.4 5.79
4.5 4.6
4.6 3.65
4.7 2.9
4.8 2.3
4.9 1.83
5.0 1.45
5.1 1.16
5.2 0.92
5.3 0.73
As shown in Table 1, when the eyesight data is 4.0, the corresponding viewing coefficient may be 14.54; when the eyesight data is 4.1, the corresponding viewing coefficient may be 11.55; and when the eyesight data is 4.2, the corresponding viewing coefficient may be 9.18.
It should be noted that the vision data may be in the form of a five point recording or in the form of a decimal recording. Then, the subtitle adjustment table may also record vision data in two forms. The above table 1 is described only by taking an example in which the subtitle adjustment table records visual data in the form of five-point records.
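For step 2012, the subtitle adjustment table lends itself to a simple lookup; a sketch holding the Table 1 values, keyed by five-point notation (the names are ours):

```python
# Viewing coefficient (mm) per eyesight datum, transcribed from Table 1.
VIEWING_COEFFICIENT_MM = {
    4.0: 14.54, 4.1: 11.55, 4.2: 9.18, 4.3: 7.29, 4.4: 5.79,
    4.5: 4.6,  4.6: 3.65,  4.7: 2.9,  4.8: 2.3,  4.9: 1.83,
    5.0: 1.45, 5.1: 1.16, 5.2: 0.92, 5.3: 0.73,
}

def target_viewing_coefficient(vision_data: float) -> float:
    # Vision data is positively correlated with eyesight and negatively
    # correlated with the viewing coefficient, as the table shows.
    return VIEWING_COEFFICIENT_MM[vision_data]
```

A real terminal would also accept decimal-notation keys, as Table 2 suggests.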
Fig. 8 is a schematic diagram illustrating the principle of calculating a viewing coefficient according to an embodiment of the present invention. With the object's definition at the definition threshold, the minimum length of the object in the second direction is proportional to the viewing distance. When the object set comprises a plurality of optotypes in the logarithmic visual acuity chart, the objects are usually 5 m from the human eye, and the minimum length in the second direction of an object whose definition is at or above the definition threshold is the optotype side length (since the length and width of optotypes in the logarithmic chart are usually equal, this minimum length can be called the optotype side length). Therefore, when the distance threshold is 1 m, the viewing coefficient is typically the optotype side length divided by 5. For example, if the optotype side length is 7.27 mm, the viewing coefficient at a 1 m distance threshold is 1.45 mm.
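The fig. 8 relation can be checked numerically; a sketch assuming the standard 5 m chart distance (function and parameter names are ours):

```python
def viewing_coefficient_mm(optotype_side_mm: float,
                           chart_distance_m: float = 5.0,
                           distance_threshold_m: float = 1.0) -> float:
    # The minimum legible length scales in proportion to distance, so the
    # coefficient at the distance threshold is side_length * (a / 5 m).
    return optotype_side_mm * distance_threshold_m / chart_distance_m

w = round(viewing_coefficient_mm(7.27), 2)  # 1.45, matching Table 1 at 5.0 vision
```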
Optionally, the subtitle adjustment table may also include the length of the visual target edge. For example, as shown in table 2, table 2 explains the case where the visual acuity data includes visual acuity data in the form of five-point records and visual acuity data in the form of decimal records.
TABLE 2
Eyesight data Side length of visual target Viewing coefficient
4.0(0.1) 72.72 14.54
4.1(0.12) 57.76 11.55
4.2(0.15) 45.88 9.18
4.3(0.2) 36.45 7.29
4.4(0.25) 28.95 5.79
4.5(0.3) 23.00 4.6
4.6(0.4) 18.27 3.65
4.7(0.5) 14.51 2.9
4.8(0.6) 11.53 2.3
4.9(0.8) 9.16 1.83
5.0(1.0) 7.27 1.45
5.1(1.2) 5.78 1.16
5.2(1.5) 4.59 0.92
5.3(2.0) 3.64 0.73
As shown in Table 2, when the visual acuity data is 4.0 (0.1), the corresponding optotype side length is 72.72 and the corresponding viewing coefficient may be 14.54. When the visual acuity data is 4.1 (0.12), the corresponding optotype side length is 57.76 and the viewing coefficient may be 11.55. When the visual acuity data is 4.2 (0.15), the corresponding optotype side length is 45.88 and the viewing coefficient may be 9.18.
And 2013, determining a subtitle size threshold according to the target viewing coefficient and the target viewing distance.
Wherein the caption size threshold is positively correlated with both the target viewing coefficient and the target viewing distance.
Optionally, the process of determining the subtitle size threshold according to the target viewing coefficient and the target viewing distance may include: and determining a subtitle size threshold according to the target viewing coefficient, the target viewing distance and the target formula. Wherein the target formula may include:
L = (x × W) ÷ a

where L denotes the subtitle size threshold, x denotes the target viewing distance, a denotes the distance threshold, and W denotes the target viewing coefficient. The distance threshold a is usually 1 m, in which case the target formula simplifies to L = x × W; with a = 1 the formula is simplified, so the terminal's calculation process is simplified and resource consumption is reduced.
For example, assume that the distance threshold a is 1 m. In one case, assume the target viewing distance x is 4.2 m and the target vision data is 5.0. As can be seen from Table 1 or Table 2, the viewing coefficient W is 1.45 mm, so the subtitle size threshold L is 4.2 × 1.45 mm = 6.09 mm. In another case, assume the target viewing distance x is 4.2 m and the target vision data is 4.7. The viewing coefficient W is then 2.9 mm, so the subtitle size threshold L is 4.2 × 2.9 mm = 12.18 mm.
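The target formula and the two worked examples above can be sketched directly (function and parameter names are illustrative):

```python
def subtitle_size_threshold(x_m: float, w_mm: float, a_m: float = 1.0) -> float:
    """Target formula L = (x / a) * W: threshold in mm from viewing distance
    x (m), viewing coefficient W (mm), and distance threshold a (m)."""
    return x_m / a_m * w_mm

# The two worked examples above:
print(round(subtitle_size_threshold(4.2, 1.45), 2))  # -> 6.09
print(round(subtitle_size_threshold(4.2, 2.90), 2))  # -> 12.18
```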
It should be noted that the order of the steps of the subtitle processing method provided in the embodiment of the present invention may be appropriately adjusted, and steps may be added or removed as circumstances require. Any method readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention, and the details are not repeated here.
Referring to fig. 9, a block diagram of a subtitle processing apparatus according to an embodiment of the present invention is shown. The caption processing apparatus is applied to a terminal, and the caption processing apparatus 100 includes:
an obtaining module 110, configured to obtain a subtitle size threshold.
The determining module 120 is configured to determine whether the size of the scaled subtitle in the first direction is smaller than a subtitle size threshold, where the first direction is any direction.
And a display module 130, configured to prohibit displaying the subtitles when the size of the scaled subtitles in the first direction is smaller than a subtitle size threshold.
Optionally, as shown in fig. 10, the obtaining module 110 includes:
the receiving sub-module 1101 is configured to receive target vision data and a target viewing distance, where the target vision data is vision data of a user of the terminal, and the target viewing distance is a distance from eyes of the user to the terminal.
The query submodule 1102 is configured to query a target viewing coefficient corresponding to the target vision data in a corresponding relationship between the vision data and the viewing coefficient, where the vision data is positively correlated with the vision and negatively correlated with the viewing coefficient.
The determining sub-module 1103 is configured to determine a subtitle size threshold according to the target viewing coefficient and the target viewing distance, where the subtitle size threshold is positively correlated with both the target viewing coefficient and the target viewing distance.
Optionally, the viewing coefficient corresponding to the vision data in the corresponding relationship is: when the human eye views the object set under the vision reflected by the vision data, the minimum length of the object with the definition greater than or equal to the definition threshold value in the second direction in the object set is obtained. Wherein the set of objects comprises: an object at a distance from the human eye greater than or equal to a distance threshold. The first direction is the row direction of the pixels in the terminal, and the second direction is the horizontal direction; alternatively, the first direction is a column direction of the pixels in the terminal, and the second direction is a gravity direction.
Optionally, the determining sub-module 1103 is further configured to:
determine the subtitle size threshold according to the target viewing coefficient, the target viewing distance, and the target formula. Wherein the target formula comprises:
L = (x / a) × W
l denotes a subtitle size threshold, x denotes a target viewing distance, a denotes a distance threshold, and W denotes a target viewing coefficient.
Optionally, as shown in fig. 11, the subtitle processing apparatus 100 further includes:
the first determining module 140 is configured to determine a first number of pixels corresponding to a caption size threshold, where the caption size threshold is equal to a product of the first number of pixels and a size of a terminal pixel in the first direction.
The second determining module 150 is configured to obtain a second number of pixels corresponding to a size of the scaled subtitle in the first direction, where the size of the scaled subtitle in the first direction is equal to a product of the second number of pixels and a size of a pixel of the terminal in the first direction.
The determining module 120 is further configured to: judge, according to the first pixel number and the second pixel number, whether the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold; when the second pixel number is smaller than the first pixel number, determine that the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold; and when the second pixel number is greater than or equal to the first pixel number, determine that the size of the scaled subtitle in the first direction is greater than or equal to the subtitle size threshold.
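The pixel-count comparison performed by modules 140, 150, and 120 could be sketched as follows (a minimal sketch with illustrative names; the rounding rules are assumptions, since the patent only states that each size equals a pixel count times the pixel size in the first direction):

```python
import math

def first_pixel_number(threshold_mm: float, pixel_size_mm: float) -> int:
    # Pixel count corresponding to the subtitle size threshold. Rounded up
    # here so the threshold is never under-counted (an assumption).
    return math.ceil(threshold_mm / pixel_size_mm)

def prohibit_display(subtitle_size_mm: float, threshold_mm: float, pixel_size_mm: float) -> bool:
    """True when the scaled subtitle spans fewer pixels than the threshold."""
    second = round(subtitle_size_mm / pixel_size_mm)  # pixels spanned by the subtitle
    return second < first_pixel_number(threshold_mm, pixel_size_mm)

# e.g. a 0.5 mm/pixel panel with a 6.09 mm threshold:
print(prohibit_display(4.0, 6.09, 0.5))   # -> True  (subtitle hidden)
print(prohibit_display(10.0, 6.09, 0.5))  # -> False (subtitle shown)
```

Comparing integer pixel counts rather than physical lengths keeps the per-frame check to a single integer comparison on the terminal.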
Optionally, the first direction is a row direction or a column direction of the pixels in the terminal.
Optionally, the set of objects comprises: a plurality of optotypes in a logarithmic visual chart.
Optionally, the query submodule 1102 is further configured to: and inquiring a target viewing coefficient corresponding to the target vision data in a subtitle adjustment table for recording the corresponding relation.
In summary, the subtitle processing apparatus according to the embodiments of the present invention judges whether the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold, taking into account factors such as the user's vision data and viewing distance, and prohibits displaying the subtitle when it is. Therefore, when the video window is reduced to a size at which the scaled subtitle becomes too small in the first direction, display of the subtitle can be prohibited, avoiding the terminal resource waste caused by displaying invalid subtitles. Furthermore, prohibiting display prevents invalid subtitles from blocking the video, which improves the user experience and enhances the viewing effect.
An embodiment of the present invention provides a subtitle processing apparatus, as shown in fig. 12, the subtitle processing apparatus 200 includes: a processor 201; a memory 202 for storing executable instructions for the processor 201. When the processor 201 runs the executable instructions, any subtitle processing method provided by the embodiment of the present invention can be implemented.
An embodiment of the present invention provides a readable storage medium having instructions stored therein. When the instructions are executed on the processing component, the processing component is enabled to execute any subtitle processing method provided by the embodiment of the invention.
The embodiment of the invention also provides a computer program product containing instructions, and when the computer program product runs on a computer, the computer is enabled to execute the subtitle processing method provided by the embodiment of the invention.
In the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A subtitle processing method is applied to a terminal, and the method comprises the following steps:
acquiring a subtitle size threshold;
judging whether the size of the zoomed caption in a first direction is smaller than the caption size threshold value or not, wherein the first direction is any direction;
when the size of the zoomed caption in the first direction is smaller than the caption size threshold value, prohibiting the caption from being displayed;
the subtitle size threshold is related to target vision data and a target viewing distance, the target vision data is vision data of a user of the terminal, and the target viewing distance is a distance from eyes of the user to the terminal.
2. The method of claim 1, wherein obtaining the caption size threshold comprises:
receiving target vision data and a target viewing distance;
inquiring a target viewing coefficient corresponding to the target vision data in the corresponding relation between the vision data and the viewing coefficient, wherein the vision data is positively correlated with the vision and negatively correlated with the viewing coefficient;
and determining the subtitle size threshold according to the target viewing coefficient and the target viewing distance, wherein the subtitle size threshold is positively correlated with the target viewing coefficient and the target viewing distance.
3. The method of claim 2, wherein the viewing coefficients corresponding to the vision data in the correspondence relationship are:
when the human eyes watch the object set under the vision reflected by the vision data, the minimum length of the object with the definition greater than or equal to the definition threshold value in the second direction in the object set is obtained;
wherein the set of objects comprises: an object having a distance to the human eye greater than or equal to a distance threshold;
the first direction is a row direction of pixels in the terminal, and the second direction is a horizontal direction; or, the first direction is a column direction of the pixels in the terminal, and the second direction is a gravity direction.
4. The method of claim 3, wherein determining the caption size threshold based on the target viewing coefficient and the target viewing distance comprises:
determining the subtitle size threshold according to the target viewing coefficient, the target viewing distance and a target formula;
wherein the target formula comprises:
L = (x / a) × W
l represents the subtitle size threshold, x represents the target viewing distance, a represents the distance threshold, and W represents the target viewing coefficient.
5. The method of any of claims 1 to 4, further comprising:
determining a first pixel number corresponding to the caption size threshold, wherein the caption size threshold is equal to the product of the first pixel number and the size of the pixel of the terminal in the first direction;
acquiring a second pixel number corresponding to the size of the scaled caption in the first direction, wherein the size of the scaled caption in the first direction is equal to the product of the second pixel number and the size of the pixel of the terminal in the first direction;
the determining whether the size of the scaled subtitle in the first direction is smaller than the subtitle size threshold includes:
judging whether the size of the zoomed caption in the first direction is smaller than the caption size threshold value or not according to the first pixel number and the second pixel number;
when the second pixel number is smaller than the first pixel number, determining that the size of the scaled caption in the first direction is smaller than the caption size threshold;
and when the second pixel number is greater than or equal to the first pixel number, determining that the size of the scaled caption in the first direction is greater than or equal to the caption size threshold.
6. The method of claim 3 or 4, wherein the set of objects comprises: a plurality of optotypes in a logarithmic visual chart.
7. The method of claim 2, wherein the querying the target viewing coefficient corresponding to the target vision data in the correspondence relationship between the vision data and the viewing coefficient comprises:
and inquiring a target viewing coefficient corresponding to the target vision data in a subtitle adjustment table for recording the corresponding relation.
8. A subtitle processing apparatus applied to a terminal, the subtitle processing apparatus comprising modules for performing the subtitle processing method according to any one of claims 1 to 7.
9. A subtitle processing apparatus, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor, when executing the executable instructions, is capable of implementing the subtitle processing method of any one of claims 1 to 7.
10. A readable storage medium having instructions stored therein;
when run on a processing component, the instructions cause the processing component to perform the subtitle processing method of any of claims 1 to 7.

Publications (2)

Publication Number Publication Date
CN110351605A CN110351605A (en) 2019-10-18
CN110351605B true CN110351605B (en) 2021-05-25




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221021

Address after: 83 Intekte Street, Devon, Netherlands

Patentee after: VIDAA (Netherlands) International Holdings Ltd.

Address before: 518000 Hisense Electronic Technology (Shenzhen) Co., Ltd., 9th floor, Hisense south building, 1777 Chuangye Road, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: HISENSE ELECTRONIC TECHNOLOGY (SHENZHEN) Co.,Ltd.