CN112055261A - Subtitle display method and device, electronic equipment and storage medium - Google Patents

Subtitle display method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112055261A
CN112055261A (Application No. CN202010674904.8A)
Authority
CN
China
Prior art keywords
subtitle
user interface
font size
user
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010674904.8A
Other languages
Chinese (zh)
Inventor
辛永正
苏文嗣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010674904.8A priority Critical patent/CN112055261A/en
Publication of CN112055261A publication Critical patent/CN112055261A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440236Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4854End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4856End-user interface for client configuration for language selection, e.g. for the menu or subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4858End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a subtitle display method and apparatus, an electronic device, and a storage medium, and relates to the field of natural language processing. The scheme is implemented as follows: a subtitle is generated from the audio portion of a video played on a user interface and displayed on the user interface in synchronization; a target operation on the user interface is detected; the font size of the subtitle is adjusted in response to the target operation; and the subtitle is displayed on the user interface in the adjusted font size. In this way, the user can set the subtitle font size by triggering the target operation, meeting the user's actual viewing needs and improving the viewing experience.

Description

Subtitle display method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video processing, and in particular, to the field of natural language processing, and more particularly, to a method and an apparatus for displaying subtitles, an electronic device, and a storage medium.
Background
With the continuous development of terminal technology, new applications (APPs) keep springing up. Users can install different types of APPs on their terminal devices according to their own needs, such as social APPs, payment APPs, entertainment APPs, and video APPs. At present, in video APPs or APPs with a video playing function (such as browsers), users can select or search for a video to play according to their own needs.
However, the font size of the subtitles displayed in the video playing interface is fixed, and may not meet the viewing requirements of different users.
Disclosure of Invention
The application provides a method and a device for displaying subtitles, electronic equipment and a storage medium.
According to an aspect of the present application, there is provided a subtitle display method including:
generating a subtitle according to an audio part of a video played by a user interface, and synchronously displaying the subtitle on the user interface;
detecting a target operation of the user interface;
adjusting the font size of the subtitle in response to the target operation;
and displaying the subtitle in the adjusted font size on the user interface.
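The four claimed steps can be illustrated with a minimal sketch; all class and method names here are hypothetical, and speech recognition is stubbed out since the claims do not specify a particular engine:

```python
from dataclasses import dataclass

@dataclass
class Subtitle:
    text: str
    font_size: int  # point size

class SubtitlePresenter:
    """Minimal sketch of the four claimed steps (all names are hypothetical)."""

    def __init__(self, initial_size: int = 16):
        self.subtitle = Subtitle(text="", font_size=initial_size)

    def generate(self, recognized_text: str) -> None:
        # Step 1: generate the subtitle from the video's audio portion
        # (the speech-recognition step itself is stubbed out here).
        self.subtitle.text = recognized_text

    def on_target_operation(self, delta: int) -> None:
        # Steps 2-3: a detected target operation adjusts the font size.
        self.subtitle.font_size = max(1, self.subtitle.font_size + delta)

    def display(self) -> str:
        # Step 4: display the subtitle in the adjusted font size.
        return f"[{self.subtitle.font_size}pt] {self.subtitle.text}"

p = SubtitlePresenter()
p.generate("hello world")
p.on_target_operation(+2)
print(p.display())  # → [18pt] hello world
```

The clamp to a minimum size of 1 is an assumption for the sketch; the claims leave the valid font-size range unspecified.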
According to another aspect of the present application, there is provided a subtitle exhibiting apparatus including:
the playing module is used for generating subtitles according to an audio part of a video played by a user interface and synchronously displaying the subtitles on the user interface;
the detection module is used for detecting the target operation of the user interface;
the adjusting module is used for responding to the target operation and adjusting the font size of the subtitle;
and the display module is used for displaying the subtitles on the user interface according to the adjusted font size.
According to yet another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the subtitle display method according to the above embodiments of the present application.
According to still another aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the subtitle display method proposed by the above embodiments of the present application.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a subtitle display method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a user interface in an embodiment of the present application;
fig. 3 is a schematic flowchart of a subtitle display method according to a second embodiment of the present application;
FIG. 4 is a second schematic diagram of a user interface in an embodiment of the present application;
FIG. 5 is a third schematic view of a user interface in an embodiment of the present application;
FIG. 6 is a fourth schematic view of a user interface in an embodiment of the present application;
fig. 7 is a schematic flowchart of a subtitle display method according to a third embodiment of the present application;
FIG. 8 is a fifth schematic view of a user interface in an embodiment of the present application;
fig. 9 is a schematic flowchart of a subtitle display method according to a fourth embodiment of the present application;
fig. 10 is a schematic structural diagram of a subtitle display apparatus according to a fifth embodiment of the present application;
fig. 11 is a block diagram of an electronic device according to a subtitle presentation method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
A subtitle presentation method, apparatus, electronic device, and storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a subtitle display method according to an embodiment of the present application.
The embodiment of the present application is exemplified by the subtitle display method being configured in a subtitle display apparatus, which can be applied to any electronic device, so that the electronic device can perform a subtitle display function.
The electronic device may be any device having computing capability, for example, a personal computer (PC), a mobile terminal, or a server. The mobile terminal may be a hardware device having an operating system, a touch screen, and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or a vehicle-mounted device.
As shown in fig. 1, the subtitle display method may include the steps of:
step 101, generating subtitles according to an audio part of a video played by a user interface, and synchronously displaying the subtitles on the user interface.
In this embodiment of the application, the video played on the user interface may be a video stored locally on the electronic device, such as a previously downloaded or recorded video; it may also be a video browsed online; or it may be a new video obtained through video processing, such as a video produced by processing an existing video. The application does not limit this.
In the embodiment of the application, a user can select or search one video to play according to own requirements, and in the process of playing the video on the user interface, the subtitle display device can generate the subtitle according to the audio part of the video and display the subtitle on the user interface. Further, to enhance the viewing experience of the user, the subtitles may be presented in synchronization with the audio portion of the video.
As a possible implementation manner, the language of the subtitle may be the same as the language of the audio portion of the video, that is, the subtitle is an original text corresponding to the audio portion of the video, and the audio portion of the video may be converted into the corresponding subtitle according to a voice recognition technology.
It should be noted that the subtitles may also be organized inside the video file as a data track alongside the audio track and the video track, so that the subtitles can be obtained directly during video playback and displayed on the user interface in synchronization. Alternatively, the subtitles may have been superimposed onto the corresponding video image frames as part of the picture during video production; in that case, the subtitles may be extracted, for example by Optical Character Recognition (OCR) technology, so that the extracted subtitles can be displayed on the user interface in synchronization.
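For the data-track case, synchronized display reduces to looking up the entry whose time window covers the current playback position. A minimal sketch, assuming subtitles arrive as SRT-style timed entries (the format is an assumption; the patent does not name one):

```python
def parse_timestamp(ts: str) -> float:
    """Convert an SRT-style 'HH:MM:SS,mmm' timestamp to seconds."""
    hms, ms = ts.split(",")
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def subtitle_at(entries, t: float):
    """Return the text whose [start, end) window covers playback time t, else None."""
    for start, end, text in entries:
        if start <= t < end:
            return text
    return None

entries = [
    (parse_timestamp("00:00:01,000"), parse_timestamp("00:00:03,500"), "Hello"),
    (parse_timestamp("00:00:03,500"), parse_timestamp("00:00:06,000"), "World"),
]
print(subtitle_at(entries, 2.0))  # → Hello
```

A real player would index the entries (e.g. by binary search) rather than scan linearly; the linear scan keeps the sketch short.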
As another possible implementation, the language of the subtitle may differ from the language of the audio portion of the video; that is, the subtitle is a translation of the audio portion. In the present application, the audio portion of the video may be translated into the corresponding subtitle according to a preset translation setting. For example, the preset translation setting may specify a translation from a first language to a second language, so that the subtitle display apparatus translates audio in the first language into subtitles in the second language. For instance, if the preset translation setting is English-to-Chinese, the first language is English and the second language is Chinese.
The preset translation setting may be configured by a built-in program of the electronic device, or, to improve the applicability of the method, the user may configure it according to their own needs. Note that the first language is the language of the audio portion of the pre-recorded video and generally cannot be changed.
It will be appreciated that different languages differ in their phoneme inventories, that is, in pronunciation, in phoneme sequences, and in the frequencies and contexts in which phonemes occur; languages can be distinguished based on these characteristics. Therefore, in the present application, the first language corresponding to the audio portion of the video may be identified, the audio portion converted into a text file in the first language, and the text file then translated into a file in the second language according to the translation rules between the first language and the second language; the translated file serves as the subtitle.
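The identify-recognize-translate pipeline described above can be sketched as follows; the language identification, speech recognition, and translation stages are stubbed with hypothetical lookup tables, since the patent does not commit to particular models:

```python
def identify_language(audio: dict) -> str:
    # Stub: a real system would distinguish languages by phoneme statistics,
    # as described above. Here the label is carried in the test input.
    return audio.get("language", "en")

def speech_to_text(audio: dict, language: str) -> str:
    # Stub for speech recognition in the identified first language.
    return audio["transcript"]

def translate(text: str, src: str, dst: str) -> str:
    # Stub translation table; a real system would apply the translation
    # rules (or a machine-translation model) between the two languages.
    table = {("en", "zh"): {"hello": "你好"}}
    return table.get((src, dst), {}).get(text, text)

def audio_to_subtitle(audio: dict, target_language: str) -> str:
    src = identify_language(audio)          # identify the first language
    original = speech_to_text(audio, src)   # audio -> text file
    return translate(original, src, target_language)  # text -> translation

print(audio_to_subtitle({"language": "en", "transcript": "hello"}, "zh"))  # → 你好
```

When source and target language coincide, the stub returns the original text unchanged, matching the "subtitle is the original text" implementation described earlier.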
As another possible implementation, the subtitle may include both the first language and the second language; that is, the subtitle contains both the original text and the translation of the audio portion. Such subtitles may be called bilingual subtitles: when the audio portion of the video is in the first language, the subtitle includes both the first-language text obtained by performing speech recognition on the audio portion and the second-language text obtained by translating the audio portion according to the preset translation setting.
As one application scenario, when a user plays a video in a video APP or an APP with a video playing function, the application client can pick up the sound from the electronic device or a microphone and, after recognition, display the corresponding subtitles on the playing interface in synchronization.
As another application scenario, an embedded plug-in can be bound to a video APP or an APP with a video playing function; the plug-in picks up the sound in the APP, performs recognition, and returns the result to the APP, so that the corresponding subtitles are displayed on the playing interface in synchronization.
Step 102, detecting a target operation of a user interface.
In the embodiment of the application, the target operation is triggered by a user, and the target operation is used for adjusting the font size of the subtitle. For example, the target operation may be a user operation such as a click operation, a slide operation, or a press operation.
In the embodiment of the application, the subtitle display device can detect the target operation triggered by a user on the user interface and used for adjusting the font size of the subtitle.
As a possible implementation manner, in order to facilitate the user operation and improve the use experience of the user, the user can trigger the target operation in any area of the user interface.
As another possible implementation manner, the user may also trigger the target operation in a setting area of the user interface. Therefore, on one hand, the user only adjusts the font size of the subtitles in the set area to meet the actual watching requirement of the user, and on the other hand, the user can perform other operations in the area except the set area on the user interface, for example, the user can trigger user operations such as volume adjustment, brightness adjustment, forward movement, backward movement and the like, so as to meet the personalized control requirement of the user.
Step 103, responding to the target operation, and adjusting the font size of the subtitle.
In the embodiment of the application, after the subtitle display device detects the target operation, the font size of the subtitle can be adjusted in response to the target operation.
As an example, when the target operation is a click operation, a single click may reduce the font size and a double click may enlarge it. The adjustment step may be fixed; for example, each single click on the user interface reduces the font size by 1, and each double click increases it by 1. If the user wants to adjust the font size slightly, the target operation can be triggered once; for a larger adjustment, it can be triggered several times in succession.
Of course, the adjustment step may also be variable; for example, it may be determined by the click speed of the click operation, with faster clicks producing a larger adjustment.
As another example, when the target operation is a pressing operation, the font size may be reduced when the pressing force falls within a first pressure range and enlarged when it falls within a second pressure range higher than the first; alternatively, the mapping may be reversed, enlarging the font size in the first range and reducing it in the second.
The adjustment step may be fixed; for example, each pressing operation decreases or increases the font size by 1. The step may also be variable; for example, it may be determined by the pressing force, with greater force producing a larger adjustment.
As yet another example, when the target operation is a sliding operation on a touch-screen device, sliding a finger upward (sliding direction up) may enlarge the font size and sliding downward (sliding direction down) may reduce it. Alternatively, sliding right may enlarge the font size and sliding left may reduce it.
It should be understood that these mappings between sliding direction and font size adjustment are only examples. In practice, sliding up may reduce the font size and sliding down enlarge it, or sliding right reduce it and sliding left enlarge it, and so on; the application does not limit this.
That is to say, in the present application, both the direction and the step of the font size adjustment can be determined from the target operation, and the subtitle font size is then adjusted accordingly.
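The operation-to-adjustment mapping in the examples above can be sketched as a single dispatch function. The concrete thresholds (pressure cutoff, step of 1) are illustrative choices, not values fixed by the patent:

```python
def font_delta(op: dict) -> int:
    """Map a detected target operation to a font-size delta (step of 1)."""
    if op["type"] == "click":
        # Single click shrinks, double click enlarges (the first example above).
        return +1 if op.get("double") else -1
    if op["type"] == "press":
        # First (lighter) pressure range shrinks, second (heavier) enlarges;
        # 0.5 is an illustrative boundary between the two ranges.
        return -1 if op["force"] < 0.5 else +1
    if op["type"] == "slide":
        # Slide up enlarges, slide down shrinks.
        return +1 if op["direction"] == "up" else -1
    return 0  # not a target operation

def adjust(font_size: int, op: dict) -> int:
    return max(1, font_size + font_delta(op))

print(adjust(16, {"type": "slide", "direction": "up"}))  # → 17
print(adjust(16, {"type": "click", "double": False}))    # → 15
```

A variable step (click speed, pressing force, or sliding distance) would replace the constant ±1 with a magnitude computed from the operation's attributes.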
And 104, displaying the subtitle in the adjusted font size on the user interface.
In the embodiment of the application, after the font size of the subtitle is adjusted according to the target operation, the subtitle can be displayed on the user interface according to the adjusted font size, so that the actual watching requirement of a user is met.
As an application scenario, middle-aged and elderly users with weaker eyesight can enlarge the font size so that the subtitles are easier to read, while younger users with better eyesight can reduce it. Users can thus set the subtitle font size according to their own needs, meeting their actual viewing requirements and improving the user experience.
According to the subtitle display method, the subtitles are generated according to the audio part of the video played by the user interface, the subtitles are synchronously displayed on the user interface, and when the target operation is detected, the font size of the subtitles is adjusted according to the target operation, so that the subtitles are displayed on the user interface in the adjusted font size. Therefore, the user can set the font size of the subtitle through triggering target operation so as to meet the actual watching requirement of the user and improve the watching experience of the user.
Simultaneous interpretation serves many international conferences, and with the rapid development of Artificial Intelligence (AI) technology, its quality is gradually improving. At present, both the channels through which client users obtain cross-language content and the demand for such content are growing rapidly; for example, users without cross-language skills want Chinese subtitles displayed on the playing interface when playing foreign-language videos.
In the related art, a simultaneous interpretation plug-in can be embedded in the video player; the plug-in automatically recognizes the audio portion of the video and displays simultaneous interpretation subtitles on the player picture, helping users without cross-language skills watch the video content smoothly.
However, the font size of the interpretation subtitles is fixed in the playing interface and cannot be set by the user, which may degrade the viewing experience.
In the present application, when the subtitles are simultaneous interpretation subtitles, the user can also trigger the target operation to set their font size, meeting the user's actual viewing needs and improving the viewing experience. Through the target operation, the font size of the interpretation subtitles is adjusted quickly, without interfering with the video picture or interrupting playback, which can greatly improve the user's viewing and interpretation experience.
As a possible implementation manner, when the user triggers the target operation in the setting area of the user interface, the subtitle display apparatus may detect only the target operation in the setting area. The user interface can be divided into a plurality of areas, and the area is set to be one of the plurality of areas and used for executing target operation so as to adjust the font size of the subtitle.
Therefore, on one hand, the user only adjusts the font size of the subtitles in the set area to meet the actual watching requirement of the user, and on the other hand, the user can perform other operations in other areas except the set area in the plurality of areas, for example, the user can trigger user operations such as volume adjustment, brightness adjustment, forward movement, backward movement and the like, so as to meet the personalized control requirement of the user.
In practice, users frequently adjust a video's brightness and volume, so for convenience of operation, corresponding regions can be provided on the user interface for adjusting them. Specifically, the plurality of regions may further include a brightness setting region for adjusting the brightness of the video in response to a user operation, and/or a volume setting region for adjusting the volume of the video in response to a user operation. Users can thus adjust the brightness and/or the volume of the video in the corresponding region as needed, meeting their actual viewing requirements and further improving the user experience.
The plurality of regions may be of equal size; for example, the user interface may be divided into three equal regions: a brightness setting region for adjusting the brightness of the video in response to a user operation, the setting region, and a volume setting region for adjusting the volume of the video in response to a user operation. The regions may also be of unequal size; for example, a priority may be preset for each region and each region's area determined by its priority, so that if the setting region has the highest priority, its area may be larger than those of the other regions.
As an example, referring to fig. 2, fig. 2 is a schematic view of a user interface in an embodiment of the present application. Here, a region 21 indicates a brightness setting region for adjusting the brightness of a video in response to a user operation, a region 22 indicates a setting region, and a region 23 indicates a volume setting region for adjusting the volume of a video in response to a user operation. The user may trigger a target operation in the area 22 to adjust the font size of the subtitle, for example, the user may slide the finger up to enlarge the font size, and slide the finger down to reduce the font size.
Further, as shown in fig. 2, the user may adjust the brightness of the video in the area 21 as needed; for example, when the user operation is a sliding operation, sliding upward increases the brightness and sliding downward decreases it. Likewise, the user can adjust the volume of the video in the area 23; for example, sliding upward increases the volume and sliding downward decreases it.
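The three-region layout of fig. 2 amounts to routing a touch by its horizontal position. A sketch assuming the equal-thirds split described above (the concrete state values and the ±1 step are illustrative):

```python
def region_for(x: float, width: float) -> str:
    """Route a touch at horizontal position x to one of the three regions
    of fig. 2: brightness (left), subtitle font size (middle), volume (right)."""
    if x < width / 3:
        return "brightness"
    if x < 2 * width / 3:
        return "font_size"
    return "volume"

def handle_slide(state: dict, x: float, width: float, direction: str) -> dict:
    """Apply a slide gesture: up increases, down decreases the routed setting."""
    target = region_for(x, width)
    delta = 1 if direction == "up" else -1
    new_state = dict(state)
    new_state[target] += delta
    return new_state

state = {"brightness": 5, "font_size": 16, "volume": 7}
# A slide in the middle third adjusts only the subtitle font size.
print(handle_slide(state, x=540, width=1080, direction="up"))
```

Unequal-priority layouts would replace the fixed thirds in `region_for` with boundaries derived from each region's configured area.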
It should be noted that current video players can support setting the bullet-screen (danmaku) font size: the user clicks the bullet-screen settings control, sets the font size in the pop-up settings floating layer, and clicks outside the layer to exit. Although the subtitle font size could be set with such a bullet-screen scheme, the settings floating layer covers a large screen area, seriously interferes with the video being played, and interrupts the viewing experience. Alternatively, the font size could be changed through a settings option, but the user would have to pause the video first and then change the setting, increasing the operation path and the operation cost.
In the present application, for example, referring to fig. 2, the brightness is adjusted in the area 21, the font size is adjusted in the area 22, and the volume is adjusted in the area 23. This greatly improves the efficiency of adjusting the font size without breaking the user's existing operation cognition (i.e., brightness on the left, volume on the right), and allows the font size to be adjusted while the video plays normally, thereby avoiding affecting the user's view of the video picture while also facilitating user operation and reducing the operation path and the operation cost.
That is to say, in the present application, the video picture is divided into three operation areas, namely a left, a middle and a right operation area, wherein the middle operation area is the setting area used for adjusting the font size of the subtitles. The user can rapidly adjust the font size of ordinary subtitles or simultaneous interpretation subtitles through the target operation without triggering any floating layer; the video picture is not disturbed during adjustment and the playing of the video does not need to be interrupted, which can greatly improve the user's viewing and simultaneous interpretation experience.
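The three-region dispatch described above can be illustrated with a minimal Python sketch. The equal one-third split and the function name are assumptions for illustration only; the patent does not specify how the regions are partitioned.

```python
def classify_region(x: float, width: float) -> str:
    """Map a touch point's x-coordinate to one of the three operation
    regions: left (area 21, brightness), middle (area 22, subtitle font
    size), or right (area 23, volume). The one-third split is an
    illustrative assumption, not a value from the patent."""
    if not 0 <= x <= width:
        raise ValueError("touch point outside the user interface")
    third = width / 3
    if x < third:
        return "brightness"   # area 21: left region
    if x < 2 * third:
        return "font_size"    # area 22: middle (setting) region
    return "volume"           # area 23: right region
```

A player would call this once per gesture start to decide which property the subsequent slide adjusts.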
As a possible implementation, when the target operation is a sliding operation, the font size of the subtitle may be adjusted according to the sliding direction and the sliding distance. The above process is described in detail with reference to embodiment two.
Fig. 3 is a flowchart illustrating a subtitle display method according to a second embodiment of the present application.
As shown in fig. 3, the subtitle display method may include the steps of:
step 201, generating subtitles according to an audio part of a video played by a user interface, and synchronously displaying the subtitles on the user interface.
In step 202, a sliding operation of the user interface is detected.
The execution process of steps 201 to 202 may refer to the execution process of steps 101 to 102 in the above embodiments, which is not described herein again.
Step 203, determining the font size adjustment direction of the subtitle according to the sliding direction of the sliding operation.
In the embodiment of the present application, the relationship between the sliding direction and the font size adjusting direction is preset. For example, the font size adjustment direction is increased when the sliding direction is upward, and the font size adjustment direction is decreased when the sliding direction is downward. Alternatively, the font size adjustment direction is increased when the sliding direction is rightward, and the font size adjustment direction is decreased when the sliding direction is leftward.
It should be understood that the above correspondence between sliding direction and font size adjustment direction is only an example. In practical applications, the font size may instead be decreased when the sliding direction is upward and increased when it is downward, or decreased when the sliding direction is rightward and increased when it is leftward, and so on, which is not limited in the present application.
Step 204, determining the font size adjustment amplitude of the subtitle according to the sliding distance of the sliding operation.
In the embodiment of the present application, the relationship between the sliding distance and the font size adjustment amplitude is also preset, where the font size adjustment amplitude is positively correlated with the sliding distance, that is, the font size adjustment amplitude increases as the sliding distance increases.
In the embodiment of the application, after the target operation is detected, the initial position point of the target operation can be determined, the trajectory of the target operation can then be tracked from the initial position point to obtain each position point along the trajectory path, and the sliding distance of the sliding operation can be determined from these position points. After the sliding distance is determined, the preset correspondence between sliding distance and font size adjustment amplitude can be queried to determine the corresponding font size adjustment amplitude.
Step 205, adjusting the font size of the subtitle according to the font size adjustment direction and the font size adjustment amplitude.
In the embodiment of the application, after the font size adjusting direction and the font size adjusting range are determined, the font size of the caption can be adjusted according to the font size adjusting direction and the font size adjusting range.
Step 206, displaying the subtitle in the adjusted font size on the user interface.
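Steps 203 through 205 can be sketched in Python as follows. The step granularity, the pixel bounds, and the helper names are illustrative assumptions, not values from the patent; a positive vertical displacement is taken to mean an upward slide (enlarge), matching the example mapping in step 203.

```python
def sliding_distance(points):
    """Net vertical displacement from the tracked trajectory points
    (step 204): first tracked point to last. Screen y grows downward,
    so an upward slide yields a positive value."""
    (_, y0), (_, y1) = points[0], points[-1]
    return y0 - y1


def adjust_font_size(current_px: int, dy: float,
                     min_px: int = 12, max_px: int = 48,
                     px_per_step: float = 40.0) -> int:
    """Compute the new subtitle font size from a vertical slide.

    The sign of dy gives the adjustment direction (step 203); its
    magnitude gives the adjustment amplitude, which grows with the
    sliding distance (step 204). The 2-px step and the clamping bounds
    are assumptions for illustration."""
    steps = int(dy / px_per_step)      # amplitude proportional to distance
    new_px = current_px + 2 * steps    # direction comes from the sign of dy
    return max(min_px, min(max_px, new_px))
```

For example, an 80-px upward slide from a 24-px subtitle yields two steps of enlargement, while the clamp keeps an extreme slide from producing an unreadable size.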
In the embodiment of the application, the font size of the subtitle can be quickly adjusted through sliding operation, the video picture is free from interference during adjustment, the playing of the video is not required to be interrupted, and the watching experience of a user can be greatly improved.
As an example, take a subtitle containing both a first-language and a second-language subtitle, that is, a simultaneous interpretation subtitle: when a video is played, the font size of the simultaneous interpretation subtitle displayed on the user interface may be as shown in fig. 4; the user's finger may slide upward to obtain the enlarged simultaneous interpretation subtitle shown in fig. 5, and when the user's finger slides downward, the reduced simultaneous interpretation subtitle shown in fig. 6 is obtained.
According to the subtitle display method, the font size of the subtitle is quickly adjusted through sliding operation, the video picture is free of interference during adjustment, the playing of the video is not required to be interrupted, and the watching experience of a user can be greatly improved.
Taking the subtitles as simultaneous interpretation subtitles as an example, if the user understands two or even more languages, the audio portion of the video does not need to be translated for the user, and the user may not want the user interface to display subtitles at all, for example in order to view an unobstructed, high-quality video picture. Therefore, as a possible implementation manner of the embodiment of the application, whether to display the subtitles can be determined according to the user's requirement, so as to improve the applicability of the method. The above process is described in detail with reference to embodiment three.
Fig. 7 is a flowchart illustrating a subtitle display method according to a third embodiment of the present application.
As shown in fig. 7, the subtitle display method may include the steps of:
step 301, detecting a trigger operation on a target control in a user interface.
In the embodiment of the application, the target control is used for displaying the subtitles.
In the embodiment of the application, in the video played by the user interface, the subtitle display device can detect the triggering operation of the target control in the user interface.
As an example, taking the subtitle as the simultaneous interpretation subtitle, the target control may be a "translation" control as shown in an area 81 in fig. 8, for starting the simultaneous interpretation function.
And step 302, responding to the triggering operation of the target control, generating a subtitle according to the audio part of the video played by the user interface, and synchronously displaying the subtitle on the user interface.
In the embodiment of the application, when the user triggers the target control in the user interface, the subtitle can be generated according to the audio part of the video, and the subtitle is synchronously displayed in the user interface. For a specific execution process, reference may be made to the execution process of step 101 in the foregoing embodiment, which is not described herein again.
Step 303, detect a target operation of the user interface.
Step 304, adjusting the font size of the subtitle in response to the target operation.
Step 305, displaying the subtitle in the adjusted font size on the user interface.
The execution process of steps 303 to 305 can refer to the execution process of steps 102 to 104 in the above embodiments, which is not described herein again.
In this way, whether the subtitles are displayed is determined according to the user's requirement, which can improve the applicability of the method. When the user does not want to watch subtitles, for example in order to view a high-quality video picture, the user simply does not trigger the target control, so the personalized viewing requirements of different users can be met.
As a possible implementation manner, in order to facilitate user operation, when a user performs the trigger operation on the target control for the first time, a guidance floating layer may be displayed on the user interface to guide the user in adjusting the font size of the subtitle. The above process is described in detail with reference to embodiment four.
Fig. 9 is a flowchart illustrating a subtitle display method according to a fourth embodiment of the present application.
As shown in fig. 9, the subtitle display method may include the steps of:
step 401, detecting a trigger operation on a target control in a user interface.
The execution process of step 401 may refer to the execution process in the above embodiments, which is not described herein again.
Step 402, determining whether the trigger operation on the target control is performed for the first time; if yes, performing step 403, and if not, performing step 405.
Step 403, displaying a guidance floating layer on the user interface, wherein operation prompt information is displayed in the guidance floating layer.
In this embodiment of the application, when the user performs the trigger operation on the target control for the first time, a guidance floating layer may be displayed on the user interface, where operation prompt information is shown in the guidance floating layer, for example, referring to fig. 2, the operation prompt information may include upward sliding and/or downward sliding, so that the user may adjust the font size of the subtitle according to the operation prompt information.
In step 404, the guiding floating layer is closed in response to the touch operation on the guiding floating layer.
In the embodiment of the application, after the user knows how to adjust the font size of the subtitle, the guidance floating layer can be closed to avoid interfering with the video picture and affecting the user's viewing. Specifically, the user may trigger a touch operation on the guidance floating layer, and the subtitle display apparatus closes the guidance floating layer in response to that touch operation. For example, the user may click on the guidance floating layer to close it, so that the video picture may display the simultaneous interpretation subtitle as shown in fig. 4.
Step 405, responding to the triggering operation of the target control, generating a subtitle according to the audio part of the video played by the user interface, and synchronously displaying the subtitle on the user interface.
In step 406, a target operation of the user interface is detected.
Step 407, adjusting the font size of the subtitle in response to the target operation.
Step 408, displaying the subtitle in the adjusted font size on the user interface.
The execution process of steps 405 to 408 can refer to the execution process of the above embodiment, which is not described herein again.
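The branch in embodiment four (steps 401 through 408) can be sketched as a small controller class. The class, attribute, and method names are illustrative assumptions, not from the patent; in a real player the first-trigger flag would be persisted across sessions.

```python
class SubtitleController:
    """Minimal sketch of the embodiment-four flow: the first trigger of
    the target control shows a guidance floating layer; later triggers
    go straight to subtitle generation."""

    def __init__(self):
        self.guide_shown_before = False  # would be persisted in a real player
        self.guide_visible = False
        self.subtitles_on = False

    def on_target_control_triggered(self):
        # Step 402: branch on whether this is the first trigger.
        if not self.guide_shown_before:
            self.guide_shown_before = True
            self.guide_visible = True    # step 403: show slide-up/slide-down hints
        self.subtitles_on = True         # step 405: generate and display subtitles

    def on_guide_touched(self):
        # Step 404: a touch on the floating layer closes it.
        self.guide_visible = False
```

On every trigger after the first, `guide_visible` stays false, so the video picture is never covered again.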
In order to implement the above embodiments, the present application further provides a subtitle display apparatus.
Fig. 10 is a schematic structural diagram of a subtitle display apparatus according to a fifth embodiment of the present application.
As shown in fig. 10, the subtitle presentation apparatus 100 includes: a playing module 110, a detecting module 120, an adjusting module 130 and a displaying module 140.
The playing module 110 is configured to generate subtitles according to an audio portion of a video played by a user interface, and synchronously display the subtitles on the user interface.
A detection module 120 for detecting a target operation of the user interface.
And an adjusting module 130, configured to adjust a font size of the subtitle in response to the target operation.
And a display module 140 for displaying the subtitles in the adjusted font size on the user interface.
As a possible implementation manner, the detection module 120 is specifically configured to: detecting a target operation in a set area of a user interface; the user interface is divided into a plurality of areas, and the set area in the plurality of areas is used for executing target operation so as to adjust the font size of the subtitle.
As a possible implementation, the plurality of regions further include a brightness setting region for adjusting brightness of the video in response to a user operation, and/or include a volume setting region for adjusting volume of the video in response to a user operation.
As a possible implementation manner, the target operation includes a sliding operation, and the adjusting module 130 is specifically configured to: determine the font size adjustment direction of the subtitle according to the sliding direction of the sliding operation, and determine the font size adjustment amplitude of the subtitle according to the sliding distance of the sliding operation; and adjust the font size of the subtitle according to the font size adjustment direction and the font size adjustment amplitude.
As a possible implementation manner, the playing module 110 is specifically configured to: detecting a trigger operation on a target control in a user interface; and responding to the triggering operation of the target control, generating a subtitle according to the audio part of the video, and synchronously displaying the subtitle on a user interface.
As a possible implementation manner, the playing module 110 is further configured to: if the triggering operation of the target control is determined to be executed for the first time, displaying a guide floating layer on a user interface, wherein operation prompt information is displayed in the guide floating layer; and closing the guide floating layer in response to the touch operation on the guide floating layer.
It should be noted that the explanation of the subtitle display method in the foregoing embodiments of fig. 1 to 9 is also applicable to the subtitle display apparatus of this embodiment, and is not repeated here.
According to the subtitle display device, the subtitles are generated according to the audio part of the video played by the user interface, the subtitles are synchronously displayed on the user interface, and when the target operation is detected, the font size of the subtitles is adjusted according to the target operation, so that the subtitles are displayed on the user interface in the adjusted font size. Therefore, the user can set the font size of the subtitle through triggering target operation so as to meet the actual watching requirement of the user and improve the watching experience of the user.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 11 is a block diagram of an electronic device according to the subtitle display method according to the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 11, the electronic apparatus includes: one or more processors 1101, a memory 1102, and interfaces for connecting the various components, including a high speed interface and a low speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 11, a processor 1101 is taken as an example.
The memory 1102 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the subtitle display method provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the subtitle presentation method provided by the present application.
The memory 1102, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the subtitle presentation method in the embodiment of the present application (for example, the playing module 110, the detecting module 120, the adjusting module 130, and the presentation module 140 shown in fig. 10). The processor 1101 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 1102, that is, implements the subtitle presentation method in the above-described method embodiment.
The memory 1102 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device of the subtitle presentation method, and the like. Further, the memory 1102 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1102 may optionally include memory located remotely from the processor 1101, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device may further include: an input device 1103 and an output device 1104. The processor 1101, the memory 1102, the input device 1103 and the output device 1104 may be connected by a bus or other means, and are exemplified by being connected by a bus in fig. 11.
The input device 1103 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic apparatus, such as a touch screen, keypad, mouse, track pad, touch pad, pointer, one or more mouse buttons, track ball, joystick, or other input device. The output devices 1104 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and addresses the defects of high management difficulty and weak service extensibility in traditional physical host and Virtual Private Server (VPS) services.
According to the technical scheme of the embodiment of the application, the subtitles are generated according to the audio part of the video played by the user interface, the subtitles are synchronously displayed on the user interface, and when the target operation is detected, the font size of the subtitles is adjusted according to the target operation, so that the subtitles are displayed on the user interface in the adjusted font size. Therefore, the user can set the font size of the subtitle through triggering target operation so as to meet the actual watching requirement of the user and improve the watching experience of the user.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A method of subtitle presentation, the method comprising:
generating a subtitle according to an audio part of a video played by a user interface, and synchronously displaying the subtitle on the user interface;
detecting a target operation of the user interface;
adjusting the font size of the subtitle in response to the target operation;
and displaying the subtitle in the adjusted font size on the user interface.
2. The caption presentation method according to claim 1, wherein the detecting a target operation at the user interface includes:
detecting the target operation in a set area of the user interface;
the user interface is divided into a plurality of areas, and a set area in the plurality of areas is used for executing the target operation so as to adjust the font size of the subtitle.
3. The subtitle presentation method according to claim 2, wherein the plurality of regions further include a brightness setting region for adjusting brightness of the video in response to a user operation, and/or include a volume setting region for adjusting volume of the video in response to a user operation.
4. The caption presentation method according to any one of claims 1-3, wherein the target operation includes a slide operation, and the adjusting the font size of the caption in response to the target operation includes:
determining the font size adjustment direction of the caption according to the sliding direction of the sliding operation;
determining the font size adjustment amplitude of the caption according to the sliding distance of the sliding operation;
and adjusting the font size of the caption according to the font size adjusting direction and the font size adjusting amplitude.
5. The caption presentation method according to any one of claims 1-3, wherein the generating of the caption according to the audio portion of the video played by the user interface and the synchronous presentation of the caption on the user interface comprises:
detecting a trigger operation on a target control in the user interface;
and responding to the triggering operation of the target control, generating a subtitle according to the audio part of the video, and synchronously displaying the subtitle on the user interface.
6. The caption presentation method according to claim 5, wherein the generating of the caption according to the audio portion of the video played by the user interface further comprises, before the user interface synchronously presents the caption:
if the trigger operation is determined to be executed on the target control for the first time, displaying a guide floating layer on the user interface, wherein operation prompt information is displayed in the guide floating layer;
and closing the guide floating layer in response to the touch operation of the guide floating layer.
7. A subtitle exhibiting apparatus, the apparatus comprising:
the playing module is used for generating subtitles according to an audio part of a video played by a user interface and synchronously displaying the subtitles on the user interface;
the detection module is used for detecting the target operation of the user interface;
the adjusting module is used for responding to the target operation and adjusting the font size of the subtitle;
and the display module is used for displaying the subtitles on the user interface according to the adjusted font size.
8. The caption presentation device according to claim 7, wherein the detection module is specifically configured to:
detecting the target operation in a set area of the user interface;
the user interface is divided into a plurality of areas, and a set area in the plurality of areas is used for executing the target operation so as to adjust the font size of the subtitle.
9. The caption presentation device according to claim 8, wherein the plurality of regions further include a brightness setting region for adjusting brightness of the video in response to a user manipulation, and/or a volume setting region for adjusting volume of the video in response to a user manipulation.
10. The caption presentation device according to any one of claims 7 to 9, wherein the target operation comprises a slide operation, and the adjustment module is specifically configured to:
determining the font size adjustment direction of the caption according to the sliding direction of the sliding operation;
determining the font size adjustment amplitude of the caption according to the sliding distance of the sliding operation;
and adjusting the font size of the caption according to the font size adjusting direction and the font size adjusting amplitude.
11. The subtitle display apparatus according to any one of claims 7 to 9, wherein the playing module is specifically configured to:
detecting a trigger operation on a target control in the user interface;
and responding to the triggering operation of the target control, generating a subtitle according to the audio part of the video, and synchronously displaying the subtitle on the user interface.
12. The caption presentation device of claim 11, wherein the playback module is further configured to:
if the trigger operation is determined to be executed on the target control for the first time, displaying a guide floating layer on the user interface, wherein operation prompt information is displayed in the guide floating layer;
and closing the guide floating layer in response to the touch operation of the guide floating layer.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the subtitle presentation method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the subtitle presentation method according to any one of claims 1 to 6.
CN202010674904.8A 2020-07-14 2020-07-14 Subtitle display method and device, electronic equipment and storage medium Pending CN112055261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010674904.8A CN112055261A (en) 2020-07-14 2020-07-14 Subtitle display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112055261A 2020-12-08

Family

ID=73601916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010674904.8A Pending CN112055261A (en) 2020-07-14 2020-07-14 Subtitle display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112055261A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105891A1 (en) * 2003-10-04 2005-05-19 Samsung Electronics Co., Ltd. Information storage medium storing text-based subtitle, and apparatus and method for processing text-based subtitle
CN101086834A (en) * 2006-06-06 2007-12-12 华为技术有限公司 A method for controlling display effect of caption and control device
CN101437121A (en) * 2008-12-19 2009-05-20 中兴通讯股份有限公司 Mobile terminal and method for implementing dynamic zoom of mobile phone television subtitling
CN101594481A (en) * 2008-05-30 2009-12-02 新奥特(北京)视频技术有限公司 A kind of method of making and revising captions
CN103226947A (en) * 2013-03-27 2013-07-31 广东欧珀移动通信有限公司 Mobile terminal-based audio processing method and device
CN106792071A (en) * 2016-12-19 2017-05-31 北京小米移动软件有限公司 Method for processing caption and device
CN110035326A (en) * 2019-04-04 2019-07-19 北京字节跳动网络技术有限公司 Subtitle generation, the video retrieval method based on subtitle, device and electronic equipment
CN110399082A (en) * 2019-07-05 2019-11-01 北京达佳互联信息技术有限公司 A kind of terminal attribute control method, device, electronic equipment and medium
CN110460907A (en) * 2019-08-16 2019-11-15 维沃移动通信有限公司 A kind of video playing control method and terminal
CN110769265A (en) * 2019-10-08 2020-02-07 深圳创维-Rgb电子有限公司 Simultaneous caption translation method, smart television and storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115442667A (en) * 2021-06-01 2022-12-06 脸萌有限公司 Video processing method and device
CN115442667B (en) * 2021-06-01 2023-10-20 脸萌有限公司 Video processing method and device
CN113542903A (en) * 2021-07-16 2021-10-22 思享智汇(海南)科技有限责任公司 Subtitle generating method and device supporting font size self-adaption
CN113656130A (en) * 2021-08-16 2021-11-16 北京百度网讯科技有限公司 Data display method, device, equipment and storage medium
CN113656130B (en) * 2021-08-16 2024-05-17 北京百度网讯科技有限公司 Data display method, device, equipment and storage medium
CN114139073A (en) * 2021-10-29 2022-03-04 北京达佳互联信息技术有限公司 Object display method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112055261A (en) Subtitle display method and device, electronic equipment and storage medium
JP6111030B2 (en) Electronic device and control method thereof
JP5746111B2 (en) Electronic device and control method thereof
US10134364B2 (en) Prioritized display of visual content in computer presentations
JP5819269B2 (en) Electronic device and control method thereof
US10542323B2 (en) Real-time modifiable text captioning
KR102320708B1 (en) Video playing method and device, electronic device, and readable storage medium
KR20130018464A (en) Electronic apparatus and method for controlling electronic apparatus thereof
JP2013037689A (en) Electronic equipment and control method thereof
JP2014532933A (en) Electronic device and control method thereof
US10474669B2 (en) Control apparatus, control method and computer program
KR101587625B1 (en) The method of voice control for display device, and voice control display device
CN110134973A (en) Video caption real time translating method, medium and equipment based on artificial intelligence
KR102358012B1 (en) Speech control method and apparatus, electronic device, and readable storage medium
CN111787387A (en) Content display method, device, equipment and storage medium
CN114697721B (en) Bullet screen display method and electronic equipment
CN111225261B (en) Multimedia device for processing voice command and control method thereof
CN111107283A (en) Information display method, electronic equipment and storage medium
US20220191556A1 (en) Method for processing live broadcast information, electronic device and storage medium
US20210392394A1 (en) Method and apparatus for processing video, electronic device and storage medium
CN110780749B (en) Character string error correction method and device
CN111901668B (en) Video playing method and device
CN116600174A (en) Subtitle display control method, device, electronic equipment and storage medium
CN110716653B (en) Method and device for determining association source
CN111124142B (en) Input method, device and device for inputting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201208