CN115952319A - Information display method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115952319A
CN115952319A (application CN202310010032.9A)
Authority
CN
China
Prior art keywords
video
information
subtitle
keyword
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310010032.9A
Other languages
Chinese (zh)
Inventor
叶子慧
舒莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310010032.9A priority Critical patent/CN115952319A/en
Publication of CN115952319A publication Critical patent/CN115952319A/en
Pending legal-status Critical Current

Abstract

The disclosure provides an information display method and apparatus, an electronic device, and a storage medium, belonging to the field of multimedia technology. The method includes: displaying a video playing interface in which a video and a subtitle viewing control are displayed; in response to a trigger operation on the subtitle viewing control, displaying the video subtitle of the video in the video playing interface, the video subtitle representing the voice information in the video; and highlighting at least one keyword in the video subtitle based on screening information, the screening information being used to select key information of interest to an object, and the at least one keyword representing the key information in the video. With this method, a user can obtain, through the video subtitle, all of the information expressed by the voice in the video at one time, which saves time; and because keywords in the video subtitle are highlighted, the user can quickly find the key information of interest, which improves the efficiency of information acquisition.

Description

Information display method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to an information display method and apparatus, an electronic device, and a storage medium.
Background
In recent years, the video industry has developed rapidly, and more and more users tend to obtain information through videos. While watching a video, a user gradually learns the voice information in the video as playback progresses. In general, the voice information in a video may include not only the key information the user needs but also irrelevant information the user does not need. Because the user cannot determine where the key information is located, the user can only learn the key information in the video after watching all of its content in chronological order, so the efficiency of acquiring information is low.
Disclosure of Invention
The present disclosure provides an information display method, an information display apparatus, an electronic device, and a storage medium. They enable a user to acquire, through a video subtitle, all of the information expressed by the voice in a video at one time, which saves time; and by highlighting at least one keyword in the video subtitle, they enable the user to quickly find the key information of interest, which improves information acquisition efficiency. The technical solution of the disclosure is as follows:
according to an aspect of the embodiments of the present disclosure, there is provided an information display method including:
displaying a video playing interface, wherein a video and subtitle viewing control is displayed in the video playing interface;
in response to a trigger operation on the subtitle viewing control, displaying the video subtitle of the video in the video playing interface, wherein the video subtitle is used to represent the voice information in the video;
highlighting at least one keyword in the video subtitle based on screening information, wherein the screening information is used to select key information of interest to an object, and the at least one keyword is used to represent the key information in the video.
According to another aspect of the embodiments of the present disclosure, there is provided an information display device including:
the first display unit is configured to execute displaying of a video playing interface, and a video and subtitle viewing control is displayed in the video playing interface;
a second display unit configured to, in response to a trigger operation on the subtitle viewing control, display the video subtitle of the video in the video playing interface, wherein the video subtitle is used to represent the voice information in the video;
the second display unit is further configured to perform highlighting of at least one keyword in the video subtitle based on screening information for selecting key information of interest to an object, the at least one keyword being used to represent the key information in the video.
In some embodiments, the second display unit includes:
an acquisition subunit configured to acquire the screening information, the screening information including at least one of a video type of the video, historical viewing information, historical publishing information, and a search term, the historical viewing information including videos viewed by the object in a historical time period, the historical publishing information including videos published by the object in the historical time period, and the search term being the term used when searching for the video;
a display subunit configured to highlight the at least one keyword based on the screening information.
In some embodiments, the obtaining subunit is configured to perform determining a video type of the video based on a video copy of the video;
the display subunit configured to perform highlighting the at least one keyword related to the video type.
In some embodiments, the obtaining subunit is configured to perform obtaining the historical viewing information, where the historical viewing information is used to represent key information of videos viewed by the object in the historical time period;
the display subunit is configured to perform highlighting of the at least one keyword related to the historical viewing information.
In some embodiments, the display subunit is configured to determine target viewing information based on the historical viewing information, the target viewing information including at least one of videos liked, collected, and shared by the object during the historical time period; and to highlight the at least one keyword related to the target viewing information.
In some embodiments, the obtaining subunit is configured to obtain the historical publishing information, where the historical publishing information is used to represent key information of videos published by the object in the historical time period;
the display subunit is configured to highlight the at least one keyword related to the historical publishing information.
In some embodiments, the obtaining subunit is configured to perform obtaining the search term;
the display subunit configured to perform highlighting the at least one keyword related to the search term.
In some embodiments, the second display unit is configured to perform displaying the video subtitle on an upper layer of the video in the video playing interface in response to a triggering operation on the subtitle viewing control, wherein the video subtitle overlaps with the video;
the second display unit is further configured to execute, in response to a triggering operation on the subtitle viewing control, displaying the video subtitle in a subtitle area of the video playing interface, where the video is located in a video area of the video playing interface, and the subtitle area is not overlapped with the video area.
In some embodiments, the apparatus further comprises:
an acquisition unit configured to, in response to a sharing operation on the at least one keyword, acquire a video cover of the video;
the third display unit is configured to display a shared target picture in an information interaction interface based on the video cover and the at least one keyword, the information interaction interface is used for realizing information interaction between different objects, and the video cover and the at least one keyword are displayed in the target picture.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing the processor executable program code;
wherein the processor is configured to execute the program code to implement the information display method described above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium in which program codes, when executed by a processor of an electronic device, enable the electronic device to perform the above-described information display method.
According to another aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the information display method described above.
The embodiment of the disclosure provides an information display method. By triggering a subtitle viewing control displayed in a video playing interface, the video subtitle of a video can be displayed, so that a user can acquire all of the information expressed by the voice in the video at one time without watching the complete video, which saves time. On the basis of displaying the video subtitle, at least one keyword in the video subtitle can be highlighted based on screening information, so that the user can quickly find the key information of interest, which improves the efficiency of information acquisition.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram illustrating an implementation environment of an information display method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of displaying information according to an example embodiment.
Fig. 3 is a flow chart illustrating another method of displaying information according to an example embodiment.
FIG. 4 is a schematic diagram illustrating a video playback interface in accordance with an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating another video playback interface in accordance with an illustrative embodiment.
FIG. 6 is a schematic diagram illustrating another video playback interface in accordance with an illustrative embodiment.
Fig. 7 is a schematic diagram illustrating video subtitle generation according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an information display apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating an information display apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating a terminal according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this disclosure are authorized by the user or sufficiently authorized by various parties, and the collection, use, and processing of the relevant data requires compliance with relevant laws and regulations and standards in relevant countries and regions. For example, the historical viewing information and the historical publishing information referred to in this disclosure are obtained with sufficient authorization.
Fig. 1 is a schematic diagram illustrating an implementation environment of an information display method according to an exemplary embodiment. Taking an electronic device as an example provided as a terminal, referring to fig. 1, the implementation environment specifically includes: a terminal 101 and a server 102.
The terminal 101 is at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and the like. An application program supporting video playing is installed on the terminal 101, and a user can log in to the application program through the terminal 101 to obtain a service provided by the application program. The service may be a video watching service, a video shooting service, a video publishing service, or the like, which is not limited in this disclosure. The terminal 101 can be connected to the server 102 through a wireless network or a wired network, and can obtain video subtitles of videos from the server 102.
The terminal 101 generally refers to one of a plurality of terminals, and the present embodiment is illustrated with the terminal 101. Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be several, or the number of the terminals may be several tens or hundreds, or more, and the number of the terminals and the type of the device are not limited in the embodiments of the present disclosure.
The server 102 is at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. The server 102 can be connected to the terminal 101 and other terminals through a wireless network or a wired network. When the subtitle viewing control is triggered, the server 102 can receive a subtitle viewing request sent by the terminal 101 and return the video subtitle of the video to the terminal 101, and the terminal 101 displays the video subtitle to the user, so that the user can find the key information of interest in the video subtitle. In some embodiments, the number of servers may be greater or fewer, which is not limited in the embodiments of the present disclosure. Of course, the server 102 may also include other functional servers to provide more comprehensive and diverse services.
Fig. 2 is a flowchart illustrating an information display method according to an exemplary embodiment, referring to fig. 2, the information display method being applied to a terminal, the information display method including the steps of:
in step 201, the terminal displays a video playing interface, where a video and a subtitle viewing control are displayed in the video playing interface.
In the embodiment of the disclosure, the terminal displays a video in the video playing interface. The video may be a video being played, a video to be played, or a video whose playback is paused, which is not limited in this disclosure. A subtitle viewing control is also displayed in the video playing interface. When triggered, the subtitle viewing control provides the user with the video subtitle of the video for viewing. The subtitle viewing control may be a square control, a circular control, an irregular control, or the like, which is not limited in this disclosure.
In step 202, in response to the triggering operation of the subtitle viewing control, the terminal displays a video subtitle of a video in the video playing interface, where the video subtitle is used to represent voice information in the video.
In the embodiment of the disclosure, the terminal can display the video subtitle of the video in the video playing interface when the subtitle viewing control is triggered. The video subtitle is obtained by recognizing the speech in the video and can represent the voice information of the video. By viewing the video subtitle, the user can quickly learn the voice information of the video.
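The subtitle generation mentioned here (recognizing the speech in the video and presenting it as text, also illustrated later in fig. 7) can be sketched in a few lines. The `(start, end, text)` segment format and the `segments_to_subtitle` helper below are hypothetical illustrations, not part of the patent:

```python
# Hypothetical sketch: turn recognized speech segments into a subtitle text.
# The tuple format (start_sec, end_sec, text) is an assumed recognizer output.

def segments_to_subtitle(segments):
    """segments: list of (start_sec, end_sec, text) from a speech recognizer.
    Returns one timestamped line per segment."""
    return "\n".join(f"[{start:05.1f}] {text}" for start, _end, text in segments)

if __name__ == "__main__":
    print(segments_to_subtitle([(0.0, 2.5, "hello"), (2.5, 5.0, "world")]))
```

A real system would segment and punctuate the recognizer output; this sketch only shows the segments-to-text step.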
In step 203, the terminal highlights at least one keyword in the video subtitle based on the screening information, where the screening information is used to select the key information of interest to the object, and the at least one keyword is used to represent the key information in the video.
In the embodiment of the present disclosure, the object refers to the user account currently logged in on the terminal. The screening information may be set by the user through the user account, or may be determined by the terminal according to the history information of the user account, which is not limited in the embodiments of the present disclosure. According to the screening information, the terminal can screen out, from the video subtitle, key information that the user may care about; by highlighting it in the form of keywords, the user can quickly find the key information of interest. The highlighting may be color highlighting, bolding, underlining, or the like, which is not limited in this disclosure.
The terminal may perform the above steps 202 and 203 at the same time. That is, in response to the triggering operation of the subtitle viewing control, the terminal displays the video subtitles of the video in the video playing interface and highlights at least one keyword in the video subtitles.
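Steps 202 and 203 performed together amount to fetching the subtitle text and wrapping the screened keywords in highlight markers before display. The following is a minimal sketch; the function names and the `**` marker convention are invented for illustration, and a real terminal would apply styling rather than text markers:

```python
# Minimal sketch of steps 202-203 combined. All names here are illustrative.

def highlight_keywords(subtitle: str, keywords: list, marker: str = "**") -> str:
    """Wrap each keyword occurrence in highlight markers.
    Longest keywords first, so multi-word keywords are matched whole."""
    for kw in sorted(keywords, key=len, reverse=True):
        subtitle = subtitle.replace(kw, f"{marker}{kw}{marker}")
    return subtitle

def on_subtitle_view_triggered(subtitle: str, screened_keywords: list) -> str:
    # Display the subtitle with highlights in a single step.
    return highlight_keywords(subtitle, screened_keywords)
```

For example, `on_subtitle_view_triggered("best budget phones", ["budget phones"])` yields the subtitle with the keyword wrapped in markers.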
The embodiment of the disclosure provides an information display method. By triggering a subtitle viewing control displayed in a video playing interface, the video subtitle of a video can be displayed, so that a user can obtain all of the information expressed by the voice in the video at one time without watching the complete video, which saves time. On the basis of displaying the video subtitle, at least one keyword in the video subtitle can be highlighted based on screening information, so that the user can quickly find the key information of interest, which improves the efficiency of information acquisition.
In some embodiments, highlighting at least one keyword in the video subtitle based on the screening information includes:
acquiring the screening information, where the screening information includes at least one of the video type of the video, historical viewing information, historical publishing information, and a search term; the historical viewing information includes videos viewed by the object in a historical time period, the historical publishing information includes videos published by the object in the historical time period, and the search term is the term used when searching for the video;
at least one keyword is highlighted based on the screening information.
According to the solution provided by this embodiment of the disclosure, the screening information is obtained from at least one of the video type, the historical viewing information, the historical publishing information, and the search term of the video, so that the screening information covers at least one dimension. The screened keywords can then be highlighted according to the information of that dimension, so that the user can quickly find the key information of interest, which improves information acquisition efficiency; the highlighted keywords are also more accurate and better fit the user's needs, which improves information acquisition accuracy.
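The four screening dimensions listed above can be modeled as one record with optional fields, whose keyword pool merges whichever dimensions are present. This is a hypothetical sketch of the data shape, not the patent's implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScreeningInfo:
    """One record per object; each field is one screening dimension."""
    video_type: Optional[str] = None
    history_viewed: list = field(default_factory=list)     # keywords of viewed videos
    history_published: list = field(default_factory=list)  # keywords of published videos
    search_term: Optional[str] = None

    def keyword_pool(self) -> set:
        """Merge whichever dimensions are present into one keyword pool."""
        pool = set(self.history_viewed) | set(self.history_published)
        if self.video_type:
            pool.add(self.video_type)
        if self.search_term:
            pool.add(self.search_term)
        return pool
```

Any subset of fields may be populated, matching the "at least one of" wording above.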
In some embodiments, obtaining the screening information comprises:
determining the video type of the video based on the video copy of the video;
highlighting at least one keyword based on the screening information, including:
at least one keyword associated with the video type is highlighted.
According to the solution provided by this embodiment of the disclosure, the video type of the video is determined from the video copy of the video, and the key information in the video subtitle is screened according to the video type, so that the highlighted keywords fit the video type. Since a user watching a video of a given type generally wants to obtain information related to that type, highlighting at least one keyword related to the video type lets the user quickly find the key information of interest and improves information acquisition efficiency; the highlighted keywords are also more accurate, better fit the user's needs, and improve information acquisition accuracy.
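Screening by video type, as described above, can be sketched as a lookup in a per-type vocabulary. The vocabulary contents and function below are illustrative assumptions; a real system might learn the vocabulary rather than hard-code it:

```python
# Illustrative per-type vocabularies (assumed, not from the patent).
TYPE_VOCAB = {
    "cooking": {"recipe", "ingredients", "simmer"},
    "fitness": {"reps", "sets", "warm-up"},
}

def keywords_for_type(candidates, video_type):
    """Keep only the candidate keywords related to the video's type."""
    vocab = TYPE_VOCAB.get(video_type, set())
    return [w for w in candidates if w in vocab]
```

An unknown video type yields no type-related keywords, leaving the other screening dimensions to apply.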
In some embodiments, obtaining the screening information comprises:
acquiring historical viewing information, wherein the historical viewing information is used for representing key information of videos viewed by an object in a historical time period;
highlighting at least one keyword based on the screening information, including:
highlighting at least one keyword associated with the historical viewing information.
According to the solution provided by this embodiment of the disclosure, the key information in the video subtitle is screened using the historical viewing information, so that the highlighted keywords fit the historical viewing information. Since the historical viewing information can represent the key information of the videos watched by the user in the historical time period, highlighting at least one keyword related to the historical viewing information lets the user quickly find the key information of interest and improves information acquisition efficiency; the highlighted keywords are also more accurate, better fit the user's needs, and improve information acquisition accuracy.
In some embodiments, highlighting at least one keyword associated with the historical viewing information comprises:
determining target viewing information based on the historical viewing information, where the target viewing information includes at least one of videos liked, collected, and shared by the object in the historical time period;
at least one keyword associated with the target viewing information is highlighted.
According to the solution provided by this embodiment of the disclosure, the key information in the video subtitle is screened using the target viewing information, so that the highlighted keywords fit the target viewing information. Since the target viewing information includes at least one of videos liked, collected, and shared by the user in the historical time period, it can represent the videos the user cares about.
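Deriving the target viewing information from the historical viewing information, as above, reduces to filtering for videos the object liked, collected, or shared and pooling their keywords. A minimal sketch, with an assumed per-video dict shape:

```python
def target_viewing_keywords(history):
    """history: list of dicts such as
    {"keywords": [...], "liked": bool, "collected": bool, "shared": bool}.
    Pool keywords only from videos the object interacted with."""
    return {
        kw
        for video in history
        if video.get("liked") or video.get("collected") or video.get("shared")
        for kw in video.get("keywords", [])
    }
```

Videos merely viewed, with no like/collect/share interaction, contribute nothing to the pool.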
In some embodiments, obtaining the screening information includes:
obtaining the historical publishing information, where the historical publishing information is used to represent key information of videos published by the object in the historical time period;
highlighting at least one keyword based on the screening information includes:
highlighting at least one keyword related to the historical publishing information.
According to the solution provided by this embodiment of the disclosure, the key information in the video subtitle is screened using the historical publishing information, so that the highlighted keywords fit the historical publishing information. Since the historical publishing information can represent the key information of the videos published by the user in the historical time period, highlighting at least one keyword related to the historical publishing information lets the user quickly find the key information of interest and improves information acquisition efficiency; the highlighted keywords are also more accurate, better fit the user's needs, and improve information acquisition accuracy.
In some embodiments, obtaining the screening information includes:
obtaining the search term;
highlighting at least one keyword based on the screening information includes:
highlighting at least one keyword related to the search term.
According to the solution provided by this embodiment of the disclosure, the key information in the video subtitle is screened using the search term, so that the highlighted keywords fit the search term. Since the search term is the term the user used when searching for the video, it indicates the information the user cares about; highlighting at least one keyword related to the search term therefore lets the user quickly find the key information of interest and improves information acquisition efficiency, and the highlighted keywords are more accurate, better fit the user's needs, and improve information acquisition accuracy.
In some embodiments, in response to a triggering operation on the subtitle viewing control, displaying a video subtitle of the video in the video playing interface, including:
in response to a trigger operation on the subtitle viewing control, displaying the video subtitle on an upper layer of the video in the video playing interface, where the video subtitle overlaps the video; or,
in response to a trigger operation on the subtitle viewing control, displaying the video subtitle in a subtitle area of the video playing interface, where the video is located in a video area of the video playing interface and the subtitle area does not overlap the video area.
According to the scheme provided by the embodiment of the disclosure, under the condition that the subtitle viewing control is triggered, the video subtitle of the video can be displayed on the upper layer of the video, or the video and the video subtitle are displayed in different areas, so that the display form of the video subtitle is enriched.
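The two display forms described above (a subtitle layer over the video, or disjoint subtitle and video areas) can be sketched as a layout computation. The half-screen split is an assumption for illustration; the patent does not fix the proportions:

```python
def subtitle_layout(screen_w: int, screen_h: int, overlay: bool):
    """Return (video_rect, subtitle_rect) as (x, y, w, h) tuples.
    overlay=True  -> subtitle layer covers the video (same rect, drawn above);
    overlay=False -> disjoint areas: video on top, subtitle below (assumed 50/50)."""
    if overlay:
        video = (0, 0, screen_w, screen_h)
        return video, video
    half = screen_h // 2
    video = (0, 0, screen_w, half)
    subtitle = (0, half, screen_w, screen_h - half)
    return video, subtitle
```

In the non-overlay case the subtitle area starts exactly where the video area ends, so the two never overlap.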
In some embodiments, the method further comprises:
in response to a sharing operation on the at least one keyword, obtaining a video cover of the video;
displaying a shared target picture in an information interaction interface based on the video cover and the at least one keyword, where the information interaction interface is used to realize information interaction between different objects, and the video cover and the at least one keyword are displayed in the target picture.
According to the solution provided by this embodiment of the disclosure, by triggering a sharing operation on the at least one keyword, the at least one keyword and the video cover of the video can be combined into a target picture, and the target picture can be shared. This allows the user to share the key information of interest with others, enriches the forms of human-computer interaction, and meets the user's need for information sharing.
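Composing the shared target picture from the video cover and the keywords can be sketched as building a render spec that an image compositor would then draw. The spec shape below is hypothetical:

```python
def build_share_picture(cover: str, keywords):
    """Compose a render spec for the shared target picture: the video cover as
    background with the screened keywords drawn on top. Spec shape is assumed."""
    return {
        "background": cover,
        "captions": [{"text": kw, "style": "highlight"} for kw in keywords],
    }
```

The terminal would pass such a spec to its rendering layer and post the resulting image to the information interaction interface.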
Fig. 2 shows the basic flow of the present disclosure. The solution provided by the present disclosure is further explained below based on another implementation. Fig. 3 is a flowchart of another information display method according to an exemplary embodiment. Taking the electronic device provided as a terminal as an example, referring to fig. 3, the information display method includes:
in step 301, the terminal displays a video playing interface, where a video and a subtitle viewing control are displayed in the video playing interface.
In the disclosed embodiment, the terminal is a device on which a user watching a video logs in. The video playing interface is used for playing videos. The video playing interface not only displays videos, but also displays a subtitle viewing control. The subtitle viewing control may be displayed at the top, bottom, left side, or the like in the video playing interface, which is not limited in this disclosure. At least one of a video file, a user identifier, a like control, a comment control, a collection control, and a sharing control can also be displayed in the video playing interface, which is not limited in the embodiment of the present disclosure. The user identifier may be an account ID (Identity Document) of the user or a user icon, which is not limited in the embodiment of the present disclosure.
For example, FIG. 4 is a schematic diagram illustrating a video playback interface in accordance with an exemplary embodiment. Referring to fig. 4, a video, a subtitle viewing control, a video file, a user avatar, a like control, a comment control, a favorite control, and a share control are displayed in the video playing interface. The video subtitle viewing control is positioned at the bottom of the video playing interface.
In step 302, in response to the triggering operation of the subtitle viewing control, the terminal displays a video subtitle of the video in the video playing interface, wherein the video subtitle is used for representing voice information in the video.
In the embodiment of the disclosure, after the terminal detects that the subtitle control is triggered, the video subtitle of the video is displayed in the video playing interface. The video subtitle may be displayed on an upper layer of the video, or may be displayed in a different area within the video playing interface separately from the video, which is not limited in this disclosure.
In some embodiments, the video subtitle is displayed on an upper layer of the video. Accordingly, this step 302 includes the following step: in response to a trigger operation on the subtitle viewing control, the terminal displays the video subtitle on an upper layer of the video in the video playing interface, where the video subtitle overlaps the video. The upper layer on which the video subtitle is located may be a fully transparent layer, a semi-transparent layer, an opaque layer, or the like, which is not limited in this disclosure. The color of the upper layer may be white, black, gray, or the like, which is not limited in the embodiments of the present disclosure. According to the solution provided by this embodiment of the disclosure, displaying the video subtitle on an upper layer of the video provides a larger display area for the subtitle, so more of the video subtitle can be shown at one time; this makes it convenient for the user to learn, at one time, all of the information expressed by the voice in the video, which saves time and improves information acquisition efficiency.
For example, FIG. 5 is a schematic diagram illustrating another video playing interface according to an exemplary embodiment. Referring to fig. 5, the video subtitle of the video is displayed in the area of the video playing interface where the video was previously displayed, and the subtitle viewing control at the bottom of the video playing interface is switched to a video viewing control. In response to the triggering operation on the video viewing control, the terminal displays the video in the video playing interface, that is, switches from the video subtitle back to the video.
In some embodiments, the video subtitle is displayed in an area of the video playing interface different from the video. Accordingly, this step 302 includes the following steps: in response to the triggering operation on the subtitle viewing control, the terminal displays the video subtitle in a subtitle area of the video playing interface, where the video is located in a video area of the video playing interface and the subtitle area does not overlap the video area. The video area may be located below or above the subtitle area, which is not limited in the embodiment of the present disclosure. According to the scheme provided by the embodiment of the disclosure, the video subtitle and the video are displayed in different areas of the video playing interface, so that both can be presented to the user at the same time. The user can quickly learn all the information expressed by the voice in the video, which improves information acquisition efficiency, and can also view the part of the video corresponding to the subtitle being read, thereby obtaining more video information.
For example, FIG. 6 is a schematic diagram illustrating another video playing interface according to an exemplary embodiment. Referring to fig. 6, in the video playing interface, the video area is located in the upper part of the video playing interface, and the subtitle area is located in the lower part. That is, the terminal can display the video in the upper part of the video playing interface and display the video subtitle in the lower part.
In the embodiment of the present disclosure, the video subtitle may automatically slide upwards as the video progresses, or may slide up and down in response to a sliding operation on the video subtitle, which is not limited in the embodiment of the present disclosure. Optionally, while the video subtitle automatically slides upwards with the progress of the video, the terminal can highlight the subtitle entry corresponding to the current progress of the video. The highlighting may be a highlight color, bolding, underlining, or the like, which is not limited in the embodiment of the present disclosure. According to the scheme provided by the embodiment of the disclosure, highlighting the subtitle entry corresponding to the video progress associates the video subtitle with the video and draws the user's attention to the video subtitle, so that the user acquires the information of the video from the video subtitle.
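The progress-synchronized highlighting described above can be sketched as a lookup of the subtitle entry whose start time most recently passed. This is a minimal illustrative sketch, not the disclosure's implementation; the `Subtitle` record and function name are assumptions:

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class Subtitle:
    start: float  # seconds from the beginning of the video
    text: str

def current_subtitle_index(subtitles, position):
    """Return the index of the subtitle entry to highlight at playback time
    `position`: the latest entry whose start time is not after `position`,
    or -1 before the first entry starts. Assumes entries sorted by start."""
    starts = [s.start for s in subtitles]
    return bisect_right(starts, position) - 1

subs = [Subtitle(0.0, "Hello"), Subtitle(2.5, "Crack two eggs"), Subtitle(6.0, "Add salt")]
print(current_subtitle_index(subs, 3.0))  # → 1 ("Crack two eggs")
```

As the video progresses, the terminal would re-run this lookup and move the highlight (and the auto-scroll position) to the returned entry.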
In the embodiment of the present disclosure, the video subtitle may be generated based on the video before the video is published, or may be generated based on the video after the video is published, which is not limited by the embodiment of the present disclosure.
In some embodiments, the video subtitle is generated based on the video before the video is published. Accordingly, the process of generating the video subtitle is as follows: the terminal displays a video editing interface, in which the video to be edited and a subtitle generating control are displayed. In response to a triggering operation on the subtitle generating control, the terminal displays a subtitle editing interface, in which the recognized video subtitle is displayed. In response to an editing operation on the video subtitle, the terminal updates and displays the video subtitle. In response to the completion of the video editing, the terminal obtains the final video and the video subtitle. The terminal here is the device logged in by the user who publishes the video. According to the scheme provided by the embodiment of the disclosure, the video subtitle is generated based on the video before the video is published, so that when a user watches the video, the terminal that this viewing user logs in on can directly acquire the generated video subtitle for display, without performing voice recognition on the video. All the information expressed by the voice in the video can thus be acquired at one time, which saves time and improves information acquisition efficiency.
For example, fig. 7 is a schematic diagram illustrating generation of a video subtitle according to an exemplary embodiment. Fig. 7 (a) exemplarily shows a video editing interface, at the bottom of which a subtitle generating control is displayed. In response to a triggering operation on the subtitle generating control, the terminal displays a subtitle editing interface. Fig. 7 (b) exemplarily shows the subtitle editing interface: the upper half displays the video, and the lower half displays the video subtitle. For any subtitle in the video subtitle, in response to a triggering operation on the text editing control of the subtitle, the terminal edits the text of the subtitle; or, in response to a triggering operation on the time editing control of the subtitle, the terminal edits the time of the subtitle so as to adjust the progress of the subtitle. A subtitle mode control is also displayed at the top of the video editing interface. In response to a triggering operation on the subtitle mode control, the terminal displays a subtitle preview interface. Fig. 7 (c) exemplarily shows the subtitle preview interface, which is used for previewing the edited video subtitle. The terminal here is the device logged in by the user who publishes the video, and is different from the terminal that executes the information display method in the embodiment of the present disclosure.
In step 303, the terminal acquires screening information. The screening information includes at least one of a video type of the video, historical viewing information, historical publishing information, and a search term, where the historical viewing information includes videos watched by the object during a historical period, the historical publishing information includes videos published by the object during the historical period, and the search term is the term used when searching for the video.
In the embodiment of the disclosure, the terminal can acquire, as the screening information, at least one of the video type of the video, the historical viewing information, the historical publishing information, and the search term; the terminal can then screen out the key information concerned by the user from the video subtitle of the video according to the screening information. Because the screening information includes information of at least one dimension and different dimensions can be combined with each other, the content of the screening information is not limited in the embodiment of the present disclosure. Based on different contents of the screening information, five ways of obtaining the screening information are exemplarily shown below.
In a first mode, the screening information is the video type of the video. The video type may be a food type, a sports type, a movie type, a music type, or the like, which is not limited in the embodiment of the present disclosure. After displaying the video subtitle of the video, the terminal can acquire the video copy of the video. The video copy may include at least one of a video tag, a related topic of the video, and a summary of the video content, which is not limited in the embodiment of the present disclosure. The terminal then determines the video type of the video according to the video copy of the video.
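The first mode — inferring the video type from the video copy — can be sketched with a toy keyword vocabulary. The type names and word lists below are illustrative assumptions, not from the disclosure:

```python
# Toy vocabularies mapping copy words to video types (assumed for illustration).
FOOD_WORDS = {"tomato", "egg", "recipe", "fry", "salt"}
SPORTS_WORDS = {"squat", "swing", "rope", "skipping", "workout"}

def video_type_from_copy(copy_text: str) -> str:
    """Classify a video by intersecting its copy (tag/topic/summary text)
    with per-type word sets. Real systems would use richer signals."""
    words = set(copy_text.lower().split())
    if words & FOOD_WORDS:
        return "food"
    if words & SPORTS_WORDS:
        return "sports"
    return "other"

print(video_type_from_copy("how to fry tomato and egg"))  # → food
```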
In a second mode, the screening information is the video watching condition of the user in the historical period. Accordingly, after displaying the video subtitle of the video, the terminal can acquire the historical viewing information. The historical viewing information is used for representing key information of the videos watched by the object in the historical period. The object is the account logged in on the terminal and is used for representing the user. The terminal can determine, from the videos the user watched in the historical period, the information the user habitually pays attention to, so that the key information concerned by the user can subsequently be found in the video subtitle.
In a third mode, the screening information is the video interaction condition of the user in the historical period. The video interaction condition refers to at least one of video liking, video favoriting, or video sharing. Accordingly, the terminal determines target viewing information based on the historical viewing information. The target viewing information includes at least one of videos liked by the object, videos favorited by the object, and videos shared by the object in the historical period. Such interaction indicates that the user pays more attention to the information in the videos interacted with. The terminal then takes the target viewing information as the screening information, so that the key information the user may be concerned with can subsequently be found in the video subtitle.
In a fourth mode, the screening information is the video publishing condition of the user in the historical period. Accordingly, after displaying the video subtitle of the video, the terminal can acquire the historical publishing information. The historical publishing information is used for representing key information of the videos published by the object in the historical period. The terminal can determine, from the videos the user published in the historical period, the information the user habitually pays attention to, so that the key information concerned by the user can subsequently be found in the video subtitle.
In a fifth mode, the screening information is a search term. In the case where the video is retrieved through a search term, the user has searched for videos related to the search term through the terminal, so the search term can represent information of interest to the user. After displaying the video subtitle of the video, the terminal can acquire the search term and take it as the screening information, so that the key information the user may be concerned with can subsequently be found in the video subtitle.
The terminal can also combine any of the above modes to serve as screening information, so that key information concerned by a user is screened from the video subtitles subsequently according to the screening information, and further description is omitted here.
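Since the modes above can be combined arbitrarily, the screening information can be sketched as one record with optional dimensions. This is a minimal illustrative sketch; the record name and field names are assumptions, not from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ScreeningInfo:
    """One combined screening-information record; any subset of the five
    dimensions may be populated."""
    video_type: Optional[str] = None
    historical_viewing: List[str] = field(default_factory=list)    # watched videos
    target_viewing: List[str] = field(default_factory=list)        # liked/favorited/shared
    historical_publishing: List[str] = field(default_factory=list) # published videos
    search_term: Optional[str] = None

    def dimensions(self):
        """Names of the dimensions actually populated."""
        return [name for name, value in self.__dict__.items() if value]

# A two-dimension combination: video type plus search term.
info = ScreeningInfo(video_type="food", search_term="tomato")
print(info.dimensions())  # → ['video_type', 'search_term']
```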
In step 304, the terminal highlights at least one keyword based on the screening information.
In this embodiment of the present disclosure, the terminal can screen at least one keyword from the video subtitle based on the above screening information. The terminal then highlights the at least one keyword so that the user can quickly find the key information in the video subtitle.
In some embodiments, because the content of the screening information differs, the keywords obtained by the corresponding screening differ, that is, the highlighted keywords differ. Five ways of highlighting keywords are exemplarily shown below.
In a first mode, the screening information is the video type of the video. Accordingly, the terminal highlights at least one keyword related to the video type. According to the first mode provided by the embodiment of the disclosure, the video type of the video is determined from the video copy of the video, and the key information in the video subtitle is screened according to the video type, so that the highlighted keywords fit the video type of the video. Because a user who watches a video of a given type generally wants to obtain information related to that type from the video, highlighting at least one keyword related to the video type enables the user to quickly find the key information of interest, makes the highlighted keywords more accurate and better matched to the user's needs, and improves both the efficiency and the accuracy of information acquisition.
For example, the topic carried in the video copy of the video is "how to fry tomatoes with eggs". The terminal determines, according to the video copy, that the video type of the video is the food type. The terminal then highlights at least one keyword related to food. The at least one keyword includes food-related words such as tomato, egg, oil, salt, and chopped green onion.
In a second mode, the screening information is the video watching condition of the user in the historical period. Accordingly, the terminal highlights at least one keyword related to the historical viewing information. According to the second mode provided by the embodiment of the disclosure, the key information in the video subtitle is screened through the historical viewing information, so that the highlighted keywords fit the historical viewing information. Because the historical viewing information can represent the key information of the videos watched by the user in the historical period, highlighting at least one keyword related to it enables the user to quickly find the key information of interest, makes the highlighted keywords more accurate and better matched to the user's needs, and improves both the efficiency and the accuracy of information acquisition.
For example, most of the videos watched by the user in the historical period are sports videos, that is, the historical viewing information includes a plurality of sports videos. The terminal can screen out, from the video subtitle, at least one keyword related to sports according to the historical viewing information, and then highlight the at least one keyword. The at least one keyword may be a sports-related action, such as arm swing, squat, or leg curl.
In a third mode, the screening information is the video interaction condition of the user in the historical period. Accordingly, the terminal highlights at least one keyword related to the target viewing information. According to the third mode provided by the embodiment of the disclosure, the key information in the video subtitle is screened through the target viewing information, so that the highlighted keywords fit the target viewing information. Because the target viewing information includes at least one of the videos liked, favorited, and shared by the user in the historical period and can represent the videos the user is concerned with, highlighting at least one keyword related to it enables the user to quickly find the key information of interest, makes the highlighted keywords more accurate and better matched to the user's needs, and improves both the efficiency and the accuracy of information acquisition.
For example, the videos watched by the user in the historical period include food videos, sports videos, movie and television videos, and the like. The terminal determines, according to the user's viewing in the historical period, that most of the videos favorited by the user in the historical period are sports videos. The terminal can then screen the video subtitle for at least one keyword related to sports and highlight the at least one keyword.
In a fourth mode, the screening information is the video publishing condition of the user in the historical period. Accordingly, the terminal highlights at least one keyword related to the historical publishing information. According to the fourth mode provided by the embodiment of the disclosure, the key information in the video subtitle is screened through the historical publishing information, so that the highlighted keywords fit the historical publishing information. Because the historical publishing information can represent the key information of the videos published by the user in the historical period, highlighting at least one keyword related to it enables the user to quickly find the key information of interest, makes the highlighted keywords more accurate and better matched to the user's needs, and improves both the efficiency and the accuracy of information acquisition.
For example, most of the videos published by the user in the historical period are videos of meals cooked in daily life. The terminal determines, according to the videos published by the user in the historical period, that the information concerned by the user is mainly food-related information. The terminal can then screen, from the video subtitle, at least one keyword related to food and highlight the at least one keyword.
In a fifth mode, the screening information is a search term. Accordingly, the terminal highlights at least one keyword related to the search term. According to the fifth mode provided by the embodiment of the disclosure, the key information in the video subtitle is screened through the search term, so that the highlighted keywords fit the search term. Because the search term is the term used when searching for the video and can indicate the information the user is concerned with, highlighting at least one keyword related to the search term enables the user to quickly find the key information of interest, makes the highlighted keywords more accurate and better matched to the user's needs, and improves both the efficiency and the accuracy of information acquisition.
For example, the search term is "rope skipping". The terminal determines, according to the search term, that the information concerned by the user is information related to rope skipping. The information may be a rope skipping tutorial, how to lose weight by skipping rope, rope skipping competitions, or the like. The terminal can screen, from the video subtitle, at least one keyword related to rope skipping and highlight the at least one keyword.
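Once the keywords are screened by any of the five modes, marking them in the subtitle text can be sketched as a pattern substitution. Here `**…**` merely stands in for whatever visual highlight (color, bolding, underlining) the interface applies, and the function name is an illustrative assumption:

```python
import re

def highlight(subtitle: str, keywords) -> str:
    """Wrap every occurrence of a screened keyword in highlight markers.
    Longer keywords are matched first so overlapping terms behave sensibly."""
    if not keywords:
        return subtitle
    pattern = "|".join(re.escape(k) for k in sorted(keywords, key=len, reverse=True))
    return re.sub(pattern, lambda m: f"**{m.group(0)}**", subtitle)

print(highlight("add the tomato then the egg", {"tomato", "egg"}))
# → add the **tomato** then the **egg**
```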
In some embodiments, the terminal can share the screened keywords with other objects. Correspondingly, in response to a sharing operation on the at least one keyword, the terminal acquires a video cover of the video. The terminal then displays, based on the video cover and the at least one keyword, the shared target picture in an information interaction interface. The information interaction interface is used for realizing information interaction between different objects. The video cover and the at least one keyword are displayed in the target picture. The video cover may be any frame of image in the video, which is not limited in the embodiment of the present disclosure. According to the scheme provided by the embodiment of the disclosure, triggering the sharing operation on the at least one keyword synthesizes the at least one keyword and the video cover of the video into the target picture and shares the target picture, so that the user can share the key information of interest with others, which enriches human-computer interaction modes and meets the user's need to share information.
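Assembling the target picture's content — a cover frame plus the selected keywords — can be sketched as below before any actual image rendering. All names are illustrative assumptions; real rendering would composite the keywords onto the cover image:

```python
def build_target_picture(cover_frame_index: int, keywords):
    """Collect the content of the shared target picture: the chosen cover
    frame (any frame of the video may serve as the cover) and the keywords
    to overlay on it when the picture is rendered."""
    return {
        "cover_frame": cover_frame_index,
        "keywords": sorted(keywords),  # deterministic overlay order
    }

pic = build_target_picture(0, {"tomato", "egg"})
print(pic["keywords"])  # → ['egg', 'tomato']
```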
In some embodiments, the terminal is also able to search, based on any keyword, for videos related to that keyword. Correspondingly, for any keyword, when the keyword is selected, in response to a triggering operation on a search control in the video playing interface, the terminal displays at least one video related to the keyword. According to the scheme provided by the embodiment of the disclosure, for any keyword, searching for the keyword can display at least one video related to the keyword, so that the user can acquire more of the key information of interest; the operation is simple, and the information acquisition efficiency is improved.
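The keyword search step can be sketched as a filter over a video catalog, matching the keyword against each video's copy. The catalog records and function name are illustrative assumptions, not from the disclosure:

```python
def search_videos(videos, keyword: str):
    """Return the videos whose copy mentions the selected keyword
    (case-insensitive substring match, for illustration only)."""
    kw = keyword.lower()
    return [v for v in videos if kw in v["copy"].lower()]

catalog = [
    {"id": 1, "copy": "Rope skipping tutorial for beginners"},
    {"id": 2, "copy": "Tomato and egg recipe"},
]
print([v["id"] for v in search_videos(catalog, "rope skipping")])  # → [1]
```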
The embodiment of the disclosure provides an information display method. By triggering the subtitle viewing control displayed in the video playing interface, the video subtitle of the video can be displayed, so that the user can acquire, at one time, all the information expressed by the voice in the video without watching the complete video, which saves time. On the basis of displaying the video subtitle, at least one keyword in the video subtitle can be highlighted based on the screening information, so that the user can quickly find the key information of interest, which improves information acquisition efficiency.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present disclosure, and are not described in detail herein.
Fig. 8 is a block diagram illustrating an information display apparatus according to an exemplary embodiment. Referring to fig. 8, the information display device includes: a first display unit 801 and a second display unit 802.
A first display unit 801 configured to execute displaying a video playing interface in which a video and a subtitle viewing control are displayed;
a second display unit 802 configured to perform, in response to a trigger operation on the subtitle viewing control, displaying a video subtitle of a video in the video playing interface, where the video subtitle is used to represent voice information in the video;
the second display unit 802 is further configured to perform highlighting of at least one keyword in the video subtitle based on screening information, where the screening information is used to select the key information concerned by the object, and the at least one keyword is used to represent the key information in the video.
The embodiment of the disclosure provides an information display device, which can display video subtitles by triggering a subtitle viewing control displayed in a video playing interface, so that a user can acquire all information expressed by voice in a video at one time without watching a complete video, time is saved, and at least one keyword in the video subtitles can be highlighted based on screening information on the basis of displaying the video subtitles, so that the user can quickly find out the concerned keyword information, and the information acquisition efficiency is improved.
In some embodiments, fig. 9 is a block diagram illustrating another information display device according to an example embodiment. Referring to fig. 9, the second display unit 802 includes:
an acquisition subunit 901 configured to perform acquiring screening information, where the screening information includes at least one of a video type of the video, historical viewing information, historical publishing information, and a search term, the historical viewing information includes videos watched by the object during a historical period, the historical publishing information includes videos published by the object during the historical period, and the search term is the term used when searching for the video;
a display subunit 902 configured to perform highlighting of the at least one keyword based on the screening information.
In some embodiments, with continued reference to fig. 9, the acquisition subunit 901 is configured to perform determining, based on the video copy of the video, the video type of the video;
a display subunit 902 configured to perform highlighting of at least one keyword related to the video type.
In some embodiments, with continued reference to fig. 9, an obtaining subunit 901 configured to perform obtaining historical viewing information for key information representing videos viewed by a subject over a historical period of time;
a display subunit 902 configured to perform highlighting of at least one keyword related to the historical viewing information.
In some embodiments, with continued reference to fig. 9, the display subunit 902 is configured to perform determining target viewing information based on the historical viewing information, where the target viewing information includes at least one of videos liked by the object, videos favorited by the object, and videos shared by the object during the historical period; and highlighting at least one keyword related to the target viewing information.
In some embodiments, with continued reference to fig. 9, an obtaining subunit 901 configured to perform obtaining historical publishing information for key information representing videos for which a subject publishes within a historical period of time;
a display subunit 902 configured to perform highlighting of at least one keyword related to the historical publishing information.
In some embodiments, with continued reference to fig. 9, an obtaining subunit 901 configured to perform obtaining a search term;
a display subunit 902 configured to perform highlighting of at least one keyword related to a search term.
In some embodiments, with continued reference to fig. 9, a second display unit 802 configured to perform displaying a video subtitle on an upper layer of a video in a video playback interface in response to a triggering operation on a subtitle viewing control, the video subtitle overlapping the video;
the second display unit 802 is further configured to perform, in response to the triggering operation on the subtitle viewing control, displaying a video subtitle in a subtitle region of the video playing interface, where the video is located in a video region of the video playing interface, and the subtitle region is not overlapped with the video region.
In some embodiments, with continued reference to fig. 9, the information display apparatus further comprises:
an acquisition unit 803 configured to perform acquiring a video cover of the video in response to a sharing operation on the at least one keyword;
the third display unit 804 is configured to display the shared target picture in an information interaction interface based on the video cover and the at least one keyword, wherein the information interaction interface is used for realizing information interaction between different objects, and the video cover and the at least one keyword are displayed in the target picture.
It should be noted that, when the information display device provided in the above embodiment displays the video subtitles, the division of the above functional units is merely exemplified, and in practical applications, the above functions may be distributed to different functional units according to needs, that is, the internal structure of the electronic device may be divided into different functional units to complete all or part of the above described functions. In addition, the information display device and the information display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
When the electronic device is provided as a terminal, fig. 10 is a block diagram illustrating a terminal 1000 according to an exemplary embodiment of the present disclosure. The terminal 1000 can be: a smart phone, a tablet computer, an MP3 player, an MP4 player, a notebook computer, or a desktop computer. Terminal 1000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1000 can include: a processor 1001 and a memory 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in a wake state, also called a CPU (Central Processing Unit); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1002 is used to store at least one program code for execution by the processor 1001 to implement the information display method provided by the method embodiments of the present disclosure.
In some embodiments, terminal 1000 can also optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, display screen 1005, camera assembly 1006, audio circuitry 1007, positioning assembly 1008, and power supply 1009.
The peripheral interface 1003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1001 and the memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral interface 1003 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1004 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1004 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may also include NFC (Near Field Communication) related circuits, which are not limited by this disclosure.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, the display screen 1005 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1001 as a control signal for processing. In this case, the display screen 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1005, disposed on the front panel of terminal 1000; in other embodiments, there may be at least two display screens 1005, respectively disposed on different surfaces of terminal 1000 or in a folded design; in still other embodiments, the display screen 1005 may be a flexible display disposed on a curved or folded surface of terminal 1000. The display screen 1005 may even be arranged in an irregular, non-rectangular shape, that is, a shaped screen. The display screen 1005 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1006 is used to capture images or video. Optionally, the camera assembly 1006 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting the electrical signals to the processor 1001 for processing, or to the radio frequency circuit 1004 for voice communication. For stereo sound collection or noise reduction purposes, multiple microphones may be provided, each disposed at a different location of terminal 1000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can be used not only to convert an electrical signal into sound waves audible to humans, but also to convert an electrical signal into sound waves inaudible to humans for distance measurement. In some embodiments, the audio circuit 1007 may also include a headphone jack.
The positioning component 1008 is used to locate the current geographic location of terminal 1000 for navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1009 is used to supply power to the various components in terminal 1000. The power supply 1009 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1009 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also support fast-charge technology.
In some embodiments, terminal 1000 can also include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyro sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
Acceleration sensor 1011 can detect acceleration magnitudes on three coordinate axes of a coordinate system established with terminal 1000. For example, the acceleration sensor 1011 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1001 may control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 may also be used for acquisition of motion data of a game or a user.
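The landscape/portrait decision described above can be illustrated with a short non-normative sketch (not part of the patent; the function name and axis conventions are invented for illustration, assuming the accelerometer reports gravity components in m/s²):

```python
def choose_orientation(ax: float, ay: float, az: float) -> str:
    """Pick a UI orientation from the gravity components reported on the
    terminal's x (short edge), y (long edge), and z (out of screen) axes."""
    # When the device is held upright, gravity projects mainly onto the y axis;
    # when it is turned on its side, gravity projects mainly onto the x axis.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

# Device held upright: gravity (~9.8 m/s^2) lies mostly along the y axis.
print(choose_orientation(0.3, 9.7, 0.5))   # portrait
# Device turned on its side: gravity lies mostly along the x axis.
print(choose_orientation(9.6, 0.4, 0.8))   # landscape
```

A real implementation would also debounce readings near the 45° boundary so the UI does not flicker between orientations.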
Gyroscope sensor 1012 can detect the body direction and rotation angle of terminal 1000, and gyroscope sensor 1012 can cooperate with acceleration sensor 1011 to acquire the 3D motion of the user on terminal 1000. The processor 1001 may implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1013 may be disposed on a side frame of terminal 1000 and/or beneath the display screen 1005. When the pressure sensor 1013 is disposed on a side frame of terminal 1000, the user's grip signal on terminal 1000 can be detected, and the processor 1001 can perform left-right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is disposed beneath the display screen 1005, the processor 1001 controls operability controls on the UI according to the user's pressure operation on the display screen 1005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user according to the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1014 may be disposed on the front, back, or side of terminal 1000. When a physical key or vendor logo is provided on terminal 1000, the fingerprint sensor 1014 can be integrated with the physical key or vendor logo.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is turned down. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the intensity of the ambient light collected by the optical sensor 1015.
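The brightness adjustment described above can be sketched as a clamped linear mapping from ambient light to a brightness level. This is a non-normative illustration only (the patent does not specify the mapping); the function name, lux scale, and level range are invented assumptions:

```python
def adjust_brightness(ambient_lux: float,
                      min_level: int = 10,
                      max_level: int = 255,
                      full_bright_lux: float = 1000.0) -> int:
    """Map ambient light intensity to a display brightness level:
    brighter surroundings -> higher brightness, clamped to [min_level, max_level]."""
    # Normalize the reading to [0, 1], saturating above full_bright_lux.
    ratio = min(max(ambient_lux / full_bright_lux, 0.0), 1.0)
    return round(min_level + ratio * (max_level - min_level))
```

In practice the mapping is often logarithmic rather than linear, since perceived brightness follows the ambient level roughly logarithmically.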
The proximity sensor 1016, also known as a distance sensor, is typically disposed on the front panel of terminal 1000. The proximity sensor 1016 is used to collect the distance between the user and the front face of terminal 1000. In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front face of terminal 1000 gradually decreases, the processor 1001 controls the display screen 1005 to switch from the screen-on state to the screen-off state; when the proximity sensor 1016 detects that the distance between the user and the front face of terminal 1000 gradually increases, the processor 1001 controls the display screen 1005 to switch from the screen-off state to the screen-on state.
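The distance-trend logic above (screen off while approaching, back on while receding) can be sketched as follows. This is an illustrative, non-normative sketch; the class and method names are invented, and a real driver would use hysteresis thresholds rather than raw comparisons:

```python
class ProximityScreenController:
    """Toggle the display between on and off states based on the trend
    of successive proximity-sensor distance readings (e.g. in cm)."""

    def __init__(self) -> None:
        self.last_distance: float | None = None
        self.screen_on: bool = True

    def on_reading(self, distance: float) -> bool:
        """Process one distance reading and return the resulting screen state."""
        if self.last_distance is not None:
            if distance < self.last_distance:
                # User approaching the front face: turn the screen off.
                self.screen_on = False
            elif distance > self.last_distance:
                # User moving away: turn the screen back on.
                self.screen_on = True
        self.last_distance = distance
        return self.screen_on
```

For example, readings of 10 cm, then 5 cm, then 9 cm would switch the screen off and then back on.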
Those skilled in the art will appreciate that the configuration shown in FIG. 10 is not intended to be limiting and that terminal 1000 can include more or fewer components than shown, or some components can be combined, or a different arrangement of components can be employed.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 1002 comprising instructions, is also provided; the instructions are executable by the processor 1001 of terminal 1000 to perform the above-described information display method. Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
A computer program product comprising a computer program is also provided; when executed by a processor, the computer program implements the above-described information display method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An information display method, characterized in that the method comprises:
displaying a video playing interface, wherein a video and subtitle viewing control is displayed in the video playing interface;
responding to the triggering operation of the subtitle viewing control, and displaying the video subtitle of the video in the video playing interface, wherein the video subtitle is used for representing the voice information in the video;
highlighting at least one keyword in the video subtitle based on screening information, wherein the screening information is used for selecting key information of interest to an object, and the at least one keyword is used for representing the key information in the video.
2. The information display method according to claim 1, wherein the highlighting at least one keyword in the video subtitle based on the screening information comprises:
obtaining the screening information, wherein the screening information comprises at least one of a video type of the video, historical watching information, historical publishing information, and a search word, the historical watching information comprises videos watched by the object in a historical time period, the historical publishing information comprises videos published by the object in the historical time period, and the search word is a word used when searching for the video;
highlighting the at least one keyword based on the screening information.
3. The information display method according to claim 2, wherein the acquiring the screening information includes:
determining a video type of the video based on a video copy of the video;
the highlighting the at least one keyword based on the screening information includes:
highlighting the at least one keyword associated with the video type.
4. The information display method according to claim 2, wherein the acquiring the screening information includes:
acquiring the historical watching information, wherein the historical watching information is used for representing key information of videos watched by the object in the historical time period;
the highlighting the at least one keyword based on the screening information includes:
highlighting the at least one keyword related to the historical watching information.
5. The information display method according to claim 4, wherein the highlighting the at least one keyword related to the historical watching information comprises:
determining target watching information based on the historical watching information, wherein the target watching information comprises at least one of videos liked by the object, videos collected, and videos shared in the historical time period;
highlighting the at least one keyword related to the target watching information.
6. The information display method according to claim 2, wherein the acquiring the screening information includes:
acquiring the historical publishing information, wherein the historical publishing information is used for representing key information of videos published by the object in the historical time period;
the highlighting the at least one keyword based on the screening information comprises:
highlighting the at least one keyword related to the historical publishing information.
7. The information display method according to claim 2, wherein the acquiring the screening information includes:
acquiring the search word;
the highlighting the at least one keyword based on the screening information comprises:
highlighting the at least one keyword related to the search word.
8. The information display method according to claim 1, wherein the displaying a video subtitle of the video in the video playback interface in response to the triggering operation of the subtitle viewing control includes:
responding to the triggering operation of the subtitle viewing control, and displaying the video subtitle on the upper layer of the video in the video playing interface, wherein the video subtitle overlaps the video; or,
and responding to the triggering operation of the subtitle viewing control, and displaying the video subtitle in a subtitle area of the video playing interface, wherein the video is located in a video area of the video playing interface, and the subtitle area is not overlapped with the video area.
9. The information display method according to claim 1, characterized in that the method further comprises:
responding to the sharing operation of the at least one keyword, and acquiring a video cover of the video;
displaying a shared target picture in an information interaction interface based on the video cover and the at least one keyword, wherein the information interaction interface is used for realizing information interaction among different objects, and the video cover and the at least one keyword are displayed in the target picture.
10. An information display apparatus, characterized in that the apparatus comprises:
the first display unit is configured to execute displaying of a video playing interface, and a video and a subtitle viewing control are displayed in the video playing interface;
a second display unit configured to execute, in response to a trigger operation on the subtitle viewing control, displaying a video subtitle of the video in the video playing interface, where the video subtitle is used to represent voice information in the video;
the second display unit is further configured to perform highlighting of at least one keyword in the video subtitle based on screening information, the screening information being used for selecting key information of interest to an object, the at least one keyword being used to represent the key information in the video.
11. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing program code executable by the processor;
wherein the processor is configured to execute the program code to implement the information display method of any one of claims 1 to 9.
12. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the information display method of any one of claims 1 to 9.
CN202310010032.9A 2023-01-04 2023-01-04 Information display method and device, electronic equipment and storage medium Pending CN115952319A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310010032.9A CN115952319A (en) 2023-01-04 2023-01-04 Information display method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115952319A true CN115952319A (en) 2023-04-11

Family

ID=87287471




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination