CN111753135B - Video display method, device, terminal, server, system and storage medium - Google Patents


Info

Publication number
CN111753135B
CN111753135B (application CN202010437842.9A)
Authority
CN
China
Prior art keywords
video
target
keyword
keywords
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010437842.9A
Other languages
Chinese (zh)
Other versions
CN111753135A (en)
Inventor
韩金泽 (Han Jinze)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010437842.9A priority Critical patent/CN111753135B/en
Publication of CN111753135A publication Critical patent/CN111753135A/en
Application granted granted Critical
Publication of CN111753135B publication Critical patent/CN111753135B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure relates to a video display method, device, terminal, server, system and storage medium. The video display method includes: acquiring a corresponding target keyword according to the playing state of a currently played target video; generating a corresponding keyword label according to the target keyword, and displaying the keyword label in a playing area of the target video; and, in response to a triggering operation performed on the keyword label, acquiring and displaying an associated video corresponding to the target keyword. According to the method and the device, while a video is played, the target keywords corresponding to the video playing state give the viewer a way to obtain the associated videos corresponding to those keywords. The user can thus quickly obtain associated videos that meet their needs without complex operations such as searching manually, which improves the user's video interaction experience.

Description

Video display method, device, terminal, server, system and storage medium
Technical Field
The present disclosure relates to internet technologies, and in particular, to a video display method, device, terminal, server, system, and storage medium.
Background
With the development of internet technology, more and more users wish to obtain information that meets their needs by watching videos. Currently, however, a user who wants similar videos after watching one must determine keywords on their own and then search with those keywords. This operation is cumbersome and degrades the user experience.
Disclosure of Invention
The disclosure provides a video display method, device, terminal, server, system and storage medium, so as to at least solve the problem in the related art that obtaining an associated video requires complex operations. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a video display method, including:
acquiring a corresponding target keyword according to the playing state of the currently played target video;
generating a corresponding keyword label according to the target keyword, and displaying the keyword label in a playing area of the target video;
and responding to the triggering operation applied to the keyword label, and acquiring and displaying the associated video corresponding to the target keyword.
Optionally, the obtaining the corresponding target keyword according to the playing state of the currently played target video includes:
Determining playing content corresponding to the playing state according to the playing state of the currently played target video;
and selecting a candidate keyword associated with the playing content from the pre-acquired candidate keywords as the target keyword.
Optionally, the method further comprises:
carrying out image recognition on each frame in the target video, and determining the image content in each frame;
and extracting keywords corresponding to the image content from the image content of each frame to obtain candidate keywords corresponding to the target video.
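The two steps above (per-frame image recognition, then keyword extraction) can be sketched as follows. This is a minimal illustration, not the patent's implementation: `recognize_frame` is a stub standing in for a real image-recognition model, and the frame/label layout is an assumption.

```python
def recognize_frame(frame: dict) -> list[str]:
    # Stub: a real implementation would run an image-recognition model
    # on the frame pixels; here we read pre-attached labels instead.
    return frame.get("labels", [])

def candidate_keywords(frames: list[dict]) -> dict[int, list[str]]:
    """Map each frame index to keywords extracted from its recognized content."""
    keywords = {}
    for index, frame in enumerate(frames):
        labels = recognize_frame(frame)
        if labels:  # keep only frames where something was recognized
            keywords[index] = labels
    return keywords
```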
Optionally, the method further comprises:
acquiring user information corresponding to a user currently playing the target video;
and screening the pre-acquired candidate keywords corresponding to the target video according to the user information, acquiring candidate keywords associated with the user information, and acquiring the updated candidate keywords to select the target keywords.
Optionally, the filtering, according to the user information, the candidate keywords that are acquired in advance and correspond to the target video, to obtain candidate keywords associated with the user information includes:
the user information and the candidate keywords corresponding to the target video are taken as input, the candidate keywords are processed through a keyword screening model, and the candidate keywords associated with the user information are output;
wherein the user information at least comprises user portrait information and user historical behavior information;
the keyword screening model is obtained by training a sample set through a machine learning algorithm of a preset type, each sample included in the sample set comprises historical behavior data of keywords displayed in a user account playing video, and the user account has a corresponding user portrait.
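As a rough stand-in for the trained keyword screening model described above, the sketch below scores each candidate keyword against the user's portrait interests and historical behavior; a real system would replace this hand-written scoring rule with the model's output. All names and the scoring weights are illustrative assumptions.

```python
def screen_keywords(candidates: list[str],
                    history: set[str],
                    portrait_interests: set[str]) -> list[str]:
    """Keep only candidate keywords associated with the user information,
    ordered by a simple relevance score (history outweighs portrait)."""
    def score(keyword: str) -> int:
        return 2 * (keyword in history) + (keyword in portrait_interests)
    kept = [kw for kw in candidates if score(kw) > 0]
    return sorted(kept, key=score, reverse=True)
```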
Optionally, the playing state includes a playing time point;
the obtaining the corresponding target keyword according to the playing state of the currently played target video includes:
and acquiring a target keyword corresponding to the playing time point according to the playing time point of the currently played target video.
Optionally, displaying the keyword tag in a playing area of the target video includes:
determining a video content area corresponding to the target keyword in the playing area, and displaying the keyword label in the video content area; or
displaying the keyword label at a preset display position of the playing area of the target video.
Optionally, the obtaining the associated video corresponding to the target keyword for display includes:
Searching in a preset video database according to a target keyword, and acquiring an associated video corresponding to the target keyword;
and displaying the associated video in a corresponding page display area.
Optionally, the method further comprises:
and after the target video is played, displaying all target keywords corresponding to the target video.
Optionally, displaying all target keywords corresponding to the target video includes:
displaying a cover layer on the target video, and displaying all target keywords corresponding to the target video on the cover layer.
Optionally, the method further comprises:
when the target video is played, a keyword acquisition request is sent to a server, and the server is triggered to feed back the corresponding target keyword; and/or
The obtaining the associated video corresponding to the target keyword for display comprises the following steps:
sending an associated video acquisition request to a server, and triggering the server to feed back an associated video corresponding to the target keyword;
and displaying the associated video.
According to a second aspect of embodiments of the present disclosure, there is provided a video display method, including:
receiving a keyword acquisition request sent by a client, wherein the keyword acquisition request comprises the playing state of a target video currently played by the client;
Acquiring a target keyword corresponding to the playing state, and feeding the target keyword back to the client, wherein the target keyword is used for displaying in a playing area of a target video of the client;
receiving an associated video acquisition request sent by the client, wherein the associated video acquisition request comprises the target keyword;
and acquiring an associated video corresponding to the target keyword, and feeding the associated video back to the client, wherein the associated video is used for displaying in the client.
Optionally, the method further comprises:
carrying out image recognition on each frame in the target video in advance, and determining the image content in each frame;
extracting candidate keywords from the image content of each frame to obtain candidate keywords corresponding to the target video, and correspondingly storing the candidate keywords and the playing state of the target video;
the obtaining the target keyword corresponding to the playing state includes:
and acquiring candidate keywords corresponding to the playing state as target keywords.
Optionally, the obtaining the candidate keyword corresponding to the playing state as the target keyword includes:
Acquiring user information corresponding to the client;
and screening candidate keywords corresponding to the playing state according to the user information, and obtaining candidate keywords associated with the user information as the target keywords.
Optionally, the user information includes user portrait information and user historical behavior information.
According to a third aspect of embodiments of the present disclosure, there is provided a video display apparatus comprising:
the target keyword acquisition module is configured to acquire corresponding target keywords according to the playing state of the currently played target video;
the first keyword display module is configured to generate a corresponding keyword label according to the target keyword and display the keyword label in a playing area of the target video;
and the associated video acquisition module is configured to respond to the triggering operation applied to the keyword label, acquire the associated video corresponding to the target keyword and display the associated video.
Optionally, the target keyword obtaining module includes:
a play content determining unit configured to determine play content corresponding to a play state according to the play state of the target video currently played;
And a target keyword acquisition unit configured to select, as the target keyword, a candidate keyword associated with the play content from among the candidate keywords acquired in advance.
Optionally, the apparatus further includes:
the image recognition module is configured to recognize images of each frame in the target video and determine the image content in each frame;
and the keyword extraction module is configured to extract keywords corresponding to the image content from the image content of each frame to obtain candidate keywords corresponding to the target video.
Optionally, the apparatus further includes:
the user information acquisition module is configured to acquire user information corresponding to a user currently playing the target video;
and the keyword screening module is configured to screen the candidate keywords which are acquired in advance and correspond to the target video according to the user information, acquire the candidate keywords associated with the user information, acquire the updated candidate keywords and select the target keywords.
Optionally, the keyword screening module is specifically configured to:
the user information and the candidate keywords corresponding to the target video are taken as input, the candidate keywords are processed through a keyword screening model, and the candidate keywords associated with the user information are output;
wherein the user information at least comprises user portrait information and user historical behavior information;
the keyword screening model is obtained by training a sample set through a machine learning algorithm of a preset type, each sample included in the sample set comprises historical behavior data of keywords displayed in a user account playing video, and the user account has a corresponding user portrait.
Optionally, the playing state includes a playing time point;
the target keyword acquisition module is specifically configured to:
and acquiring a target keyword corresponding to the playing time point according to the playing time point of the currently played target video.
Optionally, the first keyword display module is specifically configured to:
determining a video content area corresponding to the target keyword in the playing area, and displaying the keyword label in the video content area; or
displaying the keyword label at a preset display position of a playing area of the target video.
Optionally, the associated video acquisition module includes:
the related video acquisition unit is configured to search a preset video database according to a target keyword and acquire related videos corresponding to the target keyword;
And the associated video display unit is configured to display the associated video in the corresponding page display area.
Optionally, the apparatus further includes:
and the second keyword display module is configured to display all target keywords corresponding to the target video after the target video is played.
Optionally, the second keyword display module is specifically configured to:
and after the target video is played, displaying a cover layer on the target video, and displaying all target keywords corresponding to the target video on the cover layer.
Optionally, the apparatus further includes:
the keyword acquisition request sending module is configured to send a keyword acquisition request to a server when the target video is played, and trigger the server to feed back the corresponding target keywords; and/or
The associated video acquisition module comprises:
the associated video acquisition request sending unit is configured to send an associated video acquisition request to a server, and trigger the server to feed back an associated video corresponding to the target keyword;
and the associated video display unit is configured to display the associated video.
According to a fourth aspect of embodiments of the present disclosure, there is provided a video display apparatus comprising:
a keyword acquisition request receiving module, configured to receive a keyword acquisition request sent by a client, wherein the keyword acquisition request comprises the playing state of a target video currently played by the client;
the keyword feedback module is configured to acquire a target keyword corresponding to the playing state and feed the target keyword back to the client, wherein the target keyword is used for displaying in a playing area of a target video of the client;
the associated video acquisition request receiving module is configured to receive an associated video acquisition request sent by the client, wherein the associated video acquisition request comprises the target keyword;
and the associated video feedback module is configured to acquire an associated video corresponding to the target keyword and feed the associated video back to the client, wherein the associated video is used for displaying in the client.
Optionally, the apparatus further includes:
the image recognition module is configured to perform image recognition on each frame in the target video in advance and determine the image content in each frame;
the keyword extraction module is configured to extract candidate keywords from the image content of each frame to obtain candidate keywords corresponding to the target video, and store the candidate keywords in correspondence with the playing state of the target video;
The keyword feedback module comprises:
and a keyword acquisition unit configured to acquire a candidate keyword corresponding to the play state as a target keyword.
Optionally, the keyword obtaining unit is specifically configured to:
acquiring user information corresponding to the client;
and screening candidate keywords corresponding to the playing state according to the user information, and obtaining candidate keywords associated with the user information as the target keywords.
Optionally, the user information includes user portrait information and user historical behavior information.
According to a fifth aspect of embodiments of the present disclosure, there is provided a terminal comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video presentation method of the first aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a server comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video presentation method as described in the second aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of a terminal, enable the terminal to perform the video presentation method as described in the first aspect, or which, when executed by a processor of a server, enable the server to perform the video presentation method as described in the second aspect.
According to an eighth aspect of embodiments of the present disclosure, there is provided a computer program product comprising readable program code which, when executed by a processor of a terminal, enables the terminal to perform the video presentation method as described in the first aspect, or which, when executed by a processor of a server, enables the server to perform the video presentation method as described in the second aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the method and the device for displaying related videos, a target keyword corresponding to the playing state of the currently played target video is obtained, a keyword label corresponding to the target keyword is generated and displayed in the playing area of the target video, and, in response to a triggering operation performed on the keyword label, the associated video corresponding to the target keyword is obtained and displayed. In this way, while a video is played, the target keywords corresponding to the video playing state give the viewer a path to obtain the associated videos corresponding to those keywords, so that the user can quickly obtain associated videos that meet their needs without complex operations such as searching manually, improving the user's video interaction experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flowchart illustrating a video presentation method according to an exemplary embodiment;
FIGS. 2a-2c are exemplary diagrams showing keyword tags in regions of video content corresponding to target keywords, respectively, in embodiments of the present disclosure;
FIG. 3 is an exemplary diagram showing all target keywords after the target video playback is completed in an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a video presentation method according to an exemplary embodiment;
FIG. 5 is an exemplary diagram of one video frame in an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating a video pushing method according to an exemplary embodiment;
FIG. 7 is a block diagram of a video presentation device, according to an example embodiment;
FIG. 8 is a block diagram of a video pushing device, according to an example embodiment;
FIG. 9 is a block diagram of a terminal shown in accordance with an exemplary embodiment;
FIG. 10 is a block diagram of a server shown in accordance with an exemplary embodiment;
FIG. 11 is a block diagram of a video presentation system, shown according to an exemplary embodiment;
fig. 12 is a flowchart illustrating a video presentation method according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Fig. 1 is a flowchart of a video presentation method, as shown in fig. 1, according to an exemplary embodiment, including the following steps.
In step S11, according to the playing state of the currently played target video, a corresponding target keyword is obtained.
The playing state may include a playing time point and/or playing content of the target video.
And in the process of playing the target video, acquiring a target keyword corresponding to the playing state according to the current playing state. The target keywords corresponding to the playing states can be obtained by identifying the playing contents of the corresponding playing states of the target video in advance.
In step S12, a corresponding keyword label is generated according to the target keyword, and is displayed in the playing area of the target video.
After the target keyword corresponding to the playing state is obtained, generating a keyword label corresponding to the target keyword, and displaying the keyword label in the playing area of the target video. The display position of the keyword tag in the playing area is not limited, for example, the keyword tag may be displayed in the vicinity of the video content corresponding to the target keyword.
In an exemplary embodiment, the keyword tag is displayed in a playing area of the target video, including: determining a video content area corresponding to the target keyword in the playing area, and displaying the keyword label in the video content area; or displaying the keyword label at a preset display position of the playing area of the target video.
When the keyword label is displayed, firstly, the display position of the keyword label is determined, wherein the display position of the keyword label can be a video content area corresponding to the target keyword, or can be a preset display position, for example, the preset display position can be the position of the upper right corner of the playing area and the like.
Figs. 2a-2c are exemplary diagrams showing keyword labels in the video content areas corresponding to target keywords in embodiments of the present disclosure. As shown in Fig. 2a, the video frame corresponding to the playing state includes a castle, so the displayed target keyword may be "beautiful castle", and the keyword label is displayed near the castle. As shown in Fig. 2b, if, as the target video plays, a medieval castle in the United Kingdom has previously been identified in the corresponding video frame, the displayed target keyword is "castle". As shown in Fig. 2c, as playback continues and the video content switches to the inside of the castle, the displayed target keyword becomes "castle interior".
By displaying the keyword labels in the video content areas corresponding to the target keywords, the target keywords and the video content can be connected by the user, and by displaying the keyword labels in the preset display positions of the playing areas of the target videos, the displayed keyword labels can be prevented from interfering the user to watch the target videos.
In step S13, in response to the triggering operation performed on the keyword label, an associated video corresponding to the target keyword is acquired and displayed.
The displayed keyword label can be used for triggering operation by a user, and when the triggering operation of the user on the keyword label is detected, the associated video corresponding to the target keyword is obtained and displayed. When the associated video is displayed, the associated video can be jumped to an associated video display page and displayed on the associated video display page, or the associated video can be displayed in a preset area of a current playing page of the target video. Wherein, the associated video corresponding to the target keyword may be a video including the target keyword.
In an exemplary embodiment, the obtaining the associated video corresponding to the target keyword for presentation includes: searching in a preset video database according to a target keyword, and acquiring an associated video corresponding to the target keyword; and displaying the associated video in a corresponding page display area.
A plurality of videos may be stored in the preset video database, together with the keywords corresponding to each video. Videos that include the target keyword can therefore be searched for in the preset video database according to the target keyword, so that the associated videos corresponding to the target keyword are obtained and then displayed. When displaying an associated video, it may be shown in the playing area of the target video, or the client may jump to a display page for the associated video and show it there. By searching the preset video database for the associated videos corresponding to the target keyword and displaying them in the corresponding page display area, all the associated videos corresponding to the target keyword can be displayed in time. When the associated video is displayed on its own display page and a triggering operation on the return button is detected, the client jumps back to the playing page of the target video and continues playing it.
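The keyword-indexed lookup described above can be sketched with a small in-memory stand-in for the preset video database. The entries, field names, and ids are illustrative assumptions, not the patent's storage scheme.

```python
# Illustrative in-memory stand-in for the preset video database: each
# entry stores a video's title and its pre-extracted keywords.
VIDEO_DB = {
    "v1": {"title": "Tour of a medieval castle", "keywords": {"castle", "history"}},
    "v2": {"title": "Castle interior walkthrough", "keywords": {"castle", "castle interior"}},
    "v3": {"title": "Modern architecture", "keywords": {"architecture"}},
}

def find_associated_videos(target_keyword: str) -> list[str]:
    """Return ids of videos whose stored keywords include the target keyword."""
    return [video_id for video_id, meta in VIDEO_DB.items()
            if target_keyword in meta["keywords"]]
```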
According to the video display method provided by this embodiment, the target keyword corresponding to the playing state of the currently played target video is obtained, the keyword label corresponding to the target keyword is generated and displayed in the playing area of the target video, and, in response to a triggering operation performed on the keyword label, the associated video corresponding to the target keyword is obtained and displayed. While a video is played, the target keywords corresponding to the video playing state give the viewer a path to obtain the associated videos corresponding to those keywords, so that the user can quickly obtain associated videos that meet their needs without complex operations such as searching manually, improving the user's video interaction experience.
On the basis of the technical scheme, the playing state comprises a playing time point;
the obtaining the corresponding target keyword according to the playing state of the currently played target video includes: and acquiring a target keyword corresponding to the playing time point according to the playing time point of the currently played target video.
For a target video, the playing content of a playing time point is fixed, so that the target keyword can be corresponding to the playing time point of the target video, when the target video is currently played, whether the target keyword corresponding to the playing time point exists or not is checked according to the playing time point, and if the target keyword corresponding to the playing time point exists, the target keyword is acquired, and therefore the target keyword corresponding to the current playing content can be acquired more quickly and displayed in time.
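The time-point correspondence above amounts to a lookup in a pre-built timeline. A minimal sketch, assuming the timeline is a sorted list of `(start time in seconds, keywords)` pairs, so the binary search in `bisect_right` finds the entry in effect at any play time; the layout and example entries are assumptions.

```python
from bisect import bisect_right

# Hypothetical pre-built timeline: each entry gives the keywords
# recognized from that play time point onward.
KEYWORD_TIMELINE = [
    (0.0, ["beautiful castle"]),
    (42.0, ["castle"]),
    (75.5, ["castle interior"]),
]

def keywords_at(play_time: float) -> list[str]:
    """Return the target keywords in effect at the given play time point."""
    starts = [start for start, _ in KEYWORD_TIMELINE]
    index = bisect_right(starts, play_time) - 1
    return KEYWORD_TIMELINE[index][1] if index >= 0 else []
```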
On the basis of the technical scheme, the method further comprises the following steps:
and after the target video is played, displaying all target keywords corresponding to the target video.
After the target video is played, a replay button can be displayed, all target keywords corresponding to the target video are displayed at the same time, the displayed target keywords can be used for triggering operation by a user, and if triggering operation of the user on one of the target keywords is detected, the associated video corresponding to the target keywords can be searched in a preset video database and displayed. By displaying all target keywords corresponding to the target video after the target video is played, another acquisition way for acquiring the associated video corresponding to the target keywords is provided for the user watching the video, and the video interaction experience of the user is further improved.
On the basis of the technical scheme, displaying all target keywords corresponding to the target video comprises the following steps: displaying a cover layer on the target video, and displaying all target keywords corresponding to the target video on the cover layer.
As shown in fig. 3, after the target video is played, a cover layer is displayed on the target video, and the cover layer may cover the entire display screen, and a replay button is displayed on the cover layer, and all target keywords corresponding to the target video are displayed on the cover layer. The target keywords are displayed through the cover layer, so that the displayed target keywords are clearer, and the change of the video picture of the target video can be avoided.
On the basis of the technical scheme, the method further comprises the following steps: when the target video is played, a keyword acquisition request is sent to a server, and the server is triggered to feed back the corresponding target keyword; and/or
The obtaining the associated video corresponding to the target keyword for display comprises the following steps:
sending an associated video acquisition request to a server, and triggering the server to feed back an associated video corresponding to the target keyword; and displaying the associated video.
The client may trigger the server to feed back the target keywords and the associated videos. When the client plays the target video, it sends a keyword acquisition request to the server to trigger the server to feed back the target keyword corresponding to the target video; alternatively, the keyword acquisition request is generated according to the playing state of the currently played target video and sent to the server, triggering the server to feed back the target keyword corresponding to that playing state. When the user's triggering operation on the keyword label is detected, an associated video acquisition request is sent to the server; the request includes the target keyword corresponding to the keyword label, so the server can be triggered to feed back the associated video corresponding to the target keyword. Having the server feed back the target keywords and the associated videos means the computation-heavy tasks are completed by the server, which ensures the response speed of the client.
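The two request types described above can be sketched as a single server-side dispatch function. The field names (`type`, `play_time`, `keyword`) are illustrative assumptions, not an actual protocol:

```python
def handle_request(request: dict, keyword_index: dict, video_index: dict) -> dict:
    """Answer a keyword acquisition request or an associated-video acquisition
    request. keyword_index maps a playing time point to target keywords;
    video_index maps a target keyword to associated video ids."""
    if request["type"] == "keyword":
        return {"keywords": keyword_index.get(request["play_time"], [])}
    if request["type"] == "associated_video":
        return {"videos": video_index.get(request["keyword"], [])}
    raise ValueError("unknown request type: %r" % request["type"])
```

Both lookups are precomputed on the server side, which is what keeps the per-request work light and the client responsive.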
Fig. 4 is a flowchart of a video presentation method according to an exemplary embodiment. As shown in fig. 4, the method includes the following steps.
In step S41, according to the playing state of the target video currently played, the playing content corresponding to the playing state is determined.
While the target video is playing, the playing content corresponding to the playing state is determined. For example, if a basketball court appears in the picture of the currently played target video, the playing content corresponding to the playing state can be determined to be a basketball court, e.g. by an image recognition method.
In step S42, a candidate keyword associated with the play content is selected from among the candidate keywords acquired in advance as the target keyword.
The candidate keywords obtained in advance may be all candidate keywords corresponding to the target video. After determining the play content corresponding to the play state, a candidate keyword associated with the play content can be selected from the pre-acquired candidate keywords to serve as a target keyword, and a keyword label corresponding to the target keyword is displayed in the current video picture. For example, the pre-obtained candidate keywords include keywords such as basketball court, basketball, etc., and the playing content corresponding to the playing state is basketball court, and the keyword "basketball court" associated with the playing content may be selected from the pre-obtained candidate keywords as the target keyword.
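Selecting the target keyword then reduces to intersecting the recognized play content with the pre-acquired candidates. A minimal sketch, where both the candidate list and the play content are this example's own assumptions:

```python
def select_target_keywords(play_content: set[str], candidates: list[str]) -> list[str]:
    """Keep the pre-acquired candidates that match the current play content,
    preserving the candidates' original order."""
    return [kw for kw in candidates if kw in play_content]
```

With candidates `["basketball court", "basketball", "skiing"]` and play content `{"basketball court"}`, only "basketball court" is selected as the target keyword.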
In step S43, a corresponding keyword label is generated according to the target keyword, and is displayed in the playing area of the target video.
In step S44, in response to the triggering operation performed on the keyword label, the associated video corresponding to the target keyword is acquired and displayed.
According to the video display method provided by this embodiment, the playing content corresponding to the playing state is determined, and a candidate keyword associated with the playing content is selected from the pre-acquired candidate keywords as the target keyword. The determined target keyword is therefore associated with the current playing content, and different target keywords can be displayed for different playing contents. The displayed keyword label can remind the user of the current playing content, and the associated video can be displayed directly in response to the user's triggering operation on the keyword label. This simplifies the operations the user would otherwise perform to abstract a keyword from the current playing content and search for associated videos on their own: the corresponding associated videos can be obtained through the displayed keyword label, avoiding the problem that a keyword abstracted by the user is inaccurate and fails to retrieve the corresponding associated videos.
On the basis of the technical scheme, the method further comprises the following steps: carrying out image recognition on each frame in the target video, and determining the image content in each frame; and extracting keywords corresponding to the image content from the image content of each frame to obtain candidate keywords corresponding to the target video.
Image recognition is carried out on each frame in the target video in advance, image content in each frame is determined, keyword extraction is carried out on the image content of each frame to obtain keywords corresponding to each frame, and duplication removal is carried out on the keywords corresponding to each frame to obtain candidate keywords corresponding to the target video. The candidate keywords corresponding to the target video are determined by carrying out image recognition on each frame in the target video in advance, so that the target keywords can be timely selected from the candidate keywords and displayed when the target video is played, and the problem that the target keywords are not timely displayed due to image recognition when the target video is played is avoided.
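The per-frame extraction and de-duplication step can be sketched as follows, assuming each frame's recognition result has already been reduced to a keyword list:

```python
def candidate_keywords(frames: list[list[str]]) -> list[str]:
    """Merge keywords recognized frame by frame into one de-duplicated
    candidate list for the whole video, keeping first-seen order."""
    seen: set[str] = set()
    result: list[str] = []
    for frame_kws in frames:  # frame_kws: keywords recognized in one frame
        for kw in frame_kws:
            if kw not in seen:
                seen.add(kw)
                result.append(kw)
    return result
```

Running this offline, before the video is ever served, is what lets target keywords be displayed in time during playback.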
When image recognition is performed on each frame, the recognition can be based on video tags, and the video tags can be hierarchical, so that a higher level is recognized first and then the specific image content. In the hierarchical tags, for example, a first-level tag may be sports, under which a plurality of second-level tags such as diving, roller skating, sports, body building and skiing may be included; for sports, a third-level tag such as balls may be included, with fourth-level tags such as football, basketball, table tennis, baseball and golf. Taking the video picture shown in fig. 5 as an example, when its content is recognized, it can be determined that the objects in it include people and a ball, that there are multiple people, and that the ball is a basketball; it can then be determined that the scene of the video picture is a basketball court, that is, the keywords corresponding to the video picture may include multiple people, basketball, basketball court, etc.
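The hierarchical tags can be modeled as a nested dictionary, with recognition narrowing from the first level down to a specific tag. A sketch using a cut-down version of the example hierarchy (the tree contents are illustrative):

```python
# Illustrative slice of a hierarchical video-tag tree.
TAG_TREE = {
    "sports": {
        "balls": {"football": {}, "basketball": {}, "table tennis": {}},
        "skiing": {},
    },
}

def tag_path(tree: dict, target: str, prefix: tuple = ()) -> tuple:
    """Return the path of tags from the first level down to `target`,
    or an empty tuple if the tag is not in the hierarchy."""
    for tag, children in tree.items():
        if tag == target:
            return prefix + (tag,)
        found = tag_path(children, target, prefix + (tag,))
        if found:
            return found
    return ()
```

Recognizing "basketball" in a frame thus also yields its coarser ancestors ("sports", "balls"), which can serve as additional candidate keywords.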
On the basis of the technical scheme, the method further comprises the following steps: acquiring user information corresponding to a user currently playing the target video; and screening the pre-acquired candidate keywords corresponding to the target video according to the user information, acquiring candidate keywords associated with the user information, and acquiring the updated candidate keywords to select the target keywords.
Wherein, the user information can comprise user portrait information and user historical behavior information.
The candidate keywords corresponding to the target video are keywords associated with the video content of the target video. For a given user, some of these keywords are of interest and some are not. The pre-acquired candidate keywords corresponding to the target video can therefore be screened according to the user information, keeping the keywords the current user is likely to click, to obtain the candidate keywords associated with the user information. The target keyword is then selected from the updated candidate keywords, so the displayed target keywords match the user's interests, avoiding the viewing interference caused by displaying too many keywords the user is not interested in.
In an exemplary embodiment, the filtering of the pre-acquired candidate keywords corresponding to the target video according to the user information, to acquire candidate keywords associated with the user information, includes: taking the user information and the candidate keywords corresponding to the target video as input, processing the candidate keywords through a keyword screening model, and outputting the candidate keywords associated with the user information; wherein the user information at least comprises user portrait information and user historical behavior information. The keyword screening model is obtained by training on a sample set through a preset type of machine learning algorithm; each sample in the sample set comprises historical behavior data on keywords displayed while a user account played videos, the user account having a corresponding user portrait.
The user historical behavior information may include a user play record and user interaction behavior information. The user interaction behavior information may include at least one of like behavior information, comment behavior information, and share behavior information.
When screening the candidate keywords associated with the user information out of all candidate keywords corresponding to the target video, the screening can be based on a trained keyword screening model: the user information and all candidate keywords corresponding to the target video are input into the keyword screening model, the model estimates the probability of the current user clicking each candidate keyword, and the candidate keywords whose probability is greater than a threshold are determined to be the candidate keywords associated with the user information. Because the keyword screening model is obtained by training on a sample set through a preset type of machine learning algorithm, screening the candidate keywords associated with the user information through the model improves both the screening speed and the accuracy of the screened candidate keywords.
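The final thresholding step is straightforward once the model has scored each candidate. A sketch, with the scores dict standing in for the keyword screening model's output:

```python
def screen_keywords(click_probability: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Keep the candidates whose estimated click probability, as produced by
    the (here simulated) keyword screening model, exceeds the threshold."""
    return [kw for kw, p in click_probability.items() if p > threshold]
```

The threshold of 0.5 is an assumption for illustration; in practice it would be tuned against the trade-off between missing keywords of interest and cluttering the playing area.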
Fig. 6 is a flowchart of a video presentation method according to an exemplary embodiment. As shown in fig. 6, the method includes the following steps.
In step S61, a keyword acquisition request sent by a client is received, where the keyword acquisition request includes a playing state of a target video currently played by the client.
In the process of playing the target video, the client generates a keyword acquisition request corresponding to the playing state according to the current playing state, sends the keyword acquisition request to the server, and the server receives the keyword acquisition request sent by the client.
In step S62, a target keyword corresponding to the playing state is obtained, and the target keyword is fed back to the client, where the target keyword is used for displaying in a playing area of a target video of the client.
The server can identify the video content corresponding to the playing state of the target video in advance, so as to generate a keyword corresponding to the playing state, acquire the target keyword corresponding to the current playing state of the client after receiving a keyword acquisition request sent by the client, feed back the target keyword to the client, and display the target keyword in the playing area of the target video after the client receives the target keyword.
In step S63, an associated video acquisition request sent by the client is received, where the associated video acquisition request includes the target keyword.
If the target keyword displayed in the client is triggered by the user, the client generates an associated video acquisition request and sends the associated video acquisition request to the server, so that the server receives the associated video acquisition request comprising the target keyword.
In step S64, an associated video corresponding to the target keyword is obtained, and the associated video is fed back to the client, where the associated video is used for being displayed in the client.
Searching the associated video corresponding to the target keyword in a preset video database according to the target keyword in the associated video acquisition request, feeding the associated video back to the client, and displaying the associated video after the client receives the associated video.
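The database lookup in step S64 can be sketched as a keyword filter over stored video records (the records below are illustrative):

```python
# Illustrative preset video database: each record stores a video id
# together with the keywords extracted for that video.
VIDEO_DATABASE = [
    {"id": 101, "keywords": ["basketball", "basketball court"]},
    {"id": 102, "keywords": ["skiing"]},
]

def find_associated_videos(target_keyword: str) -> list[int]:
    """Search the preset video database for videos whose stored keywords
    include the target keyword, returning their ids."""
    return [v["id"] for v in VIDEO_DATABASE if target_keyword in v["keywords"]]
```

A production system would of course use an inverted index or search service rather than a linear scan; the list here only shows the keyword-to-video relationship.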
According to the video display method provided by this embodiment of the invention, the server receives a keyword acquisition request sent by the client, obtains the target keyword corresponding to the playing state of the target video currently played by the client, and feeds the target keyword back to the client, which can then display it in the playing area of the target video. The displayed target keyword can be operated on by the user to generate an associated video acquisition request, so that after receiving the request the server can feed back the associated video corresponding to the target keyword to the client for display. Through the target keyword corresponding to the video playing state, a user watching the video is given a way to obtain the associated videos corresponding to the target keyword while the video plays; the user can quickly obtain associated videos meeting their own needs without complicated operations such as searching on their own, which improves the user's video interaction experience.
On the basis of the technical scheme, the method further comprises the following steps: carrying out image recognition on each frame in the target video in advance, and determining the image content in each frame; extracting candidate keywords from the image content of each frame to obtain candidate keywords corresponding to the target video, and correspondingly storing the candidate keywords and the playing state of the target video;
the obtaining the target keyword corresponding to the playing state includes: and acquiring candidate keywords corresponding to the playing state as target keywords.
Image recognition is performed on each frame in the target video in advance to determine the image content in each frame; keywords are extracted from the image content of each frame to obtain the candidate keywords corresponding to each frame; the per-frame candidate keywords are de-duplicated to obtain the candidate keywords corresponding to the target video; and, from the correspondence between the candidate keywords and each frame, the correspondence between the candidate keywords and the playing state of the target video is obtained, so the candidate keywords can be stored against the playing state of the target video. Because the candidate keywords corresponding to the target video are determined by recognizing each frame in advance, when a client playing the target video requests keywords, target keywords can be selected from the candidates and fed back to the client in time. This avoids the problem of target keywords being displayed late because image recognition is run only while the client plays the target video, and avoids repeating image recognition for every client's keyword acquisition request.
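Storing the candidates against the playing state lets the server answer every client request with a lookup instead of re-running recognition. A sketch, with the playing state reduced to a time point in seconds and the recognition results assumed given:

```python
def index_by_play_state(recognized: dict[int, list[str]]) -> dict[int, list[str]]:
    """Build the time-point -> candidate-keyword index from per-time-point
    recognition results, de-duplicating the keywords at each time point."""
    return {second: sorted(set(kws)) for second, kws in recognized.items()}
```

The index is built once per video at ingestion time; each keyword acquisition request then costs a single dictionary lookup, regardless of how many clients are playing the video.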
When image recognition is performed on each frame, the recognition can be based on video tags, and the video tags can be hierarchical, so that a higher level is recognized first and then the specific image content. In the hierarchical tags, for example, a first-level tag may be sports, under which a plurality of second-level tags such as diving, roller skating, sports, body building and skiing may be included; for sports, a third-level tag such as balls may be included, with fourth-level tags such as football, basketball, table tennis, baseball and golf. Taking the video picture shown in fig. 5 as an example, when its content is recognized, it can be determined that the objects in it include people and a ball, that there are multiple people, and that the ball is a basketball; it can then be determined that the scene of the video picture is a basketball court, that is, the keywords corresponding to the video picture may include multiple people, basketball, basketball court, etc.
On the basis of the above technical solution, the obtaining the candidate keyword corresponding to the playing state as the target keyword includes: acquiring user information corresponding to the client; and screening candidate keywords corresponding to the playing state according to the user information, and obtaining candidate keywords associated with the user information as the target keywords.
Wherein, the user information can comprise user portrait information and user historical behavior information. The user historical behavior information may include user play records and user interaction behavior information. The user interaction behavior information may include at least one of like behavior information, comment behavior information, and share behavior information.
The candidate keywords corresponding to the target video are keywords associated with the video content of the target video. For a given user, the video content corresponding to some keywords is of interest and that corresponding to others is not. The candidate keywords corresponding to the playing state can therefore be screened according to the user information, keeping the candidate keywords the current user is likely to click, to obtain the candidate keywords associated with the user information as the target keywords. The target keywords displayed by the client then match the user's interests, avoiding the viewing interference caused by displaying too many keywords the user is not interested in.
Fig. 7 is a block diagram illustrating a video presentation device according to an example embodiment. Referring to fig. 7, the apparatus includes a target keyword acquisition module 71, a first keyword presentation module 72, and an associated video acquisition module 73.
The target keyword obtaining module 71 is configured to obtain a corresponding target keyword according to a playing state of a target video currently played;
the first keyword display module 72 is configured to generate a corresponding keyword tag according to the target keyword, and display the keyword tag in a playing area of the target video;
the associated video acquisition module 73 is configured to acquire an associated video corresponding to the target keyword for presentation in response to a triggering operation performed on the keyword tag.
Optionally, the target keyword obtaining module includes:
a play content determining unit configured to determine play content corresponding to a play state according to the play state of the target video currently played;
and a target keyword acquisition unit configured to select, as the target keyword, a candidate keyword associated with the play content from among the candidate keywords acquired in advance.
Optionally, the apparatus further includes:
the image recognition module is configured to recognize images of each frame in the target video and determine the image content in each frame;
and the keyword extraction module is configured to extract keywords corresponding to the image content from the image content of each frame to obtain candidate keywords corresponding to the target video.
Optionally, the apparatus further includes:
the user information acquisition module is configured to acquire user information corresponding to a user currently playing the target video;
and the keyword screening module is configured to screen the candidate keywords which are acquired in advance and correspond to the target video according to the user information, acquire the candidate keywords associated with the user information, acquire the updated candidate keywords and select the target keywords.
Optionally, the keyword screening module is specifically configured to:
the user information and the candidate keywords corresponding to the target video are taken as input, the candidate keywords are processed through a keyword screening model, and the candidate keywords associated with the user information are output;
wherein the user information at least comprises user portrait information and user historical behavior information;
the keyword screening model is obtained by training a sample set through a machine learning algorithm of a preset type, each sample included in the sample set comprises historical behavior data of keywords displayed in a user account playing video, and the user account has a corresponding user portrait.
Optionally, the playing state includes a playing time point;
the target keyword acquisition module is specifically configured to:
and acquiring a target keyword corresponding to the playing time point according to the playing time point of the currently played target video.
Optionally, the first keyword display module is specifically configured to:
determining a video content area corresponding to the target keyword in the playing area, and displaying the keyword label in the video content area; or
displaying the keyword label at a preset display position of a playing area of the target video.
Optionally, the associated video acquisition module includes:
the related video acquisition unit is configured to search a preset video database according to a target keyword and acquire related videos corresponding to the target keyword;
and the associated video display unit is configured to display the associated video in the corresponding page display area.
Optionally, the apparatus further includes:
and the second keyword display module is configured to display all target keywords corresponding to the target video after the target video is played.
Optionally, the second keyword display module is specifically configured to:
And after the target video is played, displaying a cover layer on the target video, and displaying all target keywords corresponding to the target video on the cover layer.
Optionally, the apparatus further includes:
the keyword acquisition request sending module is configured to send a keyword acquisition request to a server when the target video is played, and trigger the server to feed back the corresponding target keywords; and/or
The associated video acquisition module comprises:
the associated video acquisition request sending unit is configured to send an associated video acquisition request to a server, and trigger the server to feed back an associated video corresponding to the target keyword;
and the associated video display unit is configured to display the associated video.
According to the video display device provided by this embodiment, the target keyword corresponding to the playing state of the currently played target video is obtained, a keyword label corresponding to the target keyword is generated and displayed in the playing area of the target video, and, in response to a triggering operation applied to the keyword label, the associated video corresponding to the target keyword is obtained for display. Through the target keyword corresponding to the video playing state, a user watching the video is given a way to obtain the associated videos corresponding to the target keyword while the video plays; the user can quickly obtain associated videos meeting their own needs without complex operations such as searching on their own, which improves the user's video interaction experience.
Fig. 8 is a block diagram illustrating a video presentation device according to an example embodiment. The device can be applied to a server. Referring to fig. 8, the apparatus includes a keyword acquisition request receiving module 81, a keyword feedback module 82, an associated video acquisition request receiving module 83, and an associated video feedback module 84.
The keyword acquisition request receiving module 81 is configured to receive a keyword acquisition request sent by a client, where the keyword acquisition request includes a playing state of a target video currently played by the client;
the keyword feedback module 82 is configured to obtain a target keyword corresponding to the playing state, and feed back the target keyword to the client, where the target keyword is used for displaying in a playing area of a target video of the client;
the associated video acquisition request receiving module 83 is configured to receive an associated video acquisition request sent by the client, where the associated video acquisition request includes the target keyword;
the associated video feedback module 84 is configured to obtain an associated video corresponding to the target keyword, and to feed the associated video back to the client, where the associated video is used for presentation.
Optionally, the apparatus further includes:
the image recognition module is configured to perform image recognition on each frame in the target video in advance and determine the image content in each frame;
the keyword extraction module is configured to extract candidate keywords from the image content of each frame to obtain candidate keywords corresponding to the target video, and store the candidate keywords in correspondence with the playing state of the target video;
the keyword feedback module comprises:
and a keyword acquisition unit configured to acquire a candidate keyword corresponding to the play state as a target keyword.
Optionally, the keyword obtaining unit is specifically configured to:
acquiring user information corresponding to the client;
and screening candidate keywords corresponding to the playing state according to the user information, and obtaining candidate keywords associated with the user information as the target keywords.
Optionally, the user information includes user portrait information and user historical behavior information. The user portrait information comprises at least one of the user's age, sex, geographic position, and interest type, and the user historical behavior information comprises the user's video-related behavior data within a preset historical period, such as viewing counts for different types of videos, viewing durations, like counts, share counts, comment counts, and the like.
According to the video display device provided by this embodiment of the invention, a keyword acquisition request sent by the client is received, the target keyword corresponding to the playing state of the target video currently played by the client is obtained, and the target keyword is fed back to the client, which can then display it in the playing area of the target video. The displayed target keyword can be operated on by the user to generate an associated video acquisition request, so that after receiving the request the server can feed back the associated video corresponding to the target keyword to the client for display. Through the target keyword corresponding to the video playing state, a user watching the video is given a way to obtain the associated videos corresponding to the target keyword while the video plays; the user can quickly obtain associated videos meeting their own needs without complicated operations such as searching on their own, which improves the user's video interaction experience.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described again here.
Fig. 9 is a block diagram of a terminal according to an exemplary embodiment. For example, terminal 900 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 9, a terminal 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operation of the terminal 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the video presentation method described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the terminal 900. Examples of such data include instructions for any application or method operating on terminal 900, contact data, phonebook data, messages, pictures, videos, and the like. The memory 904 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 906 provides power to the various components of the terminal 900. Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal 900.
The multimedia component 908 includes a screen that provides an output interface between the terminal 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the terminal 900 is in an operation mode, such as a photographing mode or a video mode. Each of the front-facing camera and the rear-facing camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the terminal 900 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 914 includes one or more sensors for providing status assessments of various aspects of the terminal 900. For example, the sensor assembly 914 may detect the on/off state of the terminal 900 and the relative positioning of components, such as the display and keypad of the terminal 900. The sensor assembly 914 may also detect a change in position of the terminal 900 or a component of the terminal 900, the presence or absence of user contact with the terminal 900, the orientation or acceleration/deceleration of the terminal 900, and a change in temperature of the terminal 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the terminal 900 and other devices. The terminal 900 can access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 916 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the video presentation methods described above.
Fig. 10 is a block diagram of a server, according to an example embodiment. Referring to fig. 10, server 1000 includes a processing component 1022 that further includes one or more processors and memory resources represented by memory 1032 for storing instructions, such as application programs, executable by processing component 1022. The application programs stored in memory 1032 may include one or more modules each corresponding to a set of instructions. Further, the processing component 1022 is configured to execute instructions to perform the video presentation methods described above.
The server 1000 may also include a power component 1026 configured to perform power management of the server 1000, a wired or wireless network interface 1050 configured to connect the server 1000 to a network, and an input/output (I/O) interface 1058. The server 1000 may operate based on an operating system stored in the memory 1032, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Fig. 11 is a block diagram of a video presentation system according to an exemplary embodiment. Referring to fig. 11, the video presentation system 1100 includes a terminal 900 and a server 1000, which may interact to implement the presentation of video. Fig. 12 is a flowchart, according to an exemplary embodiment, illustrating a video presentation method implemented through the interaction of the terminal 900 and the server 1000 in the video presentation system 1100. As shown in fig. 12, the video presentation method includes the following steps.
In step S121, the client transmits a keyword acquisition request to the server when playing the target video.
In step S122, the server filters the candidate keywords corresponding to the target video according to the user information of the user currently playing the target video, and takes the candidate keywords associated with the user information as the updated candidate keywords.
The candidate keywords corresponding to the target video are obtained by the server in advance: the server performs image recognition on each frame of the target video to determine the image content of each frame, and then extracts the corresponding keywords from the image content of each frame to obtain the candidate keywords corresponding to the target video.
The process of obtaining the candidate keywords associated with the user information by the server is the same as that of the above embodiment, and will not be described here again.
In step S123, the server sends the updated correspondence between the candidate keywords and the playing status to the client.
In step S124, the client obtains the corresponding target keyword from the updated candidate keywords according to the playing state of the target video currently played.
In step S125, the client generates a corresponding keyword tag according to the target keyword, and displays the keyword tag in the playing area of the target video.
In step S126, the client transmits an associated video acquisition request to the server in response to the trigger operation performed on the keyword tag.
In step S127, the server feeds back the associated video corresponding to the target keyword.
In step S128, the client presents the associated video.
The specific implementation manner of each step in the present exemplary embodiment is the same as the relevant steps in the foregoing embodiments, and will not be repeated here.
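The exchange in steps S121 to S128 can be sketched as follows. This is an illustrative sketch only: the function names, the time-point-to-keyword mapping, and the video index are hypothetical stand-ins, not the actual client or server implementation.

```python
from typing import Dict, List

# Illustrative server-side state (hypothetical data): candidate keywords
# keyed by play time point in seconds, assumed already screened against the
# user's profile, plus a keyword-to-associated-video index.
KEYWORDS_BY_TIME: Dict[int, List[str]] = {10: ["guitar"], 25: ["beach", "surf"]}
VIDEOS_BY_KEYWORD: Dict[str, List[str]] = {
    "guitar": ["vid_101", "vid_102"],
    "beach": ["vid_201"],
    "surf": ["vid_202"],
}

def server_keywords(request: Dict[str, str]) -> Dict[int, List[str]]:
    # Steps S122/S123: return the (already user-filtered) candidate
    # keywords together with their corresponding play states.
    return KEYWORDS_BY_TIME

def server_associated_videos(keyword: str) -> List[str]:
    # Step S127: look up the associated videos for the triggered keyword.
    return VIDEOS_BY_KEYWORD.get(keyword, [])

def client_tags_at(play_time: int) -> List[str]:
    # Steps S121, S124, S125: request keywords from the server, then pick
    # the ones matching the current play state to render as keyword tags.
    keywords = server_keywords({"video_id": "target", "user": "u1"})
    return keywords.get(play_time, [])

tags = client_tags_at(25)                    # tags shown at the 25 s mark
videos = server_associated_videos(tags[0])   # steps S126-S128: tag tapped
```

In this sketch, a tap on the "beach" tag at the 25-second mark would surface `["vid_201"]`; a real deployment would replace the two dictionaries with the server's keyword store and video search backend.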
According to the above embodiments, when the target video is played, the client sends a keyword acquisition request to the server, and the server returns to the client the candidate keywords associated with both the user information and the target video. The client then displays the target keywords corresponding to the current playing state of the target video, and the user can trigger the keyword tag corresponding to a target keyword to obtain the associated videos for that keyword. In this way, keywords relevant to the user information are pushed to and displayed on the client while the video plays, providing viewers with a direct way to obtain the associated videos corresponding to the current playing state. Users can thus quickly obtain associated videos that meet their needs without cumbersome operations such as searching on their own, which improves the video interaction experience.
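The server-side preprocessing described above, recognizing the image content of each frame and extracting keywords from it, can be sketched as follows. Here `recognize_frame` is a hypothetical stand-in for a real image-recognition model and returns canned labels purely for illustration.

```python
from typing import Dict, List

def recognize_frame(frame_index: int) -> List[str]:
    # Hypothetical stand-in for an image-recognition model: returns the
    # labels "detected" in a frame (canned data for illustration only).
    canned = {0: ["dog", "park"], 1: ["dog", "ball"], 2: ["sunset"]}
    return canned.get(frame_index, [])

def extract_candidate_keywords(frame_count: int) -> Dict[str, List[int]]:
    # For each frame, determine its image content and record, per keyword,
    # the frame indices (a stand-in for play states) where it appears, so
    # the keywords can later be matched against the playing state.
    candidates: Dict[str, List[int]] = {}
    for idx in range(frame_count):
        for keyword in recognize_frame(idx):
            candidates.setdefault(keyword, []).append(idx)
    return candidates

candidates = extract_candidate_keywords(3)
```

The resulting mapping (e.g. "dog" at frames 0 and 1) is what the server would store in correspondence with the playing states of the target video.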
In an exemplary embodiment, a storage medium is also provided, such as the memory 904 comprising instructions executable by the processor 920 of the terminal 900 to perform the video presentation method described above, or the memory 1032 comprising instructions executable by the processing component 1022 of the server 1000 to perform the video presentation method described above. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (26)

1. A video presentation method, comprising:
acquiring a corresponding target keyword according to the playing state of the currently played target video;
generating a corresponding keyword label according to the target keyword, and displaying the keyword label in a playing area of the target video;
responding to the triggering operation implemented on the keyword label, and acquiring and displaying the associated video corresponding to the target keyword;
The target keywords are candidate keywords which are selected from candidate keywords associated with user information and are associated with the playing content;
the method further comprises the steps of:
acquiring user information corresponding to a user currently playing the target video;
screening the pre-acquired candidate keywords corresponding to the target video according to the user information, acquiring candidate keywords associated with the user information, and acquiring the updated candidate keywords to select the target keywords, wherein the method comprises the following steps:
the user information and the candidate keywords corresponding to the target video are taken as input, the candidate keywords are processed through a keyword screening model, and the candidate keywords associated with the user information are output;
wherein the user information at least comprises user portrait information and user historical behavior information;
the keyword screening model is obtained by training a sample set through a machine learning algorithm of a preset type, each sample included in the sample set comprises historical behavior data of keywords displayed in a user account playing video, and the user account has a corresponding user portrait.
2. The method according to claim 1, wherein the obtaining the corresponding target keyword according to the playing state of the currently played target video includes:
determining playing content corresponding to the playing state according to the playing state of the currently played target video;
and selecting a candidate keyword associated with the playing content from the pre-acquired candidate keywords as the target keyword.
3. The method as recited in claim 2, further comprising:
carrying out image recognition on each frame in the target video, and determining the image content in each frame;
and extracting keywords corresponding to the image content from the image content of each frame to obtain candidate keywords corresponding to the target video.
4. The method of claim 1, wherein the play status comprises a play time point;
the obtaining the corresponding target keyword according to the playing state of the currently played target video includes:
and acquiring a target keyword corresponding to the playing time point according to the playing time point of the currently played target video.
5. The method of claim 1, wherein displaying the keyword tag in the play area of the target video comprises:
determining a video content area corresponding to the target keyword in the playing area, and displaying the keyword label in the video content area; or
displaying the keyword label at a preset display position of the playing area of the target video.
6. The method of claim 1, wherein the obtaining the associated video corresponding to the target keyword for presentation comprises:
searching in a preset video database according to a target keyword, and acquiring an associated video corresponding to the target keyword;
and displaying the associated video in a corresponding page display area.
7. The method as recited in claim 1, further comprising:
and after the target video is played, displaying all target keywords corresponding to the target video.
8. The method of claim 7, wherein displaying all target keywords corresponding to the target video comprises:
displaying a cover layer on the target video, and displaying all target keywords corresponding to the target video on the cover layer.
9. The method as recited in claim 1, further comprising:
sending a keyword acquisition request to a server when the target video is played, and triggering the server to feed back the corresponding target keyword; and/or
wherein the obtaining the associated video corresponding to the target keyword for display comprises:
sending an associated video acquisition request to a server, and triggering the server to feed back an associated video corresponding to the target keyword;
and displaying the associated video.
10. A video presentation method, comprising:
receiving a keyword acquisition request sent by a client, wherein the keyword acquisition request comprises the playing state of a target video currently played by the client;
acquiring a target keyword corresponding to the playing state, and feeding the target keyword back to the client, wherein the target keyword is used for displaying in a playing area of a target video of the client;
receiving an associated video acquisition request sent by the client, wherein the associated video acquisition request comprises the target keyword;
acquiring an associated video corresponding to the target keyword, and feeding the associated video back to the client, wherein the associated video is used for displaying in the client;
The target keywords are candidate keywords which are selected from candidate keywords associated with user information and are associated with the playing content;
the method further comprises the steps of:
acquiring user information corresponding to the client;
screening candidate keywords corresponding to the playing state according to the user information, and obtaining candidate keywords associated with the user information as the target keywords;
the candidate keywords associated with the user information are obtained by taking the user information and the candidate keywords corresponding to the target video as inputs and processing the candidate keywords through a keyword screening model;
wherein the user information comprises user portrait information and user historical behavior information;
the keyword screening model is obtained by training a sample set through a machine learning algorithm of a preset type, each sample included in the sample set comprises historical behavior data of keywords displayed in a user account playing video, and the user account has a corresponding user portrait.
11. The method as recited in claim 10, further comprising:
carrying out image recognition on each frame in the target video in advance, and determining the image content in each frame;
extracting candidate keywords from the image content of each frame to obtain candidate keywords corresponding to the target video, and correspondingly storing the candidate keywords and the playing state of the target video;
the obtaining the target keyword corresponding to the playing state includes:
and acquiring candidate keywords corresponding to the playing state as target keywords.
12. A video display apparatus, comprising:
the target keyword acquisition module is configured to acquire corresponding target keywords according to the playing state of the currently played target video;
the first keyword display module is configured to generate a corresponding keyword label according to the target keyword and display the keyword label in a playing area of the target video;
the associated video acquisition module is configured to respond to the triggering operation applied to the keyword label, and acquire associated videos corresponding to the target keywords for display;
the target keywords are candidate keywords which are selected from candidate keywords associated with user information and are associated with the playing content;
the apparatus further comprises:
the user information acquisition module is configured to acquire user information corresponding to a user currently playing the target video;
The keyword screening module is configured to screen the candidate keywords which are acquired in advance and correspond to the target video according to the user information, acquire the candidate keywords associated with the user information, acquire the updated candidate keywords and select the target keywords;
the keyword screening module is specifically configured to:
the user information and the candidate keywords corresponding to the target video are taken as input, the candidate keywords are processed through a keyword screening model, and the candidate keywords associated with the user information are output;
wherein the user information at least comprises user portrait information and user historical behavior information;
the keyword screening model is obtained by training a sample set through a machine learning algorithm of a preset type, each sample included in the sample set comprises historical behavior data of keywords displayed in a user account playing video, and the user account has a corresponding user portrait.
13. The apparatus of claim 12, wherein the target keyword acquisition module comprises:
a play content determining unit configured to determine play content corresponding to a play state according to the play state of the target video currently played;
and a target keyword acquisition unit configured to select, as the target keyword, a candidate keyword associated with the play content from among the candidate keywords acquired in advance.
14. The apparatus of claim 13, wherein the apparatus further comprises:
the image recognition module is configured to recognize images of each frame in the target video and determine the image content in each frame;
and the keyword extraction module is configured to extract keywords corresponding to the image content from the image content of each frame to obtain candidate keywords corresponding to the target video.
15. The apparatus of claim 12, wherein the play status comprises a play time point;
the target keyword acquisition module is specifically configured to:
and acquiring a target keyword corresponding to the playing time point according to the playing time point of the currently played target video.
16. The apparatus of claim 12, wherein the first keyword presentation module is specifically configured to:
determining a video content area corresponding to the target keyword in the playing area, and displaying the keyword label in the video content area; or
displaying the keyword label at a preset display position of the playing area of the target video.
17. The apparatus of claim 12, wherein the associated video acquisition module comprises:
the related video acquisition unit is configured to search a preset video database according to a target keyword and acquire related videos corresponding to the target keyword;
and the associated video display unit is configured to display the associated video in the corresponding page display area.
18. The apparatus of claim 12, wherein the apparatus further comprises:
and the second keyword display module is configured to display all target keywords corresponding to the target video after the target video is played.
19. The apparatus of claim 18, wherein the second keyword display module is specifically configured to:
and after the target video is played, displaying a cover layer on the target video, and displaying all target keywords corresponding to the target video on the cover layer.
20. The apparatus of claim 12, wherein the apparatus further comprises:
the keyword acquisition request sending module is configured to send a keyword acquisition request to a server when the target video is played, and trigger the server to feed back the corresponding target keywords; and/or
The associated video acquisition module comprises:
the associated video acquisition request sending unit is configured to send an associated video acquisition request to a server, and trigger the server to feed back an associated video corresponding to the target keyword;
and the associated video display unit is configured to display the associated video.
21. A video display apparatus, comprising:
the system comprises a keyword acquisition request receiving module, a keyword acquisition request processing module and a keyword processing module, wherein the keyword acquisition request receiving module is configured to receive a keyword acquisition request sent by a client, and the keyword acquisition request comprises the playing state of a target video currently played by the client;
the keyword feedback module is configured to acquire a target keyword corresponding to the playing state and feed the target keyword back to the client, wherein the target keyword is used for displaying in a playing area of a target video of the client;
the associated video acquisition request receiving module is configured to receive an associated video acquisition request sent by the client, wherein the associated video acquisition request comprises the target keyword;
the associated video feedback module is configured to acquire an associated video corresponding to the target keyword and feed the associated video back to the client, wherein the associated video is used for being displayed in the client;
The target keywords are candidate keywords which are selected from candidate keywords associated with user information and are associated with the playing content;
the candidate keywords are obtained through a module in the video display device, wherein the module is used for executing the following processes:
acquiring user information corresponding to the client;
screening candidate keywords corresponding to the playing state according to the user information, and obtaining candidate keywords associated with the user information as the target keywords;
the candidate keywords associated with the user information are obtained by taking the user information and the candidate keywords corresponding to the target video as inputs and processing the candidate keywords through a keyword screening model;
wherein the user information comprises user portrait information and user historical behavior information;
the keyword screening model is obtained by training a sample set through a machine learning algorithm of a preset type, each sample included in the sample set comprises historical behavior data of keywords displayed in a user account playing video, and the user account has a corresponding user portrait.
22. The apparatus of claim 21, wherein the apparatus further comprises:
The image recognition module is configured to perform image recognition on each frame in the target video in advance and determine the image content in each frame;
the keyword extraction module is configured to extract candidate keywords from the image content of each frame to obtain candidate keywords corresponding to the target video, and store the candidate keywords in correspondence with the playing state of the target video;
the keyword feedback module comprises:
and a keyword acquisition unit configured to acquire a candidate keyword corresponding to the play state as a target keyword.
23. A terminal, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video presentation method of any one of claims 1 to 9.
24. A server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video presentation method of any one of claims 10 to 11.
25. A video display system, comprising:
A terminal as claimed in claim 23 and a server as claimed in claim 24.
26. A storage medium storing instructions which, when executed by a processor of a terminal, enable the terminal to perform the video presentation method of any one of claims 1 to 9, or which, when executed by a processor of a server, enable the server to perform the video presentation method of any one of claims 10 to 11.
CN202010437842.9A 2020-05-21 2020-05-21 Video display method, device, terminal, server, system and storage medium Active CN111753135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010437842.9A CN111753135B (en) 2020-05-21 2020-05-21 Video display method, device, terminal, server, system and storage medium


Publications (2)

Publication Number Publication Date
CN111753135A (en) 2020-10-09
CN111753135B (en) 2024-02-06

Family

ID=72673919


Country Status (1)

Country Link
CN (1) CN111753135B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528076A (en) * 2020-12-18 2021-03-19 浙江同花顺智能科技有限公司 Video recommendation method, device, equipment and storage medium
CN113079417B (en) * 2021-03-25 2023-01-17 北京百度网讯科技有限公司 Method, device and equipment for generating bullet screen and storage medium
CN113111221A (en) * 2021-05-13 2021-07-13 北京字节跳动网络技术有限公司 Video-based searching method and device
CN113596562B (en) * 2021-08-06 2023-03-28 北京字节跳动网络技术有限公司 Video processing method, apparatus, device and medium
CN113779381B (en) * 2021-08-16 2023-09-26 百度在线网络技术(北京)有限公司 Resource recommendation method, device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600343A (en) * 2016-12-30 2017-04-26 中广热点云科技有限公司 Method and system for managing online video advertisement associated with video content
CN107180058A (en) * 2016-03-11 2017-09-19 百度在线网络技术(北京)有限公司 A kind of method and apparatus for being inquired about based on caption information
CN109558513A (en) * 2018-11-30 2019-04-02 百度在线网络技术(北京)有限公司 A kind of content recommendation method, device, terminal and storage medium
CN110121093A (en) * 2018-02-06 2019-08-13 优酷网络技术(北京)有限公司 The searching method and device of target object in video
CN110688527A (en) * 2019-09-27 2020-01-14 北京达佳互联信息技术有限公司 Video recommendation method and device, storage medium and electronic equipment
CN111163348A (en) * 2020-01-08 2020-05-15 百度在线网络技术(北京)有限公司 Searching method and device based on video playing




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant