CN110730382A - Video interaction method, device, terminal and storage medium - Google Patents


Info

Publication number: CN110730382A
Application number: CN201910927844.3A
Authority: CN (China)
Prior art keywords: target, video, attitude, user, tag
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN110730382B (en)
Inventors: 陈纯, 马小坤, 王磊
Current assignee: Beijing Dajia Internet Information Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd; the application has been granted and published as CN110730382B.

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213: Monitoring of end-user related data
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/47: End-user applications
    • H04N21/475: End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data

Abstract

The present disclosure relates to a video interaction method, apparatus, terminal and storage medium. The video interaction method includes: playing a target video; displaying, in a playing interface of the target video, a target attitude label corresponding to the target video, where the target attitude label represents a user's evaluation attitude toward the target video; judging whether a trigger operation on the target attitude label by a target user of the terminal has been received; and, when the trigger operation of the target user on the target attitude label is received, executing a response operation corresponding to the trigger operation. By displaying attitude labels in the video playing interface, the technical scheme provided by the embodiments of the present disclosure allows a user to interact with the terminal through the attitude labels while watching a video, enriching the modes of interaction between the user and the terminal.

Description

Video interaction method, device, terminal and storage medium
Technical Field
The present application relates to the field of video technologies, and in particular, to a video interaction method, an apparatus, a terminal, and a storage medium.
Background
With the continuous development of technology, terminals represented by mobile phones have become ubiquitous, and users can access a wide range of functions through the applications installed on them. For example, many users watch their favorite videos online over their phone's network connection.
In the related art, however, a user watching a video can only post comments in the comment area, or like or reply to comments posted by other users. The user's ways of interacting with the terminal while watching a video are therefore limited.
Disclosure of Invention
The present disclosure provides a video interaction method, apparatus, electronic device and storage medium. The technical scheme of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video interaction method, which is applied to a terminal, and the method includes:
playing the target video;
displaying, in a playing interface of the target video, a target attitude label corresponding to the target video, wherein the target attitude label is used for representing a user's evaluation attitude toward the target video;
judging whether a trigger operation on the target attitude tag of the target video by a target user of the terminal has been received;
and when the trigger operation of the target user on the target attitude tag is received, executing a response operation corresponding to the trigger operation.
Optionally, before the playing the target video, the method further includes:
acquiring evaluation information of a user on the target video;
classifying the evaluation information according to evaluation attitudes, wherein different types of evaluation information identify different evaluation attitudes;
and generating at least one target attitude tag, wherein each target attitude tag corresponds to the same category of evaluation information.
Optionally, before the playing the target video, the method further includes:
generating at least one target attitude tag according to attribute information of the target video, wherein the attribute information comprises at least one of the following: the video content of the target video, the video field to which the target video belongs, and the video title of the target video.
Optionally, the displaying, in the play interface of the target video, the target attitude tag corresponding to the target video includes:
displaying a plurality of target attitude labels corresponding to the target video simultaneously in the playing interface of the target video; or,
displaying a plurality of target attitude labels corresponding to the target video in turn in the playing interface of the target video.
Optionally, a like control is displayed on the target attitude label;
the judging whether the triggering operation of the target attitude tag of the target video by the target user using the terminal is received or not comprises the following steps:
judging whether a trigger operation by the target user on the like control on the target attitude label has been received;
and when the trigger operation of the target user on the like control on the target attitude label is received, judging that the trigger operation of the target user on the target attitude label has been received.
Optionally, the executing the response operation corresponding to the trigger operation includes:
and switching the state of the like control from an un-liked state to a liked state, and changing the display mode of the target attitude label on the playing interface.
Optionally, the judging whether the trigger operation on the target attitude tag of the target video by the target user of the terminal has been received includes:
judging whether a trigger operation by the target user on a target area of the target attitude tag has been received, wherein the target area is the region of the target attitude label other than the like control;
and when the trigger operation of the target user on the target area is received, judging that the trigger operation of the target user on the target attitude tag has been received.
Optionally, the executing the response operation corresponding to the trigger operation includes:
and displaying a target video list corresponding to the target attitude label, wherein each video in the target video list has the target attitude label.
Optionally, the playing interface further includes a preset attitude control;
the method further comprises the following steps:
and displaying at least one recommended attitude label in response to receiving a touch operation of the target user on the attitude control.
Optionally, the recommended at least one attitude label is at least one attitude label whose popularity is greater than a preset popularity threshold;
the displaying of the recommended at least one attitude label includes:
displaying the obtained at least one attitude label in descending order of popularity.
According to a second aspect of the embodiments of the present disclosure, there is provided a video interaction apparatus, which is applied to a terminal, the apparatus including:
a video playing module configured to execute playing of a target video;
the tag display module is configured to display a target attitude tag corresponding to the target video in a playing interface of the target video, wherein the target attitude tag is used for representing an evaluation attitude of a user on the target video;
an operation judgment module configured to judge whether a trigger operation on the target attitude tag of the target video by a target user of the terminal has been received;
and an operation execution module configured to execute a response operation corresponding to the trigger operation when the operation judgment module receives the trigger operation of the target user on the target attitude tag.
Optionally, the apparatus further comprises:
the information acquisition module is configured to acquire evaluation information of a user on the target video before the target video is played;
an information classification module configured to perform classification of the evaluation information according to evaluation attitudes, wherein different categories of evaluation information identify different evaluation attitudes;
a tag obtaining module configured to perform generation of at least one target attitude tag, where each target attitude tag corresponds to the same category of evaluation information.
Optionally, the apparatus further comprises:
a tag generation module configured to execute, before the target video is played, generating at least one target attitude tag according to attribute information of the target video, where the attribute information includes at least one of: the video content of the target video, the video field to which the target video belongs, and the video title of the target video.
Optionally, the tag display module is configured to perform:
display a plurality of target attitude labels corresponding to the target video simultaneously in the playing interface of the target video; or,
display a plurality of target attitude labels corresponding to the target video in turn in the playing interface of the target video.
Optionally, a like control is displayed on the target attitude label;
the operation judgment module is configured to execute:
judge whether a trigger operation by the target user on the like control on the target attitude label has been received;
and when the trigger operation of the target user on the like control on the target attitude label is received, judge that the trigger operation of the target user on the target attitude label has been received.
Optionally, the operation execution module is configured to execute:
switch the state of the like control from an un-liked state to a liked state, and change the display mode of the target attitude label on the playing interface.
Optionally, the operation determining module is configured to perform:
judge whether a trigger operation by the target user on a target area of the target attitude tag has been received, wherein the target area is the region of the target attitude label other than the like control;
and when the trigger operation of the target user on the target area is received, judge that the trigger operation of the target user on the target attitude tag has been received.
Optionally, the operation execution module is configured to execute:
and displaying a target video list corresponding to the target attitude label, wherein each video in the target video list has the target attitude label.
Optionally, the playing interface further includes a preset attitude control;
the device further comprises:
and an operation response module configured to display at least one recommended attitude label in response to receiving a touch operation of the target user on the attitude control.
Optionally, the recommended at least one attitude label is at least one attitude label whose popularity is greater than a preset popularity threshold;
the operation response module is specifically configured to:
display the obtained at least one attitude label in descending order of popularity.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video interaction method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the video interaction method of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product containing instructions that, when run on an electronic device, cause the electronic device to implement the video interaction method of the first aspect.
According to the technical scheme provided by the embodiments of the present disclosure, a terminal plays a target video; displays, in a playing interface of the target video, a target attitude label corresponding to the target video, where the target attitude label represents the user's evaluation attitude toward the target video; judges whether a trigger operation on the target attitude label by a target user of the terminal has been received; and, when the trigger operation is received, executes a response operation corresponding to it. By displaying attitude labels in the video playing interface, the scheme allows a user to interact with the terminal through the attitude labels while watching a video, enriching the modes of interaction between the user and the terminal.
Drawings
FIG. 1 is a flow diagram illustrating a video interaction method in accordance with an exemplary embodiment;
FIG. 2 is a diagram illustrating an object attitude tag on a play interface in accordance with an illustrative embodiment;
FIG. 3 is a diagram illustrating the target attitude label of FIG. 2 after it has been liked, according to an illustrative embodiment;
FIG. 4 is a diagram illustrating a target video list presented after the target area of the target attitude label of FIG. 2 has been triggered, according to an illustrative embodiment;
FIG. 5 is a schematic diagram illustrating attitude labels with higher popularity displayed after a touch operation on a preset attitude control in the play interface, according to an exemplary embodiment;
FIG. 6 is a diagram illustrating a personal data page in a play interface, according to an exemplary embodiment;
FIG. 7 is a flow diagram illustrating another video interaction method in accordance with an illustrative embodiment;
FIG. 8 is a flow diagram illustrating another video interaction method in accordance with an illustrative embodiment;
FIG. 9 is a block diagram illustrating a video interaction device, according to an example embodiment;
FIG. 10 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
fig. 11 is a block diagram illustrating another video interaction device, according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a video interaction method according to an exemplary embodiment, where the method is applied to a terminal, and the terminal may be a smartphone or a tablet computer, and the terminal is not particularly limited in the embodiments of the present disclosure.
As shown in fig. 1, the method may include the following steps.
In step S11, the target video is played.
The target video can be any video in a video library. When a user is interested in a video in the library and chooses to watch it, the terminal plays that video, which becomes the target video. The embodiments of the present disclosure do not specifically limit the target video.
In step S12, a target attitude label corresponding to the target video is displayed in the play interface of the target video, and the target attitude label is used to represent the evaluation attitude of the user on the target video.
The target attitude tag corresponding to the target video may be generated according to users' evaluation information on the target video. Alternatively, the target attitude tag may be generated according to attribute information of the target video. The target attitude tag may include: the name of the target attitude label, an expression corresponding to the label, a like control, and the number of likes users have given the label.
As shown in fig. 2, the name of the target attitude label is "this is just what one likes"; the corresponding expression is the one shown on the left side of the label; the like control is the thumbs-up control on the right side of the label; and the number of likes users have given the label is 2345. The other texts displayed on the page, such as "this is just what one likes", "I am very sour!", and "grow old together", are comments posted by users.
Specifically, in one embodiment, the terminal may classify the evaluation information by evaluation attitude into different categories, where the evaluation information of each category corresponds to one target attitude tag.
In another embodiment, the terminal may generate the target attitude tag according to the attribute information of the target video. Specifically, the attribute information of the target video may be video content of the target video, a video field to which the target video belongs, and a video title of the target video. The terminal can presume the feeling of the user watching the target video according to the attribute information of the target video, so as to generate a target attitude tag capable of expressing the feeling of the user.
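The mapping from attribute information to attitude tags described above can be sketched as follows. This is a hypothetical Python illustration only: the domain rules, title keywords, label names, and function name are the editor's assumptions, not the patent's actual algorithm.

```python
# Hypothetical sketch: derive candidate attitude labels from a video's
# attribute information (its domain/field and title). All rules and label
# names below are illustrative assumptions.

DOMAIN_LABELS = {
    "food": "sweet to me",
    "travel": "see the world together",
}

TITLE_KEYWORD_LABELS = {
    "wedding": "grow old together",
}

def labels_from_attributes(domain: str, title: str) -> list[str]:
    """Guess attitude labels a viewer might feel, from video metadata."""
    labels = []
    if domain in DOMAIN_LABELS:
        labels.append(DOMAIN_LABELS[domain])
    for keyword, label in TITLE_KEYWORD_LABELS.items():
        if keyword in title.lower():
            labels.append(label)
    # Fall back to a generic label when nothing matched.
    return labels or ["this is just what one likes"]
```

In practice the terminal would use a far richer model of the video's content; the sketch only shows the shape of the attribute-to-label step.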
As an implementation manner of the embodiment of the present disclosure, displaying, in a playing interface of a target video, a target attitude tag corresponding to the target video may include:
displaying a plurality of target attitude labels corresponding to the target video simultaneously in the playing interface of the target video;
or,
displaying a plurality of target attitude labels corresponding to the target video in turn in the playing interface of the target video.
In this implementation, since different users usually have somewhat different viewing experiences of the same target video, the target video usually corresponds to a plurality of target attitude tags. In practice, the plurality of target attitude tags may be displayed simultaneously in the playing interface. Alternatively, only one target attitude tag is displayed at a time, and the tags are displayed in turn; when different video frames of the target video are played, the target attitude tag corresponding to the current frame can be displayed. The interval between displaying two adjacent target attitude tags can be set as required.
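The "in turn" display policy can be sketched as a pure scheduling function. A minimal Python illustration, assuming a fixed rotation interval (the function name and default interval are hypothetical):

```python
def label_to_display(labels: list[str], playback_s: float,
                     interval_s: float = 3.0) -> str:
    """Return the attitude label to show at the given playback time,
    cycling through the labels one at a time at a fixed interval.
    (Simultaneous display would simply render the whole list instead.)"""
    if not labels:
        raise ValueError("no attitude labels to display")
    return labels[int(playback_s // interval_s) % len(labels)]
```

With three labels and a 3-second interval, the function cycles A, B, C, A, ... as playback advances.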
In step S13, it is determined whether a trigger operation of the target user using the terminal on the target attitude tag of the target video has been received. If so, go to step S14.
A like control can be displayed on the target attitude label. In this case, the area of the target attitude label can be divided into the area occupied by the like control and a target area consisting of the rest of the label.
Therefore, in an embodiment, determining whether a trigger operation of a target user using a terminal on a target attitude tag of a target video is received may include:
judging whether a trigger operation by the target user of the terminal on the like control on the target attitude label has been received; and,
when the trigger operation of the target user on the like control on the target attitude label is received, judging that the trigger operation of the target user on the target attitude label has been received.
In another embodiment, the determining whether a trigger operation of a target user using a terminal on a target attitude tag of a target video is received may include:
judging whether a trigger operation by the target user on a target area of the target attitude tag has been received, wherein the target area is the region of the target attitude tag other than the like control; and,
and when receiving the trigger operation of the target user on the target area, judging that the trigger operation of the target user on the target attitude tag is received.
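The two judgments above amount to hit-testing a tap against the like control's region versus the rest of the label. A minimal sketch, assuming axis-aligned rectangles in screen coordinates (the rectangle representation and names are assumptions):

```python
Rect = tuple[float, float, float, float]  # (x, y, width, height)

def _inside(x: float, y: float, rect: Rect) -> bool:
    rx, ry, rw, rh = rect
    return rx <= x < rx + rw and ry <= y < ry + rh

def classify_tap(x: float, y: float, label_rect: Rect, like_rect: Rect) -> str:
    """Dispatch a tap to the like control, the target area (the rest of
    the attitude label), or neither."""
    if _inside(x, y, like_rect):
        return "like_control"
    if _inside(x, y, label_rect):
        return "target_area"
    return "none"
```

Checking the like control first matters, since its rectangle lies inside the label's rectangle.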
In step S14, a response operation corresponding to the trigger operation is performed.
As can be seen from the above description, the terminal may receive either a trigger operation by the target user on the like control on the target attitude tag or a trigger operation by the target user on the target area.
In one embodiment, the terminal receives a trigger operation of the target user on the like control on the target attitude tag. In this case, executing the response operation corresponding to the trigger operation includes:
switching the state of the like control from an un-liked state to a liked state, and changing the display mode of the target attitude label on the playing interface.
In this embodiment, the like control can be switched from a hollow (un-liked) state to a solid (liked) state, and the display mode of the target attitude label on the playing interface is changed accordingly.
The display mode of the target attitude tag on the play interface can be changed in various ways.
In an implementation manner, the implementation manner for changing the display manner of the target attitude tag on the play interface may be: and reducing the display area of the target attitude label on the playing interface. For example, as shown in fig. 3, the target attitude tag is obviously reduced in the display area of the playing interface compared with fig. 2.
Moreover, as can be seen from fig. 3, the number of works carrying the target attitude label, here 235, may also be shown in the play interface. In addition, when the user clicks or double-clicks the like control again, its state can be switched back from the liked state to the un-liked state.
In another implementation manner, the implementation manner for changing the display manner of the target attitude tag on the play interface may be: and changing the display position of the target attitude label in the playing interface.
In another implementation manner, the implementation manner for changing the display manner of the target attitude tag on the play interface may be: changing the display content in the target attitude tag, for example, the word "liked" may be displayed in the target attitude tag.
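The like-control response (toggling its state and changing the label's display mode) can be sketched in a few lines. The field names and the "full"/"compact" display modes below are hypothetical stand-ins for the states the patent describes:

```python
from dataclasses import dataclass

@dataclass
class AttitudeLabel:
    name: str
    liked: bool = False
    like_count: int = 0
    display_mode: str = "full"  # "full" or "compact" (reduced area, as in fig. 3)

def on_like_control_tapped(label: AttitudeLabel) -> None:
    """Toggle between un-liked and liked, updating the like count and
    the label's display mode on the playing interface."""
    if not label.liked:
        label.liked = True
        label.like_count += 1
        label.display_mode = "compact"
    else:
        label.liked = False
        label.like_count -= 1
        label.display_mode = "full"
```

Tapping twice returns the label to its original state, matching the click/double-click un-like behavior described above.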
In another embodiment, the terminal receives a trigger operation of a target user on a target area. At this time, a response operation corresponding to the trigger operation is performed, including:
and displaying a target video list corresponding to the target attitude label, wherein each video in the target video list has the target attitude label.
In this embodiment, after receiving a trigger operation of the target user on the target area, the terminal may display, on the play interface, a target video list corresponding to the target attitude tag. The first video in the list may be the hottest video of the day; that is, videos with higher popularity are displayed toward the front of the list and videos with lower popularity toward the back, as shown in fig. 4. As can be seen from fig. 4, in addition to the target video list, the playing interface may also display the name of the target attitude tag, its expression, its number of likes, the number of works carrying the tag, and so on.
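Building the popularity-ordered target video list reduces to a filter plus a sort. A hypothetical Python illustration, assuming each video is a dict with "tags" and "heat" keys (the keys are assumptions):

```python
def target_video_list(videos: list[dict], label_name: str) -> list[dict]:
    """Videos carrying the given attitude label, most popular first."""
    tagged = [v for v in videos if label_name in v["tags"]]
    return sorted(tagged, key=lambda v: v["heat"], reverse=True)
```

The stable descending sort puts the hottest video at the head of the list, as fig. 4 shows.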
In an embodiment, the video interaction method provided by the embodiment of the present disclosure may further include:
in response to receiving a touch operation of the target user on the attitude control,
displaying at least one recommended attitude label.
In this embodiment, after receiving a touch operation of the target user on a preset attitude control in the play interface, the terminal may respond by displaying at least one recommended attitude label in the play interface. The at least one attitude label may be recommended according to the popularity of the attitude labels.
As an implementation manner of the embodiments of the present disclosure, the recommended at least one attitude label is at least one attitude label whose popularity is greater than a preset popularity threshold. In this case, displaying the recommended at least one attitude label may include:
displaying the obtained at least one attitude label in descending order of popularity. The preset popularity threshold may be set according to the actual situation, which is not specifically limited in the embodiments of the present disclosure.
In this implementation, the obtained attitude labels may be displayed ranked by popularity, with more popular labels shown first in the play interface. As shown in FIG. 5, the displayed attitude labels, from most to least popular, are: "this is just what one likes"; "the crazy appearance of you is real and beautiful"; "sweet to me"; "see the world together"; and "this is the true net red".
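The recommendation rule (keep only labels hotter than the preset threshold, then show them in descending popularity) can be sketched as follows; the (name, heat) pair representation is an assumption:

```python
def recommend_labels(labels: list[tuple[str, float]],
                     preset_heat: float) -> list[str]:
    """Names of attitude labels whose popularity exceeds the preset
    threshold, ordered from most to least popular."""
    hot = [(name, heat) for name, heat in labels if heat > preset_heat]
    return [name for name, _ in sorted(hot, key=lambda p: p[1], reverse=True)]
```

Labels at or below the threshold are simply never surfaced, so the recommended list can be shorter than the full label set.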
In addition, from the playing interface of the target video, the target user may also enter a profile page. As shown in fig. 6, the profile page may include a likes control; when the target user clicks it, the profile page displays the attitude labels the target user has liked, together with the corresponding video lists.
The attitude labels can be arranged in reverse chronological order of the target user's most recent like under each label, yielding an attitude label list.
For each attitude label, all videos under that label can be sorted in reverse order by like time or like count. Each video may show a dynamic cover and the number of likes it has received under that attitude label.
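The profile-page ordering just described, labels sorted by the user's most recent like and videos under each label newest-first, can be sketched as follows. This is a hypothetical illustration that assumes like events are (label, video, timestamp) tuples:

```python
def profile_like_page(likes: list[tuple[str, str, int]]) -> list[tuple[str, list[str]]]:
    """Group liked videos by attitude label. Labels are ordered by the
    time of the user's most recent like under them; within a label,
    videos are ordered newest-first."""
    latest: dict[str, int] = {}
    by_label: dict[str, list[tuple[int, str]]] = {}
    for label, video, ts in likes:
        latest[label] = max(latest.get(label, ts), ts)
        by_label.setdefault(label, []).append((ts, video))
    ordered = sorted(latest, key=latest.get, reverse=True)
    return [(lab, [v for _, v in sorted(by_label[lab], reverse=True)])
            for lab in ordered]
```

Sorting within a label by like count instead of like time would only change the inner sort key.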
According to the technical scheme provided by the embodiments of the present disclosure, a terminal plays a target video; displays, in a playing interface of the target video, a target attitude label corresponding to the target video, where the target attitude label represents the user's evaluation attitude toward the target video; judges whether a trigger operation on the target attitude label by a target user of the terminal has been received; and, when the trigger operation is received, executes a response operation corresponding to it. By displaying attitude labels in the video playing interface, the scheme allows a user to interact with the terminal through the attitude labels while watching a video, enriching the modes of interaction between the user and the terminal.
Fig. 7 is a flow chart illustrating a video interaction method according to an example embodiment.
As shown in fig. 7, the method may include the following steps.
In step S71, evaluation information of the user on the target video is acquired.
The user can evaluate the target video in the process of watching the video so as to express the watching experience of the user. Therefore, the terminal can obtain evaluation information of a large number of users on the target video.
In step S72, the evaluation information is classified according to evaluation attitudes, where different categories of evaluation information identify different evaluation attitudes.
After acquiring a large amount of evaluation information, the terminal may classify the evaluation information according to evaluation attitude, that is, divide the evaluation information into different categories by attitude. For example, evaluation information expressing a positive attitude may form one category, neutral evaluation information another, and negative evaluation information a third.
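A minimal keyword-based sketch of this three-way classification is given below. A production system would more likely use a trained sentiment model; the word lists here are placeholders, not part of the disclosure:

```python
# Toy attitude classifier: bucket comments into positive / neutral / negative.
POSITIVE = {"love", "great", "beautiful", "sweet"}
NEGATIVE = {"boring", "bad", "fake"}

def classify(comment):
    """Assign one evaluation attitude to a single comment."""
    words = set(comment.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def group_by_attitude(comments):
    """Partition comments into the three attitude categories."""
    groups = {"positive": [], "neutral": [], "negative": []}
    for c in comments:
        groups[classify(c)].append(c)
    return groups
```

Each resulting category could then be assigned one target attitude tag, as in step S73.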
In step S73, at least one target attitude label is generated, wherein each target attitude label corresponds to the same category of rating information.
After the terminal classifies the evaluation information, a target attitude tag can be determined for each category of evaluation information.
In step S74, the target video is played.
In step S75, a target attitude label corresponding to the target video is displayed in the play interface of the target video, and the target attitude label is used to represent the evaluation attitude of the user on the target video.
In step S76, it is determined whether a trigger operation of the target user using the terminal on the target attitude tag of the target video has been received. If so, go to step S77.
In step S77, a response operation corresponding to the trigger operation is performed.
Fig. 8 is a flow chart illustrating a video interaction method according to an example embodiment.
As shown in fig. 8, the method may include the following steps.
In step S81, at least one target attitude tag is generated according to the attribute information of the target video, where the attribute information includes at least one of: the video content of the target video, the video field to which the target video belongs, and the video title of the target video.
Specifically, the terminal may generate the target attitude tag for the target video according to the attribute information of the target video. For example, the terminal may extract keywords from the video content of the target video and generate the target attitude tag from those keywords. For instance, if the video content of a certain video is a sweet romance, the target attitude tag of that video may be: This is just what one likes.
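One simple way to realize this keyword-to-tag step is a lookup table, sketched below; the keyword table and tag names are illustrative assumptions:

```python
# Hypothetical mapping from attribute keywords to attitude tags.
TAG_BY_KEYWORD = {
    "love": "This is just what one likes",
    "travel": "See the world together",
    "dance": "The crazy you is truly beautiful",
}

def generate_tags(title, field, content_keywords):
    """Collect the tags triggered by any attribute field of the video."""
    text = " ".join([title, field, *content_keywords]).lower()
    return [tag for kw, tag in TAG_BY_KEYWORD.items() if kw in text]
```

In practice the keywords would come from content analysis of the video rather than a hand-written table.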
In step S82, the target video is played.
In step S83, a target attitude label corresponding to the target video is displayed in the play interface of the target video, and the target attitude label is used to represent the evaluation attitude of the user on the target video.
In step S84, it is determined whether a trigger operation of the target user using the terminal on the target attitude tag of the target video has been received. If so, go to step S85.
In step S85, a response operation corresponding to the trigger operation is performed.
Fig. 9 is a diagram illustrating a video interaction apparatus applied to a terminal according to an exemplary embodiment, the apparatus including:
a video playing module 910 configured to perform playing of a target video;
a tag display module 920, configured to perform displaying, in a playing interface of the target video, a target attitude tag corresponding to the target video, where the target attitude tag is used to represent an evaluation attitude of a user on the target video;
an operation determining module 930 configured to perform determining whether a trigger operation of a target user using a terminal on the target attitude tag of the target video is received;
an operation executing module 940, configured to execute a response operation corresponding to the trigger operation when the operation judging module receives the trigger operation of the target user on the target attitude tag.
Optionally, the apparatus further comprises:
the information acquisition module is configured to acquire evaluation information of a user on the target video before the target video is played;
an information classification module configured to perform classification of the evaluation information according to evaluation attitudes, wherein different categories of evaluation information identify different evaluation attitudes;
a tag obtaining module configured to perform generation of at least one target attitude tag, where each target attitude tag corresponds to the same category of evaluation information.
Optionally, the apparatus further comprises:
a tag generation module configured to execute, before the target video is played, generating at least one target attitude tag according to attribute information of the target video, where the attribute information includes at least one of: the video content of the target video, the video field to which the target video belongs, and the video title of the target video.
Optionally, the tag display module is configured to perform:
simultaneously displaying a plurality of target attitude tags corresponding to the target video in a playing interface of the target video;
alternatively,
displaying, in turn, a plurality of target attitude tags corresponding to the target video in the playing interface of the target video.
Optionally, a like control is displayed on the target attitude label;
the operation judgment module is configured to execute:
judging whether trigger operation of the target user on a praise control on the target attitude label is received;
and when receiving the trigger operation of the target user on the approval control on the target attitude label, judging that the trigger operation of the target user on the target attitude label is received.
Optionally, the operation execution module is configured to execute:
and switching the state of the praise control from the state of not praise to the state of praise, and changing the display mode of the target attitude label on the playing interface.
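This response operation amounts to a small state change, sketched below; the class and field names are assumptions for illustration only:

```python
# Hypothetical state sketch of the like-control response.
class AttitudeTag:
    def __init__(self, name):
        self.name = name
        self.liked = False        # state of the like control
        self.highlighted = False  # display mode of the tag in the interface

    def on_like_clicked(self):
        """Switch from not-liked to liked and change the display mode."""
        if not self.liked:
            self.liked = True
            self.highlighted = True

tag = AttitudeTag("Sweet to me")
tag.on_like_clicked()
```

Actual rendering (e.g. color or icon change) would hang off the `highlighted` flag.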
Optionally, the operation determining module is configured to perform:
judging whether a trigger operation of the target user on a target area on the target attitude tag is received, wherein the target area is as follows: a region on the target attitude label except for the like control;
and when receiving the trigger operation of the target user on the target area, judging that the trigger operation of the target user on the target attitude tag is received.
Optionally, the operation execution module is configured to execute:
and displaying a target video list corresponding to the target attitude label, wherein each video in the target video list has the target attitude label.
Optionally, the playing interface further includes a preset attitude control;
the device further comprises:
and an operation response module configured to display at least one recommended attitude tag in response to receiving a touch operation of the target user on the attitude control.
Optionally, the at least one recommended attitude tag is an attitude tag whose heat is greater than a preset heat;
the operation response module is specifically configured to:
display the obtained at least one attitude tag in descending order of heat.
FIG. 10 is a block diagram of an electronic device shown in accordance with an example embodiment. Referring to fig. 10, the electronic device includes:
a processor 1010;
a memory 1020 for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video interaction method provided by the present disclosure.
Fig. 11 is a block diagram illustrating an apparatus 1100 according to an example embodiment. For example, the apparatus 1100 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 11, apparatus 1100 may include one or more of the following components: a processing component 1102, a memory 1104, a power component 1106, a multimedia component 1108, an audio component 1110, an input/output (I/O) interface 1112, a sensor component 1114, and a communication component 1116.
The processing component 1102 generally controls the overall operation of the device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1102 may include one or more processors 1120 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1102 may include one or more modules that facilitate interaction between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
The memory 1104 is configured to store various types of data to support operation at the device 1100. Examples of such data include instructions for any application or method operating on device 1100, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1104 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A power component 1106 provides power to the various components of the device 1100. The power components 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 1100.
The multimedia component 1108 includes a screen that provides an output interface between the device 1100 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1108 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 1100 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1110 is configured to output and/or input audio signals. For example, the audio component 1110 includes a microphone (MIC) configured to receive external audio signals when the apparatus 1100 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may further be stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio component 1110 further includes a speaker for outputting audio signals.
The I/O interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1114 includes one or more sensors for providing various aspects of state assessment for the apparatus 1100. For example, the sensor assembly 1114 may detect the open/closed state of the apparatus 1100 and the relative positioning of components, such as the display and keypad of the apparatus 1100. The sensor assembly 1114 may also detect a change in position of the apparatus 1100 or of a component of the apparatus 1100, the presence or absence of user contact with the apparatus 1100, the orientation or acceleration/deceleration of the apparatus 1100, and a change in temperature of the apparatus 1100. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1114 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate wired or wireless communication between the apparatus 1100 and other devices. The apparatus 1100 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1116 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described video interaction methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1104 comprising instructions, executable by the processor 1120 of the apparatus 1100 to perform the method described above is also provided. Alternatively, for example, the storage medium may be a non-transitory computer-readable storage medium, such as a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In yet another aspect, the present disclosure also provides a storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the video interaction method provided by the present disclosure.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer program product containing instructions, which when run on an electronic device, causes the electronic device to implement the video interaction method according to the first aspect.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video interaction method is applied to a terminal, and comprises the following steps:
playing the target video;
displaying a target attitude label corresponding to the target video in a playing interface of the target video, wherein the target attitude label is used for representing the evaluation attitude of a user on the target video;
judging whether a trigger operation of a target user using the terminal on the target attitude tag of the target video is received or not;
and when receiving the trigger operation of the target user on the target attitude tag, executing a response operation corresponding to the trigger operation.
2. The method of claim 1, wherein prior to said playing the target video, the method further comprises:
acquiring evaluation information of a user on the target video;
classifying the evaluation information according to evaluation attitudes, wherein different types of evaluation information identify different evaluation attitudes;
and generating at least one target attitude tag, wherein each target attitude tag corresponds to the same category of evaluation information.
3. The method of claim 1, wherein prior to said playing the target video, the method further comprises:
generating at least one target attitude tag according to attribute information of the target video, wherein the attribute information comprises at least one of the following: the video content of the target video, the video field to which the target video belongs, and the video title of the target video.
4. The method according to claim 1, wherein the displaying, in the playing interface of the target video, a target attitude tag corresponding to the target video comprises:
simultaneously displaying a plurality of target attitude tags corresponding to the target video in a playing interface of the target video;
alternatively,
displaying, in turn, a plurality of target attitude tags corresponding to the target video in the playing interface of the target video.
5. The method according to any one of claims 1 to 4, wherein a like control is displayed on the target attitude tag;
the judging whether the triggering operation of the target attitude tag of the target video by the target user using the terminal is received or not comprises the following steps:
judging whether trigger operation of the target user on a praise control on the target attitude label is received;
and when receiving the trigger operation of the target user on the approval control on the target attitude label, judging that the trigger operation of the target user on the target attitude label is received.
6. The method of claim 5, wherein the performing the response operation corresponding to the trigger operation comprises:
and switching the state of the praise control from the state of not praise to the state of praise, and changing the display mode of the target attitude label on the playing interface.
7. The method according to claim 5, wherein the determining whether the trigger operation of the target attitude tag of the target video by the target user using the terminal is received comprises:
judging whether a trigger operation of the target user on a target area on the target attitude tag is received, wherein the target area is as follows: a region on the target attitude label except for the like control;
and when receiving the trigger operation of the target user on the target area, judging that the trigger operation of the target user on the target attitude tag is received.
8. A video interaction apparatus, wherein the apparatus is applied to a terminal, and the apparatus comprises:
a video playing module configured to execute playing of a target video;
the tag display module is configured to display a target attitude tag corresponding to the target video in a playing interface of the target video, wherein the target attitude tag is used for representing an evaluation attitude of a user on the target video;
an operation judgment module configured to execute judgment on whether a trigger operation of a target user using the terminal on the target attitude tag of the target video is received;
and the operation execution module is configured to execute a response operation corresponding to the trigger operation when the operation judgment module receives the trigger operation of the target user on the target attitude tag.
9. A terminal, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video interaction method of any of claims 1 to 7.
10. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the video interaction method of any one of claims 1 to 7.
CN201910927844.3A 2019-09-27 2019-09-27 Video interaction method, device, terminal and storage medium Active CN110730382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927844.3A CN110730382B (en) 2019-09-27 2019-09-27 Video interaction method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110730382A true CN110730382A (en) 2020-01-24
CN110730382B CN110730382B (en) 2020-10-30

Family

ID=69219575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910927844.3A Active CN110730382B (en) 2019-09-27 2019-09-27 Video interaction method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110730382B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073677A (en) * 2010-12-01 2011-05-25 北京开心人信息技术有限公司 Comment method and system based on tag
CN202110541U (en) * 2010-12-01 2012-01-11 北京开心人信息技术有限公司 Comment system based on label
CN104166648A (en) * 2013-05-16 2014-11-26 百度在线网络技术(北京)有限公司 Recommendation data excavation method and device based on labels
CN104731873A (en) * 2015-03-05 2015-06-24 北京汇行科技有限公司 Evaluation information generation method and device
CN105144736A (en) * 2013-04-30 2015-12-09 索尼公司 Information processing device and information processing method
CN105872822A (en) * 2015-12-15 2016-08-17 乐视网信息技术(北京)股份有限公司 Video playing method and video playing system
CN106303726A (en) * 2016-08-30 2017-01-04 北京奇艺世纪科技有限公司 The adding method of a kind of video tab and device
CN107679217A (en) * 2017-10-19 2018-02-09 北京百度网讯科技有限公司 Association method for extracting content and device based on data mining

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111818371A (en) * 2020-07-17 2020-10-23 腾讯科技(深圳)有限公司 Interactive video management method and related device
CN111818371B (en) * 2020-07-17 2021-12-24 腾讯科技(深圳)有限公司 Interactive video management method and related device

Also Published As

Publication number Publication date
CN110730382B (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN105426152B (en) Bullet-screen (barrage) comment display method and device
US10509540B2 (en) Method and device for displaying a message
CN108038102B (en) Method and device for recommending expression image, terminal and storage medium
CN107463643B (en) Bullet-screen comment data display method and device, and storage medium
CN109660873B (en) Video-based interaction method, interaction device and computer-readable storage medium
CN111556352B (en) Multimedia resource sharing method and device, electronic equipment and storage medium
CN113065008A (en) Information recommendation method and device, electronic equipment and storage medium
CN111857897A (en) Information display method and device and storage medium
US20220137756A1 (en) Method for displaying interactive content, electronic device, and storage medium
CN112445970A (en) Information recommendation method and device, electronic equipment and storage medium
CN111369271A (en) Advertisement sorting method and device, electronic equipment and storage medium
CN112464031A (en) Interaction method, interaction device, electronic equipment and storage medium
CN108803892B (en) Method and device for calling third party application program in input method
CN113988021A (en) Content interaction method and device, electronic equipment and storage medium
WO2019095810A1 (en) Interface display method and device
CN112685599B (en) Video recommendation method and device
CN110730382B (en) Video interaction method, device, terminal and storage medium
CN113032627A (en) Video classification method and device, storage medium and terminal equipment
CN110650364B (en) Video attitude tag extraction method and video-based interaction method
CN106886541B (en) Data searching method and device
CN112130719A (en) Page display method, device and system, electronic equipment and storage medium
CN109960444B (en) Method, device and equipment for presenting shortcut of application program
CN112685641B (en) Information processing method and device
CN114666643A (en) Information display method and device, electronic equipment and storage medium
CN114245154A (en) Method and device for displaying virtual articles in game live broadcast room and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant