CN110401865B - Method and device for realizing video interaction function - Google Patents

Method and device for realizing video interaction function Download PDF

Info

Publication number
CN110401865B
CN110401865B (application CN201810373881.XA)
Authority
CN
China
Prior art keywords
interaction
historical
current
video
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810373881.XA
Other languages
Chinese (zh)
Other versions
CN110401865A (en)
Inventor
邵和明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201810373881.XA
Publication of CN110401865A
Application granted
Publication of CN110401865B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/4312: Generation of visual interfaces for content selection or interaction; content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316: Generation of visual interfaces involving specific graphical features for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/44204: Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • H04N21/47217: End-user interface for interacting with content, e.g. for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to a method and a device for realizing a video interaction function, wherein the method comprises the following steps: detecting a current playing progress point of a target video; acquiring a historical interaction record matched with the current playing progress point, wherein the matched historical interaction record comprises a historical interaction position; and displaying the historical interaction elements at the position corresponding to the historical interaction position on the current video picture. The scheme provided by the application can realize interaction between the user and the video.

Description

Method and device for realizing video interaction function
Technical Field
The present application relates to the field of video technologies, and in particular, to a method and an apparatus for implementing a video interaction function.
Background
As is known, video can carry images and sound at the same time, so playing a video on a terminal device can deliver information to users vividly. However, as a mere playback tool, the terminal device usually transmits information to the user in only one direction, that is, the user acts only as a receiver of the information.
However, as times change, users increasingly want to express themselves. It is therefore necessary for the terminal device to implement a video interaction function, that is, to allow the user to perform an interactive operation on the video and, when the terminal detects the interactive operation, to display a corresponding interactive effect to the user.
Disclosure of Invention
Therefore, to address the problem of interaction between a user and a video, it is necessary to provide a method and an apparatus for implementing a video interaction function.
A method for realizing a video interaction function comprises the following steps:
detecting a current playing progress point of a target video;
acquiring a historical interaction record matched with the current playing progress point, wherein the matched historical interaction record comprises a historical interaction position;
and displaying the historical interaction elements at the position corresponding to the historical interaction position on the current video picture.
A method for realizing a video interaction function comprises the following steps:
when an interactive display request carrying a video identifier is received, acquiring a historical interactive record of a target video corresponding to the video identifier, wherein the historical interactive record comprises a historical interactive position, and the historical interactive position is a position in a video picture;
and sending the historical interaction record to a terminal corresponding to the interaction display request, wherein the historical interaction record is used for the terminal to acquire the historical interaction record matched with the current playing progress point of the target video, and displaying the historical interaction elements at the position corresponding to the historical interaction position on the current video picture of the target video.
An apparatus for implementing video interaction function, comprising:
the current progress detection module is used for detecting a current playing progress point of the target video;
the matching record acquisition module is used for acquiring a historical interaction record matched with the current playing progress point, and the matched historical interaction record comprises a historical interaction position;
and the historical element display module is used for displaying the historical interaction elements at the position corresponding to the historical interaction position on the current video picture.
According to the method and the device for realizing the video interaction function, a historical interaction record matched with the current playing progress point is obtained, the historical interaction record comprises a historical interaction position, and a historical interaction element is then displayed at the position corresponding to the historical interaction position on the current video picture. First, the historical interaction position corresponds to a specific position in a specific video picture of the target video, so the interaction is highly targeted. Second, based on the historical interaction records, how previous viewers interacted with the target video can be presented at specific positions in specific video pictures of the target video, which effectively creates a lively interaction atmosphere and raises the interaction participation rate. In addition, the highlights in the target video can be effectively surfaced, improving the efficiency of information delivery.
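For illustration only, a historical interaction record as described above could be modeled roughly as follows; the field names are assumptions, since the patent does not prescribe any concrete schema.

```typescript
// A minimal sketch of the data carried by a historical interaction record.
interface HistoricalInteractionRecord {
  videoId: string;                      // identifier of the target video
  progressPointMs: number;              // historical interaction progress point, in ms into the video
  position: { x: number; y: number };   // historical interaction position in the video picture,
                                        // e.g. relative to the top-left vertex of the picture
  elementInfo?: string;                 // historical interaction element information, e.g. "star"
  userId?: string;                      // historical interactive user identifier
}

// The historical interaction record set can simply be a list of such records.
type HistoricalInteractionRecordSet = HistoricalInteractionRecord[];
```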
Drawings
FIG. 1 is a diagram of an application environment in which a method for implementing a video interaction function is implemented in one embodiment;
FIG. 2 is a flow chart illustrating a method for implementing video interaction functionality in one embodiment;
FIG. 3 is a diagram illustrating a playback interface for a target video in one embodiment;
FIGS. 4a and 4b are diagrams of a selection interface and a playback interface for a target video in one embodiment;
FIG. 5 is a schematic diagram of an interface displaying historical interaction elements, in one embodiment;
FIG. 6 is a flowchart illustrating a method for implementing video interaction functionality according to an embodiment;
FIG. 7 is a diagram of an interface for a conventional video application;
FIG. 8 is a flowchart illustrating a method for implementing video interaction functionality according to one embodiment;
FIGS. 9a-9c are schematic diagrams of interfaces during a predetermined interaction operation in one embodiment;
FIG. 10 is a flowchart illustrating a method for implementing video interaction functionality according to an embodiment;
FIG. 11 is a flowchart illustrating a method for implementing video interaction functionality according to an embodiment;
FIG. 12 is a flowchart illustrating the steps of generating viewpoint information in one embodiment;
FIG. 13 is a flowchart illustrating the steps of determining a watchpoint progress interval in one embodiment;
FIG. 14 is a flowchart illustrating the steps for sending resources to an initial interactive user in one embodiment;
FIG. 15 is a flowchart illustrating the steps of detecting a hotspot interaction target in one embodiment;
FIG. 16 is a diagram illustrating corresponding regions in multiple video frames according to an embodiment;
FIG. 17 is a flowchart illustrating a method for implementing video interaction functionality according to an embodiment;
FIG. 18 is a block diagram showing an example of an apparatus for implementing a video interactive function according to an embodiment;
FIG. 19 is a block diagram showing an apparatus for realizing the video interaction function in one embodiment;
FIG. 20 is a block diagram showing the construction of a computer device according to one embodiment;
FIG. 21 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The implementation method of the video interaction function provided by the embodiments of the present application can be applied to the application environment shown in fig. 1. The application environment may relate to a terminal 110 and a server 120, and the terminal 110 and the server 120 may be connected through a network. It will be appreciated that in other application environments, there may be more terminals involved, each of which may be connected to the server 120 via a network.
The server 120 may send the historical interaction record of the target video corresponding to the video identifier to the terminal 110 after receiving the interaction display request carrying the video identifier sent by the terminal 110. The historical interaction record comprises historical interaction positions, and the historical interaction positions are positions in the video pictures.
Accordingly, the terminal 110 may detect a current playing progress point of the target video and obtain a historical interaction record matched with the current playing progress point, where the matched historical interaction record includes a historical interaction position. When the matched historical interaction record is obtained, the terminal 110 displays the historical interaction element at the position corresponding to the historical interaction position on the current video picture.
It should be noted that the terminal 110 may be a desktop terminal or a mobile terminal, the desktop terminal may include a desktop computer, and the mobile terminal may include at least one of a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, a wearable device, and the like. The server 120 may be implemented as a stand-alone physical server or as a server cluster of multiple physical servers.
In one embodiment, as shown in fig. 2, a method for implementing a video interaction function is provided. The method is described as applied to the terminal 110 in fig. 1. The method may include the following steps S202 to S206.
S202, detecting the current playing progress point of the target video.
It can be understood that the terminal 110 may perform video playing through an application with a video playing function (hereinafter, referred to as a video application) installed therein, such as a browser client, a video playing client, and a video playing applet.
In practical application, the video application may display a playing control for a video. When the video is in the initial to-be-played state and the user clicks the playing control corresponding to the video, the terminal 110 creates a playing task corresponding to the video, and the video enters the playing state from the initial to-be-played state, that is, the terminal 110 plays the video from its starting progress point. While the play termination condition is not met, the terminal 110 may pause the playing task when a pause operation is detected and continue executing the playing task when a resume trigger operation is detected; the play termination condition may include leaving the video playing interface of the video or the video finishing playing. When the play termination condition is met, the terminal 110 destroys the playing task, after which the video returns to the initial to-be-played state. In one embodiment, the playing task may be a thread.
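The play task life cycle described above can be pictured as a small state machine; the sketch below is illustrative only, with state and method names that are assumptions, and a real player would tie these transitions to its playback engine.

```typescript
// A rough sketch of the play-task life cycle described above.
type PlayState = "toBePlayed" | "playing" | "paused";

class PlayTask {
  state: PlayState = "toBePlayed";

  start() { this.state = "playing"; }                          // user clicks the playing control
  pause() { if (this.state === "playing") this.state = "paused"; }
  resume() { if (this.state === "paused") this.state = "playing"; }
  // Destroying the task (leaving the playing interface or playback finishing)
  // returns the video to the initial to-be-played state.
  destroy() { this.state = "toBePlayed"; }
}
```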
The target video may be a video having a corresponding play task, such as a video currently being played, or a video currently in a pause play state.
In one embodiment, a video playing window is arranged on a video playing interface of a video application, and a playing control is arranged in the video playing window. And after the user clicks the playing control, the video application directly plays the corresponding video in the video playing window. In this case, if the video corresponding to the play control is in the initial state to be played before the user clicks the play control, the terminal 110 creates a play task corresponding to the video after the user clicks the play control, and the video is the target video.
Taking fig. 3 as an example, a video playing window W-3A corresponding to the video 3A and a video playing window W-3B corresponding to the video 3B are arranged on a video playing interface 3I of the video application, a playing control C-3A is arranged in the video playing window W-3A, and a playing control C-3B is arranged in the video playing window W-3B. In this case, if the video 3A is in the initial state to be played before the user clicks the playing control C-3A, and the user clicks the playing control C-3A, the terminal 110 creates a playing task corresponding to the video 3A, where the video 3A is the target video. Similarly, if the video 3B is in the initial state to be played before the user clicks the playing control C-3B, and the user clicks the playing control C-3B, the terminal 110 creates a playing task corresponding to the video 3B, where the video 3B is the target video.
In another embodiment, a play control is arranged on a video selection interface of the video application. And after the user clicks the playing control, the video application jumps to a video playing interface provided with a video playing window, and then the terminal 110 directly creates a playing task corresponding to the video, where the video is the target video.
Taking fig. 4A and 4B as an example, the video selection interface 4I-1 of the video application is provided with play controls C-4A, C-4B, C-4C and C-4D corresponding to the videos 4A, 4B, 4C and 4D, respectively. After the user clicks the play control C-4D, the application jumps to the video play interface 4I-2 provided with the video play window W-4D corresponding to the video 4D, and then the terminal 110 directly creates a play task corresponding to the video 4D, where the video 4D is the target video.
The playing progress point refers to a video time point reached by the video playing progress. Accordingly, the current playing time point refers to a video time point at which the video playing progress reaches at the time of detection by the terminal 110. For example, the total duration of the target video is 30 minutes and 10 seconds, and if the video playing progress of the target video reaches the 23 th minute and 50 seconds at the time of detection by the terminal 110, the current playing progress point of the target video is 23 minutes and 50 seconds.
S204, obtaining a history interaction record matched with the current playing progress point, wherein the matched history interaction record comprises a history interaction position.
The historical interaction record can be used to represent the operation details of the historical predetermined interaction operation to which it corresponds. Historical interaction records may correspond one-to-one with historical predetermined interaction operations, that is, one historical interaction record corresponds to only one historical predetermined interaction operation. A historical predetermined interaction operation refers to a predetermined interaction operation that occurred before the moment the historical interaction record is acquired. A predetermined interaction operation is an operation that acts on the video playing area of the target video and can be used to trigger the terminal on which it is performed to execute the service logic that implements the predetermined interaction function.
In one embodiment, a historical interaction position may be included in every historical interaction record. The historical interaction position is the target picture position corresponding to the action position, in the video playing area of the target video, of the historical predetermined interaction operation to which the record corresponds. Specifically, the target picture position is the pixel coordinate matching the action position in the video picture of the target video at the moment the terminal 110 detected the historical predetermined interaction operation. In one embodiment, the pixel coordinate in the video picture matching the action position of the historical predetermined interaction operation may be the coordinate of the action position relative to a predetermined reference position point in the video picture. In a specific example, the predetermined reference position point may be the vertex at the upper left corner of the video picture. Accordingly, the terminal 110 may perform a coordinate transformation on the action position based on the playing layout state of the target video when the action position is obtained, so as to obtain the target picture position corresponding to the action position, where the playing layout state may include a full-screen playing state or a window playing state.
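As one way to picture this coordinate transformation, the sketch below maps an action position in the play area to a target picture position relative to the top-left vertex of the video picture, assuming the play area simply scales the picture in both full-screen and window layouts; the names and the scaling model are assumptions, not the patent's prescribed implementation.

```typescript
// A minimal sketch of mapping the action position of an interactive operation to a
// target picture position in the video picture's own coordinate space.
interface Rect { left: number; top: number; width: number; height: number; }

function toPicturePosition(
  actionX: number, actionY: number,             // action position on screen, e.g. from a touch event
  playArea: Rect,                               // current play area (full-screen or window layout)
  pictureWidth: number, pictureHeight: number   // native size of the video picture
): { x: number; y: number } {
  // Express the action position relative to the top-left corner of the play area,
  // then rescale into the coordinate space of the video picture.
  const x = ((actionX - playArea.left) / playArea.width) * pictureWidth;
  const y = ((actionY - playArea.top) / playArea.height) * pictureHeight;
  return { x: Math.round(x), y: Math.round(y) };
}
```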
In addition, for the history interaction record matched with the current playing progress point, the history interaction position included in the history interaction record corresponds to the position in the video picture corresponding to the current playing progress point.
In one embodiment, the historical interaction record may be generated by the server 120 and sent to the terminal 110. Specifically, the terminal that detects a historical predetermined interaction operation sends a historical interaction request corresponding to the detected operation to the server 120, where the historical interaction request carries operation information related to the historical predetermined interaction operation. Accordingly, the server 120 obtains a historical interaction record corresponding to the historical predetermined interaction operation according to the received historical interaction request.
In terms of the interactive function implemented, in an embodiment, the predetermined interactive operation may include a like operation, which can be used to trigger the terminal on which the like operation acts to execute the service logic implementing the like function. In practical application, while a terminal plays a video, the user can perform a like operation at any position in the terminal's video playing area, thereby liking the corresponding target picture position in the current video picture. It can be appreciated that, in another embodiment, the predetermined interactive operation may also include a comment operation.
In terms of specific implementation, the predetermined interactive operation may be set according to actual conditions; it only needs to allow the terminal to determine the position the user intends to act on, and to avoid conflicting with other functional operations of the video application. Taking the like operation as an example, in one embodiment, the preset like operation may be a double-click on the interface, in which case the terminal starts executing the service logic implementing the like function as soon as it detects a double-click in the video playing area of the target video. In another embodiment, the preset like operation may be a long press, for example a press lasting 2 seconds, in which case the terminal starts executing the service logic implementing the like function as soon as it detects an operation in the video playing area of the target video whose press duration reaches 2 seconds.
It should be noted that, for a given current playing progress point, there may be zero, one, or more historical interaction records matching it. That is, the number of historical interaction records matching the current playing progress point may be any non-negative integer.
If the number of historical interaction records matching the current playing progress point acquired by the terminal 110 is 0, that is, no matching historical interaction record is acquired, this indicates that no user had performed a predetermined interaction operation on the video picture corresponding to the current playing progress point before the moment when the terminal 110 acquired the historical interaction records.
If the number of acquired historical interaction records matching the current playing progress point is a positive integer, it indicates that, before the moment when the terminal 110 acquired the historical interaction records, a corresponding number of users had performed predetermined interaction operations at the corresponding target picture positions in the video picture corresponding to the current playing progress point.
In one embodiment, the terminal 110 may obtain the historical interaction record of the target video matching the current playing progress point only when the record obtaining condition is satisfied. Correspondingly, when the record acquisition condition is not met, the historical interaction record matched with the current playing progress point of the target video is not acquired.
In one embodiment, the record obtaining condition may include that a play task of the target video has been created, that is, a historical interaction record of the target video matching the current play progress point is obtained from a time when the terminal 110 starts to play the target video from the start progress point of the target video.
In another embodiment, the record acquisition condition may include detecting an interactive display triggering operation. The interactive display triggering operation is applied by the user and can be used to indicate that the user wants to know the interaction situation of the target video. It can be understood that the historical interaction records matching the current playing progress point are obtained only when the user needs them, which flexibly adapts to user requirements.
In a specific example, an interactive display switch is provided on the video playing interface of the video application, and the user can perform the interactive display triggering operation through this switch. When the user turns on the interactive display switch, the terminal 110 detects the interactive display triggering operation and then obtains the historical interaction records of the target video that match the current playing progress point. If the user does not turn on the interactive display switch, the terminal 110 does not detect the interactive display triggering operation and accordingly does not obtain the historical interaction records of the target video that match the current playing progress point.
In addition, in an embodiment, before the step of the terminal 110 obtaining the historical interaction record matched with the current playing progress point, the method may further include: acquiring a historical interaction record set of the target video, where the historical interaction record set includes historical interaction records. In this case, the step of the terminal 110 acquiring the historical interaction record matched with the current playing progress point may include the terminal 110 searching the acquired historical interaction record set for the historical interaction records matching the current playing progress point. In a specific example, the historical interaction record set may be a historical interaction record list.
In one embodiment, the historical interaction record set may include all historical interaction records of the target video that have been stored in the server 120 before the time of acquiring the historical interaction record set. In this case, the server 120 may transmit all the historical interaction records of the target video to the terminal 110 at one time, thereby reducing the number of data interactions.
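A minimal sketch of searching the locally cached record set for records matching the current playing progress point, reusing the record shape sketched earlier; exact equality on the progress point is used here purely for illustration, and a real implementation might match with a tolerance.

```typescript
// Find the historical interaction records whose progress point matches the current one.
function findMatchingRecords(
  recordSet: HistoricalInteractionRecord[],
  currentProgressMs: number
): HistoricalInteractionRecord[] {
  return recordSet.filter((r) => r.progressPointMs === currentProgressMs);
}
```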
And S206, displaying the historical interaction elements at the positions corresponding to the historical interaction positions in the matched historical interaction records on the current video picture.
The current video picture refers to a video picture corresponding to the current playing progress point of the target video.
It can be understood that the historical interaction position in the historical interaction record matched with the current playing progress point corresponds to the position in the video frame corresponding to the current playing progress point, that is, in the current video frame, the position corresponding to the historical interaction position in the matched historical interaction record can be found without fail.
And the historical interaction elements can be used for expressing the interaction effect matched with the corresponding historical preset interaction operation. The historical interaction elements may be set based on actual demand. In one embodiment, the historical interaction elements may include animation elements, such as a flashing five-pointed star animation. In another embodiment, the historical interaction elements may include static elements, wherein the static elements may include static graphics, such as a five-pointed star static graphic.
It can be understood that if the terminal 110 obtains a historical interaction record matching the current playing progress point, a user had performed a predetermined interaction operation on the video picture corresponding to the current playing progress point before the moment when the terminal 110 played the target video to that progress point. In this case, the terminal 110 may display the historical interaction element at the position on the current video picture corresponding to the historical interaction position in the matched historical interaction record. Taking fig. 5 as an example, the video picture corresponding to the current playing progress point detected by the terminal 110 is 5I, and the terminal 110 obtains 4 historical interaction records matching the current playing progress point, where the historical interaction elements are all five-pointed star static graphics and the 4 historical interaction positions are P51, P52, P53, and P54 respectively. In this case, the terminal 110 may display one five-pointed star static graphic at each of P51, P52, P53, and P54 of the video picture 5I.
In addition, as shown in fig. 6, while the terminal 110 plays the target video through to the end, the current playing progress point of the target video may be detected in real time, and for each current playing progress point it is determined whether there is a historical interaction record matching it. When a matched historical interaction record is obtained, the historical interaction element is displayed, according to the historical interaction position and the historical interaction element information in the matched record, at the position corresponding to the historical interaction position on the video picture corresponding to the current playing progress point. It is then detected whether the target video has finished playing; if not, the flow jumps back to the step of detecting the current playing progress point of the target video; if yes, the flow ends.
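In a browser-based video application, the flow of fig. 6 might look roughly like the sketch below: keep detecting the current playing progress point while the video plays, look up matching records, and draw the corresponding historical interaction elements over the picture. The event used and the drawing callback are assumptions, and the lookup reuses the helper sketched above.

```typescript
// A rough sketch of the real-time detection and display loop.
function runInteractionDisplayLoop(
  video: HTMLVideoElement,
  recordSet: HistoricalInteractionRecord[],
  drawElement: (x: number, y: number, elementInfo?: string) => void
) {
  video.addEventListener("timeupdate", () => {
    const currentProgressMs = Math.floor(video.currentTime * 1000);
    for (const record of findMatchingRecords(recordSet, currentProgressMs)) {
      // Display the historical interaction element at the position corresponding to
      // the historical interaction position on the current video picture.
      drawElement(record.position.x, record.position.y, record.elementInfo);
    }
  });
}
```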
It should be noted that, for the problem of interaction between the user and the video, in other implementations, as shown in fig. 7, a like button G7 may also be placed near the video playing area, and after the user clicks the like button, the terminal changes the button's color to indicate that the current user has liked the video and displays the total number of people who have currently liked it (not shown). However, the interaction form of that implementation is limited.
The method for realizing the video interaction function obtains a historical interaction record matched with the current playing progress point, the historical interaction record comprising a historical interaction position, and then displays a historical interaction element at the position corresponding to the historical interaction position on the current video picture. First, the historical interaction position corresponds to a specific position in a specific video picture of the target video, so the interaction is highly targeted. Second, based on the historical interaction records, how users who previously watched the target video interacted with it can be presented at specific positions in specific video pictures of the target video, so the current viewer can learn how historical viewers interacted with the target video, which improves the efficiency with which the current viewer finds the highlights in the target video and reduces the probability of missing them. Moreover, a livelier interaction atmosphere can be effectively created, thereby raising the interaction participation rate.
In one embodiment, the historical interaction record further comprises a historical interaction progress point. Accordingly, obtaining the historical interaction record matching the current playing progress point may include: acquiring a historical interaction record whose historical interaction progress point matches the current playing progress point.
In this embodiment, any historical interaction record may include a historical interaction progress point and a historical interaction position, and the two are associated with each other. The historical interaction progress point refers to the playing progress point of the target video at the moment when the terminal on which the corresponding historical predetermined interaction operation was performed detected that operation.
For any historical interaction record, the historical interaction position in the record is a position in the video picture corresponding to the historical interaction progress point in the record. In addition, as described above, for any historical interaction record, the historical interaction position it includes is the target picture position corresponding to the action position, in the video playing area of the target video, of the corresponding historical predetermined interaction operation. Correspondingly, in this embodiment, the target picture position is the pixel coordinate matching the action position in the video picture corresponding to the historical interaction progress point included in the record.
For a historical interaction record to match the current playing progress point, its historical interaction progress point must match the current playing progress point. In a specific example, matching may mean that the historical interaction progress point is identical to the current playing progress point. For example, if the current playing progress point of the target video is 23 minutes and 50 seconds, then any historical interaction record of the target video whose historical interaction progress point is also 23 minutes and 50 seconds is a historical interaction record matching the current playing progress point.
It should be noted that the historical interaction progress point in the matched historical interaction record acquired by the terminal 110 is consistent with the current playing progress point of the target video detected by the terminal 110, which means that the video picture corresponding to that historical interaction progress point is also consistent with the video picture corresponding to the current playing progress point. Since the historical interaction position in the record is a position in the video picture corresponding to the record's historical interaction progress point, the position corresponding to that historical interaction position can therefore be found in the video picture corresponding to the current playing progress point.
In addition, for the historical interaction record set described above, in another embodiment, the set may instead include each historical interaction record that was stored in the server 120 before the moment the set is obtained and whose historical interaction progress point lies within a predetermined progress interval. One end point of the predetermined progress interval may be the current playing progress point, and the other end point may be the playing progress point separated from the current playing progress point by a predetermined time interval. In this case, the server 120 may transmit historical interaction records to the terminal 110 multiple times during the playing of the target video, sending part of the target video's historical interaction records each time. That is, all historical interaction records of the target video are split according to the distribution of their historical interaction progress points and then sent to the terminal 110 in batches, which relieves the data transmission pressure.
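A sketch of this batched variant, in which the terminal asks the server only for records whose historical interaction progress points fall within a predetermined progress interval ahead of the current playing progress point; the endpoint URL and query parameters are hypothetical.

```typescript
// Fetch one batch of historical interaction records for the given progress interval.
async function fetchRecordBatch(
  videoId: string,
  currentProgressMs: number,
  intervalMs: number            // length of the predetermined progress interval
): Promise<HistoricalInteractionRecord[]> {
  const from = currentProgressMs;
  const to = currentProgressMs + intervalMs;
  const resp = await fetch(
    `/interactions?videoId=${encodeURIComponent(videoId)}&from=${from}&to=${to}`
  );
  return resp.json();
}
```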
In one embodiment, the historical interaction record may further include historical interaction element information, and the step of displaying the historical interaction element at a position on the current video picture corresponding to the historical interaction position in the matched historical interaction record may include: and displaying the historical interaction elements corresponding to the historical interaction element information in the matched historical interaction record at the position corresponding to the historical interaction position in the matched historical interaction record on the current video picture.
Referring to the example of fig. 5 above, the 4 historical interaction records matching the current playing progress point obtained by the terminal 110 may also include historical interaction element information. If all 4 pieces of historical interaction element information represent a five-pointed star static graphic, that is, the historical interaction elements corresponding to them are all five-pointed star static graphics, the terminal 110 may display one five-pointed star static graphic at each of P51, P52, P53, and P54 of the video picture 5I.
In a specific example, the terminal 110 requests the server 120 to send the historical interaction records of the target video at a time T1. Suppose another terminal connected to the server 120 detected a predetermined interactive operation in the video playing area of the target video at an earlier time T2. At the moment the predetermined interactive operation was detected, the playing progress point of the target video was S1, the action position of the predetermined interactive operation in the video playing area was P1, and the coordinate of the pixel point matching the action position P1 in the video picture corresponding to the playing progress point S1 was P2, that is, the target picture position corresponding to the action position P1 is P2. In addition, the interactive element information pre-stored in the server 120 and matched with the predetermined interactive operation is a five-pointed star static graphic. In this case, the predetermined interactive operation is a historical interactive operation, and the interaction record obtained by the server 120 for it is a historical interaction record. The historical interaction progress point in that record is the playing progress point S1, the historical interaction position is the target picture position P2, and the historical interaction element information is the identification information corresponding to the five-pointed star static graphic.
In one embodiment, the historical interaction record further comprises a historical interaction user identification. In this case, the history interaction element matches the history interaction user identification.
The historical interactive user identifier is the unique identity of the historical interactive user who performed the corresponding historical predetermined interaction operation. For example, if the historical interactive user Uh1 performed a historical predetermined interaction operation on the target video, in one embodiment the historical interaction record corresponding to that operation may include, in addition to the historical interaction progress point, the historical interaction position, and the historical interaction element information, the historical interactive user identifier of the historical interactive user Uh1, with these items associated with one another. In a specific example, the historical interactive user identifier may be the historical interactive user's ID, that is, the unique identity of the historical interactive user is the user ID.
It should be noted that the historical interactive user identifier can be used to determine the historical interaction element that the terminal 110 needs to display. In one embodiment, every historical interactive user identifier may match the same historical interaction element, that is, the historical predetermined interaction operations performed by every historical interactive user correspond to the same displayed historical interaction element; for example, the like operations performed by all historical interactive users correspond to displayed elements with the same color, size, shape, and state. In another embodiment, different historical interactive user identifiers may match different historical interaction elements, that is, the historical interaction elements displayed for historical predetermined interaction operations performed by different users may differ. For example, the like operation performed by the historical interactive user Uh1 corresponds to displaying a five-pointed star static graphic, while the like operation performed by the historical interactive user Uh2 corresponds to displaying a heart-shaped static graphic. It can be understood that this improves the diversity of the expression forms of the interaction effect.
It should be noted that the history interactive elements may be set by the corresponding user according to actual requirements. For example, candidate interactive elements are displayed on the terminal interface, the user selects among the candidate interactive elements, and the selected candidate interactive element is the interactive element matched with the user identifier of the user.
In one embodiment, when history interactive element information is included in the history interactive record, the history interactive user identifier may be used to determine a history interactive element that the terminal 110 needs to display, which may mean that the history interactive user identifier may be used to determine history interactive element information associated therewith, that is, the history interactive element information matches the history interactive user identifier associated therewith. In one embodiment, each historical interactive user identification may correspond to the same historical interactive element information. In another embodiment, different historical interactive user identifications may correspond to different historical interactive element information.
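A small sketch of resolving the element information to display from a historical interactive user identifier, covering both cases above (a shared element for every identifier, or per-user elements the user has chosen); the map and default value are illustrative only.

```typescript
// Resolve which interaction element a given user's operations should display.
function resolveElementInfo(
  userId: string,
  perUserElements: Map<string, string>,   // e.g. elements chosen from the candidate interactive elements
  defaultElement = "star"                 // shared element used when no per-user choice exists
): string {
  return perUserElements.get(userId) ?? defaultElement;
}
```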
In one embodiment, as shown in fig. 8, the method for implementing the video interaction function may include the following steps S802 to S808. S802, detecting a preset interaction operation in a video playing area acting on the target video. S804, determining a current interaction position corresponding to the preset interaction operation, wherein the current interaction position is a position in a current video picture of the target video. S806, generating a current interaction request carrying the current interaction position. And S808, sending the current interaction request to a server, wherein the current interaction request is used for indicating the server to obtain a historical interaction record corresponding to the preset interaction operation based on the current interaction position.
It should be noted that, when the terminal 110 plays the target video, not only the interaction condition between the historical interaction user and the target video can be presented based on the current playing progress point, but also the predetermined interaction operation in the video playing area acting on the target video, that is, the predetermined interaction operation performed by the current interaction user watching the target video through the terminal 110, can be detected in real time. It can be understood that the current interactive user can apply the predetermined interactive operation to any position in the playing area of the target video according to the actual requirement.
In this embodiment, when the terminal 110 detects the predetermined interactive operation, the current interactive position corresponding to the detected predetermined interactive operation may be determined. Further, the terminal 110 may generate a current interaction request based on the current interaction location, and then transmit the current interaction request to the server 120. The current interaction request may cause the server 120 to obtain a historical interaction record corresponding to the predetermined interaction operation based on the current interaction location. Specifically, the current interaction request may cause the server 120 to store the current interaction location, so as to obtain a historical interaction record corresponding to the predetermined interaction operation.
In one embodiment, the method for implementing the video interaction function further includes: and determining a current interaction progress point corresponding to the detected preset interaction operation, wherein the current interaction progress point is a playing progress point of the target video when the preset interaction operation is detected. Accordingly, the current interaction request also carries the current interaction progress point, and the current interaction request is used for indicating the server to obtain a historical interaction record corresponding to the preset interaction operation based on the current interaction position and the current interaction progress point.
In this embodiment, when the terminal 110 detects the predetermined interactive operation, it may determine a current interactive position and a current interactive progress point corresponding to the detected predetermined interactive operation. Further, the terminal 110 may generate a current interaction request based on the current interaction position and the current interaction progress point, and further transmit the current interaction request to the server 120. The current interaction request may cause the server 120 to obtain a historical interaction record corresponding to the predetermined interaction operation based on the current interaction location and the current interaction progress point. Specifically, the current interaction request may cause the server 120 to store the current interaction position and the current interaction progress point in an associated manner, so as to obtain a historical interaction record corresponding to the predetermined interaction operation.
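Steps S802 to S808 on the terminal side might be sketched as follows: once the current interaction position and current interaction progress point are determined, build the current interaction request and send it to the server. The endpoint and payload field names are assumptions for illustration.

```typescript
// Build and send the current interaction request (S806, S808).
async function sendCurrentInteractionRequest(
  videoId: string,
  position: { x: number; y: number },   // current interaction position in the video picture (S804)
  progressMs: number,                   // current interaction progress point
  userId?: string                       // current interactive user identifier, if carried
): Promise<void> {
  await fetch("/interactions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ videoId, position, progressPointMs: progressMs, userId }),
  });
}
```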
In one embodiment, after determining the current interaction location corresponding to the predetermined interaction operation, the method further includes: displaying a current interactive element at a current interactive position in a current video picture; wherein the current interactive element comprises a static element and/or an animation element.
In a specific example, the terminal 110 may obtain current interaction element information, and display a current interaction element corresponding to the current interaction element information at a current interaction position on a current video picture, that is, after a user performs a predetermined interaction operation, an interaction performance effect corresponding to the predetermined interaction operation is displayed at the current interaction position in the current video picture.
As shown in figs. 9a, 9b and 9c, when the terminal 110 has not detected a predetermined interactive operation acting in the video playing area Area9, the display interface of the terminal 110 is as shown in fig. 9a. Then, as shown in fig. 9b, the current interactive user performs a predetermined interactive operation with a finger in the video playing area of the terminal 110. After detecting the predetermined interactive operation, the terminal 110 determines that the current interaction position corresponding to the operation is P91. The terminal 110 also acquires the current interactive element information; assuming the acquired information corresponds to a heart-shaped static graphic, the terminal 110 then displays a heart-shaped static graphic at the current interaction position P91.
In addition, in an embodiment, the current interaction request sent by the terminal 110 may enable the server 120 to obtain the current interaction element information, and generate a current interaction record according to the current interaction position, the current interaction progress point, and the current interaction element information. Further, the server 120 may store the current interaction record, so as to obtain a historical interaction record of the target video corresponding to the predetermined interaction operation. Moreover, the server 120 may further send the current interaction record to the terminal 110, in which case, the step of displaying the current interaction element at the current interaction position in the current video frame may include: the terminal 110 displays a current interactive element corresponding to the current interactive element information at a current interactive position in the current video picture according to the received current interactive record.
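On the server side, handling the current interaction request could be sketched as below: look up the element information matched with the user identifier, assemble the current interaction record, store it so that it becomes a historical interaction record for later viewers, and return it so the terminal can display the current interactive element. The in-memory store and the element lookup are placeholders, not the patent's prescribed implementation.

```typescript
// Assemble, store, and return the current interaction record.
function handleCurrentInteractionRequest(
  req: { videoId: string; position: { x: number; y: number }; progressPointMs: number; userId?: string },
  store: HistoricalInteractionRecord[],
  elementForUser: (userId?: string) => string
): HistoricalInteractionRecord {
  const record: HistoricalInteractionRecord = {
    videoId: req.videoId,
    progressPointMs: req.progressPointMs,
    position: req.position,
    elementInfo: elementForUser(req.userId),
    userId: req.userId,
  };
  store.push(record);   // persisted record; later requests for this video will include it
  return record;
}
```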
In addition, as shown in fig. 10, while the terminal 110 plays the target video through to the end, predetermined interaction operations acting on the video playing area of the target video may be detected in real time, and when a predetermined interaction operation is detected, the current interaction position and current interaction progress point corresponding to it are determined. Then, on one hand, the current interactive element corresponding to the current interactive element information is displayed at the current interaction position on the current video picture; on the other hand, the current interaction position and current interaction progress point are sent to the server. It is then detected whether the target video has finished playing; if not, the flow jumps back to the step of detecting a predetermined interaction operation acting on the video playing area of the target video; if yes, the flow ends.
In one embodiment, the terminal 110 may further detect a download trigger operation for the target video, and request all video picture data of the target video and all historical interaction records from the server 120 when the download trigger operation is detected.
It can be understood that the video application of the terminal 110 may generally support video downloading. In this embodiment, in the process of downloading the target video, all of its historical interaction records are obtained together, so that when playing the target video offline, the terminal 110 can still execute the method for implementing the video interaction function in the foregoing embodiments. That is, during offline viewing, the terminal 110 can also present, according to the historical interaction records, how users who have watched the target video interacted with it at specific positions in specific video pictures of the target video.
In one embodiment, the step of displaying the current interactive element at the current interactive position in the current video picture further comprises the following steps: acquiring a current interactive user identifier; and determining the current interactive element matched with the current interactive user identifier, wherein different current interactive user identifiers match different current interactive elements.
It should be noted that determining the current interactive element matching the current interactive user identifier may refer to determining the information of the current interactive element matching that identifier. In this case, the current interaction request generated by the terminal 110 may further carry the current interactive user identifier, and in an embodiment the current interaction request may be used to instruct the server 120 to obtain a historical interaction record corresponding to the predetermined interaction operation based on the current interaction position, the current interaction progress point, the current interactive user identifier, and the current interactive element information matched with that identifier. Specifically, the current interaction request may cause the server 120 to store the current interaction position, the current interaction progress point, the current interactive user identifier, and the matched current interactive element information in an associated manner, thereby obtaining a historical interaction record corresponding to the predetermined interaction operation.
It should be noted that the current interactive user identifier and the historical interactive user identifier described above may have the same attributes; they differ only in when the corresponding predetermined interaction operation occurs. In addition, the matching relationship between the current interactive element information and the current interactive user identifier may be the same as the matching relationship, described above, between the historical interactive element information and the historical interactive user identifier.
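Purely as an illustration of "different user identifiers match different interactive elements", the sketch below derives an element deterministically from the user identifier; the element names and the hashing rule are assumptions for the example, not part of this embodiment.

```python
import hashlib

# Hypothetical pool of interactive elements; real deployments would define their own.
ELEMENTS = ["love_peach", "thumbs_up", "star", "firework"]

def element_for_user(user_id: str) -> str:
    # Deterministic mapping: the same user identifier always yields the same element,
    # while different identifiers generally yield different elements.
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return ELEMENTS[int(digest, 16) % len(ELEMENTS)]

print(element_for_user("user-a"), element_for_user("user-b"))
```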
In one embodiment, as shown in fig. 11, a method for implementing a video interaction function is provided. The method is described as applied to the server 120 in fig. 1. The method may include steps S1102 and S1104 as follows. S1102, when an interaction display request carrying a video identifier is received, obtaining a historical interaction record of a target video corresponding to the video identifier, wherein the historical interaction record comprises a historical interaction position, and the historical interaction position is a position in a video picture. And S1104, sending the historical interaction record to a terminal corresponding to the interaction display request, wherein the historical interaction record is used for the terminal to acquire the historical interaction record matched with the current playing progress point of the target video, and displaying the historical interaction element at a position corresponding to the historical interaction position on the current video picture of the target video.
The video identifier is a unique identifier of the target video and can be used by the server 120 to determine the target video. The interactive display request is a command that triggers the server 120 to send the historical interaction records of the target video to the terminal 110, and it indicates that the terminal 110 needs to display the interaction between historical interactive users and the target video.
In this embodiment, when receiving an interactive display request with a video identifier sent by the terminal 110, the server 120 may obtain a historical interactive record of a target video corresponding to the video identifier, and send the obtained historical interactive record to the terminal 110. The historical interaction record sent to the terminal 110 may be used for the terminal 110 to obtain a historical interaction record matched with the current playing progress point of the target video, and when the matched historical interaction record is obtained, the historical interaction elements are displayed at a position on the current video picture (the video picture corresponding to the current playing progress point) of the target video, where the position corresponds to the historical interaction position in the matched historical interaction record.
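A minimal sketch of steps S1102 and S1104 follows, assuming an in-memory record store and invented field names; it only illustrates the lookup-and-return pattern, not the embodiment's actual storage or transport.

```python
# Illustrative in-memory store: video identifier -> historical interaction records.
HISTORY = {
    "video-1": [
        {"progress_point": 65.0, "position": (320, 180), "element_info": "love_peach"},
        {"progress_point": 130.0, "position": (100, 240), "element_info": "star"},
    ],
}

def handle_interaction_display_request(video_id):
    # S1102: obtain the historical interaction records of the target video for this identifier.
    records = HISTORY.get(video_id, [])
    # S1104: send them to the terminal that issued the request (returned here for simplicity).
    return records

print(handle_interaction_display_request("video-1"))
```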
According to this method for implementing the video interaction function, the historical interaction record sent by the server to the terminal includes the historical interaction position, so the terminal can display the historical interaction element at the position on the current video picture corresponding to that historical interaction position. First, because the historical interaction position corresponds to a specific position in a specific video picture of the target video, the interaction is highly targeted. Second, according to the historical interaction records, the interactions between the target video and each user who has watched it in the past can be presented at specific positions in specific video pictures, so that the current viewer can learn how historical viewers interacted with the target video; this improves the efficiency with which the current viewer finds the highlights of the target video and reduces the probability of missing them. Moreover, a livelier interactive atmosphere can be created, which improves the interaction participation rate.
In one embodiment, the historical interaction record further comprises a historical interaction progress point. Accordingly, the historical interaction position is a position within the video picture corresponding to the historical interaction progress point in the same historical interaction record.
In one embodiment, the historical interaction record further includes historical interaction element information. Accordingly, the historical interaction record enables the terminal to obtain the historical interaction record matched with the current playing progress point of the target video and to display the historical interaction element corresponding to the historical interaction element information at the position on the current video picture of the target video corresponding to the historical interaction position.
In one embodiment, the historical interaction record further comprises a historical interaction user identification. Accordingly, the historical interaction element matches the historical interaction user identification. And, different historical interactive user identities may match different historical interactive elements. In one specific example, the history interactive element is matched with the history interactive user identifier, which may mean that the history interactive element information is matched with the history interactive user identifier.
It should be noted that the specific limitations on the historical interaction record, the historical interaction progress point, the historical interaction position, the historical interaction element information, the historical interaction element, the historical interaction user identifier, and the matching relationship between the historical interaction element information and the historical interaction user identifier in this embodiment may be the same as those given in the foregoing embodiments.
In one embodiment, the method for implementing the video interaction function may include the following steps: receiving a current interaction request carrying a current interaction position and a current interaction progress point; and storing the current interaction position and the current interaction progress point to obtain the historical interaction record of the target video.
It should be noted that this embodiment corresponds to the embodiment shown in fig. 8. While the terminal 110 plays the target video, the server 120 may, upon receiving the interaction display request carrying the video identifier sent by the terminal 110, send the historical interaction records to the terminal 110, so that the terminal 110 can present the interaction between historical interactive users and the target video based on the current playing progress point. In addition, when the terminal 110 detects a predetermined interaction operation acting on the video playing area of the target video and sends a current interaction request carrying the current interaction position and the current interaction progress point, the server 120 may receive that request and store the current interaction position and the current interaction progress point in an associated manner, thereby obtaining a historical interaction record of the target video. That is, the server 120 can keep adding historical interaction records of the target video according to the predetermined interaction operations detected by the terminal 110 in real time. In one embodiment, the server 120 may store the historical interaction records of the target video in the form of a data dictionary.
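The exact layout of that data dictionary is not specified here; one plausible arrangement, shown below purely as an assumption, keys the records by video identifier and then by interaction progress point so that the lookup for a given playing progress point is a direct dictionary access.

```python
from collections import defaultdict

# Assumed layout: video_id -> interaction progress point -> list of records.
history = defaultdict(lambda: defaultdict(list))

def add_record(video_id, progress_point, position):
    # Associated storage of the current interaction position and progress point.
    history[video_id][progress_point].append({"position": position})

add_record("video-1", 65.0, (320, 180))
add_record("video-1", 65.0, (400, 200))
print(history["video-1"][65.0])  # every interaction stored for that progress point
```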
It should be noted that, in another embodiment, after receiving a current interaction request carrying a current interaction position and a current interaction progress point sent by the terminal 110, the server 120 may further obtain predetermined current interaction element information, generate a current interaction record based on the current interaction position, the current interaction progress point, and the current interaction element information, and further perform associated storage on the current interaction record, thereby obtaining a historical interaction record of the target video.
In one embodiment, the step of performing associated storage on the current interaction position and the current interaction progress point to obtain a historical interaction record of the target video may include: and generating a current interaction record based on the current interaction position and the current interaction progress point, and storing the current interaction record to obtain a historical interaction record of the target video. In addition, in an embodiment, after the server 120 generates the current interaction record, the current interaction record may be stored, so as to obtain a newly added historical interaction record of the target video, and the current interaction record may also be sent to the terminal 110. The current interaction record sent to the terminal 110 may enable the terminal 110 to display the current interaction element at the current interaction position in the video picture corresponding to the current interaction progress point according to the current interaction record.
In one embodiment, the current interaction request further carries a current interactive user identifier. Accordingly, storing the current interaction position and the current interaction progress point to obtain the historical interaction record of the target video includes: storing the current interaction position, the current interaction progress point, and the current interactive user identifier to obtain the historical interaction record of the target video, wherein the current interactive element matches the current interactive user identifier and different current interactive user identifiers may match different current interactive elements.
In a specific example, the matching of the current interactive element with the current interactive user identifier may mean that the current interactive element information matches the current interactive user identifier.
It should be noted that, the specific limitations of the current interactive user identifier and the matching relationship between the current interactive element information and the current interactive user identifier in this embodiment may be the same as the limitations in the foregoing embodiments.
In one embodiment, as shown in fig. 12, the method for implementing the video interaction function may further include the following steps S1202 and S1204. And S1202, determining a viewpoint progress interval of the target video based on the historical interaction progress points in the historical interaction records of the target video, wherein the viewpoint progress interval is a candidate progress interval of which the first total interaction times is greater than a first preset time threshold, and the first total interaction times of the candidate progress interval are the total number of the historical interaction records corresponding to the historical interaction progress points covered by the candidate progress interval. And S1204, generating viewpoint information based on the viewpoint progress interval, and sending the viewpoint information to the terminal.
It should be noted that, for a video with a long duration, the user may not have enough time to watch the entire content. In that case the user often drags the progress bar to skip through parts of the video, or simply gives up watching it, and may therefore miss the content in the video that is worth watching. To address this, staff can clip the noteworthy content out of the video based on subjective judgment to obtain viewpoint segments, and then provide those segments for users to watch, helping them grasp the viewpoints of the video in a short time. However, the staff's subjective judgment may be one-sided, which makes it difficult to obtain viewpoint segments accurately.
Based on this, in this embodiment the server 120 may determine, based on the historical interaction progress points in the historical interaction records of the target video, the candidate progress intervals whose covered historical interaction progress points correspond to a total number of historical interaction records greater than the first predetermined time threshold, and take those intervals as the viewpoint progress intervals of the target video; the server 120 then generates viewpoint information directly from the viewpoint progress intervals. That is, the server 120 determines the viewpoint segments of the target video from its historical interaction records. The viewpoint segments may include highlight segments or talking-point segments. It can be understood that each historical interaction record of the target video is generated from a predetermined interaction operation performed by some user in the past, so combining all the historical interaction records accurately reflects what users pay attention to in the target video, and the viewpoint segments can therefore be determined accurately.
The candidate progress interval comprises two endpoints which are two playing progress points of the video respectively. For example, the total duration of a video is 30 minutes and 10 seconds, 05 minutes and 10 seconds to 08 minutes and 20 seconds can be used as one of the candidate progress intervals of the video, and 23 minutes and 10 seconds to 27 minutes and 20 seconds can also be used as one of the candidate progress intervals of the video.
The viewpoint progress interval is a candidate progress interval whose first total interaction times are greater than the first predetermined time threshold, where the first total interaction times of a candidate progress interval are the total number of historical interaction records corresponding to the historical interaction progress points covered by that interval. It can be understood that the first predetermined time threshold may be set based on actual requirements; the larger it is, the more popular a candidate interval must be to qualify as a viewpoint progress interval.
It should be noted that a candidate progress interval covering a historical interaction progress point means that the historical interaction progress point falls within that interval. For example, suppose the candidate progress interval I is 05 minutes 10 seconds to 08 minutes 20 seconds, the historical interaction progress point S1 is 06 minutes 10 seconds, and the historical interaction progress point S2 is 15 minutes 25 seconds. Since 06 minutes 10 seconds falls within the candidate progress interval I, that interval covers the historical interaction progress point S1; since 15 minutes 25 seconds does not fall within it, the interval does not cover the historical interaction progress point S2.
In addition, one historical interaction progress point may correspond to more than one historical interaction record. For example, suppose that when the historical interactive users Uh1, Uh2 and Uh3 each performed a predetermined interaction operation on the video playing area of the target video, the playing progress point of the target video was S1 in every case, and their historical predetermined interaction operations correspond to the historical interaction records R1, R2 and R3, respectively; the historical interaction progress point in each of R1, R2 and R3 is then S1. In this case, if a candidate progress interval covers only the historical interaction progress point S1, the first total interaction times of that interval are 3, because the total number of historical interaction records corresponding to S1 is 3 (namely R1, R2 and R3).
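Following the example above, here is a small sketch of how the first total interaction times of a candidate progress interval could be counted; the record layout and the numeric values are assumptions made for illustration.

```python
# Historical interaction records; progress points are in seconds (06:10 = 370 s, 15:25 = 925 s).
records = [
    {"user": "Uh1", "progress_point": 370.0},
    {"user": "Uh2", "progress_point": 370.0},
    {"user": "Uh3", "progress_point": 370.0},
    {"user": "Uh4", "progress_point": 925.0},  # outside the candidate interval below
]

def first_total_interaction_times(interval, records):
    start, end = interval
    # Count every record whose historical interaction progress point falls in the interval.
    return sum(1 for r in records if start <= r["progress_point"] <= end)

candidate_interval = (310.0, 500.0)  # 05:10 to 08:20
print(first_total_interaction_times(candidate_interval, records))  # -> 3
```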
In one embodiment, the viewpoint information may refer to each video frame information corresponding to the viewpoint progress interval. In this case, after receiving the video image information sent by the server 120, the terminal 110 may directly obtain the viewpoint segment of the target video according to the video image information, so as to play the viewpoint segment or allow the user to download the viewpoint segment.
It can be understood that, in an actual application, a viewpoint segment playing button may be set on a video playing interface of a video application of the terminal 110, and after the user clicks the viewpoint segment playing button, the terminal 110 requests the server 120 for viewpoint information of the target video, so that the terminal 110 only plays a video segment corresponding to the viewpoint information.
In another embodiment, the viewpoint information may also refer to prompt information corresponding to the viewpoint progress interval. In this case, after receiving the prompt message sent by the server 120, the terminal 110 may display the prompt message, for example, display a certain playing progress point to a certain playing progress point as a viewpoint in a pop-up window manner, so that the user can know the distribution of the viewpoint content of the target video.
In one embodiment, as shown in fig. 13, step S1202 may include the following steps S1302 to S1306. S1302, determining the candidate progress intervals corresponding to the historical interaction progress points, where the two endpoints of each candidate progress interval are each separated from the corresponding historical interaction progress point by a first predetermined time interval. S1304, determining the first total interaction times of each candidate progress interval, where the first total interaction times of a candidate progress interval are the total number of historical interaction records corresponding to the historical interaction progress points covered by that interval. S1306, determining the candidate progress intervals whose first total interaction times are greater than the first predetermined time threshold as the viewpoint progress intervals of the target video.
In this embodiment, candidate progress intervals in one-to-one correspondence with the historical interaction progress points of the target video may be determined, and the candidate progress intervals whose first total interaction times are greater than the first predetermined time threshold are then determined as the viewpoint progress intervals of the target video. Among all the playing progress points of the target video, generally only the historical interaction progress points have corresponding historical interaction records, while ordinary playing progress points have none; determining the candidate progress intervals based on the historical interaction progress points therefore makes it possible to determine the viewpoint progress intervals more efficiently and accurately.
And the two endpoints of each candidate progress interval are respectively separated from the corresponding historical interaction progress point by a first preset time interval. For example, the target video has two historical interaction progress points, respectively, historical interaction progress point S1(06 minutes 10 seconds) and historical interaction progress point S2(15 minutes 25 seconds), and the first predetermined time interval is 1 minute. In this case, the candidate progress interval corresponding to the historical interaction progress point S1 is 05 minutes 10 seconds to 07 minutes 10 seconds, and the candidate progress interval corresponding to the historical interaction progress point S2 is 14 minutes 25 seconds to 16 minutes 25 seconds.
In another embodiment, after the first total number of interactions of each candidate progress interval is determined, a preset number of candidate progress intervals with the largest first total number of interactions may also be determined as the viewpoint progress interval.
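The sketch below puts steps S1302 to S1306 together under stated assumptions: the half-width of the candidate intervals (the first predetermined time interval) and the threshold are illustrative values, and overlapping intervals are not merged or deduplicated.

```python
def viewpoint_progress_intervals(progress_points, half_width, threshold):
    # S1302: one candidate interval per historical interaction progress point.
    candidates = [(p - half_width, p + half_width) for p in sorted(set(progress_points))]
    intervals = []
    for start, end in candidates:
        # S1304: first total interaction times = records whose progress point is covered.
        total = sum(1 for p in progress_points if start <= p <= end)
        # S1306: keep the intervals whose count exceeds the first predetermined time threshold.
        if total > threshold:
            intervals.append((start, end, total))
    return intervals

# Three interactions near 06:10 and one isolated interaction at 15:25 (in seconds).
points = [370.0, 371.0, 372.0, 925.0]
print(viewpoint_progress_intervals(points, half_width=60.0, threshold=2))
```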
In one embodiment, as shown in fig. 14, the historical interaction record further includes a historical interaction user identifier, and the historical interaction user identifier, the historical interaction progress point and the historical interaction position are associated with each other. Accordingly, the method for implementing the video interaction function may further include the following steps S1402 to S1406. S1402, detecting a hot spot interaction target, where the hot spot interaction target is a candidate interaction target whose corresponding second total interaction times exceed a second predetermined time threshold, and the second total interaction times are the total number of history interaction records whose history interaction positions are included in the range of the candidate interaction target. And S1404, determining an initial historical interaction record of the detected hotspot interaction target, wherein the initial historical interaction record is a historical interaction record corresponding to a first preset interaction operation acting on the hotspot interaction target. And S1406, sending the preset resource to the user account corresponding to the history interactive user identification in the initial history interactive record.
The hotspot interaction target refers to a candidate interaction target of which the corresponding second total interaction times exceed a second preset time threshold. And the second total interaction times are the number of historical interaction records of which the historical interaction positions are in the range of the candidate interaction target. For example, a range of a candidate interaction target covers 3 historical interaction positions, and only covers the 3 historical interaction positions, and a total number of historical interaction records corresponding to the 3 historical interaction positions is a second total interaction number corresponding to the candidate interaction target.
In addition, one historical interaction position may correspond to more than one historical interaction record. For example, suppose the historical interactive users Uh1, Uh2 and Uh3 each performed a predetermined interaction operation on the video playing area of the target video, the corresponding historical interaction records are R1, R2 and R3, and the historical interaction position in each of R1, R2 and R3 is P1. In this case, if the range of a candidate interaction target covers only the historical interaction position P1, the second total interaction times corresponding to that candidate interaction target are 3, because the total number of historical interaction records corresponding to P1 is 3 (namely R1, R2 and R3).
In one embodiment, the candidate interactive target may be an interactive object preset based on actual requirements, for example, a certain type of bag appearing in the target video is preset as the interactive object. It should be noted that the preset interactive object may appear in the video frames corresponding to the multiple playing progress points, and therefore, the range covered by the interactive object in each video frame can be used as the range of the candidate interactive target.
The initial historical interaction record refers to a historical interaction record corresponding to a first preset interaction operation acting on the hotspot interaction target, namely the historical interaction record corresponding to the preset interaction operation acting on the hotspot interaction target firstly. For example, the historical interaction records whose historical interaction locations are included in (i.e., fall within) the scope of the hotspot interaction target are 1000 historical interaction records R1, R2, … and R1000, respectively, and if the predetermined interaction operation corresponding to the historical interaction record R5 is firstly applied to the hotspot interaction target, the historical interaction record R5 is an initial historical interaction record.
Correspondingly, the historical interaction user identifier in the initial historical interaction record corresponds to the first user performing the predetermined interaction operation on the hotspot interaction target. In this embodiment, a reward may be issued to the first user performing a predetermined interaction operation on the hotspot interaction target, that is, sending a predetermined resource to the user account. The predetermined resource may be set based on actual requirements, for example, the "special eye" medal.
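The following sketch shows one way the reward step could look: among the historical interaction records that landed on the hotspot interaction target, take the earliest one and credit a predetermined resource to the corresponding user account. The timestamp field and the medal name are illustrative assumptions.

```python
def reward_first_interactor(records_on_target, accounts):
    """Send a predetermined resource to the user behind the initial historical interaction record."""
    if not records_on_target:
        return None
    # The initial historical interaction record corresponds to the earliest predetermined
    # interaction operation that acted on the hotspot interaction target.
    initial = min(records_on_target, key=lambda r: r["time"])
    accounts.setdefault(initial["user"], []).append("special-eye medal")  # predetermined resource
    return initial["user"]

records_on_target = [
    {"user": "Uh5", "time": 1200},
    {"user": "Uh2", "time": 1100},  # earliest interaction on the hotspot target
]
accounts = {}
print(reward_first_interactor(records_on_target, accounts), accounts)
```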
In one embodiment, as shown in FIG. 15, the hotspot interaction targets are hotspot interaction areas. Accordingly, the method for determining the hotspot interaction target may include the following steps S1502 and S1504. S1502, approximate picture progress intervals respectively corresponding to the historical interaction progress points in the historical interaction records are determined, and two end points of the approximate picture progress intervals are respectively separated from the corresponding historical interaction progress points by a second preset time interval. And S1504, for each approximate picture progress interval, determining a position area with a third total interaction time exceeding a third preset time threshold value based on the historical interaction positions in the historical interaction records covered by the approximate picture progress interval, wherein the third total interaction time is the total number of the historical interaction records of which the historical interaction positions are contained in the position area. And the hotspot interaction target is a position area of which the third total interaction times exceed a third preset time threshold.
It should be noted that if a number of users each perform a predetermined interaction operation on a certain object in a certain video picture of the target video, that object occupies not a single position point but an area range within the picture. Although the users are all aiming at the same object, the exact positions at which their predetermined interaction operations land may differ, and these landing positions are usually distributed within the area range corresponding to the object; that is, the historical interaction positions corresponding to historical predetermined interaction operations on the same object may differ from one another while still being distributed within that area range. Thus, in one embodiment, a hotspot interaction target may be a hotspot interaction area, such as a block of pixels. Moreover, the hotspot interaction area can essentially be an area range in a single video picture, and that area range covers a plurality of position points.
It will be appreciated that a video is made up of successive video pictures and that adjacent frames are generally very similar, i.e. most of the image content in adjacent video pictures is usually the same. Furthermore, the mutually corresponding area ranges in highly similar frames are very likely to represent the same object. Therefore, in this embodiment, the hotspot interaction target may be a hotspot interaction area, and the hotspot interaction area may essentially be a location area that covers the mutually corresponding area ranges in several highly similar frames.
The corresponding area ranges in each frame of video picture refer to the area ranges with the same size and relative position in the corresponding video picture in each frame of video picture. Taking fig. 16 as an example, the video screen F1, the video screen F2, and the video screen F3 are continuous three-frame video screens having a high degree of approximation, the size of the region range D1 in the video screen F1, the size of the region range D2 in the video screen F2, and the size of the region range D3 in the video screen F3 are all the same, and the relative position of the region range D1 in the video screen F1, the relative position of the region range D2 in the video screen F2, and the relative position of the region range D3 in the video screen F3 are also all the same. In this case, the area range D1, the area range D2, and the area range D3 are 3 area ranges corresponding to each other in the three frames of video screen, i.e., the video screen F1, the video screen F2, and the video screen F3.
And when the total number of the historical interaction records corresponding to the historical interaction positions covered in the corresponding area ranges exceeds a third preset time threshold value, the position area covering each area range is the hotspot interaction area. Still taking fig. 16 as an example, if the sum of the number of the historical interaction records corresponding to the historical interaction positions covered by the area range D1, the number of the historical interaction records corresponding to the historical interaction positions covered by the area range D2, and the number of the historical interaction records corresponding to the historical interaction positions covered by the area range D3 exceeds a third predetermined threshold, the location areas covering the area range D1, the area range D2, and the area range D3 are the hotspot interaction areas.
In this embodiment, approximate screen progress intervals corresponding to the historical interaction progress points in each historical interaction record one to one may be determined, where two endpoints of the approximate screen progress intervals are separated from the corresponding historical interaction progress points by a second predetermined time interval. For example, the target video has two historical interaction records, and two historical interaction progress points in the two historical interaction records are respectively: historical interaction progress point S1(06 minutes 10 seconds) and historical interaction progress point S2(15 minutes 25 seconds), the second predetermined time interval being 1 second. In this case, the approximate screen progress interval corresponding to the historical interaction progress point S1 is 06 minutes 09 seconds to 06 minutes 11 seconds, and the approximate screen progress interval corresponding to the historical interaction progress point S2 is 15 minutes 24 seconds to 15 minutes 26 seconds.
Then, for each approximate picture progress interval, a location area whose third total interaction times exceed the third predetermined time threshold is determined based on the historical interaction positions in the historical interaction records covered by that interval. Still taking fig. 16 as an example, suppose the target video has 1000 historical interaction records, of which the historical interaction positions of 100 records fall within the area range D1 in the video picture F1, those of 300 records fall within the area range D2 in the video picture F2, and those of 500 records fall within the area range D3 in the video picture F3, and the third predetermined time threshold is set to 800. The third total interaction times of the location area covering the area ranges D1, D2 and D3 are then 100 + 300 + 500 = 900, which exceeds the third predetermined time threshold of 800, so that location area can be determined as a hotspot interaction area.
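As a compressed illustration of the fig. 16 example, the sketch below counts, over one approximate picture progress interval, the historical interaction records whose positions fall inside a rectangular area range and compares the total with the third predetermined time threshold. The rectangle representation, the field names, and the numbers are assumptions for the example.

```python
def third_total_interaction_times(records, area, interval):
    (x0, y0, x1, y1) = area  # mutually corresponding area range, modeled as a rectangle
    (t0, t1) = interval      # approximate picture progress interval, in seconds
    return sum(
        1 for r in records
        if t0 <= r["progress_point"] <= t1
        and x0 <= r["position"][0] <= x1
        and y0 <= r["position"][1] <= y1
    )

def is_hotspot_interaction_area(records, area, interval, threshold):
    return third_total_interaction_times(records, area, interval) > threshold

# 900 records land inside the area across the covered frames; the threshold is 800.
records = [{"progress_point": 370.0, "position": (50, 60)} for _ in range(900)]
print(is_hotspot_interaction_area(records, (40, 50, 80, 90), (369.0, 371.0), 800))  # -> True
```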
In one embodiment, as shown in fig. 17, a method for implementing a video interaction function is provided. The method is described as being applied to the terminal 110 and the server 120 shown in fig. 1, and may comprise the following steps S17a and S17b.
S17a is a flow for displaying historical interaction conditions and may include the following steps S17a1 to S17a5. S17b is a flow for detecting and displaying the current interactive operation and may include the following steps S17b1 to S17b8.
S17a1, the terminal sends an interactive display request to the server, and the interactive display request carries the video identifier. S17a2, the server receives the interactive display request, acquires the historical interactive record of the target video corresponding to the video identifier, and sends the acquired historical interactive record to the terminal; the historical interaction record comprises historical interaction progress points, historical interaction positions and historical interaction element information which are mutually associated, and the historical interaction positions are positions in the video pictures corresponding to the historical interaction progress points which are associated with the historical interaction positions. S17a3, the terminal detects a current playing progress point of the target video. S17a4, the terminal obtains a target historical interaction record, the target historical interaction record is a historical interaction record matched with the current playing progress point, and the historical interaction progress point in the target historical interaction record is consistent with the current playing progress point. S17a5, when the terminal acquires the target historical interaction record, displaying a historical interaction element at a target display position in a video picture corresponding to the current playing progress point; the target display position corresponds to the historical interaction position in the target historical interaction record, and the historical interaction element corresponds to the historical interaction element information in the target historical interaction record.
S17b1, the terminal detects the preset interactive operation in the video playing area of the target video. S17b2, when the terminal detects the preset interaction operation, determining a current interaction position and a current interaction progress point corresponding to the preset interaction operation; the current interaction progress point is the playing progress point of the target video when the preset interaction operation is detected, and the current interaction position is the position in the video picture corresponding to the playing progress point. S17b3, the terminal generates a current interaction request, and the current interaction request carries the current interaction position and the current interaction progress point. S17b4, sending the current interaction request to the server. S17b5, the server acquires the current interaction element information and generates a current interaction record based on the current interaction position, the current interaction progress point and the current interaction element information. S17b6, the server sends the current interaction record to the terminal. S17b7, the terminal displays the current interactive element corresponding to the current interactive element information at the current interactive position. S17b8, the server stores the current interaction record to obtain the historical interaction record corresponding to the preset interaction operation.
The sequence of step S17a and step S17b is not strictly limited. In addition, the technical features of each step in this embodiment may be the same as those of the corresponding step in the previous embodiments, and are not repeated herein.
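To illustrate the terminal side of flow S17a, here is a minimal sketch that, for a given playback progress point, picks the matching historical interaction records and "displays" their elements at the recorded positions (printing stands in for rendering). The tolerance parameter and the record fields are assumptions for the example; the embodiment itself only requires the historical interaction progress point to be consistent with the current playing progress point.

```python
def show_history_for_progress(current_point, history_records, tolerance=0.0):
    # S17a4: pick the historical interaction records matched with the current playing progress point.
    matched = [r for r in history_records
               if abs(r["progress_point"] - current_point) <= tolerance]
    # S17a5: display each historical interaction element at the recorded historical interaction position.
    for r in matched:
        print(f"draw {r['element_info']} at {r['position']} on the current video picture")
    return matched

history_records = [
    {"progress_point": 65.0, "position": (320, 180), "element_info": "love_peach"},
    {"progress_point": 130.0, "position": (100, 240), "element_info": "star"},
]
show_history_for_progress(65.0, history_records)
```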
It should be understood that, although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps need not be performed in a strict sequence and may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which need not be performed sequentially; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 18, an apparatus 1800 for implementing a video interaction function is provided. The apparatus 1800 may include the following modules 1802-1806. And a current progress detection module 1802, configured to detect a current playing progress point of the target video. A matching record obtaining module 1804, configured to obtain a historical interaction record matched with the current play progress point, where the matched historical interaction record includes a historical interaction position. A history element display module 1806, configured to display a history interaction element at a position on the current video screen corresponding to the history interaction position.
The apparatus 1800 for implementing the video interaction function obtains a historical interaction record matched with the current playing progress point, where the historical interaction record includes a historical interaction position, and then displays the historical interaction element at the position on the current video picture corresponding to that historical interaction position. First, because the historical interaction position corresponds to a specific position in a specific video picture of the target video, the interaction is highly targeted. Second, according to the historical interaction records, the interactions between the target video and each user who has watched it in the past can be presented at specific positions in specific video pictures, so that the current viewer can learn how historical viewers interacted with the target video; this improves the efficiency with which the current viewer finds the highlights of the target video and reduces the probability of missing them. Moreover, a livelier interactive atmosphere can be created, which improves the interaction participation rate.
In one embodiment, the historical interaction record may also include historical interaction progress points. At this time, the matching record obtaining module 1804 is configured to obtain a history interaction record of which the history interaction progress point matches the current playing progress point.
In one embodiment, the historical interaction record further comprises a historical interaction user identification. Accordingly, the historical interaction element matches the historical interaction user identification.
In one embodiment, the apparatus 1800 for implementing video interaction function may further include the following modules: and the current operation detection module is used for detecting the preset interactive operation in the video playing area acting on the target video. And the interactive position determining module is used for determining a current interactive position corresponding to the preset interactive operation, wherein the current interactive position is a position in a current video picture of the target video. And the current interaction request generation module is used for generating a current interaction request carrying the current interaction position. And the current interaction request sending module is used for sending the current interaction request to the server, and the current interaction request is used for indicating the server to obtain a historical interaction record corresponding to the preset interaction operation based on the current interaction position.
In an embodiment, the apparatus 1800 for implementing a video interaction function further includes a current progress point determining module, where the current progress point determining module is configured to determine a current interaction progress point corresponding to a predetermined interaction operation, and the current interaction progress point is a play progress point of a target video when the predetermined interaction operation is detected. Accordingly, the current interaction request also carries the current interaction progress point, and the current interaction request is used for indicating the server to obtain a historical interaction record corresponding to the preset interaction operation based on the current interaction position and the current interaction progress point.
In one embodiment, the apparatus 1800 for implementing video interaction function further comprises a current interaction element display module, which is configured to display a current interaction element at a current interaction position in a current video frame.
In one embodiment, the apparatus 1800 for implementing video interaction function may further include the following modules: and the current interactive user acquisition module is used for acquiring the current interactive user identifier. And the current interactive element determining module is used for determining the current interactive element matched with the current interactive user identifier.
It should be noted that, for specific limitations of the apparatus 1800 for implementing the video interaction function, reference may be made to the foregoing limitations of the method for implementing the video interaction function applicable to the terminal 110, which are not repeated here. All or part of the modules or units in the apparatus 1800 may be implemented by software, hardware, or a combination thereof. The modules or units may be embedded in, or independent of, a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can invoke them and execute the operations corresponding to each module or unit.
In one embodiment, as shown in FIG. 19, an apparatus 1900 for implementing a video interaction function is provided. The apparatus 1900 may include the following modules 1902 and 1904. The history record obtaining module 1902 is configured to obtain, when an interactive display request with a video identifier is received, a history interaction record of a target video corresponding to the video identifier, where the history interaction record includes a history interaction position, and the history interaction position is a position in a video frame. A history record sending module 1904, configured to send a history interaction record to a terminal corresponding to the interactive display request, where the history interaction record is used for the terminal to obtain a history interaction record matched with the current playing progress point of the target video, and display a history interaction element at a position on the current video picture of the target video corresponding to the history interaction position.
In the apparatus 1900 for implementing the video interaction function, the historical interaction record sent by the server to the terminal includes the historical interaction position, so the terminal can display the historical interaction element at the position on the current video picture corresponding to that historical interaction position. First, because the historical interaction position corresponds to a specific position in a specific video picture of the target video, the interaction is highly targeted. Second, according to the historical interaction records, the interactions between the target video and each user who has watched it in the past can be presented at specific positions in specific video pictures, so that the current viewer can learn how historical viewers interacted with the target video; this improves the efficiency with which the current viewer finds the highlights of the target video and reduces the probability of missing them. Moreover, a livelier interactive atmosphere can be created, which improves the interaction participation rate.
In one embodiment, the historical interaction record further comprises a historical interaction progress point. Accordingly, the historical interaction position is a position within the video picture corresponding to the historical interaction progress point in the same historical interaction record.
In one embodiment, the historical interaction record further comprises a historical interaction user identification. Accordingly, the historical interaction element matches the historical interaction user identification.
In one embodiment, the device 1900 for implementing the video interaction function may further include the following modules: and the current request acquisition module is used for receiving a current interaction request carrying a current interaction position and a current interaction progress point. And the historical record generating module is used for storing the current interaction position and the current interaction progress point to obtain the historical interaction record of the target video.
In one embodiment, the current interaction request further carries a current interaction user identifier. Accordingly, the historical record generating module is used for storing the current interaction position, the current interaction progress point and the current interaction user identifier to obtain the historical interaction record of the target video. Wherein the current interactive element matches the current interactive user identification.
In one embodiment, the device 1900 for implementing the video interaction function may further include the following modules: and the viewpoint interval determining module is used for determining a viewpoint progress interval of the target video based on the historical interaction progress points in the historical interaction records of the target video, wherein the viewpoint progress interval is a candidate progress interval of which the first total interaction times are greater than a first preset time threshold, and the first total interaction times of the candidate progress interval are the total number of the historical interaction records corresponding to the historical interaction progress points covered by the candidate progress interval. And the viewpoint information sending module is used for generating viewpoint information based on the viewpoint progress interval and sending the viewpoint information to the terminal.
In one embodiment, the viewpoint interval determination module may include the following elements: and the candidate interval determining unit is used for determining each candidate progress interval corresponding to each historical interaction progress point, and two end points of each candidate progress interval are separated from the corresponding historical interaction progress point by a first preset time interval. And the first total interaction times of the candidate progress intervals are the total number of the historical interaction records corresponding to the historical interaction progress points covered by the candidate progress intervals. And the viewpoint interval determining unit is used for determining the candidate progress interval with the first total interaction times larger than the first preset time threshold as the viewpoint progress interval of the target video.
In one embodiment, the historical interaction record further comprises a historical interaction user identifier, and the historical interaction user identifier, the historical interaction progress point and the historical interaction position are associated with each other. Accordingly, the device 1900 for implementing the video interaction function may further include the following modules: and the hot target detection module is used for detecting a hot interaction target, wherein the hot interaction target is a candidate interaction target of which the corresponding second total interaction times exceed a second preset time threshold, and the second total interaction times are the total number of historical interaction records of which the historical interaction positions are contained in the range of the candidate interaction target. The initial record determining module is used for determining an initial historical interaction record of the hotspot interaction target when the hotspot interaction target is detected, wherein the initial historical interaction record is a historical interaction record corresponding to a first preset interaction operation acting on the hotspot interaction target. And the resource sending module is used for sending the preset resources to the user account corresponding to the historical interactive user identification in the initial historical interactive record.
In one embodiment, the hot spot target detection module may include the following elements: and the approximate interval determining unit is used for determining approximate picture progress intervals respectively corresponding to the historical interaction progress points in the historical interaction records, and two end points of the approximate picture progress intervals are respectively separated from the corresponding historical interaction progress points by a second preset time interval. The position area determining unit is used for determining a position area of which the third total interaction times exceeds a third preset time threshold value on the basis of historical interaction positions in historical interaction records covered by the approximate picture progress interval respectively for each approximate picture progress interval, wherein the third total interaction times are the total number of the historical interaction records of which the historical interaction positions are contained in the position area; and the hotspot interaction target is a position area of which the third total interaction times exceed a third preset time threshold.
For specific limitations of the apparatus 1900 for implementing the video interaction function, reference may be made to the aforementioned limitations of the method for implementing the video interaction function applicable to the server 120, and details thereof are not repeated herein. All or part of the modules or units in the video interaction function implementation apparatus 1900 may be implemented by software, hardware, or a combination thereof. The modules or units may be embedded in hardware or independent from a processor in the computer device, or may be stored in a memory in the computer device in software, so that the processor can call and execute operations corresponding to the modules or units.
In one embodiment, a computer device is provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the implementation method of the video interaction function that can be applied to a terminal when the processor executes the computer program.
The computer device may be the terminal 110 shown in fig. 1, and its internal structure diagram may be as shown in fig. 20. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor is configured to provide computational and control capabilities. The memory includes a nonvolatile storage medium and an internal memory, the nonvolatile storage medium stores an operating system and a computer program, the internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium, and the computer program is executed by the processor to implement the implementation method of the video interaction function provided by any embodiment of the present application and applicable to the terminal. The network interface is used for communicating with an external terminal through a network connection. The display may be a liquid crystal display or an electronic ink display. The input device of the computer equipment can be a touch layer covered on a display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, the apparatus 1800 for implementing video interaction functions provided in the embodiments of the present application may be implemented in the form of a computer program, and the computer program may run on a computer device as shown in fig. 20. The memory of the computer device may store the program modules constituting the apparatus, such as the current progress detection module 1802, the matching record obtaining module 1804, and the history element display module 1806 shown in fig. 18. The computer program constituted by these program modules causes the processor to execute the steps of the implementation method of the video interaction function in the corresponding embodiments of the present application described in this specification. For example, the computer device shown in fig. 20 may execute step S202 through the current progress detection module 1802, step S204 through the matching record obtaining module 1804, and step S206 through the history element display module 1806 in the apparatus 1800 for implementing the video interaction function shown in fig. 18.
In one embodiment, a computer device is provided, which includes a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the implementation method of the video interaction function provided by any embodiment of the present application and applicable to the server.
The computer device may be the server 120 shown in fig. 1, and its internal structure diagram may be as shown in fig. 21. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor is configured to provide computational and control capabilities. The memory comprises a nonvolatile storage medium and an internal memory, the nonvolatile storage medium stores an operating system and a computer program, the internal memory provides an environment for the operating system in the nonvolatile storage medium and the running of the computer program, the database is used for storing historical interaction records, and the computer program is executed by the processor to realize the implementation method of the video interaction function provided by any embodiment of the application and applied to the server. The network interface is used for communicating with an external terminal through a network connection.
In one embodiment, the apparatus 1900 for implementing the video interaction function provided in the embodiments of the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 21. The memory of the computer device may store the program modules constituting the apparatus, such as the history acquisition module 1902 and the history transmission module 1904 shown in fig. 19. The computer program constituted by these program modules causes the processor to execute the steps of the method for implementing the video interaction function of the corresponding embodiments described in this specification. For example, the computer device shown in fig. 21 may execute step S1102 by the history acquisition module 1902 in the apparatus 1900 for implementing the video interaction function shown in fig. 19, and execute step S1104 by the history transmission module 1904.
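A correspondingly minimal, purely illustrative sketch of the server-side modules is shown below; the per-video dictionary, the `connection` object, and all other names not quoted above are hypothetical.

```python
# Illustrative sketch only: server-side module layout, assuming history records are kept
# per video identifier in an in-memory dictionary.
from collections import defaultdict
from typing import Dict, List, Optional


class HistoryAcquisitionModule:               # module 1902, step S1102
    def __init__(self, store: Optional[Dict[str, List[dict]]] = None):
        # 'store' maps a video identifier to its list of historical interaction records.
        self.store = store if store is not None else defaultdict(list)

    def acquire(self, video_id: str) -> List[dict]:
        """Fetch all historical interaction records of the target video."""
        return list(self.store.get(video_id, []))


class HistoryTransmissionModule:              # module 1904, step S1104
    def send(self, connection, records: List[dict]) -> None:
        """Send the historical interaction records to the requesting terminal."""
        connection.send({"history_interaction_records": records})
```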
It will be appreciated by those skilled in the art that the structures shown in fig. 20 and fig. 21 are merely block diagrams of partial structures relevant to the solution of the present application and do not limit the computer devices to which the present application is applied; a particular computer device may include more or fewer components than shown in the figures, combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
Accordingly, in one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the method for implementing the video interaction function provided by any of the embodiments of the present application.
Accordingly, in another embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the method for implementing the video interaction function provided by any of the embodiments of the present application and applicable to a server.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (30)

1. A method for implementing video interaction function is characterized by comprising the following steps:
detecting a current playing progress point of a target video;
acquiring a historical interaction record matched with the current playing progress point, wherein the matched historical interaction record comprises a historical interaction position; the historical interaction position is a target picture position corresponding to an action position, in the video playing area of the target video, at which a historical preset interaction operation was performed; and the historical preset interaction operation is an operation which is triggered by a user before the moment of acquiring the historical interaction record, acts on the video playing area of the target video, and executes business logic for realizing a preset interaction function;
displaying a historical interactive element at a position, on the current video picture, corresponding to the historical interaction position; wherein the historical interactive element is used for expressing an interactive effect matched with the corresponding historical preset interaction operation; and the historical interactive element is an interactive element which is selected by the user from displayed candidate interactive elements and is matched with the user identification of the user.
2. The method of claim 1, wherein the historical interaction record further comprises historical interaction progress points;
the obtaining of the historical interaction record matched with the current playing progress point comprises the following steps:
and acquiring a historical interaction record of which the historical interaction progress point is matched with the current playing progress point.
3. The method of claim 2, wherein the historical interaction record further comprises a historical interaction user identification;
the historical interactive elements are matched with the historical interactive user identifications, and different historical interactive user identifications are matched with different historical interactive elements.
4. The method of claim 1, wherein the historical interaction elements comprise static elements and/or animation elements.
5. The method of any of claims 1 to 4, further comprising:
detecting a predetermined interactive operation acting in a video playing area of the target video;
determining a current interaction position corresponding to the preset interaction operation, wherein the current interaction position is a position in a current video picture of the target video;
generating a current interaction request carrying the current interaction position;
and sending the current interaction request to a server, wherein the current interaction request is used for indicating the server to obtain the historical interaction record corresponding to the preset interaction operation based on the current interaction position.
6. The method of claim 5, wherein the method further comprises: determining a current interaction progress point corresponding to the preset interaction operation, wherein the current interaction progress point is a playing progress point of the target video when the preset interaction operation is detected;
the current interaction request also carries the current interaction progress point, and the current interaction request is used for indicating the server to obtain the historical interaction record corresponding to the preset interaction operation based on the current interaction position and the current interaction progress point.
7. The method of claim 5, wherein after determining the current interaction location corresponding to the predetermined interaction operation, further comprising:
displaying a current interactive element at a current interactive position in the current video picture; wherein the current interactive element comprises a static element and/or an animation element.
8. The method of claim 7, wherein before displaying the current interactive element at the current interactive position in the current video picture, the method further comprises:
acquiring a current interactive user identifier;
determining a current interactive element matched with the current interactive user identification; wherein different current interactive user identifications match different current interactive elements.
9. A method for implementing video interaction function is characterized by comprising the following steps:
when an interaction display request carrying a video identifier is received, acquiring a historical interaction record of a target video corresponding to the video identifier, wherein the historical interaction record comprises a historical interaction position, and the historical interaction position is a position in a video picture; the historical interaction position is a target picture position corresponding to an action position, in the video playing area of the target video, at which a historical preset interaction operation was performed; and the historical preset interaction operation is an operation which is triggered by a user before the moment of acquiring the historical interaction record, acts on the video playing area of the target video, and executes business logic for realizing a preset interaction function;
sending the historical interaction record to a terminal corresponding to the interaction display request, wherein the historical interaction record is used for the terminal to acquire the historical interaction record matched with the current playing progress point of the target video and to display a historical interactive element at a position, on the current video picture of the target video, corresponding to the historical interaction position; the historical interactive element is used for expressing an interactive effect matched with the corresponding historical preset interaction operation; and the historical interactive element is an interactive element which is selected by the user from displayed candidate interactive elements and is matched with the user identification of the user.
10. The method of claim 9, wherein the historical interaction record further comprises historical interaction progress points and historical interaction user identifications;
the historical interaction position is a position in the video picture corresponding to the historical interaction progress point in the historical interaction record to which the historical interaction position belongs;
the historical interactive elements are matched with the historical interactive user identifications, and different historical interactive user identifications are matched with different historical interactive elements.
11. The method of claim 9, wherein the method further comprises:
receiving a current interaction request, wherein the current interaction request carries a current interaction position, a current interaction progress point and a current interaction user identifier;
storing the current interaction position, the current interaction progress point and the current interaction user identification to obtain a historical interaction record of the target video;
the current interactive element is matched with the current interactive user identifier, and different current interactive user identifiers are matched with different current interactive elements.
12. The method of claim 10, wherein the method further comprises:
determining a viewpoint progress interval of the target video based on the historical interaction progress points in each historical interaction record of the target video, wherein the viewpoint progress interval is a candidate progress interval whose first total interaction times are greater than a first preset times threshold, and the first total interaction times of the candidate progress interval are the total number of the historical interaction records corresponding to the historical interaction progress points covered by the candidate progress interval;
and generating viewpoint information based on the viewpoint progress interval, and sending the viewpoint information to the terminal.
13. The method of any of claims 9 to 12, wherein the historical interaction record further comprises a historical interaction user identification;
the method further comprises the following steps:
detecting a hotspot interaction target, wherein the hotspot interaction target is a candidate interaction target whose corresponding second total interaction times exceed a second preset times threshold, and the second total interaction times are the total number of historical interaction records whose historical interaction positions fall within the range of the candidate interaction target;
determining an initial historical interaction record of the detected hotspot interaction target, wherein the initial historical interaction record is a historical interaction record corresponding to a first preset interaction operation acting on the hotspot interaction target;
and sending the preset resources to the user account corresponding to the historical interactive user identification in the initial historical interactive record.
14. The method of claim 13, wherein detecting a hotspot interaction target comprises:
determining approximate picture progress intervals respectively corresponding to the historical interaction progress points in each historical interaction record, wherein two end points of the approximate picture progress intervals are respectively separated from the corresponding historical interaction progress points by a second preset time interval;
for each approximate picture progress interval, determining, based on the historical interaction positions in the historical interaction records covered by the approximate picture progress interval, a position area whose third total interaction frequency exceeds a third preset frequency threshold, wherein the third total interaction frequency is the total number of the historical interaction records whose historical interaction positions are contained in the position area;
and the hotspot interaction target is a position area whose third total interaction frequency exceeds the third preset frequency threshold.
15. An apparatus for implementing video interaction function, comprising:
the current progress detection module is used for detecting a current playing progress point of the target video;
the matching record acquisition module is used for acquiring a historical interaction record matched with the current playing progress point, wherein the matched historical interaction record comprises a historical interaction position; the historical interaction position is a target picture position corresponding to an action position, in the video playing area of the target video, at which a historical preset interaction operation was performed; and the historical preset interaction operation is an operation which is triggered by a user before the moment of acquiring the historical interaction record, acts on the video playing area of the target video, and executes business logic for realizing a preset interaction function;
the historical element display module is used for displaying a historical interactive element at a position, on the current video picture, corresponding to the historical interaction position; the historical interactive element is used for expressing an interactive effect matched with the corresponding historical preset interaction operation; and the historical interactive element is an interactive element which is selected by the user from displayed candidate interactive elements and is matched with the user identification of the user.
16. The apparatus of claim 15, wherein the historical interaction record further comprises historical interaction progress points;
the matching record obtaining module is also used for obtaining the history interaction record of which the history interaction progress point is matched with the current playing progress point.
17. The apparatus of claim 16, wherein the historical interaction record further comprises a historical interaction user identification;
the historical interactive elements are matched with the historical interactive user identifications, and different historical interactive user identifications are matched with different historical interactive elements.
18. The apparatus of claim 15, in which the historical interaction elements comprise static elements and/or animation elements.
19. The apparatus of any of claims 15 to 18, further comprising:
the current operation detection module is used for detecting a preset interaction operation acting in a video playing area of the target video;
the interaction position determining module is used for determining a current interaction position corresponding to the preset interaction operation, wherein the current interaction position is a position in a current video picture of the target video;
the current interaction request generation module is used for generating a current interaction request carrying the current interaction position;
and the current interaction request sending module is used for sending the current interaction request to a server, and the current interaction request is used for indicating the server to obtain the historical interaction record corresponding to the preset interaction operation based on the current interaction position.
20. The apparatus of claim 19, wherein the apparatus further comprises:
a current progress point determining module, configured to determine a current interaction progress point corresponding to the predetermined interaction operation, where the current interaction progress point is a play progress point of the target video when the predetermined interaction operation is detected;
the current interaction request also carries the current interaction progress point, and the current interaction request is used for indicating the server to obtain the historical interaction record corresponding to the preset interaction operation based on the current interaction position and the current interaction progress point.
21. The apparatus of claim 19, wherein the apparatus further comprises:
and the current interactive element display module is used for displaying the current interactive element at the current interactive position in the current video picture.
22. The apparatus of claim 21, wherein the apparatus further comprises:
the current interactive user acquisition module is used for acquiring a current interactive user identifier;
the current interactive element determining module is used for determining a current interactive element matched with the current interactive user identifier; wherein different current interactive user identifications match different current interactive elements.
23. An apparatus for implementing video interaction function, comprising:
the history acquisition module is used for acquiring, when an interaction display request carrying a video identifier is received, a historical interaction record of a target video corresponding to the video identifier, wherein the historical interaction record comprises a historical interaction position, and the historical interaction position is a position in a video picture; the historical interaction position is a target picture position corresponding to an action position, in the video playing area of the target video, at which a historical preset interaction operation was performed; and the historical preset interaction operation is an operation which is triggered by a user before the moment of acquiring the historical interaction record, acts on the video playing area of the target video, and executes business logic for realizing a preset interaction function;
the historical interaction record sending module is used for sending the historical interaction record to a terminal corresponding to the interaction display request, wherein the historical interaction record is used for the terminal to acquire the historical interaction record matched with the current playing progress point of the target video and to display a historical interactive element at a position, on the current video picture of the target video, corresponding to the historical interaction position; the historical interactive element is used for expressing an interactive effect matched with the corresponding historical preset interaction operation; and the historical interactive element is an interactive element which is selected by the user from displayed candidate interactive elements and is matched with the user identification of the user.
24. The apparatus of claim 23, wherein the historical interaction record further comprises historical interaction progress points and historical interaction user identifications;
the historical interaction position is a position in the video picture corresponding to the historical interaction progress point in the historical interaction record to which the historical interaction position belongs;
the historical interactive elements are matched with the historical interactive user identifications, and different historical interactive user identifications are matched with different historical interactive elements.
25. The apparatus of claim 23, wherein the apparatus further comprises:
the current request acquisition module is used for receiving a current interaction request, and the current interaction request carries a current interaction position, a current interaction progress point and a current interaction user identifier;
the historical record generating module is used for storing the current interaction position, the current interaction progress point and the current interaction user identifier to obtain a historical interaction record of the target video;
the current interactive element is matched with the current interactive user identifier, and different current interactive user identifiers are matched with different current interactive elements.
26. The apparatus of claim 24, wherein the apparatus further comprises:
a viewpoint interval determination module, configured to determine a viewpoint progress interval of the target video based on the historical interaction progress points in each historical interaction record of the target video, wherein the viewpoint progress interval is a candidate progress interval whose first total interaction times are greater than a first predetermined times threshold, and the first total interaction times of the candidate progress interval are the total number of historical interaction records corresponding to the historical interaction progress points covered by the candidate progress interval;
and the viewpoint information sending module is used for generating viewpoint information based on the viewpoint progress interval and sending the viewpoint information to the terminal.
27. The apparatus of any of claims 23 to 26, wherein the historical interaction record further comprises a historical interaction user identification;
the device further comprises:
the hotspot target detection module is used for detecting a hotspot interaction target, wherein the hotspot interaction target is a candidate interaction target whose corresponding second total interaction times exceed a second preset times threshold, and the second total interaction times are the total number of historical interaction records whose historical interaction positions fall within the range of the candidate interaction target;
the initial record determining module is used for determining an initial historical interaction record of the detected hotspot interaction target, wherein the initial historical interaction record is a historical interaction record corresponding to a first preset interaction operation acting on the hotspot interaction target;
and the resource sending module is used for sending the preset resources to the user account corresponding to the historical interaction user identification in the initial historical interaction record.
28. The apparatus of claim 27, wherein the apparatus further comprises:
an approximate interval determining unit, configured to determine approximate picture progress intervals corresponding to the historical interaction progress points in each historical interaction record, where two endpoints of the approximate picture progress intervals are separated from the historical interaction progress points corresponding thereto by a second predetermined time interval;
the position area determining unit is used for determining, for each approximate picture progress interval, based on the historical interaction positions in the historical interaction records covered by the approximate picture progress interval, a position area whose third total interaction frequency exceeds a third preset frequency threshold, wherein the third total interaction frequency is the total number of the historical interaction records whose historical interaction positions are contained in the position area;
and the hotspot interaction target is a position area whose third total interaction frequency exceeds the third preset frequency threshold.
29. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 14.
30. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 14.
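As a purely illustrative note, the following sketch shows one way the viewpoint progress interval recited in claim 12 and the hotspot location areas recited in claim 14 could be computed from a list of historical interaction records; the fixed interval width, grid cell size, and threshold values are hypothetical parameters, not values taken from the claims.

```python
# Illustrative sketch only (not the patented implementation).
from collections import Counter
from typing import List, Tuple


def viewpoint_intervals(progress_points: List[float],
                        interval_seconds: float = 5.0,
                        first_threshold: int = 100) -> List[Tuple[float, float]]:
    """Return candidate progress intervals whose total interaction count exceeds the
    first threshold (cf. claim 12)."""
    counts = Counter(int(p // interval_seconds) for p in progress_points)
    return [(k * interval_seconds, (k + 1) * interval_seconds)
            for k, n in counts.items() if n > first_threshold]


def hotspot_areas(records: List[dict],
                  progress_window: float = 2.0,   # second predetermined time interval
                  cell: float = 0.1,              # hypothetical grid cell, normalised coordinates
                  third_threshold: int = 50) -> List[Tuple[float, float]]:
    """Return location areas whose interaction count within an approximate picture
    progress interval exceeds the third threshold (cf. claim 14). Each record is a dict
    with 'progress_point', 'x', and 'y' keys (a hypothetical representation)."""
    hotspots = []
    for anchor in records:
        # Approximate picture progress interval around this record's progress point.
        low = anchor["progress_point"] - progress_window
        high = anchor["progress_point"] + progress_window
        covered = [r for r in records if low <= r["progress_point"] <= high]
        # Count interactions per coarse position cell inside the interval.
        cells = Counter((int(r["x"] // cell), int(r["y"] // cell)) for r in covered)
        hotspots.extend((cx * cell, cy * cell)
                        for (cx, cy), n in cells.items() if n > third_threshold)
    return list(set(hotspots))
```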
CN201810373881.XA 2018-04-24 2018-04-24 Method and device for realizing video interaction function Active CN110401865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810373881.XA CN110401865B (en) 2018-04-24 2018-04-24 Method and device for realizing video interaction function

Publications (2)

Publication Number Publication Date
CN110401865A CN110401865A (en) 2019-11-01
CN110401865B true CN110401865B (en) 2021-11-30

Family

ID=68321894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810373881.XA Active CN110401865B (en) 2018-04-24 2018-04-24 Method and device for realizing video interaction function

Country Status (1)

Country Link
CN (1) CN110401865B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114501054B (en) * 2022-02-11 2023-04-21 腾讯科技(深圳)有限公司 Live interaction method, device, equipment and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102833490A (en) * 2011-06-15 2012-12-19 新诺亚舟科技(深圳)有限公司 Method and system for editing and playing interactive video, and electronic learning device
CN104113786A (en) * 2014-06-26 2014-10-22 小米科技有限责任公司 Information acquisition method and device
CN104168491A (en) * 2013-05-17 2014-11-26 腾讯科技(北京)有限公司 Information processing method and device in video playing processes
CN104954879A (en) * 2015-06-17 2015-09-30 北京奇艺世纪科技有限公司 Video interaction content display method and device
CN105100926A (en) * 2015-07-06 2015-11-25 深圳市美贝壳科技有限公司 Real-time comment method and system for user interaction television or television program
CN106210127A (en) * 2016-08-15 2016-12-07 腾讯科技(深圳)有限公司 A kind of information processing method, server and client
CN107071587A (en) * 2017-04-25 2017-08-18 腾讯科技(深圳)有限公司 The acquisition methods and device of video segment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104735543B (en) * 2015-03-30 2018-09-28 北京奇艺世纪科技有限公司 A kind of barrage display methods and device
CN104811816B (en) * 2015-04-29 2018-04-13 北京奇艺世纪科技有限公司 A kind of is the method, apparatus and system that the object in video pictures plays barrage label
US20180018616A1 (en) * 2016-07-18 2018-01-18 Avaya Inc. Systems and methods for lifecycle management of limited duration knowledge in automated interaction systems
CN106412622A (en) * 2016-11-14 2017-02-15 百度在线网络技术(北京)有限公司 Method and apparatus for displaying barrage information during video content playing process

Also Published As

Publication number Publication date
CN110401865A (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN106658199B (en) Video content display method and device
CN110225369B (en) Video selective playing method, device, equipment and readable storage medium
EP2728859B1 (en) Method of providing information-of-users' interest when video call is made, and electronic apparatus thereof
CN109660854B (en) Video recommendation method, device, equipment and storage medium
CN112272302A (en) Multimedia resource display method, device, system and storage medium
CN113727130B (en) Message prompting method, system and device for live broadcasting room and computer equipment
CN110703913A (en) Object interaction method and device, storage medium and electronic device
CN107733769B (en) Method and device for displaying user information
CN109495427B (en) Multimedia data display method and device, storage medium and computer equipment
CN109309851B (en) Information processing method, server and terminal
CN115022653A (en) Information display method and device, electronic equipment and storage medium
CN111359220B (en) Game advertisement generation method and device and computer equipment
CN113051493A (en) Application program display method and device, storage medium and terminal
CN113407436A (en) Play component compatibility detection method and device, computer equipment and storage medium
CN109120996B (en) Video information identification method, storage medium and computer equipment
CN110401865B (en) Method and device for realizing video interaction function
CN110780939A (en) Method and device for loading resource file, computer equipment and storage medium
CN110321042B (en) Interface information display method and device and electronic equipment
CN112822560B (en) Virtual gift giving method, system, computer device and storage medium
CN111897474A (en) File processing method and electronic equipment
CN115065870B (en) Target business list display method and device, electronic equipment and storage medium
CN113709300B (en) Display method and device
CN115017340A (en) Multimedia resource generation method and device, electronic equipment and storage medium
CN109726027B (en) Message viewing method and device and electronic equipment
CN113568551A (en) Picture saving method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant