CN108683952B - Video content segment pushing method and device based on interactive video


Info

Publication number
CN108683952B
Authority
CN
China
Prior art keywords
information
user
video content
content segment
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810542897.9A
Other languages
Chinese (zh)
Other versions
CN108683952A (en)
Inventor
刘杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Altstory Technology Beijing Co ltd
Original Assignee
Altstory Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Altstory Technology Beijing Co ltd filed Critical Altstory Technology Beijing Co ltd
Priority to CN201810542897.9A priority Critical patent/CN108683952B/en
Publication of CN108683952A publication Critical patent/CN108683952A/en
Application granted granted Critical
Publication of CN108683952B publication Critical patent/CN108683952B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4668 Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4662 Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms
    • H04N 21/4665 Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms involving classification methods, e.g. Decision trees
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4781 Games

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a video content segment pushing method and device based on interactive video, relates to the field of video technology, and can push suitable video content segments according to the personalized requirements of users. The method comprises the following steps: acquiring interaction information of a user in a current interactive video; querying next video content segment information corresponding to the interaction information, wherein different interaction information corresponds to different next video content segment information; and pushing the next video content segment corresponding to the current interactive video according to the queried next video content segment information. The method and the device are suitable for pushing content segments based on interactive video.

Description

Video content segment pushing method and device based on interactive video
Technical Field
The invention relates to the technical field of videos, in particular to a video content segment pushing method and device based on interactive videos.
Background
With the continuous development of video technology, interactive videos are becoming increasingly popular. An interactive video is a new type of video that integrates an interactive experience into a linearly played video through various technical means. When a player plays an interactive video, options for different branch scenarios that the user may select are expanded and displayed at specific video nodes; after the user selects the option for a branch scenario, that branch scenario is played, so that the personalized viewing requirements of different users can be met. For example, for the same interactive video, different users may select different branch scenarios according to their own needs, so that the video content actually watched by different users may differ.
Currently, existing video content segment pushing is linear: the videos of each episode are played one after another in a preset order. For interactive videos, however, users watch different video content even when watching the same interactive video. With the linear pushing manner described above, after different users have watched different current interactive video content according to their own needs, they can only passively receive and watch the same next video content segment. Suitable video content segments therefore cannot be pushed according to the personalized needs of the user, which degrades the viewing experience.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for pushing video content segments based on interactive video, with the main purpose of solving the problem that the current linear-playing push manner cannot push suitable video content segments according to the personalized requirements of the user, which affects the viewing experience.
According to an aspect of the present invention, there is provided a method for pushing a video content segment based on an interactive video, the method comprising:
acquiring interaction information of a user in a current interaction video;
inquiring next video content segment information corresponding to the interaction information, wherein different interaction information respectively corresponds to different next video content segment information;
and pushing the next video content segment corresponding to the current interactive video according to the inquired next video content segment information.
According to another aspect of the present invention, there is provided an interactive video-based video content segment pushing apparatus, including:
the acquisition unit is used for acquiring the interaction information of the user in the current interaction video;
the query unit is used for querying next video content segment information corresponding to the interactive information acquired by the acquisition unit, wherein different interactive information respectively corresponds to different next video content segment information;
and the pushing unit is used for pushing the next video content segment corresponding to the current interactive video according to the next video content segment information inquired by the inquiring unit.
According to yet another aspect of the present invention, there is provided a storage device having stored thereon a computer program which, when executed by a processor, implements the interactive video-based video content segment pushing method described above.
According to still another aspect of the present invention, there is provided an entity apparatus for pushing video content segments based on interactive video, including a storage device, a processor, and a computer program stored on the storage device and executable on the processor, where the processor implements the above-mentioned method for pushing video content segments based on interactive video when executing the program.
Compared with the conventional method and device for pushing video content segments in a linear playing manner, the method and device for pushing video content segments based on interactive video provided by the present invention query the next video content segment information corresponding to the user's interaction information in the current interactive video and push the next video content segment corresponding to the current interactive video according to the queried information. The push of the next episode is thus combined with the interest the user has expressed in the current interactive video, so that suitable video content segments can be pushed according to the personalized requirements of the user and the viewing experience is improved.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features, and advantages of the present invention may become more readily apparent, embodiments of the present invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart of a video content segment pushing method based on interactive video according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another interactive video-based video content segment pushing method provided by an embodiment of the invention;
fig. 3 is a schematic structural diagram illustrating an interactive video-based video content segment pushing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram illustrating another interactive video-based video content segment pushing apparatus according to an embodiment of the present invention;
fig. 5 shows a schematic physical structure diagram of a video content segment pushing device based on interactive video according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the invention provides a video content segment pushing method based on interactive video, which can push suitable video content segments according to the personalized requirements of users. As shown in fig. 1, the method comprises the following steps:
101. and acquiring the interactive information of the user in the current interactive video.
The current interactive video may be Two-dimensional (2D), Three-dimensional (3D), Four-dimensional (4D), and other multidimensional videos, and specifically may be a movie interaction video, an animation interaction video, a game interaction video, an advertisement promotion interaction video, a Virtual Reality (VR) image interaction video, an Augmented Reality (AR) image interaction video, and the like.
The interaction information may include the user's play selections among the multiple branch scenario videos in the current interactive video, and may further include the user's selections of character props, roles, game levels (checkpoints), and the like in the scenario, which may be determined according to the actual content of the interactive video. The execution subject of the embodiment of the present invention may be a video content segment pushing device based on interactive video.
In a specific application scenario, while the player plays the current interactive video, the device can collect the interaction information of the user, in particular by recognizing the user's mouse clicks, finger touches, face, body movements, gestures, and the like. The process described in steps 102 to 103 is then performed.
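As an illustrative sketch only (the invention does not prescribe any particular data structure), a single piece of interaction information collected in step 101 could be represented as a small record; all class, field, and function names below are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class InteractionEvent:
    """One piece of interaction information collected during playback (hypothetical structure)."""
    user_id: str
    video_id: str                          # identifier of the current interactive video
    node_id: str                           # video node at which the interaction occurred
    input_type: str                        # e.g. "mouse_click", "touch", "gesture", "face", "body"
    branch_choice: Optional[str] = None    # selected branch scenario, if any
    item_choice: Optional[str] = None      # selected character, prop, or level, if any
    timestamp: float = 0.0


def collect_interactions(raw_events: List[InteractionEvent], video_id: str) -> List[InteractionEvent]:
    """Filter the player's raw event stream down to the interaction information for one interactive video."""
    return [event for event in raw_events if event.video_id == video_id]
```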
102. And inquiring the next video content segment information corresponding to the acquired interaction information.
The different interactive information respectively corresponds to different next video content segment information, and the next video content segment information may include information such as a video identifier, a video storage location, and a video playing link address of the next video content segment.
In the embodiment of the invention, a plurality of next scenario videos related to the current interactive video can be configured in advance according to different interaction information, and the next scenario video information corresponding to each next scenario video can be generated. The mapping between each piece of interaction information and its corresponding next scenario video information is stored in a predetermined storage location, which may be a specific storage location in the terminal device or in a cloud server and may be configured in advance according to actual requirements.
For example, three pieces of next scenario video information related to the current interactive video a, namely the video information of candidate next episode videos 1, 2, and 3, are stored in the predetermined storage location, together with the interaction information corresponding to each piece of video information. In a specific query, it is determined from the user's interaction information in interactive video a that the user selected the branch scenario video related to character A for playing; the video information of the candidate next episode video corresponding to that branch scenario video is then queried in the predetermined storage location. This video information may specifically include the network playing address of the candidate next episode video, or the storage location where the terminal locally stores the candidate next episode video, and so on.
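A minimal sketch of the query in step 102, assuming the mapping between interaction information and next video content segment information is stored as a simple lookup table at the predetermined storage location; the table contents, URLs, and function names are illustrative assumptions, not part of the invention:

```python
# Hypothetical mapping stored at the predetermined storage location:
# (interactive video, interaction key) -> next video content segment information.
NEXT_SEGMENT_TABLE = {
    ("interactive_video_a", "branch:character_A"): {
        "video_id": "candidate_next_episode_1",
        "play_url": "https://example.com/play/next_episode_1",
    },
    ("interactive_video_a", "branch:character_B"): {
        "video_id": "candidate_next_episode_2",
        "play_url": "https://example.com/play/next_episode_2",
    },
}


def query_next_segment_info(video_id: str, interaction_key: str, table=NEXT_SEGMENT_TABLE):
    """Return the next video content segment information for one piece of interaction
    information, or None if no mapping has been configured."""
    return table.get((video_id, interaction_key))
```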
103. And pushing the next video content segment corresponding to the current interactive video according to the inquired next video content segment information.
For example, after the network playing address of candidate next episode video 1 is queried, candidate next episode video 1 is obtained through that address, and after the current interactive video finishes playing, candidate next episode video 1 is pushed for playback. In this way the next episode video the user is interested in can be pushed actively.
Compared with the conventional manner of pushing video content segments in a linear playing manner, the video content segment pushing method based on interactive video provided by the embodiment of the invention queries the next video content segment information corresponding to the user's interaction information in the current interactive video and pushes the corresponding next video content segment according to the queried information. The push is thus combined with the interest the user has expressed in the current interactive video, so that video content segments can be pushed appropriately according to the personalized requirements of the user.
Further, as a refinement and an extension of the above specific implementation of the embodiment of the present invention, another interactive video-based video content segment pushing method is provided, as shown in fig. 2, the method includes:
201. and detecting the interaction information of the user in the current interaction video.
Sometimes there is no user interaction information in the current interactive video; for example, a user only watches the beginning of the current interactive video, never reaches a branch scenario expansion node, and then inputs an instruction to play the next episode. In order to push suitable video content segments according to the personalized requirements of the user under such different conditions, the embodiment of the invention detects whether interaction information of the user exists during video playback and handles the two detection results separately, so that the next episode video the user is interested in can still be pushed.
202a, if the interactive information of the user is detected to exist in the current interactive video, acquiring the detected interactive information.
If the user has multiple pieces of interaction information in the current interactive video, all of the interaction information is acquired.
203a, inquiring the next video content segment information corresponding to the acquired interaction information.
In an optional embodiment of the present invention, if the current interactive video is a film or TV-series interactive video or an animation interactive video, and the user's interaction information in the interactive video consists of selections of branch scenario videos to play, then in order to push the next episode video the user is interested in under these circumstances, step 203a may specifically include: determining the branch scenario information played in the current interactive video according to the acquired interaction information, and then querying the next video content segment information corresponding to the determined branch scenario information. Each piece of branch scenario information may correspond to one piece of next video content segment information.
For example, if the user has multiple pieces of interaction information in the current interactive video, the branch scenario information that the user selected to play, corresponding to each piece of interaction information, is determined first, and then the next video content segment information corresponding to each determined piece of branch scenario information is queried.
If the current interactive video is a game interactive video or the like, and the interaction information of the user in the interactive video consists of selections of character props, roles, game levels, and so on, then in order to push the next set of game-level videos the user is interested in under these circumstances, in another optional embodiment of the present invention, step 203a may further include: determining, according to the acquired interaction information, the information about the character props, roles, and game levels selected by the user in the current interactive video, and then querying the next video content segment information corresponding to that information.
204a, pushing the next video content segment corresponding to the current interactive video according to the inquired next video content segment information.
For example, if the user performs interaction A in the current interactive video, the episode S1 video corresponding to interaction A is pushed to the user as the next episode; if the user performs interaction B in the current interactive video, the episode S2 video corresponding to interaction B is pushed to the user as the next episode.
Specifically, corresponding to the optional embodiment in step 203a, if the user has multiple pieces of interaction information in the current interactive video and multiple pieces of next video content segment information are queried accordingly, step 204a may specifically include: counting the number of occurrences of identical next video content segment information among the queried pieces, and then pushing the next video content segment corresponding to the current interactive video according to the next video content segment information with the largest number of occurrences.
For example, 5 pieces of candidate next video content segment information are obtained, of which 3 are identical, that is, the candidate next episode videos corresponding to these 3 pieces of information are the same video content. The storage location of the corresponding candidate next episode video a is queried according to these 3 pieces of information, candidate next episode video a is obtained from that storage location, and after the current interactive video finishes playing, candidate next episode video a is pushed for playback.
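The counting described above amounts to a majority vote over the queried candidate segment information. The sketch below assumes each piece of segment information carries a video identifier so that identical content can be compared; the function name and dictionary layout are illustrative:

```python
from collections import Counter


def select_next_segment_info(candidate_infos):
    """Pick the next video content segment information that occurs most often among the
    candidates queried for the user's multiple pieces of interaction information."""
    if not candidate_infos:
        return None
    counts = Counter(info["video_id"] for info in candidate_infos)
    most_common_id, _ = counts.most_common(1)[0]
    # Return the full information record for the most frequent segment.
    return next(info for info in candidate_infos if info["video_id"] == most_common_id)
```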
Corresponding to the other optional embodiment in step 203a, step 204a may further include: after the next video content segment information corresponding to the character props, roles, and game levels selected by the user in the current interactive video is obtained by query, pushing the next video content segment corresponding to the current interactive video according to that information.
For example, if the user selects the doctor role in the current interactive video, chooses medical props, and selects the level in which a rescue is carried out to camp A, then the corresponding next video information obtained by query is video information related to medical treatment. The network playing address of the corresponding candidate next episode video b is queried according to that video information, candidate next episode video b is obtained through the address, and after the current interactive video finishes playing, candidate next episode video b is pushed for playback.
Further, in order to let the user understand the next video content and enhance viewing stickiness, in yet another optional embodiment of the present invention, corresponding next video content segment preview information may be generated according to the queried next video content segment information. After the current interactive video finishes playing, the generated preview information is pushed first, and after the preview finishes playing, the next video content segment itself is pushed, thereby improving the viewing experience of the user.
For example, after the next video content segment information is queried, the preview content corresponding to next episode video a can be automatically generated for the user to watch and recorded in the database, so that when the next episode needs to be played, the preview of next episode video a is pushed to the user first.
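A minimal sketch of the push order described above, assuming the preview is addressed by a hypothetical variant of the segment's playing address; once the next segment information is known, the preview is queued to play after the current interactive video and before the segment itself:

```python
def build_push_queue(next_segment_info):
    """Queue the preview information first, then the next video content segment itself;
    both are played after the current interactive video finishes."""
    preview_info = {
        "type": "preview",
        "for_segment": next_segment_info["video_id"],
        # Hypothetical preview address derived from the segment's playing address.
        "play_url": next_segment_info["play_url"] + "?preview=1",
    }
    segment_info = {"type": "segment", **next_segment_info}
    return [preview_info, segment_info]
```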
Step 202b, in parallel with step 202a: if it is detected that no interaction information of the user exists in the current interactive video, the user characteristic information of the user and the plot information of each candidate video of the next video content segment corresponding to the current interactive video are obtained.
The user characteristic information may include the user's personal attribute information (e.g., gender, age, occupation, viewing habits, preferred video types), and/or viewing record information (e.g., the types of films watched historically, film durations, viewing times, and the films the user has collected, commented on, or liked), and/or user social information (e.g., the gender, age, occupation, viewing habits, and preferred video types of the user's friends), and/or internet activity information (e.g., the applications the user frequently uses, the text content the user frequently searches for online, and the text content the user posts online), and/or historical interaction information of watched interactive videos (e.g., the user's play selections among the multiple branch scenario videos of previously watched interactive videos).
203b, carrying out classification analysis according to the acquired user characteristic information and the plot information of each candidate video of the next video content segment.
For the embodiment of the invention, after the characteristic information of the viewing user is acquired, if the viewing user is a new user, that is, no user characteristic information has been recorded for this user, a personal portrait of the viewing user is established according to the acquired user characteristic information; if the viewing user is an existing user, the personal portrait of the viewing user is updated according to the acquired user characteristic information. A multi-dimensional model is then established in combination with the attributes of the scenarios of all candidate videos of the next video content segment, a classification algorithm is applied to the model, and the next video content segment to be pushed for the current interactive video is determined.
To illustrate the above process, in yet another alternative embodiment of the present invention, step 203b may specifically include: determining first film watching tendency information of a user according to the gender information, age information and film watching habit information of the user contained in the user attribute information of the watching user; and/or determining second film watching tendency information of the user according to film type information, film duration information, watching time information, film collection information and film evaluation information of the film watched by the user, which are contained in the film watching record information of the watching user; and/or determining third film watching tendency information of the user according to friend gender information, friend age information and friend film watching habit information of friends of the user, which are contained in the user social information of the watching user; and/or determining fourth film watching tendency information of the user according to application information, internet searching text content and network posting text content, contained in the internet activity information of the watching user, of which the use proportion in the applications used by the user is greater than a preset threshold; and/or determining fifth viewing tendency information of the user according to the branch video content information selected by the user and contained in the historical interactive information of the viewing user; and obtaining a classification analysis result by performing weighted calculation on the first viewing tendency information, the second viewing tendency information, the third viewing tendency information, the fourth viewing tendency information and/or the fifth viewing tendency information.
It should be noted that, in the above classification analysis, the more comprehensive the reference information is, the more accurate the classification analysis result will be. Besides the information listed above, the comprehensive analysis may also be performed in combination with other characteristic information of the user; the specific content of such other characteristic information may be selected according to actual requirements and is not limited in this embodiment.
In the embodiment of the invention, statistics can be compiled in advance from the information of different users to obtain the user characteristics associated with different viewing tendencies, and the viewing tendency of a target user can then be analyzed by matching these characteristics against the target user's own characteristics.
For example, based on the user's registered account information, the gender, age, viewing habits, and other information of user 1 can be queried, and the viewing tendency a suited to user 1 is determined from this information. Based on all of user 1's viewing records over a recent period, the types of films user 1 has mainly liked to watch recently can be determined from the types, durations, viewing times, and collected, commented, and liked film information of the films watched, and the viewing tendency b of user 1 is calculated. Based on user 1's social data, the gender, age, viewing habits, and other information of user 1's friends are queried, and the viewing tendency c of user 1 is calculated from the viewing tendencies of those friends. If user 1 logs in with an authorized third-party application account, then based on the data provided and imported by the third-party application, it is possible to query which applications user 1 frequently uses, which content user 1 frequently searches for online, which content user 1 frequently posts, the user information user 1 filled in when registering with the third-party application, and so on, and the viewing tendency d of user 1 is calculated from this information. Based on user 1's interaction data in previously watched interactive videos, the types of branch scenarios user 1 frequently selects for playing, the character roles frequently selected, the character props frequently selected, and so on are queried, and the viewing tendency e of user 1 is calculated from this information.
After the viewing tendencies a, b, c, d, and e are obtained, a weighted calculation may be performed to obtain the classification analysis result. Specifically, the weight corresponding to each viewing tendency is configured in advance according to the influence of each of the 5 dimensions, and the weighted calculation then yields the final viewing tendency of user 1.
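The weighted calculation can be sketched as a weighted combination of per-dimension tendency scores. The weight values and the score representation below are assumptions for illustration only; the invention states only that the weights are configured in advance according to the influence of each dimension:

```python
# Pre-configured weights for the five dimensions (assumed values for illustration).
DIMENSION_WEIGHTS = {
    "personal_attributes": 0.15,     # first viewing tendency (a)
    "viewing_records": 0.35,         # second viewing tendency (b)
    "social": 0.10,                  # third viewing tendency (c)
    "internet_activity": 0.15,       # fourth viewing tendency (d)
    "historical_interaction": 0.25,  # fifth viewing tendency (e)
}


def combine_viewing_tendencies(tendencies):
    """Weighted combination of per-dimension tendency scores.

    `tendencies` maps a dimension name to a {genre: score} dict; dimensions that could
    not be computed are simply omitted and the remaining weights are renormalized.
    """
    if not tendencies:
        return None
    combined = {}
    total_weight = sum(DIMENSION_WEIGHTS[dim] for dim in tendencies)
    for dim, genre_scores in tendencies.items():
        weight = DIMENSION_WEIGHTS[dim] / total_weight
        for genre, score in genre_scores.items():
            combined[genre] = combined.get(genre, 0.0) + weight * score
    # The genre with the highest combined score is taken as the final viewing tendency.
    return max(combined, key=combined.get)
```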
And 204b, determining the next video content segment information corresponding to the current interactive video from each candidate video of the next video content segment according to the classification analysis result.
For example, if it is determined that the final viewing tendency of user 1 favors fantasy-adventure films with a comedic storyline, then video content segment information of the fantasy-adventure type with a comedic storyline is searched for among the candidate videos of the next video content segment and used as the next video content segment information corresponding to the current interactive video.
205b, pushing the next video content segment corresponding to the current interactive video according to the determined next video content segment information.
For the embodiment of the invention, when no user interaction information exists in the current interactive video, a suitable next video content segment can still be pushed by combining the user's characteristic information, meeting the personalized requirements of the user and improving the user experience.
It should be noted that, if user interaction information does exist in the current interactive video, then in order to further improve the accuracy of determining the next video content segment corresponding to the current interactive video, the user's interaction information in the current interactive video and the user's characteristic information may be combined for comprehensive analysis. The next video content segment corresponding to the current interactive video can then be determined more accurately and pushed accordingly, achieving accurate pushing of the next video content segment.
In order to illustrate the above process, in yet another alternative embodiment of the present invention, the process of comprehensive analysis may specifically include: determining corresponding next video content segment information A according to the interactive information of the user in the current interactive video; determining corresponding next video content segment information B according to the characteristic information of the user; and then, carrying out weighted calculation based on the next video content segment information A and B to obtain the finally determined next video content segment information of the current interactive video.
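A sketch of this comprehensive analysis, under the assumption that each source yields a scored ranking over candidate next segments and that the two rankings are blended with pre-configured weights; the 0.6/0.4 split and the function name are arbitrary examples, not values specified by the invention:

```python
def fuse_candidates(scores_from_interaction, scores_from_profile,
                    w_interaction=0.6, w_profile=0.4):
    """Blend candidate scores derived from the interaction information (A) with those
    derived from the user's characteristic information (B); return the best candidate."""
    fused = {}
    for video_id in set(scores_from_interaction) | set(scores_from_profile):
        fused[video_id] = (w_interaction * scores_from_interaction.get(video_id, 0.0)
                           + w_profile * scores_from_profile.get(video_id, 0.0))
    if not fused:
        return None
    # The candidate with the highest fused score is pushed as the next video content segment.
    return max(fused, key=fused.get)
```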
The other video content segment pushing method based on interactive video provided by the embodiment of the invention can push suitable video content segments according to the personalized requirements of the user under different conditions. After the current interactive video finishes playing, the automatically generated preview information of the next video content segment is pushed, so that the user learns about the next video content and viewing stickiness is enhanced. Moreover, if user interaction information exists in the current interactive video, accurate pushing of the next video content segment can be achieved by comprehensively analyzing the user's interaction information in the current interactive video together with the user's characteristic information.
Further, as a specific implementation of the method shown in fig. 1, an embodiment of the present invention provides an apparatus for pushing a video content segment based on an interactive video, where as shown in fig. 3, the apparatus includes: an acquisition unit 31, a query unit 32, and a push unit 33.
The acquisition unit 31 may be configured to acquire the interaction information of a user in the current interactive video. The acquisition unit 31 is the main functional module in the device for obtaining user interaction information; after obtaining the interaction information, it triggers the query unit 32 to work.
The query unit 32 may be configured to query next video content segment information corresponding to the interaction information acquired by the acquisition unit 31, where different interaction information respectively corresponds to different next video content segment information; specifically, the rule mapping table may be invoked to query the next video content segment information corresponding to the acquired interaction information from the rule mapping table.
The pushing unit 33 may be configured to push the next video content segment corresponding to the current interactive video according to the next video content segment information queried by the query unit 32; the pushing unit 33 is the main functional module in the device for pushing the next episode video.
In a specific application scenario, in order to realize suitable video content segment pushing according to personalized requirements of users under different situations, as shown in fig. 4, the apparatus may further include: a detection unit 34, an analysis unit 35, a determination unit 36.
The detecting unit 34 may be configured to detect whether there is interaction information of a user in a current interaction video;
the obtaining unit 31 may be further configured to, if the detecting unit 34 detects that the interaction information of the user does not exist in the current interactive video, obtain user feature information of the user and scenario information of each candidate video of a next video content segment corresponding to the current interactive video;
the analyzing unit 35 may be configured to perform classification analysis according to the user feature information acquired by the acquiring unit 31 and the scenario information of each candidate video of the next video content segment;
the determining unit 36 may be configured to determine, according to the classification analysis result of the analyzing unit 35, next video content segment information corresponding to the current interactive video from each candidate video of the next video content segment;
the pushing unit 33 may be further configured to push a next video content segment corresponding to the current interactive video according to the determined next video content segment information;
the obtaining unit 31 may be specifically configured to, if the detecting unit 34 detects that the interactive information of the user exists in the current interactive video, obtain the interactive information of the user in the current interactive video.
In a specific application scenario, the user characteristic information may include user personal attribute information, and/or viewing record information, and/or user social information, and/or internet activity information, and/or historical interaction information of viewed interactive videos.
Correspondingly, the analysis unit 35 may be specifically configured to determine first viewing tendency information of the user according to the gender information, age information, and viewing habit information of the user included in the user attribute information; and/or determining second film watching tendency information of the user according to film type information, film duration information, watching time information, film collection information and film evaluation information of the film watched by the user, which are contained in the film watching record information; and/or determining third film watching tendency information of the user according to friend gender information, friend age information and friend film watching habit information of friends of the user, which are contained in the social information of the user; and/or determining fourth film watching tendency information of the user according to application information, internet searching text content and network posting text content, contained in the internet activity information, of which the use proportion in the applications used by the user is greater than a preset threshold; and/or determining fifth viewing tendency information of the user according to the branch video content information selected by the user and contained in the historical interaction information; and obtaining a classification analysis result by performing weighted calculation on the first viewing tendency information, the second viewing tendency information, the third viewing tendency information, the fourth viewing tendency information and/or the fifth viewing tendency information.
In a specific application scenario, in order to make the user understand the next video content and enhance the viewing stickiness of the user, as shown in fig. 4, the apparatus further includes: a generation unit 37;
the generating unit 37 may be configured to generate corresponding next video content segment announcement information according to the queried next video content segment information, so that after the current interactive video is played, the next video content segment announcement information is pushed.
In a specific application scenario, the query unit 32 may be specifically configured to determine, according to the interaction information, branching scenario information of the current interactive video playing; and inquiring the next video content segment information corresponding to the branching scenario information.
It should be noted that other corresponding descriptions of the functional units involved in the device for pushing video content segments based on interactive video according to the embodiment of the present invention may refer to the corresponding descriptions in fig. 1 and fig. 2, and are not described herein again.
Based on the method shown in fig. 1 and fig. 2, correspondingly, the embodiment of the invention further provides a storage device, on which a computer program is stored, and the program, when executed by a processor, implements the interactive video based video content segment pushing method shown in fig. 1 and fig. 2.
Based on the foregoing embodiments of the method shown in fig. 1 and fig. 2 and the apparatus shown in fig. 3 and fig. 4, an embodiment of the present invention further provides an entity apparatus for pushing video content segments based on interactive video, as shown in fig. 5, the apparatus includes: a processor 41, a storage device 42, and a computer program stored on the storage device 42 and executable on the processor 41, the processor 41 implementing the interactive video based video content clip pushing method shown in fig. 1 and 2 when executing the program; the device also includes: a bus 43 configured to couple the processor 41 and the storage device 42.
By applying the technical scheme of the invention, suitable video content segments can be pushed according to the personalized requirements of the user under different conditions, and the automatically generated preview information of the next video content segment is pushed after the current interactive video finishes playing, so that the user learns about the next video content and viewing stickiness is enhanced. If user interaction information exists in the current interactive video, accurate pushing of the next video content segment can be achieved by comprehensively analyzing the user's interaction information in the current interactive video together with the user's characteristic information.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by hardware, and also by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A video content segment pushing method based on interactive video is characterized by comprising the following steps:
acquiring one or more pieces of interaction information of a user in a current interactive video;
if the user has one piece of interaction information in the current interactive video, querying next video content segment information corresponding to that interaction information, wherein different interaction information corresponds to different next video content segment information;
if the user has multiple pieces of interaction information in the current interactive video, counting the number of occurrences of identical next video content segment information among the multiple pieces of next video content segment information corresponding to the multiple pieces of interaction information, and determining the next video content segment information with the largest number of occurrences as the next video content segment information corresponding to the current interactive video;
and pushing next video content segment preview information and the next video content segment corresponding to the current interactive video according to the queried next video content segment information.
2. The method of claim 1, wherein before the obtaining the interaction information of the user in the current interaction video, the method further comprises:
detecting whether the interactive information of the user exists in the current interactive video;
if no interaction information of the user exists in the current interactive video, acquiring the user characteristic information of the user and the plot information of each candidate video of the next video content segment corresponding to the current interactive video;
carrying out classification analysis according to the user characteristic information and the plot information of each candidate video of the next video content segment;
determining next video content segment information corresponding to the current interactive video from each candidate video of the next video content segment according to the classification analysis result;
pushing a next video content segment corresponding to the current interactive video according to the determined next video content segment information;
the acquiring of the interaction information of the user in the current interaction video specifically includes:
and if so, acquiring the interaction information of the user in the current interaction video.
3. The method according to claim 2, wherein the user characteristic information includes user personal attribute information, and/or viewing record information, and/or user social information, and/or internet activity information, and/or historical interaction information of watched interaction videos, and the classification analysis is performed according to the user characteristic information and scenario information of each candidate video of the next video content segment, and specifically includes:
determining first film watching tendency information of the user according to the gender information, age information and film watching habit information of the user contained in the user attribute information; and/or
Determining second film watching tendency information of the user according to film type information, film duration information, watching time information, film collection information and film evaluation information of the film watched by the user, which are contained in the film watching record information; and/or
Determining third film watching tendency information of the user according to friend gender information, friend age information and friend film watching habit information of friends of the user, which are contained in the social information of the user; and/or
Determining fourth film watching tendency information of the user according to application information, internet searching text content and network posting text content, wherein the application information is contained in the internet activity information and the usage proportion of the applications used by the user is larger than a preset threshold; and/or
Determining fifth viewing tendency information of the user according to the branch video content information selected by the user and contained in the historical interaction information;
and obtaining a classification analysis result by performing weighted calculation on the first viewing tendency information, the second viewing tendency information, the third viewing tendency information, the fourth viewing tendency information and/or the fifth viewing tendency information.
4. The method of claim 1, wherein after querying video information of a next video content segment corresponding to the interaction information, the method further comprises:
and generating corresponding next video content segment preview information according to the queried next video content segment information, so that the preview information is pushed after the current interactive video finishes playing.
5. The method according to claim 3, wherein the querying for next video content segment information corresponding to the interaction information specifically comprises:
determining the branching scenario information played by the current interactive video according to the interactive information;
and inquiring the next video content segment information corresponding to the branching scenario information.
6. An interactive video-based video content segment pushing device, comprising:
the acquisition unit is used for acquiring one or more pieces of interaction information of a user in the current interactive video;
the query unit is used for querying, if the user has one piece of interaction information in the current interactive video, next video content segment information corresponding to the interaction information acquired by the acquisition unit, wherein different interaction information corresponds to different next video content segment information;
the query unit is further used for, if the user has multiple pieces of interaction information in the current interactive video, counting the number of occurrences of identical next video content segment information among the multiple pieces of next video content segment information corresponding to the multiple pieces of interaction information, and determining the next video content segment information with the largest number of occurrences as the next video content segment information corresponding to the current interactive video;
and the pushing unit is used for pushing next video content segment preview information and the next video content segment corresponding to the current interactive video according to the next video content segment information queried by the query unit.
7. The apparatus of claim 6, further comprising: a detection unit, an analysis unit and a determination unit;
the detection unit is used for detecting whether the interaction information of the user exists in the current interaction video;
the acquiring unit is further configured to acquire user feature information of the user and scenario information of each candidate video of a next video content segment corresponding to the current interactive video if the detecting unit detects that no interactive information of the user exists in the current interactive video;
the analysis unit is used for carrying out classification analysis according to the user characteristic information acquired by the acquisition unit and the plot information of each candidate video of the next video content segment;
the determining unit is used for determining next video content segment information corresponding to the current interactive video from each candidate video of the next video content segment according to the classification analysis result of the analyzing unit;
the pushing unit is further configured to push a next video content segment corresponding to the current interactive video according to the determined next video content segment information;
the obtaining unit is specifically configured to obtain the interaction information of the user in the current interactive video if the detecting unit detects that the interaction information of the user exists in the current interactive video.
8. The apparatus according to claim 7, wherein the user feature information comprises one or more of user personal attribute information, viewing record information, user social information, Internet activity information, and historical interaction information of viewed interactive videos,
the analysis unit being specifically configured to: determine first viewing tendency information of the user according to gender information, age information, and viewing habit information of the user contained in the user personal attribute information; and/or
determine second viewing tendency information of the user according to film type information, film duration information, viewing time information, film collection information, and film evaluation information of films watched by the user, contained in the viewing record information; and/or
determine third viewing tendency information of the user according to friend gender information, friend age information, and friend viewing habit information of the user's friends, contained in the user social information; and/or
determine fourth viewing tendency information of the user according to application information of applications whose usage proportion exceeds a preset threshold, Internet search text content, and online posting text content, contained in the Internet activity information; and/or
determine fifth viewing tendency information of the user according to branch video content information selected by the user, contained in the historical interaction information;
and obtain a classification analysis result by performing a weighted calculation on the first, second, third, fourth and/or fifth viewing tendency information.
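Claim 8 combines up to five viewing-tendency signals by weighted calculation. The sketch below shows one plausible reading in which each tendency is encoded as per-genre scores and summed with fixed weights; the weights, the per-genre encoding, and the source labels are illustrative assumptions, not values given in the patent.

```python
# Minimal sketch of the weighted combination over the five viewing-tendency signals of claim 8.

# Hypothetical weights for the first to fifth viewing tendency information.
WEIGHTS = {
    "attributes": 0.15,   # from user personal attribute information
    "history":    0.30,   # from viewing record information
    "social":     0.15,   # from friends' information in user social information
    "internet":   0.15,   # from Internet activity information
    "branches":   0.25,   # from branch choices in historical interaction information
}

def combine_tendencies(tendencies, weights=WEIGHTS):
    """Weighted sum of per-genre tendency scores; the result plays the role of the
    classification analysis result used to rank candidate next segments."""
    combined = {}
    for source, genre_scores in tendencies.items():
        w = weights.get(source, 0.0)
        for genre, score in genre_scores.items():
            combined[genre] = combined.get(genre, 0.0) + w * score
    return combined

if __name__ == "__main__":
    tendencies = {
        "attributes": {"romance": 0.6, "thriller": 0.4},
        "history":    {"romance": 0.2, "thriller": 0.8},
        "branches":   {"romance": 0.1, "thriller": 0.9},
    }
    result = combine_tendencies(tendencies)
    print(max(result, key=result.get))   # thriller (0.525 vs 0.175 for romance)
```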
9. A storage device having a computer program stored thereon, wherein the program, when executed by a processor, implements the interactive video-based video content segment pushing method according to any one of claims 1 to 5.
10. An interactive video-based video content segment pushing apparatus, comprising a storage device, a processor and a computer program stored on the storage device and executable on the processor, wherein the processor implements the interactive video-based video content segment pushing method according to any one of claims 1 to 5 when executing the program.
CN201810542897.9A 2018-05-30 2018-05-30 Video content segment pushing method and device based on interactive video Active CN108683952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810542897.9A CN108683952B (en) 2018-05-30 2018-05-30 Video content segment pushing method and device based on interactive video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810542897.9A CN108683952B (en) 2018-05-30 2018-05-30 Video content segment pushing method and device based on interactive video

Publications (2)

Publication Number Publication Date
CN108683952A CN108683952A (en) 2018-10-19
CN108683952B true CN108683952B (en) 2020-10-27

Family

ID=63809079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810542897.9A Active CN108683952B (en) 2018-05-30 2018-05-30 Video content segment pushing method and device based on interactive video

Country Status (1)

Country Link
CN (1) CN108683952B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112104921A (en) * 2019-06-18 2020-12-18 上海哔哩哔哩科技有限公司 Video playing method and device and computer equipment
CN111031379B (en) * 2019-12-19 2022-04-12 北京奇艺世纪科技有限公司 Video playing method, device, terminal and storage medium
CN111432277B (en) * 2020-04-01 2022-10-14 咪咕视讯科技有限公司 Video playing method, electronic equipment and computer readable storage medium
CN111726694B (en) * 2020-06-30 2022-06-03 北京奇艺世纪科技有限公司 Interactive video recovery playing method and device, electronic equipment and storage medium
CN111935553A (en) * 2020-08-14 2020-11-13 付良辉 Interactive film shooting production method and projection method
CN111954065A (en) * 2020-08-14 2020-11-17 付良辉 Film shooting production method and showing method with random events
CN112165652B (en) * 2020-09-27 2022-09-20 北京字跳网络技术有限公司 Video processing method, device, equipment and computer readable storage medium
CN112333478A (en) * 2020-10-26 2021-02-05 深圳创维-Rgb电子有限公司 Video recommendation method, terminal device and storage medium
CN114629882B (en) * 2022-03-09 2024-08-13 北京字跳网络技术有限公司 Information display method, apparatus, electronic device, storage medium, and program product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1737853A (en) * 2005-09-09 2006-02-22 湖南互动传媒有限公司 Method for making and playing interactive cartoon program
CN103024554A (en) * 2012-12-21 2013-04-03 深圳Tcl新技术有限公司 Method for previewing television programs
CN103079115A (en) * 2013-02-05 2013-05-01 北京奇艺世纪科技有限公司 Control method and device of multimedia data
CN105338542A (en) * 2014-06-30 2016-02-17 奇点新源国际技术开发(北京)有限公司 Information push method and information push device
CN106060637A (en) * 2016-06-29 2016-10-26 乐视控股(北京)有限公司 Video recommendation method, device and system
CN106611045A (en) * 2016-12-16 2017-05-03 北京智能管家科技有限公司 Intelligent interaction method and device and intelligent terminal
CN106792171A (en) * 2016-12-14 2017-05-31 宁夏灵智科技有限公司 Personalized recommendation method and system in intelligent video app
CN107832437A (en) * 2017-11-16 2018-03-23 北京小米移动软件有限公司 Audio/video method for pushing, device, equipment and storage medium
CN107888950A (en) * 2017-11-09 2018-04-06 福州瑞芯微电子股份有限公司 A kind of method and system for recommending video
CN108024139A (en) * 2017-12-08 2018-05-11 广州视源电子科技股份有限公司 Playing method and device of network video courseware, terminal equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020053089A1 (en) * 2000-10-30 2002-05-02 Kent Massey Methods and apparatus for presenting interactive entertainment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1737853A (en) * 2005-09-09 2006-02-22 湖南互动传媒有限公司 Method for making and playing interactive cartoon program
CN103024554A (en) * 2012-12-21 2013-04-03 深圳Tcl新技术有限公司 Method for previewing television programs
CN103079115A (en) * 2013-02-05 2013-05-01 北京奇艺世纪科技有限公司 Control method and device of multimedia data
CN105338542A (en) * 2014-06-30 2016-02-17 奇点新源国际技术开发(北京)有限公司 Information push method and information push device
CN106060637A (en) * 2016-06-29 2016-10-26 乐视控股(北京)有限公司 Video recommendation method, device and system
CN106792171A (en) * 2016-12-14 2017-05-31 宁夏灵智科技有限公司 Personalized recommendation method and system in intelligent video app
CN106611045A (en) * 2016-12-16 2017-05-03 北京智能管家科技有限公司 Intelligent interaction method and device and intelligent terminal
CN107888950A (en) * 2017-11-09 2018-04-06 福州瑞芯微电子股份有限公司 A kind of method and system for recommending video
CN107832437A (en) * 2017-11-16 2018-03-23 北京小米移动软件有限公司 Audio/video method for pushing, device, equipment and storage medium
CN108024139A (en) * 2017-12-08 2018-05-11 广州视源电子科技股份有限公司 Playing method and device of network video courseware, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN108683952A (en) 2018-10-19

Similar Documents

Publication Publication Date Title
CN108683952B (en) Video content segment pushing method and device based on interactive video
CN108650558B (en) Method and device for generating video precondition based on interactive video
US12094209B2 (en) Video data processing method and apparatus, device, and medium
TWI702844B (en) Method, device, apparatus, and storage medium of generating features of user
US10735494B2 (en) Media information presentation method, client, and server
JP6855595B2 (en) Using machine learning to recommend live stream content
CN110209843B (en) Multimedia resource playing method, device, equipment and storage medium
CN110378732B (en) Information display method, information association method, device, equipment and storage medium
KR101816113B1 (en) Estimating and displaying social interest in time-based media
US20180047425A1 (en) Computerized system and method for automatically extracting gifs from videos
CN110727868B (en) Object recommendation method, device and computer-readable storage medium
CN105872717A (en) Video processing method and system, video player and cloud server
CN106294564A (en) A kind of video recommendation method and device
CN113779381B (en) Resource recommendation method, device, electronic equipment and storage medium
CN103929653A (en) Enhanced real video generator and player, generating method of generator and playing method of player
CN111026969B (en) Content recommendation method and device, storage medium and server
US20150073932A1 (en) Strength Based Modeling For Recommendation System
US20230164369A1 (en) Event progress detection in media items
CN108769831B (en) Video preview generation method and device
WO2015148420A1 (en) User inactivity aware recommendation system
WO2017011084A1 (en) System and method for interaction between touch points on a graphical display
CN105898426A (en) Multimedia content processing method and device and server
CN109299355B (en) Recommended book list display method and device and storage medium
CN111737517A (en) Instant recommendation method and system based on short video
WO2017200871A1 (en) Media file summarizer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant