CN114827702A - Video pushing method, video playing method, device, equipment and medium - Google Patents


Info

Publication number
CN114827702A
Authority
CN
China
Prior art keywords
bullet screen
video
branch
screen information
barrage
Prior art date
Legal status
Granted
Application number
CN202110087582.1A
Other languages
Chinese (zh)
Other versions
CN114827702B (en)
Inventor
刘艳峰 (Liu Yanfeng)
Current Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202110087582.1A
Publication of CN114827702A
Application granted
Publication of CN114827702B
Legal status: Active
Anticipated expiration

Classifications

    • H04N 21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • G06F 16/7867: Retrieval of video data characterised by using manually generated metadata, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • H04N 21/4668: Learning process for intelligent management, e.g. learning user preferences, for recommending content such as movies
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N 21/4884: Data services, e.g. news ticker, for displaying subtitles
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a video pushing method, a video playing method, a device, equipment and a medium, and belongs to the technical field of networks. In this technical scheme, the server acquires the bullet screen information sent by a user while watching a target video and obtains a corresponding bullet screen label from the semantics of that information. Because bullet screen information carries the user's impressions while watching the target video, a bullet screen label obtained from its semantics can reflect the user's preferences to a certain extent. The server recommends to the user the second branch video obtained from the bullet screen label, that is, a branch video the user is likely to be interested in; pushing video to the user in this way yields high human-computer interaction efficiency.

Description

Video pushing method, video playing method, device, equipment and medium
Technical Field
The present application relates to the field of network technologies, and in particular, to a video pushing method, a video playing method, an apparatus, a device, and a storage medium.
Background
With the development of computer technology, more and more users watch videos through various computer devices. A single video may include a plurality of branch videos, and the user can choose which branch videos to watch according to his or her preference.
In the related art, when selecting a branch video to watch, the user must choose manually based on the titles of the different branch videos. When the number of branch videos is large, however, the user cannot easily find the desired branch video and may need to switch many times before finding it, resulting in low human-computer interaction efficiency.
Disclosure of Invention
The embodiments of the application provide a video pushing method, a video playing method, a device, equipment and a storage medium, which can improve the efficiency of human-computer interaction. The technical scheme is as follows:
in one aspect, a video push method is provided, and the method includes:
displaying at least one piece of bullet screen information on a display interface of the content item;
acquiring first barrage information, wherein the first barrage information is transmitted when a user logging in a first account watches a first branch video of a target video;
acquiring a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, wherein the first bullet screen label is used for representing the semantics of the first bullet screen information;
and acquiring a second branch video corresponding to the first bullet screen label in the target video, and pushing the second branch video to the first account.
In one possible embodiment, the method further comprises:
and storing the first barrage information as second barrage information corresponding to the second branch video.
In one aspect, a video playing method is provided, where the method includes:
responding to the barrage information sending operation of a user logging in a first account in the playing process of a first branch video of a target video, and sending first barrage information;
receiving a second branch video of the target video, wherein the second branch video corresponds to a first barrage tag of the first barrage information, and the first barrage tag is used for representing the semantic meaning of the first barrage information;
and playing based on the received second branch video.
In a possible embodiment, the playing based on the received second branch video includes:
and switching the first branch video into the second branch video for playing.
In a possible implementation manner, after the playing based on the received second branch video, the method further includes:
and in response to the completion of the playing of the second branch video, playing a third branch video, wherein the third branch video is the branch video of the target video with the highest similarity to the second branch video.
In one aspect, a video push apparatus is provided, the apparatus including:
the system comprises a barrage information acquisition module, a barrage information acquisition module and a barrage information processing module, wherein the barrage information acquisition module is used for acquiring first barrage information, and the first barrage information is a barrage sent by a user logging in a first account when watching a first branch video of a target video;
the bullet screen label obtaining module is used for obtaining a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, wherein the first bullet screen label is used for representing the semantics of the first bullet screen information;
and the pushing module is used for acquiring a second branch video corresponding to the first barrage label in the target video and pushing the second branch video to the first account.
In a possible implementation manner, the bullet screen label obtaining module is configured to input the first bullet screen information into a semantic recognition model, and perform semantic recognition on the first bullet screen information through the semantic recognition model to obtain a first semantic feature of the first bullet screen information; and acquiring the first bullet screen label corresponding to the first bullet screen information based on the first semantic features.
In a possible implementation manner, the bullet screen label obtaining module is configured to obtain similarities between the first semantic feature and semantic features of multiple labels, respectively; and determining the label with the similarity meeting the first similarity condition as the first bullet screen label.
In a possible implementation manner, the pushing module is configured to compare the first barrage label with a plurality of branch video labels of the target video, respectively, to obtain similarities between the first barrage label and the plurality of branch video labels, where the branch video labels are used to represent video contents of corresponding branch videos; and responding to the fact that the similarity between the first barrage label and any branch video label meets a target similarity condition, and obtaining the second branch video corresponding to any branch video label.
In a possible embodiment, the apparatus further comprises:
the bullet screen information classification module is used for inputting the first bullet screen information into a bullet screen classification model and classifying the first bullet screen information through the bullet screen classification model; and responding to the first bullet screen information as a first bullet screen type, executing the step of acquiring a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, wherein the first bullet screen type is a bullet screen type allowed to be issued.
In a possible implementation manner, the bullet screen information classification module is configured to perform word segmentation on the first bullet screen information to obtain a plurality of first words; extracting first text features of the plurality of first words; and in response to the fact that the similarity between the first text feature and the text feature of any target vocabulary does not accord with a second similarity condition, determining the first bullet screen information as the first bullet screen type, wherein the target vocabulary is a vocabulary which is not allowed to be issued.
In a possible implementation manner, the bullet screen information classification module is further configured to input the first account into the bullet screen classification model; comparing the first account number with a plurality of second account numbers respectively; and responding to the fact that the first account number is different from any second account number, determining the first bullet screen information as the first bullet screen type, wherein the second account number is an account number which is not allowed to release bullet screen information.
In a possible implementation manner, the pushing module is further configured to push a plurality of second barrage information corresponding to the second branch video to the first account.
In one possible embodiment, the apparatus further comprises:
and the storage module is used for storing the first barrage information into second barrage information corresponding to the second branch video.
In one aspect, a video playing apparatus is provided, the apparatus including:
the system comprises a sending module, a display module and a display module, wherein the sending module is used for responding to the bullet screen information sending operation of a user logging in a first account in the playing process of a first branch video of a target video and sending first bullet screen information;
a receiving module, configured to receive a second branch video of the target video, where the second branch video corresponds to a first barrage tag of the first barrage information, and the first barrage tag is used to represent a semantic meaning of the first barrage information;
and the playing module is used for playing based on the received second branch video.
In a possible embodiment, the apparatus further comprises:
and the display module is used for displaying a plurality of second barrage information corresponding to the second branch video on the playing picture of the second branch video.
In a possible implementation manner, the playing module is further configured to determine a playing status of the first branch video; and responding to the completion of the playing of the first branch video, executing the step of playing based on the received second branch video.
In a possible implementation manner, the playing module is configured to switch the first branch video to the second branch video for playing.
In a possible implementation manner, the playing module is further configured to play a third branch video in response to that the second branch video is played completely, where the third branch video is a branch video of the target video with the highest similarity to the second branch video.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one instruction stored therein, the instruction being loaded and executed by the one or more processors to implement the video push method or the video play method.
In one aspect, a computer-readable storage medium is provided, where at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the video pushing method or the video playing method.
In one aspect, a computer program product or a computer program is provided, including program code stored in a computer-readable storage medium. A processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the video pushing method or the video playing method.
According to this technical scheme, the server acquires the bullet screen information sent by the user while watching the target video and obtains a corresponding bullet screen label from the semantics of that information. Because bullet screen information carries the user's impressions while watching the target video, a bullet screen label obtained from its semantics can reflect the user's preferences to a certain extent. The server recommends to the user the second branch video obtained from the bullet screen label, that is, a branch video the user is likely to be interested in; pushing video to the user in this way yields high human-computer interaction efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of an implementation environment of a video push method provided in an embodiment of the present application;
FIG. 2 is a schematic view of an interface provided by an embodiment of the present application;
fig. 3 is a flowchart of a video pushing method provided in an embodiment of the present application;
fig. 4 is a flowchart of a video pushing method provided in an embodiment of the present application;
fig. 5 is a flowchart of a bullet screen classification model training method provided in an embodiment of the present application;
fig. 6 is a flowchart of a video playing method provided in an embodiment of the present application;
FIG. 7 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 8 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 9 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 10 is an interaction diagram provided by an embodiment of the application;
fig. 11 is a schematic structural diagram of a video pushing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a video playback device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution.
The term "at least one" in this application means one or more, "a plurality" means two or more, for example, a plurality of reference face images means two or more reference face images.
In order to more clearly explain the technical solutions provided in the present application, first, terms related in the embodiments of the present application are described:
bullet screen information: an interaction service provided by the server. A user can send bullet screen information while watching a video, and this information can be seen by other users watching the same video. Correspondingly, other users can also send bullet screen information while watching the video, and the user can see the bullet screen information they send.
Fig. 1 is a schematic diagram of an implementation environment of a video pushing method according to an embodiment of the present application, and referring to fig. 1, the implementation environment may include a terminal 110 and a server 140.
The terminal 110 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart watch, a smart television, a smart car device, and the like. The terminal 110 is installed and operated with an application program supporting video push and video play.
Optionally, the server 140 may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers; the number of servers 140 is not limited in the embodiments of the application. The server 140 may also be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) services, and big data and artificial intelligence platforms. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the application.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminal may be only one, or several tens or hundreds, or more, and other terminals may also be included in the implementation environment. The number of terminals and the type of the device are not limited in the embodiments of the present application.
In order to more clearly explain the technical solution provided in the present application, first, a method of producing a target video according to the present application will be explained.
In the embodiment of the present application, when a video producer produces a target video, different parts of content can be produced in different branch videos, and in this way, a user watching the target video can select video content to be watched according to own preference. For example, for an author of a digital product evaluation video, the content of evaluating different digital products can be put into different branch videos, and a plurality of branch videos are synthesized, so that a target video of the digital product evaluation is obtained. When a user watches a target video of the digital product evaluation class, the user can select a branch video which is interested by the user to watch. In some embodiments, the video producer can also produce the target video by using a main line video + a plurality of branch line videos, where the main line video is also a branch video of the target video, but when the user watches the target video, the main line video is preferentially played. The video producer can record a general introduction of a target video, namely an introduction of a plurality of branch videos in a main line video, and a user can know the video contents of different branch videos through the main line video so as to assist the user in selecting the branch videos.
In the following, an application scenario of the embodiment of the present application will be described with reference to the above-described manner of creating a target video.
In one possible implementation, after the video producer has produced the target video, the video producer can upload the target video to the server, and the server stores the target video in a video database it maintains. The server can send the target video stored in the video database to different terminals based on video acquisition requests. From the user's perspective, the user can view the target video stored on the server through a video playing client or a web page. Taking the case of a video playing client as an example, referring to fig. 2, a playing interface 201 of the video playing client is displayed on the terminal, a target video selection area 202 is displayed on one side of the playing interface 201, and the user can select a target video to watch in the target video selection area 202. After the user selects the target video, the terminal can play the target video in the video playing area 203. In some embodiments, the target video selection area 202 can also display an identifier and a brief introduction of each branch video of the target video, from which the user can learn about the branch videos. In the embodiments of the application, while watching the target video, the user can control the terminal to play the corresponding branch video by sending bullet screen information.
In addition to recorded videos produced by a video producer, the target video may also be a live video. An application scenario of the present embodiment will be described below taking the target video as a live video.
In one possible embodiment, in a live scene of a sports game, a plurality of cameras are arranged in the sports field, and together they can record the course of the game comprehensively. The cameras upload the videos they shoot to the server, and the video uploaded by each camera is a branch video of the live video. The server can push the videos shot by the cameras to a director of the sports game; according to the live situation, the director switches among the images shot by different cameras through the server and pushes them to users, who watch the live video through their terminals. Some users, when watching a sports game, prefer the view from a specific camera position, and during the live broadcast they can watch the video of the corresponding camera position by sending barrage information.
Fig. 3 is a flowchart of a video pushing method provided in an embodiment of the present application, taking an execution subject as a server as an example, and referring to fig. 3, the method includes:
301. the server acquires first barrage information, wherein the first barrage information is a barrage sent by a user logging in a first account when watching a first branch video of a target video.
The target video includes a plurality of branch videos, each corresponding to different content of the target video, and the user can select which branch video to watch. While watching the target video, the user can send bullet screen information to communicate with other users watching the same target video. In some embodiments, the user can send bullet screen information only after logging in to the first account, which ensures the traceability of bullet screen information and reduces the amount of malicious bullet screen information issued.
302. The server acquires a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, wherein the first bullet screen label is used for representing semantics of the first bullet screen information.
The first bullet screen label is used to represent the semantics of the first bullet screen information; that is, one bullet screen label may correspond to multiple pieces of first bullet screen information. For example, the first bullet screen label "weather condition" may simultaneously correspond to the first bullet screen information "what is the weather like today", "how is the weather today" and "why is it raining today". In other words, obtaining the first bullet screen label corresponding to the first bullet screen information can be regarded as classifying the first bullet screen information.
303. The server acquires a second branch video corresponding to the first bullet screen label in the target video, and pushes the second branch video to the first account.
The server can determine the corresponding second branch video from the first barrage label, that is, from the semantics of the first barrage information, and push the second branch video to the first account. In other words, a user sends bullet screen information to express his or her viewing experience. The server adds the first bullet screen label to the first bullet screen information according to its semantics, which is equivalent to classifying the first bullet screen information; this classification expresses the user's preference to a certain extent, so the second branch video the server obtains based on the first bullet screen label is a branch video the user may be interested in.
According to this technical scheme, the server acquires the bullet screen information sent by the user while watching the target video and obtains a corresponding bullet screen label from the semantics of that information. Because bullet screen information carries the user's impressions while watching the target video, a bullet screen label obtained from its semantics can reflect the user's preferences to a certain extent. The server recommends to the user the second branch video obtained from the bullet screen label, that is, a branch video the user is likely to be interested in; pushing video to the user in this way yields high human-computer interaction efficiency.
Fig. 4 is a flowchart of a video pushing method provided in an embodiment of the present application, and referring to fig. 4, the method includes:
401. the server acquires first barrage information, wherein the first barrage information is a barrage sent by a user logging in a first account when watching a first branch video of a target video.
The first account is an account used when the user watches the target video.
In one possible implementation, a user watches a first branch video of a target video through a video playing client, and a first account is logged on the video playing client. When watching the first branch video, the user can send the first barrage information through the video playing client. The terminal sends the first bullet screen information to the server through the video playing client, and the server receives the first bullet screen information sent by the terminal.
In one possible implementation, the user views a first branch video of the target video through a web page, where the first account is logged on. When watching the first branch video, the user can send first barrage information on the webpage, the terminal sends the first barrage information to the server through the webpage, and the server receives the first barrage information.
In a possible implementation manner, a user views a first branch video of a target video through an applet, and a first account is logged in the applet, where the applet is an application that can be used without downloading or installing, and the applet can rely on multiple types of applications, such as a social application, a shopping application, or a payment application, which is not limited in this application. When a user watches a target video through a small program, first bullet screen information can be sent through the small program, the terminal sends the first bullet screen information to a server, and the server acquires the first bullet screen information.
The server may obtain the first bullet screen information in any one of the above manners, which is not limited in the embodiment of the present application.
402. The server inputs the first bullet screen information into the bullet screen classification model, and the first bullet screen information is classified through the bullet screen classification model.
In a possible implementation manner, the server performs word segmentation on the first bullet screen information through the bullet screen classification model to obtain a plurality of first words. The server extracts first text features of the plurality of first words through the bullet screen classification model. And in response to the fact that the similarity between the first text characteristic and the text characteristic of any target word does not accord with a second similarity condition, the server determines the first bullet screen information as a first bullet screen type through a bullet screen classification model, the target word is a word which is not allowed to be issued, and the first bullet screen type is a bullet screen type which is allowed to be issued. And in response to the fact that the similarity between the first text feature and the text feature of any target word meets a second similarity condition, the server determines the first bullet screen information as a second bullet screen type through the bullet screen classification model, wherein the second bullet screen type is a bullet screen type which is not allowed to be issued. In some embodiments, such implementations are also referred to as content-based bullet screen information classification techniques.
In this embodiment, the server can compare first text features of a plurality of first words in the first bullet screen information with text features of target words through the bullet screen classification model to determine whether the target words exist in the first bullet screen information, and when it is determined that the target words do not exist in the first bullet screen information, determine the first bullet screen information as the first bullet screen type. That is to say, before the subsequent steps are performed, the server classifies the first barrage information through the barrage classification model, and only the first barrage type, that is, the barrage type allowed to be issued, is processed in the subsequent processing process, so that some barrage information which is not allowed to be issued is removed in advance, and the consumption of computing resources in the subsequent processing process is reduced.
For example, the bullet screen classification model includes a word segmentation submodel and a classification submodel. If the first bullet screen information is "i want to see the evaluation of brand a", the server inputs it into the word segmentation submodel and segments it into the five first vocabularies "i", "want to see", "brand a", "of" and "evaluation". The server then inputs these five first vocabularies into the classification submodel and extracts the 5 first text features of "i", "want to see", "brand a", "of" and "evaluation". Through the classification submodel, the server compares the 5 first text features with the text features of the target vocabularies to obtain their similarities. When the server determines through the classification submodel that none of the similarities between the 5 first text features and the text features of the target vocabularies meets the second similarity condition, it determines the first bullet screen information "i want to see the evaluation of brand a" as the first bullet screen type. When the server determines through the classification submodel that the similarity between any of the 5 first text features and the text feature of any target vocabulary meets the second similarity condition, it determines the first bullet screen information as the second bullet screen type.
The following will further describe, with reference to the above example, a process in which the server processes the first bullet screen information through the word segmentation sub-model and the classification sub-model of the bullet screen classification model.
In the process of segmenting the first bullet screen information through the word segmentation submodel, the submodel intercepts characters from the first bullet screen information "i want to see the evaluation of brand a" in order, compares the intercepted characters with the words in a dictionary, and, when any character or character group matches a word in the dictionary, takes that character or character group as a first vocabulary. For example, the word segmentation submodel takes the first character "i" and compares it with the words in the dictionary; if the dictionary contains a word matching "i", the submodel takes the character "i" as a first vocabulary. Next, the submodel takes the second character "want" and compares it with the dictionary; if no word matches "want", the submodel takes the third character "see", splices the characters "want" and "see" into the character group "want to see", and compares that group with the dictionary. On determining that the dictionary contains a vocabulary matching "want to see", the server takes the character group "want to see" as another first vocabulary through the word segmentation submodel. The server then segments the remaining characters "the evaluation of brand a" in the same way, finally obtaining the five first vocabularies "i", "want to see", "brand a", "of" and "evaluation" for the first bullet screen information "i want to see the evaluation of brand a".
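To make this walkthrough concrete, here is a minimal sketch of a greedy dictionary-matching segmenter of the kind the word segmentation submodel is described as performing; the dictionary contents, function name, and maximum word length are illustrative assumptions, not taken from the patent.

```python
def segment(text: str, dictionary: set[str], max_word_len: int = 4) -> list[str]:
    """Greedy forward matching: grow a candidate from the current position
    until it matches a dictionary word, as in the patent's walkthrough."""
    words, i = [], 0
    while i < len(text):
        match = None
        # Try candidates from 1 character up to max_word_len characters,
        # taking the first (shortest) dictionary hit, as the example describes.
        for j in range(i + 1, min(i + max_word_len, len(text)) + 1):
            if text[i:j] in dictionary:
                match = text[i:j]
                break
        if match is None:
            match = text[i]  # fall back to a single character
        words.append(match)
        i += len(match)
    return words

dictionary = {"我", "想看", "品牌a", "的", "评测"}
print(segment("我想看品牌a的评测", dictionary))
# ['我', '想看', '品牌a', '的', '评测']
# i.e. "i", "want to see", "brand a", "of", "evaluation"
```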
In the process of classifying the first bullet screen information based on the 5 first vocabularies through the classification submodel, if the server expresses text features as vectors, the server can extract the 5 first text features (1, 2, 1), (1, 3, 2), (2, 3, 4), (3, 1, 2) and (1, 0, 4) of the 5 first vocabularies "i", "want to see", "brand a", "of" and "evaluation" through the classification submodel. The server then obtains, through the classification submodel, the similarities between these 5 first text features and the text features of the target vocabularies. Taking one target vocabulary with text feature (1, 1, 1) as an example, the server can obtain the cosine similarities 0.943, 0.926, 0.966, 0.926 and 0.7 between the 5 first text features and the text feature (1, 1, 1) of the target vocabulary. Meeting the second similarity condition means that the similarity is greater than a first similarity threshold; if the first similarity threshold is 0.9, the classification submodel determines the type of the first bullet screen information as the second bullet screen type. The above example uses three-dimensional text features for illustration; in other possible embodiments, the text features may be vectors with more dimensions, which is not limited in the embodiments of the application.
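The similarity arithmetic in this example can be reproduced directly. The sketch below takes the five first text features, the target vocabulary feature (1, 1, 1), and the 0.9 threshold from the example; everything else is an illustrative assumption.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

first_text_features = [np.array(v, dtype=float) for v in
                       [(1, 2, 1), (1, 3, 2), (2, 3, 4), (3, 1, 2), (1, 0, 4)]]
target_feature = np.array((1.0, 1.0, 1.0))  # feature of one target vocabulary

sims = [cosine(f, target_feature) for f in first_text_features]
print([round(s, 3) for s in sims])
# [0.943, 0.926, 0.965, 0.926, 0.7]  (the patent text rounds the third to 0.966)

# Second similarity condition: similarity greater than the first similarity
# threshold, 0.9 in the example.
is_blocked = any(s > 0.9 for s in sims)
print("second bullet screen type" if is_blocked else "first bullet screen type")
```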
In one possible implementation, the bullet screen classification model is a binary classification model trained on sample bullet screen information and the bullet screen types corresponding to that sample information, giving it the ability to classify bullet screen information. The server inputs the first bullet screen information into the bullet screen classification model, which maps it to the corresponding word vector. The server multiplies the word vector of the first bullet screen information by the weight matrices of the bullet screen classification model to obtain the feature vector of the first bullet screen information. The server normalizes this feature vector through the bullet screen classification model to obtain the probabilities that the first bullet screen information belongs to the different bullet screen types, and determines the bullet screen type with the highest probability as the type of the first bullet screen information.
Under this kind of embodiment, the server can classify first bullet screen information through bullet screen classification model fast, and bullet screen information classification's efficiency is higher.
For example, if the first bullet screen information is "i want to see the evaluation of brand a", the server inputs it into the bullet screen classification model, which maps it to the word vector (1, 1, 2, 1, 1). The server multiplies this word vector by the weight matrix of the bullet screen classification model (given in the original as an image, Figure BDA0002911451640000121) to obtain the vector (4, 2). The server normalizes the vector (4, 2) through the bullet screen classification model to obtain the vector (0.67, 0.33), where 0.67 represents a 67% probability that the first bullet screen information belongs to the first bullet screen type and 0.33 a 33% probability that it belongs to the second bullet screen type. The server can therefore determine the type of the first bullet screen information as the first bullet screen type.
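A compact sketch of this classification pass follows. The patent's weight matrix is given only as an image, so the matrix below is a stand-in chosen so that the product reproduces the example's vector (4, 2); the mapping from text to the word vector is likewise elided.

```python
import numpy as np

word_vector = np.array([1, 1, 2, 1, 1], dtype=float)  # "i want to see the evaluation of brand a"

# Assumed 5x2 stand-in, NOT the patent's actual matrix (which is an image).
W = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 0],
              [0, 0]], dtype=float)

feature = word_vector @ W        # -> [4. 2.]
probs = feature / feature.sum()  # the normalization used in the example
print(probs)                     # approx. [0.67 0.33] -> first bullet screen type
```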
In a possible implementation manner, the server can classify the first bullet screen information based on the first account sending the first bullet screen information, in addition to directly classifying the first bullet screen information, that is, the server inputs the first account into the bullet screen classification model. And the server compares the first account with a plurality of second accounts through the bullet screen classification model. And responding to the fact that the first account is different from any second account, the server determines the first bullet screen information as a first bullet screen type, and the second account is an account which is not allowed to release the bullet screen information. In some embodiments, this implementation is also referred to as a user identity-based classification technique.
In this implementation, the server can classify the first bullet screen information based on the identity of the first account that sent it, so bullet screen information sent by accounts currently in a punishment stage for earlier violations can be shielded, improving the efficiency of shielding bullet screen information.
For example, the server can maintain an account database for storing second accounts. When the server inputs a first account into the bullet screen classification model, the model compares the first account with the second accounts in the account database. When the first account differs from every second account in the account database, the server determines the first bullet screen information as the first bullet screen type; when the first account matches any second account in the account database, the server determines the first barrage information as the second barrage type.
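The identity-based branch reduces to a membership test against the stored second accounts. A minimal sketch, assuming a set-backed account database and invented account names:

```python
# Hypothetical second accounts: accounts barred from publishing barrages.
blocked_accounts = {"user_1024", "user_2048"}

def classify_by_identity(first_account: str) -> str:
    # A first account matching any stored second account yields the second type.
    if first_account in blocked_accounts:
        return "second bullet screen type (publishing not allowed)"
    return "first bullet screen type (publishing allowed)"

print(classify_by_identity("user_42"))    # first bullet screen type (publishing allowed)
print(classify_by_identity("user_1024"))  # second bullet screen type (publishing not allowed)
```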
Of course, on the basis of the two embodiments, the server can also classify the first bullet screen information based on the first bullet screen information and the first account through the bullet screen classification model, that is, the server classifies the first account through the bullet screen classification model first. When the fact that the first account number is different from any second account number is determined, the server inputs the first bullet screen information into the bullet screen classification model, and the first bullet screen information is classified through the bullet screen classification model to obtain a bullet screen type of the first bullet screen information; when determining that the first account number is the same as any second account number, the server can directly determine the first barrage information as the second barrage type.
It should be noted that, in response to the first bullet screen information being the first bullet screen type, the server can execute the following step 403. In response to the first bullet screen information being the second bullet screen type, the server can send a bullet screen information update prompt to the first account, reminding the user that the first bullet screen information is of the second bullet screen type. Based on the prompt, the first account can re-edit the first bullet screen information or give up sending it; if the first account re-edits the first bullet screen information, the server classifies the re-edited information by the method described for step 402, which is not repeated here.
The following describes a training method of the bullet screen classification model in conjunction with the above embodiments.
Referring to fig. 5, the theoretical basis on which the server trains the barrage classification model includes the content-based barrage information classification technique and the user-identity-based classification technique. In some embodiments, the bullet screen classification model includes two modules: a content-based learning module and an identity-based learning module. The content-based learning module includes a word segmentation unit and a classification unit, corresponding to the word segmentation submodel and the classification submodel respectively. When training the content-based learning module, the barrage training data set used includes sample barrage information and the barrage types corresponding to that sample information. When the identity-based learning module performs barrage classification, it can classify based on the second accounts stored in the barrage classification knowledge base. Constructed and trained in this way, the bullet screen classification model can classify bullet screen information based on its content as well as based on the account that sent it. After the server inputs barrage information into this hybrid classification model, the model can directly output whether the barrage information is a normal barrage or a garbage barrage, where the normal barrage corresponds to the first barrage type and the garbage barrage to the second barrage type. To test the model's classification effect, after training the server can test the hybrid classification model on a test set comprising test barrages and their corresponding barrage types. During training, the server can evaluate the hybrid classification model using accuracy, recall and F-measure; in other possible implementations, the server can also use other indicators, which is not limited in the embodiments of the application.
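For the evaluation step named at the end of this paragraph, a small sketch follows; the test labels are invented (1 marks a garbage barrage, 0 a normal one), and precision is computed as well since the F-measure is derived from it.

```python
def evaluate(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"accuracy": (tp + tn) / len(y_true), "precision": precision,
            "recall": recall, "f_measure": f_measure}

print(evaluate(y_true=[1, 0, 1, 1, 0], y_pred=[1, 0, 0, 1, 1]))
# accuracy 0.6, precision 0.667, recall 0.667, f_measure 0.667 (approx.)
```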
403. Responding to the first bullet screen information as a first bullet screen type, the server obtains a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, and the first bullet screen label is used for representing the semantics of the first bullet screen information.
In a possible implementation manner, the server inputs the first bullet screen information into the semantic recognition model, and performs semantic recognition on the first bullet screen information through the semantic recognition model to obtain a first semantic feature of the first bullet screen information. The server obtains a first bullet screen label corresponding to the first bullet screen information based on the first semantic features.
In this implementation, the server can tag the first bullet screen information based on its semantics. In other words, since one bullet screen label may correspond to multiple pieces of bullet screen information, tagging the first bullet screen information is also a process of classifying its semantics, and the first bullet screen label thus represents the semantics of the first bullet screen information. The server can subsequently perform branch video matching quickly based on the first bullet screen label, so the matching efficiency is high.
For example, a technician can set a plurality of tags in advance on a server, the server extracts semantic features of the plurality of tags through a semantic recognition model, and binds and stores the plurality of tags and the corresponding semantic features, and in some embodiments, the server binds and stores the plurality of tags and the corresponding semantic features in a tag library. The server inputs the first bullet screen information into the semantic recognition model, the first bullet screen information is mapped into word vectors through the semantic recognition model, and attention coding and attention decoding are carried out on the word vectors to obtain first semantic features of the first bullet screen information. The server respectively obtains the similarity between the first semantic features and the semantic features of the labels. And the server determines the label with the similarity meeting the first similarity condition as a first bullet screen label.
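A sketch of this label lookup follows, assuming a pre-built label library of label/semantic-feature pairs with invented feature values; the semantic recognition model itself is elided, and the first similarity condition is modeled as a fixed threshold.

```python
import numpy as np

# Hypothetical label library: labels bound to stored semantic features.
label_library = {
    "brand a": np.array([0.9, 0.1, 0.3]),
    "weather": np.array([0.1, 0.8, 0.2]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def first_bullet_screen_label(first_semantic_feature: np.ndarray,
                              threshold: float = 0.8):
    # Compare the barrage's semantic feature with every stored label feature
    # and keep the best match that satisfies the first similarity condition.
    label, sim = max(((lbl, cosine(first_semantic_feature, feat))
                      for lbl, feat in label_library.items()),
                     key=lambda pair: pair[1])
    return label if sim >= threshold else None

# An invented feature the semantic recognition model might output for
# "i want to see the evaluation of brand a":
print(first_bullet_screen_label(np.array([0.85, 0.15, 0.25])))  # brand a
```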
In one possible implementation, if the server has segmented the first bullet screen information through the word segmentation submodel, the server can determine the frequency with which each first vocabulary of the first bullet screen information occurs in a corpus. The corpus stores language material that actually occurs in real language use; in this embodiment, the corpus includes all the bullet screen information sent by users. The server can determine a first vocabulary whose occurrence frequency is lower than a frequency threshold as a first bullet screen label of the first bullet screen information. The frequency threshold is set by a technician according to the actual situation, or dynamically adjusted by the server, which is not limited in the embodiments of the application.
In this embodiment, when a vocabulary occurs with low frequency in the corpus, it is unlikely to be a common function word or verb and is more likely to be a word the user chose to express a personalized viewing experience. The server can take such a vocabulary directly from the first bullet screen information as the first bullet screen label, which improves the efficiency of determining the first bullet screen label.
For example, the server can store all the bullet screen information sent by users in a bullet screen information database, which can also serve as the corpus. After segmenting the first bullet screen information into a plurality of first vocabularies, the server can obtain the occurrence frequency of each in the corpus. For example, if one first vocabulary of the first bullet screen information is "good looking" and the server determines that its occurrence frequency in the corpus is 0.3 against a frequency threshold of 0.1, the server does not take "good looking" as a first bullet screen label. If another first vocabulary is "brand a" and its occurrence frequency in the corpus is 0.03, the server determines "brand a" as a first bullet screen label of the first bullet screen information.
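The rare-word heuristic can be sketched as below; the 0.3 and 0.03 frequencies and the 0.1 threshold follow the example, while the lookup table is an assumed stand-in for the corpus statistics.

```python
# Relative occurrence frequencies of vocabularies in the barrage corpus.
corpus_frequency = {"i": 0.4, "good looking": 0.3, "brand a": 0.03}
FREQUENCY_THRESHOLD = 0.1

def bullet_screen_labels(first_words: list[str]) -> list[str]:
    # A segmented word becomes a label when its corpus frequency is below
    # the threshold, i.e. when it is rare enough to be distinctive.
    return [w for w in first_words
            if corpus_frequency.get(w, 0.0) < FREQUENCY_THRESHOLD]

print(bullet_screen_labels(["i", "good looking", "brand a"]))  # ['brand a']
```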
404. And the server acquires a second branch video corresponding to the first barrage label in the target video.
In a possible implementation manner, the server compares the first barrage tag with a plurality of branch video tags of the target video respectively to obtain similarities between the first barrage tag and the plurality of branch video tags, and the branch video tags are used for representing video content of corresponding branch videos. And responding to the condition that the similarity between the first barrage label and any branch video label meets the target similarity, and acquiring a second branch video corresponding to any branch video label. The similarity meeting the target similarity condition means that the similarity is greater than or equal to a second similarity threshold.
The above embodiments are explained below by two examples.
Example 1: if the server uses the same tag library when acquiring the first barrage label and the branch video labels, that is, the server acquires both the first barrage label of the first barrage information and the branch video labels of the branch videos from the same tag library, the server can set the second similarity threshold to 1. In other words, when the first barrage label is identical to a branch video label, the similarity between them meets the target similarity condition. The server can compare the first barrage label with the plurality of branch video labels respectively to obtain the similarities between them. Alternatively, the server can mark a branch video label identical to the first barrage label as 1 and mark a different branch video label as 0. When it is determined that the first barrage label is identical to any branch video label, that is, that branch video label is marked 1, the server can acquire the second branch video corresponding to that branch video label. For example, the server adds the first barrage label "brand a" to the first barrage information "i want to see the evaluation of brand a", and the branch video label of one branch video of the target video is also "brand a"; the server can therefore determine that the first barrage label "brand a" matches the branch video label "brand a", and determine the branch video corresponding to the branch video label "brand a" as the second branch video.
In this embodiment, the server can acquire the branch video whose branch video label is identical to the first barrage label as the second branch video, so the second branch video is acquired efficiently.
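A minimal sketch of the exact-match case in Example 1, assuming both labels come from the same tag library; the mapping structure is hypothetical:

```python
def match_second_branch(first_barrage_label, branch_videos):
    # branch_videos: hypothetical mapping of branch video id -> branch video
    # label. With a shared tag library the threshold is 1, so matching
    # reduces to marking identical labels 1 and different labels 0.
    for video_id, branch_label in branch_videos.items():
        mark = 1 if branch_label == first_barrage_label else 0
        if mark == 1:
            return video_id  # this branch video is pushed as the second one
    return None
```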
A method for determining a branch video label is described below based on Example 1.
In a possible embodiment, the server can automatically generate a tag library, that is, the server collects a plurality of tags from the network, and stores the plurality of tags as the tag library, where the tag library can be used not only in the process of determining the branch video tag of the branch video, but also in the process of determining the first bullet screen tag of the first bullet screen information. The server can perform image recognition on cover images of a plurality of branch videos of the target video to obtain branch video tags corresponding to the plurality of branch videos, wherein the cover images of the branch videos are set by a video producer of the target video when the target video is uploaded, or are randomly acquired from the branch videos by the server, which is not limited in the embodiment of the application.
For example, the server inputs the cover image of the branch video into the video tag determination model, and performs convolution processing, full-connection processing and normalization processing on the cover image through the video tag determination model to obtain the probabilities that the cover image corresponds to different tags. The server determines the tag with the highest probability as the branch video label of the branch video through the video tag determination model; the video tag determination model is obtained by training on sample images and the tags corresponding to the sample images, and has the capability of determining the corresponding tag based on an image. For example, the server inputs the pixel value matrix of the cover image into the video tag determination model (the specific pixel value matrix, convolution kernel and feature matrix are shown in the accompanying figures). The convolution kernel of the video tag determination model performs convolution on the pixel value matrix, that is, the convolution kernel slides over the pixel value matrix to obtain a feature matrix of the cover image. The server multiplies the feature matrix by the weight matrix (1.1, 1.3)^T of the full connection layer of the video tag determination model, adds the bias matrix (134.6, -145.1)^T, and obtains the vector (800, 500). The server normalizes the vector (800, 500) through the video tag determination model to obtain the vector (0.62, 0.38), where the number 0.62 indicates that the probability that the cover image corresponds to tag A is 62%, and the number 0.38 indicates that the probability that the cover image corresponds to tag B is 38%. The server can therefore determine the branch video label of the branch video as tag A.
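The following sketch illustrates the convolution and normalization steps described above. The pixel matrix and convolution kernel are hypothetical stand-ins for the matrices shown in the accompanying figures; only the full-connection output (800, 500) and the sum-based normalization (800/1300 ≈ 0.62, 500/1300 ≈ 0.38) follow the example's numbers:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # Slide the kernel over the pixel value matrix (stride 1, no padding).
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)  # hypothetical pixel matrix
kernel = np.array([[1.0, 0.0],
                   [0.0, 1.0]])                   # hypothetical kernel
features = conv2d(image, kernel).ravel()          # flattened feature matrix

# Full connection then normalization. The example in the text obtains the
# vector (800, 500) after the full connection layer; normalizing by the sum
# gives (0.62, 0.38), i.e. 62% for tag A and 38% for tag B.
fc_output = np.array([800.0, 500.0])  # stand-in for features @ W + bias
probabilities = fc_output / fc_output.sum()
print(probabilities.round(2))         # [0.62 0.38] -> branch video tag is A
```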
It should be noted that, after the server adds corresponding branch video tags to the multiple branch videos of the target video, the server can subsequently perform video recommendation based on the branch video tags, so as to improve the video recommendation efficiency.
In a possible implementation manner, after a video producer uploads the target video to the server, in order to ensure that the content of the target video complies with laws, regulations, and public order and good morals, an auditor is often required to manually review the target video. During the manual review of the target video, the auditor can set, from the tag library, a branch video label for any branch video of the target video.
In this embodiment, the label of the branch video is manually added by the auditor in the auditing process, and the auditor needs to completely watch the target video during auditing, so that the manually added branch video label can accurately represent the content of the corresponding branch video, and the matching accuracy based on the branch video label is higher.
Example 2: if the server does not use the same tag library when acquiring the first barrage label and the branch video labels, the server may respectively acquire the semantic similarities between the first barrage label and the plurality of branch video labels, and acquire the second branch video corresponding to the branch video label whose semantic similarity meets the third similarity condition. For example, the server adds the first barrage label "brand a" to the first barrage information "i want to see the evaluation of brand a", and the branch video label of one branch video of the target video is "brand a new product". The server can map the first bullet screen label "brand a" and the branch video label "brand a new product" into word vectors; for example, the word vector of the first bullet screen label "brand a" is (1, 0, 1, 1, 1, 1, 1) and the word vector of the branch video label "brand a new product" is (1, 1, 0, 1, 0, 1, 1), and the server obtains a cosine similarity of 0.73 between the word vector (1, 0, 1, 1, 1, 1, 1) and the word vector (1, 1, 0, 1, 0, 1, 1). If the second similarity threshold is 0.7, the server can determine that the similarity between the first barrage label "brand a" and the branch video label "brand a new product" meets the target similarity condition, and determine the branch video corresponding to the branch video label "brand a new product" as the second branch video.
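The cosine similarity in Example 2 can be reproduced directly; a minimal sketch using the word vectors given above:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

tag_vec = [1, 0, 1, 1, 1, 1, 1]     # word vector of "brand a"
branch_vec = [1, 1, 0, 1, 0, 1, 1]  # word vector of "brand a new product"

sim = cosine_similarity(tag_vec, branch_vec)
print(round(sim, 2))  # 0.73 -> meets the 0.7 threshold, so the branch matches
```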
405. The server pushes the second branch video to the first account.
In a possible implementation manner, if the user watches the target video through a video playing client and the first account is logged in on the video playing client, the server can push the second branch video to the first account through the video playing client, and the user can watch the second branch video of the target video through the video playing client.
In a possible implementation manner, if the user views the target video through a webpage, and a first account of the user is logged on the webpage, the server can push the second branch video to the first account through the webpage, and the user can view the second branch video of the target video through the webpage.
In one possible implementation, if the user views a first branch video of the target video through the applet, the server can push a second branch video to the first account through the applet, and the user can view the second branch video of the target video through the applet.
406. The server pushes a plurality of pieces of second bullet screen information corresponding to the second branch video to the first account.
For the target video, the content in different branch videos is different, the content of the barrage information sent by the user when watching different branch videos may also be different, and when the user watches different branch videos of the target video, the server can send the barrage information corresponding to the different branch videos to the first account. That is, the user can see different bullet screen information while watching different branch videos.
It should be noted that step 406 is optional: the server may perform step 406 after performing step 405, or may perform step 406 while performing step 405, that is, push the second barrage information corresponding to the second branch video to the first account while pushing the second branch video, which is not limited in this embodiment of the application.
In a possible embodiment, before the server performs step 406, the server can further store the first barrage information as second barrage information corresponding to the second branch video.
In this embodiment, since the first barrage information is stored as second barrage information corresponding to the second branch video, the server can also push the first barrage information to the first account, and the user of the first account can see, while watching the second branch video, the first barrage information that the first account sent. Other accounts watching the second branch video can likewise see the first barrage information sent by the first account.
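A minimal sketch of how the server might file the first barrage information under the second branch video so it can be pushed later; the store structure and function names are hypothetical:

```python
from collections import defaultdict
from typing import DefaultDict, List

# Hypothetical store keyed by branch video id: pushing a user to a second
# branch video also files their first barrage under that branch video.
barrage_store: DefaultDict[str, List[str]] = defaultdict(list)

def store_as_second_barrage(branch_video_id: str, first_barrage: str) -> None:
    barrage_store[branch_video_id].append(first_barrage)

def second_barrages_for(branch_video_id: str) -> List[str]:
    # Pushed to the first account along with the second branch video.
    return barrage_store[branch_video_id]
```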
According to the technical scheme provided by the embodiment of the application, the server can acquire the bullet screen information sent by the user while watching the target video, and obtain the corresponding bullet screen label according to the semantics of the bullet screen information. Because the bullet screen information carries the user's impressions of the target video, the bullet screen label obtained based on its semantics can reflect the user's preference to a certain extent. The server recommends the second branch video obtained based on the bullet screen label to the user, that is, a branch video the user may be interested in; pushing videos to the user in this way yields high human-computer interaction efficiency.
Steps 401 to 406 above describe the technical solution provided by the embodiment of the present application with the server as the execution subject; the following describes the technical solution with the terminal as the execution subject. Referring to fig. 6, the method includes:
601. During the playing of the first branch video of the target video, the terminal sends first barrage information in response to a barrage information sending operation by the user logged in with the first account.
In one possible implementation, a user logs in to a video playing client using a first account and watches the first branch video of the target video through the video playing client. A bullet screen information acquisition area and a bullet screen sending control are displayed on the video playing client; the bullet screen information acquisition area is used to acquire bullet screen information input by the user, and the bullet screen sending control is used to trigger a bullet screen sending instruction. During the playing of the first branch video of the target video, in response to a click operation on the bullet screen sending control, the terminal triggers the bullet screen sending instruction, and in response to the bullet screen sending instruction, the terminal sends the first bullet screen information in the bullet screen information acquisition area to the server.
For example, referring to fig. 7, a playing interface 701 is displayed on the video playing client, and a target video selection area 702, a video playing area 703, a bullet screen information acquisition area 704, and a bullet screen sending control 705 are displayed on the playing interface 701. The user can select a target video desired to be viewed through the target video selection area 702, and after the user selects the target video, the terminal can play the target video on the video play area 703. In the process of playing the target video, the user can input the first bullet screen information to be sent in the bullet screen information acquisition area 704, and click the bullet screen sending control 705 to control the terminal to send the first bullet screen information.
In one possible implementation, the user watches the first branch video of the target video through a webpage on which the first account is logged in. A bullet screen information acquisition area and a bullet screen sending control are displayed on the webpage; the bullet screen information acquisition area is used to acquire bullet screen information input by the user, and the bullet screen sending control is used to trigger a bullet screen sending instruction. During the playing of the first branch video of the target video, in response to a click operation on the bullet screen sending control, the terminal triggers the bullet screen sending instruction, and in response to the bullet screen sending instruction, the terminal sends the first bullet screen information in the bullet screen information acquisition area to the server.
For example, referring to fig. 8, a video selection web page 801 is displayed on the web page, and a cover page 802 of a plurality of videos is displayed on the video selection web page 801. In response to detecting a click operation on the cover page of the target video, the terminal jumps the video selection webpage 801 to the target video playing webpage 803 based on the hyperlink associated with the cover page of the target video. A video playing area 804, a bullet screen information acquiring area 805, and a bullet screen sending control 806 are displayed on the target video playing web page. The terminal can play the target video on the video play area 804. In the process of playing the target video, the user can input the first bullet screen information to be sent in the bullet screen information acquisition area 805, and click the bullet screen sending control 806 to control the terminal to send the first bullet screen information.
In one possible implementation, the user watches the first branch video of the target video through an applet on which the first account is logged in. A bullet screen information acquisition area and a bullet screen sending control are displayed on the applet; the bullet screen information acquisition area is used to acquire bullet screen information input by the user, and the bullet screen sending control is used to trigger a bullet screen sending instruction. During the playing of the first branch video of the target video, in response to a click operation on the bullet screen sending control, the terminal triggers the bullet screen sending instruction, and in response to the bullet screen sending instruction, the terminal sends the first bullet screen information in the bullet screen information acquisition area to the server.
For example, referring to fig. 9, a play interface 901 is displayed on the applet, and a target video selection area 902, a video play area 903, a bullet screen information acquisition area 904, and a bullet screen sending control 905 are displayed on the play interface 901. The user can select a target video desired to be viewed through the target video selection area 902, and after the user selects the target video, the terminal can play the target video on the video play area 903. In the process of playing the target video, the user can input the first bullet screen information to be sent in the bullet screen information acquisition area 904, and click the bullet screen sending control 905 to control the terminal to send the first bullet screen information.
602. The terminal receives a second branch video of the target video, where the second branch video corresponds to a first bullet screen label of the first bullet screen information, and the first bullet screen label is used to express the semantics of the first bullet screen information.
Optionally, under the condition that the second branch video of the target video does not exist, the terminal continues to play the first branch video of the target video, and certainly, in the process of playing the first branch video, the first account can also continue to send the barrage information, which is not limited in this embodiment of the application.
603. The terminal plays the received second branch video.
In one possible implementation, the terminal switches the first branch video to the second branch video for playing.
In this implementation, upon receiving the second branch video the terminal can directly switch the first branch video to the second branch video for playing, so the user can watch the video of interest more quickly, and the human-computer interaction efficiency is higher.
In one possible implementation, the terminal determines the playing status of the first branch video. In response to the first branch video finishing playing, the terminal plays the received second branch video.
In this implementation, the terminal plays the second branch video only after the first branch video has finished playing, which preserves the integrity of the video the user is watching and improves the viewing experience.
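A minimal sketch of the two playback strategies described above, switching immediately versus queuing the second branch video until the first finishes; the player interface is hypothetical:

```python
class BranchPlayer:
    def __init__(self, switch_immediately: bool):
        self.switch_immediately = switch_immediately
        self.current = None
        self.queued = None

    def play(self, video: str) -> None:
        self.current = video

    def on_push(self, second_branch: str) -> None:
        if self.switch_immediately:
            self.play(second_branch)     # faster access to interesting content
        else:
            self.queued = second_branch  # preserve the first branch's integrity

    def on_playback_complete(self) -> None:
        # Called when the current branch video finishes playing.
        if self.queued is not None:
            self.play(self.queued)
            self.queued = None
```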
In addition, optionally, during the playing of the second branch video, the terminal can display a plurality of second barrage information corresponding to the second branch video on the playing screen of the second branch video.
In this implementation, when watching different branch videos the user can see the bullet screen information corresponding to each branch video, which improves the viewing experience of watching branch videos.
Optionally, after step 603, in response to the second branch video finishing playing, the terminal can also play a third branch video, where the third branch video is the branch video of the target video with the highest similarity to the second branch video.
In this embodiment, the terminal can directly play the third branch video of the target video when the second branch video finishes playing; since the third branch video is the branch video most similar to the second branch video, the user is likely to be interested in it given their interest in the second branch video. Playing branch videos in this way better matches the user's preference and improves the viewing experience.
Through the technical scheme provided by the embodiment of the application, the user can control the terminal to play different branch videos by sending bullet screen information while watching the target video. Because the bullet screen information carries the user's impressions of the target video, the bullet screen label used to express the semantics of the bullet screen information can reflect the user's preference to a certain extent. The second branch video corresponding to the bullet screen label played by the terminal is thus a branch video the user may be interested in. Playing branch videos in this way improves the efficiency with which the user finds interesting branch videos within the target video, thereby improving human-computer interaction efficiency.
In order to describe the technical solution provided by the embodiment of the present application more clearly, the following description is given with reference to steps 401 to 406 above.
Referring to fig. 10, the server is a server cluster formed by three physical servers: a video background server, a video branch server, and a bullet screen server, which can communicate with one another. The video background server is used to establish communication connections with the terminal; if a video playing client runs on the terminal, the video background server can provide background services for the video playing client. The video branch server is used to provide services related to branch videos, and the bullet screen server is used to provide services related to bullet screen information.
In one possible implementation, a video playing client runs on the terminal, and the user can select a target video to watch through the video playing client. In response to detecting a trigger operation on the target video, the terminal sends a video acquisition request to the video background server, the request carrying the identifier of the target video. After receiving the video acquisition request, the video background server acquires the identifier of the target video from it. If the target video has a first branch video serving as the main line video, the video background server can send a main line video acquisition request to the video branch server, the request carrying the identifier of the main line video. After receiving the main line video acquisition request, the video branch server acquires the identifier of the main line video from it, looks up the main line video corresponding to that identifier, and sends the main line video to the video background server. The video background server forwards the main line video to the terminal, and the terminal plays the main line video through the video playing client.

During the playing of the main line video, the user can edit first bullet screen information through the video playing client, and the terminal sends it to the bullet screen server. After receiving the first bullet screen information, the bullet screen server classifies it. When the first bullet screen information is determined to be of the second bullet screen type, the bullet screen server sends a bullet screen information update prompt to the terminal; the terminal displays the prompt through the video playing client, and the user decides, according to the prompt, whether to re-edit the first bullet screen information or abandon sending it. When the first bullet screen information is determined to be of the first bullet screen type, the bullet screen server sends a bullet screen display instruction to the terminal, and the terminal, in response to receiving the instruction, displays the first bullet screen information on the playing screen of the first branch video. In addition, the bullet screen server can determine the first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, and send the first bullet screen label to the video branch server. After receiving the first bullet screen label, the video branch server can compare it with the plurality of branch video labels of the target video.
When it is determined that the first bullet screen label matches any branch video label of the target video, the video branch server can send the second branch video corresponding to that branch video label to the video background server, and the video background server pushes the second branch video to the terminal; after receiving it, the terminal presents the second branch video to the user through the video playing client. In addition, the video branch server can send the identifier of the second branch video and the first bullet screen information to the bullet screen server, and the bullet screen server binds and stores them. When it is determined that the first bullet screen label does not match any branch video label of the target video, the video branch server no longer executes the subsequent flow, and the terminal continues to play the first branch video through the video playing client.

When the terminal plays the second branch video through the video playing client, the terminal can send a bullet screen information acquisition request to the bullet screen server, the request carrying the identifier of the second branch video. After receiving the request, the bullet screen server acquires the identifier of the second branch video from it, looks up the corresponding second bullet screen information based on that identifier, and sends the second bullet screen information to the terminal. After receiving the second bullet screen information, the terminal can display it to the user through the video playing client.
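A condensed sketch of the bullet screen server's part of this flow, with the classifier, the label determination step, the video branch server, and the client connection abstracted as hypothetical callables:

```python
def handle_first_barrage(barrage, classify, label_of, branch_server, terminal):
    # classify, label_of, branch_server and terminal are hypothetical
    # stand-ins for the bullet screen classification model, the label
    # determination step, the video branch server and the client connection.
    if classify(barrage) != "first type":
        # Second bullet screen type: ask the user to re-edit or abandon it.
        terminal.send("bullet-screen-update-prompt")
        return
    terminal.send("bullet-screen-display-instruction")  # show on play screen
    first_label = label_of(barrage)
    # The video branch server compares the label with the branch video labels
    # and, on a match, pushes the second branch video via the background server.
    branch_server.match_and_push(first_label, barrage)
```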
Fig. 11 is a schematic structural diagram of a video pushing apparatus provided in an embodiment of the present application, and referring to fig. 11, the apparatus includes: the system comprises a bullet screen information acquisition module 1101, a bullet screen label acquisition module 1102 and a pushing module 1103.
The barrage information acquiring module 1101 is configured to acquire first barrage information, where the first barrage information is a barrage sent when a user logging in a first account watches a first branch video of a target video.
The bullet screen label obtaining module 1102 is configured to obtain a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, where the first bullet screen label is used to represent the semantics of the first bullet screen information.
The pushing module 1103 is configured to obtain a second branch video corresponding to the first barrage tag in the target video, and push the second branch video to the first account.
In a possible implementation manner, the bullet screen label obtaining module is configured to input the first bullet screen information into the semantic recognition model, and perform semantic recognition on the first bullet screen information through the semantic recognition model to obtain a first semantic feature of the first bullet screen information. And acquiring a first bullet screen label corresponding to the first bullet screen information based on the first semantic features.
In a possible implementation manner, the bullet screen label obtaining module is configured to obtain similarities between the first semantic feature and semantic features of the multiple labels, respectively. And determining the label with the similarity meeting the first similarity condition as a first bullet screen label.
In a possible implementation manner, the pushing module is configured to compare the first barrage label with a plurality of branch video labels of the target video, respectively, to obtain a similarity between the first barrage label and the plurality of branch video labels, where the branch video labels are used to represent video content of corresponding branch videos. And responding to the condition that the similarity between the first barrage label and any branch video label meets the target similarity, and acquiring a second branch video corresponding to any branch video label.
In one possible embodiment, the apparatus further comprises:
the bullet screen information classification module is configured to input the first bullet screen information into the bullet screen classification model and classify the first bullet screen information through the bullet screen classification model, and, in response to the first bullet screen information being of the first bullet screen type, execute the step of acquiring the first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, where the first bullet screen type is a bullet screen type that is allowed to be published.
In a possible implementation manner, the bullet screen information classification module is configured to perform word segmentation on the first bullet screen information to obtain a plurality of first words, extract first text features of the plurality of first words, and, in response to the similarity between the first text features and the text feature of any target vocabulary not meeting the second similarity condition, determine the first bullet screen information as the first bullet screen type, where the target vocabulary is vocabulary that is not allowed to be published.
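A minimal sketch of this forbidden-vocabulary check, reusing hypothetical `embed` and `cosine` helpers like those in the earlier sketch; the 0.9 threshold is an assumption:

```python
def is_first_bullet_screen_type(first_words, target_words, embed, cosine,
                                second_threshold: float = 0.9) -> bool:
    # Allowed ("first bullet screen type") only if no segmented word is too
    # similar to any target word that is not allowed to be published.
    for word in first_words:
        for target in target_words:
            if cosine(embed(word), embed(target)) >= second_threshold:
                return False
    return True
```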
In a possible implementation manner, the bullet screen information classification module is further configured to input the first account into the bullet screen classification model, compare the first account with a plurality of second accounts respectively, and, in response to the first account being different from every second account, determine the first bullet screen information as the first bullet screen type, where a second account is an account that is not allowed to publish bullet screen information.
In a possible implementation manner, the pushing module is further configured to push a plurality of second barrage information corresponding to the second branch video to the first account.
In one possible embodiment, the apparatus further comprises:
the storage module is configured to store the first barrage information as second barrage information corresponding to the second branch video.
It should be noted that: in the video push apparatus provided in the foregoing embodiment, when pushing a video, only the division of the above function modules is used for illustration, and in practical applications, the above functions may be distributed by different function modules according to needs, that is, the internal structure of the computer device is divided into different function modules to complete all or part of the above described functions. In addition, the video push apparatus and the video push method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
According to the technical scheme provided by the embodiment of the application, the server can acquire the bullet screen information sent by the user while watching the target video, and obtain the corresponding bullet screen label according to the semantics of the bullet screen information. Because the bullet screen information carries the user's impressions of the target video, the bullet screen label obtained based on its semantics can reflect the user's preference to a certain extent. The server recommends the second branch video obtained based on the bullet screen label to the user, that is, a branch video the user may be interested in; pushing videos to the user in this way yields high human-computer interaction efficiency.
Fig. 12 is a schematic structural diagram of a video playback device according to an embodiment of the present application, and referring to fig. 12, the device includes: a sending module 1201, a receiving module 1202 and a playing module 1203.
A sending module 1201, configured to send first barrage information in response to a barrage information sending operation of a user logging in a first account during a playing process of a first branch video of a target video;
a receiving module 1202, configured to receive a second branch video of the target video, where the second branch video corresponds to a first barrage tag of the first barrage information, and the first barrage tag is used to represent semantics of the first barrage information;
a playing module 1203, configured to play based on the received second branch video.
In one possible embodiment, the apparatus further comprises:
and the display module is used for displaying a plurality of second barrage information corresponding to the second branch video on the playing picture of the second branch video.
In a possible implementation manner, the playing module is further configured to determine a playing status of the first branch video; and responding to the completion of the playing of the first branch video, executing the step of playing based on the received second branch video.
In a possible implementation manner, the playing module is configured to switch the first branch video to the second branch video for playing.
In a possible implementation manner, the playing module is further configured to play a third branch video in response to the second branch video finishing playing, where the third branch video is the branch video of the target video with the highest similarity to the second branch video.
It should be noted that: in the video playing apparatus provided in the above embodiment, when playing a video, only the division of the above functional modules is used for illustration, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the above described functions. In addition, the video playing apparatus and the video playing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Through the technical scheme provided by the embodiment of the application, the user can control the terminal to play different branch videos by sending bullet screen information while watching the target video. Because the bullet screen information carries the user's impressions of the target video, the bullet screen label used to express the semantics of the bullet screen information can reflect the user's preference to a certain extent. The second branch video corresponding to the bullet screen label played by the terminal is thus a branch video the user may be interested in. Playing branch videos in this way improves the efficiency with which the user finds interesting branch videos within the target video, thereby improving human-computer interaction efficiency.
An embodiment of the present application provides a computer device, configured to perform the foregoing method, where the computer device may be implemented as a terminal or a server, and a structure of the terminal is introduced below:
fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1300 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart watch, a smart television, a smart car device, and the like.
In general, terminal 1300 includes: one or more processors 1301 and one or more memories 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the video push method provided by method embodiments herein.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. The processor 1301, memory 1302 and peripheral interface 1303 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the capability to collect touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication.
The positioning component 1308 is used for positioning the current geographic position of the terminal 1300 for implementing navigation or LBS (Location Based Service).
Power supply 1309 is used to provide power to various components in terminal 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300.
Pressure sensor 1313 may be disposed on a side bezel of terminal 1300 and/or underlying display 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, a user's holding signal to the terminal 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 1305.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the display screen 1305 according to the ambient light intensity collected by the optical sensor 1315. The proximity sensor 1316 is used to gather the distance between the user and the front face of the terminal 1300.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The computer device may also be implemented as a server, and the following describes a structure of the server:
fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application, where the server 1400 may generate a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 1401 and one or more memories 1402, where the one or more memories 1402 store at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 1401 to implement the methods provided by the foregoing method embodiments. Certainly, the server 1400 may further have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input and output, and the server 1400 may further include other components for implementing the functions of the device, which is not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor to perform the video push method in the above embodiments is also provided. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which includes one or more instructions that can be executed by a processor of an electronic device to perform the video push method provided by the above embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended only to illustrate the alternative embodiments of the present application, and should not be construed as limiting the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the scope of the present application.

Claims (15)

1. A video push method, the method comprising:
acquiring first barrage information, wherein the first barrage information is transmitted when a user logging in a first account watches a first branch video of a target video;
acquiring a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, wherein the first bullet screen label is used for representing the semantics of the first bullet screen information;
and acquiring a second branch video corresponding to the first bullet screen label in the target video, and pushing the second branch video to the first account.
2. The method of claim 1, wherein the obtaining a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information comprises:
inputting the first bullet screen information into a semantic recognition model, and performing semantic recognition on the first bullet screen information through the semantic recognition model to obtain a first semantic feature of the first bullet screen information;
and acquiring the first bullet screen label corresponding to the first bullet screen information based on the first semantic features.
3. The method of claim 2, wherein the obtaining the first barrage label corresponding to the first barrage information based on the first semantic feature comprises:
respectively acquiring the similarity between the first semantic feature and the semantic features of the labels;
and determining the label with the similarity meeting the first similarity condition as the first bullet screen label.
4. The method of claim 1, wherein the obtaining of the second branch video corresponding to the first barrage label in the target video comprises:
comparing the first barrage label with a plurality of branch video labels of the target video respectively to obtain the similarity between the first barrage label and the plurality of branch video labels, wherein the branch video labels are used for representing the video content of corresponding branch videos;
and responding to the fact that the similarity between the first barrage label and any branch video label meets a target similarity condition, and obtaining the second branch video corresponding to any branch video label.
5. The method according to claim 1, wherein before the obtaining of the first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, the method further comprises:
inputting the first bullet screen information into a bullet screen classification model, and classifying the first bullet screen information through the bullet screen classification model;
and responding to the first bullet screen information as a first bullet screen type, executing the step of acquiring a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, wherein the first bullet screen type is a bullet screen type allowed to be issued.
6. The method of claim 5, wherein the classifying the first barrage information comprises:
performing word segmentation on the first bullet screen information to obtain a plurality of first words;
extracting first text features of the plurality of first words;
and in response to the fact that the similarity between the first text feature and the text feature of any target vocabulary does not accord with a second similarity condition, determining the first bullet screen information as the first bullet screen type, wherein the target vocabulary is a vocabulary which is not allowed to be issued.
7. The method of claim 5, wherein prior to the classifying the first barrage information via the barrage classification model, the method further comprises:
inputting the first account number into the bullet screen classification model;
the classifying the first bullet screen information comprises:
comparing the first account number with a plurality of second account numbers respectively;
and responding to the fact that the first account number is different from any second account number, determining the first bullet screen information as the first bullet screen type, wherein the second account number is an account number which is not allowed to release bullet screen information.
8. The method of claim 1, wherein after the pushing the second branch video to the first account, the method further comprises:
and pushing a plurality of second barrage information corresponding to the second branch videos to the first account.
9. A video playback method, the method comprising:
responding to the barrage information sending operation of a user logging in a first account in the playing process of a first branch video of a target video, and sending first barrage information;
receiving a second branch video of the target video, wherein the second branch video corresponds to a first barrage tag of the first barrage information, and the first barrage tag is used for representing the semantic meaning of the first barrage information;
and playing based on the received second branch video.
10. The method of claim 9, further comprising:
and displaying a plurality of second barrage information corresponding to the second branch video on a playing picture of the second branch video.
11. The method of claim 9, wherein before playing based on the received second branch video, the method further comprises:
determining the playing state of the first branch video;
and responding to the completion of the playing of the first branch video, executing the step of playing based on the received second branch video.
12. A video push apparatus, characterized in that the apparatus comprises:
the system comprises a bullet screen information acquisition module, a bullet screen information acquisition module and a bullet screen information processing module, wherein the bullet screen information acquisition module is used for acquiring first bullet screen information which is sent by a first account when a first branch video of a target video is watched;
the bullet screen label acquiring module is used for acquiring a first bullet screen label corresponding to the first bullet screen information based on the first bullet screen information, wherein the first bullet screen label is used for representing the semantics of the first bullet screen information;
and the pushing module is used for acquiring a second branch video corresponding to the first barrage label in the target video and pushing the second branch video to the first account.
13. A video playback apparatus, comprising:
the system comprises a sending module, a display module and a display module, wherein the sending module is used for responding to the barrage information sending operation of a first account number in the playing process of a first branch video of a target video and sending first barrage information;
a receiving module, configured to receive a second branch video of the target video, where the second branch video corresponds to a first barrage tag of the first barrage information, and the first barrage tag is used to represent a semantic meaning of the first barrage information;
and the playing module is used for playing based on the received second branch video.
14. A computer device comprising one or more processors and one or more memories having stored therein at least one instruction, the instruction being loaded and executed by the one or more processors to implement the operations performed by the video push method of any of claims 1 to 8 or to implement the operations performed by the video playback method of any of claims 9 to 11.
15. A computer-readable storage medium, having at least one instruction stored therein, which is loaded and executed by a processor to implement the operations performed by the video push method of any one of claims 1 to 8 or the operations performed by the video play method of any one of claims 9 to 11.
CN202110087582.1A 2021-01-22 2021-01-22 Video pushing method, video playing method, device, equipment and medium Active CN114827702B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110087582.1A CN114827702B (en) 2021-01-22 2021-01-22 Video pushing method, video playing method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN114827702A true CN114827702A (en) 2022-07-29
CN114827702B CN114827702B (en) 2023-06-30

Family

ID=82524835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110087582.1A Active CN114827702B (en) 2021-01-22 2021-01-22 Video pushing method, video playing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114827702B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050132420A1 (en) * 2003-12-11 2005-06-16 Quadrock Communications, Inc System and method for interaction with television content
CA2775700A1 (en) * 2012-05-04 2012-07-09 Microsoft Corporation Determining a future portion of a currently presented media program
CN103731690A (en) * 2013-11-22 2014-04-16 乐视致新电子科技(天津)有限公司 Message display method and message configuration method
CA2952461A1 (en) * 2015-06-26 2016-12-26 Rovi Guides, Inc. Systems and methods for automatic formatting of images for media assets based on user profile
WO2017181600A1 (en) * 2016-04-19 2017-10-26 乐视控股(北京)有限公司 Method and device for controlling overlay comment
US20190166394A1 (en) * 2017-11-30 2019-05-30 Shanghai Bilibili Technology Co., Ltd. Generating and presenting directional bullet screen
US20200145737A1 (en) * 2018-11-02 2020-05-07 International Business Machines Corporation System and method for adaptive video
CN111143610A (en) * 2019-12-30 2020-05-12 腾讯科技(深圳)有限公司 Content recommendation method and device, electronic equipment and storage medium
CN111601175A (en) * 2020-07-08 2020-08-28 腾讯科技(深圳)有限公司 Bullet screen pushing control method, device, equipment and storage medium
CN112131426A (en) * 2020-09-25 2020-12-25 腾讯科技(深圳)有限公司 Game teaching video recommendation method and device, electronic equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050132420A1 (en) * 2003-12-11 2005-06-16 Quadrock Communications, Inc System and method for interaction with television content
CA2775700A1 (en) * 2012-05-04 2012-07-09 Microsoft Corporation Determining a future portion of a currently presented media program
CN103731690A (en) * 2013-11-22 2014-04-16 乐视致新电子科技(天津)有限公司 Message display method and message configuration method
CA2952461A1 (en) * 2015-06-26 2016-12-26 Rovi Guides, Inc. Systems and methods for automatic formatting of images for media assets based on user profile
WO2017181600A1 (en) * 2016-04-19 2017-10-26 LeTV Holding (Beijing) Co., Ltd. Method and device for controlling overlay comment
US20190166394A1 (en) * 2017-11-30 2019-05-30 Shanghai Bilibili Technology Co., Ltd. Generating and presenting directional bullet screen
US20200145737A1 (en) * 2018-11-02 2020-05-07 International Business Machines Corporation System and method for adaptive video
CN111143610A (en) * 2019-12-30 2020-05-12 Tencent Technology (Shenzhen) Co., Ltd. Content recommendation method and device, electronic equipment and storage medium
CN111601175A (en) * 2020-07-08 2020-08-28 Tencent Technology (Shenzhen) Co., Ltd. Bullet screen pushing control method, device, equipment and storage medium
CN112131426A (en) * 2020-09-25 2020-12-25 Tencent Technology (Shenzhen) Co., Ltd. Game teaching video recommendation method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115499704A (en) * 2022-08-22 2022-12-20 Beijing QIYI Century Science and Technology Co., Ltd. Video recommendation method and device, readable storage medium and electronic equipment
CN115499704B (en) * 2022-08-22 2023-12-29 Beijing QIYI Century Science and Technology Co., Ltd. Video recommendation method and device, readable storage medium and electronic equipment

Also Published As

Publication number Publication date
CN114827702B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN111652678B (en) Method, device, terminal, server and readable storage medium for displaying article information
CN108304441B (en) Network resource recommendation method and device, electronic equipment, server and storage medium
CN111476306B (en) Object detection method, device, equipment and storage medium based on artificial intelligence
CN108776676B (en) Information recommendation method and device, computer readable medium and electronic device
CN109189879B (en) Electronic book display method and device
CN110740389B (en) Video positioning method, video positioning device, computer readable medium and electronic equipment
CN111241340B (en) Video tag determining method, device, terminal and storage medium
CN111491187B (en) Video recommendation method, device, equipment and storage medium
CN111311554A (en) Method, device and equipment for determining content quality of image-text content and storage medium
CN111368101B (en) Multimedia resource information display method, device, equipment and storage medium
CN112069414A (en) Recommendation model training method and device, computer equipment and storage medium
CN113515942A (en) Text processing method and device, computer equipment and storage medium
CN111209970A (en) Video classification method and device, storage medium and server
CN113395542A (en) Video generation method and device based on artificial intelligence, computer equipment and medium
CN112163428A (en) Semantic tag acquisition method and device, node equipment and storage medium
CN111858971A (en) Multimedia resource recommendation method, device, terminal and server
CN111897996A (en) Topic label recommendation method, device, equipment and storage medium
CN111491123A (en) Video background processing method and device and electronic equipment
CN113205183A (en) Article recommendation network training method and device, electronic equipment and storage medium
CN113596496A (en) Interaction control method, device, medium and electronic equipment for virtual live broadcast room
CN111339737B (en) Entity linking method, device, equipment and storage medium
CN114495916B (en) Method, device, equipment and storage medium for determining insertion time point of background music
CN111835621A (en) Session message processing method and device, computer equipment and readable storage medium
CN114281936A (en) Classification method and device, computer equipment and storage medium
CN114827702B (en) Video pushing method, video playing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant