CN112579822A - Video data pushing method and device, computer equipment and storage medium - Google Patents

Video data pushing method and device, computer equipment and storage medium

Info

Publication number
CN112579822A
CN112579822A (application CN202011566675.4A)
Authority
CN
China
Prior art keywords
video data
cluster
data
value
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011566675.4A
Other languages
Chinese (zh)
Inventor
杨甲东
孙立波
王友
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bigo Technology Singapore Pte Ltd
Original Assignee
Bigo Technology Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bigo Technology Singapore Pte Ltd filed Critical Bigo Technology Singapore Pte Ltd
Priority to CN202011566675.4A priority Critical patent/CN112579822A/en
Publication of CN112579822A publication Critical patent/CN112579822A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G06F16/74 Browsing; Visualisation therefor
    • G06F16/75 Clustering; Classification

Abstract

Embodiments of the invention provide a method and a device for pushing video data, computer equipment, and a storage medium. The method includes: when a request from a client is received, recalling part of the video data as original video data, the video data being configured with a cluster that comprises a plurality of categories to which the video data belongs; determining feature data of the video data in the cluster; selecting part of the original video data as target video data according to the feature data; and pushing the target video data to the client. Because the cluster is formed from the plurality of categories to which the video data belongs, the dimensionality of the video data can be enriched, improving the similarity of video data within the same cluster. The feature data of the video data in the cluster can stand in for the feature data of the original video data when screening the target video data, so the accuracy of the target video data can be guaranteed. The method is suitable for automatic screening of cold-start video data, avoids dependence on operators manually selecting video data, greatly reduces cost, and improves efficiency.

Description

Video data pushing method and device, computer equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of multimedia, in particular to a method and a device for pushing video data, computer equipment and a storage medium.
Background
With the rapid development of science and technology, new media based on digital technology, such as live streaming and short video, are continuously emerging on video platforms. Supported by network technology, these new media are often innovative and mixed-media in character, so the video data they carry has become another hotspot of the network.
To improve user experience, a video platform generally uses feedback from many users to filter video data that may interest a given user and pushes that video data to the user, so that the user obtains more information and uses the services provided by the platform.
Because video data grows quickly and in large volume, some video data has a small play count and lacks user feedback, so screening it suffers from a cold-start problem. Operators must manually select video data and attach labels, and the video data is then screened by label; given the sheer quantity of video data, relying on operators to select it manually is costly and inefficient.
Disclosure of Invention
The embodiment of the invention provides a method and a device for pushing video data, computer equipment, and a storage medium, aiming to solve the high cost and low efficiency of having operators manually select video data during cold start.
In a first aspect, an embodiment of the present invention provides a method for pushing video data, including:
when a request of a client is received, recalling partial video data as original video data, wherein the video data is configured with a cluster, and the cluster comprises a plurality of categories to which the video data belongs;
determining feature data of the video data in the cluster;
selecting part of the original video data as target video data according to the characteristic data;
and pushing the target video data to the client.
In a second aspect, an embodiment of the present invention further provides a device for pushing video data, including:
an original video data recall module, configured to recall part of the video data as original video data when a request from a client is received, the video data being configured with a cluster that includes a plurality of categories to which the video data belongs;
a feature data determination module for determining feature data of the video data in the cluster;
the target video data screening module is used for selecting part of the original video data as target video data according to the characteristic data;
and the client pushing module is used for pushing the target video data to the client.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for pushing video data according to the first aspect.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the method for pushing video data according to the first aspect.
In this embodiment, when a request from a client is received, part of the video data is recalled as original video data; the video data is configured with a cluster, and the cluster includes multiple categories to which the video data belongs. Feature data of the video data in the cluster is determined, part of the original video data is selected as target video data according to the feature data, and the target video data is pushed to the client. Because the cluster is formed from the multiple categories to which the video data belongs, the dimensionality of the video data can be enriched and the similarity of video data within the same cluster improved. The feature data of the video data in the cluster can stand in for the feature data of the original video data when screening the target video data, so the accuracy of the target video data can be guaranteed. The approach is suitable for automatic screening of cold-start video data, avoids dependence on operators manually selecting video data, greatly reduces cost, and improves efficiency.
Drawings
Fig. 1 is a flowchart of a method for pushing video data according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a pushing framework for video data according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for pushing video data according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of a pushing framework of video data according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video data pushing apparatus according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
A recommendation system is deployed in the video platform; its main aim is to recommend, out of a large amount of video data, each item to the users who may like it.
With the rapid improvement and wide popularization of mobile terminals such as mobile phones, video data from new media such as live streaming and short video keeps growing, and a video platform constantly faces new video data. The cold-start problem of the recommendation system is how to distribute traffic to newly ingested video data and recommend it to the users who will like it.
For new video data, few users browse it at first and there is little feedback behavior, so algorithms that rely on large amounts of user behavior, such as common collaborative filtering and deep learning, cannot train an accurate recommendation model. Getting the recommendation system to run well enough that its recommendations become increasingly accurate is the system cold-start problem.
For new video data, it is not known which users will like it. Users' actual preferences are known from their historical behavior, so if the new video data can be linked by similarity to video data already in the library, it can be recommended, based on that similarity, to users who like similar material.
However, in many cases the information attached to the video data is incomplete or messy; or new video data is generated too fast to be processed promptly or cheaply (for example short videos, which users can create on mobile terminals anytime and anywhere, producing a large number in a short time); or the video data belongs to a completely new category or field and cannot be well linked to the existing library. All of these situations make it harder to distribute the video data to the users who will like it.
Example one
Fig. 1 is a flowchart of a method for pushing video data according to an embodiment of the present invention, where the method is applicable to situations of video vectorization, video clustering, and recommendation model training, and the method may be executed by a video data pushing device, and the video data pushing device may be implemented by software and/or hardware, and may be configured in a computer device of a video platform, such as a server, a workstation, a personal computer, and the like, and specifically includes the following steps:
step 101, training an encoder.
In this embodiment, as shown in fig. 2, an encoder may be trained. The encoder may be a neural network, such as I3D (Two-Stream Inflated 3D ConvNet), R(2+1)D (approximating 3D convolution by 2D convolution plus 1D convolution), SlowFast, etc., and may be used to encode video data into a fixed-length vector (e.g., a 1024-dimensional vector); that is, the encoder's input is video data and its output is a vector.
Taking I3D as an example of an encoder, I3D may include the following structure:
ConvNet (convolutional network) + LSTM (Long Short-Term Memory network): features are extracted for each frame and pooled over the entire video, or features are extracted per frame and fed to an LSTM.
Modified C3D: the input is 16 frames of image data at 112 x 112 resolution; this C3D variant has 8 convolutional layers, 5 pooling layers, and 2 fully-connected layers. A BN (Batch Normalization) layer is added after every convolutional layer and fully-connected layer, and the temporal stride of the first pooling layer is changed from 1 to 2, which reduces memory usage and allows a larger batch size.
Two-Stream network: the LSTM only captures high-level information after convolution, missing lower-level motion information that is important in some cases, and LSTM training is very costly. A two-stream network takes image data in RGB format together with 10 stacked optical-flow frames (the optical-flow input has twice as many channels as flow frames, for the x and y horizontal/vertical components) and can be trained efficiently.
New two-stream network (3D-Fused Two-Stream): Inception v1 is used; the later fusion part is changed to 3D convolution and 3D pooling, and classification is finally performed by a fully-connected layer. The network's input is 5 consecutive RGB frames sampled 10 frames apart, trained end-to-end.
Two-Stream Inflated 3D ConvNet: a 2D convolutional base model is inflated into a 3D base model, with a time dimension added to the convolution kernels and pooling. Although 3D convolution can learn temporal features directly, adding optical flow still improves performance.
In some cases, I3D inflates pre-trained 2D filters: if a 2D filter is N x N, the corresponding 3D filter is N x N x N, obtained by repeating the 2D filter weights N times along the time dimension and normalizing by dividing by N.
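As an illustration, the inflation rule described above (repeat the 2D weights N times along the time dimension, then divide by N) can be sketched in NumPy; inflate_2d_filter is an illustrative name, not part of the patent:

```python
import numpy as np

def inflate_2d_filter(w2d: np.ndarray, n_t: int) -> np.ndarray:
    """Inflate an N x N 2D filter into an n_t x N x N 3D filter by
    repeating the 2D weights along the time axis and dividing by n_t,
    so that the 3D filter applied to a static (repeated-frame) clip
    responds like the 2D filter applied to a single frame."""
    return np.repeat(w2d[np.newaxis, :, :], n_t, axis=0) / n_t

w2d = np.arange(9, dtype=float).reshape(3, 3)  # toy 3 x 3 filter
w3d = inflate_2d_filter(w2d, n_t=3)            # shape (3, 3, 3)
```

Note that summing the inflated filter over its time axis recovers the original 2D weights, which is exactly the normalization the division by N provides.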
In general, the encoder encodes the image data in the video data into a vector; that is, it encodes the content of the video data into a vector.
In addition to encoding the image data in the video data as a vector, the encoder may encode the audio data and the text data (such as name, introduction, etc.) in the video data as a vector, which is not limited in this embodiment.
In one training approach, as shown in fig. 2, a video processing model pre-trained using a data set such as Kinetics400 may be obtained, where the video processing model includes an encoder and a classifier, and the classifier is used to classify video data, that is, the input of the classifier is a vector of video data, and the output of the classifier is a category of video data.
Sample video data is screened from the video library in advance using user behavior data; it is video data with denser user behavior and generally has no cold-start problem.
The behavior data of the user comprises the number of clicks of the user, the number of praise of the user, the browsing time of the user, the number of comments of the user, the number of shares of the user, the number of authors concerned by the user, and the like.
In one example, video data whose user click count exceeds 50 may be treated as sample video data.
As shown in fig. 2, for these sample video data, categories may be labeled based on the classification model, as labels (tags), such as cat, dog, person, landscape, etc., and stored in the database.
In training the encoder, sample video data may be obtained from a database, input into the encoder, and encoded into vectors.
And inputting the vectors into a classifier, and dividing the sample video data into specified categories.
The video processing model is trained with convergence of the difference between the labeled category and the predicted category as the training target. That is, the difference between the labeled category and the predicted category is computed as a loss value; if the loss value is greater than a preset threshold, back-propagation is performed through the video processing model (i.e., the encoder and classifier), its parameters are updated by stochastic gradient descent or a similar method, and the next training round begins. If the loss value is less than or equal to the preset threshold, training of the video processing model (encoder and classifier) is considered complete.
When training of the video processing model is complete, the classifier can be discarded and the encoder extracted for application.
Step 102, inputting video data into an encoder, and encoding the video data into vectors.
In this embodiment, video data with sparse user behaviors is screened in advance through behavior data of a user, and the video data generally has the problem of cold start.
The behavior data of the user includes the number of clicks of the user, the number of praise of the user, the browsing duration of the user, the number of comments of the user, the number of shares of the user, the number of authors concerned by the user, and the like.
In one example, video data whose user click count is less than or equal to 50 may be treated as video data with sparse user behavior.
As shown in fig. 2, for video data with sparse user behavior, the video data may be input into an encoder that has completed training, and the encoder encodes the video data into a vector with a fixed length.
And 103, classifying the video data for multiple times by using the vector of the video data to obtain multiple categories to which the video data belongs.
In this embodiment, as shown in fig. 2, for video data with sparse user behaviors, the video data may be classified multiple times using the vector thereof, and the classification categories are different for each classification, so that T categories may be obtained after T (T is a positive integer) classifications are performed for one video data.
In one classification approach, the categories used in each classification may be determined. Each classification has a plurality of center points, each configured with a vector of the same dimensionality as the video data vectors; a center point is typically a virtual point and does not represent any particular video data.
The similarity between the video data and each center point, such as the Euclidean distance or cosine angle, is calculated using the video data's vector and the center point's vector.
The similarity between the video data and the center point of each category is compared.
If the distance to a certain center point is smallest, the video data is divided into the category corresponding to that center point.
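The nearest-center-point assignment just described can be sketched as follows, using Euclidean distance; assign_category and the toy center points are hypothetical, for illustration only:

```python
import numpy as np

def assign_category(video_vec: np.ndarray, centers: np.ndarray) -> int:
    """Return the index of the category whose center point is closest
    (by Euclidean distance) to the video data's vector."""
    dists = np.linalg.norm(centers - video_vec, axis=1)
    return int(np.argmin(dists))

centers = np.array([[0.0, 0.0], [10.0, 10.0]])      # two toy category centers
cat = assign_category(np.array([1.0, 1.0]), centers)  # nearest center is index 0
```

Cosine similarity could be substituted by taking the argmax of normalized dot products instead.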
Further, as shown in fig. 2, sample video data, which are labeled with categories, may be obtained for the categories used in each classification.
Sample video data is input into an encoder, and image data in the sample video data is encoded into vectors.
For each classification, a plurality of points are randomly generated as the initial center points of the categories used in that classification. A clustering algorithm, such as k-means, mean-shift clustering, density-based clustering (DBSCAN), expectation-maximization (EM) clustering with a Gaussian Mixture Model (GMM), agglomerative hierarchical clustering, or graph community detection, then clusters the sample video data using their vectors and iterates the center points, which move on each iteration.
Taking k-means as an example, the clustering process is as follows:
1. The center points are randomly initialized for each classification. A center point is a position with the same length as each video data vector, and the number of categories (i.e., the number of center points) is generally set in advance, e.g., 5000.
2. The distance (e.g., the Euclidean distance between vectors) from each video data to every center point is calculated, and each video data is assigned to the category of its closest center point.
3. The center point of each category is updated, e.g., as the mean of the vectors assigned to it.
Steps 2 and 3 are repeated until the center of each category barely changes between iterations, e.g., no (or a minimal number of) objects are reassigned to a different category, no (or a minimal number of) center points move again, or the sum of squared errors reaches a local minimum.
When clustering completes, the center points that apply to the categories within each classification are determined. Because the category center points in each classification are randomly generated and generally differ, after the center points are iterated many times, the center points, and hence the categories, of the different classifications are generally not the same.
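A minimal sketch of the three k-means steps above, on toy 2-dimensional vectors; kmeans and its parameters are illustrative names, not from the patent:

```python
import numpy as np

def kmeans(vectors, k, n_iter=100, seed=0):
    """Minimal k-means: random initial centers drawn from the data,
    assign each vector to its nearest center, update each center as the
    mean of its assigned vectors, stop when centers no longer move."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(n_iter):
        # step 2: assign each vector to its nearest center (Euclidean)
        labels = np.argmin(
            np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2),
            axis=1)
        # step 3: update each center as the mean of its assigned vectors
        new_centers = np.array([
            vectors[labels == j].mean(axis=0) if np.any(labels == j)
            else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return centers, labels

# two well-separated toy blobs of 5 points each
vecs = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
centers, labels = kmeans(vecs, k=2)
```

In practice a library implementation (with k-means++ initialization and multiple restarts) would be used rather than this sketch.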
Step 104, configuring the plurality of categories into clusters of video data.
As shown in fig. 2, for video data with sparse given user behavior, multiple categories can be obtained after multiple classifications, and the multiple categories can be set as clusters of the video data, and the video data can be described from different dimensions by multiple classifications, so that the problem of classification errors caused by the contingency of single classification is avoided or alleviated, the description accuracy of the video data is improved, and the classification accuracy is improved.
For example, for any video data v, assuming 5000 categories are set in the j-th classification, the center point p_i (1 ≤ i ≤ 5000) closest to v can be found among the center points of the 5000 categories and denoted c_j(v) = i.
Repeating the clustering operation 5 times in this manner yields 5-dimensional data for any video data v, recorded as the cluster of v: (c_1(v), c_2(v), c_3(v), c_4(v), c_5(v)).
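Forming a cluster from T independent classifications can be sketched as follows; cluster_of and the toy center sets are hypothetical, for illustration only:

```python
import numpy as np

def cluster_of(video_vec, center_sets):
    """Assign the video vector to its nearest center point in each of
    the T independent classifications; the T category indices
    (c_1(v), ..., c_T(v)) together form the video data's cluster."""
    return tuple(
        int(np.argmin(np.linalg.norm(centers - video_vec, axis=1)))
        for centers in center_sets)

# T = 5 toy classifications, each with its own (randomly differing) centers
rng = np.random.default_rng(1)
center_sets = [rng.normal(size=(4, 3)) for _ in range(5)]
cluster = cluster_of(np.zeros(3), center_sets)  # a 5-tuple of category indices
```

Two videos share a cluster only when they land in the same category in every one of the T classifications, which is what enriches the dimensionality of the description.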
And step 105, counting the interest value of the audience user in the video data in the cluster.
In this embodiment, as shown in fig. 2, the interest value of each viewer user in the video data in the cluster may be respectively counted by using the behavior data of each viewer user to train the recommendation model of the video platform.
The interest value may represent the degree of interest of a viewer user in the video data in a cluster; it measures the user's interest in the video data under the cluster as a whole, not in any specific video data under the cluster, and can be denoted User(c_1, c_2, c_3, c_4, c_5).
The same cluster may include video data with denser user behavior data as well as video data with sparser user behavior data.
In one example, a first click rate may be counted for each viewer user browsing video data in a cluster over a period of time; the first click rate is the ratio between the number of the viewer user's clicks on video data in the cluster and the number of times video data in the cluster was displayed to that user.
If the statistics are complete, the first click-through rate may be set to a value of interest of the viewer user in the video data in the cluster.
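The first click rate above is a simple ratio, sketched here for clarity; first_click_rate is an illustrative name, not from the patent:

```python
def first_click_rate(clicks: int, displays: int) -> float:
    """Interest value of a viewer user for the video data in a cluster:
    the user's clicks on videos in the cluster divided by the number of
    times videos in the cluster were displayed to that user."""
    return clicks / displays if displays > 0 else 0.0

interest = first_click_rate(clicks=12, displays=100)  # -> 0.12
```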
Of course, the interest values are only examples, and when implementing the embodiment of the present invention, other interest values may be set according to actual situations, for example, the probability of the viewer user performing a target action (such as praise, share, and author focus) on the video data in the cluster, the viewing time, and the like, which is not limited in this embodiment of the present invention. In addition, besides the above interest values, those skilled in the art may also adopt other interest values according to actual needs, and the embodiment of the present invention is not limited to this.
And step 106, counting the quality value of the content of the video data in the cluster made by the author user.
In this embodiment, as shown in fig. 2, the behavior data of the audience users and the attribute data of the video data may be used to respectively count the content quality value of the video data in the cluster created by each author user, so as to train the recommendation model of the video platform.
The quality value may represent the content quality of the video data in a cluster made by an author user; it measures the quality of the author's video data under the cluster as a whole, not of any specific video data in the cluster, and can be denoted Poster(c_1, c_2, c_3, c_4, c_5).
The same cluster may include video data with denser user behavior data and video data with sparser user behavior data; the quality value is computed mainly from the video data with denser behavior data, since the statistic is then more likely to be grounded in actual user behavior.
The author user may upload the video data in the cluster to a video platform, or record the video data in the cluster and perform post-processing, which is not limited in this embodiment.
In one example, a second click rate received by the video data in a cluster produced by each author user over a period of time may be counted; the second click rate is the ratio between the number of audience-user clicks on the video data in the cluster and the number of times that video data was displayed to audience users.
If the statistics are complete, the second click rate may be set to a quality value of the content of the video data in the cluster made by the author user.
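The second click rate, aggregated over an author's videos within a cluster, can be sketched similarly; second_click_rate and the per-video statistics are hypothetical, for illustration only:

```python
def second_click_rate(per_video_stats):
    """Quality value of an author user's video data in a cluster: total
    audience clicks divided by total displays, aggregated across the
    author's videos in that cluster. per_video_stats is a hypothetical
    list of (clicks, displays) pairs, one per video."""
    clicks = sum(c for c, _ in per_video_stats)
    displays = sum(d for _, d in per_video_stats)
    return clicks / displays if displays > 0 else 0.0

quality = second_click_rate([(5, 40), (3, 60)])  # 8 clicks / 100 displays
```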
Of course, the above quality values are only used as examples, and when implementing the embodiment of the present invention, other quality values may be set according to actual situations, for example, probability that the video data in the cluster produced by the author user receives a target behavior (such as praise, share, and author focus), viewing duration, and the like, which is not limited in this embodiment of the present invention. In addition, besides the above quality values, those skilled in the art may also adopt other quality values according to actual needs, and the embodiment of the present invention is not limited thereto.
Example two
Fig. 3 is a flowchart of a video data pushing method according to a second embodiment of the present invention, where this embodiment is applicable to a situation where, for cold-started video data, video data is filtered with reference to characteristics of a cluster to which the video data belongs and is pushed to a client, and the method may be executed by a video data pushing device, where the video data pushing device may be implemented by software and/or hardware, and may be configured in computer equipment of a video platform, such as a server, a workstation, a personal computer, and the like, and specifically includes the following steps:
step 301, when receiving a request from a client, recalling part of video data as original video data.
In general, a user may access the video platform from various electronic devices, which may include mobile devices such as a mobile phone, a PDA (Personal Digital Assistant), a laptop computer, a palmtop computer, or an intelligent wearable device (such as a smart watch or smart glasses), and may also include fixed devices such as a personal computer or a smart television.
These electronic devices may run operating systems including Android, iOS, Windows, etc., and can typically run clients that play videos, such as browsers, short-video applications, live-streaming applications, animation applications, instant-messaging applications, shopping applications, and so on.
The user may log in the client by using an account, a password, or the like, or may not log in the client, which is not limited in this embodiment.
In practical applications, as shown in fig. 2, the client may send a request to the video platform in an active or passive manner according to a service scenario, and request the video platform to push video data for the user.
For an active mode, the user may input a keyword at the client and request the video platform to search for video data related to the keyword, or the user may pull down a list of existing video data to request the video platform to refresh the video data, and so on.
For the passive mode, the client may request the video platform to push the video data when displaying a specified page such as a homepage, or the client may request the video platform to push the video data before the current video data finishes playing, and so on.
The request from the client carries a user identifier representing the user. When the user is logged in, the identifier may be the user's code userId; when the user is not logged in, it may be a device identifier of the electronic equipment running the client, such as an IMEI (International Mobile Equipment Identity).
As shown in fig. 2, when the video platform receives a request from a client, in view of more video data in cold start, in order to reduce the amount of calculation, part of the video data may be recalled as the original video data.
In this case, for the video data that is cold started, a cluster including a plurality of categories to which the video data belongs may be configured by classification in advance.
In one recall approach, cold-start video data, which was authored by the author user, may be looked up in a video library.
On one hand, the interest value of the viewer user logged into the current client for the video data in the cluster is queried, from the pre-trained recommendation model, among the interest values already counted per viewer user. By multiplexing the recommendation model's statistics, more suitable original video data can be screened while avoiding additional statistics, reducing the amount of computation.
In one example, a viewer user logged into the client may be queried for a first click rate when browsing video data in the cluster, the first click rate being set to a value of interest in the viewer user for the video data in the cluster.
On the other hand, the content quality value of the video data in the cluster made by the current author user is queried from the pre-trained recommendation model, which maintains content quality values counted in advance for the video data in each cluster made by each author user. By multiplexing the statistical data of the recommendation model, not only can more appropriate original video data be screened, but the use of additional statistical data is also avoided, reducing the amount of calculation.
In one example, the second click rate received by the video data in the cluster made by the author user may be queried, and the second click rate is set as the content quality value of the video data in the cluster made by the author user.
Combining the two aspects, the performance of the video data in the cluster on the two dimensions of interest value and quality value is taken as a reference, and part of the cold-start video data in the video library is selected as the original video data based on the interest value and the quality value.
In one example, the interest value (e.g., a first click-through rate) is compared to a first predetermined threshold and the quality value (e.g., a second click-through rate) is compared to a second predetermined threshold.
For a certain video data, if the interest value is greater than a preset first threshold and the quality value is greater than a preset second threshold, selecting the video data as the original video data.
For a certain video data, if its interest value (e.g. first click rate) is smaller than or equal to a preset first threshold value, and/or its quality value (e.g. second click rate) is smaller than or equal to a preset second threshold value, the video data is ignored.
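The threshold comparison above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the field names (interest_value, quality_value) and the threshold values are assumptions for demonstration only.

```python
def recall_originals(videos, interest_threshold=0.1, quality_threshold=0.1):
    """Keep a video only if BOTH its interest value (first click rate)
    and its quality value (second click rate) exceed the preset thresholds."""
    return [
        v for v in videos
        if v["interest_value"] > interest_threshold
        and v["quality_value"] > quality_threshold
    ]

candidates = [
    {"id": 1, "interest_value": 0.30, "quality_value": 0.25},
    {"id": 2, "interest_value": 0.05, "quality_value": 0.40},  # interest too low
    {"id": 3, "interest_value": 0.20, "quality_value": 0.08},  # quality too low
]
originals = recall_originals(candidates)
```

Only video 1 clears both thresholds; videos 2 and 3 are ignored because one of their two values is at or below its threshold.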
Of course, the above-mentioned manner of recalling the original video data is only an example, and when implementing the embodiment of the present invention, other manners of recalling the original video data may be set according to actual situations, for example, recalling the video data in the cluster to which the user has subscribed, recalling the video data randomly, and the like, which is not limited in this embodiment of the present invention. In addition, besides the above-mentioned manner of recalling the original video data, those skilled in the art may also adopt other manners of recalling the original video data according to actual needs, and the embodiment of the present invention is not limited thereto.
Step 302, determining the characteristic data of the video data in the cluster.
For video data in different clusters, as shown in fig. 2, the clusters may be used as statistical dimensions, and features of the video data in the clusters may be extracted from a pre-trained recommendation model as feature data.
In a specific implementation, in one aspect, a value of interest of a viewer user logged in a client to video data in a cluster is queried as feature data of the video data in the cluster.
In one example, the first click rate of the viewer user logged in at the client when browsing the video data in the cluster may be queried, and the first click rate is set as the viewer user's interest value for the video data in the cluster.
On the other hand, the quality value of the content of the video data in the cluster created by the author user is queried as the feature data of the video data in the cluster.
In one example, the second click rate received by the video data in the cluster made by the author user may be queried, and the second click rate is set as the content quality value of the video data in the cluster made by the author user.
Of course, the above feature data is only an example, and when implementing the embodiment of the present invention, other feature data may be set according to actual situations, which is not limited in the embodiment of the present invention. In addition, besides the above feature data, other feature data may be adopted by those skilled in the art according to actual needs, and the embodiment of the present invention is not limited to this.
Step 303, selecting part of the original video data as target video data according to the feature data.
Because the current original video data belongs to the cluster and has the same or similar attributes as the other video data in the cluster, the feature data of the video data in the cluster can be treated as equivalent to the feature data of the current original video data, and appropriate original video data is selected with reference to the feature data as the target video data waiting to be pushed to the client.
As shown in fig. 2, in the case that the feature data is an interest value and a quality value, a comprehensive value may be generated based on the interest value and the quality value, where the comprehensive value is positively correlated with both the interest value and the quality value, i.e., the larger the interest value and the quality value, the larger the comprehensive value, and conversely, the smaller the interest value and the quality value, the smaller the comprehensive value.
In one example, where the interest value and the quality value are equally important, the product between the interest value and the quality value may be calculated as a composite value.
If the interest value is a first click rate when the audience user logs in the client and browses the video data in the cluster, and the quality value is a second click rate received by the video data in the cluster made by the author user (the author user is a user who made the original video data), a product between the first click rate and the second click rate may be calculated as a comprehensive value Score, which is expressed as follows:
Score=User(c1, c2, c3, c4, c5)*Poster(c1, c2, c3, c4, c5)
The original video data is sorted in descending order of the comprehensive value, and the several original video data items with the highest comprehensive value (that is, ranked highest) are selected as the target video data.
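A minimal sketch of this product-and-sort selection, assuming illustrative field names (user_ctr for the first click rate, poster_ctr for the second click rate) that are not specified in the patent:

```python
def rank_by_score(videos, top_n=2):
    """Compute Score = first click rate * second click rate, then sort
    in descending order and keep the top_n items as target video data."""
    for v in videos:
        v["score"] = v["user_ctr"] * v["poster_ctr"]
    return sorted(videos, key=lambda v: v["score"], reverse=True)[:top_n]

pool = [
    {"id": "a", "user_ctr": 0.4, "poster_ctr": 0.5},  # score 0.20
    {"id": "b", "user_ctr": 0.9, "poster_ctr": 0.1},  # score 0.09
    {"id": "c", "user_ctr": 0.6, "poster_ctr": 0.6},  # score 0.36
]
targets = rank_by_score(pool)
```

Because the comprehensive value is a product, a video must perform reasonably on both dimensions: video "b" has a high viewer-side rate but a low author-side rate, so it ranks last.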
Of course, the above-mentioned manner of calculating the comprehensive value is only an example, and when the embodiment of the present invention is implemented, other manners of calculating the comprehensive value may be set according to actual situations, for example, linear fusion (also called weighted summation) is performed on the interest value and the quality value, and the like, which is not limited in this embodiment of the present invention. In addition, besides the above-mentioned manner of calculating the comprehensive value, a person skilled in the art may also adopt a manner of calculating the comprehensive value according to actual needs, and the embodiment of the present invention is not limited thereto.
Step 304, pushing the target video data to the client.
If the cold-start target video data has been screened out, video information of the target video data, such as a cover and a name, can be looked up, encapsulated into a response, and sent to the client; the client parses the video information from the response and displays it on the interface for the current viewer user to browse.
In addition to the cold-start target video data, some non-cold-start video data can also be screened as target video data and pushed to the client together, so as to maintain the diversity of the target video data.
In a specific implementation, as shown in fig. 4, the target video data may be screened from the non-cold-start video data through the following stages:
1. Recall
Recalling video data from the video library may narrow the set of selectable video data.
Further, for different service scenarios, different recall strategies may be used to recall part of the video data from the video library according to different service requirements (e.g., recalling high-quality (non-personalized) video data, recalling video data meeting users' personalized requirements, etc.).
In one example, recall policies include, but are not limited to:
online recall (recalling video data of anchor users who are online, i.e., live programs), subscription recall (recalling video data of items subscribed to by the user, e.g., certain games, restaurants, etc.), country recall (recalling video data from the same country as the user), language recall (recalling video data in the same language as the user), collaborative filtering recall (recalling video data using a collaborative filtering algorithm), preference recall (recalling video data matching the user's preferences), and similar recall (recalling other video data similar to the already recalled video data).
2. Coarse ranking
The number of recalled video data items is large, usually on the order of thousands or tens of thousands, while the algorithm used for fine ranking may be complex. To increase the sorting speed, a coarse-ranking stage may be added between recall and fine ranking, in which the recalled video data, together with a small number of user and video features, is loaded into a simple sorting model, for example an LR (Logistic Regression) model or a GBDT (Gradient Boosting Decision Tree) model.
It should be noted that, depending on the characteristics of the service scenario, coarse ranking is often optional; that is, coarse ranking may be applied, or the flow may skip directly from recall to fine ranking, which is not limited in this embodiment.
3. Fine ranking
In fine ranking, features of more users and more video data are loaded into a more complex sorting model, for example a CNN (Convolutional Neural Network) or an RNN (Recurrent Neural Network), and the coarsely ranked video data is sorted accurately, with part of the higher-ranked video data selected. This improves the sorting accuracy as much as possible and further reduces the number of video data items sent to the client, down to the order of hundreds or tens.
4. Scattering
The video platform generally recalls video data based on recent hotspots, user preferences, and similar signals; if the features of the video data are similar, video data with repeated content may be recalled. To reduce such repetition, before the video data is pushed to the client where the user is located, the video data is generally scattered, that is, rearranged globally so that the various kinds of video data are distributed more evenly.
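A rough sketch of the four-stage pipeline above, under stated assumptions: the one-feature and two-feature scorers merely stand in for the LR/GBDT coarse-ranking and CNN/RNN fine-ranking models, and the field names (pop, rel, cat) are illustrative, not from the patent.

```python
from itertools import zip_longest

def coarse_rank(videos, keep):
    # cheap stand-in for an LR/GBDT coarse-ranking model: one feature
    return sorted(videos, key=lambda v: v["pop"], reverse=True)[:keep]

def fine_rank(videos, keep):
    # stand-in for a complex CNN/RNN fine-ranking model: two features
    return sorted(videos, key=lambda v: v["pop"] * v["rel"], reverse=True)[:keep]

def scatter(videos):
    # rearrange globally so videos from the same category alternate
    by_cat = {}
    for v in videos:
        by_cat.setdefault(v["cat"], []).append(v)
    mixed = []
    for group in zip_longest(*by_cat.values()):
        mixed.extend(v for v in group if v is not None)
    return mixed

recalled = [
    {"id": 1, "cat": "game", "pop": 0.9, "rel": 0.8},
    {"id": 2, "cat": "game", "pop": 0.8, "rel": 0.9},
    {"id": 3, "cat": "food", "pop": 0.7, "rel": 0.7},
    {"id": 4, "cat": "food", "pop": 0.6, "rel": 0.5},
    {"id": 5, "cat": "game", "pop": 0.2, "rel": 0.1},
]
feed = scatter(fine_rank(coarse_rank(recalled, keep=4), keep=4))
```

After coarse ranking drops the weakest candidate and fine ranking orders the rest, scattering interleaves the "game" and "food" videos so that items of the same category are no longer adjacent in the pushed feed.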
In this embodiment, when a request from a client is received, part of the video data is recalled as original video data; the video data is configured with a cluster, and the cluster includes multiple categories to which the video data belongs. Feature data of the video data in the cluster is determined, part of the original video data is selected as target video data according to the feature data, and the target video data is pushed to the client. Because the cluster is formed by the multiple categories to which the video data belongs, the dimensionality of the video data can be enriched and the similarity of the video data in the same cluster is improved, so the feature data of the video data in the cluster can be treated as equivalent to the feature data of the original video data. Screening the target video data according to this feature data ensures the accuracy of the target video data, makes the method suitable for automatic screening of cold-start video data, and avoids relying on operators to manually select video data, which greatly reduces cost and improves efficiency.
Furthermore, since target video data adapted to the user is pushed to the client, the user's need to actively screen for potentially interesting video data through operations such as keyword searches and refreshing is reduced, which lowers the resource consumption of the electronic equipment as well as the resource and bandwidth consumption incurred by the video platform in responding to those operations.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
EXAMPLE III
Fig. 5 is a block diagram of a structure of a video data pushing apparatus according to a third embodiment of the present invention, which may specifically include the following modules:
an original video data recall module 501, configured to recall, when receiving a request from a client, a portion of video data as original video data, where the video data is configured with a cluster, and the cluster includes multiple categories to which the video data belongs;
a feature data determining module 502, configured to determine feature data of the video data in the cluster;
a target video data screening module 503, configured to select, according to the feature data, a portion of the original video data as target video data;
a client pushing module 504, configured to push the target video data to the client.
In one embodiment of the present invention, further comprising:
the encoder training module is used for training an encoder;
the video data coding module is used for inputting video data into the coder and coding the video data into vectors;
the video data classification module is used for classifying the video data for multiple times by using the vector of the video data to obtain multiple categories to which the video data belongs;
a cluster setting module configured to configure the plurality of categories as clusters of the video data.
In one embodiment of the present invention, the encoder training module comprises:
the video processing model acquisition sub-module is used for acquiring a pre-trained video processing model, and the video processing model comprises an encoder and a classifier;
the sample video data acquisition sub-module is used for acquiring sample video data, and the sample video data are labeled with categories;
a sample video data encoding sub-module for inputting the sample video data into the encoder, encoding the sample video data into a vector;
a class division submodule for inputting the vector into the classifier and dividing the sample video data into the classes;
a video processing model training sub-module for training the video processing model with the difference between the converged labeled category and the classified category as a training target;
and the encoder extraction sub-module is used for extracting the encoder when the training of the video processing model is finished.
In one embodiment of the present invention, the video data classification module includes:
a category determination sub-module for determining a category for use in each classification, the category having a central point;
a similarity operator module for calculating a similarity between the video data and the center point using the vector of the video data and the vector of the center point;
and the similarity dividing submodule is used for dividing the video data into the category corresponding to the similarity if the similarity is minimum.
In one embodiment of the present invention, the category determination submodule includes:
the device comprises a sample video data obtaining unit, a classification judging unit and a classification judging unit, wherein the sample video data obtaining unit is used for obtaining sample video data, and the sample video data is labeled with a type;
a vector encoding unit for inputting the sample video data into the encoder, encoding image data in the sample video data into vectors;
a central point random generation unit for randomly generating a plurality of central points for each classification;
a clustering unit configured to cluster the sample video data using the vector of the sample video data to iterate the central point;
and the central point determining unit is used for determining that the central point is applicable to the category in the classification when the clustering is finished.
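The random-initialization-then-iterate procedure that the central point units above describe resembles classic k-means clustering. A minimal sketch under that assumption, using one-dimensional vectors purely for brevity (real video vectors would be high-dimensional encoder outputs):

```python
import random

def iterate_centers(vectors, k, rounds=20, seed=0):
    """Randomly generate k initial center points, then repeatedly assign
    each vector to its nearest center and move each center to the mean
    of its cluster, as in classic k-means."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(rounds):
        clusters = [[] for _ in range(k)]
        for x in vectors:
            # assign the vector to the center with the smallest distance
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            sum(c) / len(c) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(centers)

vectors = [0.8, 1.0, 1.2, 4.8, 5.0, 5.2]  # two obvious groups
centers = iterate_centers(vectors, k=2)
```

When clustering stabilizes, the resulting centers are taken as the center points of the categories used in the classification.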
In one embodiment of the present invention, the raw video data recall module 501 includes:
the video data searching submodule is used for searching video data, and the video data is made by an author user;
an interest value query submodule, configured to query an interest value of the video data in the cluster for the viewer user who logs in the client;
a quality value query submodule for querying a content quality value of the video data in the cluster made by the author user;
and the video data selection sub-module is used for selecting part of the video data as original video data based on the interest value and the quality value.
In one embodiment of the present invention, the interest value query submodule includes:
the first click rate query unit is used for querying audience users logged in the client and the first click rate when the audience users browse the video data in the cluster;
a first click rate setting unit configured to set the first click rate to a value of interest of the viewer user in the video data in the cluster;
the quality value query submodule includes:
a second click rate query unit, configured to query a second click rate received by the video data in the cluster made by the author user;
and the second click rate setting unit is used for setting the second click rate as a quality value of the content of the video data in the cluster made by the author user.
In one embodiment of the invention, the video data selection sub-module comprises:
the interest value comparison unit is used for comparing the interest value with a preset first threshold value;
the quality value comparison unit is used for comparing the quality value with a preset second threshold value;
and the original video data selection unit is used for selecting the video data as original video data if the interest value is greater than a preset first threshold and the quality value is greater than a preset second threshold.
In one embodiment of the present invention, the feature data determination module 502 comprises:
an interest value setting sub-module, configured to query an interest value of the viewer user logged in the client for the video data in the cluster, where the interest value is used as feature data of the video data in the cluster;
and the quality value setting sub-module is used for inquiring the quality value of the content of the video data in the cluster made by the author user as the characteristic data of the video data in the cluster.
In an embodiment of the present invention, the target video data filtering module 503 includes:
a composite value generation submodule for generating a composite value based on the interest value and the quality value, the composite value being positively correlated with both the interest value and the quality value;
and the comprehensive value selection submodule is used for selecting a plurality of original video data with the highest comprehensive value as target video data.
In one embodiment of the present invention, the interest value is a first click rate of a viewer user logged in the client to browse video data in the cluster, the video data being produced by an author user, and the quality value is a second click rate received by the video data in the cluster produced by the author user;
the integrated value generation sub-module includes:
and the click rate product calculation unit is used for calculating the product between the first click rate and the second click rate as a comprehensive value.
The video data pushing device provided by the embodiment of the invention can execute the video data pushing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 6 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present invention. The computer device 12 shown in FIG. 6 is only an example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 6, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, implementing a push method of video data provided by an embodiment of the present invention.
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video data pushing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
A computer readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (14)

1. A method for pushing video data is characterized by comprising the following steps:
when a request of a client is received, recalling partial video data as original video data, wherein the video data is configured with a cluster, and the cluster comprises a plurality of categories to which the video data belongs;
determining feature data of the video data in the cluster;
selecting part of the original video data as target video data according to the characteristic data;
and pushing the target video data to the client.
2. The method of claim 1, further comprising:
training an encoder;
inputting video data into the encoder, encoding the video data into vectors;
classifying the video data for multiple times by using the vector of the video data to obtain multiple categories to which the video data belongs;
configuring a plurality of the categories as clusters of the video data.
3. The method of claim 2, wherein the training encoder comprises:
acquiring a pre-trained video processing model, wherein the video processing model comprises an encoder and a classifier;
obtaining sample video data, wherein the sample video data are labeled with categories;
inputting the sample video data into the encoder, encoding the sample video data into a vector;
inputting the vector into the classifier, classifying the sample video data into the categories;
training the video processing model by taking the difference between the converged labeled category and the classified category as a training target;
when the video processing model training is completed, the encoder is extracted.
4. The method of claim 2, wherein the classifying the video data a plurality of times using the vector of the video data to obtain a plurality of classes to which the video data belongs comprises:
determining a class for use in each classification, the class having a center point;
calculating a similarity between the video data and the center point using the vector of the video data and the vector of the center point;
and if the similarity is minimum, dividing the video data into categories corresponding to the similarity.
5. The method of claim 4, wherein determining the class to be used in each classification comprises:
obtaining sample video data, wherein the sample video data are labeled with categories;
inputting the sample video data into the encoder, encoding image data in the sample video data into vectors;
randomly generating a plurality of center points for each classification;
clustering the sample video data using vectors of the sample video data to iterate the center point;
upon completion of clustering, determining that the category applies to the center point in the classification.
6. The method according to claim 1, wherein the recalling partial video data as original video data comprises:
searching video data, wherein the video data is made by an author user;
querying interest values of the audience users logged in the client for the video data in the cluster;
querying a quality value of video data in the cluster made by the author user on content;
selecting a portion of the video data as raw video data based on the interest value and the quality value.
7. The method of claim 6, wherein said querying a value of interest of a viewer user logged in to said client for video data in said cluster comprises:
querying a first click rate when a viewer user logged in the client browses the video data in the cluster;
setting the first click-through rate to a value of interest in the video data in the cluster by the viewer user;
said querying a quality value in content of video data in said cluster produced by said author user, comprising:
querying a second click rate received by the video data in the cluster made by the author user;
setting the second click rate to a quality value in content of video data in the cluster made by the author user.
8. The method of claim 6, wherein selecting the portion of the video data as raw video data based on the interest value and the quality value comprises:
comparing the interest value with a preset first threshold value;
comparing the quality value with a preset second threshold value;
and if the interest value is larger than a preset first threshold value and the quality value is larger than a preset second threshold value, selecting the video data as original video data.
9. The method of any of claims 1-8, wherein the video data is authored by an author user, and wherein said determining characteristic data of the video data in the cluster comprises:
inquiring interest values of the audience users logged in the client for the video data in the cluster, wherein the interest values are used as characteristic data of the video data in the cluster;
and inquiring the quality value of the content of the video data in the cluster made by the author user as the characteristic data of the video data in the cluster.
10. The method according to claim 9, wherein the selecting a portion of the original video data as target video data based on the feature data comprises:
generating a composite value based on the interest value and the quality value, the composite value being positively correlated with both the interest value and the quality value;
and selecting a plurality of pieces of original video data with the highest composite values as target video data.
11. The method of claim 10, wherein the interest value is a first click rate at which a viewer user logged in to the client browses the video data in the cluster, the video data being produced by an author user, and wherein the quality value is a second click rate received by the video data in the cluster produced by the author user;
and wherein the generating a composite value based on the interest value and the quality value comprises:
calculating the product of the first click rate and the second click rate as the composite value.
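Claims 10-11 combine the two signals into a single composite value (the product of the two click rates) and keep the top-ranked videos; a sketch with hypothetical data (the function names and sample numbers are illustrative, not from the patent):

```python
def composite_value(video):
    # Product of the two click rates: monotonically increasing in both
    # factors, so it is positively correlated with the interest value
    # and with the quality value, as claim 10 requires.
    return video["interest"] * video["quality"]

def top_n(videos, n):
    """Select the n pieces of original video data with the highest composite values."""
    return sorted(videos, key=composite_value, reverse=True)[:n]

pool = [
    {"id": "a", "interest": 0.06, "quality": 0.045},  # composite 0.0027
    {"id": "b", "interest": 0.10, "quality": 0.010},  # composite 0.0010
    {"id": "c", "interest": 0.08, "quality": 0.050},  # composite 0.0040
]
targets = top_n(pool, 2)
```

Note that the product reshuffles the ranking relative to either signal alone: video "b" has the highest interest value but the lowest composite value.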
12. A video data pushing device, comprising:
an original video data recall module for recalling a portion of video data as original video data when a request of a client is received, wherein the video data is configured with a cluster, and the cluster comprises a plurality of categories to which the video data belongs;
a feature data determination module for determining feature data of the video data in the cluster;
a target video data screening module for selecting a portion of the original video data as target video data according to the feature data;
and a client pushing module for pushing the target video data to the client.
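The four modules of the device in claim 12 form a recall → feature determination → screening → push pipeline; a minimal functional sketch (all callables and the toy data are hypothetical stand-ins for the modules, not the patent's implementation):

```python
def handle_request(request, recall, determine_features, screen, push):
    """Wire the four modules of claim 12 together for one client request."""
    originals = recall(request)                   # original video data recall module
    features = {vid: determine_features(vid)      # feature data determination module
                for vid in originals}
    targets = screen(originals, features)         # target video data screening module
    return push(request, targets)                 # client pushing module

# Toy wiring: recall three videos, use the numeric suffix as the "feature",
# keep videos whose feature exceeds 1, and "push" by returning a tuple.
pushed = handle_request(
    request={"client": "c1"},
    recall=lambda req: ["v1", "v2", "v3"],
    determine_features=lambda vid: int(vid[1]),
    screen=lambda vids, feats: [v for v in vids if feats[v] > 1],
    push=lambda req, targets: (req["client"], targets),
)
```

Passing the modules in as callables mirrors the claim's module-per-step decomposition: each stage can be swapped (e.g. the threshold screen of claim 8 versus the composite-value ranking of claim 10) without touching the pipeline.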
13. A computer device, comprising:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the video data pushing method according to any one of claims 1-12.
14. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the video data pushing method according to any one of claims 1-12.
CN202011566675.4A 2020-12-25 2020-12-25 Video data pushing method and device, computer equipment and storage medium Pending CN112579822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011566675.4A CN112579822A (en) 2020-12-25 2020-12-25 Video data pushing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112579822A true CN112579822A (en) 2021-03-30

Family

ID=75140690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011566675.4A Pending CN112579822A (en) 2020-12-25 2020-12-25 Video data pushing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112579822A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426548A (en) * 2015-12-29 2016-03-23 海信集团有限公司 Video recommendation method and device based on multiple users
CN106294783A (en) * 2016-08-12 2017-01-04 乐视控股(北京)有限公司 A kind of video recommendation method and device
CN108647293A (en) * 2018-05-07 2018-10-12 广州虎牙信息科技有限公司 Video recommendation method, device, storage medium and server
CN109104622A (en) * 2017-06-20 2018-12-28 深圳大森智能科技有限公司 Video recommendation method, smart television and computer readable storage medium
CN109299327A (en) * 2018-11-16 2019-02-01 广州市百果园信息技术有限公司 Video recommendation method, device, equipment and storage medium
CN110769283A (en) * 2019-10-31 2020-02-07 广州市网星信息技术有限公司 Video pushing method and device, computer equipment and storage medium
CN110807127A (en) * 2018-08-01 2020-02-18 北京优酷科技有限公司 Video recommendation method and device
CN111382283A (en) * 2020-03-12 2020-07-07 腾讯科技(深圳)有限公司 Resource category label labeling method and device, computer equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989116A (en) * 2021-05-10 2021-06-18 广州筷子信息科技有限公司 Video recommendation method, system and device
CN113190696A (en) * 2021-05-12 2021-07-30 百果园技术(新加坡)有限公司 Training method of user screening model, user pushing method and related devices
CN116069562A (en) * 2023-04-06 2023-05-05 北京中科开迪软件有限公司 Video data backup method, system, equipment and medium based on optical disc library
CN116069562B (en) * 2023-04-06 2023-07-14 北京中科开迪软件有限公司 Video data backup method, system, equipment and medium based on optical disc library

Similar Documents

Publication Publication Date Title
CN110309427B (en) Object recommendation method and device and storage medium
CN110321422B (en) Method for training model on line, pushing method, device and equipment
CN109086439B (en) Information recommendation method and device
US9449271B2 (en) Classifying resources using a deep network
CN109325148A (en) The method and apparatus for generating information
CN112579822A (en) Video data pushing method and device, computer equipment and storage medium
CN113158023B (en) Public digital life accurate classification service method based on mixed recommendation algorithm
CN110909182A (en) Multimedia resource searching method and device, computer equipment and storage medium
CN112052387B (en) Content recommendation method, device and computer readable storage medium
CN112989212B (en) Media content recommendation method, device and equipment and computer storage medium
CN111783712A (en) Video processing method, device, equipment and medium
WO2022042157A1 (en) Method and apparatus for manufacturing video data, and computer device and storage medium
WO2023231542A1 (en) Representation information determination method and apparatus, and device and storage medium
CN114417058A (en) Video material screening method and device, computer equipment and storage medium
CN111858972A (en) Movie recommendation method based on family knowledge graph
CN114299321A (en) Video classification method, device, equipment and readable storage medium
CN115168744A (en) Radio and television technology knowledge recommendation method based on user portrait and knowledge graph
US20200401880A1 (en) Generating a recommended target audience based on determining a predicted attendance utilizing a machine learning approach
CN113590898A (en) Data retrieval method and device, electronic equipment, storage medium and computer product
CN113190696A (en) Training method of user screening model, user pushing method and related devices
CN115935049A (en) Recommendation processing method and device based on artificial intelligence and electronic equipment
CN116956183A (en) Multimedia resource recommendation method, model training method, device and storage medium
CN112035740A (en) Project use duration prediction method, device, equipment and storage medium
CN111797765A (en) Image processing method, image processing apparatus, server, and storage medium
Harakawa et al. An efficient extraction method of hierarchical structure of web communities for web video retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination