CN114926771A - Video identification method and device

Video identification method and device

Info

Publication number
CN114926771A
Authority
CN
China
Prior art keywords
video
target
feature vector
similarity
videos
Legal status: Pending
Application number
CN202210618307.2A
Other languages
Chinese (zh)
Inventor
杨飞
刘亮
洪进栋
王红阳
李想
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202210618307.2A
Publication of CN114926771A

Classifications

    • G06V 20/40 — Scenes; scene-specific elements in video content
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/761 — Image or video pattern matching; proximity, similarity or dissimilarity measures in feature spaces
    • G06V 10/762 — Recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
    • G06V 10/764 — Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The method takes first reference videos that meet a clustering condition as samples, compares the second feature vectors corresponding to the samples with the first feature vector corresponding to a video to be identified to obtain target similarities, and judges whether the video to be identified belongs to the target type according to the target similarities.

Description

Video identification method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video identification method and apparatus.
Background
In video application scenarios, videos are often pushed to a user according to the video categories the user prefers. Typically, several videos that meet a given standard are taken as samples, features of their viewing users are extracted, and these features are compared with the viewing-user features of a video to be identified to judge whether that video matches the user preference. However, because viewing-user features characterize viewing behavior rather than content, similar viewing-user features do not imply similar video content; such a judgment method therefore produces many misjudgments, which lowers the accuracy of video identification.
Disclosure of Invention
The embodiment of the disclosure at least provides a video identification method and a video identification device.
In a first aspect, an embodiment of the present disclosure provides a video identification method, including:
acquiring a video to be identified;
acquiring a first feature vector of the video to be identified; the first feature vector includes attribute features of a group of viewing users of the video to be identified;
acquiring second feature vectors corresponding to a plurality of first reference videos in a first reference video set respectively; the first reference video is a video of a target type; the first reference video set comprises a plurality of first reference videos in at least one cluster category meeting a clustering condition, wherein the cluster category meeting the clustering condition is that: the number of the first reference videos included in the cluster category is greater than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is greater than a set threshold;
and determining whether the video to be identified belongs to the video of the target type or not based on the target similarity between the second feature vector of each first reference video in the first reference video set and the first feature vector respectively.
In an alternative embodiment, the first reference video set is determined according to the following steps:
acquiring second feature vectors of a plurality of first reference videos of the target type;
clustering the first reference video based on the second feature vector to obtain a successfully clustered first reference video set, wherein the first reference video set comprises a plurality of first reference videos in at least one cluster category meeting a clustering condition, and the cluster category meeting the clustering condition means that: the number of the first reference videos included in the cluster category is larger than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is larger than a set threshold.
In an alternative embodiment, the target density clustering model is trained according to the following steps:
acquiring third feature vectors of a plurality of second reference videos of the target type and fourth feature vectors of a plurality of training video samples;
inputting the third feature vectors into density clustering models configured with different model parameters respectively to obtain a second reference video set successfully clustered under different model parameters;
for any second reference video set, determining a test result of the training video sample based on a target similarity between the third feature vector corresponding to each second reference video in the second reference video set and a fourth feature vector corresponding to the training video sample; the test result indicates whether the training video sample belongs to the video of the target type;
and determining target model parameters corresponding to the density clustering model based on the test results of the training video samples under different model parameters and the type labels corresponding to the training video samples, and taking the density clustering model with the target model parameters as a trained target density clustering model.
In an optional embodiment, the first feature vector is a feature vector of the video to be identified when a target display amount is reached; the target display amount is determined by:
acquiring fifth feature vectors of a third reference video of the target type when the third reference video reaches a plurality of different display amounts;
determining target similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
taking the fifth feature vector corresponding to the gentle critical point (the point at which the similarity curve flattens) indicated by the variation trend of the target similarity as a target feature vector;
and taking the display amount corresponding to the target feature vector as the target display amount.
In an optional embodiment, determining the target similarity includes:
determining cosine similarities and Euclidean distance similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
respectively determining the display amounts corresponding to the gentle critical points indicated by the variation trends of the Euclidean distance similarity and the cosine similarity;
and taking, of the Euclidean distance similarity and the cosine similarity, the one whose gentle critical point corresponds to the lower display amount as the target similarity.
In an optional embodiment, the determining, based on the target similarity between the second feature vector and the first feature vector of each first reference video in the first reference video set, whether the video to be identified belongs to the video of the target type includes:
determining whether a target similarity larger than a preset threshold exists in the obtained target similarities;
and if so, determining that the video to be identified belongs to the target type.
In a second aspect, an embodiment of the present disclosure further provides a video identification apparatus, including:
the first acquisition module is used for acquiring a video to be identified; acquiring a first feature vector of the video to be identified; the first feature vector comprises attribute features of a group of viewing users of the video to be identified;
the second obtaining module is used for obtaining second feature vectors corresponding to a plurality of first reference videos in the first reference video set respectively; the first reference video is a target type video; the first reference video set comprises a plurality of first reference videos in at least one cluster category meeting a clustering condition, wherein the cluster category meeting the clustering condition is that: the number of the first reference videos included in the cluster category is greater than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is greater than a set threshold;
a first determining module, configured to determine whether the video to be identified belongs to the video of the target type based on a target similarity between the second feature vector and the first feature vector of each first reference video in the first reference video set.
In an optional implementation manner, the apparatus further includes a second determining module, configured to:
acquiring second feature vectors of a plurality of first reference videos of the target type;
inputting the second feature vector into a pre-trained target density clustering model to obtain a successfully clustered first reference video set; the target density clustering model is determined from density clustering models configured with different model parameters according to a plurality of second reference videos of target types and a plurality of training video samples carrying type labels.
In an alternative embodiment, the apparatus further comprises a training module for:
acquiring third feature vectors of a plurality of second reference videos of the target type and fourth feature vectors of a plurality of training video samples;
inputting the third feature vectors into density clustering models configured with different model parameters respectively to obtain a second reference video set successfully clustered under different model parameters;
for any second reference video set, determining a test result of the training video sample based on a target similarity between the third feature vector corresponding to each second reference video in the second reference video set and the fourth feature vector corresponding to the training video sample; the test result indicates whether the training video sample belongs to the video of the target type;
and determining target model parameters corresponding to the density clustering model based on the test results of the training video samples under different model parameters and the type labels corresponding to the training video samples, and taking the density clustering model with the configured target model parameters as the trained target density clustering model.
In an optional embodiment, the first feature vector is a feature vector of the video to be identified when a target display amount is reached;
the apparatus also includes a third determining module to:
acquiring fifth feature vectors of a third reference video of the target type when the third reference video reaches a plurality of different display amounts;
determining target similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
taking the fifth feature vector corresponding to the gentle critical point indicated by the variation trend of the target similarity as a target feature vector;
and taking the display amount corresponding to the target feature vector as the target display amount.
In an optional implementation manner, when determining the target similarity, the third determining module is configured to:
determining cosine similarities and Euclidean distance similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
respectively determining the display amounts corresponding to the gentle critical points indicated by the variation trends of the Euclidean distance similarity and the cosine similarity;
and taking, of the Euclidean distance similarity and the cosine similarity, the one whose gentle critical point corresponds to the lower display amount as the target similarity.
In an optional implementation manner, the first determining module is specifically configured to:
determining whether a target similarity larger than a preset threshold exists in the obtained target similarities;
and if so, determining that the video to be identified belongs to the target type.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions being executable by the processor to perform the steps of the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the first aspect or any one of its possible implementation manners.
The video identification method and the video identification device provided by the embodiment of the disclosure acquire a video to be identified; acquiring a first feature vector of the video to be identified; the first feature vector includes attribute features of a group of viewing users of the video to be identified; acquiring second feature vectors respectively corresponding to a plurality of first reference videos in a first reference video set; the first reference video is a target type video; the first reference video set comprises a plurality of first reference videos in at least one clustering category meeting a clustering condition, wherein the clustering category meeting the clustering condition means that: the number of the first reference videos included in the cluster category is greater than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is greater than a set threshold; and determining whether the video to be identified belongs to the video of the target type or not based on the target similarity between the second feature vector of each first reference video in the first reference video set and the first feature vector. The embodiment of the disclosure takes the first reference video meeting the clustering condition as a sample, compares the corresponding second feature vector with the corresponding first feature vector of the video to be identified to obtain the target similarity, and judges whether the video to be identified belongs to the video of the target type according to the target similarity.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 shows a flow chart of a video recognition method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a variation curve of cosine similarity provided by an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating a variation curve of Euclidean distance similarity provided by the embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a specific method for training a target density clustering model in the video recognition method provided by the embodiment of the present disclosure;
fig. 5 shows a flow chart of another video identification method provided by the embodiments of the present disclosure;
fig. 6 is a schematic diagram illustrating a video recognition apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed of the type, scope of use, usage scenarios, etc. of the personal information involved, and the user's authorization should be obtained in an appropriate manner in accordance with relevant laws and regulations.
In order to solve the technical problem of low video identification precision in the prior art, the present disclosure provides a video identification method and device: first reference videos that meet a clustering condition are taken as samples, the second feature vectors corresponding to them are compared with the first feature vector corresponding to a video to be identified to obtain target similarities, and whether the video to be identified belongs to a video of the target type is judged according to the target similarities.
To facilitate understanding of the present embodiment, first, a video identification method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the video identification method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: terminal equipment or servers or other processing devices. In some possible implementations, the video recognition method may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a video identification method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S104, where:
s101, obtaining a video to be identified.
The video to be identified may be a short video, a long video, or the like, and may carry a type tag. The embodiment of the present disclosure may determine whether the actual type of the video to be identified is consistent with its type tag, where the type of a video refers to the type of content it contains, such as popular science, movies, songs, or general lifestyle.
S102, acquiring a first feature vector of the video to be identified; the first feature vector includes attribute features of a group of viewing users of the video to be identified.
A video can be published on a video platform for users to watch, and each time a new user watches it, its display amount increases. Each video also corresponds to a feature vector. Because the content of a video is too complex to express directly with a feature vector, while the viewing users of a given type of video are relatively fixed, the attribute features of the video's viewing user group can stand in for the content features of the video; that is, the video can be represented by the attribute features of its viewing user group. These attribute features can express characteristics of the individual users or of the group watching the video, such as viewing-history features and user-group portrait features.
Here, as with the description of personal information above, when acquiring and using the attribute features of the viewing user group, the user should be informed of the type, scope of use, usage scenarios, etc. of the personal information involved, and the user's authorization should be obtained in an appropriate manner in accordance with relevant laws and regulations.
In the embodiment of the present disclosure, the feature vector of the video to be identified is referred to as the first feature vector.
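As a purely illustrative sketch of this idea, the first feature vector can be built by aggregating per-viewer attribute vectors into a single video-level vector. The Python snippet below assumes each viewing user is already encoded as a fixed-length numeric vector; the mean-pooling aggregation is an assumption for illustration, since the disclosure does not specify how the group features are combined.

```python
import numpy as np

def video_feature_vector(viewer_attributes: list) -> np.ndarray:
    """Aggregate per-viewer attribute vectors into one video-level vector.

    Each element of viewer_attributes is a fixed-length numeric vector
    encoding one viewing user's attributes (e.g., viewing-history and
    user-portrait features). Mean pooling is an illustrative assumption;
    the disclosure does not specify the aggregation.
    """
    return np.stack(viewer_attributes).mean(axis=0)
```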
The first feature vector may be the first feature vector when the video to be identified reaches the target display amount. Because the first feature vector contains the attribute features of the viewing user group corresponding to the video to be identified, as the display amount rises the corresponding viewing users change, and newly added viewing users are of different types, so the first feature vector corresponding to the viewing user group also changes. However, after the display amount increases to a certain degree, the user group viewing the video gradually stabilizes and the change of the first feature vector tends to become fixed; when the first feature vector tends to be fixed, it reflects the audience of the video to be identified more accurately.
The target display amount is the display amount corresponding to the point at which the first feature vector tends to become fixed.
Specifically, the target display amount may be determined by the following steps:
acquiring fifth feature vectors of a third reference video of the target type when the third reference video reaches a plurality of different display amounts;
determining target similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
taking the fifth feature vector corresponding to the gentle critical point indicated by the variation trend of the target similarity as a target feature vector;
and taking the display amount corresponding to the target feature vector as the target display amount.
In order to determine the display amount at which the feature vector tends to become fixed, fifth feature vectors of a third reference video of the target type at a plurality of different display amounts may be obtained first. The content type of the third reference video may be the same as that of the first reference video, so that the feature vectors corresponding to reference videos of the same content type are more similar.
Then, the fifth feature vector of the third reference video at a preset display amount may be determined. The preset display amount may be one of the plurality of different display amounts, for example the one with the lowest value: if the display amounts include 50, 100, 150, and 200, then 50 may be taken as the preset display amount.
The target similarity reflects the variation trend of the feature vector: when the target similarity stabilizes, the feature vector has also stabilized. The variation trend of the target similarity can be represented by a change curve; the gentle critical point of the curve can be determined, and the corresponding display amount taken as the target display amount.
The similarity between feature vectors can be of various kinds, such as Euclidean distance similarity, cosine similarity, the Jaccard similarity coefficient, and the Pearson correlation coefficient. The target display amounts corresponding to different kinds of similarity may differ, and in order to identify the video to be identified as early as possible, the similarity with the lowest target display amount can be selected as the target similarity.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic diagram of the change curve of the cosine similarity provided in the embodiment of the present disclosure, and fig. 3 is a schematic diagram of the change curve of the Euclidean distance similarity. As shown in the two figures, in the case where the content type of the third reference video is a general-lifestyle short video, the cosine similarity is substantially stable after the display amount reaches 300, while the Euclidean distance similarity is substantially stable only after the display amount reaches 6000.
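The following sketch illustrates one way the gentle critical point could be located programmatically. It assumes feature vectors sampled at increasing display amounts and a flatness tolerance `tol`; both the cosine-only comparison and the tolerance value are illustrative assumptions, not specifics from the disclosure.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def target_display_amount(amounts, vectors, tol=0.01):
    """Search for the gentle critical point of the similarity curve.

    amounts: increasing display amounts, e.g. [50, 100, 150, ...];
    vectors[i]: the reference video's feature vector at amounts[i].
    The baseline is the lowest (preset) display amount; the curve is
    considered flat once consecutive similarities change by less than
    tol, an assumed tolerance.
    """
    base = vectors[0]
    sims = [cosine_sim(base, v) for v in vectors[1:]]
    for i in range(1, len(sims)):
        if abs(sims[i] - sims[i - 1]) < tol:
            return amounts[i + 1]  # sims[i] corresponds to amounts[i + 1]
    return amounts[-1]  # never flattened within the observed range
```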
S103, obtaining second feature vectors corresponding to a plurality of first reference videos in the first reference video set respectively; the first reference video is a video of a target type; the first reference video set comprises a plurality of first reference videos in at least one clustering category meeting a clustering condition, wherein the clustering category meeting the clustering condition means that: the number of the first reference videos included in the cluster category is larger than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is larger than a set threshold.
The first reference video set is obtained by clustering a plurality of first reference videos. A first reference video that does not meet the clustering condition cannot be successfully clustered; that is, no first reference video that fails the clustering condition exists in the first reference video set.
The plurality of first reference videos may be videos of the target type, and their content type may be the same as the type tag carried by the video to be identified, which is used to judge whether that type tag is correct. However, because the feature vectors come from the viewing behavior of a user group, they cannot directly represent the content and quality of a video: even if the first feature vector of the video to be identified is similar to the second feature vector of a first reference video, the video to be identified may still not be of the target type. If the second feature vectors corresponding to all first reference videos were used directly for video identification, misjudgments could result.
Therefore, the plurality of first reference videos can be clustered. By performing density clustering on the feature vectors, the videos corresponding to high-density regions in the feature vector space are obtained, and these videos, i.e., the videos that meet the clustering condition, are taken as the first reference video set. Research shows that a user group watching videos that meet the clustering condition is generally more interested in those videos and has a more vertical interest preference for them; in other words, the videos watched by the user group corresponding to such feature vectors most probably belong to the target type. By contrast, the preferred video types of user groups corresponding to feature vectors in low-density regions are broader and are not helpful for video identification. Taking the videos that meet the clustering condition as the first reference video set and using their corresponding second feature vectors for video identification can therefore improve the accuracy of video identification.
For example, each second feature vector may be placed in a vector space and the similarity between second feature vectors determined, where the similarity may be based on the Euclidean distance between two feature vectors. If the Euclidean distance between two second feature vectors is smaller than a preset distance, i.e., the similarity between the two is greater than the set threshold, the two second feature vectors are neighbors of each other. If the number of neighbors of a second feature vector is greater than the preset number, that feature vector and its neighbors can be regarded as feature vectors under one cluster category, and the region of the vector space where they lie is a high-density region. A second feature vector whose number of neighbors is smaller than or equal to the preset number does not satisfy the clustering condition, and the corresponding first reference video can be removed from the plurality of first reference videos; the first reference videos corresponding to the second feature vectors that meet the clustering condition form the first reference video set.
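This neighbor-distance and neighbor-count rule corresponds to standard density clustering of the DBSCAN family. Below is a minimal sketch using scikit-learn's DBSCAN, in which `eps` plays the role of the preset distance and `min_samples` the preset number; the concrete parameter values are placeholders, not taken from the disclosure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def build_reference_set(videos, second_vectors, eps=0.5, min_samples=5):
    """Keep only the reference videos that fall in dense cluster categories.

    eps corresponds to the preset maximum Euclidean distance between
    neighboring feature vectors, and min_samples to the preset neighbor
    count; both values here are placeholders. DBSCAN labels low-density
    points -1 (noise); the corresponding videos are removed, and the
    remaining videos form the first reference video set.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        np.stack(second_vectors))
    return [video for video, label in zip(videos, labels) if label != -1]
```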
In a specific implementation process, the second feature vectors of a plurality of first reference videos of a target type may be obtained first, then the second feature vectors corresponding to the plurality of first reference videos are input into a pre-trained target density clustering model, and the output successfully clustered first reference video set is obtained through calculation of the target density clustering model. The target density clustering model may be determined from density clustering models configured with different model parameters according to a plurality of second reference videos of the target type and a plurality of training video samples carrying type labels.
Referring to fig. 4, a flowchart of a specific method for training a target density clustering model in the video recognition method provided in the embodiment of the present disclosure specifically includes:
s401, third feature vectors of a plurality of second reference videos of the target type and fourth feature vectors of a plurality of training video samples are obtained.
The second reference video may correspond to the first reference video, and the training video sample may correspond to the video to be recognized. The third feature vector and the fourth feature vector may be feature vectors of the corresponding videos when the target display amount is reached.
S402, inputting the third feature vectors into density clustering models configured with different model parameters respectively to obtain a second reference video set successfully clustered under different model parameters.
The third feature vectors are input into density clustering models configured with different model parameters, respectively, to obtain, under each set of model parameters, the successfully clustered second reference video set output by the model. The model parameters may include the maximum Euclidean distance used to determine whether two feature vectors are neighbors, and the minimum number of feature vectors required for mutually neighboring feature vectors to form a cluster category.
By testing the density clustering models with different maximum Euclidean distances and minimum feature vector quantities, the model parameters with the highest accuracy can be determined.
S403, for any one second reference video set, determining a test result of the training video sample based on the target similarity between the third feature vector corresponding to each second reference video in the second reference video set and the fourth feature vector corresponding to the training video sample; the test result indicates whether the training video sample belongs to the target type of video.
In this step, the training video samples are identified using the second reference video sets obtained under different model parameters, so that the reliability of each second reference video set can be verified and, further, the model parameters with higher accuracy can be selected.
S404, determining target model parameters corresponding to the density clustering model based on the test results of the training video samples under different model parameters and the type labels corresponding to the training video samples, and taking the density clustering model with the configured target model parameters as the trained target density clustering model.
In this step, the test results of each training video sample under different model parameters are verified against the pre-configured type labels; the model parameters yielding the highest recognition accuracy on the training video samples can be selected as the target model parameters, and the density clustering model configured with the target model parameters is taken as the trained target density clustering model.
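A hypothetical sketch of this parameter selection follows, reusing the `build_reference_set` helper from the clustering sketch above. The parameter grids, the cosine similarity measure, and the decision threshold are all illustrative assumptions rather than values from the disclosure.

```python
import itertools
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_model_params(third_vectors, sample_vectors, sample_labels,
                        eps_grid=(0.3, 0.5, 0.8),
                        min_samples_grid=(3, 5, 10),
                        sim_threshold=0.9):
    """Grid-search the density clustering parameters (S402-S404).

    For each (eps, min_samples) pair, cluster the second reference
    videos, classify each training sample as target-type when its
    maximum cosine similarity to a clustered reference vector exceeds
    sim_threshold, and keep the pair with the best label accuracy.
    """
    best_params, best_acc = None, -1.0
    for eps, min_samples in itertools.product(eps_grid, min_samples_grid):
        kept = build_reference_set(third_vectors, third_vectors,
                                   eps=eps, min_samples=min_samples)
        if not kept:
            continue  # clustering failed entirely under these parameters
        preds = [max(cosine_sim(s, r) for r in kept) > sim_threshold
                 for s in sample_vectors]
        acc = float(np.mean([p == y for p, y in zip(preds, sample_labels)]))
        if acc > best_acc:
            best_params, best_acc = (eps, min_samples), acc
    return best_params
```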
S104, determining whether the video to be identified belongs to the video of the target type or not based on the target similarity between the second feature vector and the first feature vector of each first reference video in the first reference video set.
In this step, it may be determined whether, among the obtained target similarities, there exists a target similarity greater than a preset threshold. If so, the user group that likes the video to be identified also likes the first reference video corresponding to that target similarity, and it can be determined that the video to be identified belongs to the target type, i.e., that it is a high-quality video.
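A minimal sketch of this decision step, assuming cosine similarity as the target similarity and a placeholder threshold:

```python
import numpy as np

def is_target_type(first_vector, second_vectors, threshold=0.9):
    """S104: the video to be identified belongs to the target type if any
    target similarity between its first feature vector and a reference
    second feature vector exceeds the preset threshold (0.9 is a
    placeholder value, not taken from the disclosure)."""
    def cosine_sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return any(cosine_sim(first_vector, s) > threshold
               for s in second_vectors)
```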
The video identification method provided by the embodiment of the disclosure comprises the following steps: acquiring a video to be identified; acquiring a first feature vector of the video to be identified; the first feature vector includes attribute features of a group of viewing users of the video to be identified; acquiring second feature vectors corresponding to a plurality of first reference videos in a first reference video set respectively; the first reference video is a target type video; the first reference video set comprises a plurality of first reference videos in at least one clustering category meeting a clustering condition, wherein the clustering category meeting the clustering condition means that: the number of the first reference videos included in the cluster type is larger than a preset number, and the similarity between the second characteristic vectors of the first reference videos in the cluster type is larger than a set threshold value; and determining whether the video to be identified belongs to the video of the target type or not based on the target similarity between the second feature vector of each first reference video in the first reference video set and the first feature vector. According to the embodiment of the disclosure, the first reference video meeting the clustering condition is taken as a sample, the corresponding second characteristic vector is compared with the corresponding first characteristic vector of the video to be identified to obtain the target similarity, and whether the video to be identified belongs to the video of the target type is judged according to the target similarity.
Referring to fig. 5, which is a flowchart of another video identification method provided in the embodiment of the present disclosure: a video to be identified whose type tag is the target type may be obtained first; then, when the display amount of the video to be identified reaches the target display amount, its feature vector, which includes features of the viewing users, is obtained. Next, the feature vectors of the reference videos obtained by clustering with the clustering model may be obtained, and the target similarities between them and the feature vector of the video to be identified computed; finally, whether the video to be identified is a video of the target type is judged based on the determined target similarities. The clustering model can be retrained periodically according to the clustering results and their corresponding type labels, improving the accuracy of the clustering model.
It will be understood by those of skill in the art that in the above method of the present embodiment, the order of writing the steps does not imply a strict order of execution and does not impose any limitations on the implementation, as the order of execution of the steps should be determined by their function and possibly inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a video identification apparatus corresponding to the video identification method, and as the principle of solving the problem of the apparatus in the embodiment of the present disclosure is similar to the video identification method in the embodiment of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and the repeated parts are not described again.
Referring to fig. 6, a schematic diagram of a video identification apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes:
a first obtaining module 610, configured to obtain a video to be identified; acquiring a first feature vector of the video to be identified; the first feature vector comprises attribute features of a group of viewing users of the video to be identified;
a second obtaining module 620, configured to obtain second feature vectors corresponding to multiple first reference videos in the first reference video set, respectively; the first reference video is a target type video; the first reference video set comprises a plurality of first reference videos in at least one clustering category meeting a clustering condition, wherein the clustering category meeting the clustering condition means that: the number of the first reference videos included in the cluster type is larger than a preset number, and the similarity between the second characteristic vectors of the first reference videos in the cluster type is larger than a set threshold value;
a first determining module 630, configured to determine whether the video to be identified belongs to the video of the target type based on a target similarity between the second feature vector and the first feature vector of each first reference video in the first reference video set.
In an optional embodiment, the apparatus further comprises a second determining module 640, configured to:
acquiring second feature vectors of a plurality of first reference videos of the target type;
inputting the second feature vector into a pre-trained target density clustering model to obtain a successfully clustered first reference video set; the target density clustering model is determined from density clustering models configured with different model parameters according to a plurality of second reference videos of target types and a plurality of training video samples carrying type labels.
In an alternative embodiment, the apparatus further comprises a training module 650 for:
acquiring third feature vectors of a plurality of second reference videos of the target type and fourth feature vectors of a plurality of training video samples;
inputting the third feature vectors into density clustering models configured with different model parameters respectively to obtain a second reference video set successfully clustered under different model parameters;
for any second reference video set, determining a test result of the training video sample based on a target similarity between the third feature vector corresponding to each second reference video in the second reference video set and the fourth feature vector corresponding to the training video sample; the test result indicates whether the training video sample belongs to the video of the target type;
and determining target model parameters corresponding to the density clustering model based on the test results of the training video samples under different model parameters and the type labels corresponding to the training video samples, and taking the density clustering model with the configured target model parameters as the trained target density clustering model.
In an optional embodiment, the first feature vector is a feature vector of the video to be identified when a target display amount is reached;
the apparatus further comprises a third determining module 660 configured to:
acquiring fifth feature vectors of a third reference video of the target type when the third reference video reaches a plurality of different display amounts;
determining target similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
taking the fifth feature vector corresponding to the gentle critical point indicated by the variation trend of the target similarity as a target feature vector;
and taking the display amount corresponding to the target feature vector as the target display amount.
In an optional embodiment, when determining the target similarity, the third determining module 660 is configured to:
determining cosine similarities and Euclidean distance similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
respectively determining the display amounts corresponding to the gentle critical points indicated by the variation trends of the Euclidean distance similarity and the cosine similarity;
and taking, of the Euclidean distance similarity and the cosine similarity, the one whose gentle critical point corresponds to the lower display amount as the target similarity.
In an optional implementation manner, the first determining module 630 is specifically configured to:
determining whether a target similarity larger than a preset threshold exists in the obtained target similarities;
and if so, determining that the video to be identified belongs to the target type.
The description of the processing flow of each module in the apparatus and the interaction flow between the modules may refer to the relevant description in the above method embodiments, and will not be described in detail here.
Corresponding to the video identification method in fig. 1, an embodiment of the present disclosure further provides an electronic device 700, as shown in fig. 7, which is a schematic structural diagram of the electronic device 700 provided in the embodiment of the present disclosure, and includes:
a processor 71, a memory 72, and a bus 73. The memory 72 is used for storing execution instructions and includes an internal memory 721 and an external memory 722. The internal memory 721 temporarily stores operation data in the processor 71 and data exchanged with the external memory 722, such as a hard disk; the processor 71 exchanges data with the external memory 722 through the internal memory 721. When the electronic device 700 operates, the processor 71 communicates with the memory 72 through the bus 73, causing the processor 71 to execute the following instructions:
acquiring a video to be identified;
acquiring a first feature vector of the video to be identified; the first feature vector comprises attribute features of a group of viewing users of the video to be identified;
acquiring second feature vectors respectively corresponding to a plurality of first reference videos in a first reference video set; the first reference video is a video of a target type; the first reference video set comprises a plurality of first reference videos in at least one clustering category meeting a clustering condition, wherein the clustering category meeting the clustering condition means that: the number of the first reference videos included in the cluster category is greater than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is greater than a set threshold;
and determining whether the video to be identified belongs to the video of the target type or not based on the target similarity between the second feature vector of each first reference video in the first reference video set and the first feature vector respectively.
In an alternative embodiment, the processor 71 is further configured to:
acquiring second feature vectors of a plurality of first reference videos of the target type;
clustering the first reference video based on the second feature vector to obtain a successfully clustered first reference video set, wherein the first reference video set comprises a plurality of first reference videos in at least one cluster category meeting a clustering condition, and the cluster category meeting the clustering condition means that: the number of the first reference videos included in the cluster category is larger than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is larger than a set threshold.
In an alternative embodiment, the processor 71 is further configured to:
acquiring third feature vectors of a plurality of second reference videos of the target type and fourth feature vectors of a plurality of training video samples;
inputting the third feature vectors into density clustering models configured with different model parameters respectively to obtain a second reference video set successfully clustered under different model parameters;
for any second reference video set, determining a test result of the training video sample based on a target similarity between the third feature vector corresponding to each second reference video in the second reference video set and a fourth feature vector corresponding to the training video sample; the test result indicates whether the training video sample belongs to the video of the target type;
and determining target model parameters corresponding to the density clustering model based on the test results of the training video samples under different model parameters and the type labels corresponding to the training video samples, and taking the density clustering model with the target model parameters as a trained target density clustering model.
In an alternative embodiment, in the instructions executed by the processor 71, the first feature vector is a feature vector of the video to be identified when a target display amount is reached; the target display amount is determined by:
acquiring fifth feature vectors of a third reference video of the target type when the third reference video reaches a plurality of different display amounts;
determining target similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
taking the fifth feature vector corresponding to the gentle critical point indicated by the variation trend of the target similarity as a target feature vector;
and taking the display amount corresponding to the target feature vector as the target display amount.
In an alternative embodiment, the determining the target similarity in the instructions executed by the processor 71 includes:
determining cosine similarities and Euclidean distance similarities between the fifth feature vector corresponding to a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
respectively determining the display amounts corresponding to the gentle critical points indicated by the variation trends of the Euclidean distance similarity and the cosine similarity;
and taking, of the Euclidean distance similarity and the cosine similarity, the one whose gentle critical point corresponds to the lower display amount as the target similarity.
In an optional embodiment, in the instructions executed by the processor 71, the determining whether the video to be identified belongs to the video of the target type based on the target similarity between the second feature vector and the first feature vector of each first reference video in the first reference video set respectively includes:
determining whether a target similarity larger than a preset threshold exists in the obtained target similarities;
and if so, determining that the video to be identified belongs to the target type.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the video identification method in the above-mentioned method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the video identification method in the foregoing method embodiments, which may be specifically referred to in the foregoing method embodiments and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some communication interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of protection of the present disclosure is not limited to them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art may, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure and shall all be covered by its scope of protection. Therefore, the scope of protection of the present disclosure shall be subject to the scope of protection of the claims.

Claims (10)

1. A video identification method, comprising:
acquiring a video to be identified;
acquiring a first feature vector of the video to be identified; the first feature vector comprises attribute features of a group of viewing users of the video to be identified;
acquiring second feature vectors respectively corresponding to a plurality of first reference videos in a first reference video set; the first reference videos are videos of a target type; the first reference video set comprises the first reference videos in at least one cluster category meeting a clustering condition, wherein a cluster category meets the clustering condition when: the number of first reference videos included in the cluster category is greater than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is greater than a set threshold;
and determining whether the video to be identified is a video of the target type based on the target similarities between the first feature vector and the second feature vectors of the first reference videos in the first reference video set.
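As a concrete illustration of this claim (a sketch under stated assumptions, not the authoritative implementation), the first feature vector could be built by mean-pooling the attribute features of the viewing users; mean pooling, cosine similarity as the target similarity, and all names below are assumptions:

```python
import numpy as np

def first_feature_vector(viewer_attribute_features: list[np.ndarray]) -> np.ndarray:
    """Aggregate the per-viewer attribute features of a video's viewing-user
    group into a single vector (mean pooling is an assumed aggregation)."""
    return np.stack(viewer_attribute_features).mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(viewer_attribute_features, reference_second_vectors, threshold=0.8):
    """Compare the candidate video's first feature vector against the second
    feature vectors of the clustered reference set; any hit marks the video
    as belonging to the target type."""
    first_vec = first_feature_vector(viewer_attribute_features)
    return any(cosine(first_vec, ref) > threshold for ref in reference_second_vectors)
```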
2. The method of claim 1, wherein the first reference video set is determined according to the following steps:
acquiring second feature vectors of a plurality of first reference videos of the target type;
inputting the second feature vectors into a pre-trained target density clustering model to obtain the successfully clustered first reference video set; the target density clustering model is selected from density clustering models configured with different model parameters according to a plurality of second reference videos of the target type and a plurality of training video samples carrying type labels.
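For illustration, a density clustering model of this kind could be realized with DBSCAN; the sketch below (scikit-learn, with illustrative parameter values) also applies the clustering condition from claim 1 (cluster size above a preset number, intra-cluster similarity above a set threshold). DBSCAN and every concrete value here are assumptions, not the model prescribed by the disclosure:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_similarity

def build_first_reference_set(second_vectors: np.ndarray,
                              eps: float = 0.3, min_samples: int = 5,
                              preset_number: int = 10,
                              sim_threshold: float = 0.7) -> list[np.ndarray]:
    """Cluster reference-video feature vectors with DBSCAN and keep only the
    clusters that satisfy the clustering condition (size and similarity)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(second_vectors)
    kept = []
    for label in set(labels) - {-1}:              # label -1 marks noise points
        members = second_vectors[labels == label]
        if len(members) <= preset_number:
            continue
        sims = cosine_similarity(members)
        # Mean pairwise similarity, excluding the diagonal of self-similarities.
        n = len(members)
        mean_sim = (sims.sum() - n) / (n * n - n)
        if mean_sim > sim_threshold:
            kept.append(members)
    return kept  # second feature vectors of the first reference video set
```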
3. The method of claim 2, wherein the target density clustering model is trained according to the following steps:
acquiring third feature vectors of a plurality of second reference videos of the target type and fourth feature vectors of a plurality of training video samples;
inputting the third feature vectors into density clustering models configured with different model parameters, respectively, to obtain the second reference video sets successfully clustered under the different model parameters;
for any second reference video set, determining a test result for each training video sample based on the target similarities between the third feature vectors corresponding to the second reference videos in the second reference video set and the fourth feature vector corresponding to the training video sample, the test result indicating whether the training video sample belongs to the target type;
and determining target model parameters for the density clustering model based on the test results of the training video samples under the different model parameters and the type labels corresponding to the training video samples, and taking the density clustering model configured with the target model parameters as the trained target density clustering model.
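To make the parameter selection of this claim concrete, here is a sketch that grid-searches DBSCAN parameters and scores each candidate by comparing the samples' test results with their type labels; the parameter grids, the similarity threshold, and the use of the F1 score as the selection criterion are all assumptions (the claim only requires comparison with the labels):

```python
import numpy as np
from itertools import product
from sklearn.cluster import DBSCAN
from sklearn.metrics import f1_score
from sklearn.metrics.pairwise import cosine_similarity

def test_samples(reference_vectors: np.ndarray, sample_vectors: np.ndarray,
                 threshold: float = 0.8) -> np.ndarray:
    """Test result per training sample: 1 if any clustered reference vector's
    similarity exceeds the threshold, else 0."""
    sims = cosine_similarity(sample_vectors, reference_vectors)
    return (sims.max(axis=1) > threshold).astype(int)

def select_target_model_params(third_vectors: np.ndarray,
                               fourth_vectors: np.ndarray,
                               type_labels: np.ndarray,
                               eps_grid=(0.2, 0.3, 0.5),
                               min_samples_grid=(3, 5, 10)):
    """Pick the DBSCAN parameters whose test results best match the labels."""
    best_params, best_score = None, -1.0
    for eps, min_samples in product(eps_grid, min_samples_grid):
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(third_vectors)
        clustered = third_vectors[labels != -1]   # successfully clustered videos
        if len(clustered) == 0:
            continue
        preds = test_samples(clustered, fourth_vectors)
        score = f1_score(type_labels, preds)
        if score > best_score:
            best_params, best_score = (eps, min_samples), score
    return best_params
```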
4. The method according to claim 1, wherein the first feature vector is the feature vector of the video to be identified when a target display amount is reached;
the target display amount is determined by:
acquiring fifth feature vectors of a third reference video of the target type at each of a plurality of different display amounts;
determining the target similarity between the fifth feature vector corresponding to the third reference video at a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
taking the fifth feature vector corresponding to the gentle critical point indicated by the variation trend of the target similarity as a target feature vector;
and taking the display amount corresponding to the target feature vector as the target display amount.
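One assumed formalization of the gentle critical point (the disclosure describes it only as the point where the variation trend of the similarity levels off): with display amounts $v_1 < v_2 < \dots < v_n$ and target similarity $s(v_k)$ at each $v_k$, take

$$v^{*} = \min\{\, v_k : |s(v_{k+1}) - s(v_k)| < \varepsilon \,\}$$

for a small tolerance $\varepsilon > 0$; the fifth feature vector at $v^{*}$ is then the target feature vector, and $v^{*}$ the target display amount.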
5. The method of claim 4, wherein determining the target similarity comprises:
determining the cosine similarity and the Euclidean distance similarity between the fifth feature vector corresponding to the third reference video at a preset display amount among the plurality of different display amounts and the fifth feature vectors corresponding to each of the other display amounts;
determining, for the Euclidean distance similarity and the cosine similarity respectively, the display amount corresponding to the gentle critical point indicated by the similarity's variation trend;
and taking, as the target similarity, whichever of the Euclidean distance similarity and the cosine similarity reaches its gentle critical point at the lower display amount.
6. The method according to claim 1, wherein determining whether the video to be identified is a video of the target type based on the target similarities between the first feature vector and the second feature vectors of the first reference videos in the first reference video set comprises:
determining whether any of the obtained target similarities is greater than a preset threshold;
and if so, determining that the video to be identified belongs to the target type.
7. A video identification apparatus, comprising:
a first acquisition module, configured to acquire a video to be identified and a first feature vector of the video to be identified, the first feature vector comprising attribute features of a group of viewing users of the video to be identified;
a second acquisition module, configured to acquire second feature vectors respectively corresponding to a plurality of first reference videos in a first reference video set; the first reference videos are videos of a target type; the first reference video set comprises the first reference videos in at least one cluster category meeting a clustering condition, wherein a cluster category meets the clustering condition when: the number of first reference videos included in the cluster category is greater than a preset number, and the similarity between the second feature vectors of the first reference videos in the cluster category is greater than a set threshold;
and a first determining module, configured to determine whether the video to be identified is a video of the target type based on the target similarities between the first feature vector and the second feature vectors of the first reference videos in the first reference video set.
8. The apparatus of claim 7, further comprising a second determining module configured to:
acquire second feature vectors of a plurality of first reference videos of the target type;
and input the second feature vectors into a pre-trained target density clustering model to obtain the successfully clustered first reference video set; the target density clustering model is selected from density clustering models configured with different model parameters according to a plurality of second reference videos of the target type and a plurality of training video samples carrying type labels.
9. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the video identification method of any one of claims 1 to 6.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the video identification method of any one of claims 1 to 6.
CN202210618307.2A 2022-06-01 2022-06-01 Video identification method and device Pending CN114926771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210618307.2A CN114926771A (en) 2022-06-01 2022-06-01 Video identification method and device

Publications (1)

Publication Number Publication Date
CN114926771A true CN114926771A (en) 2022-08-19

Family

ID=82813607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210618307.2A Pending CN114926771A (en) 2022-06-01 2022-06-01 Video identification method and device

Country Status (1)

Country Link
CN (1) CN114926771A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647293A (en) * 2018-05-07 2018-10-12 广州虎牙信息科技有限公司 Video recommendation method, device, storage medium and server
CN109168044A (en) * 2018-10-11 2019-01-08 北京奇艺世纪科技有限公司 A kind of determination method and device of video features
CN109214374A (en) * 2018-11-06 2019-01-15 北京达佳互联信息技术有限公司 Video classification methods, device, server and computer readable storage medium
WO2021008026A1 (en) * 2019-07-18 2021-01-21 平安科技(深圳)有限公司 Video classification method and apparatus, computer device and storage medium
CN110502665A (en) * 2019-08-27 2019-11-26 北京百度网讯科技有限公司 Method for processing video frequency and device
CN110941738A (en) * 2019-11-27 2020-03-31 北京奇艺世纪科技有限公司 Recommendation method and device, electronic equipment and computer-readable storage medium
CN113573097A (en) * 2020-04-29 2021-10-29 北京达佳互联信息技术有限公司 Video recommendation method and device, server and storage medium
CN112131430A (en) * 2020-09-24 2020-12-25 腾讯科技(深圳)有限公司 Video clustering method and device, storage medium and electronic equipment
CN112364202A (en) * 2020-11-06 2021-02-12 上海众源网络有限公司 Video recommendation method and device and electronic equipment
CN112487300A (en) * 2020-12-18 2021-03-12 上海众源网络有限公司 Video recommendation method and device, electronic equipment and storage medium
CN112822526A (en) * 2020-12-30 2021-05-18 咪咕文化科技有限公司 Video recommendation method, server and readable storage medium
CN113705299A (en) * 2021-03-16 2021-11-26 腾讯科技(深圳)有限公司 Video identification method and device and storage medium
CN114299321A (en) * 2021-08-04 2022-04-08 腾讯科技(深圳)有限公司 Video classification method, device, equipment and readable storage medium
CN113836348A (en) * 2021-09-26 2021-12-24 深圳市易平方网络科技有限公司 Video-based classification label mark processing method, device, terminal and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Qian; CUI Yong; TIAN Jipeng: "Cache replacement strategy based on video similarity in correlated video-on-demand systems", Journal of Zhongyuan University of Technology, no. 04, 31 August 2015 (2015-08-31) *

Similar Documents

Publication Publication Date Title
US9348898B2 (en) Recommendation system with dual collaborative filter usage matrix
CN108280477B (en) Method and apparatus for clustering images
CN108829808B (en) Page personalized sorting method and device and electronic equipment
CN108776676B (en) Information recommendation method and device, computer readable medium and electronic device
CN110795584B (en) User identifier generation method and device and terminal equipment
CN110727868B (en) Object recommendation method, device and computer-readable storage medium
CN110856037B (en) Video cover determination method and device, electronic equipment and readable storage medium
CN107885852B (en) APP recommendation method and system based on APP usage record
CN111125658B (en) Method, apparatus, server and storage medium for identifying fraudulent user
CN111209490A (en) Friend-making recommendation method based on user information, electronic device and storage medium
CN107403311B (en) Account use identification method and device
CN113657087B (en) Information matching method and device
CN111611390B (en) Data processing method and device
CN113656699B (en) User feature vector determining method, related equipment and medium
CN113626638A (en) Short video recommendation processing method and device, intelligent terminal and storage medium
CN113032676A (en) Recommendation method and system based on micro-feedback
CN113159213A (en) Service distribution method, device and equipment
CN110569447B (en) Network resource recommendation method and device and storage medium
CN114926771A (en) Video identification method and device
KR102323424B1 (en) Rating Prediction Method for Recommendation Algorithm Based on Observed Ratings and Similarity Graphs
CN113065025A (en) Video duplicate checking method, device, equipment and storage medium
CN111708908A (en) Video tag adding method and device, electronic equipment and computer-readable storage medium
CN117829968B (en) Service product recommendation method, device and system based on user data analysis
CN117708304B (en) Database question-answering method, equipment and storage medium
CN112000888B (en) Information pushing method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.