CN111212303A - Video recommendation method, server and computer-readable storage medium - Google Patents


Info

Publication number
CN111212303A
CN111212303A (application CN201911401182.2A; granted publication CN111212303B)
Authority
CN
China
Prior art keywords
video
videos
content
similarity
labels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911401182.2A
Other languages
Chinese (zh)
Other versions
CN111212303B (en)
Inventor
李立锋
白保军
徐丽莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201911401182.2A priority Critical patent/CN111212303B/en
Publication of CN111212303A publication Critical patent/CN111212303A/en
Application granted granted Critical
Publication of CN111212303B publication Critical patent/CN111212303B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4662Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms
    • H04N21/4665Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms involving classification methods, e.g. Decision trees
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies


Abstract

The embodiment of the invention relates to the technical field of multimedia and discloses a video recommendation method, a server, and a computer-readable storage medium. In the present invention, the video recommendation method includes: carrying out homogenization classification on each video to be recommended; screening among the videos that belong to the same classification result; and pushing the screened videos to the client. This prevents too many homogeneous videos, i.e., highly similar videos, from being recommended to the user, and so keeps the user's video-browsing experience fresh.

Description

Video recommendation method, server and computer-readable storage medium
Technical Field
The embodiment of the invention relates to the technical field of multimedia, in particular to a video recommendation method, a server and a computer-readable storage medium.
Background
At present, users usually browse short videos through short-video platforms, which select content of interest for the user from a large pool of short videos. In the related art, short videos are recommended or masked according to keywords, titles, and the like; that is, short videos with similar keywords or titles are recommended to a user.
However, the inventors found at least the following problem in the related art: short videos recommended by keyword, title, and the like are typically videos that follow the same formula and tell the same story but are published by different people. The user therefore ends up seeing a series of highly similar, homogeneous videos, which may continuously occupy the recommendation interface. Watching such similar short videos throughout a browsing session leaves the user without any sense of freshness, buries the user in redundant information, and causes fatigue.
Disclosure of Invention
The embodiment of the invention aims to provide a video recommendation method, a server, and a computer-readable storage medium that help avoid recommending too many homogeneous videos to a user, thereby keeping the user's video-browsing experience fresh.
In order to solve the above technical problem, an embodiment of the present invention provides a video recommendation method, including the following steps: carrying out homogenization classification on each video to be recommended; according to the result of the homogenization classification, screening a plurality of videos belonging to the same classification result; and pushing the screened videos to the client.
An embodiment of the present invention further provides a server, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video recommendation method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the video recommendation method described above.
Compared with the prior art, embodiments of the present invention carry out homogenization classification on each video to be recommended, screen among the videos that belong to the same classification result, and push the screened videos to the client. Homogenization classification treats homogeneous videos as a single class, within which the videos are screened before being pushed to the client. That is to say, because the homogeneous videos are screened, too many homogeneous videos, i.e., similar videos, are prevented from being recommended to the user, which keeps the user's browsing experience fresh.
In addition, performing homogenization classification on each video to be recommended includes: acquiring the similarity between the videos to be recommended, where the similarity includes line (dialogue) similarity and/or bridge-segment (plot-trope) similarity; and carrying out homogenization classification on each video according to the similarity. This provides one way to realize homogenization classification: the line similarity and/or bridge-segment similarity between videos enables accurate homogeneous grouping.
In addition, the similarity includes bridge-segment similarity, which may be acquired as follows: acquire the content tags of the videos to be recommended, where a content tag includes at least a target identified in the video and the target's action; infer the inference tags for each video from the content tags and a pre-established knowledge graph, where the knowledge graph stores inference relations between content tags and inference tags, and an inference tag describes inferred content; input the content tags and inference tags into a pre-trained model, which outputs the bridge segment to which each video belongs; and compute the bridge-segment similarity between videos from the bridge segments to which they belong. By combining content tags, such as the targets identified in a video and their actions, with inference tags corresponding to inferred content, the model can output the bridge segments in each video, from which the similarity between videos is then obtained.
In addition, a content tag may further include any one or a combination of the following, as identified in each video: the scene, the background music, the lines, and objects related to the target's action, where an object related to the target's action is an object that changes as the target's action changes. Considering these additional elements that may appear in a video makes the determined bridge segments more accurate, further improving matching accuracy.
In addition, screening among the videos that belong to the same classification result includes: obtaining ranking-weight features of those videos, where the ranking-weight features include publishing-account information and/or browsing counts; determining a ranking weight for each video from its ranking-weight features; and screening the videos according to the ranking weights. A video's publishing-account information and browsing count reflect its popularity to some extent, so ranking weights derived from them can accurately measure each video's popularity. Screening by ranking weight therefore selects within each homogeneous class according to popularity, so that the most popular of the homogeneous videos are recommended to the user.
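As an illustrative, non-limiting sketch of this screening step: the field names and the 0.3/0.7 weighting below are assumptions for illustration, not part of the disclosure.

```python
# Sketch of ranking-weight screening within one homogeneous class: combine
# publishing-account followers and browsing counts into a weight, then keep
# only the top-k videos of the class. Weights and fields are illustrative.

def screen_by_weight(videos, top_k=1):
    def weight(v):
        # Illustrative linear combination of the two ranking-weight features.
        return 0.3 * v["account_followers"] + 0.7 * v["views"]

    ranked = sorted(videos, key=weight, reverse=True)
    return ranked[:top_k]  # recommend only the most popular homogeneous videos
```

A class of homogeneous videos is thus reduced to its most popular members before pushing to the client.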
In addition, after the screened videos are pushed to the client, the method further includes: if a masking operation by the client on a target video is detected, simultaneously masking the videos that belong to the same classification result as the target video. That is, when a user finds a video poor in quality while watching and chooses to mask it, all videos in the same classification result, i.e., other videos with the same formula and the same story, are masked as well, which improves the user's viewing experience.
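A minimal sketch of this class-wide masking, assuming a mapping from video id to classification result is available (all names are illustrative):

```python
# When the client masks a target video, mask every video that shares its
# classification result, i.e., its homogeneous peers.

def mask_with_class(target_id, video_class, masked):
    """video_class maps video id -> classification result; masked is a set
    of already-masked video ids, updated in place and returned."""
    target_class = video_class[target_id]
    for vid, cls in video_class.items():
        if cls == target_class:
            masked.add(vid)  # the target and all its homogeneous peers
    return masked
```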
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
Fig. 1 is a flowchart of a video recommendation method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a manner of obtaining the similarity of bridge sections according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a knowledge-graph according to a second embodiment of the present invention;
fig. 4 is a flowchart of a video recommendation method according to a second embodiment of the present invention;
fig. 5 is a schematic configuration diagram of a server according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth to aid understanding of the present application; however, the technical solution claimed in the present application can be implemented without these details, and with various changes and modifications based on the following embodiments. The embodiments are divided for convenience of description only, do not limit the specific implementation of the present invention, and may be combined and cross-referenced where not contradictory.
The first embodiment of the invention relates to a video recommendation method applied to a server, where the server may be a video recommendation platform that performs video recommendation. Implementation details of the video recommendation method of this embodiment are described below; they are provided for ease of understanding and are not all necessary to implement the present invention.
A flowchart of the video recommendation method in this embodiment may be as shown in fig. 1, and specifically includes:
step 101: and carrying out homogenization classification on each video to be recommended.
The videos to be recommended may be all videos published on a video recommendation platform, such as all short videos on a short-video platform. It can be understood that popular short videos on a platform are often imitated by many people and re-published to the platform, so the platform carries many sets of homogeneous videos that follow a similar formula and plot but are published by different people. Homogenization classification in this step means classifying such homogeneous videos, with their similar formulas and plots, into one category: for example, videos following formula 1 form one category, and videos following formula 2 form another.
Specifically, the homogenization classification may be performed by acquiring the similarity between the videos to be recommended and classifying the videos according to that similarity. For example, one video may be arbitrarily selected as a reference video and the others treated as compared videos; the similarity between the reference video and each compared video is obtained, the compared videos whose similarity to the reference exceeds a preset threshold are selected, and the selected videos are regarded as homogeneous with the reference video and classified into one category. In this way, videos with the same formula or the same story are grouped together.
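The reference-video grouping just described can be sketched as follows; `similarity` stands in for any pairwise similarity measure (line and/or bridge-segment similarity), and the 0.8 threshold is illustrative, not part of the disclosure:

```python
# Group videos so that each group consists of one reference video plus every
# remaining video whose similarity to that reference exceeds the threshold.

def classify_homogeneous(video_ids, similarity, threshold=0.8):
    groups = []
    unassigned = list(video_ids)
    while unassigned:
        reference = unassigned.pop(0)        # arbitrary reference video
        group = [reference]
        for other in unassigned[:]:
            if similarity(reference, other) > threshold:
                group.append(other)          # homogeneous with the reference
                unassigned.remove(other)
        groups.append(group)
    return groups
```

Each returned group is one classification result, i.e., one set of homogeneous videos.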
In one example, the similarity between videos includes line similarity, which may be obtained as follows: perform speech analysis on each video to be recommended, extract the spoken lines in each video, convert them into text lines, compare the text lines across videos, and derive the line similarity between videos from the comparison result. For example, speech analysis is performed on all videos and the spoken lines are converted into text lines through a speech-to-text function. Videos with similar lines can then be grouped using a convolutional Deep Structured Semantic Model (CNN-DSSM) for semantic similarity calculation.
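As a hedged stand-in for the CNN-DSSM semantic matching named above, the sketch below scores two text-line transcripts with a simple bag-of-words cosine; a real implementation would use learned semantic embeddings rather than word counts.

```python
# Toy line-similarity: cosine similarity over bag-of-words vectors built from
# the converted text lines. Whitespace tokenization stands in for a proper
# tokenizer; this is NOT the CNN-DSSM model, only an illustrative substitute.
from collections import Counter
import math

def line_similarity(lines_a, lines_b):
    a, b = Counter(lines_a.split()), Counter(lines_b.split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0
```

A similarity near 1.0 marks near-identical lines; videos scoring above a preset threshold would be grouped as homogeneous.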
In one example, the similarity between videos includes a bridge segment similarity. The bridge segment similarity may be obtained as follows, with reference to fig. 2:
step 201: and acquiring the content label of each video to be recommended.
A content label includes a target identified in the video and the target's action; a target may be, for example, a person, an animal, or a robot.
In one example, if the target is a person, the people in each video may be identified using face recognition techniques. A face database may be preset that contains different people's faces, each assigned a distinct number so that different faces are represented by different numbers. When a face is detected in a video, it is searched for in the face database: if the face is found, the retrieved number is attached to the person in the video; if it is not found, the face is added to the face database and assigned a new number. Within one video, the identified persons may thus be represented by numbers such as A, B, and C. The attributes identified for a person may include name, age, occupation, expression, clothing, and the like.
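The numbered face database described above can be sketched as follows; here a face is represented by an opaque feature key, whereas a real system would compare face embeddings rather than exact keys (an assumption made for illustration):

```python
# Minimal numbered face database: look up a detected face; if unknown,
# register it and assign a fresh number, as described in the text.

class FaceDatabase:
    def __init__(self):
        self._faces = {}   # face feature key -> assigned number
        self._next = 0

    def lookup_or_add(self, face_key):
        """Return the existing number for a known face, or register a new face
        under a new number and return that."""
        if face_key not in self._faces:
            self._faces[face_key] = self._next
            self._next += 1
        return self._faces[face_key]
```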
In one example, if the target's action is a human action, it can be recognized using deep-learning human-action-recognition techniques. For example, a 3D convolutional neural network can recognize a person's action: it stacks consecutive video frames into a volume and convolves it with three-dimensional kernels, thereby capturing the person's motion over time. In a specific implementation, the action may also be recognized through an RGB-plus-optical-flow algorithm or the like. These are only two examples; the manner of recognizing a person's action in a specific implementation is not limited to them.
Optionally, the content labels may further include any one or a combination of the following, as identified in each video: the scene, the background music, the lines, and objects associated with the target's motion. The identified scene may be the scene in which the target's action occurs, and an object associated with the target's motion is an object that changes as the target's action changes in the video.
In one example, the scene in each video may be identified by extracting video frame images and applying deep-learning scene recognition to the extracted frames in which the characters act. For example, Gist information, i.e., global feature information, is extracted from each video frame image; the Gist descriptor is a low-dimensional signature vector of a scene. Because global feature information is used for scene recognition and classification, the image need not be segmented and local features need not be extracted, which enables fast scene recognition and classification.
In one example, objects associated with the target's motion may be recognized with a deep-learning detector such as a YOLO network. The YOLO network casts object detection as a regression problem: feeding a video frame image into the network yields the positions and categories of all objects in the frame. Since not every object in the frame is related to the target's motion, the detected objects are screened, for example according to whether an object's position or size in the frame changes as the target's action changes. In addition, in a specific implementation, objects whose size falls below a certain proportion of the frame may be ignored during screening. Moreover, to ensure the effect, object recognition need not be fine-grained and may work at the category level. For example, a specific car model is classified simply as "automobile", and three ace playing cards as "playing cards".
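The screening of detections for motion-related objects can be sketched as follows; the `(frame, label, x, y, w, h)` tuple format and both thresholds are assumptions standing in for real YOLO output:

```python
# Keep a detected object if (a) it is not below the minimum size ratio and
# (b) its box position varies across frames, i.e., it changes with the
# target's action, as described in the text. Coordinates are normalized.

def objects_related_to_motion(detections, min_area=0.01, min_shift=0.05):
    by_label = {}
    for frame, label, x, y, w, h in detections:
        by_label.setdefault(label, []).append((x, y, w, h))
    related = []
    for label, boxes in by_label.items():
        if all(w * h < min_area for _, _, w, h in boxes):
            continue                          # object below the size ratio
        xs = [x for x, _, _, _ in boxes]
        ys = [y for _, y, _, _ in boxes]
        if max(xs) - min(xs) > min_shift or max(ys) - min(ys) > min_shift:
            related.append(label)             # position changed with action
    return related
```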
In one example, each video to be recommended may be split into a plurality of video clips, each clip corresponding to one bridge segment, and content labels are then obtained for each clip separately, so that every clip has its own content labels. One way to split a video is to analyze the lines with Natural Language Processing (NLP) and measure the semantic coherence of the context: when coherence drops sharply or breaks, one plot is judged to have ended, and the span between two breakpoints forms an independent plot, i.e., one video clip. Another way is to judge from the background, the characters, and their clothing; for example, a sudden large change in a character's clothing suggests a break in plot continuity, and the judgment can be combined with other elements. For instance, if the lead character falls into a puddle and the deformed clothing defeats recognition, plot continuity can be judged from the background: a large change in the character's background indicates the plots are not connected, the break point serves as a split point, and the span between two split points is an independent plot, i.e., one video clip. If the background changes little, as in a small room, whether the plot is coherent can be further judged through NLP.
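The NLP-based splitting can be sketched as follows, assuming the semantic-coherence scores between consecutive subtitle windows have already been computed by some upstream model (an assumption; the scoring itself is not shown):

```python
# Split a video into independent plot clips at points where the semantic
# coherence between consecutive windows drops below a threshold, as described
# in the text. coherence[i] links window i to window i + 1.

def split_at_breakpoints(coherence, drop_threshold=0.3):
    segments, start = [], 0
    for i, score in enumerate(coherence):
        if score < drop_threshold:
            segments.append((start, i + 1))   # clip covers windows start..i
            start = i + 1
    segments.append((start, len(coherence) + 1))
    return segments
```

Each `(start, end)` pair delimits one independent plot, i.e., one video clip whose content labels are then extracted separately.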
Step 202: and reasoning to obtain a reasoning label in each video according to the content label and a pre-established knowledge graph.
The knowledge graph stores inference relations between content labels and inference labels, an inference label being content inferred from the video. The knowledge graph consists of nodes and edges, each node representing an entity and each edge representing a relation, as shown in fig. 3. The inference relations between content labels and inference labels may be predefined, or established automatically by crawling data from the network.
In one example, if the content labels in a video are "A, male, age 35, doctor, kills B, C, and D using the same method", the content obtained by knowledge-graph inference, i.e., the inference labels, may be: killer, serial killer, disguise. In another example, if the content labels are "A, lead role, wearing a police uniform, pursuing B, positive theme", the inference label may be: A is a good person. It can be understood that if A is the lead role and the subject of the video is positive, A is very likely a good person. Briefly, content labels corresponding to the inference label "good person" may include: "B (person), helps (action), passerby (person)"; "B (person), in a park (scene), rescues (action), animal"; "B, opposes the bad person"; "B, friendly to neutral roles"; and the like. Content labels corresponding to the inference label "bad person" may include: "A, shoots, police officer"; "A, injures, passerby"; "A, kidnaps, lead role"; and the like. Such inference labels are difficult to obtain through visual-level recognition alone, but can be obtained through knowledge-graph inference.
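A minimal rule-based sketch of this knowledge-graph inference follows; each rule maps a set of content labels to an inference label, and the rules below merely paraphrase the examples in the text for illustration:

```python
# Each rule is (required content labels, inference label). A rule fires when
# all its required labels appear among the video's content labels, modeling
# the edge from content labels to an inference label in the knowledge graph.

RULES = [
    ({"kill", "same method", "multiple victims"}, "serial killer"),
    ({"lead role", "police uniform", "pursuit"}, "good person"),
]

def infer_labels(content_labels, rules=RULES):
    labels = set(content_labels)
    return [inferred for required, inferred in rules if required <= labels]
```

A production system would traverse a real graph store rather than a flat rule list; the subset test here stands in for matching a node's incoming inference relations.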
In one example, inference labels of the video segments can be inferred according to content labels of the video segments obtained by splitting the videos to be recommended and a pre-established knowledge graph. I.e. each video segment may have a corresponding inference tag.
Step 203: and inputting the content label and the inference label into a pre-trained model, and outputting the bridge segment to which each video belongs.
In one example, the content label and the inference label of each video can be input into a pre-trained model, and the bridge segment to which each video belongs can be output. If a plurality of bridge segments exist in the video, the model can directly output the plurality of bridge segments to which the video belongs. In a specific implementation, the output may be information such as the name or number of the bridge segment.
In another example, each video may be segmented in advance to obtain a plurality of video segments, and the content tags and the inference tags corresponding to the plurality of video segments are obtained, so that the content tags and the inference tags corresponding to the plurality of video segments may be input into the pre-trained model in sequence, and the bridge segments to which the plurality of video segments belong may be output.
In a specific implementation, the content tags and the inference tags may be combined and converted into corresponding text, that is, the tags may exist in the form of text, referred to as text labels. Video time and text labels can be in one-to-one correspondence, for example: 00:00:12-00:00:18, with the corresponding text label: name A, male, age 35, doctor, killer, serial killer, camouflage, night, road, running, forest, emotional tension, eerie atmosphere. Finally, the text labels are input into the pre-trained model, and the bridge segment to which each video belongs is output. The pre-trained model can be a word vector model, which outputs the bridge segment corresponding to the input text label.
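The one-to-one correspondence between a time span and its text label might be kept in a plain mapping; this sketch merely joins the content and inference tags into the text form described above (the tag values mirror the example and are illustrative):

```python
def build_text_label(content_tags, inference_tags):
    """Combine content tags and inference tags into a single
    comma-separated text label for one time span."""
    return ", ".join(list(content_tags) + list(inference_tags))

# Time spans keyed to their text labels, mirroring the example above.
text_labels = {
    ("00:00:12", "00:00:18"): build_text_label(
        ["name A", "male", "age 35", "doctor", "night", "road", "running"],
        ["killer", "serial killer", "camouflage"],
    ),
}
```

Each such text label would then be fed to the word vector model to obtain the bridge segment for that span.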
The following briefly describes the training mode of the above model:
Firstly, training samples are selected; that is, a large number of videos are selected as training samples.
Secondly, sample features are selected; the sample features may include content tags, inference tags, and labeled bridge segments. For example, the content tags are obtained by identifying people, objects, actions, scenes, facial expressions, background music, lines, and the like in each video. Inference tags covering the whole content of the video are established based on the knowledge graph and the content tags. The content tags may also be combined with the inference tags and converted into text form, referred to as text labels. The videos are labeled with bridge segments, for example manually, and the labeled bridge segments may be: Scarp must not die, a woman disguised as a man, hero rescues, etc.
Finally, the model is trained; that is, training is performed based on the training samples and the sample features, for example using machine learning, with the text labels of certain types of bridge segments as input, to obtain a word vector model.
In one example, after the word vector model is obtained by training, it may be updated at intervals. The bridge segment output by the word vector model can be compared with the actual bridge segment, and the parameters of the word vector model adjusted accordingly, for example by increasing the amount of sample data or the number of training iterations, so that the bridge segment determined by the word vector model becomes more accurate.
In one example, the bridge segment in each video may also be determined as follows: the content tags are matched against each bridge segment in a preset bridge segment library to determine the bridge segment in each video. The preset bridge segment library can be established in advance and contains various types of bridge segments. For example, for "hero rescues", the bridge segment is: B is deceived by C, A knocks down C and saves B, and A and B are opposite in nature. In a specific implementation, the relationships among the identified target, the object, the target's action, and the scene may be matched against each bridge segment in the preset bridge segment library to obtain the bridge segment in the current video. It will be appreciated that the target and the target's action are necessary content features for the match, while the scene, background music, lines, objects associated with the target, etc. are optional content features; the optional content features, however, may make the matching more accurate.
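The library matching could be sketched as below, with target-and-action features treated as mandatory and the scene/music/line features as optional score boosters. The feature strings and the library entry are hypothetical:

```python
# Hypothetical bridge-segment library: required vs optional content features.
BRIDGE_LIBRARY = {
    "hero rescues": {
        "required": {"A knocks down C", "A saves B"},
        "optional": {"B deceived by C", "A and B opposite in nature"},
    },
}

def match_bridge(content_tags, library=BRIDGE_LIBRARY):
    """Return the best-matching bridge segment, or None.  A candidate
    qualifies only if every required feature is present among the
    content tags; optional features merely raise its score."""
    tags, best, best_score = set(content_tags), None, 0.0
    for name, feats in library.items():
        if not feats["required"] <= tags:   # mandatory features missing
            continue
        score = 1.0 + len(feats["optional"] & tags)
        if score > best_score:
            best, best_score = name, score
    return best
```

Real feature matching would compare structured (target, action, object, scene) tuples rather than flat strings, but the required/optional split is the point illustrated here.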
Step 204: and acquiring the bridge segment similarity among the videos according to the bridge segments in the videos.
In one example, the bridge segment similarity between videos may be obtained according to whether the same bridge segment exists in the videos. For example, the bridge segment similarity between videos in which the same bridge segment exists is greater than that between videos in which no same bridge segment exists. Assuming that the same bridge segment exists between video 1 and video 2, but not between video 1 and video 3, the bridge segment similarity between video 1 and video 2 is greater than that between video 1 and video 3.
In another example, when there are multiple bridge segments in the video, the similarity of the bridge segments between the videos can also be obtained by combining the number of identical bridge segments in each video. For example, the greater the number of identical bridge segments present between videos, the greater the similarity of the bridge segments. Assuming that there are 2 identical bridge segments in video 1 and video 2 and 3 identical bridge segments in video 1 and video 3, the bridge segment similarity between video 1 and video 3 is greater than the bridge segment similarity between video 1 and video 2.
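Both rules reduce to counting shared bridge segments, for example:

```python
def bridge_similarity(bridges_a, bridges_b):
    """Bridge-segment similarity as the count of bridge segments two
    videos share: no shared segment gives the lowest similarity, and
    more shared segments give a higher one (per the examples above)."""
    return len(set(bridges_a) & set(bridges_b))

# Matching the second example: 3 shared segments beat 2 shared segments.
video1, video2, video3 = {"b1", "b2", "b3"}, {"b1", "b2"}, {"b1", "b2", "b3"}
```
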
Step 102: and according to the result of the homogeneous classification, screening in a plurality of videos belonging to the same classification result.
The number of screened videos can be smaller than a preset threshold; the preset threshold can be set according to actual needs, so that the number of screened videos is kept small. In a specific implementation, 1 video may be screened from the plurality of videos belonging to the same classification result, for example, 1 video selected from the videos belonging to set 1 and 1 video from the videos belonging to set 2. In one example, the screening may proceed as follows:
Firstly, the ranking weight features of the plurality of videos belonging to the same classification result are obtained, wherein the ranking weight features include publishing account information and/or browsing times. The browsing times can be obtained through statistics of the video service platform. In a specific implementation, the browsing times of the videos within a preset time period may be counted, for example over 3 days. The publishing account information can be acquired from the video service platform and may include any one of, or a combination of: the level of the publishing account, the number of followers of the publishing account, and whether the publishing account is followed by the user requesting the recommendation. It will be appreciated that different publishing accounts may have different levels, and a higher level indicates that videos published through that account are generally of higher quality. Similarly, different publishing accounts may have different numbers of followers, and a larger number of followers indicates that videos published through that account are generally of higher quality. In addition, the video recommendation platform generally pushes videos to a client after receiving a recommendation request from the client, that is, recommends videos to the user of that client. Considering that different users have different interests and follow different publishing accounts, the publishing account information of the videos belonging to the same classification result may further include whether the user requesting the recommendation follows the publishing account.
And then, determining the sorting weight corresponding to each of the plurality of videos according to the sorting weight characteristics.
In one example, the ranking weights corresponding to the plurality of videos may be determined according to the publishing account information. For example, the higher the level of the publishing account, the greater the ranking weight of the videos published through that account; the more followers the publishing account has, the greater the ranking weight of the videos published through that account; and if the publishing account is followed by the user requesting the recommendation, the ranking weight of the videos published through that account is greater.
In another example, the ranking weights of the plurality of videos may be determined according to the browsing times. It can be understood that the more times a video has been browsed, the greater its ranking weight.
Optionally, the ranking weights corresponding to the videos may be determined according to both the publishing account information and the browsing times. For example, among the plurality of videos, whether the user follows a video's publishing account is denoted F, where F = 1.2 for a followed account and F = 1 for an account not followed; F can be adjusted according to actual conditions. The browsing times of the video are denoted V (taken over 3 days). The ranking weight can then be recorded as F multiplied by V. That is, if a video is recommended to user 1, whether user 1 follows the publishing account is considered; if a video is recommended to user 2, whether user 2 follows the publishing account is considered. By combining whether the user requesting the recommendation follows each video's publishing account, the determined ranking weights are more targeted to that user, which facilitates personalized recommendation to different users.
Finally, the plurality of videos belonging to the same classification result are screened according to the ranking weights. For example, the video with the largest ranking weight, or the videos whose ranking weight exceeds a preset weight, may be screened out; the preset weight may be set according to actual needs and is not specifically limited in this embodiment.
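The F × V weighting and the screening step can be sketched together; the video records below are hypothetical:

```python
def ranking_weight(views_3d, followed_by_requester):
    """Ranking weight per the example above: F x V, with F = 1.2 when the
    requesting user follows the publishing account and F = 1 otherwise,
    and V the browsing times over 3 days."""
    return (1.2 if followed_by_requester else 1.0) * views_3d

# (video id, 3-day views, requester follows publisher?) -- hypothetical data.
candidates = [("v1", 100, False), ("v2", 90, True), ("v3", 50, False)]

# Screen the single highest-weight video from one classification result.
screened = max(candidates, key=lambda v: ranking_weight(v[1], v[2]))
```

Note how the followed account "v2" outranks "v1" despite fewer raw views: 1.2 × 90 = 108 beats 1 × 100 = 100, which is exactly the personalization effect described above.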
In a specific implementation, random screening may also be adopted among the plurality of videos belonging to the same classification result; the screening manner is not specifically limited in this embodiment.
Step 103: and pushing the screened videos to the client.
The number of screened videos is smaller than a preset threshold; the preset threshold can be set according to actual needs, so that the number of videos of the same set pushed to the client is kept small. For example, 1 video is selected from the plurality of videos belonging to set 1 and pushed to the client, and 1 video is selected from the plurality of videos belonging to set 2 and pushed to the client.
Compared with the prior art, in this embodiment each video to be recommended is subjected to homogeneous classification; according to the result of the homogeneous classification, screening is performed among the plurality of videos belonging to the same classification result; and the screened videos are pushed to the client, the number of screened videos being smaller than a preset threshold. Because homogeneous videos are treated as a single class, screened within each classification result, and only the screened videos are pushed to the client, too many homogeneous videos, i.e., similar videos, are prevented from being recommended to the user, which keeps the user's video-browsing experience fresh.
A second embodiment of the present invention relates to a video recommendation method. Implementation details of the video recommendation method according to this embodiment are described below; they are provided only for ease of understanding and are not necessary for implementing the present invention. The flowchart of the video recommendation method in this embodiment may be as shown in fig. 4. Steps 301 to 303 are substantially the same as steps 101 to 103 in the first embodiment and are not described again here to avoid repetition.
Step 301: and carrying out homogenization classification on each video to be recommended.
Step 302: and according to the result of the homogeneous classification, screening in a plurality of videos belonging to the same classification result.
Step 303: and pushing the screened videos to the client.
Step 304: and if the shielding operation of the target video from the client is detected, shielding the videos which belong to the same classification result as the target video at the same time.
The target video is a video shielded by the user during browsing. Specifically, the display interface of the client may be provided with a trigger key with a shielding function, which may be a virtual key. While browsing videos, if the user dislikes a certain video, the user can click the virtual key to shield that video. After detecting that the video, i.e., the target video, has been shielded, the video recommendation platform determines the other videos belonging to the same classification result as the target video and shields those videos at the same time. For example, if the user shields video 1, and the video recommendation platform determines that the classification result of video 1 is set 1, the platform simultaneously shields the other videos belonging to set 1.
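The cascading shield can be sketched as follows; the classification results and video ids below are hypothetical:

```python
def videos_to_shield(target, classification):
    """Given the target video shielded by the client, return every video
    sharing the target's classification result, so that all of them can
    be shielded at once."""
    group = classification[target]
    return {video for video, g in classification.items() if g == group}

# Hypothetical classification results (each value names a set/formula).
classification = {"video 1": "set 1", "video 2": "set 1", "video 3": "set 2"}
shielded = videos_to_shield("video 1", classification)
```

Here shielding "video 1" also shields "video 2" (same set 1), while "video 3" in set 2 remains visible.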
Compared with the prior art, in this embodiment, when a user finds a certain video uninteresting or of poor quality while watching and chooses to shield it, all videos belonging to the same classification result as that video are shielded, that is, other videos with the same set and the same story are shielded as well, which improves the user's viewing experience.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into multiple steps, and all such divisions fall within the protection scope of this patent as long as the same logical relationship is included. Adding insignificant modifications to the algorithms or processes, or introducing insignificant design changes without altering the core design of the algorithms or processes, also falls within the protection scope of this patent.
A third embodiment of the invention relates to a server, as shown in fig. 5, comprising at least one processor 401; and a memory 402 communicatively coupled to the at least one processor 401; the memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401, so that the at least one processor 401 can execute the video recommendation method according to the first or second embodiment.
Where the memory 402 and the processor 401 are coupled by a bus, which may include any number of interconnected buses and bridges that couple one or more of the various circuits of the processor 401 and the memory 402 together. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 401 may be transmitted over a wireless medium via an antenna, which may receive the data and transmit the data to the processor 401.
The processor 401 is responsible for managing the bus and general processing and may provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 402 may be used to store data used by processor 401 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A method for video recommendation, comprising:
carrying out homogenization classification on each video to be recommended;
according to the result of the homogenization classification, screening a plurality of videos belonging to the same classification result;
and pushing the screened videos to the client.
2. The video recommendation method according to claim 1, wherein the performing a homogeneous classification on each video to be recommended comprises:
acquiring the similarity between videos to be recommended; wherein the similarity comprises the similarity of the lines and/or the similarity of the bridge sections;
and according to the similarity, carrying out homogenization classification on each video to be recommended.
3. The video recommendation method according to claim 2, wherein the similarity includes a bridge segment similarity, and the obtaining manner of the bridge segment similarity includes:
acquiring content tags of videos to be recommended; wherein the content tag comprises at least an object identified in each video and an action of the object;
reasoning to obtain a reasoning label in each video according to the content label and a pre-established knowledge graph; the knowledge graph stores inference relations between content labels and inference labels, and the inference labels are inferred contents;
inputting the content label and the inference label into a pre-trained model, and outputting a bridge section to which each video belongs;
and acquiring the bridge segment similarity among the videos according to the bridge segment to which the videos belong.
4. The video recommendation method according to claim 3, wherein the content tag further comprises any one or a combination of the following identified in each video:
scene, background music, lines, objects related to the action of the target; and the object related to the action of the target is an object which changes along with the change of the action of the target in the video.
5. The video recommendation method according to claim 3, wherein said obtaining the content tag of each video to be recommended comprises:
splitting each video to be recommended to obtain a plurality of video clips; wherein each video clip corresponds to a bridge segment;
respectively acquiring content labels of the plurality of video clips;
the reasoning labels in the videos are obtained by reasoning according to the content labels and the pre-established knowledge graph, and the reasoning labels comprise:
reasoning to obtain reasoning labels of the video segments according to the content labels of the video segments and a pre-established knowledge graph;
the step of inputting the content label and the inference label into a pre-trained model and outputting the bridge segment to which each video belongs comprises the following steps:
and sequentially inputting the content labels and the inference labels corresponding to the video clips into a pre-trained model, and outputting the bridge sections to which the video clips belong.
6. The video recommendation method according to claim 1, wherein said screening among a plurality of videos belonging to the same classification result according to the result of the homogeneous classification comprises:
obtaining the sequencing weight characteristics of a plurality of videos belonging to the same classification result; wherein the ranking weight features include: issuing account information and/or browsing times;
determining sorting weights corresponding to the videos respectively according to the sorting weight characteristics;
and screening a plurality of videos belonging to the same classification result according to the sorting weight.
7. The video recommendation method of claim 6, wherein said ranking weight features comprise: issuing account information, wherein the issuing account information comprises any one or combination of the following items:
the level of the release account, the number of people who pay attention to the release account, and whether the release account is paid attention by the user who requests to recommend the video.
8. The video recommendation method according to claim 1, further comprising, after pushing the screened video to the client:
and if the shielding operation of the client on the target video is detected, simultaneously shielding the video which belongs to the same classification result as the target video.
9. A server, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video recommendation method of any of claims 1-8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the video recommendation method of any one of claims 1 to 8.
CN201911401182.2A 2019-12-30 2019-12-30 Video recommendation method, server and computer-readable storage medium Active CN111212303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401182.2A CN111212303B (en) 2019-12-30 2019-12-30 Video recommendation method, server and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN111212303A true CN111212303A (en) 2020-05-29
CN111212303B CN111212303B (en) 2022-05-10

Family

ID=70789428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401182.2A Active CN111212303B (en) 2019-12-30 2019-12-30 Video recommendation method, server and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111212303B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111918137A (en) * 2020-06-29 2020-11-10 北京大学 Push method and device based on video characteristics, storage medium and terminal
CN112016613A (en) * 2020-08-26 2020-12-01 广州市百果园信息技术有限公司 Training method and device for video content classification model, computer equipment and medium
CN112487248A (en) * 2020-12-01 2021-03-12 深圳市易平方网络科技有限公司 Video file label generation method and device, intelligent terminal and storage medium
CN112989115A (en) * 2021-02-04 2021-06-18 有米科技股份有限公司 Screening control method and device for videos to be recommended
CN113347082A (en) * 2021-08-06 2021-09-03 深圳康易世佳科技有限公司 Method and device for intelligently displaying shared messages containing short videos
CN113426101A (en) * 2021-06-22 2021-09-24 咪咕互动娱乐有限公司 Teaching method, device, equipment and computer readable storage medium
CN113641855A (en) * 2021-08-13 2021-11-12 北京奇艺世纪科技有限公司 Video recommendation method, device, equipment and storage medium
CN114979767A (en) * 2022-05-07 2022-08-30 咪咕视讯科技有限公司 Video recommendation method, device, equipment and computer readable storage medium
WO2023020252A1 (en) * 2021-08-20 2023-02-23 腾讯科技(深圳)有限公司 Content recommendation method and apparatus, and device and readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834837A (en) * 2009-12-18 2010-09-15 北京邮电大学 On-line landscape video active information service system of scenic spots in tourist attraction based on bandwidth network
CN102999640A (en) * 2013-01-09 2013-03-27 公安部第三研究所 Video and image retrieval system and method based on semantic reasoning and structural description
US20140223466A1 (en) * 2013-02-01 2014-08-07 Huawei Technologies Co., Ltd. Method and Apparatus for Recommending Video from Video Library
CN105916032A (en) * 2015-12-08 2016-08-31 乐视网信息技术(北京)股份有限公司 Video recommendation method and video recommendation terminal equipment
CN106649848A (en) * 2016-12-30 2017-05-10 合网络技术(北京)有限公司 Video recommendation method and video recommendation device
CN107133263A (en) * 2017-03-31 2017-09-05 百度在线网络技术(北京)有限公司 POI recommends method, device, equipment and computer-readable recording medium
CN107391509A (en) * 2016-05-16 2017-11-24 中兴通讯股份有限公司 Label recommendation method and device
US20180007409A1 (en) * 2015-07-06 2018-01-04 Tencent Technology (Shenzhen) Company Limited Video recommending method, server, and storage media
CN108563670A (en) * 2018-01-12 2018-09-21 武汉斗鱼网络科技有限公司 Video recommendation method, device, server and computer readable storage medium
CN110020122A (en) * 2017-10-16 2019-07-16 Tcl集团股份有限公司 A kind of video recommendation method, system and computer readable storage medium
CN110267097A (en) * 2019-06-26 2019-09-20 北京字节跳动网络技术有限公司 Video pushing method, device and electronic equipment based on characteristic of division


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO, CHENWEI; LIU, TAO; DU, HAIHONG: "Research on the video recommendation model of the Douyin short-video platform from an algorithmic perspective", View on Publishing (《出版广角》) *


Also Published As

Publication number Publication date
CN111212303B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN111212303B (en) Video recommendation method, server and computer-readable storage medium
CN108280155B (en) Short video-based problem retrieval feedback method, device and equipment
CN107657056B (en) Method and device for displaying comment information based on artificial intelligence
CN107833082B (en) Commodity picture recommendation method and device
CN108197532A (en) The method, apparatus and computer installation of recognition of face
CN110309114B (en) Method and device for processing media information, storage medium and electronic device
CN107305557A (en) Content recommendation method and device
CN110737783A (en) method, device and computing equipment for recommending multimedia content
CN113779308B (en) Short video detection and multi-classification method, device and storage medium
WO2021155691A1 (en) User portrait generating method and apparatus, storage medium, and device
CN107169002A (en) A kind of personalized interface method for pushing and device recognized based on face
CN111506794A (en) Rumor management method and device based on machine learning
JP2022547248A (en) Scalable architecture for automatic generation of content delivery images
Dellagiacoma et al. Emotion based classification of natural images
CN111783903A (en) Text processing method, text model processing method and device and computer equipment
CN115659008A (en) Information pushing system and method for big data information feedback, electronic device and medium
CN117351336A (en) Image auditing method and related equipment
Amorim et al. Novelty detection in social media by fusing text and image into a single structure
CN113244627B (en) Method and device for identifying plug-in, electronic equipment and storage medium
CN110674388A (en) Mapping method and device for push item, storage medium and terminal equipment
CN109033078B (en) The recognition methods of sentence classification and device, storage medium, processor
CN110413823A (en) Garment image method for pushing and relevant apparatus
CN113704623B (en) Data recommendation method, device, equipment and storage medium
Wieczorek et al. Semantic Image-Based Profiling of Users' Interests with Neural Networks
US20180114093A1 (en) Data analysis system, method for controlling data analysis system, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant