CN113365115A - Characteristic code determining method, device, server and storage medium - Google Patents


Info

Publication number: CN113365115A (granted as CN113365115B)
Application number: CN202010140088.2A
Authority: CN (China)
Prior art keywords: account, information, video information, counted, feature code
Other languages: Chinese (zh)
Inventors: 常超, 陈祯扬, 宋金波
Original and current assignee: Beijing Dajia Internet Information Technology Co Ltd
Legal status: Granted; Active

Classifications

All classifications fall under H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):

    • H04N21/25875 Management of end-user data involving end-user authentication
    • H04N21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/25891 Management of end-user data being end-user preferences
    • H04N21/4668 Learning process for intelligent management for recommending content, e.g. movies
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Abstract

The present disclosure relates to a feature code determination method, apparatus, server and storage medium. The method includes: constructing a first graph network from the account information of an account to be counted, positive sample video information and negative sample video information; constructing a second graph network from the account information of the account to be counted, positive sample account information and negative sample account information; constructing a third graph network from the video information to be counted, the positive sample video information and the negative sample video information; and obtaining an account information feature code and a feature code of the video information to be counted from the first, second and third graph networks. The method jointly considers the graph-network relations between accounts and videos, between accounts, and between videos, so that the determined account information feature code and video information feature code are more accurate, improving the accuracy of feature code determination.

Description

Characteristic code determining method, device, server and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a server, and a storage medium for determining a feature code.
Background
With the development of computer technology, applications for browsing videos have proliferated, and more and more users browse videos through such applications, so accurate video pushing has become increasingly important.
In the related art, video information that a user has clicked is generally used as a positive sample and video information the user has not clicked as a negative sample; a feature coding model is trained on these samples, and videos are then pushed using the account information feature codes and video information feature codes output by the trained model. However, because each piece of user information and each piece of video information is treated as an isolated, independent sample, much information is lost during training, and the accuracy of feature code determination is low.
Disclosure of Invention
The present disclosure provides a feature code determination method, apparatus, server and storage medium to at least solve the problem of low feature-code determination accuracy in the related art. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for determining feature codes, including:
acquiring account information of an account to be counted and video information to be counted; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account; the video information to be counted is matched with corresponding positive sample video information and negative sample video information;
constructing a first graph network according to the account information of the account to be counted, the positive sample video information and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the account information of the positive sample account and the account information of the negative sample account; constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information;
and obtaining account information feature codes corresponding to the account information of the account to be counted and video information feature codes corresponding to the video information to be counted according to the first graph network, the second graph network and the third graph network.
In one embodiment, the obtaining, according to the first graph network, the second graph network, and the third graph network, an account information feature code corresponding to the account information of the account to be counted and a video information feature code corresponding to the video information to be counted includes:
inputting information in the first graph network, the second graph network and the third graph network into a feature code learning model to be trained respectively to obtain a first target feature code of account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted;
obtaining a target loss value according to a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted;
adjusting the network parameters of the feature coding learning model to be trained according to the target loss value until the target loss value obtained according to the feature coding learning model after network parameter adjustment meets a preset condition;
and if the target loss value obtained according to the feature code learning model after the network parameter adjustment meets the preset condition, splicing the current first target feature code and the second target feature code to obtain an account information feature code corresponding to the account information of the account to be counted, and identifying the current target feature code of the video information to be counted as the video information feature code corresponding to the video information to be counted.
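The train-until-the-preset-condition-is-met loop described above might look as follows. This toy sketch uses a small embedding table as a stand-in for the feature coding learning model, plain gradient descent as the parameter adjustment, and a loss-below-threshold stopping rule as the "preset condition"; all of these are illustrative assumptions, since the patent specifies none of them.

```python
import numpy as np

# Toy stand-in for the feature coding learning model: the "network
# parameters" are a small embedding table, adjusted by plain gradient
# descent on a hinge triplet objective for one (anchor, positive,
# negative) triple. Shapes, learning rate and the stopping threshold
# are illustrative assumptions.
rng = np.random.default_rng(0)
emb = {k: rng.normal(scale=0.1, size=2) for k in ("acc", "pos", "neg")}

def loss_and_grads(e, margin=1.0):
    d_pos = e["acc"] - e["pos"]
    d_neg = e["acc"] - e["neg"]
    loss = float(np.sum(d_pos ** 2) - np.sum(d_neg ** 2) + margin)
    if loss <= 0.0:  # hinge: margin already satisfied, no gradient
        return 0.0, {k: np.zeros(2) for k in e}
    return loss, {"acc": 2 * d_pos - 2 * d_neg,
                  "pos": -2 * d_pos,
                  "neg": 2 * d_neg}

lr, threshold = 0.05, 1e-3  # assumed "preset condition": loss < threshold
for step in range(10_000):
    loss, grads = loss_and_grads(emb)
    if loss < threshold:     # preset condition met: stop adjusting
        break
    for k in emb:            # adjust the network parameters
        emb[k] -= lr * grads[k]

# Once the condition is met, the first and second target feature codes of
# the account are spliced (concatenated) into the final account
# information feature code; here both roles are played by emb["acc"].
account_feature_code = np.concatenate([emb["acc"], emb["acc"]])
print(loss < threshold, account_feature_code.shape)  # True (4,)
```

The splicing at the end mirrors the embodiment: the account's two target feature codes (one from the account-video graph, one from the account-account graph) are concatenated, while the video's target feature code is used directly.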
In one embodiment, the inputting information in the first graph network, the second graph network, and the third graph network into a feature coding learning model to be trained to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account, and a target feature code of the video information to be counted includes:
extracting account information of the account to be counted, first neighbor video information, the positive sample video information, second neighbor video information, the negative sample video information and third neighbor video information from the first graph network; extracting first neighbor account information, account information of the positive sample account, second neighbor account information, account information of the negative sample account and third neighbor account information from the second graph network; extracting the video information to be counted and the fourth neighbor video information from the third graph network;
inputting the account information of the account to be counted, the first neighbor video information, the positive sample video information, the second neighbor video information, the negative sample video information, the third neighbor video information, the first neighbor account information, the account information of the positive sample account, the second neighbor account information, the account information of the negative sample account, the third neighbor account information, the video information to be counted and the fourth neighbor video information into the feature coding learning model to be trained, to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted.
In one embodiment, the inputting of the account information of the account to be counted, the first neighbor video information, the positive sample video information, the second neighbor video information, the negative sample video information, the third neighbor video information, the first neighbor account information, the account information of the positive sample account, the second neighbor account information, the account information of the negative sample account, the third neighbor account information, the video information to be counted and the fourth neighbor video information into the feature coding learning model to be trained, so as to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted, includes:
obtaining, through the feature coding learning model to be trained, the initial feature code of the account information of the account to be counted, the initial feature code of the first neighbor video information, the initial feature code of the positive sample video information, the initial feature code of the second neighbor video information, the initial feature code of the negative sample video information, the initial feature code of the third neighbor video information, the initial feature code of the first neighbor account information, the initial feature code of the account information of the positive sample account, the initial feature code of the second neighbor account information, the initial feature code of the account information of the negative sample account, the initial feature code of the third neighbor account information, the initial feature code of the video information to be counted and the initial feature code of the fourth neighbor video information;
obtaining a first target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor video information; obtaining a target feature code of the positive sample video information according to the initial feature code of the positive sample video information and the initial feature code of the second neighbor video information; obtaining target feature codes of the negative sample video information according to the initial feature codes of the negative sample video information and the initial feature codes of the third neighbor video information;
obtaining a second target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor account information; obtaining a target feature code of the account information of the positive sample account according to the initial feature code of the account information of the positive sample account and the initial feature code of the second neighbor account information; obtaining a target feature code of the negative sample account information according to the initial feature code of the negative sample account information and the initial feature code of the third neighbor account information;
and obtaining the target feature code of the video information to be counted according to the initial feature code of the video information to be counted and the initial feature code of the fourth neighboring video information.
In one embodiment, the obtaining a first target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor video information includes:
aggregating the initial feature codes of the first neighbor video information to obtain aggregated feature codes;
and splicing the feature code after the aggregation processing and the initial feature code of the account information of the account to be counted to obtain a first target feature code of the account information of the account to be counted.
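The aggregate-then-splice step can be illustrated as below. Mean pooling is assumed as the aggregation function and the 4-dimensional codes are hypothetical; the patent fixes neither.

```python
import numpy as np

def aggregate_neighbors(neighbor_codes):
    # Mean pooling over the neighbors' initial feature codes (the
    # aggregation function is an assumption; the patent does not fix one).
    return np.mean(neighbor_codes, axis=0)

def first_target_feature_code(center_code, neighbor_codes):
    # Splice (concatenate) the aggregated neighbor feature code with the
    # center node's own initial feature code.
    return np.concatenate([center_code, aggregate_neighbors(neighbor_codes)])

# Hypothetical 4-dimensional initial feature codes.
account_code = np.array([1.0, 0.0, 0.0, 0.0])
neighbor_videos = np.array([[0.0, 1.0, 0.0, 0.0],
                            [0.0, 0.0, 1.0, 0.0]])
code = first_target_feature_code(account_code, neighbor_videos)
print(code.shape)  # (8,)
```

The same pattern applies to every other target feature code in the embodiment: each center node's initial code is spliced with an aggregate of its neighbors' initial codes.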
In one embodiment, the obtaining a target loss value according to a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the positive sample account information, a target feature code of the negative sample account information, and a target feature code of the video information to be counted includes:
obtaining a first loss value according to a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information and a target feature code of the negative sample video information;
obtaining a second loss value according to a second target feature code of the account information of the account to be counted, the target feature code of the positive sample account information and the target feature code of the negative sample account information;
obtaining a third loss value according to the target feature code of the video information to be counted, the target feature code of the positive sample video information and the target feature code of the negative sample video information;
and adding the first loss value, the second loss value and the third loss value to obtain the target loss value.
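One plausible reading of the three loss values is a triplet-style objective per graph network, summed into the target loss. The hinge form, squared-Euclidean distance and margin below are assumptions; the patent only states that three loss values are computed and added.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge triplet loss on squared Euclidean distances (assumed form).
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def target_loss_value(first_acc, pos_vid, neg_vid,
                      second_acc, pos_acc, neg_acc, vid):
    # First loss: account anchor vs. positive/negative sample videos.
    l1 = triplet_loss(first_acc, pos_vid, neg_vid)
    # Second loss: account anchor vs. positive/negative sample accounts.
    l2 = triplet_loss(second_acc, pos_acc, neg_acc)
    # Third loss: video-to-be-counted anchor vs. the same sample videos.
    l3 = triplet_loss(vid, pos_vid, neg_vid)
    # The three loss values are added to obtain the target loss value.
    return l1 + l2 + l3

a = np.array([0.0, 0.0])
p = np.array([0.0, 0.0])   # positive sits on the anchor
n = np.array([2.0, 0.0])   # negative is far away
loss = target_loss_value(a, p, n, a, p, n, a)
print(loss)  # 0.0 (all three triplets already satisfy the margin)
```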
In one embodiment, the constructing a first graph network according to the account information of the account to be counted, the positive sample video information, and the negative sample video information includes:
acquiring first neighbor video information of the account to be counted, second neighbor video information of the positive sample video information and third neighbor video information of the negative sample video information;
constructing a first sub-graph network by taking the account information of the account to be counted as a center node and the first neighbor video information as a neighbor node; constructing a second sub-graph network by taking the positive sample video information as a central node and the second neighbor video information as neighbor nodes; constructing a third sub-graph network by taking the negative sample video information as a central node and the third neighbor video information as a neighbor node;
and combining the first sub-graph network, the second sub-graph network and the third sub-graph network to obtain the first graph network.
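The sub-graph construction and combination just described can be sketched with plain adjacency sets: each sub-graph is a star (one center node connected to its neighbor nodes), and combining is a union of the edge sets. The node identifiers are hypothetical and the patent does not prescribe a concrete graph representation.

```python
def star_subgraph(center, neighbors):
    # Adjacency-set representation of a center node and its neighbor nodes.
    graph = {center: set(neighbors)}
    for n in neighbors:
        graph.setdefault(n, set()).add(center)
    return graph

def combine(*subgraphs):
    # Union of the sub-graphs' node and edge sets.
    merged = {}
    for g in subgraphs:
        for node, nbrs in g.items():
            merged.setdefault(node, set()).update(nbrs)
    return merged

g1 = star_subgraph("account_u", ["video_a", "video_b"])   # first sub-graph
g2 = star_subgraph("pos_video", ["video_b", "video_c"])   # second sub-graph
g3 = star_subgraph("neg_video", ["video_d"])              # third sub-graph
first_graph_network = combine(g1, g2, g3)
print(sorted(first_graph_network))
```

Because a shared neighbor such as "video_b" ends up linked to both centers, the combined graph carries relations that no single sub-graph contains, which is what lets the later feature coding see beyond isolated samples.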
In one embodiment, the constructing a second graph network according to the account information of the account to be counted, the positive sample account information, and the negative sample account information includes:
acquiring first neighbor account information of the account to be counted, second neighbor account information of the positive sample account and third neighbor account information of the negative sample account;
establishing a fourth sub-graph network by taking the account information of the account to be counted as a center node and the first neighbor account information as a neighbor node; establishing a fifth sub-graph network by taking the account information of the positive sample account as a central node and the second neighbor account information as a neighbor node; establishing a sixth sub-graph network by taking the account information of the negative sample account as a center node and the third neighbor account information as a neighbor node;
and combining the fourth sub-graph network, the fifth sub-graph network and the sixth sub-graph network to obtain the second graph network.
In one embodiment, the constructing a third graph network according to the video information to be counted, the positive sample video information, and the negative sample video information includes:
acquiring fourth neighbor video information of the video information to be counted;
constructing a seventh sub-graph network by taking the video information to be counted as a central node and the fourth neighbor video information as a neighbor node;
and combining the second sub-graph network, the third sub-graph network and the seventh sub-graph network to obtain the third graph network.
According to a second aspect of the embodiments of the present disclosure, there is provided a video push method, including:
obtaining, according to the feature code determination method described above, account information feature codes corresponding to the account information of the account to be counted and video information feature codes corresponding to the video information to be counted;
acquiring account information of an account to be pushed, and determining account information characteristic codes corresponding to the account information of the account to be pushed from account information characteristic codes corresponding to the account information of the account to be counted;
screening target video information corresponding to the account to be pushed from the video information to be counted according to the account information feature code and the video information feature code to be counted;
and pushing the target video information to the account to be pushed.
In one embodiment, the determining, from the account information feature codes corresponding to the account information of the account to be counted, the account information feature code corresponding to the account information of the account to be pushed includes:
determining target account information matched with the account information of the account to be pushed from the account information of the account to be counted;
and screening out the account information characteristic code corresponding to the target account information from the account information characteristic code corresponding to the account information of the account to be counted, wherein the account information characteristic code is used as the account information characteristic code corresponding to the account information of the account to be pushed.
In one embodiment, the screening, according to the account information feature code and the to-be-counted video information feature code, target video information corresponding to the to-be-pushed account from the to-be-counted video information includes:
acquiring the feature similarity between the account information feature code and the video information feature code to be counted;
and determining the video information to be counted with the characteristic similarity larger than a preset threshold as target video information corresponding to the account to be pushed.
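The similarity screening can be sketched as follows, assuming cosine similarity and a hypothetical threshold of 0.8; the patent requires only some feature similarity measure and a preset threshold.

```python
import numpy as np

def screen_target_videos(account_code, video_codes, threshold=0.8):
    # Return the ids of videos whose feature similarity to the account
    # information feature code exceeds the preset threshold. Cosine
    # similarity and the threshold value are assumptions.
    targets = []
    for vid, code in video_codes.items():
        sim = np.dot(account_code, code) / (
            np.linalg.norm(account_code) * np.linalg.norm(code))
        if sim > threshold:
            targets.append(vid)
    return targets

account = np.array([1.0, 0.0])
videos = {"v1": np.array([0.9, 0.1]),   # close to the account code
          "v2": np.array([0.0, 1.0])}   # orthogonal to the account code
print(screen_target_videos(account, videos))  # ['v1']
```

The surviving videos are the "target video information" that would then be pushed to the account to be pushed.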
According to a third aspect of the embodiments of the present disclosure, there is provided a feature encoding determining apparatus, including:
the information acquisition module is configured to execute the acquisition of account information of an account to be counted and video information to be counted; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account; the video information to be counted is matched with corresponding positive sample video information and negative sample video information;
the graph network construction module is configured to execute construction of a first graph network according to the account information of the account to be counted, the positive sample video information and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the account information of the positive sample account and the account information of the negative sample account; constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information;
and the feature code acquisition module is configured to execute the processing according to the first graph network, the second graph network and the third graph network to obtain account information feature codes corresponding to the account information of the account to be counted and video information feature codes corresponding to the video information to be counted.
In one embodiment, the feature code obtaining module is further configured to input information in the first graph network, the second graph network and the third graph network into the feature coding learning model to be trained, so as to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted; obtain a target loss value according to the first target feature code of the account information of the account to be counted, the target feature code of the positive sample video information, the target feature code of the negative sample video information, the second target feature code of the account information of the account to be counted, the target feature code of the account information of the positive sample account, the target feature code of the account information of the negative sample account and the target feature code of the video information to be counted; adjust the network parameters of the feature coding learning model to be trained according to the target loss value until the target loss value obtained according to the feature coding learning model after network parameter adjustment meets a preset condition; and if the target loss value obtained according to the feature coding learning model after the network parameter adjustment meets the preset condition, splice the current first target feature code and the second target feature code to obtain an account information feature code corresponding to the account information of the account to be counted, and identify the current target feature code of the video information to be counted as the video information feature code corresponding to the video information to be counted.
In one embodiment, the feature code obtaining module is further configured to extract account information of the account to be counted, first neighbor video information, the positive sample video information, second neighbor video information, the negative sample video information and third neighbor video information from the first graph network; extract first neighbor account information, account information of the positive sample account, second neighbor account information, account information of the negative sample account and third neighbor account information from the second graph network; extract the video information to be counted and the fourth neighbor video information from the third graph network; and input the account information of the account to be counted, the first neighbor video information, the positive sample video information, the second neighbor video information, the negative sample video information, the third neighbor video information, the first neighbor account information, the account information of the positive sample account, the second neighbor account information, the account information of the negative sample account, the third neighbor account information, the video information to be counted and the fourth neighbor video information into the feature coding learning model to be trained, to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted.
In one embodiment, the feature code obtaining module is further configured to input the extracted information into the feature code learning model to be trained, respectively, obtaining initial feature codes of the account information of the account to be counted, initial feature codes of the first neighbor video information, initial feature codes of the positive sample video information, initial feature codes of the second neighbor video information, initial feature codes of the negative sample video information, initial feature codes of the third neighbor video information, initial feature codes of the first neighbor account information, initial feature codes of the account information of the positive sample account, initial feature codes of the second neighbor account information, initial feature codes of the account information of the negative sample account, initial feature codes of the third neighbor account information, initial feature codes of the video information to be counted and initial feature codes of the fourth neighbor video information; obtaining a first target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor video information; obtaining a target feature code of the positive sample video information according to the initial feature code of the positive sample video information and the initial feature code of the second neighbor video information; obtaining a target feature code of the negative sample video information according to the initial feature code of the negative sample video information and the initial feature code of the third neighbor video information; obtaining a second target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first
neighbor account information; obtaining a target feature code of the account information of the positive sample account according to the initial feature code of the account information of the positive sample account and the initial feature code of the second neighbor account information; obtaining a target feature code of the negative sample account information according to the initial feature code of the negative sample account information and the initial feature code of the third neighbor account information; and obtaining the target feature code of the video information to be counted according to the initial feature code of the video information to be counted and the initial feature code of the fourth neighboring video information.
In one embodiment, the feature code obtaining module is further configured to perform aggregation processing on the initial feature codes of the first neighbor video information to obtain feature codes after aggregation processing; and splicing the feature code after the aggregation processing and the initial feature code of the account information of the account to be counted to obtain a first target feature code of the account information of the account to be counted.
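The aggregation-and-splicing step above can be illustrated with a minimal sketch. The element-wise mean is assumed as the aggregation function (the disclosure does not fix the aggregator), and all names and values are illustrative:

```python
def aggregate_and_splice(center_code, neighbor_codes):
    """Aggregate the neighbors' initial feature codes (element-wise mean,
    one common choice), then splice (concatenate) the aggregated code with
    the center node's initial feature code to form a target feature code."""
    dim = len(center_code)
    aggregated = [sum(v[i] for v in neighbor_codes) / len(neighbor_codes)
                  for i in range(dim)]
    return list(center_code) + aggregated

# Toy example: a 4-dimensional center code with three 4-dimensional neighbors.
center = [1.0, 1.0, 1.0, 1.0]
neighbors = [[0.0] * 4, [1.0] * 4, [2.0] * 4]
target_code = aggregate_and_splice(center, neighbors)
print(len(target_code))  # 8: the spliced code doubles the dimension
```

Splicing (rather than, say, summing) keeps the center node's own features and its neighborhood summary in separate halves of the target feature code.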
In one embodiment, the feature code obtaining module is further configured to obtain a first loss value according to the first target feature code of the account information of the account to be counted, the target feature code of the positive sample video information, and the target feature code of the negative sample video information; obtain a second loss value according to the second target feature code of the account information of the account to be counted, the target feature code of the account information of the positive sample account, and the target feature code of the account information of the negative sample account; obtain a third loss value according to the target feature code of the video information to be counted, the target feature code of the positive sample video information, and the target feature code of the negative sample video information; and obtain the target loss value according to the first loss value, the second loss value, and the third loss value.
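As a hedged sketch of one loss value: the description later names only a "preset logarithmic loss function", so the logistic (log-sigmoid) form below and the dot-product similarity are assumptions, and all values are illustrative:

```python
import math

def log_loss(anchor, positive, negative):
    """Logarithmic loss that pushes the anchor's similarity to the positive
    sample above its similarity to the negative sample."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    margin = dot(anchor, positive) - dot(anchor, negative)
    return math.log1p(math.exp(-margin))  # equals -log(sigmoid(margin))

# Toy target feature codes for the account-video triplet.
acc_code_1 = [1.0, 0.0]    # first target code of the account information
pos_video = [0.9, 0.1]     # target code of the positive sample video
neg_video = [-0.8, 0.2]    # target code of the negative sample video
first_loss = log_loss(acc_code_1, pos_video, neg_video)
```

The second and third loss values would use the account-account and video-video triplets analogously, and the target loss value combines all three, for example as a plain sum.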
In one embodiment, the graph network building module is further configured to obtain the first neighbor video information of the account to be counted, the second neighbor video information of the positive sample video information, and the third neighbor video information of the negative sample video information; construct a first sub-graph network by taking the account information of the account to be counted as a center node and the first neighbor video information as neighbor nodes; construct a second sub-graph network by taking the positive sample video information as a center node and the second neighbor video information as neighbor nodes; construct a third sub-graph network by taking the negative sample video information as a center node and the third neighbor video information as neighbor nodes; and combine the first sub-graph network, the second sub-graph network, and the third sub-graph network to obtain the first graph network.
In one embodiment, the graph network building module is further configured to obtain the first neighbor account information of the account to be counted, the second neighbor account information of the positive sample account, and the third neighbor account information of the negative sample account; establish a fourth sub-graph network by taking the account information of the account to be counted as a center node and the first neighbor account information as neighbor nodes; establish a fifth sub-graph network by taking the account information of the positive sample account as a center node and the second neighbor account information as neighbor nodes; establish a sixth sub-graph network by taking the account information of the negative sample account as a center node and the third neighbor account information as neighbor nodes; and combine the fourth sub-graph network, the fifth sub-graph network, and the sixth sub-graph network to obtain the second graph network.
In one embodiment, the graph network construction module is further configured to obtain the fourth neighbor video information of the video information to be counted; construct a seventh sub-graph network by taking the video information to be counted as a center node and the fourth neighbor video information as neighbor nodes; and combine the second sub-graph network, the third sub-graph network, and the seventh sub-graph network to obtain the third graph network.
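The three construction embodiments above share one pattern: build a sub-graph network per center node, then combine the sub-graphs. A minimal sketch using adjacency dicts, with all node identifiers illustrative:

```python
def build_subgraph(center, neighbors):
    """A sub-graph network: one center node linked to its neighbor nodes."""
    return {center: list(neighbors)}

def combine(*subgraphs):
    """Combine several sub-graph networks into one graph network."""
    graph = {}
    for sub in subgraphs:
        for node, nbrs in sub.items():
            graph.setdefault(node, []).extend(nbrs)
    return graph

# First graph network: account information plus positive/negative sample
# video information as center nodes, with video information as neighbors.
first_graph = combine(
    build_subgraph("account_to_count", ["video_1", "video_2"]),  # 1st sub-graph
    build_subgraph("pos_sample_video", ["video_3", "video_4"]),  # 2nd sub-graph
    build_subgraph("neg_sample_video", ["video_5", "video_6"]),  # 3rd sub-graph
)
print(len(first_graph))  # 3 center nodes
```

The second and third graph networks follow the same pattern with account-information and video-information center nodes, respectively.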
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video push apparatus including:
the code acquisition module is configured to perform the above method for determining feature codes to acquire account information feature codes corresponding to the account information of the account to be counted and video information feature codes corresponding to the video information to be counted;
the feature code determining module is configured to acquire account information of an account to be pushed, and determine an account information feature code corresponding to the account information of the account to be pushed from the account information feature codes corresponding to the account information of the account to be counted;
the video information screening module is configured to screen out target video information corresponding to the account to be pushed from the video information to be counted according to the account information feature code and the video information feature codes to be counted;
the video information pushing module is configured to execute pushing of the target video information to the account to be pushed.
In one embodiment, the feature code determination module is further configured to determine target account information matching the account information of the account to be pushed from the account information of the account to be counted; and screen out the account information feature code corresponding to the target account information from the account information feature codes corresponding to the account information of the account to be counted, as the account information feature code corresponding to the account information of the account to be pushed.
In one embodiment, the video information screening module is further configured to obtain the feature similarity between the account information feature code and the video information feature codes to be counted; and determine the video information to be counted whose feature similarity is greater than a preset threshold as the target video information corresponding to the account to be pushed.
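The screening embodiment above can be sketched as follows. Cosine similarity is assumed as the feature similarity measure (the disclosure does not fix the measure), and the threshold and vectors are illustrative:

```python
import math

def screen_target_videos(account_code, video_codes, threshold=0.5):
    """Return indices of videos to be counted whose feature similarity to
    the account information feature code exceeds the preset threshold."""
    def cosine(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
        return dot / norm
    return [i for i, v in enumerate(video_codes)
            if cosine(account_code, v) > threshold]

account = [1.0, 1.0]
videos = [[1.0, 0.9],    # similar -> candidate for pushing
          [-1.0, 0.2]]   # dissimilar -> filtered out
print(screen_target_videos(account, videos))  # [0]
```

Only the videos passing the threshold become target video information pushed to the account to be pushed.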
According to a fifth aspect of embodiments of the present disclosure, there is provided a server including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the methods as in the embodiments of the first and second aspects.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium comprising: the instructions in the storage medium, when executed by a processor of a server, enable the server to perform a method as in the embodiments of the first and second aspects.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising: computer program code which, when run by a computer, causes the computer to perform the method of the above aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
learning to obtain the account information feature code corresponding to the account information of the account to be counted and the video information feature code corresponding to the video information to be counted through a first graph network constructed based on the account information of the account to be counted, the positive sample video information, and the negative sample video information, a second graph network constructed based on the account information of the account to be counted, the account information of the positive sample account, and the account information of the negative sample account, and a third graph network constructed based on the video information to be counted, the positive sample video information, and the negative sample video information. The method comprehensively considers the graph network relationships between accounts and videos, between accounts, and between videos, which facilitates learning the account information feature code and the video information feature code to be counted from multiple dimensions, so that the obtained feature codes better reflect the features represented by the account information of the account to be counted and by the video information to be counted; the determined account information feature code and video information feature code to be counted are therefore more accurate, which further improves the accuracy of feature code determination. Meanwhile, this avoids the low feature code determination accuracy caused by taking the video information or the account information alone as independent sample data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a diagram illustrating an application environment of a method for determining feature codes according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of determining feature codes according to an example embodiment.
Fig. 3(a) is a schematic diagram of a first graph network shown according to an example embodiment.
Fig. 3(b) is a schematic diagram of a second graph network shown in accordance with an example embodiment.
Fig. 3(c) is a schematic diagram of a third graph network shown in accordance with an example embodiment.
Fig. 4 is a flowchart illustrating steps for obtaining account information feature codes and video information feature codes to be counted according to an exemplary embodiment.
FIG. 5 is a flowchart illustrating steps for determining a target feature encoding according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating steps for constructing a first graph network in accordance with an exemplary embodiment.
Fig. 7 is a flowchart illustrating steps for constructing a second graph network in accordance with an exemplary embodiment.
Fig. 8 is a diagram illustrating an application environment for a video push method according to an exemplary embodiment.
Fig. 9 is a flow diagram illustrating a video push method in accordance with an example embodiment.
Fig. 10 is a block diagram illustrating an apparatus for determining feature codes according to an example embodiment.
Fig. 11 is a block diagram illustrating a video push device according to an example embodiment.
FIG. 12 is a diagram illustrating an internal structure of a computer device, according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a diagram illustrating an application environment of a video push method according to an exemplary embodiment. Referring to fig. 1, the application environment diagram includes a server 110, and the server 110 may be implemented by an independent server or a server cluster composed of a plurality of servers. In fig. 1, the server 110 is an independent server for explanation, and referring to fig. 1, the server 110 obtains account information of an account to be counted and video information to be counted; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account; the video information to be counted is matched with corresponding positive sample video information and negative sample video information; constructing a first graph network according to the account information of the account to be counted, the positive sample video information and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the positive sample account information and the negative sample account information; constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information; and obtaining account information characteristic codes corresponding to the account information of the account to be counted and video information characteristic codes corresponding to the video information to be counted according to the first graph network, the second graph network and the third graph network.
Fig. 2 is a flowchart illustrating a method for determining a signature code according to an exemplary embodiment, where as shown in fig. 2, the method for determining a signature code is used in the server shown in fig. 1, and includes the following steps:
in step S21, account information of an account to be counted and video information to be counted are acquired; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account; the video information to be counted is matched with corresponding positive sample video information and negative sample video information.
The account refers to a registered account of a video application in the terminal, such as a registered account of a short video application, a registered account of a video browsing application, and the like. The account to be counted is an authorized account which needs to be processed and analyzed; the account information refers to information for identifying an account, such as an account name, an account number, and the like.
The video information to be counted is a plurality of pieces of preliminarily determined video information, not the video information that is finally pushed to the account; specifically, it may be the latest video information, such as video information from recent days. In an actual scenario, the video information to be counted may refer to short video information, micro-movie information, TV drama information, and the like.
The positive sample account of the account to be counted is an account similar to the account to be counted, specifically an account that has had similar video-watching interests to the account to be counted in the recent period; for example, accounts for which the number of videos recently operated on in common with the account to be counted (such as clicking, liking, commenting, and forwarding) is greater than or equal to a preset number are used as positive sample accounts of the account to be counted. The negative sample account of the account to be counted is an account irrelevant to the account to be counted, specifically an account that has had no similar video-watching interests to the account to be counted in the recent period; for example, accounts for which the number of videos recently operated on in common with the account to be counted is less than the preset number are used as negative sample accounts of the account to be counted.
The positive sample video information of the video information to be counted is similar video information of the video information to be counted, and specifically is video information with a large number of same viewing accounts with the video information to be counted; for example, the video information which is operated by the common account more than or equal to the preset number recently with the video information to be counted is taken as the positive sample video information of the video information to be counted. The negative sample video information of the video information to be counted refers to randomly negatively sampled video information, and specifically refers to video information obtained by randomly sampling all recent video information.
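The selection rules above for positive and negative sample accounts amount to a threshold split over co-interaction counts, sketched below with illustrative account names and counts:

```python
def split_sample_accounts(co_operated_counts, preset_number):
    """Accounts whose number of recently co-operated videos (clicks, likes,
    comments, forwards) reaches the preset number become positive samples;
    the rest become negative samples."""
    positives = [a for a, n in co_operated_counts.items() if n >= preset_number]
    negatives = [a for a, n in co_operated_counts.items() if n < preset_number]
    return positives, negatives

counts = {"account_a": 12, "account_b": 3, "account_c": 7}
pos, neg = split_sample_accounts(counts, preset_number=5)
print(pos, neg)  # ['account_a', 'account_c'] ['account_b']
```

Positive sample videos can be selected with the same threshold rule over common viewing accounts, while negative sample videos are drawn by random sampling.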
Specifically, the server acquires account information corresponding to an authorized account on the network based on a big data technology, and the account information is used as account information of an account to be counted; acquiring video information in a preset time period on a network, such as video information in the recent period on the network, as video information to be counted; randomly sampling account information of one positive sample account from a queue storing account information of a plurality of positive sample accounts of accounts to be counted, wherein the account information is used as the account information of the positive sample account of the account to be counted; randomly sampling account information of one negative sample account from a queue storing the account information of a plurality of negative sample accounts of the accounts to be counted, wherein the account information is used as the account information of the negative sample account of the account to be counted; randomly sampling positive sample video information from a queue of a plurality of positive sample video information storing video information to be counted, wherein the positive sample video information is used as the positive sample video information of the video information to be counted; and randomly sampling one piece of negative sample video information from the queue in which a plurality of pieces of negative sample video information are stored, wherein the negative sample video information is used as the negative sample video information of the video information to be counted.
In step S22, a first graph network is constructed according to the account information of the account to be counted, the positive sample video information, and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the account information of the positive sample account and the account information of the negative sample account; and constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information.
Wherein, a graph network refers to a data structure for characterizing relationships between data; the first graph network is used for representing the relation between the account and the video and corresponds to the account-video aggregation dimension; the central node is account information of an account to be counted, positive sample video information and negative sample video information, and the sampling neighbor is video information; as shown in fig. 3(a), account information of an account to be counted, positive sample video information, and negative sample video information are taken as central nodes, 2-hop neighbors of the central nodes are sampled, and 2 pieces of relevant video information of the nodes sampled in each hop are sampled.
The second graph network is used for representing the relationship between the account and corresponds to the account-account aggregation dimension; the central node is account information of an account to be counted, account information of a positive sample account and account information of a negative sample account, and the sampling neighbor is the account information; as shown in fig. 3(b), the account information of the account to be counted, the account information of the positive sample account, and the account information of the negative sample account are respectively used as a central node, 2 hops of neighbors of the central node are sampled, and 2 pieces of related account information of the node sampled in each hop are sampled.
The third graph network is used for representing the relation between the videos and corresponds to a video-video aggregation dimension; the central node is video information to be counted, positive sample video information and negative sample video information, and the sampling neighbor is video information; as shown in fig. 3(c), the video information to be counted, the positive sample video information, and the negative sample video information are respectively used as the central node, 2-hop neighbors of the node are sampled, and 2 pieces of relevant video information of the node sampled in each hop are sampled.
Specifically, the server takes account information of an account to be counted as a central node, and samples K-hop neighbors of the central node, wherein the neighbors are video information; for each node sampled in the K hops, N pieces of related video information are sampled, and a graph network with account information of the account to be counted as a central node is constructed; according to the same method, a graph network with positive sample video information as a central node and a graph network with negative sample video information as a central node can be constructed; obtaining a first graph network according to a graph network which takes account information of an account to be counted as a central node, a graph network which takes positive sample video information as a central node and a graph network which takes negative sample video information as a central node; taking account information of an account to be counted as a central node, and sampling a K-hop neighbor of the central node, wherein the neighbor is account information; for each node sampled in the K hops, N pieces of related account information are sampled, and a graph network with the account information of the account to be counted as a central node is constructed; according to the same method, a graph network taking account information of the positive sample accounts as a central node and a graph network taking account information of the negative sample accounts as a central node can be constructed; obtaining a second graph network according to the graph network which takes the account information of the account to be counted as a central node, the graph network which takes the account information of the positive sample account as the central node and the graph network which takes the account information of the negative sample account as the central node; taking video information to be counted as a central node, and sampling K-hop neighbors of the central node, wherein the neighbors 
are video information; for each node sampled in the K hops, N pieces of related video information are sampled, and a graph network with the video information to be counted as a central node is constructed; and obtaining a third graph network according to the graph network taking the video information to be counted as the central node, the graph network taking the positive sample video information as the central node, and the graph network taking the negative sample video information as the central node. This facilitates subsequently and comprehensively considering the graph network relationships between users and videos, between users, and between videos based on the first graph network, the second graph network, and the third graph network, so that the learned account information feature codes and video information feature codes to be counted are more accurate.
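The K-hop, N-neighbors-per-node sampling described in this step can be sketched as a breadth-first walk. Here `neighbor_fn` is a hypothetical lookup returning a node's related items, and K = N = 2 matches the figures:

```python
import random

def sample_k_hop_graph(center, neighbor_fn, k=2, n=2, seed=0):
    """Sample up to n neighbors for each node over k hops, building the
    adjacency of a graph network centered on `center`."""
    rng = random.Random(seed)
    graph = {}
    frontier = [center]
    for _ in range(k):
        next_frontier = []
        for node in frontier:
            candidates = neighbor_fn(node)
            chosen = rng.sample(candidates, min(n, len(candidates)))
            graph[node] = chosen
            next_frontier.extend(chosen)
        frontier = next_frontier
    return graph

# Toy neighbor lookup: every node has three synthetic neighbors.
toy_neighbors = lambda node: [f"{node}/{i}" for i in range(3)]
g = sample_k_hop_graph("center", toy_neighbors)
print(len(g))  # 3: the center plus its two sampled 1-hop neighbors
```

Running the same routine with account-information or video-information neighbor lookups yields the sub-graphs that the first, second, and third graph networks are assembled from.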
In step S23, according to the first graph network, the second graph network, and the third graph network, an account information feature code corresponding to the account information of the account to be counted and a video information feature code corresponding to the video information to be counted are obtained.
The account information feature coding refers to a compressed and coded low-dimensional feature vector used for representing low-level semantics of account information, the video information feature coding to be counted is also a compressed and coded low-dimensional feature vector used for representing low-level semantics of the video information to be counted, and both the feature coding and the video information feature coding are obtained through learning of a pre-trained feature coding learning model.
The pre-trained feature coding learning model is a neural network model which can perform feature extraction and feature coding on account information and video information to be counted to obtain account information feature codes corresponding to the account information and video information to be counted feature codes corresponding to the video information to be counted.
Specifically, the server inputs the information in the first graph network, the second graph network, and the third graph network into the feature code learning model respectively, and obtains, through learning of the feature code learning model, a first feature code of the account information of the account to be counted, a feature code of the positive sample video information, a feature code of the negative sample video information, a second feature code of the account information of the account to be counted, a feature code of the account information of the positive sample account, a feature code of the account information of the negative sample account, and a feature code of the video information to be counted; a target loss value is then computed from these feature codes through a preset logarithmic loss function. If the target loss value does not meet a preset condition, the network parameters of the feature code learning model are adjusted until the target loss value obtained from the feature code learning model after parameter adjustment meets the preset condition. Once the target loss value meets the preset condition, the current first feature code and second feature code are spliced to obtain the account information feature code corresponding to the account information of the account to be counted, and the current feature code of the video information to be counted is identified as the video information feature code corresponding to the video information to be counted.
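The control flow of this step — compute the target loss, adjust network parameters until the preset condition holds — can be sketched as below. The model class, its method names, and the squared-value loss are all hypothetical stand-ins:

```python
class ToyFeatureCodeModel:
    """Hypothetical stand-in for the feature code learning model."""
    def __init__(self):
        self.param = 4.0
    def forward(self):
        return self.param          # "feature codes" reduced to one number
    def adjust(self):
        self.param *= 0.5          # crude network-parameter adjustment

def train_until_condition(model, loss_fn, threshold, max_iters=100):
    """Keep adjusting parameters until the target loss value meets the
    preset condition (here: falls to or below a threshold)."""
    for _ in range(max_iters):
        codes = model.forward()
        target_loss = loss_fn(codes)
        if target_loss <= threshold:  # preset condition met
            return codes, target_loss
        model.adjust()
    return codes, target_loss

codes, loss = train_until_condition(ToyFeatureCodeModel(),
                                    loss_fn=lambda c: c * c,
                                    threshold=1.0)
print(codes, loss)  # 1.0 1.0
```

In the actual method, the final "codes" would be the first and second feature codes to splice and the video information feature code to retain, rather than a single scalar.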
In the step, the graph network relations between accounts and videos, between accounts and between videos are comprehensively considered, and account information feature codes and to-be-counted video information feature codes can be obtained through learning from multiple dimensions, so that the similarity degree between the feature codes of the account information and the video information and the feature codes of the positive samples is higher than the similarity degree between the feature codes of the account information and the video information and the feature codes of the negative samples; the method and the device are beneficial to more accurate follow-up pushing of the video to the account to be pushed, and further improve the accuracy of video pushing.
The method for determining feature codes provided by the embodiments of the present disclosure learns the account information feature code corresponding to the account information of the account to be counted and the video information feature code corresponding to the video information to be counted through a first graph network constructed based on the account information of the account to be counted, the positive sample video information, and the negative sample video information, a second graph network constructed based on the account information of the account to be counted, the account information of the positive sample account, and the account information of the negative sample account, and a third graph network constructed based on the video information to be counted, the positive sample video information, and the negative sample video information. The method comprehensively considers the graph network relationships between accounts and videos, between accounts, and between videos, which facilitates learning the account information feature code and the video information feature code to be counted from multiple dimensions, so that the obtained feature codes better reflect the features represented by the account information of the account to be counted and by the video information to be counted; the determined feature codes are therefore more accurate, which further improves the accuracy of feature code determination and avoids the low accuracy caused by taking the video information or the account information alone as independent sample data.
In an exemplary embodiment, as shown in fig. 4, the step S23 further includes the following steps:
in step S41, the information in the first graph network, the second graph network, and the third graph network is input into the feature code learning model to be trained, so as to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account, and a target feature code of the video information to be counted.
Referring to fig. 3(a), the information in the first graph network refers to account information of an account to be counted, first neighbor video information, positive sample video information, second neighbor video information, negative sample video information, and third neighbor video information; referring to fig. 3(b), the information in the second graph network refers to the first neighbor account information, the account information of the positive exemplar account, the second neighbor account information, the account information of the negative exemplar account, and the third neighbor account information; referring to fig. 3(c), the information in the third graph network refers to the video information to be counted and the fourth neighboring video information.
Specifically, the server inputs account information of an account to be counted in a first graph network into a feature code learning model to be trained, and learns to obtain a first target feature code of the account information of the account to be counted on the basis of the first graph network through the feature code learning model to be trained; the target feature code of the other information can be obtained by the same method as described above.
In step S42, a target loss value is obtained according to the first target feature code of the account information of the account to be counted, the target feature code of the positive sample video information, the target feature code of the negative sample video information, the second target feature code of the account information of the account to be counted, the target feature code of the account information of the positive sample account, the target feature code of the account information of the negative sample account, and the target feature code of the video information to be counted.
The target loss value is used to constrain the feature codes of the account information to be counted and the video information to be counted that are learned by the feature code learning model to be trained, so that their similarity to the feature codes of the positive samples is higher than their similarity to the feature codes of the negative samples.
In step S43, the network parameters of the feature coding learning model to be trained are adjusted according to the target loss value until the target loss value obtained by the feature coding learning model after the network parameter adjustment satisfies the preset condition.
That the target loss value obtained from the feature code learning model after network parameter adjustment satisfies the preset condition means that the similarity between the feature codes of the account information to be counted and the video information to be counted, as learned by the adjusted feature code learning model, and the feature codes of the positive samples is far higher than their similarity to the feature codes of the negative samples.
Specifically, if the target loss value obtained according to the feature code learning model after the network parameter adjustment does not satisfy the preset condition, the network parameter of the feature code learning model is continuously adjusted, and the above steps S41 to S42 are repeatedly performed until the target loss value obtained according to the feature code learning model after the network parameter adjustment satisfies the preset condition.
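The adjust-and-repeat loop of steps S41 through S43 can be sketched as a minimal gradient-descent loop. The toy vectors, learning rate, loss function, and threshold below are illustrative assumptions for demonstration only, not the patent's actual model or parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical stand-ins: fixed positive/negative sample codes and one
# adjustable "network parameter" u (the code being learned).
p_pos = np.array([1.0, 0.0])
p_neg = np.array([0.0, 1.0])
u = np.array([0.0, 0.5])
threshold = 0.1          # the "preset condition" on the target loss value
lr = 0.5                 # assumed learning rate

def loss(u):
    # toy target loss: push u toward p_pos and away from p_neg
    return -np.log(sigmoid(u @ (p_pos - p_neg)))

# repeat steps S41-S42 until the target loss value satisfies the condition
while loss(u) > threshold:
    grad = -(1.0 - sigmoid(u @ (p_pos - p_neg))) * (p_pos - p_neg)
    u = u - lr * grad    # adjust the network parameter
```

On exit, the learned code u is more similar to the positive sample than to the negative sample, which is exactly the condition the target loss value enforces.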
In step S44, if the target loss value obtained according to the feature code learning model after the network parameter adjustment satisfies the preset condition, the current first target feature code and the second target feature code are spliced to obtain an account information feature code corresponding to the account information of the account to be counted, and the target feature code of the current video information to be counted is identified as the video information feature code to be counted corresponding to the video information to be counted.
For example, the current first target feature code and the current second target feature code are spliced together row-wise, and the resulting feature code is used as the account information feature code corresponding to the account information of the account to be counted.
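The row-wise splicing above is a simple concatenation. A minimal sketch with hypothetical feature-code values (the vectors are illustrative, not real model outputs):

```python
import numpy as np

# Hypothetical current target feature codes (illustrative values).
first_target = np.array([0.1, 0.2, 0.3])    # learned from the first graph network
second_target = np.array([0.4, 0.5, 0.6])   # learned from the second graph network

# Splice row-wise: the account information feature code is the concatenation.
account_feature_code = np.concatenate([first_target, second_target])
```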
Further, if the target loss value obtained according to the feature coding learning model after network parameter adjustment meets the preset condition, the feature coding learning model after network parameter adjustment is used as a pre-trained feature coding learning model.
In the technical solution provided by the embodiment of the present disclosure, based on the feature coding learning model, and comprehensively considering the relationship between the account information of the account to be counted and the positive sample video information, the relationship between the account information of the account to be counted and the negative sample video information, the relationship between the account information of the account to be counted and the account information of the positive sample account, the relationship between the account information of the account to be counted and the account information of the negative sample account, the relationship between the video information to be counted and the positive sample video information, and the relationship between the video information to be counted and the negative sample video information, the account information feature code and the video information feature code to be counted obtained through learning can reflect the features represented by the account information of the account to be counted and the features represented by the video information to be counted better, and the accuracy of determining the account information feature code and the video information feature code to be counted is further improved.
In an exemplary embodiment, the step S41 further includes the following steps: extracting account information of an account to be counted, first neighbor video information, positive sample video information, second neighbor video information, negative sample video information and third neighbor video information from a first graph network; extracting first neighbor account information, account information of a positive sample account, second neighbor account information, account information of a negative sample account and third neighbor account information from a second graph network; extracting video information to be counted and fourth neighbor video information from the third graph network; respectively inputting account information of an account to be counted, first neighbor video information, positive sample video information, second neighbor video information, negative sample video information, third neighbor video information, first neighbor account information, account information of a positive sample account, second neighbor account information, account information of a negative sample account, third neighbor account information, video information to be counted and fourth neighbor video information into a feature coding learning model to be trained, and obtaining a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted.
In the technical scheme provided by the embodiment of the disclosure, the feature extraction and the feature coding are performed on the information in each graph network based on the feature coding learning model, so that the feature coding corresponding to the information in each graph network can be effectively learned, and the target loss value can be determined according to the feature coding corresponding to the information in each graph network.
In an exemplary embodiment, as shown in fig. 5, the step of respectively inputting the account information of the account to be counted, the first neighbor video information, the positive sample video information, the second neighbor video information, the negative sample video information, the third neighbor video information, the first neighbor account information, the account information of the positive sample account, the second neighbor account information, the account information of the negative sample account, the third neighbor account information, the video information to be counted, and the fourth neighbor video information into the feature coding learning model to be trained, to obtain the first target feature code of the account information of the account to be counted, the target feature code of the positive sample video information, the target feature code of the negative sample video information, the second target feature code of the account information of the account to be counted, the target feature code of the account information of the positive sample account, the target feature code of the account information of the negative sample account, and the target feature code of the video information to be counted, further comprises the following steps:
in step S51, an initial feature code of the account information of the account to be counted, an initial feature code of the first neighbor video information, an initial feature code of the positive sample video information, an initial feature code of the second neighbor video information, an initial feature code of the negative sample video information, an initial feature code of the third neighbor video information, an initial feature code of the first neighbor account information, an initial feature code of the account information of the positive sample account, an initial feature code of the second neighbor account information, an initial feature code of the account information of the negative sample account, an initial feature code of the third neighbor account information, an initial feature code of the video information to be counted, and an initial feature code of the fourth neighbor video information are obtained through the feature code learning model to be trained, respectively.
Specifically, the account information of the account to be counted is input into a feature coding learning model to be trained, the features of the account information of the account to be counted are extracted through the feature coding learning model to be trained, and feature coding is carried out on the features of the account information of the account to be counted to obtain an initial feature code of the account information of the account to be counted; the initial feature code of the other information can be obtained in the same manner as described above.
In step S52, a first target feature code of the account information of the account to be counted is obtained according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor video information; obtaining target feature codes of the positive sample video information according to the initial feature codes of the positive sample video information and the initial feature codes of the second neighbor video information; and obtaining the target feature code of the negative sample video information according to the initial feature code of the negative sample video information and the initial feature code of the third neighbor video information.
In step S53, a second target feature code of the account information of the account to be counted is obtained according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor account information; obtaining a target feature code of the account information of the positive sample account according to the initial feature code of the account information of the positive sample account and the initial feature code of the second neighbor account information; and obtaining the target feature code of the negative sample account information according to the initial feature code of the negative sample account information and the initial feature code of the third neighbor account information.
In step S54, a target feature code of the video information to be counted is obtained according to the initial feature code of the video information to be counted and the initial feature code of the fourth neighboring video information.
Specifically, the first target feature code of the account information of the account to be counted may be obtained by: adopting a neighbor node aggregation algorithm to aggregate the initial feature codes of the first neighbor video information to obtain aggregated feature codes; and splicing the feature codes after the aggregation processing and the initial feature codes of the account information of the account to be counted to obtain first target feature codes of the account information of the account to be counted.
For example, referring to fig. 3(a), assume that the first neighbor video information of the first layer is first neighbor video information 1 and first neighbor video information 2, that the first neighbor video information corresponding to first neighbor video information 1 is first neighbor video information 3 and first neighbor video information 4, and that the first neighbor video information corresponding to first neighbor video information 2 is first neighbor video information 5 and first neighbor video information 6. A linear transformation is applied to the initial feature code of first neighbor video information 3 and to the initial feature code of first neighbor video information 4, the results are passed through an activation function and then pooled to obtain the aggregated feature code corresponding to first neighbor video information 1; the aggregated feature code corresponding to first neighbor video information 1 is then spliced row-wise with, or added to, the initial feature code corresponding to first neighbor video information 1 to obtain the feature code corresponding to first neighbor video information 1. By the same method, the feature code corresponding to first neighbor video information 2 can be obtained. Processing the feature codes corresponding to first neighbor video information 1 and first neighbor video information 2 in the same way yields the aggregated feature code corresponding to the account information of the account to be counted; splicing this aggregated feature code with the initial feature code of the account information of the account to be counted yields the first target feature code of the account information of the account to be counted.
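The neighbor-node aggregation and splicing described above can be sketched as follows. The feature dimension, ReLU activation, and mean pooling are illustrative assumptions; the patent does not fix a particular activation function or pooling operation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical initial feature codes (illustrative, not real model values).
center = rng.normal(size=dim)           # account information of the account to be counted
neighbors = rng.normal(size=(2, dim))   # first neighbor video information 1 and 2
W = rng.normal(size=(dim, dim))         # learnable linear transformation

def aggregate(center_code, neighbor_codes, W):
    # linear transformation + activation function for each neighbor's initial code
    transformed = np.maximum(neighbor_codes @ W, 0.0)   # ReLU (assumed activation)
    pooled = transformed.mean(axis=0)                   # pooling over neighbors
    # splice (concatenate row-wise) the pooled neighbor code with the center's own code
    return np.concatenate([pooled, center_code])

first_target_code = aggregate(center, neighbors, W)
```

After splicing, the first target feature code has twice the dimension of the initial feature code, reflecting both the node's own features and its aggregated neighborhood.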
It should be noted that, according to the same method, the target feature code of the positive sample video information, the target feature code of the negative sample video information, the second target feature code of the account information of the account to be counted, the target feature code of the account information of the positive sample account, the target feature code of the negative sample account information, and the target feature code of the video information to be counted can be obtained.
In the technical scheme provided by the embodiment of the disclosure, the feature extraction and the feature coding are performed on the information in each graph network based on the feature coding learning model, so that the target feature coding corresponding to the information in each graph network can be effectively learned, and the target loss value can be determined according to the target feature coding corresponding to the information in each graph network.
In an exemplary embodiment, the step S42 further includes: obtaining a first loss value according to a first target feature code of account information of an account to be counted, a target feature code of positive sample video information and a target feature code of negative sample video information; obtaining a second loss value according to a second target feature code of the account information of the account to be counted, a target feature code of the positive sample account information and a target feature code of the negative sample account information; obtaining a third loss value according to the target feature code of the video information to be counted, the target feature code of the positive sample video information and the target feature code of the negative sample video information; and adding the first loss value, the second loss value and the third loss value to obtain a target loss value.
The first loss value measures the collaborative loss between accounts and videos, the second loss value measures the similarity loss between accounts, and the third loss value measures the similarity loss between videos; the target loss value comprehensively measures the collaborative loss between accounts and videos, the similarity loss between accounts, and the similarity loss between videos.
For example, the first loss value is calculated by the following formula:
L_u_p(u) = -log(sigmoid(u · p_pos - u · p_neg));

where L_u_p(u) denotes the first loss value, log the logarithmic function, sigmoid the sigmoid function, u the first target feature code of the account information of the account to be counted, p_pos the target feature code of the positive sample video information, and p_neg the target feature code of the negative sample video information.
Calculating a second loss value by the following equation:
L_u_u(u) = -log(sigmoid(u · u_pos)) + log(sigmoid(u · u_neg));

where L_u_u(u) denotes the second loss value, log the logarithmic function, sigmoid the sigmoid function, u the second target feature code of the account information of the account to be counted, u_pos the target feature code of the account information of the positive sample account, and u_neg the target feature code of the account information of the negative sample account.
Calculating a third loss value by the following equation:
L_p_p(p) = -log(sigmoid(p · p_pos)) + log(sigmoid(p · p_neg));

where L_p_p(p) denotes the third loss value, log the logarithmic function, sigmoid the sigmoid function, p the target feature code of the video information to be counted, p_pos the target feature code of the positive sample video information, and p_neg the target feature code of the negative sample video information.
Calculating a target loss value by the following formula:
L = L_u_p(u) + L_u_u(u) + L_p_p(p);

where L denotes the target loss value, L_u_p(u) the first loss value, L_u_u(u) the second loss value, and L_p_p(p) the third loss value.
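The three loss values and the target loss value above translate directly into code. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_loss(u, p_pos, p_neg):
    # L_u_p(u) = -log(sigmoid(u . p_pos - u . p_neg)): account-video collaborative loss
    return -np.log(sigmoid(np.dot(u, p_pos) - np.dot(u, p_neg)))

def second_loss(u, u_pos, u_neg):
    # L_u_u(u) = -log(sigmoid(u . u_pos)) + log(sigmoid(u . u_neg)): account-account similarity loss
    return -np.log(sigmoid(np.dot(u, u_pos))) + np.log(sigmoid(np.dot(u, u_neg)))

def third_loss(p, p_pos, p_neg):
    # L_p_p(p) = -log(sigmoid(p . p_pos)) + log(sigmoid(p . p_neg)): video-video similarity loss
    return -np.log(sigmoid(np.dot(p, p_pos))) + np.log(sigmoid(np.dot(p, p_neg)))

def target_loss(u1, u2, p, p_pos, p_neg, u_pos, u_neg):
    # L = L_u_p(u) + L_u_u(u) + L_p_p(p)
    return (first_loss(u1, p_pos, p_neg)
            + second_loss(u2, u_pos, u_neg)
            + third_loss(p, p_pos, p_neg))
```

Note that the first loss shrinks as the account code aligns with the positive sample video code relative to the negative one, which is the ranking behavior the target loss value is meant to enforce.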
According to the technical scheme provided by the embodiment of the disclosure, various loss values are comprehensively considered, which is beneficial to comprehensively measuring the similarity loss between the account and the account, the collaborative loss between the account and the video and the similarity loss between the video and the video, so that the finally output account information feature code and the to-be-counted video information feature code are more accurate, and the determination accuracy of the feature code is further improved.
In an exemplary embodiment, the step S22 further includes: acquiring first neighbor video information of an account to be counted, second neighbor video information of positive sample video information and third neighbor video information of negative sample video information; the account information of the account to be counted is taken as a center node, the first neighbor video information is taken as a neighbor node, and a first sub-graph network is constructed; constructing a second sub-graph network by taking the positive sample video information as a central node and second neighbor video information as neighbor nodes; constructing a third sub-graph network by taking the negative sample video information as a central node and taking the third neighbor video information as a neighbor node; and combining the first sub-graph network, the second sub-graph network and the third sub-graph network to obtain the first graph network.
Specifically, referring to fig. 6, the step S22 may be specifically implemented by the following steps:
in step S61, obtaining video information operated by the account to be counted as first neighbor video information of the account to be counted; and acquiring video information associated with the first neighbor video information as the first neighbor video information of the first neighbor video information until the total layer number of the first neighbor video information meets a first preset condition.
The video information operated by the account to be counted refers to video information that the account to be counted has clicked, liked, commented on, or forwarded; the video information associated with the first neighbor video information refers to video information that shares a large number of viewing accounts with the first neighbor video information; the first preset condition specifies the total number of layers of the first neighbor video information, the second neighbor video information, or the third neighbor video information; as shown in fig. 3(a), the first preset condition is 2.
Exemplarily, referring to fig. 3(a), the server obtains the video information that the account to be counted has clicked, liked, commented on, or forwarded as the video information operated by the account to be counted; randomly samples 2 pieces of video information from the video information operated by the account to be counted as the first neighbor video information of the account to be counted; acquires the video information associated with the first neighbor video information and randomly samples 2 pieces of video information from it as the first neighbor video information of the first neighbor video information; and repeats these steps until the total layer number of the first neighbor video information meets the first preset condition.
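The layer-by-layer sampling of 2 neighbors per node can be sketched as follows. The association map and video identifiers are illustrative assumptions standing in for the server's real "shares many viewing accounts" relation:

```python
import random

random.seed(42)

# Hypothetical association data: video -> videos sharing many viewing accounts.
associated = {
    "v1": ["v3", "v4", "v7"], "v2": ["v5", "v6", "v8"],
    "v3": ["v9", "v10"], "v4": ["v11", "v12"],
    "v5": ["v13", "v14"], "v6": ["v15", "v16"],
}
# Videos the account to be counted has clicked, liked, commented on, or forwarded.
operated_by_account = ["v1", "v2", "v17"]

def sample_neighbors(candidates, k=2):
    # randomly sample up to k pieces of video information
    return random.sample(candidates, min(k, len(candidates)))

def build_layers(seed_videos, total_layers=2, k=2):
    # repeat sampling until the total layer number meets the preset condition
    layers = []
    frontier = sample_neighbors(seed_videos, k)   # layer 1: first neighbor videos
    layers.append(frontier)
    for _ in range(total_layers - 1):
        nxt = []
        for v in frontier:
            nxt.extend(sample_neighbors(associated.get(v, []), k))
        layers.append(nxt)                        # next layer of neighbor videos
        frontier = nxt
    return layers

layers = build_layers(operated_by_account)
```

With the first preset condition equal to 2, sampling stops after two layers, matching the two-layer neighborhood shown in fig. 3(a).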
In step S62, video information associated with the positive sample video information is acquired as second neighbor video information of the positive sample video information; and acquiring video information associated with the second neighbor video information as the second neighbor video information of the second neighbor video information until the total layer number of the second neighbor video information meets the first preset condition.
The video information associated with the positive sample video information refers to video information that shares a large number of viewing accounts with the positive sample video information, and the video information associated with the second neighbor video information refers to video information that shares a large number of viewing accounts with the second neighbor video information.
Exemplarily, referring to fig. 3(a), the server acquires video information associated with the positive sample video information, and randomly samples 2 pieces of video information from the video information associated with the positive sample video information as second neighbor video information of the positive sample video information; and acquiring video information associated with second neighbor video information, and randomly sampling 2 pieces of video information from the video information associated with the second neighbor video information to serve as the second neighbor video information of the second neighbor video information until the total layer number of the second neighbor video information meets a first preset condition.
In step S63, video information associated with the negative sample video information is acquired as third neighbor video information of the negative sample video information; and acquiring video information associated with the third neighbor video information as the third neighbor video information of the third neighbor video information until the total layer number of the third neighbor video information meets the first preset condition.
The video information associated with the negative sample video information refers to video information that shares a large number of viewing accounts with the negative sample video information, and the video information associated with the third neighbor video information refers to video information that shares a large number of viewing accounts with the third neighbor video information.
Exemplarily, referring to fig. 3(a), the server acquires video information associated with negative sample video information, and randomly samples 2 pieces of video information from the video information associated with the negative sample video information as third neighbor video information of the negative sample video information; and acquiring video information associated with the third neighbor video information, and randomly sampling 2 pieces of video information from the video information associated with the third neighbor video information to serve as the third neighbor video information of the third neighbor video information until the total layer number of the third neighbor video information meets a first preset condition.
In step S64, a first sub-graph network is constructed with account information of the account to be counted as a central node and the first neighboring video information as a neighboring node.
In step S65, a second sub-graph network is constructed with the positive sample video information as a central node and the second neighboring video information as neighboring nodes.
In step S66, a third sub-graph network is constructed with the negative sample video information as a central node and the third neighboring video information as a neighboring node.
In step S67, the first sub-graph network, the second sub-graph network, and the third sub-graph network are combined to obtain the first graph network.
Exemplarily, referring to fig. 3(a), the server uses the account information of the account to be counted as a central node, and uses each first neighbor video information as each neighbor node, thereby constructing a first sub-graph network; taking the positive sample video information as a central node, and taking each second neighbor video information as each neighbor node respectively, so as to construct a second sub-graph network; taking the negative sample video information as a central node, and taking each third neighbor video information as each neighbor node respectively, thereby constructing a third sub-graph network; and splicing the first sub-graph network, the second sub-graph network and the third sub-graph network together to obtain the first graph network.
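Steps S64 through S67 can be sketched with simple adjacency dictionaries; the node names below are illustrative placeholders, not the patent's actual identifiers:

```python
# Build a sub-graph network: one central node with its neighbor nodes.
def build_subgraph(center, neighbors):
    return {center: list(neighbors)}

sub1 = build_subgraph("account_u", ["video_n1", "video_n2"])    # S64: first sub-graph
sub2 = build_subgraph("video_pos", ["video_n3", "video_n4"])    # S65: second sub-graph
sub3 = build_subgraph("video_neg", ["video_n5", "video_n6"])    # S66: third sub-graph

# S67: combine (splice) the sub-graph networks into the first graph network.
def combine(*subgraphs):
    graph = {}
    for g in subgraphs:
        for node, nbrs in g.items():
            graph.setdefault(node, []).extend(nbrs)
    return graph

first_graph_network = combine(sub1, sub2, sub3)
```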
According to the technical scheme provided by the embodiment of the disclosure, the first graph network is constructed, so that the graph network relationship between the account and the video can be accurately represented, the more accurate account information feature code of the account to be counted can be obtained conveniently based on the first graph network learning, and the similarity degree between the account information feature code of the account to be counted and the feature code of the positive sample video information is higher than the similarity degree between the account information feature code of the account to be counted and the feature code of the negative sample video information.
In an exemplary embodiment, the step S22 further includes: acquiring first neighbor account information of an account to be counted, second neighbor account information of a positive sample account and third neighbor account information of a negative sample account; the account information of the account to be counted is taken as a center node, the first neighbor account information is taken as a neighbor node, and a fourth sub-graph network is constructed; the account information of the positive sample account is used as a center node, and the second neighbor account information is used as a neighbor node to construct a fifth sub-graph network; constructing a sixth sub-graph network by taking account information of the negative sample account as a center node and taking third neighbor account information as a neighbor node; and combining the fourth sub-graph network, the fifth sub-graph network and the sixth sub-graph network to obtain a second graph network.
Specifically, referring to fig. 7, the step S22 may be specifically implemented by the following steps:
in step S71, account information associated with the account to be counted is obtained as first neighbor account information of the account to be counted; and acquiring account information associated with the first neighbor account information as the first neighbor account information of the first neighbor account information until the total layer number of the first neighbor account information meets a first preset condition.
The account information associated with the account to be counted refers to account information corresponding to accounts whose recent video viewing interests are similar to those of the account to be counted, and the account information associated with the first neighbor account information refers to account information corresponding to accounts whose recent video viewing interests are similar to those of the first neighbor account information. The first preset condition is likewise used to specify the total number of layers of the first neighbor account information, the second neighbor account information, or the third neighbor account information; as shown in fig. 3(b), the first preset condition is 2.
Exemplarily, referring to fig. 3(b), the server obtains account information corresponding to accounts whose recent video viewing interests are similar to those of the account to be counted, as the account information associated with the account to be counted; randomly samples 2 pieces of account information from the account information associated with the account to be counted as the first neighbor account information of the account to be counted; acquires the account information associated with the first neighbor account information and randomly samples 2 pieces of account information from it as the first neighbor account information of the first neighbor account information; and repeats these steps until the total layer number of the first neighbor account information meets the first preset condition.
In step S72, acquiring account information associated with the positive sample account as second neighbor account information of the positive sample account; and acquiring account information associated with the second neighbor account information as the second neighbor account information of the second neighbor account information until the total layer number of the second neighbor account information meets the first preset condition.
The account information associated with the positive sample account refers to the account information of accounts whose recent video viewing interests are similar to those of the positive sample account, and the account information associated with the second neighbor account information refers to the account information of accounts whose recent video viewing interests are similar to those of the second neighbor account information.
Illustratively, referring to fig. 3(b), the server obtains account information corresponding to accounts whose recent video viewing interests are similar to those of the positive sample account, as the account information associated with the positive sample account; randomly samples 2 pieces of account information from the account information associated with the positive sample account as second neighbor account information of the positive sample account; acquires the account information associated with each piece of second neighbor account information and randomly samples 2 pieces of it as the second neighbor account information of that second neighbor account information; and repeats these steps until the total number of layers of second neighbor account information meets the first preset condition.
In step S73, account information associated with the negative sample account is obtained as third neighbor account information of the negative sample account; and acquiring account information associated with the third neighbor account information as the third neighbor account information of the third neighbor account information until the total layer number of the third neighbor account information meets the first preset condition.
The account information associated with the negative sample account refers to the account information of accounts whose recent video viewing interests are similar to those of the negative sample account, and the account information associated with the third neighbor account information refers to the account information of accounts whose recent video viewing interests are similar to those of the third neighbor account information.
Illustratively, referring to fig. 3(b), the server obtains account information corresponding to accounts whose recent video viewing interests are similar to those of the negative sample account, as the account information associated with the negative sample account; randomly samples 2 pieces of account information from the account information associated with the negative sample account as third neighbor account information of the negative sample account; acquires the account information associated with each piece of third neighbor account information and randomly samples 2 pieces of it as the third neighbor account information of that third neighbor account information; and repeats these steps until the total number of layers of third neighbor account information meets the first preset condition.
In step S74, a fourth sub-graph network is constructed with account information of the account to be counted as a central node and the first neighbor account information as a neighbor node.
In step S75, a fifth sub-graph network is constructed with the account information of the positive sample account as the center node and the second neighbor account information as the neighbor nodes.
In step S76, a sixth sub-graph network is constructed with the account information of the negative sample account as a central node and the third neighbor account information as a neighbor node.
In step S77, the fourth sub-graph network, the fifth sub-graph network, and the sixth sub-graph network are combined to obtain the second graph network.
Illustratively, referring to fig. 3(b), the server takes the account information of the account to be counted as the central node and each piece of first neighbor account information as a neighbor node, thereby constructing the fourth sub-graph network; takes the account information of the positive sample account as the central node and each piece of second neighbor account information as a neighbor node, thereby constructing the fifth sub-graph network; takes the account information of the negative sample account as the central node and each piece of third neighbor account information as a neighbor node, thereby constructing the sixth sub-graph network; and splices the fourth sub-graph network, the fifth sub-graph network, and the sixth sub-graph network together to obtain the second graph network.
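Steps S74 through S77 can be sketched with simple edge sets; representing each sub-graph network as a star (one edge from the central node to each neighbor node) and all node names are assumptions made purely for illustration:

```python
def build_subgraph(center, neighbors):
    """Star sub-graph: an edge from the central node to each neighbor node."""
    return {(center, n) for n in neighbors}

def combine(*subgraphs):
    """Splice sub-graph networks together by merging their edge sets."""
    graph = set()
    for sg in subgraphs:
        graph |= sg
    return graph

fourth = build_subgraph("account_to_count", ["n1", "n2"])  # step S74
fifth = build_subgraph("positive_account", ["n3", "n4"])   # step S75
sixth = build_subgraph("negative_account", ["n5", "n6"])   # step S76
second_graph = combine(fourth, fifth, sixth)               # step S77
```

In practice a graph library (or a multi-layer adjacency structure carrying the sampled layers) would replace the bare edge sets, but the splicing is still a union of the three sub-graphs.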
According to the technical scheme provided by the embodiment of the disclosure, the second graph network is constructed, so that the graph network relationship between the account and the account can be accurately characterized, the more accurate account information feature code of the account to be counted can be obtained conveniently through subsequent learning based on the second graph network, and the similarity degree between the account information feature code of the account to be counted and the feature code of the account information of the positive sample account is higher than the similarity degree between the account information feature code of the account to be counted and the feature code of the account information of the negative sample account.
In an exemplary embodiment, the step S22 further includes: acquiring fourth neighbor video information of the video information to be counted; constructing a seventh sub-graph network with the video information to be counted as the central node and the fourth neighbor video information as neighbor nodes; and combining the second sub-graph network, the third sub-graph network, and the seventh sub-graph network to obtain the third graph network. The fourth neighbor video information of the video information to be counted is video information associated with the video information to be counted, specifically, video information sharing a large number of the same viewing accounts with the video information to be counted.
Specifically, the server acquires video information associated with the video information to be counted as fourth neighbor video information of the video information to be counted; acquires video information associated with the fourth neighbor video information as the fourth neighbor video information of that fourth neighbor video information, until the total number of layers of fourth neighbor video information meets the first preset condition; constructs the seventh sub-graph network with the video information to be counted as the central node and the fourth neighbor video information as neighbor nodes; and combines the second sub-graph network, the third sub-graph network, and the seventh sub-graph network to obtain the third graph network. The video information associated with the fourth neighbor video information is video information sharing a large number of the same viewing accounts with the fourth neighbor video information; the first preset condition also limits the total number of layers of the fourth neighbor video information, and as shown in fig. 3(c), the first preset condition is 2.
Exemplarily, referring to fig. 3(c), the server acquires video information having a large number of the same viewing accounts as the video information to be counted as video information associated with the video information to be counted; randomly sampling 2 pieces of video information from the video information associated with the video information to be counted, wherein the video information is used as fourth neighbor video information of the video information to be counted; acquiring video information associated with fourth neighboring video information, and randomly sampling 2 pieces of video information from the video information associated with the fourth neighboring video information to serve as the fourth neighboring video information of the fourth neighboring video information; repeating the steps until the total layer number of the fourth neighboring video information meets the first preset condition; taking the video information to be counted as a central node, and taking each fourth neighbor video information as each neighbor node respectively, so as to construct a seventh sub-graph network; and splicing the second sub-graph network, the third sub-graph network and the seventh sub-graph network together to obtain a third graph network.
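One plausible way to derive the "videos sharing a large number of the same viewing accounts" association from watch logs is sketched below. The `(account, video)` log format and the `min_shared` threshold are assumptions for the example, not details from the disclosure:

```python
from collections import defaultdict

def co_view_neighbors(watch_logs, target_video, min_shared=2):
    """Return videos whose viewer sets overlap the target video's
    viewer set in at least `min_shared` accounts."""
    viewers = defaultdict(set)
    for account, video in watch_logs:
        viewers[video].add(account)
    target_viewers = viewers[target_video]
    return {
        video for video, accounts in viewers.items()
        if video != target_video and len(accounts & target_viewers) >= min_shared
    }

# Hypothetical watch logs: (viewing account, video watched).
logs = [("a1", "v0"), ("a2", "v0"), ("a3", "v0"),
        ("a1", "v1"), ("a2", "v1"),   # v1 shares 2 viewers with v0
        ("a3", "v2")]                 # v2 shares only 1 viewer with v0
```

The resulting neighbor set would then feed the same layer-wise random sampling used for the account sub-graphs.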
According to the technical scheme provided by the embodiment of the disclosure, the third graph network is constructed, so that the graph network relationship between the video and the video can be accurately represented, the more accurate characteristic code of the video information to be counted can be conveniently obtained based on the third graph network learning, and the similarity degree between the characteristic code of the video information to be counted and the characteristic code of the positive sample video information is higher than the similarity degree between the characteristic code of the video information to be counted and the characteristic code of the negative sample video information.
Fig. 8 is an application environment diagram illustrating a video push method according to an exemplary embodiment. Referring to fig. 8, the application environment diagram includes a terminal 810 and a server 820, and the terminal 810 and the server 820 are connected through a network. The terminal 810 is an electronic device with a video viewing function, and the electronic device may be a smart phone, a tablet computer, a notebook computer, or the like; the server 820 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers. In fig. 8, taking the terminal 810 as a smart phone for example to explain, referring to fig. 8, the server 820 obtains an account information feature code corresponding to account information of an account to be counted and a video information feature code corresponding to video information to be counted; for example, the server 820 obtains account information of an account to be counted and video information to be counted; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account; the video information to be counted is matched with corresponding positive sample video information and negative sample video information; constructing a first graph network according to the account information of the account to be counted, the positive sample video information and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the positive sample account information and the negative sample account information; constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information; and obtaining account information characteristic codes corresponding to the account information of the 
account to be counted and video information characteristic codes corresponding to the video information to be counted according to the first graph network, the second graph network, and the third graph network. Then, the server 820 obtains the account information of the account currently logged in on the terminal 810 as the account information of the account to be pushed; determines the account information characteristic code corresponding to the account information of the account to be pushed from the account information characteristic codes corresponding to the account information of the accounts to be counted; screens out target video information corresponding to the account to be pushed from the video information to be counted according to that account information characteristic code and the video information characteristic codes to be counted; and pushes the target video information to the terminal 810, which displays the target video information through its terminal interface for the account to be pushed to watch.
Fig. 9 is a flowchart illustrating a video push method according to an exemplary embodiment, where, as shown in fig. 9, the video push method is used in the server shown in fig. 8, and includes the following steps:
In step S91, according to the feature code determination method described above, the account information feature code corresponding to the account information of the account to be counted and the video information feature code corresponding to the video information to be counted are obtained.
It should be noted that the specific manner of obtaining the account information feature code corresponding to the account information of the account to be counted and the video information feature code corresponding to the video information to be counted has been described in detail in the foregoing method embodiments and is not repeated here.
In step S92, account information of the account to be pushed is obtained, and the account information feature code corresponding to the account information of the account to be pushed is determined from the account information feature code corresponding to the account information of the account to be counted.
Specifically, the server acquires the account information of the account currently logged in on the terminal as the account information of the account to be pushed, and matches it against the account information of the accounts to be counted; if a match is found, the account information feature code corresponding to the matched account information of the account to be counted is identified as the account information feature code corresponding to the account information of the account to be pushed. This facilitates subsequently screening out target video information for the account to be pushed from the video information to be counted based on that account information feature code and the video information feature codes to be counted, and then pushing the target video information to the account to be pushed, which further improves the accuracy of video pushing and effectively improves the click-through rate of videos.
Illustratively, in a scenario where a short video application is open, the account to be pushed refers to the viewing account of the short video application, and the video information to be counted refers to preselected short videos pushed to the short video playing interface of the terminal; in a scenario where the short video application is closed, the account to be pushed likewise refers to the viewing account of the short video application, and the video information to be counted refers to preselected short videos pushed to the terminal in the form of reminder notification messages.
In step S93, target video information corresponding to the account to be pushed is screened from the video information to be counted according to the account information feature code and the video information feature code to be counted.
The target video information refers to video information pushed to an account to be pushed.
Specifically, the server counts the feature similarity between the account information feature code and the video information feature code to be counted, and determines the video information corresponding to the account to be pushed from the video information to be counted as the target video information according to the feature similarity.
The feature similarity is used to measure the similarity between the features represented by the account information feature code and the features represented by the video information feature codes to be counted; generally, the higher the feature similarity, the more similar those features are, which indicates that the account corresponding to the account information feature code is more interested in the corresponding video information to be counted. The feature similarity is also used to determine the order in which the video information to be counted is pushed to the account; generally, the higher the feature similarity, the earlier the corresponding video information to be counted is pushed to the account.
For example, the server computes the cosine similarity between the account information feature code and each video information feature code to be counted as their feature similarity, and takes the video information to be counted whose feature similarity meets a preset condition, for example the video information to be counted with the maximum feature similarity, as the target video information corresponding to the account to be pushed. By comprehensively considering the feature similarity between the account information feature code and the video information feature codes to be counted, the determined target video information is more accurate; subsequently pushing this accurately determined target video information to the account to be pushed achieves accurate pushing of video information, further improves the accuracy of video pushing, and improves the click-through rate of video information.
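The cosine-similarity screening of step S93 can be sketched as follows; the vector representations of the feature codes and all identifiers are assumptions for the example, and a real system would compute this over batched vectors rather than one pair at a time:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature-code vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pick_target(account_code, video_codes):
    """Pick the video whose feature code is most similar to the account's."""
    return max(video_codes, key=lambda vid: cosine(account_code, video_codes[vid]))

account = [1.0, 0.0, 1.0]
videos = {"v1": [1.0, 0.0, 0.9],   # nearly aligned with the account code
          "v2": [0.0, 1.0, 0.0]}   # orthogonal to the account code
```

Here `pick_target(account, videos)` selects `"v1"`, matching the rule that the video with the maximum feature similarity is the target video information.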
In step S94, the target video information is pushed to the account to be pushed.
Specifically, the server acquires a terminal identifier corresponding to the account to be pushed, and pushes target video information corresponding to the account to be pushed to a terminal corresponding to the terminal identifier according to a preset frequency, and the video information with higher feature similarity is displayed through a terminal interface, so that the interest requirement of the account to be pushed currently logged in by the terminal is met, and accurate pushing of the video information is realized.
Further, the server may convert the target video information corresponding to the account to be pushed into video information corresponding to a preset push mode and push it to the account to be pushed. The preset push mode refers to a typesetting and special-effect mode of the video information, such as a movie mode, a vintage mode, or a filter mode.
Illustratively, in a short video application program opening scene, a terminal responds to a trigger operation of an account to be pushed on a short video playing interface, generates a pushing request and sends the pushing request to a corresponding server; the server analyzes the received push request to obtain an account to be pushed, which is currently logged in by the terminal; and pushing video information corresponding to the account to be pushed currently logged in by the terminal to the terminal from the video information to be counted.
Illustratively, in a closed scene of the short video application program, the server pushes video information corresponding to an account to be pushed currently logged in by the terminal to the terminal in the form of a notification message, and displays the video information corresponding to the account to be pushed currently logged in by the terminal through a notification message column of the terminal.
The video pushing method provided by the embodiment of the disclosure learns to obtain account information feature codes corresponding to account information of an account to be counted and account information feature codes corresponding to video information to be counted through a first graph network constructed based on the account information of the account to be counted, positive sample video information and negative sample video information, a second graph network constructed based on the account information of the account to be counted, the positive sample account information and the negative sample account information, and a third graph network constructed based on the video information to be counted, the positive sample video information and the negative sample video information; then, determining account information characteristic codes corresponding to the account information of the account to be pushed from the account information characteristic codes corresponding to the account information of the account to be counted, screening target video information corresponding to the account to be pushed from the video information to be counted by combining the video information characteristic codes to be counted, and finally pushing the target video information to the account to be pushed; the method comprehensively considers the graph network relationship between the account and the video, between the account and between the video and the video, and is beneficial to obtaining account information feature codes and video information feature codes to be counted from multi-dimension learning, so that the obtained account information feature codes and the video information feature codes to be counted can reflect the features represented by the account information of the account to be counted and the features represented by the video information to be counted better, the accuracy of the determined account information feature codes and the video information feature 
codes to be counted is higher, the video which is pushed to the account to be pushed based on the determined account information feature codes and the video information feature codes to be counted subsequently is more accurate, and the accuracy of video pushing is improved; meanwhile, the account information characteristic code corresponding to the account information of the account to be pushed is determined from the learned account information characteristic codes, then the target video information corresponding to the account to be pushed is screened from the video information to be counted by combining the video information characteristic code to be counted, and then the target video information is pushed to the account to be pushed, so that the accuracy of video pushing is further improved, and the click rate of the video is effectively improved.
In an exemplary embodiment, the step S92 further includes: determining target account information matched with the account information of the account to be pushed from the account information of the account to be counted; and screening out the account information characteristic code corresponding to the target account information from the account information characteristic code corresponding to the account information of the account to be counted, wherein the account information characteristic code is used as the account information characteristic code corresponding to the account information of the account to be pushed.
Specifically, the server matches the account information of the account to be pushed with the account information of the account to be counted, and if the account information of the account to be pushed is successfully matched with the account information of the account to be counted, the account information of the account to be counted is used as target account information; screening out an account information characteristic code corresponding to the target account information from account information characteristic codes corresponding to accounts to be counted, and taking the account information characteristic code as the account information characteristic code corresponding to the account information of the account to be pushed.
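The matching and screening just described amounts to a lookup of a pre-computed feature code; a minimal sketch, with the mapping structure and identifiers assumed for illustration:

```python
def lookup_feature_code(push_account, counted_codes):
    """Match the account to be pushed against the counted accounts and
    return the feature code of the matched target account (None if no match)."""
    return counted_codes.get(push_account)

# Hypothetical pre-computed account information feature codes,
# keyed by counted-account identifier.
codes = {"u1": [0.1, 0.2], "u2": [0.3, 0.4]}
```

Production systems would typically key such a store by a stable account identifier and fall back to a cold-start strategy when no counted account matches.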
According to the technical scheme provided by the embodiment of the disclosure, the account information feature code corresponding to the account information of the account to be pushed is determined from the account information feature code corresponding to the account information of the account to be counted, which is obtained in advance, so that the account information feature code corresponding to the account information of the account to be pushed is beneficial to pushing more accurate video information for the account to be pushed, and accurate pushing of the video information is realized.
In an exemplary embodiment, the step S93 further includes: acquiring the feature similarity between the account information feature code and the video information feature code to be counted; and determining the video information to be counted with the characteristic similarity larger than a preset threshold as target video information corresponding to the account to be pushed.
Specifically, the server computes the cosine similarity between the account information feature code and each video information feature code to be counted as their feature similarity; the video information to be counted whose feature similarity is greater than the preset threshold is screened out as the target video information corresponding to the account to be pushed, so that only such video information is pushed to the account to be pushed, thereby achieving accurate pushing of video information.
For example, if the account information feature code is v1 and the video information feature code to be counted is v2, the feature similarity between the account information feature code and the video information feature code to be counted is the cosine similarity:

sim(v1, v2) = (v1 · v2) / (‖v1‖ × ‖v2‖)
in an exemplary embodiment, the step S94 further includes: sequencing target video information corresponding to the account to be pushed according to the feature similarity to obtain sequenced target video information; and pushing the sequenced target video information to an account to be pushed according to a preset frequency.
The preset frequency refers to a frequency of pushing target video information to the terminal, for example, 10 items of target video information are pushed every minute.
Specifically, the server sorts the target video information corresponding to the account to be pushed in descending order of feature similarity to obtain the sorted target video information, and pushes the sorted target video information to the terminal corresponding to the account to be pushed at the preset frequency; the sorted target video information is displayed through the terminal interface, so that the account to be pushed currently logged in on the terminal can conveniently watch it.
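The sort-then-push-at-a-preset-frequency behavior can be sketched as ranking followed by batching; the score mapping, batch size, and identifiers are assumptions for the example (the patent's "10 items per minute" corresponds to a batch size of 10 per one-minute interval):

```python
def schedule_pushes(similarities, batch_size=10):
    """Sort videos by feature similarity (descending) and split them into
    batches, one batch per push interval of the preset frequency."""
    ranked = sorted(similarities, key=similarities.get, reverse=True)
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]

# Hypothetical feature-similarity scores for the target videos.
scores = {"v1": 0.9, "v2": 0.75, "v3": 0.6}
batches = schedule_pushes(scores, batch_size=2)  # 2 items per interval
```

Each batch would then be delivered to the terminal once per interval, so the most similar videos reach the user first.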
According to the technical scheme provided by the embodiment of the disclosure, the target video information sorted according to the feature similarity is pushed to the terminal, so that the accurate pushing of the video information is facilitated, and the accuracy of the video pushing is improved; meanwhile, the click rate of the video is improved.
Fig. 10 is a block diagram illustrating an apparatus for determining feature codes according to an example embodiment. Referring to fig. 10, the apparatus includes an information acquisition module 1010, a graph network construction module 1020, and a feature code acquisition module 1030.
The information acquisition module 1010 is configured to perform acquisition of account information of an account to be counted and video information to be counted; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account; the video information to be counted is matched with corresponding positive sample video information and negative sample video information;
a graph network construction module 1020 configured to execute construction of a first graph network according to account information of an account to be counted, the positive sample video information, and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the account information of the positive sample account and the account information of the negative sample account; constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information;
the feature code obtaining module 1030 is configured to execute the first graph network, the second graph network, and the third graph network to obtain the account information feature code corresponding to the account information of the account to be counted and the video information feature code corresponding to the video information to be counted.
The device for determining the feature code comprehensively considers the graph network relationships between accounts and videos, between accounts and accounts, and between videos and videos, which facilitates learning the account information feature code and the video information feature codes to be counted from multiple dimensions, so that the obtained feature codes better reflect the features represented by the account information of the account to be counted and by the video information to be counted. The accuracy of the determined account information feature code and video information feature codes to be counted is therefore higher, further improving the accuracy of feature code determination and avoiding the lower accuracy caused by treating each piece of video information or account information as a single independent piece of sample data.
In an exemplary embodiment, the feature code obtaining module 1030 is further configured to input information in the first graph network, the second graph network, and the third graph network into a feature code learning model to be trained, so as to obtain a first target feature code of account information of an account to be counted, a target feature code of positive sample video information, a target feature code of negative sample video information, a second target feature code of account information of the account to be counted, a target feature code of account information of a positive sample account, a target feature code of account information of a negative sample account, and a target feature code of video information to be counted; obtaining a target loss value according to the first target feature code of the account information of the account to be counted, the target feature code of the positive sample video information, the target feature code of the negative sample video information, the second target feature code of the account information of the account to be counted, the target feature code of the account information of the positive sample account, the target feature code of the account information of the negative sample account and the target feature code of the video information to be counted; adjusting the network parameters of the feature code learning model to be trained according to the target loss value until the target loss value obtained according to the feature code learning model after network parameter adjustment meets the preset condition; and if the target loss value obtained according to the feature code learning model after network parameter adjustment meets the preset condition, splicing the current first target feature code and the second target feature code to obtain the account information feature code corresponding to the account information of the account to be counted, and identifying the current target feature code of the video information to be counted as the video information feature code corresponding to the video information to be counted.
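The adjust-until-converged step and the final splicing can be sketched as follows. This is a minimal sketch, not the patented model: `ToyModel`, its quadratic loss, the learning rate, and the stopping threshold are all illustrative stand-ins for the feature code learning model and its unspecified preset condition.

```python
import numpy as np

class ToyModel:
    """Stand-in for the feature code learning model to be trained:
    a single parameter w with a quadratic 'target loss' (illustrative)."""
    def __init__(self):
        self.w = 5.0

    def target_loss(self):
        return self.w ** 2

    def adjust_parameters(self, lr=0.1):
        self.w -= lr * 2 * self.w  # gradient step on w**2

def train_until(model, threshold=1e-3, max_steps=1000):
    """Adjust network parameters until the target loss meets the
    preset condition (here: falls below a fixed threshold)."""
    for _ in range(max_steps):
        if model.target_loss() < threshold:
            break
        model.adjust_parameters()
    return model.target_loss()

model = ToyModel()
final_loss = train_until(model)

# Once the condition holds, splice (concatenate) the current first and
# second target feature codes of the account into one account information
# feature code; the vectors here are illustrative placeholders.
first_target_code = np.array([1.0, 2.0])
second_target_code = np.array([3.0, 4.0])
account_feature_code = np.concatenate([first_target_code, second_target_code])
```

The video information feature code needs no splicing: the current target feature code of the video is kept as-is once training stops.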
In an exemplary embodiment, the feature code obtaining module 1030 is further configured to extract account information of an account to be counted, first neighbor video information, positive sample video information, second neighbor video information, negative sample video information, and third neighbor video information from the first graph network; extracting first neighbor account information, account information of a positive sample account, second neighbor account information, account information of a negative sample account and third neighbor account information from a second graph network; extracting video information to be counted and fourth neighbor video information from the third graph network; respectively inputting account information of an account to be counted, first neighbor video information, positive sample video information, second neighbor video information, negative sample video information, third neighbor video information, first neighbor account information, account information of a positive sample account, second neighbor account information, account information of a negative sample account, third neighbor account information, video information to be counted and fourth neighbor video information into a feature coding learning model to be trained, and obtaining a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted.
In an exemplary embodiment, the feature code obtaining module 1030 is further configured to obtain, through the feature code learning model to be trained, initial feature codes of account information of an account to be counted, initial feature codes of first neighbor video information, initial feature codes of positive sample video information, initial feature codes of second neighbor video information, initial feature codes of negative sample video information, initial feature codes of third neighbor video information, initial feature codes of first neighbor account information, initial feature codes of account information of a positive sample account, initial feature codes of second neighbor account information, initial feature codes of account information of a negative sample account, initial feature codes of third neighbor account information, initial feature codes of video information to be counted and initial feature codes of fourth neighbor video information; obtaining a first target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor video information; obtaining target feature codes of the positive sample video information according to the initial feature codes of the positive sample video information and the initial feature codes of the second neighbor video information; obtaining target feature codes of the negative sample video information according to the initial feature codes of the negative sample video information and the initial feature codes of the third neighbor video information; obtaining a second target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor account information; obtaining a target feature code of the
account information of the positive sample account according to the initial feature code of the account information of the positive sample account and the initial feature code of the second neighbor account information; obtaining a target feature code of the negative sample account information according to the initial feature code of the negative sample account information and the initial feature code of the third neighbor account information; and obtaining the target feature code of the video information to be counted according to the initial feature code of the video information to be counted and the initial feature code of the fourth neighbor video information.
In an exemplary embodiment, the feature code obtaining module 1030 is further configured to perform aggregation processing on the initial feature codes of the first neighbor video information, so as to obtain feature codes after aggregation processing; and splicing the feature codes after the aggregation processing and the initial feature codes of the account information of the account to be counted to obtain first target feature codes of the account information of the account to be counted.
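The aggregation-then-splicing operation above can be sketched as follows, assuming element-wise mean as the aggregation function (the embodiment does not fix a particular aggregator) and illustrative low-dimensional vectors:

```python
import numpy as np

def aggregate_and_splice(center_code, neighbor_codes):
    """Aggregate the neighbors' initial feature codes (here: element-wise
    mean) and splice the result onto the center node's initial code."""
    aggregated = np.mean(neighbor_codes, axis=0)      # aggregation step
    return np.concatenate([center_code, aggregated])  # splicing step

account_code = np.array([0.2, 0.5])           # initial code of the account information
neighbor_video_codes = np.array([[0.4, 0.0],  # initial codes of the
                                 [0.0, 0.8]]) # first neighbor videos
first_target_code = aggregate_and_splice(account_code, neighbor_video_codes)
# first_target_code has twice the dimension of an initial code
```

The same aggregate-and-splice pattern applies to every center node and its neighbors in the three graph networks; only the choice of aggregator is an assumption here.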
In an exemplary embodiment, the feature code obtaining module 1030 is further configured to obtain a first loss value according to the first target feature code of the account information of the account to be counted, the target feature code of the positive sample video information and the target feature code of the negative sample video information; obtaining a second loss value according to a second target feature code of the account information of the account to be counted, a target feature code of the positive sample account information and a target feature code of the negative sample account information; obtaining a third loss value according to the target feature code of the video information to be counted, the target feature code of the positive sample video information and the target feature code of the negative sample video information; and obtaining a target loss value according to the first loss value, the second loss value and the third loss value.
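Each loss value above is built from an (anchor, positive, negative) triple of feature codes, which suggests a margin-based triplet loss. The sketch below assumes that loss form and a plain sum for the target loss value; the margin, the distance, and the combination rule are not specified by the embodiment, so all are illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss over feature codes: push the anchor
    closer to the positive code than to the negative one. (One plausible
    loss form; the embodiment does not fix the exact formula.)"""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return float(max(0.0, d_pos - d_neg + margin))

# illustrative target feature codes
acct_first = np.array([1.0, 0.0])    # first target code of the account
acct_second = np.array([0.0, 1.0])   # second target code of the account
pos_video, neg_video = np.array([0.9, 0.1]), np.array([-1.0, 0.0])
pos_acct, neg_acct = np.array([0.1, 0.9]), np.array([0.0, -1.0])
video_code = np.array([0.8, 0.2])    # target code of the video to be counted

first_loss = triplet_loss(acct_first, pos_video, neg_video)   # account vs. videos
second_loss = triplet_loss(acct_second, pos_acct, neg_acct)   # account vs. accounts
third_loss = triplet_loss(video_code, pos_video, neg_video)   # video vs. videos
target_loss = first_loss + second_loss + third_loss           # e.g. a plain sum
```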
In an exemplary embodiment, the graph network constructing module 1020 is further configured to perform obtaining first neighbor video information of an account to be counted, second neighbor video information of positive sample video information, and third neighbor video information of negative sample video information; the account information of the account to be counted is taken as a center node, the first neighbor video information is taken as a neighbor node, and a first sub-graph network is constructed; constructing a second sub-graph network by taking the positive sample video information as a central node and second neighbor video information as neighbor nodes; constructing a third sub-graph network by taking the negative sample video information as a central node and taking the third neighbor video information as a neighbor node; and combining the first sub-graph network, the second sub-graph network and the third sub-graph network to obtain the first graph network.
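The sub-graph construction and combination can be sketched with a simple adjacency-map representation. The node names are hypothetical and the merge is a plain union of neighbor lists; the embodiment does not prescribe a concrete graph data structure:

```python
def build_subgraph(center, neighbors):
    """A sub-graph network: one center node linked to its neighbor nodes."""
    return {center: list(neighbors)}

def combine(*subgraphs):
    """Combine sub-graph networks into one graph network by unioning
    each node's neighbor list."""
    graph = {}
    for sub in subgraphs:
        for node, nbrs in sub.items():
            graph.setdefault(node, []).extend(nbrs)
    return graph

# account information as center node, first neighbor videos as neighbors
first_sub = build_subgraph("account_1", ["video_a", "video_b"])
second_sub = build_subgraph("pos_video", ["video_c"])  # positive sample video
third_sub = build_subgraph("neg_video", ["video_d"])   # negative sample video
first_graph = combine(first_sub, second_sub, third_sub)
```

The second and third graph networks follow the same pattern with account-to-account and video-to-video sub-graphs, respectively.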
In an exemplary embodiment, the graph network building module 1020 is further configured to perform obtaining first neighbor account information of an account to be counted, second neighbor account information of a positive sample account, and third neighbor account information of a negative sample account; the account information of the account to be counted is taken as a center node, the first neighbor account information is taken as a neighbor node, and a fourth sub-graph network is constructed; the account information of the positive sample account is used as a center node, and the second neighbor account information is used as a neighbor node to construct a fifth sub-graph network; constructing a sixth sub-graph network by taking account information of the negative sample account as a center node and taking third neighbor account information as a neighbor node; and combining the fourth sub-graph network, the fifth sub-graph network and the sixth sub-graph network to obtain a second graph network.
In an exemplary embodiment, the graph network constructing module 1020 is further configured to perform obtaining fourth neighboring video information of the video information to be counted; constructing a seventh sub-graph network by taking the video information to be counted as a central node and the fourth neighbor video information as a neighbor node; and combining the second sub-graph network, the third sub-graph network and the seventh sub-graph network to obtain a third graph network.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 11 is a block diagram illustrating a video push device according to an example embodiment. Referring to fig. 11, the apparatus includes a code obtaining module 1110, a feature code determining module 1120, a video information screening module 1130, and a video information pushing module 1140.
The code obtaining module 1110 is configured to obtain, according to the above feature code determining method, an account information feature code corresponding to account information of an account to be counted and a video information feature code corresponding to video information to be counted;
the feature code determining module 1120 is configured to execute acquiring account information of an account to be pushed, and determine an account information feature code corresponding to the account information of the account to be pushed from account information feature codes corresponding to account information of an account to be counted;
the video information screening module 1130 is configured to perform coding according to the account information characteristics and the characteristics of the video information to be counted, and screen out target video information corresponding to an account to be pushed from the video information to be counted;
and a video information pushing module 1140 configured to perform pushing the target video information to the account to be pushed.
In an exemplary embodiment, the feature code determining module 1120 is further configured to perform determining, from the account information of the account to be counted, target account information matching the account information of the account to be pushed; and screening out the account information characteristic code corresponding to the target account information from the account information characteristic code corresponding to the account information of the account to be counted, wherein the account information characteristic code is used as the account information characteristic code corresponding to the account information of the account to be pushed.
In an exemplary embodiment, the video information filtering module 1130 is further configured to perform obtaining a feature similarity between the account information feature code and the video information feature code to be counted; and determining the video information to be counted with the characteristic similarity larger than a preset threshold as target video information corresponding to the account to be pushed.
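The threshold-based screening can be sketched as follows. Cosine similarity is assumed as the feature similarity measure (the embodiment leaves the measure unspecified), and the threshold and feature codes are illustrative:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_videos(account_code, video_codes, threshold=0.5):
    """Determine the videos whose feature similarity with the account
    information feature code exceeds the preset threshold."""
    return [video_id for video_id, code in video_codes.items()
            if cosine_similarity(account_code, code) > threshold]

account_code = np.array([1.0, 0.0])
video_codes = {"video_1": np.array([0.9, 0.1]),   # similar: pushed
               "video_2": np.array([0.0, 1.0])}   # orthogonal: filtered out
target_videos = screen_videos(account_code, video_codes)
```

The surviving videos are then pushed to the account to be pushed.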
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 12 is a block diagram illustrating a computer device according to an example embodiment. The computer device may be a server, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data such as account information feature codes and video information feature codes of the video information to be counted. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement the feature code determining method and the video push method.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an exemplary embodiment, the present disclosure also provides a server, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the corresponding steps and/or flows in the above-mentioned embodiments of the feature encoding determining method and the video pushing method.
In an exemplary embodiment, the present disclosure also provides a storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of a server to perform the above-described method for determining a feature code and the method for pushing a video. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, the present disclosure also provides a computer program product comprising computer program code which, when run by a computer, causes the computer to perform the respective steps and/or flows corresponding to the above-described embodiments of the feature code determining method and the video push method. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for determining feature codes, comprising:
acquiring account information of an account to be counted and video information to be counted; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account, and the video information to be counted is matched with the corresponding positive sample video information and the corresponding negative sample video information;
constructing a first graph network according to the account information of the account to be counted, the positive sample video information and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the account information of the positive sample account and the account information of the negative sample account; constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information;
and obtaining account information feature codes corresponding to the account information of the account to be counted and video information feature codes corresponding to the video information to be counted according to the first graph network, the second graph network and the third graph network.
2. The method according to claim 1, wherein obtaining the account information feature code corresponding to the account information of the account to be counted and the video information feature code corresponding to the video information to be counted according to the first graph network, the second graph network, and the third graph network comprises:
inputting information in the first graph network, the second graph network and the third graph network into a feature code learning model to be trained respectively to obtain a first target feature code of account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted;
obtaining a target loss value according to a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted;
adjusting the network parameters of the feature code learning model to be trained according to the target loss value until the target loss value obtained according to the feature code learning model after network parameter adjustment meets a preset condition;
and if the target loss value obtained according to the feature code learning model after the network parameter adjustment meets the preset condition, splicing the current first target feature code and the second target feature code to obtain an account information feature code corresponding to the account information of the account to be counted, and identifying the current target feature code of the video information to be counted as the video information feature code corresponding to the video information to be counted.
3. The method according to claim 2, wherein the inputting information in the first graph network, the second graph network, and the third graph network into a feature code learning model to be trained respectively to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account, and a target feature code of the video information to be counted comprises:
extracting account information of the account to be counted, first neighbor video information, the positive sample video information, second neighbor video information, the negative sample video information and third neighbor video information from the first graph network; extracting first neighbor account information, account information of the positive sample account, second neighbor account information, account information of the negative sample account and third neighbor account information from the second graph network; extracting the video information to be counted and the fourth neighbor video information from the third graph network;
inputting the account information of the account to be counted, the first neighbor video information, the positive sample video information, the second neighbor video information, the negative sample video information, the third neighbor video information, the first neighbor account information, the account information of the positive sample account, the second neighbor account information, the account information of the negative sample account, the third neighbor account information, the video information to be counted and the fourth neighbor video information into a feature code learning model to be trained respectively to obtain a first target feature code of the account information of the account to be counted, a target feature code of the positive sample video information, a target feature code of the negative sample video information, a second target feature code of the account information of the account to be counted, a target feature code of the account information of the positive sample account, a target feature code of the account information of the negative sample account and a target feature code of the video information to be counted.
4. The method according to claim 3, wherein the inputting the account information of the account to be counted, the first neighbor video information, the positive sample video information, the second neighbor video information, the negative sample video information, the third neighbor video information, the first neighbor account information, the account information of the positive sample account, the second neighbor account information, the account information of the negative sample account, the third neighbor account information, the video information to be counted and the fourth neighbor video information into the feature code learning model to be trained respectively to obtain the first target feature code of the account information of the account to be counted, the target feature code of the positive sample video information, the target feature code of the negative sample video information, the second target feature code of the account information of the account to be counted, the target feature code of the account information of the positive sample account, the target feature code of the account information of the negative sample account and the target feature code of the video information to be counted comprises:
obtaining, through the feature code learning model to be trained, the initial feature code of the account information of the account to be counted, the initial feature code of the first neighbor video information, the initial feature code of the positive sample video information, the initial feature code of the second neighbor video information, the initial feature code of the negative sample video information, the initial feature code of the third neighbor video information, the initial feature code of the first neighbor account information, the initial feature code of the account information of the positive sample account, the initial feature code of the second neighbor account information, the initial feature code of the account information of the negative sample account, the initial feature code of the third neighbor account information, the initial feature code of the video information to be counted and the initial feature code of the fourth neighbor video information;
obtaining a first target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor video information; obtaining a target feature code of the positive sample video information according to the initial feature code of the positive sample video information and the initial feature code of the second neighbor video information; obtaining target feature codes of the negative sample video information according to the initial feature codes of the negative sample video information and the initial feature codes of the third neighbor video information;
obtaining a second target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor account information; obtaining a target feature code of the account information of the positive sample account according to the initial feature code of the account information of the positive sample account and the initial feature code of the second neighbor account information; obtaining a target feature code of the negative sample account information according to the initial feature code of the negative sample account information and the initial feature code of the third neighbor account information;
and obtaining the target feature code of the video information to be counted according to the initial feature code of the video information to be counted and the initial feature code of the fourth neighboring video information.
5. The method according to claim 4, wherein obtaining the first target feature code of the account information of the account to be counted according to the initial feature code of the account information of the account to be counted and the initial feature code of the first neighbor video information comprises:
aggregating the initial feature codes of the first neighbor video information to obtain aggregated feature codes;
and splicing the feature code after the aggregation processing and the initial feature code of the account information of the account to be counted to obtain a first target feature code of the account information of the account to be counted.
6. A video push method, comprising:
the method according to any one of claims 1 to 5, wherein account information feature codes corresponding to account information of the account to be counted and video information feature codes corresponding to video information to be counted are obtained;
acquiring account information of an account to be pushed, and determining account information characteristic codes corresponding to the account information of the account to be pushed from account information characteristic codes corresponding to the account information of the account to be counted;
screening out target video information corresponding to the account to be pushed from the video information to be counted according to the account information feature code and the video information feature code of the video information to be counted;
and pushing the target video information to the account to be pushed.
7. An apparatus for determining feature codes, comprising:
the information acquisition module is configured to execute the acquisition of account information of an account to be counted and video information to be counted; the account information of the account to be counted is matched with the account information of the corresponding positive sample account and the account information of the corresponding negative sample account; the video information to be counted is matched with corresponding positive sample video information and negative sample video information;
the graph network construction module is configured to execute construction of a first graph network according to the account information of the account to be counted, the positive sample video information and the negative sample video information; constructing a second graph network according to the account information of the account to be counted, the account information of the positive sample account and the account information of the negative sample account; constructing a third graph network according to the video information to be counted, the positive sample video information and the negative sample video information;
and the feature code acquisition module is configured to execute the processing according to the first graph network, the second graph network and the third graph network to obtain account information feature codes corresponding to the account information of the account to be counted and video information feature codes corresponding to the video information to be counted.
8. A video push apparatus, comprising:
the code acquisition module is configured to acquire, according to the feature code determining method of any one of claims 1 to 5, an account information feature code corresponding to the account information of the account to be counted and a video information feature code corresponding to the video information to be counted;
the characteristic code determining module is configured to execute the acquisition of account information of an account to be pushed, and determine an account information characteristic code corresponding to the account information of the account to be pushed from account information characteristic codes corresponding to account information of the account to be counted;
and the video information screening module is configured to screen out target video information corresponding to the account to be pushed from the video information to be counted according to the account information feature code and the video information feature code of the video information to be counted.
9. A server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 6.
10. A storage medium in which instructions, when executed by a processor of a server, enable the server to perform the method of any one of claims 1 to 6.
CN202010140088.2A 2020-03-03 2020-03-03 Characteristic code determining method, device, server and storage medium Active CN113365115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010140088.2A CN113365115B (en) 2020-03-03 2020-03-03 Characteristic code determining method, device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010140088.2A CN113365115B (en) 2020-03-03 2020-03-03 Characteristic code determining method, device, server and storage medium

Publications (2)

Publication Number Publication Date
CN113365115A true CN113365115A (en) 2021-09-07
CN113365115B CN113365115B (en) 2022-11-04

Family

ID=77523140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010140088.2A Active CN113365115B (en) 2020-03-03 2020-03-03 Characteristic code determining method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN113365115B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180189228A1 (en) * 2017-01-04 2018-07-05 Qualcomm Incorporated Guided machine-learning training using a third party cloud-based system
CN108304441A (en) * 2017-11-14 2018-07-20 腾讯科技(深圳)有限公司 Network resource recommended method, device, electronic equipment, server and storage medium
CN109543066A (en) * 2018-10-31 2019-03-29 北京达佳互联信息技术有限公司 Video recommendation method, device and computer readable storage medium
CN109872242A (en) * 2019-01-30 2019-06-11 北京字节跳动网络技术有限公司 Information-pushing method and device
CN110162701A (en) * 2019-05-10 2019-08-23 腾讯科技(深圳)有限公司 Content delivery method, device, computer equipment and storage medium
CN110532469A (en) * 2019-08-26 2019-12-03 上海喜马拉雅科技有限公司 A kind of information recommendation method, device, equipment and storage medium
CN110557659A (en) * 2019-08-08 2019-12-10 北京达佳互联信息技术有限公司 Video recommendation method and device, server and storage medium
CN110717069A (en) * 2018-07-11 2020-01-21 北京优酷科技有限公司 Video recommendation method and device

Also Published As

Publication number Publication date
CN113365115B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
WO2020221278A1 (en) Video classification method and model training method and apparatus thereof, and electronic device
Hoiles et al. Engagement and popularity dynamics of YouTube videos and sensitivity to meta-data
US20200372369A1 (en) System and method for machine learning architecture for partially-observed multimodal data
CN110909205B (en) Video cover determination method and device, electronic equipment and readable storage medium
CN111444966B (en) Media information classification method and device
CN110856037B (en) Video cover determination method and device, electronic equipment and readable storage medium
CN109961080B (en) Terminal identification method and device
EP3923182A1 (en) Method for identifying a video frame of interest in a video sequence, method for generating highlights, associated systems
CN110166826B (en) Video scene recognition method and device, storage medium and computer equipment
US20220172476A1 (en) Video similarity detection method, apparatus, and device
CN111050193A (en) User portrait construction method and device, computer equipment and storage medium
CN112364204A (en) Video searching method and device, computer equipment and storage medium
CN112001274A (en) Crowd density determination method, device, storage medium and processor
CN112817563B (en) Target attribute configuration information determining method, computer device, and storage medium
CN111340112A (en) Classification method, classification device and server
CN111414842B (en) Video comparison method and device, computer equipment and storage medium
Hoiles et al. Engagement dynamics and sensitivity analysis of YouTube videos
CN111597361B (en) Multimedia data processing method, device, storage medium and equipment
CN111310516A (en) Behavior identification method and device
CN113365115B (en) Characteristic code determining method, device, server and storage medium
CN113297417B (en) Video pushing method, device, electronic equipment and storage medium
CN115017362A (en) Data processing method, electronic device and storage medium
CN115482500A (en) Crowd counting method and device based on confidence probability
CN112668504A (en) Action recognition method and device and electronic equipment
CN115203471B (en) Attention mechanism-based multimode fusion video recommendation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant