CN113032625B - Video sharing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113032625B
CN113032625B (application CN202110319469.1A)
Authority
CN
China
Prior art keywords
information
user
video
friend
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110319469.1A
Other languages
Chinese (zh)
Other versions
CN113032625A (en)
Inventor
陈昊 (Chen Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110319469.1A
Publication of CN113032625A
Application granted
Publication of CN113032625B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a video sharing method, a video sharing apparatus, a computer device, and a storage medium. In response to a user's friend-sharing request for a target video, at least one friend of the user is determined; association information between the user and each friend is determined, the association information indicating how much the friend likes the user to forward the target video to them; based on the association information between the user and each friend, the ordering among the at least one friend is adjusted to generate and display a friend list; and the target video is shared to a target friend according to the user's selection of the target friend in the friend list. The method and apparatus display the user's friend list in an order that reflects how much each friend likes the videos the user forwards, encouraging users to share more videos and increasing the social activity of the videos.

Description

Video sharing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of video sharing technologies, and in particular, to a video sharing method, apparatus, computer device, and storage medium.
Background
When users encounter videos they like, they enjoy sharing them with like-minded friends. In the existing video sharing mode, after a user clicks the friend-share button, the user's friend list is displayed in order of most recent interaction (chat order), and the user manually selects from that list the friends with whom to share the video.
This way of ordering friends purely by the user's social interactions when generating and displaying the friend list for sharing conditions users to share videos only within their habitual friend relationships; it neither encourages users to share more videos nor increases the social activity of the videos.
Disclosure of Invention
In view of the above, to solve these problems, the present invention provides a video sharing method, apparatus, computer device, and storage medium that display the friend list in an order combining each friend's degree of liking for the videos the user forwards, thereby encouraging users to share more videos and increasing the social activity of the videos. The technical scheme is as follows:
a video sharing method, comprising:
responding to a friend sharing request of a user on a target video, and determining at least one friend of the user;
determining association information of the user and each friend respectively, wherein the association information of the user and the friends indicates the preference degree of the friends for the user to forward the target video;
based on the association information of the user and each friend, adjusting the ordering sequence among the at least one friend to generate and display a friend list;
And sharing the target video to the target friends according to the selection operation of the user on the target friends in the friend list.
A video sharing apparatus, comprising:
the friend acquisition unit is used for responding to a friend sharing request of a user on a target video and determining at least one friend of the user;
the association information determining unit is used for determining association information of the user and each friend respectively, and the association information of the user and the friends indicates the preference degree of the friends for forwarding the target video to the user;
the sequence adjusting unit is used for adjusting the sequencing sequence among the at least one friend to generate and display a friend list based on the association information of the user and each friend;
and the video sharing unit is used for sharing the target video to the target friends according to the selection operation of the user on the target friends in the friend list.
Preferably, the association information determining unit for determining association information of the user and the friend includes:
the characteristic information determining unit is used for determining first characteristic information, second characteristic information and third characteristic information, wherein the first characteristic information represents video preference of the user, the second characteristic information represents video preference of the friend, and the third characteristic information represents the user who prefers the target video;
The first information determining unit is used for determining first information representing the similarity degree of video preference of the user and the friends according to the first characteristic information and the second characteristic information;
the second information determining unit is used for determining second information representing the favorite degree of the friend on the target video by utilizing the second characteristic information and the third characteristic information;
and the video association information determining subunit is used for determining association information of the user and the friend by combining the first information and the second information.
Preferably, the feature information determining unit includes:
the matrix determining unit is used for generating a user video behavior matrix according to the user video playing behaviors, wherein the user video behavior matrix indicates the playing behaviors of each user in the whole network to each video in the whole network respectively;
the discrete learning unit is used for analyzing the discrete user video behaviors in the user video behavior matrix to generate characteristic information of the whole network users and videos;
and the characteristic information query unit is used for querying the first characteristic information of the user, the second characteristic information of the friends and the third characteristic information of the target video from the characteristic information of the whole network user and the video.
Preferably, the discrete learning unit includes:
the matrix decomposition unit is used for multiplying and fitting the user video behavior matrix by using the user embedding matrix and the video embedding matrix;
the optimizing unit is used for training the user embedding matrix and the video embedding matrix by adopting a gradient descent algorithm to obtain an optimized user embedding matrix and an optimized video embedding matrix; the optimized user embedding matrix and the optimized video embedding matrix form characteristic information of the whole network user and the video.
Preferably, the first information determining unit includes:
a first calculation unit configured to determine first similarity information characterizing a similarity between the first feature information and the second feature information;
and the first information determining subunit is used for inputting the first characteristic information, the second characteristic information and the first similarity information into a pre-trained first model to obtain first information representing the similarity degree of video preference of the user and the friend.
Preferably, the second information determining unit includes:
a second calculation unit configured to determine second similarity information characterizing a similarity between the second feature information and the third feature information;
And the second information determining subunit is used for inputting the second characteristic information, the third characteristic information and the second similarity information into a pre-trained second model to obtain second information representing the favorite degree of the friend on the target video.
Preferably, the video association information determining subunit includes:
a weight determining unit configured to determine a weight of the first information and a weight of the second information;
a target information calculation unit configured to generate target information by combining the first information, the weight of the first information, the second information, and the weight of the second information;
and the information generating unit is used for inputting the target information into a pre-trained third model to obtain the associated information of the user and the friend.
Preferably, the weight determining unit includes:
an initial weight determining unit configured to determine an initial weight of the first information and an initial weight of the second information based on an attention mechanism module;
the normalization processing unit is used for performing normalization processing on the initial weight of the first information and the initial weight of the second information to obtain the weight of the first information and the weight of the second information.
Further, a video sharing prediction model generating unit is configured to generate the video sharing prediction model, where the video sharing prediction model is composed of the first model, the second model, the attention mechanism module, and the third model, and the model generating unit includes:
the training sample acquisition unit is used for acquiring training samples, wherein the training samples indicate user samples, friend samples and video samples;
the prediction unit is used for determining a prediction result of the video sharing prediction model to be trained on the association information of the user sample and the friend sample, wherein the prediction result indicates the preference degree of the friend sample to the user sample for forwarding the video sample;
the model generation subunit is used for reversely adjusting parameters in the video sharing prediction model by taking a standard result of the prediction result approaching to the training sample as a training target so as to generate a video sharing prediction model; the standard result of the training sample characterizes the real forwarding state of the video sample to the friend sample by the user sample.
A computer device, comprising: the device comprises a processor and a memory, wherein the processor and the memory are connected through a communication bus; the processor is used for calling and executing the program stored in the memory; the memory is used for storing a program, and the program is used for realizing the video sharing method.
A computer readable storage medium having stored thereon a computer program, the computer program being loaded and executed by a processor, implementing the steps of the video sharing method.
The application provides a video sharing method, a video sharing apparatus, a computer device, and a storage medium. After responding to a user's friend-sharing request for a target video and determining at least one friend of the user, the at least one friend is displayed not in order of most recent interaction but according to the association information between the user and each friend, where the association information indicates how much the friend likes the user to forward the target video to them. Users are therefore not conditioned to share videos only within their habitual friend relationships; they are encouraged to share more videos, which increases the social activity of the videos.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video sharing method according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for determining video association information between a user and a friend according to an embodiment of the present application;
fig. 3 is a flowchart of a method for generating a deep learning model involved in a video sharing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video sharing device according to an embodiment of the present application;
fig. 5 is a block diagram of a hardware structure of a computer device to which the video sharing method according to the embodiment of the present application is applicable.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is thus the study of the design principles and implementation methods of various intelligent machines, giving machines the abilities of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
With the research and advancement of artificial intelligence technology, it is being studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart healthcare, and smart customer service. It is believed that, as technology develops, artificial intelligence will be applied in still more fields with ever-increasing value.
The scheme provided by the embodiment of the application relates to artificial intelligence machine learning and other technologies, and is specifically described through the following embodiment.
In addition, in the embodiment of the application, the content related to information storage in the video sharing method can be realized in a blockchain mode.
Blockchains are novel application modes of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, encryption algorithms, and the like. The Blockchain (Blockchain), which is essentially a decentralised database, is a string of data blocks that are generated by cryptographic means in association, each data block containing a batch of information of network transactions for verifying the validity of the information (anti-counterfeiting) and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform may include processing modules for user management, basic services, smart contracts, and operation monitoring. The user management module is responsible for the identity management of all blockchain participants, including maintaining public/private key generation (account management), key management, and the mapping between users' real identities and their blockchain addresses (permission management); with authorization, it can supervise and audit the transactions of certain real identities and provide risk-control rule configuration (risk-control audit). The basic service module is deployed on all blockchain node devices and verifies the validity of service requests; after a valid request is confirmed by consensus, it is recorded in storage. For a new service request, the basic service first performs interface adaptation, parsing, and authentication, encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication), and records and stores it. The smart contract module is responsible for contract registration and issuance, contract triggering, and contract execution; a developer can define contract logic in a programming language, publish it to the blockchain (contract registration), and have the logic executed when triggered by a key or other event according to the contract terms, completing the contract logic; the module also provides a contract upgrade registration function. The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, cloud adaptation, and visual output of real-time product status during operation, for example: alarms, monitoring network conditions, and monitoring node-device health status.
The platform product service layer provides the basic capabilities and implementation frameworks of typical applications; developers can complete the blockchain implementation of their business logic by building on these basic capabilities and superimposing their business characteristics. The application service layer provides blockchain-based application services for business participants to use.
Based on this common background, the video sharing method provided by the embodiments of the present application is described in detail below.
When users encounter videos they like, they enjoy sharing them with like-minded friends. Existing sharing modes, however, require the user to manually select recipients from a candidate list ordered by most recent interaction (chat order). Because this ordering reflects only the user's social interactions, the candidate user list at sharing time (also referred to as the friend list) cannot be filtered and sorted by how much each friend likes the video, so users end up sharing only within their habitual circle of friends.
With the video sharing method provided by the application, when a user wants to share a video, the method intelligently predicts how much each user in the friend list likes the video, sorts the friends by that degree, and places the friends who like the video most at the front, so that users share more videos and the social activity of the videos increases.
The video may be, for example, a short video. The foregoing is merely a preferred expression of a video to be shared provided in the embodiments of the present application, and the specific expression of the video may be set by those skilled in the art according to their own needs, which is not limited herein.
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description.
Fig. 1 is a flowchart of a video sharing method according to an embodiment of the present application.
As shown in fig. 1, the method includes:
s101, responding to a friend sharing request of a user to a target video, and determining at least one friend of the user;
the video sharing method provided by the embodiment of the application is applied to the first application, and a user can execute friend sharing operation on the target video in the first application.
Correspondingly, the first application responds to a friend sharing request of a user for the target video, determines a second application to which the user requests to share the target video, and determines all friends of the user in the second application, wherein all friends determined here can be considered as at least one friend of the user determined in response to the friend sharing request of the user for the target video.
The first application and the second application may or may not be the same application, and are not limited herein.
S102, determining association information of the user and each friend respectively, wherein the association information of the user and the friends indicates the favorite degree of the friends for the user to forward the target video;
fig. 2 is a flowchart of a method for determining association information between a user and a friend according to an embodiment of the present application. As shown in fig. 2, the method includes:
s201, determining first feature information, second feature information and third feature information, wherein the first feature information represents video preference of a user, the second feature information represents video preference of friends, and the third feature information represents a user who prefers a target video;
according to the embodiment of the application, from the feature information of the whole network user and the video, the first feature information representing the video preference of the user, the second feature information representing the video preference of friends and the third feature information representing the user who prefers the target video are inquired.
The generation process of the feature information of the whole network user and the video can be as follows: generating a user video behavior matrix according to the user video playing behavior, wherein the user video behavior matrix indicates the playing behavior of each user in the whole network to each video in the whole network respectively; and analyzing the discrete user video behaviors in the user video behavior matrix to generate characteristic information of the whole network user and the video.
The method for analyzing the discrete user video behaviors in the user video behavior matrix to generate the characteristic information of the whole network user and the video comprises the following steps: multiplying the user embedding matrix and the video embedding matrix to fit a user video behavior matrix; training the user embedded matrix and the video embedded matrix by adopting a gradient descent algorithm to obtain an optimized user embedded matrix and an optimized video embedded matrix; the optimized user embedding matrix and the optimized video embedding matrix form characteristic information of the whole network user and the video.
By way of example, R may be defined as the user-to-video behavior matrix (also referred to as the user video behavior matrix), with dimension (N, M), where N is the number of users and M is the number of videos. If the i-th user and the j-th video have interacted, R_{i,j} = 1; otherwise R_{i,j} = 0. R is defined from users' actual playback behavior on videos.
If the i-th user watches through the target video frame of the j-th video, the i-th user and the j-th video are considered to have interacted; otherwise they are not. Here the target video frame is the frame at the 90% position of the j-th video's frame sequence. This is merely a preferred choice; the specific target video frame can be set by those skilled in the art according to their own needs and is not limited here. For example, the target video frame may instead be the frame at the 95% position of the j-th video's frame sequence.
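As an illustrative sketch (not part of the patent text), the interaction rule above could be implemented as follows; the log format, function name, and the 90% completion threshold are assumptions for illustration:

```python
import numpy as np

def build_behavior_matrix(play_logs, num_users, num_videos, threshold=0.9):
    """Build the user video behavior matrix R with dimension (N, M).

    play_logs: iterable of (user_idx, video_idx, fraction_watched).
    R[i, j] = 1 if user i played video j past the target video frame
    (the frame at the `threshold` position of the frame sequence), else 0.
    """
    R = np.zeros((num_users, num_videos), dtype=np.float32)
    for i, j, fraction in play_logs:
        if fraction >= threshold:
            R[i, j] = 1.0
    return R

# Two users, three videos: user 0 watched 95% of video 1 but only 40% of video 2.
logs = [(0, 1, 0.95), (0, 2, 0.40), (1, 1, 1.00)]
R = build_behavior_matrix(logs, num_users=2, num_videos=3)
```

Setting `threshold=0.95` realizes the alternative 95%-position target video frame mentioned above.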
In the embodiment of the application, vectorized representations of all users and videos on the network are obtained by learning from the discrete user video behaviors. A detailed description is given here of how user and video embeddings are obtained through matrix factorization.
Matrix factorization fits the user video behavior matrix R as the product of two matrices: R ≈ P × Q^T, where the user embedding matrix P has dimension (N, K) and the video embedding matrix Q has dimension (M, K). The adopted loss function is L = ||R − P·Q^T||_F^2, where ||·||_F denotes the Frobenius norm of a matrix. The embodiment of the application trains the user embedding matrix and the video embedding matrix of the factorization with a gradient descent algorithm to obtain the optimized user embedding matrix and the optimized video embedding matrix. The gradient descent updates are:
P ← P − α·∂L/∂P = P + 2α(R − P·Q^T)·Q
Q ← Q − α·∂L/∂Q = Q + 2α(R − P·Q^T)^T·P
where α is the learning rate of the matrix factorization, typically between 10^-1 and 10^-5. After the optimized P and Q are obtained, and since they have the same number of columns K, P and Q can be spliced together to form E. The final E is represented as follows:
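A minimal sketch of the matrix factorization described above, assuming full-batch gradient descent on the Frobenius-norm loss (the hyperparameter values are illustrative, not from the patent):

```python
import numpy as np

def factorize(R, K=8, lr=0.05, steps=3000, seed=0):
    """Fit R ≈ P @ Q.T, where P is the (N, K) user embedding matrix and
    Q is the (M, K) video embedding matrix, by gradient descent on
    L = ||R - P @ Q.T||_F^2."""
    rng = np.random.default_rng(seed)
    N, M = R.shape
    P = rng.normal(scale=0.1, size=(N, K))
    Q = rng.normal(scale=0.1, size=(M, K))
    for _ in range(steps):
        res = R - P @ Q.T          # residual of the fit
        grad_P = -2.0 * res @ Q    # dL/dP
        grad_Q = -2.0 * res.T @ P  # dL/dQ
        P -= lr * grad_P
        Q -= lr * grad_Q
    return P, Q

R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
P, Q = factorize(R, K=2)
# np.linalg.norm(R - P @ Q.T) shrinks toward 0 as training proceeds.
```

In production one would instead use stochastic or mini-batch updates and regularization over the whole-network matrix, but the update rule is the same as the gradient formulas above.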
E = [e_{u1}, …, e_{uN}, e_{i1}, …, e_{iM}]
where e_{u1} through e_{uN} are the hidden variables of users 1 through N, a user's hidden variable characterizing that user's video preferences; and e_{i1} through e_{iM} are the hidden variables of videos 1 through M, a video's hidden variable characterizing the users who prefer that video.
E is a user and video hidden variable matrix: the features in rows 1 to N are features of users, the features in rows N+1 to N+M are features of videos, so the dimension of E is (M+N, K). Similar to matrix decomposition and shallow graph embedding, a heterogeneous graph neural network embedding method could likewise be used, which also obtains the representation of each user and each video through training.
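The matrix-factorization procedure above can be sketched as follows. This is a minimal illustration under assumptions, not the patent's production implementation: the toy matrix R, the learning rate, the iteration count, and the latent dimension are all made up.

```python
import numpy as np

N, M, K = 4, 3, 2                       # users, videos, latent dimension (toy values)
# Toy binary user-video behavior matrix R (made up for illustration).
R = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [1., 1., 0.],
              [0., 0., 1.]])

rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((N, K))   # user embedding matrix, dimension (N, K)
Q = 0.1 * rng.standard_normal((M, K))   # video embedding matrix, dimension (M, K)
alpha = 0.05                            # learning rate

for _ in range(500):
    err = R - P @ Q.T                   # residual of the fit R ~ P Q^T
    P += alpha * err @ Q                # gradient descent step for P
    Q += alpha * err.T @ P              # gradient descent step for Q

# Splice P and Q row-wise: rows 0..N-1 are users, rows N..N+M-1 are videos.
E = np.vstack([P, Q])
loss = np.linalg.norm(R - P @ Q.T) ** 2  # squared Frobenius-norm loss
assert E.shape == (N + M, K)
```

The final loss is well below the initial value ‖R‖_F² reached when P and Q are near zero, showing that the two small factors have absorbed the behavior signal.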
For example, E may be updated once per preset time interval according to the current user video behavior matrix; the preset time interval may be one month, half a month, one day, etc. The specific value of the preset time interval can be set by a person skilled in the art according to his own needs, and is not limited herein.
In the embodiment of the application, E is a user and video hidden variable matrix, which may be called the user video embedding matrix; it carries the characteristic information of full-network users and videos. The hidden variable of the user queried in E may be called the first characteristic information of the user, the hidden variable of a friend queried in E may be called the second characteristic information of the friend, and the hidden variable of the target video queried in E may be called the third characteristic information of the target video.
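The three queries above amount to row lookups in E. The sketch below assumes the row layout stated earlier (users in the first N rows, videos in the following M rows); the helper names and toy values are illustrative only.

```python
import numpy as np

N, M, K = 4, 3, 2
# Toy embedding matrix with recognizable values, purely for illustration.
E = np.arange((N + M) * K, dtype=float).reshape(N + M, K)

def user_embedding(E, user_idx):
    # First characteristic information (the user) or second (a friend):
    # a row in the user block of E.
    return E[user_idx]

def video_embedding(E, video_idx, n_users=N):
    # Third characteristic information (the target video): a row in the
    # video block, offset by the number of users.
    return E[n_users + video_idx]

e_user = user_embedding(E, 0)    # hidden variable of user 0
e_video = video_embedding(E, 1)  # hidden variable of video 1 (row N + 1 of E)
```

With this layout, user 0 maps to row 0 and video 1 maps to row N + 1 = 5 of E.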
S202, determining first information representing the similarity degree of video preference of a user and friends according to the first characteristic information and the second characteristic information;
For the target video to be forwarded by the user, forwarding prediction is performed for every friend among the at least one friend of the user, and the friends are ranked by their preference for the target video. Because forwarding prediction differs from traditional prediction, the embodiment of the application needs to consider the interests of the user, the information of the target video which the user wants to forward, and the information of the friends. Thus, for each triple (user a, video j, friend b) to be forwarded, the following deep learning model may be used to predict how much friend b would like user a to forward video j; this degree may be referred to as the association information of user a and friend b.
For example, the deep learning model may be a video sharing prediction model, which is composed of a first model, a second model, an attention mechanism module, and a third model.
Based on the generated user video embedding matrix E, the hidden variable of user a (which may be referred to as the feature vector e_a of user a), the hidden variable of video j (which may be referred to as the feature vector e_j of video j), and the hidden variable of user b (which may be referred to as the feature vector e_b of user b) can be determined. In forwarding prediction, both the similarity between user a and user b and the preference degree of user b for video j need to be considered.
The similarity of user a and user b (which may characterize the degree of similarity of video preferences of user a and user b) may be modeled with the following model:
s_1 = MLP_1(e_a ⊙ e_b ‖ e_a · e_b ‖ e_a ‖ e_b), wherein ‖ denotes concatenation, s_1 is the first information, MLP_1 is the first model, e_a ⊙ e_b characterizes the first similarity (element-wise product) between the feature vector e_a of user a and the feature vector e_b of user b, and e_a · e_b characterizes the second similarity (dot product) between them. The first similarity information is composed of the first similarity and the second similarity between the feature vector e_a of user a and the feature vector e_b of user b.
User b's preference for video j can be modeled with the following model:
s_2 = MLP_2(e_b ⊙ e_j ‖ e_b · e_j ‖ e_b ‖ e_j), wherein s_1 and s_2 are two vectors of the same length, s_2 is the second information, MLP_2 is the second model, e_b ⊙ e_j characterizes the first similarity between the feature vector e_b of user b and the feature vector e_j of video j, and e_b · e_j characterizes the second similarity between them. The second similarity information is composed of the first similarity and the second similarity between the feature vector e_b of user b and the feature vector e_j of video j.
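The two similarity models above can be sketched in numpy as follows. The patent does not fix the depth or width of MLP_1/MLP_2, so the one-hidden-layer network, the ReLU activation, and all dimensions below are assumptions; only the concatenated feature construction [u ⊙ v ‖ u · v ‖ u ‖ v] follows the formulas in the text.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """A minimal one-hidden-layer MLP (illustrative architecture)."""
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    return W2 @ h + b2

def pair_features(u, v):
    """Concatenate element-wise product, dot product, and both raw vectors."""
    return np.concatenate([u * v, [u @ v], u, v])

rng = np.random.default_rng(1)
K, H, D = 4, 8, 3                      # embedding dim, hidden width, output length
e_a, e_b, e_j = rng.standard_normal((3, K))  # toy feature vectors from E

F = 3 * K + 1                          # feature length: u*v (K) + dot (1) + u (K) + v (K)
W1, b1 = rng.standard_normal((H, F)), np.zeros(H)
W2, b2 = rng.standard_normal((D, H)), np.zeros(D)

s1 = mlp(pair_features(e_a, e_b), W1, b1, W2, b2)  # user-user similarity term
s2 = mlp(pair_features(e_b, e_j), W1, b1, W2, b2)  # user-video preference term
```

As the text requires, s_1 and s_2 come out as two vectors of the same length D.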
S203, determining second information representing the favorites of friends on the target video by using the second characteristic information and the third characteristic information;
S204, combining the first information and the second information to determine the association information of the user and the friend.
After modeling the two terms, an attention mechanism can be used to flexibly decide whether to pay more attention to the similarity of user a and user b or to the preference of user b for video j. The attention mechanism assigns different weights to s_1 and s_2, as shown in the following formulas:
α_1 = σ(a · s_1)
α_2 = σ(a · s_2)
wherein α_1 is the initial weight of the first information s_1, α_2 is the initial weight of the second information s_2, a denotes the attention parameter vector, and σ denotes the sigmoid function.
The weights of the attention mechanism can then be normalized: α′_1 = α_1/(α_1 + α_2), α′_2 = α_2/(α_1 + α_2), wherein α′_1 is the weight of the first information s_1 and α′_2 is the weight of the second information s_2. Then, according to the weight α′_1 of s_1 and the weight α′_2 of s_2, a weighted sum of s_1 and s_2 is calculated to obtain the target information α′_1 s_1 + α′_2 s_2, and the target information is input into a third model MLP_3 to obtain the prediction result ŷ = MLP_3(α′_1 s_1 + α′_2 s_2), wherein ŷ can be regarded as the association information of user a and user b.
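The attention weighting and final prediction can be sketched as follows. Assumptions in this sketch: σ is taken to be the sigmoid function, a is a learned attention parameter vector, MLP_3 is reduced to a single sigmoid output layer, and all values are toy data.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
D = 3
s1, s2 = rng.standard_normal((2, D))   # stand-ins for the outputs of MLP_1 and MLP_2
a = rng.standard_normal(D)             # attention parameter vector (learned in practice)

alpha1 = sigmoid(a @ s1)               # initial weight of the first information s1
alpha2 = sigmoid(a @ s2)               # initial weight of the second information s2
w1 = alpha1 / (alpha1 + alpha2)        # normalized weight of s1
w2 = alpha2 / (alpha1 + alpha2)        # normalized weight of s2

target = w1 * s1 + w2 * s2             # target information, the weighted sum
W3, b3 = rng.standard_normal((1, D)), np.zeros(1)
y_hat = sigmoid(W3 @ target + b3)[0]   # prediction: friend b's liking of the forward
```

The normalization guarantees the two weights sum to 1, and the sigmoid output keeps the predicted preference strictly inside (0, 1).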
S103, based on the association information of the user and each friend, adjusting the ordering sequence among at least one friend to generate and display a friend list;
in the embodiment of the application, after the at least one friend of the user is determined in response to the friend sharing request of the user on the target video, the association information of the user and each friend among the at least one friend can be determined; the ordering sequence among the friends is then adjusted based on that association information to generate a friend list, and the generated friend list can be displayed.
For example, taking one friend as an example, the association information of the user and the friend indicates how much the friend likes the target video forwarded by the user: the higher this degree, the earlier the friend ranks in the friend list. The preference degree of the friend for the target video forwarded by the user is related to first information and second information, wherein the first information represents the degree of similarity between the video preferences of the user and the friend, and the second information represents the preference degree of the friend for the target video.
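The ranking adjustment above reduces to sorting friends by their predicted association score. The friend names and scores below are made up for illustration:

```python
# Hypothetical association scores: each friend's predicted liking of the
# target video forwarded by the user (higher = ranked earlier).
scores = {"friend_a": 0.31, "friend_b": 0.92, "friend_c": 0.55}

# Generate the friend list in descending score order.
friend_list = sorted(scores, key=scores.get, reverse=True)
```

Here friend_b, with the highest predicted preference, is displayed first in the generated friend list.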
S104, sharing the target video to the target friends according to the selection operation of the user on the target friends in the friend list.
According to the method and the device for sharing the target video, after the friend list is displayed to the user in response to the friend sharing request of the user on the target video, the target video can be shared with the target friend in response to the selection operation of the user on the target friend in the friend list.
The application provides a video sharing method that, in response to a friend sharing request of a user on a target video, after determining at least one friend of the user, does not display the at least one friend in most-recent-interaction order, but rather displays the at least one friend according to the association information of the user and each friend. Since the association information of the user and a friend indicates the degree to which the friend likes the target video forwarded by the user, the user is not confined to sharing videos only within inherent, habitual friend relationships; the user is thus prompted to share more videos, and the social activity around videos is increased.
Based on the above detailed description of the video sharing method provided by the embodiment of the present application, a process of generating a deep learning model (video sharing prediction model) related to the video sharing method will be further described with reference to fig. 3. The video sharing prediction model is illustratively composed of a first model, a second model, an attention mechanism module, and a third model. As shown in fig. 3, the method includes:
S301, acquiring training samples, wherein the training samples indicate user samples, friend samples and video samples;
S302, determining a prediction result of the video sharing prediction model to be trained on the association information of the user sample and the friend sample, wherein the prediction result indicates the preference degree of the friend sample for the user sample forwarding the video sample;
S303, reversely adjusting parameters in the video sharing prediction model by taking the prediction result approaching the standard result of the training sample as a training target, so as to generate the video sharing prediction model; the standard result of the training sample characterizes the real forwarding state of the video sample to the friend sample by the user sample.
By way of example, the probability that a user sample forwards a video sample to a friend sample can be predicted, and this forwarding prediction can be labeled by whether the user sample actually forwarded the video sample to the friend sample (no privacy concerns are involved; all data is desensitized), i.e., the label y_{a,b,j} is obtained. Thus, by calculating the error between y_{a,b,j} and the prediction ŷ, all parameters in the video sharing prediction model to be trained can be trained by automatic error back propagation to generate the video sharing prediction model.
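The training signal above can be sketched as follows. The patent does not name a specific loss function, so binary cross-entropy is used here as one natural assumed choice for comparing the real forwarding labels with the model's predictions; all values are made up.

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-9):
    """Binary cross-entropy between forwarding labels and predictions
    (an assumed loss; the patent only requires the prediction to approach
    the standard result)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0])     # real (desensitized) forwarding states y_{a,b,j}
y_pred = np.array([0.9, 0.2, 0.7])     # model outputs y_hat for the three samples
loss = bce_loss(y_true, y_pred)
```

In a full training loop, the gradient of this loss would be back-propagated through MLP_3, the attention module, MLP_2, and MLP_1 to adjust all parameters jointly.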
In combination with the above commonalities, a video sharing device provided in an embodiment of the present application is described in detail below, and refer to fig. 4 specifically. As shown in fig. 4, the apparatus includes:
The friend obtaining unit 401 is configured to determine at least one friend of the user in response to a friend sharing request of the user on the target video;
the association information determining unit 402 is configured to determine association information of the user and each friend, where the association information of the user and the friends indicates a preference degree of the friends for forwarding the target video to the user;
a sequence adjustment unit 403, configured to adjust a ranking sequence between at least one friend to generate and display a friend list based on association information of the user with each friend;
the video sharing unit 404 is configured to share the target video with the target friends according to the selection operation of the user on the target friends in the friend list.
In the embodiment of the present application, preferably, the association information determining unit for determining association information of the user and the friend includes:
the characteristic information determining unit is used for determining first characteristic information, second characteristic information and third characteristic information, wherein the first characteristic information represents the video preference of the user, the second characteristic information represents the video preference of the friend, and the third characteristic information characterizes the users who prefer the target video;
the first information determining unit is used for determining first information representing the similarity degree of video preference of the user and the friends according to the first characteristic information and the second characteristic information;
The second information determining unit is used for determining second information representing the favorites of friends on the target video by utilizing the second characteristic information and the third characteristic information;
and the video association information determining subunit is used for determining association information of the user and the friend by combining the first information and the second information.
In an embodiment of the present application, preferably, the feature information determining unit includes:
the matrix determining unit is used for generating a user video behavior matrix according to the user video playing behaviors, wherein the user video behavior matrix indicates the playing behaviors of each user in the whole network to each video in the whole network respectively;
the discrete learning unit is used for analyzing the discrete user video behaviors in the user video behavior matrix to generate characteristic information of the whole network user and the video;
and the characteristic information query unit is used for querying the first characteristic information of the user, the second characteristic information of friends and the third characteristic information of the target video from the characteristic information of the whole network user and the video.
In an embodiment of the present application, preferably, the discrete learning unit includes:
the matrix decomposition unit is used for multiplying and fitting a user video behavior matrix by the user embedding matrix and the video embedding matrix;
The optimizing unit is used for training the user embedding matrix and the video embedding matrix by adopting a gradient descent algorithm to obtain an optimized user embedding matrix and an optimized video embedding matrix; the optimized user embedding matrix and the optimized video embedding matrix form characteristic information of the whole network user and the video.
In an embodiment of the present application, preferably, the first information determining unit includes:
a first calculation unit configured to determine first similarity information characterizing a similarity between the first feature information and the second feature information;
the first information determining subunit is configured to input the first feature information, the second feature information, and the first similarity information to the pre-trained first model, so as to obtain first information that characterizes a similarity degree of video preferences of the user and the friend.
In an embodiment of the present application, preferably, the second information determining unit includes:
a second calculation unit configured to determine second similarity information characterizing a similarity between the second feature information and the third feature information;
the second information determining subunit is configured to input second feature information, third feature information, and second similarity information to the pre-trained second model, so as to obtain second information that characterizes the preference degree of the friend for the target video.
In the embodiment of the present application, preferably, the video association information determining subunit includes:
a weight determining unit configured to determine a weight of the first information and a weight of the second information;
a target information calculating unit for generating target information by combining the first information, the weight of the first information, the second information, and the weight of the second information;
and the information generating unit is used for inputting the target information into the pre-trained third model to obtain the associated information of the user and the friends.
In an embodiment of the present application, preferably, the weight determining unit includes:
an initial weight determining unit for determining an initial weight of the first information and an initial weight of the second information based on the attention mechanism module;
the normalization processing unit is used for performing normalization processing on the initial weight of the first information and the initial weight of the second information to obtain the weight of the first information and the weight of the second information.
Further, the video sharing device provided by the embodiment of the present application further includes a model generating unit for generating a video sharing prediction model, where the video sharing prediction model is composed of a first model, a second model, an attention mechanism module, and a third model, and the model generating unit includes:
The training sample acquisition unit is used for acquiring training samples, wherein the training samples indicate user samples, friend samples and video samples;
the prediction unit is used for determining a prediction result of the video sharing prediction model to be trained on the association information of the user sample and the friend sample, and the prediction result indicates the preference degree of the friend sample on the user sample for forwarding the video sample;
the model generation subunit is used for reversely adjusting parameters in the video sharing prediction model by taking a standard result of which the prediction result approaches to the training sample as a training target so as to generate the video sharing prediction model; the standard result of the training sample characterizes the real forwarding state of the video sample to the friend sample by the user sample.
As shown in fig. 5, a block diagram of an implementation manner of a computer device according to an embodiment of the present application includes:
a memory 501 for storing a program;
a processor 502 for executing a program, the program being specifically for:
responding to a friend sharing request of a user on a target video, and determining at least one friend of the user;
determining association information of the user and each friend respectively, wherein the association information of the user and the friends indicates the favorite degree of the friends on the user forwarding target video;
Based on the association information of the user and each friend, adjusting the ordering sequence among at least one friend to generate and display a friend list;
and sharing the target video to the target friends according to the selection operation of the user on the target friends in the friend list.
The processor 502 may be a central processing unit (CPU) or an application-specific integrated circuit (ASIC).
The control device may further comprise a communication interface 503 and a communication bus 504, wherein the memory 501, the processor 502 and the communication interface 503 perform communication with each other via the communication bus 504.
The embodiment of the present application further provides a readable storage medium, on which a computer program is stored, where the computer program is loaded and executed by a processor, to implement each step of the video sharing method, and a specific implementation process may refer to descriptions of corresponding parts of the foregoing embodiment, which is not repeated in this embodiment.
The application also proposes a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the methods provided in the various optional implementations of the video sharing method aspect or the video sharing device aspect, and the specific implementation process may refer to the descriptions of the corresponding embodiments and will not be repeated.
According to the video sharing method, device, computer equipment and storage medium provided by the embodiment of the application, when a user wants to share one video, the method, device and storage medium can intelligently predict the favorite degree of all friends of the user on the video, and sort the friends according to the favorite degree so as to place the friends which like the video in front, thereby promoting the user to share more short videos and increasing the social activity of the video.
The video sharing method, the video sharing device, the video sharing computer equipment and the video sharing storage medium provided by the application are described in detail, and specific examples are applied to illustrate the principles and the implementation modes of the application, and the description of the above examples is only used for helping to understand the method and the core ideas of the application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described as different from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
It is further noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a … " does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A video sharing method, comprising:
responding to a friend sharing request of a user on a target video, and determining at least one friend of the user;
determining association information of the user and each friend respectively, wherein the association information of the user and the friends indicates the preference degree of the friends for the user to forward the target video;
based on the association information of the user and each friend, adjusting the ordering sequence among the at least one friend to generate and display a friend list;
sharing the target video to the target friends according to the selection operation of the user on the target friends in the friend list;
wherein determining the association information of the user and the friend comprises:
determining first feature information, second feature information and third feature information, wherein the first feature information represents video preference of the user, the second feature information represents video preference of the friend, and the third feature information represents user preference of the target video;
determining first similarity information representing similarity between the first feature information and the second feature information according to the first feature information and the second feature information;
Inputting the first characteristic information, the second characteristic information and the first similarity information into a pre-trained first model to obtain first information representing the similarity degree of video preference of the user and the friends;
determining second similarity information characterizing similarity between the second feature information and the third feature information using the second feature information and the third feature information;
inputting the second characteristic information, the third characteristic information and the second similarity information into a pre-trained second model to obtain second information representing the favorites of the friends on the target video;
determining a weight of the first information and a weight of the second information;
generating target information by combining the first information, the weight of the first information, the second information and the weight of the second information;
and inputting the target information into a pre-trained third model to obtain the associated information of the user and the friend.
2. The method of claim 1, wherein the determining the first, second, and third characteristic information comprises:
Generating a user video behavior matrix according to the user video playing behavior, wherein the user video behavior matrix indicates the playing behavior of each user in the whole network to each video in the whole network respectively;
analyzing the discrete user video behaviors in the user video behavior matrix to generate characteristic information of the whole network user and the video;
and inquiring the first characteristic information of the user, the second characteristic information of the friends and the third characteristic information of the target video from the characteristic information of the whole network user and the video.
3. The method of claim 2, wherein analyzing the discrete user video behaviors in the user video behavior matrix generates feature information of the full-network user and video, comprising:
multiplying and fitting the user video behavior matrix by using a user embedding matrix and a video embedding matrix;
training the user embedded matrix and the video embedded matrix by adopting a gradient descent algorithm to obtain an optimized user embedded matrix and an optimized video embedded matrix;
the optimized user embedding matrix and the optimized video embedding matrix form characteristic information of the whole network user and the video.
4. The method of claim 1, wherein the determining the weight of the first information and the weight of the second information comprises:
determining an initial weight of the first information and an initial weight of the second information based on an attention mechanism module;
and carrying out standardization processing on the initial weight of the first information and the initial weight of the second information to obtain the weight of the first information and the weight of the second information.
5. The method of claim 4, further comprising a video sharing prediction model generation process, the video sharing prediction model being comprised of the first model, the second model, the attention mechanism module, and the third model, the video sharing prediction model generation process comprising:
acquiring training samples, wherein the training samples indicate user samples, friend samples and video samples;
determining a prediction result of a video sharing prediction model to be trained on the association information of the user sample and the friend sample, wherein the prediction result indicates the preference degree of the friend sample to the user sample for forwarding the video sample;
taking the standard result of the prediction result approaching to the training sample as a training target, and reversely adjusting parameters in the video sharing prediction model to generate a video sharing prediction model; the standard result of the training sample characterizes the real forwarding state of the video sample to the friend sample by the user sample.
6. A video sharing apparatus, comprising:
the friend acquisition unit is used for responding to a friend sharing request of a user on a target video and determining at least one friend of the user;
the association information determining unit is used for determining association information of the user and each friend respectively, and the association information of the user and the friends indicates the preference degree of the friends for forwarding the target video to the user;
the sequence adjusting unit is used for adjusting the sequencing sequence among the at least one friend to generate and display a friend list based on the association information of the user and each friend;
the video sharing unit is used for sharing the target video to the target friends according to the selection operation of the user on the target friends in the friend list;
wherein the association information determining unit includes:
the characteristic information determining unit is used for determining first characteristic information, second characteristic information and third characteristic information, wherein the first characteristic information represents video preference of the user, the second characteristic information represents video preference of the friend, and the third characteristic information represents the user who prefers the target video;
A first calculation unit configured to determine first similarity information characterizing similarity between the first feature information and the second feature information according to the first feature information and the second feature information;
a first information determining subunit, configured to input the first feature information, the second feature information, and the first similarity information to a pre-trained first model, to obtain first information that characterizes a similarity degree of video preferences of the user and the friend;
a second calculation unit configured to determine second similarity information characterizing similarity between the second feature information and the third feature information using the second feature information and the third feature information;
the second information determining subunit is configured to input the second feature information, the third feature information, and the second similarity information to a pre-trained second model, so as to obtain second information that characterizes the preference degree of the friend on the target video;
a weight determining unit configured to determine a weight of the first information and a weight of the second information;
a target information calculation unit configured to generate target information by combining the first information, the weight of the first information, the second information, and the weight of the second information;
And the information generating unit is used for inputting the target information into a pre-trained third model to obtain the associated information of the user and the friend.
7. The apparatus according to claim 6, wherein the feature information determination unit comprises:
a matrix determining unit, configured to generate a user-video behavior matrix according to user video playing behaviors, wherein the user-video behavior matrix indicates the playing behavior of each user in the whole network with respect to each video in the whole network;
a discrete learning unit, configured to analyze the discrete user video behaviors in the user-video behavior matrix to generate feature information of the whole-network users and videos;
and a feature information query unit, configured to query, from the feature information of the whole-network users and videos, the first feature information of the user, the second feature information of the friend, and the third feature information of the target video.
8. The apparatus according to claim 7, wherein the discrete learning unit comprises:
a matrix decomposition unit, configured to fit the user-video behavior matrix as the product of a user embedding matrix and a video embedding matrix;
an optimizing unit, configured to train the user embedding matrix and the video embedding matrix using a gradient descent algorithm to obtain an optimized user embedding matrix and an optimized video embedding matrix;
wherein the optimized user embedding matrix and the optimized video embedding matrix constitute the feature information of the whole-network users and videos.
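A minimal sketch of the factorization in claim 8: the behavior matrix `R` is fit as `U @ V.T` by gradient descent over the observed entries. The embedding dimension, learning rate, epoch count, and squared-error loss are illustrative choices, not taken from the patent:

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.05, epochs=500, seed=0):
    """Fit R ~ U @ V.T on observed (mask == 1) entries by gradient descent."""
    rng = np.random.default_rng(seed)
    n_users, n_videos = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))   # user embedding matrix
    V = rng.normal(scale=0.1, size=(n_videos, k))  # video embedding matrix
    for _ in range(epochs):
        E = mask * (R - U @ V.T)  # reconstruction error on observed entries only
        U += lr * E @ V           # gradient step for user embeddings
        V += lr * E.T @ U         # gradient step for video embeddings
    return U, V

R = np.array([[1.0, 0.0],
              [0.0, 1.0]])       # toy play-behavior matrix
U, V = factorize(R, np.ones_like(R))
```

After training, row `U[i]` serves as the feature information of user `i` and row `V[j]` as that of video `j`; the query unit of claim 7 then amounts to simple row lookups in the two optimized matrices.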
9. The apparatus according to claim 6, wherein the weight determining unit comprises:
an initial weight determining unit, configured to determine an initial weight of the first information and an initial weight of the second information based on an attention mechanism module;
and a normalization processing unit, configured to normalize the initial weight of the first information and the initial weight of the second information to obtain the weight of the first information and the weight of the second information.
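The normalization step of claim 9 can be illustrated with a softmax over the two initial attention scores — a common choice, though the patent does not fix the normalization function:

```python
import math

def normalize_weights(w1_init, w2_init):
    """Softmax two initial attention scores so both are positive and sum to 1."""
    e1, e2 = math.exp(w1_init), math.exp(w2_init)
    return e1 / (e1 + e2), e2 / (e1 + e2)

w1, w2 = normalize_weights(0.0, 0.0)  # equal scores -> equal weights
```

The resulting pair can be used directly as the weights that the target information calculation unit of claim 6 applies to the first and second information.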
10. The apparatus according to claim 9, further comprising a model generation unit configured to generate a video sharing prediction model composed of the first model, the second model, the attention mechanism module, and the third model, the model generation unit comprising:
a training sample acquisition unit, configured to acquire training samples, wherein each training sample indicates a user sample, a friend sample, and a video sample;
a prediction unit, configured to determine a prediction result, produced by the video sharing prediction model to be trained, of the association information between the user sample and the friend sample, wherein the prediction result indicates the degree of preference of the friend sample for the video sample forwarded by the user sample;
and a model generation subunit, configured to reversely adjust parameters in the video sharing prediction model with the prediction result approaching the standard result of the training sample as the training target, so as to generate the video sharing prediction model, wherein the standard result of the training sample characterizes the real state of the user sample forwarding the video sample to the friend sample.
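The training procedure of claim 10 — reversely adjusting parameters so the prediction approaches the sample's standard result (the real forwarding state) — follows the usual supervised pattern. The logistic model and cross-entropy gradient below are an illustrative stand-in for the full video sharing prediction model:

```python
import numpy as np

def train(features, labels, lr=0.5, epochs=200):
    """Adjust w so sigmoid(features @ w) approaches the 0/1 forwarding labels."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-features @ w))         # prediction result
        grad = features.T @ (pred - labels) / len(labels)  # cross-entropy gradient
        w -= lr * grad                                     # reverse parameter adjustment
    return w

# toy samples: each row is a (user, friend, video) feature vector,
# each label the real forwarding state (the "standard result")
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])
y = np.array([1.0, 0.0])
w = train(X, y)
```

After training, the model's predictions on the training samples approach their standard results, which is exactly the stopping criterion the model generation subunit targets.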
11. A computer device, comprising a processor and a memory connected through a communication bus, wherein the processor is configured to call and execute a program stored in the memory, and the memory is configured to store a program for implementing the video sharing method according to any one of claims 1 to 5.
12. A computer-readable storage medium having a computer program stored thereon which, when loaded and executed by a processor, implements the steps of the video sharing method according to any one of claims 1 to 5.
CN202110319469.1A 2021-03-25 2021-03-25 Video sharing method and device, computer equipment and storage medium Active CN113032625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110319469.1A CN113032625B (en) 2021-03-25 2021-03-25 Video sharing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113032625A CN113032625A (en) 2021-06-25
CN113032625B true CN113032625B (en) 2023-11-14

Family

ID=76473728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110319469.1A Active CN113032625B (en) 2021-03-25 2021-03-25 Video sharing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113032625B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080270038A1 (en) * 2007-04-24 2008-10-30 Hadi Partovi System, apparatus and method for determining compatibility between members of a social network
CN103166930A (en) * 2011-12-15 2013-06-19 腾讯科技(深圳)有限公司 Method and system for pushing network information
CN104967525A (en) * 2014-09-10 2015-10-07 腾讯科技(深圳)有限公司 News sharing method, apparatus and system
CN106326228A (en) * 2015-06-17 2017-01-11 富士通株式会社 Method and device for evaluating interest tendency of user
CN110837598A (en) * 2019-11-11 2020-02-25 腾讯科技(深圳)有限公司 Information recommendation method, device, equipment and storage medium
CN110971507A (en) * 2019-11-26 2020-04-07 维沃移动通信有限公司 Information display method and electronic equipment
CN111740896A (en) * 2020-07-07 2020-10-02 腾讯科技(深圳)有限公司 Content sharing control method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
Wu et al. An optimal feedback model to prevent manipulation behavior in consensus under social network group decision making
Li Reinforcement learning applications
CN110990871B (en) Machine learning model training method, prediction method and device based on artificial intelligence
CN110598016B (en) Method, device, equipment and medium for recommending multimedia information
CN103368921B (en) Distributed user modeling and method for smart machine
US9767201B2 (en) Modeling actions for entity-centric search
CN112231570B (en) Recommendation system support attack detection method, device, equipment and storage medium
US11929962B2 (en) Method and system for monitoring and integration of one or more intelligent conversational agents
CN111400603A (en) Information pushing method, device and equipment and computer readable storage medium
CN111125420B (en) Object recommendation method and device based on artificial intelligence and electronic equipment
Ren et al. A Novel Regret Theory‐Based Decision‐Making Method Combined with the Intuitionistic Fuzzy Canberra Distance
Jiang et al. Distributed deep learning optimized system over the cloud and smart phone devices
US11706028B1 (en) Social media profile identification connected to cryptographic token
Nobahari et al. ISoTrustSeq: a social recommender system based on implicit interest, trust and sequential behaviors of users using matrix factorization
CN115249082A (en) User interest prediction method, device, storage medium and electronic equipment
CN109829593A (en) The credit rating of target object determines method, apparatus, storage medium and electronic device
CN113032625B (en) Video sharing method and device, computer equipment and storage medium
Xu et al. Generalized contextual bandits with latent features: Algorithms and applications
CN116204709A (en) Data processing method and related device
US20210248515A1 (en) Assisted learning with module privacy
Davoudi et al. Paywall policy learning in digital news media
CN109918576B (en) Microblog attention recommendation method based on joint probability matrix decomposition
Yu Emergence and evolution of agent-based referral networks
JP2021105838A (en) Prediction system, prediction method and program
AU2021104349A4 (en) Methods, systems to monitor communications, design objectives and prepare actions on triggers using advanced digital opportunities including block chain and AI.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40047277

Country of ref document: HK

GR01 Patent grant