CN109451334A - User portrait generation processing method and device, and electronic equipment - Google Patents

User portrait generation processing method and device, and electronic equipment

Info

Publication number
CN109451334A
Authority
CN
China
Prior art keywords
video
user
information
model
video information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811400674.5A
Other languages
Chinese (zh)
Other versions
CN109451334B (en)
Inventor
黄山山
徐钊
向宇
隋雪芹
于芝涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Poly Cloud Technology Co Ltd
Original Assignee
Qingdao Poly Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Poly Cloud Technology Co Ltd filed Critical Qingdao Poly Cloud Technology Co Ltd
Priority to CN201811400674.5A priority Critical patent/CN109451334B/en
Publication of CN109451334A publication Critical patent/CN109451334A/en
Application granted granted Critical
Publication of CN109451334B publication Critical patent/CN109451334B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4661Deriving a combined profile for a plurality of end-users of the same client, e.g. for family members within a home
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the present invention provides a user portrait generation processing method and device, and an electronic device. The method includes: determining a video information set corresponding to a user according to information of videos watched by the user, where the video information set includes a plurality of pieces of video information, first video information in the plurality of pieces of video information includes a video identifier of a first video and a preference value of the first video, the preference value of the first video is used to identify the user's degree of preference for the first video, and the information of videos watched by the user includes video identifiers and playing durations; inputting the video information set corresponding to the user into a first model to obtain probability information of at least one to-be-selected user portrait output by the first model; and determining the user portrait of the user according to the probability information of the at least one to-be-selected user portrait. A user portrait obtained by this method is both complete in its dimensions and high in accuracy.

Description

User portrait generation processing method and device and electronic equipment
Technical Field
The embodiment of the invention relates to computer technology, in particular to a user portrait generation processing method and device and electronic equipment.
Background
A user representation (user portrait) is a model of a target user built from a series of attribute data, such as gender, age, education, income level, and marital status. An accurate user portrait facilitates precise marketing and personalized service, and guides product optimization. Obtaining an accurate user portrait is therefore critical to an enterprise.
In the prior art, a user portrait is mainly obtained from information actively provided by the user. For example, an enterprise may collect information such as the user's gender and age by way of a survey and create a user portrait based on that information.
However, a complete and accurate user portrait often cannot be obtained with such prior art methods.
Disclosure of Invention
The embodiment of the invention provides a user portrait generation processing method, a user portrait generation processing device and electronic equipment, which are used for solving the problem that a complete and accurate user portrait cannot be obtained in the prior art.
The first aspect of the embodiments of the present invention provides a user portrait generation processing method, including:
determining a video information set corresponding to a user according to information of videos watched by the user, wherein the video information set comprises a plurality of pieces of video information, first video information in the plurality of pieces of video information comprises a video identifier of a first video and a preference value of the first video, the preference value of the first video is used for identifying the preference degree of the user for the first video, and the information of the videos watched by the user comprises a video identifier and playing duration;
inputting the video information set corresponding to the user into a first model to obtain probability information of at least one to-be-selected user portrait output by the first model;
and determining the user portrait of the user according to the probability information of the at least one user portrait to be selected.
Further, the inputting the video information set corresponding to the user into a first model to obtain probability information of at least one to-be-selected user portrait output by the first model includes:
and inputting the video information set corresponding to the user into a first model, determining the user characteristics of the user by the first model according to a plurality of video information in the video information set and a video characteristic set obtained in advance, and determining and outputting the probability information of the at least one to-be-selected user portrait by the first model according to the user characteristics of the user.
Further, the determining, by the first model, the user characteristics of the user according to the plurality of pieces of video information in the set of video information and a pre-obtained set of video characteristics includes:
the first model searches first video characteristics of a first video from the video characteristic set according to the video identification of the first video in the first video information;
the first model determines a second video characteristic of the first video according to the first video characteristic and the preference value of the first video;
and the first model determines the user characteristics of the user according to the second video characteristics corresponding to each video information in the video information set.
Further, the determining and outputting, by the first model, probability information of the at least one to-be-selected user portrait according to the user characteristics of the user includes:
the first model determines and outputs probability information of the at least one user portrait to be selected according to the user characteristics and the relation parameters obtained in advance;
wherein the relationship parameter is used to characterize a potential relationship between a user feature and a user representation.
Further, before the step of inputting the video information set corresponding to the user into the first model to obtain the probability information of at least one to-be-selected user portrait output by the first model, the method further includes:
and training the first model by using a preset data set, wherein the trained first model comprises the video feature set and the relation parameters.
Further, before the step of inputting the video information set corresponding to the user into the first model to obtain the probability information of at least one to-be-selected user portrait output by the first model, the method further includes:
and judging whether the video characteristics corresponding to the first video exist in a pre-obtained video characteristic set, and if not, deleting the first video information from the video information set.
Further, the determining a video information set corresponding to the user according to the information of the video watched by the user includes:
determining a preference value of the first video according to the playing time length of the first video;
combining the video identification of the first video and the preference value of the first video into the first video information and adding to the set of video information.
A second aspect of the embodiments of the present invention provides a user portrait generation processing apparatus, including:
the first determining module is used for determining a video information set corresponding to a user according to information of videos watched by the user, wherein the video information set comprises a plurality of pieces of video information, first video information in the plurality of pieces of video information comprises a video identifier of a first video and a preference value of the first video, the preference value of the first video is used for identifying the preference degree of the user for the first video, and the information of the videos watched by the user comprises a video identifier and playing duration;
the processing module is used for inputting the video information set corresponding to the user into a first model to obtain probability information of at least one to-be-selected user portrait output by the first model;
and the second determining module is used for determining the user portrait of the user according to the probability information of the at least one user portrait to be selected.
Further, the processing module is specifically configured to:
and inputting the video information set corresponding to the user into a first model, determining the user characteristics of the user by the first model according to a plurality of video information in the video information set and a video characteristic set obtained in advance, and determining and outputting the probability information of the at least one to-be-selected user portrait by the first model according to the user characteristics of the user.
Further, the processing module comprises:
the searching unit is used for searching the first video characteristic of the first video from the video characteristic set according to the video identification of the first video in the first video information;
a first determining unit, configured to determine a second video feature of the first video according to the first video feature and the preference value of the first video;
and the second determining unit is used for determining the user characteristics of the user according to the second video characteristics corresponding to each piece of video information in the video information set.
Further, the processing module further includes:
a third determining unit, configured to determine and output probability information of the at least one to-be-selected user portrait according to the user characteristics and relationship parameters obtained in advance by the first model;
wherein the relationship parameter is used to characterize a potential relationship between a user feature and a user representation.
Further, the apparatus further comprises:
and the training module is used for training the first model by using a preset data set, and the trained first model comprises the video feature set and the relation parameters.
Further, the apparatus further comprises:
and the deleting module is used for judging whether the video characteristics corresponding to the first video exist in a pre-obtained video characteristic set, and if not, deleting the first video information from the video information set.
Further, the first determining module comprises:
a fourth determining unit, configured to determine a preference value of the first video according to a playing duration of the first video;
a combining unit for combining the video identification of the first video and the preference value of the first video into the first video information and adding to the set of video information.
A third aspect of embodiments of the present invention provides an electronic device, including:
a memory for storing program instructions;
a processor for calling and executing the program instructions in the memory to perform the method steps of the first aspect.
A fourth aspect of the embodiments of the present invention provides a readable storage medium, in which a computer program is stored, the computer program being configured to execute the method according to the first aspect.
The user portrait generation processing method, the device and the electronic equipment provided by the embodiment of the invention use the specific model to analyze the user portrait of the user based on the identification and the playing duration information of the video watched by the user at ordinary times. The user portrait obtained by the method has complete dimensionality, and meanwhile, the information of the video watched by the user can reflect the real characteristics of the user, so the accuracy of the user portrait obtained based on the information of the video watched by the user at ordinary times is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the following briefly introduces the drawings needed to be used in the description of the embodiments or the prior art, and obviously, the drawings in the following description are some embodiments of the present invention, and those skilled in the art can obtain other drawings according to the drawings without inventive labor.
FIG. 1 is a diagram illustrating an exemplary system architecture for a user portrait generation processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first implementation of a user portrait generation processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a second implementation of a user portrait generation processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a third implementation of a user portrait generation processing method according to an embodiment of the present invention;
FIG. 5 is a block diagram of a first implementation of a user portrait generation processing apparatus according to an embodiment of the present invention;
FIG. 6 is a block diagram of a second implementation of a user portrait generation processing apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of a third implementation of a user portrait generation processing apparatus according to an embodiment of the present invention;
FIG. 8 is a block diagram of a fourth implementation of a user portrait generation processing apparatus according to an embodiment of the present invention;
FIG. 9 is a block diagram of a fifth implementation of a user portrait generation processing apparatus according to an embodiment of the present invention;
FIG. 10 is a block diagram of a sixth implementation of a user portrait generation processing apparatus according to an embodiment of the present invention;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Out of concern for personal privacy, many users do not wish to provide certain personal information to enterprises. As a result, a user portrait obtained by the prior art often has missing dimensions, and its accuracy is difficult to guarantee. A complete and accurate user portrait therefore cannot be obtained by the prior art.
Based on the above problems, embodiments of the present invention provide a user portrait generation processing method that uses a specific model to analyze the user portrait of a user based on the identifiers and playing durations of the videos the user routinely watches. A user portrait obtained in this way has complete dimensions; moreover, because the information on the videos a user watches reflects the user's real characteristics, the accuracy of a user portrait obtained from that information is greatly improved.
FIG. 1 is a diagram illustrating an exemplary system architecture of the user portrait generation processing method according to an embodiment of the present invention. As shown in FIG. 1, the method involves a server and a terminal device. The terminal device may be a television, a mobile phone, a personal computer, or other device with a video playing function. A communication connection is established between the terminal device and the server. The terminal device may display a video list provided by the server. When a user wants to watch a video, the terminal device may request the content of the video from the server. The terminal device may record the duration for which the user watches the video and send that duration to the server. Alternatively, the duration for which the user watches the video may be recorded by the server; the embodiment of the present invention does not specifically limit this. Based on the durations for which the user has watched videos, the server then obtains the user portrait of the user using the method of the embodiment of the present invention. Alternatively, the server may transmit the obtained viewing durations to another specific device, which performs the user portrait generation processing.
Fig. 2 is a flowchart illustrating a first implementation of the user representation generation processing method according to an embodiment of the present invention, where an execution subject of the method may be the server or may be other specific devices. The following embodiments of the present invention are all described by taking a server as an example. As shown in fig. 2, the method includes:
s201, determining a video information set corresponding to a user according to information of videos watched by the user, wherein the video information set comprises a plurality of pieces of video information, first video information in the plurality of pieces of video information comprises a video identifier of a first video and a preference value of the first video, the preference value of the first video is used for identifying the preference degree of the user for the first video, and the information of the videos watched by the user comprises a video identifier and playing time length.
The first video information is any one piece of video information in the video information set.
Optionally, each piece of video information in the video information set is a combination of a video identifier and a preference value, and represents the user's degree of preference for the video corresponding to that identifier.
Optionally, the preference value of a video may be an integer from 0 to 10, each number representing a degree of preference.
Optionally, the video identifier may be, for example, a number assigned to the video by the server.
Illustratively, the first video information is (1,3), i.e. it represents that the user has a preference value of 3 for the video with the video identification of 1.
For example, assuming that user A has watched 3 videos, namely video 1, video 2, and video 3, the video information set corresponding to user A obtained from the information of these 3 watched videos may be { (1,2), (2,5), (3,1) }. That is, user A's preference value for video 1 is 2, for video 2 is 5, and for video 3 is 1.
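For illustration, the following minimal sketch builds the video information set of user A from the example above. Python is assumed here; the embodiment does not prescribe an implementation language, and the container types are illustrative.

```python
# Each piece of video information is a (video_id, preference_value) pair;
# the video information set for a user is a list of such pairs.
video_information_set = [(1, 2), (2, 5), (3, 1)]  # user A's set from the example

for video_id, preference in video_information_set:
    print(f"preference of user A for video {video_id}: {preference}")
```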
S202, inputting the video information set corresponding to the user into a first model to obtain the probability information of at least one to-be-selected user portrait output by the first model.
Optionally, the first model may be a neural network model. The first model is trained in advance on a preset data set so that it has the processing capability to generate user portraits.
Optionally, taking the home user portrait as an example, the home user portrait may be defined by the dimensions in Table 1 below.
TABLE 1
Multiple dimension combinations may be formed under the above dimension definitions, each combination representing one to-be-selected user portrait. Illustratively, one dimension combination is ([0,0,1,0,0], [1,1], [0,1], [1,0], [1,0], [1,0,1,0,0], [0,0,1,0,0], [0,0,1,0,0]), where each 1 in the combination marks the value the home user takes in that dimension. Specifically, in this example, the family has 3 members, the genders include male and female, there is no elderly person, there is a little child, the members are married, the age brackets are 14 and 25-34, the highest education level is college and high school, and the average monthly income is medium. In another example, a dimension combination may be ([0,0,1,0,0], [1,1], [0,1], [1,0], [1,0], [1,0,1,0,0], [0,0,1,0,0], [0,0,0,1,0]), which differs from the first combination only in average monthly income.
When a set of video information for a particular user is input into the first model, the first model may output probabilities for all possible combinations of dimensions, i.e., the probability that the user belongs to each user representation.
Illustratively, for the two example dimension combinations, the probability that the user belongs to the first dimension combination is 80%, and the probability that the user belongs to the second dimension combination is 5%.
In implementation, for a specific user portrait creation scenario, portrait dimensions such as those in Table 1 above need to be predefined, and the first model is trained and performs its analysis based on the predefined dimensions.
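As a sketch of how such a dimension combination can be represented in code: the dimension names follow the example combination above, while the encoding order and container types are assumptions for illustration.

```python
# A to-be-selected home-user portrait as an ordered tuple of binary
# sub-vectors, one per predefined portrait dimension (first example above).
example_combination = (
    [0, 0, 1, 0, 0],  # number of family members (here: 3)
    [1, 1],           # genders present (male, female)
    [0, 1],           # elderly person in the home (here: none)
    [1, 0],           # little child in the home (here: yes)
    [1, 0],           # marital status (here: married)
    [1, 0, 1, 0, 0],  # age brackets present (here: 14 and 25-34)
    [0, 0, 1, 0, 0],  # highest education level (here: college and high school)
    [0, 0, 1, 0, 0],  # average monthly income (here: medium)
)

# Flattened form, as a model would consume or emit it.
flat = [bit for sub_vector in example_combination for bit in sub_vector]
print(flat)
```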
S203, determining the user portrait of the user according to the probability information of the at least one user portrait to be selected.
Optionally, after the first model outputs the probability information of at least one dimension combination, the user portrait represented by a dimension combination whose probability value is larger than a preset threshold may be selected as the user portrait of the user, or the user portrait represented by the dimension combination with the largest probability value may be selected as the user portrait of the user.
Illustratively, in the home user portrait scenario, for a specific home user B, the dimension combinations output by the first model include the two example combinations above, of which the first has the largest probability value. The user portrait of home user B may then be determined to be the portrait represented by the first dimension combination. That is, home user B has 3 family members, the genders include male and female, there is no elderly person, there is a little child, the members are married, the age brackets are 14 and 25-34, the highest education level is college and high school, and the average monthly income is medium. A complete and accurate user portrait of home user B is thereby obtained.
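A minimal sketch of this selection step, assuming the model output is available as a mapping from each to-be-selected portrait to its probability; the function name and signature are illustrative, not part of the embodiment.

```python
def select_user_portrait(portrait_probs, threshold=None):
    """Select the user portrait from candidate probabilities.

    If a threshold is given, return the first portrait whose probability
    exceeds it; otherwise return the portrait with the largest probability.
    """
    if threshold is not None:
        for portrait, prob in portrait_probs.items():
            if prob > threshold:
                return portrait
    return max(portrait_probs, key=portrait_probs.get)

# From the example: combination 1 has probability 80%, combination 2 has 5%.
probs = {"dimension_combination_1": 0.80, "dimension_combination_2": 0.05}
print(select_user_portrait(probs))        # -> dimension_combination_1
print(select_user_portrait(probs, 0.5))   # -> dimension_combination_1
```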
In this embodiment, the user portrait of the user is analyzed with a specific model based on the identifiers and playing durations of the videos the user routinely watches. A user portrait obtained in this way has complete dimensions; moreover, because the information on the videos a user watches reflects the user's real characteristics, the accuracy of a user portrait obtained from that information is greatly improved.
Optionally, the step S202 may specifically include:
and inputting the video information set corresponding to the user into a first model, determining the user characteristics of the user according to a plurality of pieces of video information in the video information set and a video characteristic set obtained in advance by the first model, and determining and outputting probability information of the at least one to-be-selected user portrait according to the user characteristics of the user by the first model.
The pre-obtained video feature set is the set of features of the videos the server can provide to terminal devices for playback, where each video has its own video feature covering, for example, the video's genre, whether it is domestically produced, whether it is paid content, the suitable age range, and so on. The video feature set may be obtained during training of the first model; because the first model may be continuously updated, periodically or in an event-triggered manner, the video feature set is continuously updated as well. The video features may be represented by vectors.
A user feature is a vector capable of representing the characteristics of the user.
The process of the first model determining the user characteristics is first explained below.
Fig. 3 is a schematic flow chart of a second implementation of the user portrait generation processing method according to the embodiment of the present invention, and as shown in fig. 3, a process of determining a user feature of a user according to a plurality of pieces of video information in a video information set and a video feature set obtained in advance by a first model is as follows:
s301, the first model searches the first video feature of the first video from the video feature set according to the video identifier of the first video in the first video information.
The first video feature refers to a video feature of the first video in the video feature set, that is, a feature of the video itself.
S302, the first model determines a second video characteristic of the first video according to the first video characteristic and the preference value of the first video.
Optionally, the second video feature represents a video feature of the first video for a specific user.
It should be noted that the above steps S301 to S302 determine the second video feature of one video. In a specific implementation, the second video feature needs to be determined according to steps S301 to S302 for each piece of video information in the video information set.
And S303, the first model determines the user characteristics of the user according to the second video characteristics corresponding to each video information in the video information set.
Alternatively, the first model may calculate the user feature of the user by the following formula (1):

$$u^{(i)} = \frac{1}{\left|x^{(i)}\right|} \sum_{j \in x^{(i)}} r_j^{(i)} v_j \tag{1}$$

wherein $V = \{v_1, \ldots, v_N\} \subset \mathbb{R}^d$ is the above video feature set, $\mathbb{R}^d$ denotes the space of d-dimensional vectors, and $v_j$ denotes the video feature of the j-th video; N is an integer greater than 1; $x^{(i)}$ denotes the set of videos watched by user i, and $\left|x^{(i)}\right|$ denotes the number of videos in $x^{(i)}$, i.e., the number of videos watched by the user; $r_j^{(i)}$ denotes the preference value of user i for watched video j.
As can be seen from formula (1), once the video information set corresponding to the user is obtained, the first model weights the video feature of each video by the user's preference value for that video to obtain the video's feature with respect to this user, and then averages the weighted features of all watched videos to obtain the user feature of the user.
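A sketch of this computation, assuming formula (1) as reconstructed above; the container types and function names are illustrative.

```python
import numpy as np

def user_feature(video_feature_set, watched, preferences):
    """Formula (1): weight each watched video's feature vector by the
    user's preference value, then average over all watched videos."""
    weighted = [preferences[j] * np.asarray(video_feature_set[j]) for j in watched]
    return np.sum(weighted, axis=0) / len(watched)

# Toy example with 2-dimensional video features (d = 2).
features = {1: [0.2, 0.9], 2: [0.7, 0.1], 3: [0.4, 0.4]}
u = user_feature(features, watched=[1, 2, 3], preferences={1: 2, 2: 5, 3: 1})
print(u)  # the user feature vector u for this user
```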
The following describes a process of the first model determining and outputting probability information of at least one to-be-selected user portrait according to user characteristics.
Optionally, the first model determines and outputs probability information of the at least one to-be-selected user portrait according to the user characteristics and the relationship parameters obtained in advance.
Wherein the relationship parameter is used for representing the potential relationship between the user characteristic and the user portrait.
Optionally, the relationship parameter may be obtained during training of the first model, and the first model may be continuously updated according to a periodic or event-triggered manner, so that the relationship parameter is continuously updated. The above-mentioned relation parameters may be represented by a matrix.
Optionally, the first model may calculate the probability information of the at least one to-be-selected user portrait by the following formula (2):

$$P\left(y^{(i)} \,\middle|\, u^{(i)}\right) = \frac{\exp\left({u^{(i)}}^{\top} W\, y^{(i)}\right)}{\sum_{y \in Y} \exp\left({u^{(i)}}^{\top} W\, y\right)} \tag{2}$$

wherein W is the above relationship parameter, $y^{(i)}$ is a to-be-selected user portrait for user i, $u^{(i)}$ is the user feature obtained by the above formula (1), and Y is the set of all possible user portraits, i.e., the aforementioned dimension combinations.
As can be seen from formula (2), after the user feature $u^{(i)}$ of user i is obtained, the first model can calculate, for each particular user portrait, the probability that the user belongs to that portrait based on $u^{(i)}$ and the relationship parameter, and the actual user portrait of the user can then be obtained from the probability information of each user portrait.
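A sketch of this step, assuming formula (2) takes the softmax-over-bilinear-scores form reconstructed above; the matrix shapes and names are assumptions for illustration.

```python
import numpy as np

def portrait_probabilities(u, W, candidates):
    """Score each candidate portrait y by u^T W y and normalize with a
    softmax, yielding one probability per candidate (formula (2))."""
    scores = candidates @ (W.T @ u)  # one bilinear score per candidate
    scores -= scores.max()           # subtract max for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

# Toy example: user feature of dimension d = 2, portrait vectors of length 3.
u = np.array([0.5, 1.0])
W = np.array([[0.1, -0.2, 0.3], [0.4, 0.0, -0.1]])  # relationship parameter
Y = np.array([[1, 0, 1], [0, 1, 0]])                # two candidate portraits
print(portrait_probabilities(u, W, Y))  # probabilities summing to 1
```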
Optionally, before analyzing probability information of the user portrait by using the first model, the first model may be trained by using a preset data set, and the trained first model may include the video feature set and the relationship parameter.
In an optional implementation manner, before the video information set corresponding to the user is input to the first model, the following process may be further performed:
and judging whether the video characteristics corresponding to the first video exist in a video characteristic set obtained in advance, and if not, deleting the first video information from the video information set.
Optionally, in some scenarios, the video features of some new videos may not yet have been obtained through model training, and therefore may not exist in the video feature set. In this embodiment, it may therefore be determined, according to the video identifier of each such new video, whether its video feature exists in the video feature set; if not, the video information of that new video is deleted from the video information set, thereby ensuring that the video information input into the first model is valid.
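A minimal sketch of this validity check, under the same illustrative container types as in the earlier sketches.

```python
def filter_unknown_videos(video_information_set, video_feature_set):
    """Drop any piece of video information whose video has no feature in
    the pre-obtained video feature set (e.g. a video newer than the
    last model training)."""
    return [(video_id, pref) for video_id, pref in video_information_set
            if video_id in video_feature_set]

info = [(1, 2), (2, 5), (99, 4)]              # video 99 is a new video
features = {1: [0.2, 0.9], 2: [0.7, 0.1]}     # no feature for video 99
print(filter_unknown_videos(info, features))  # -> [(1, 2), (2, 5)]
```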
In an optional implementation manner, when the video information set corresponding to the user is determined in step S201, the determination may be specifically performed in the following manner.
Fig. 4 is a schematic flow chart of a third implementation of the user portrait generation processing method according to the embodiment of the present invention, as shown in fig. 4, the step S201 may include:
s401, determining a preference value of the first video according to the playing time length of the first video.
Optionally, the playing duration of the first video may refer to a cumulative playing duration of the first video, for example, a sum of durations of the user watching the first video from a time when the user registers in the server to a current time may be used as the playing duration of the first video.
As described previously, the preference value of the video may be represented by an integer of 0 to 10. Optionally, a mapping relationship between the playing time of the video and the preference value of the video may be established.
Illustratively, a playing duration of 10-20 hours maps to a preference value of 1, and a playing duration of 21-30 hours maps to a preference value of 2.
Furthermore, in this step, after the playing duration of each video is obtained, the preference value of the video may be obtained according to the mapping relationship.
S402, combining the video identification of the first video and the preference value of the first video into the first video information and adding the first video information into the video information set.
For example, assuming that the identifier of the video is 1 and the preference value determined in step S401 is 3, the combination of the two is (1,3), which is added as a piece of video information to the above video information set.
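A sketch of steps S401 and S402, keeping the two duration bands given in the text; the remaining bands and the function names are assumptions for illustration.

```python
def preference_from_duration(hours):
    """Map a cumulative playing duration (in hours) to a preference value."""
    if hours < 10:
        return 0                                 # assumed band below 10 hours
    if hours <= 20:
        return 1                                 # from the text: 10-20 hours -> 1
    if hours <= 30:
        return 2                                 # from the text: 21-30 hours -> 2
    return min(10, 2 + (int(hours) - 21) // 10)  # assumed continuation, capped at 10

def build_video_information(video_id, play_duration_hours):
    """S401 + S402: derive the preference value from the playing duration
    and combine it with the video identifier into one piece of video
    information."""
    return (video_id, preference_from_duration(play_duration_hours))

video_information_set = [build_video_information(1, 25)]
print(video_information_set)  # -> [(1, 2)]
```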
Fig. 5 is a block diagram of a first implementation of a user portrait generation processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the apparatus includes:
the first determining module 501 is configured to determine, according to information of videos watched by a user, a video information set corresponding to the user, where the video information set includes multiple pieces of video information, where a first video information in the multiple pieces of video information includes a video identifier of a first video and a preference value of the first video, the preference value of the first video is used to identify a preference degree of the user for the first video, and the information of the videos watched by the user includes a video identifier and a playing time length.
A processing module 502, configured to input the video information set corresponding to the user into a first model, so as to obtain probability information of at least one to-be-selected user portrait output by the first model.
A second determining module 503, configured to determine the user portrait of the user according to the probability information of the at least one user portrait to be selected.
The apparatus is used to implement the foregoing method embodiments; its implementation principle and technical effect are similar, and details are not repeated here.
In another embodiment, the processing module 502 is specifically configured to:
and inputting the video information set corresponding to the user into a first model, determining the user characteristics of the user by the first model according to a plurality of video information in the video information set and a video characteristic set obtained in advance, and determining and outputting the probability information of the at least one to-be-selected user portrait by the first model according to the user characteristics of the user.
Fig. 6 is a block diagram of a second implementation of the user portrait generation processing apparatus according to an embodiment of the present invention. As shown in fig. 6, the processing module 502 includes:
a searching unit 5021, configured to search the first video feature of the first video from the video feature set according to the video identifier of the first video in the first video information.
A first determining unit 5022, configured to determine a second video feature of the first video according to the first video feature and the preference value of the first video.
A second determining unit 5023, configured to determine the user characteristics of the user according to the second video characteristics corresponding to each piece of video information in the video information set.
Fig. 7 is a block diagram of a third implementation of the user portrait generation processing apparatus according to an embodiment of the present invention. As shown in fig. 7, the processing module 502 further includes:
a third determining unit 5024, configured to determine and output probability information of the at least one to-be-selected user portrait according to the user feature and the relationship parameter obtained in advance by the first model.
Wherein the relationship parameter is used to characterize a potential relationship between a user feature and a user representation.
Fig. 8 is a block diagram of a fourth implementation of the user portrait generation processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the apparatus further includes:
a training module 504, configured to train the first model using a preset data set, where the trained first model includes the video feature set and the relationship parameter.
Fig. 9 is a block diagram of a fifth implementation of the user portrait generation processing apparatus according to an embodiment of the present invention. As shown in fig. 9, the apparatus further includes:
a deleting module 505, configured to determine whether a video feature corresponding to the first video exists in a pre-obtained video feature set, and if not, delete the first video information from the video information set.
Fig. 10 is a block diagram of a sixth implementation of the user portrait generation processing apparatus according to an embodiment of the present invention. As shown in fig. 10, the first determining module 501 includes:
the fourth determining unit 5011 is configured to determine a preference value of the first video according to the playing time of the first video.
A combining unit 5012, configured to combine the video identifier of the first video and the preference value of the first video into the first video information and add it to the set of video information.
Fig. 11 is a block diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 11, the electronic device 1100 includes:
a memory 1101 for storing program instructions.
The processor 1102 is configured to call and execute the program instructions in the memory 1101 to perform the method steps in the above-described method embodiments.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A user representation generation processing method, comprising:
determining a video information set corresponding to a user according to information of videos watched by the user, wherein the video information set comprises a plurality of pieces of video information, first video information in the plurality of pieces of video information comprises a video identifier of a first video and a preference value of the first video, the preference value of the first video is used for identifying the preference degree of the user for the first video, and the information of the videos watched by the user comprises a video identifier and playing duration;
inputting the video information set corresponding to the user into a first model to obtain probability information of at least one to-be-selected user portrait output by the first model;
and determining the user portrait of the user according to the probability information of the at least one user portrait to be selected.
2. The method of claim 1, wherein the inputting the set of video information corresponding to the user into a first model to obtain probability information of at least one candidate user portrait output by the first model comprises:
and inputting the video information set corresponding to the user into a first model, determining the user characteristics of the user by the first model according to a plurality of video information in the video information set and a video characteristic set obtained in advance, and determining and outputting the probability information of the at least one to-be-selected user portrait by the first model according to the user characteristics of the user.
3. The method of claim 2, wherein determining, by the first model, the user characteristic of the user according to a plurality of pieces of video information in the set of video information and a pre-obtained set of video characteristics comprises:
the first model searches first video characteristics of a first video from the video characteristic set according to the video identification of the first video in the first video information;
the first model determines a second video characteristic of the first video according to the first video characteristic and the preference value of the first video;
and the first model determines the user characteristics of the user according to the second video characteristics corresponding to each video information in the video information set.
4. The method of claim 2, wherein the first model determines and outputs probability information of the at least one selected user representation according to the user characteristics of the user, comprising:
the first model determines and outputs probability information of the at least one user portrait to be selected according to the user characteristics and the relation parameters obtained in advance;
wherein the relationship parameter is used to characterize a potential relationship between a user feature and a user representation.
5. The method of claim 4, wherein before inputting the set of video information corresponding to the user into the first model and obtaining the probability information of at least one of the candidate user images output by the first model, the method further comprises:
and training the first model by using a preset data set, wherein the trained first model comprises the video feature set and the relation parameters.
6. The method according to any one of claims 1-5, wherein before inputting the set of video information corresponding to the user into the first model and obtaining the probability information of at least one candidate user portrait output by the first model, the method further comprises:
and judging whether the video characteristics corresponding to the first video exist in a pre-obtained video characteristic set, and if not, deleting the first video information from the video information set.
7. The method according to any one of claims 1 to 5, wherein the determining the video information set corresponding to the user according to the information of the videos watched by the user comprises:
determining a preference value of the first video according to the playing time length of the first video;
combining the video identification of the first video and the preference value of the first video into the first video information and adding to the set of video information.
8. A user representation generation processing apparatus, comprising:
the first determining module is used for determining a video information set corresponding to a user according to information of videos watched by the user, wherein the video information set comprises a plurality of pieces of video information, first video information in the plurality of pieces of video information comprises a video identifier of a first video and a preference value of the first video, the preference value of the first video is used for identifying the preference degree of the user for the first video, and the information of the videos watched by the user comprises a video identifier and playing duration;
the processing module is used for inputting the video information set corresponding to the user into a first model to obtain probability information of at least one to-be-selected user portrait output by the first model;
and the second determining module is used for determining the user portrait of the user according to the probability information of the at least one user portrait to be selected.
9. An electronic device, comprising:
a memory for storing program instructions;
a processor for invoking and executing program instructions in said memory for performing the method steps of any of claims 1-7.
10. A readable storage medium, characterized in that a computer program is stored in the readable storage medium for performing the method of any of claims 1-7.
CN201811400674.5A 2018-11-22 2018-11-22 User portrait generation processing method and device and electronic equipment Active CN109451334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811400674.5A CN109451334B (en) 2018-11-22 2018-11-22 User portrait generation processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811400674.5A CN109451334B (en) 2018-11-22 2018-11-22 User portrait generation processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109451334A true CN109451334A (en) 2019-03-08
CN109451334B CN109451334B (en) 2021-04-06

Family

ID=65554685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811400674.5A Active CN109451334B (en) 2018-11-22 2018-11-22 User portrait generation processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109451334B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008375A (en) * 2019-03-22 2019-07-12 广州新视展投资咨询有限公司 Video is recommended to recall method and apparatus
CN110008376A (en) * 2019-03-22 2019-07-12 广州新视展投资咨询有限公司 User's portrait vector generation method and device
CN111556369A (en) * 2020-05-21 2020-08-18 四川省有线广播电视网络股份有限公司 Television-based family classification method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007149123A2 (en) * 2005-12-20 2007-12-27 General Instrument Corporation Method and apparatus for providing user profiling based on facial recognition
CN106874266A (en) * 2015-12-10 2017-06-20 中国电信股份有限公司 User's portrait method and the device for user's portrait
CN108804454A (en) * 2017-04-28 2018-11-13 华为技术有限公司 One population portrait method, group's portrait device and server
CN107124653A (en) * 2017-05-16 2017-09-01 四川长虹电器股份有限公司 The construction method of TV user portrait

Also Published As

Publication number Publication date
CN109451334B (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN110297848B (en) Recommendation model training method, terminal and storage medium based on federal learning
CN108304435B (en) Information recommendation method and device, computer equipment and storage medium
US11914639B2 (en) Multimedia resource matching method and apparatus, storage medium, and electronic apparatus
CN106326391B (en) Multimedia resource recommendation method and device
CN111741336B (en) Video content recommendation method, device, equipment and storage medium
CN109451334B (en) User portrait generation processing method and device and electronic equipment
CN109872242A (en) Information-pushing method and device
US20170169062A1 (en) Method and electronic device for recommending video
CN114747227A (en) Method, system, and apparatus for estimating census-level audience size and total impression duration across demographic groups
CN108932646B (en) User tag verification method and device based on operator and electronic equipment
CN111897950A (en) Method and apparatus for generating information
CN111683274A (en) Bullet screen advertisement display method, device and equipment and computer readable storage medium
JP2024508502A (en) Methods and devices for pushing information
CN110990627A (en) Knowledge graph construction method and device, electronic equipment and medium
CN110855487B (en) Network user similarity management method, device and storage medium
CN104967690A (en) Information push method and device
CN109377284B (en) Method and electronic equipment for pushing information
CN112672202B (en) Bullet screen processing method, equipment and storage medium
CN117459662B (en) Video playing method, video identifying method, video playing device, video playing equipment and storage medium
CN113204699B (en) Information recommendation method and device, electronic equipment and storage medium
CN111708946A (en) Personalized movie recommendation method and device and electronic equipment
CN116244601A (en) Training method of credit evaluation model of different network user and credit grade evaluation method
CN114722279A (en) Content recommendation method and device, electronic equipment and storage medium
KR20190033884A (en) Method for deep learning based point-of-interest prediction using user informaiton of sns
US11272254B1 (en) System, method, and computer program for using user attention data to make content recommendations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant