CN114902212A - Image generation method, image generation device, server and storage medium - Google Patents


Info

Publication number
CN114902212A
CN114902212A
Authority
CN
China
Prior art keywords
application
user
label
group
tag
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080084186.7A
Other languages
Chinese (zh)
Inventor
王逸峰
安琪
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN114902212A publication Critical patent/CN114902212A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising


Abstract

The application discloses a portrait generation method, a portrait generation apparatus, a server and a storage medium, wherein the portrait generation method comprises the following steps: acquiring application information of at least one application used by a target user; generating an application label of the target user according to the application information; acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of a user group to which the target user belongs; and generating the user portrait of the target user according to the application label and the group label. The method can construct a multi-dimensional user portrait.

Description

Image generation method, image generation device, server and storage medium
Technical Field
The present application relates to the field of data analysis technologies, and in particular, to a portrait generation method, apparatus, server, and storage medium.
Background
With the rapid development of network information technology, information recommendation technology based on big data has also emerged. Information recommendation mainly relies on a user portrait, and the accuracy of the user portrait affects the accuracy of the recommendation; constructing an accurate user portrait is therefore the key to information recommendation technology.
Disclosure of Invention
In view of the foregoing, the present application provides a portrait generation method, apparatus, server, and storage medium.
In a first aspect, an embodiment of the present application provides a portrait generation method, where the method includes: acquiring application information of at least one application used by a target user; generating an application label of the target user according to the application information; acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of a user group to which the target user belongs; and generating the user portrait of the target user according to the application label and the group label.
In a second aspect, an embodiment of the present application provides a portrait generation apparatus, including: the system comprises an information acquisition module, a label generation module, a label acquisition module and a content generation module, wherein the information acquisition module is used for acquiring application information of at least one application used by a target user; the label generating module is used for generating an application label of the target user according to the application information; the tag obtaining module is used for obtaining a group tag corresponding to the target user according to the application tag, wherein the group tag is used for representing the behavior characteristics of a user group to which the target user belongs; and the content generation module is used for generating the user portrait of the target user according to the application label and the group label.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the portrait generation method provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where program code is stored in the computer-readable storage medium, and the program code is called by a processor to execute the portrait generation method provided in the first aspect.
According to the scheme provided by the application, application information of at least one application used by a target user is obtained; an application label of the target user is generated according to the application information; a group label corresponding to the target user, which represents the behavior characteristics of the user group to which the target user belongs, is obtained according to the application label; and finally the user portrait of the target user is generated according to the application label and the group label. A user portrait with more dimensions is thus constructed from the user's application information and the group label of the user group to which the user belongs, improving the accuracy of the constructed user portrait.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 shows a flowchart of a portrait generation method according to one embodiment of the present application.
FIG. 2 shows a flowchart of a portrait generation method according to another embodiment of the present application.
FIG. 3 is a flowchart illustrating step S220 of a portrait generation method according to another embodiment of the present application.
FIG. 4 is a flowchart illustrating step S240 of a portrait generation method according to another embodiment of the present application.
FIG. 5 shows a flowchart of a portrait generation method according to yet another embodiment of the present application.
FIG. 6 shows a flowchart of a portrait generation method according to yet another embodiment of the present application.
FIG. 7 shows a flowchart of a portrait generation method according to yet another embodiment of the present application.
FIG. 8 shows a block diagram of a portrait generation apparatus according to an embodiment of the present application.
FIG. 9 shows a block diagram of a content generation module in a portrait generation apparatus according to an embodiment of the present application.
FIG. 10 shows a block diagram of a feature acquisition unit in a content generation module according to an embodiment of the present application.
FIG. 11 is a block diagram of a server for executing a portrait generation method according to an embodiment of the present application.
FIG. 12 shows a storage unit for storing or carrying program code implementing the portrait generation method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
As an effective tool for profiling target users and linking user demands with design direction, the user portrait is widely applied in various fields. The user portrait was first applied in the e-commerce field: against the background of the big-data era, user information floods the network; each piece of concrete user information is abstracted into labels, and these labels are used to concretize the user's image, so that targeted services can be provided to the user.
User portraits have become a popular area. According to information such as a user's operation and payment behaviors on an application (APP), most user portrait systems obtain user labels derived from the APP through a series of feature and algorithm conversions. When performing information recommendation and marketing, a user group similar to the target user's labels is selected, and corresponding information recommendation and marketing are then performed on those users.
Through long-term research, the inventor found that a user portrait generated only from the information a user produces on APPs cannot be extended into a portrait description with more dimensions, so the portrait system cannot achieve the expected effect in information recommendation.
In view of the foregoing problems, the inventor proposes the portrait generation method, apparatus, server and storage medium of the embodiments of the present application, which can construct a user portrait with more dimensions according to the application information of the applications used by a user and the group label of the user group to which the user belongs, so as to improve the accuracy of the constructed user portrait. The specific portrait generation method will be described in detail in the following embodiments.
Referring to FIG. 1, FIG. 1 is a schematic flowchart illustrating a portrait generation method according to an embodiment of the present application. The portrait generation method is used for constructing a user portrait with more dimensions, thereby improving the accuracy of the constructed user portrait. In a specific embodiment, the portrait generation method is applied to a portrait generation apparatus 400 as shown in FIG. 8 and a server 100 (FIG. 11) in which the portrait generation apparatus 400 is disposed. The specific flow of this embodiment will be described below by taking a server as an example; it is understood that the server applied in this embodiment may be a conventional server, a cloud server, and the like, which is not limited herein. As will be described in detail with respect to the flow shown in FIG. 1, the portrait generation method may specifically include the following steps:
step S110: application information of at least one application used by a target user is acquired.
In the embodiment of the application, when the server constructs the user portrait of the target user, the server may acquire application information generated by the target user using applications, so as to analyze the habits and personal characteristics of the target user according to that application information and construct the user portrait. The application information may include user information entered when the user logs in to or registers an application, and may also include usage information generated by using the application; of course, the specific application information is not limited thereto, and may, for example, also include attribute information of the application itself. The application information of the applications used by the target user may be application information of at least one application, that is, application information of one application or application information of a plurality of applications.
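The "application information" described above can be pictured as a simple record combining user information and usage information. The following Python sketch is illustrative only; the field names (`app_name`, `user_info`, `usage_info`) and sample values are assumptions, not terms from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class AppInfo:
    """One application's information for one user (hypothetical layout)."""
    app_name: str
    user_info: dict = field(default_factory=dict)   # login/registration data
    usage_info: dict = field(default_factory=dict)  # data generated by usage

# One target user may contribute records for one or several applications.
target_user_records = [
    AppInfo("shopping_app",
            user_info={"age": 28},
            usage_info={"usage_hours": 1.5, "operation": "purchase"}),
    AppInfo("travel_app",
            user_info={"age": 28},
            usage_info={"usage_hours": 0.5, "operation": "browse"}),
]
```

The server would then feed such records into the label-generation step described next.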
Step S120: and generating an application label of the target user according to the application information.
In the embodiment of the application, when the server generates the user portrait of the target user according to the obtained application information, the server may generate the application label of the target user according to the application information. The application label is a label generated from the application information corresponding to the target user, and may be divided into multiple types of labels, for example, a basic portrait label corresponding to the user information and a behavior label corresponding to the usage information; the specific label types included in the application label are not limited.
In some embodiments, the application tag may include a plurality of tags, each having a specific characteristic value. For example, the application tags include tags of dimensions such as application type, operation preference, consumption capability, operation time, and the like, and each tag has corresponding content. It should be noted that each type of information may be quantified or directly used as a feature value, so as to form the content corresponding to each label. Of course, the specific application label may not be limited, and only the personal characteristics and behavior habits of the user need to be characterized.
In some embodiments, the server may generate the application label of the target user from the application information corresponding to the target user by using a pre-trained label generation model. Specifically, the server may obtain a large amount of application information and annotate each piece of application information with the application label that should be generated for it, thereby obtaining a large number of training samples. A trained label generation model is then obtained by training on these samples with a machine learning method.
After the server finishes training the label generation model, the server can also test the generated label generation model according to the test sample (which can also be application information obtained in advance and an application label corresponding to the application information) to obtain the label generation accuracy of the label generation model, so that a user can adjust the parameters of the label generation model according to the accuracy to obtain a label generation model with higher accuracy. For example, when the tested accuracy is lower than a certain value, a prompt message may be generated to prompt the user to adjust the model parameters of the tag generation model.
Of course, the server may also be trained using a variety of machine learning methods to obtain a plurality of label generation models. And then testing the label generation models to obtain the accuracy of each label generation model, and finally using the label generation models with the accuracy meeting the preset accuracy condition as the finally obtained label generation models. The preset accuracy condition may be that the accuracy is the highest, or that the accuracy is greater than a preset threshold, and the like, which is not limited herein.
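The procedure above, training several candidate models, testing each on held-out samples, and keeping those whose accuracy meets a preset condition, can be sketched in a few lines. Everything below (the toy models, the 0.8 threshold, the sample data) is invented for illustration; the patent does not specify a concrete algorithm or threshold.

```python
def accuracy(model, test_samples):
    """Fraction of test samples whose predicted label matches the truth."""
    hits = sum(1 for info, label in test_samples if model(info) == label)
    return hits / len(test_samples)

# Two toy "label generation models" mapping application info to a label.
model_a = lambda info: "gamer" if info.get("game_hours", 0) > 2 else "casual"
model_b = lambda info: "gamer"  # degenerate baseline, always the same label

# Held-out test samples: (application information, expected application label).
test_samples = [
    ({"game_hours": 3}, "gamer"),
    ({"game_hours": 0}, "casual"),
    ({"game_hours": 5}, "gamer"),
]

# Keep only models meeting the preset accuracy condition.
PRESET_THRESHOLD = 0.8
candidates = {"model_a": model_a, "model_b": model_b}
selected = {name: m for name, m in candidates.items()
            if accuracy(m, test_samples) >= PRESET_THRESHOLD}
```

Here `model_a` scores 1.0 and survives, while `model_b` scores about 0.67 and is discarded; a real system would compare full machine-learning models the same way.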
Step S130: and acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of the user group to which the target user belongs.
In the embodiment of the application, after the server generates the application tag according to the application information corresponding to the target user, the server may further obtain the group tag corresponding to the target user according to the application tag. The group tag is used to characterize the behavior characteristics of a user group to which the target user belongs, that is, the target user may be classified into a user group, and the user group has a group tag, and the group tag may characterize the behavior characteristics of the user group.
In some embodiments, the different user groups may be obtained in advance by clustering users according to previously obtained application information. The clustering can be performed according to the behavior characteristics of the users; that is, clustering is performed according to the information reflecting behavior characteristics in the application information of different users, so that a plurality of clustering results are obtained, each clustering result may include one or more corresponding users, and each clustering result can serve as a user group. Of course, the group label of each user group may also be generated in advance according to the information reflecting the users' behavior characteristics in the application information of the users in that group.
Further, the server may match the characteristics reflecting the user behavior in the application tag with the behavior characteristics in each group tag according to the application tag of the target user, and determine the group tag of the user group to which the target user belongs according to the matching result.
In other embodiments, the server may also directly cluster users by their behavior characteristics according to previously obtained application information together with the application information of the target user obtained this time, so as to obtain a plurality of user groups, determine the user group to which the target user belongs, and generate the group tag corresponding to that user group.
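The clustering described above, grouping users by behavior characteristics and attaching a group tag to each cluster, can be illustrated with a minimal nearest-centroid assignment. The feature (nightly usage hours), the centroids, and the tag names are all assumptions made for this sketch; a real system would use a proper clustering algorithm over many behavior dimensions.

```python
def assign_group(behavior_value, centroids):
    """Return the index of the nearest centroid (one cluster per user group)."""
    return min(range(len(centroids)),
               key=lambda i: abs(behavior_value - centroids[i]))

# Hypothetical cluster centers over one behavior feature: nightly usage hours.
centroids = [0.5, 3.0]
group_tags = ["daytime_users", "night_owls"]  # one group tag per cluster

# Users mapped to their behavior feature value, then to a group tag.
users = {"target_user": 2.8, "user_b": 0.2, "user_c": 3.5}
memberships = {u: group_tags[assign_group(v, centroids)]
               for u, v in users.items()}
```

Under these invented numbers the target user falls into the "night_owls" cluster, so that cluster's group tag would feed into the portrait.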
Of course, the group tag may also be obtained from another device. For example, if the device managing the group tags of a plurality of user groups is another server, the server generating the portrait may obtain the group tag corresponding to the target user from that other server. Specifically, the server may generate a tag obtaining request according to the application information corresponding to the target user and send the request to the other server, so as to obtain from it the group tag of the user group to which the target user belongs.
It can be understood that the group tag of the user group to which the target user belongs is determined by the application tag, and the behavior habits of the user group can reflect the behavior habits of the target user; therefore, the group tag can be used to construct the user portrait of the target user, so as to increase the dimensions of the labels in the user portrait of the target user.
Step S140: and generating the user portrait of the target user according to the application label and the group label.
In the embodiment of the application, after the server generates the application tag and acquires the group tag of the user group to which the target user belongs, the user portrait of the target user can be generated according to the application tag and the group tag. The generated user portrait may include both the application tag and the group tag, so that the user portrait of the target user contains not only the application tag formed from the application information of the applications used by the target user, but also the group tag of the user group to which the target user belongs. The dimensions of the labels in the user portrait are therefore wider, and the accuracy of the constructed user portrait is improved. The constructed user portrait can be used for information pushing; because the labels included in the user portrait cover many dimensions, the portrait is suitable for pushing in various scenarios, which helps ensure the accuracy of information pushing.
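A minimal sketch of this final step: the portrait simply unions the target user's application tags with the group tag of the cluster the user falls into, so a push system can match on either kind of label. The structure and field names below are illustrative, not taken from the patent.

```python
def build_portrait(application_tags, group_tags):
    """Combine individual and group-level labels into one user portrait."""
    return {"application_tags": dict(application_tags),
            "group_tags": list(group_tags)}

# Hypothetical inputs: tags derived from the user's own application
# information, plus the tag of the user group the user was assigned to.
portrait = build_portrait(
    application_tags={"consumption_level": "high", "app_type": "shopping"},
    group_tags=["night_owls"],
)
```

The point of keeping both tag kinds is that group-level labels cover behavior dimensions the individual's own records may not contain, which is exactly the dimensionality gain the passage describes.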
According to the portrait generation method provided by the embodiment of the application, application information of at least one application used by a target user is obtained; an application label of the target user is generated according to the application information; a group label corresponding to the target user, which represents the behavior characteristics of the user group to which the target user belongs, is obtained according to the application label; and finally the user portrait of the target user is generated according to the application label and the group label. A user portrait with more dimensions is thus constructed according to the user's application information and the group label of the user group to which the user belongs, and the accuracy of the constructed user portrait is improved.
Referring to FIG. 2, FIG. 2 is a schematic flowchart illustrating a portrait generation method according to another embodiment of the present application. The method is applied to the server. As will be described in detail with respect to the flow shown in FIG. 2, the portrait generation method may specifically include the following steps:
step S210: application information of at least one application used by a target user is obtained.
In an embodiment of the present application, the application information acquired by the server for the at least one application used by the target user may include user information of the at least one application and usage information of the at least one application. The user information may be information generated when the user logs in to or registers an application, and may include: name, age, birthday, education level, constellation, belief, marital status, email address, and the like. The usage information may be information generated by the user using an application, for example: usage time, functions used, operation types, operation contents, and geographic locations; such usage information can reflect the user's usage habits.
In the embodiment of the application, the server may obtain application information of a target user using a plurality of applications. The plurality of applications may include different types of applications, for example, a shopping application, a takeaway application, a travel application, a work-related application, and the like. It can be understood that the wider the range of application types, the better the subsequently generated user portrait reflects the user's usage habits.
In some embodiments, each software developer focuses on its own related services during development, so the user identifiers used by different software developers follow their respective specifications; therefore, the applications can be linked through a unified user identifier in order to acquire application information of different applications. As one implementation, the user identifier of each application's vendor can be associated with a unified identifier, and the application information of different applications can then be obtained through the target user's unified identifier and this association relationship. As another implementation, a third-party account may be used when logging in to and registering different applications; since the third-party account is controlled by the server, application information of different applications may also be obtained through the third-party account, for example, the account of a mobile phone manufacturer.
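The unified-identifier association can be pictured as a mapping table from (vendor, vendor-specific id) pairs to one unified id, which lets the server gather one user's records across applications. All identifiers and record contents below are invented for illustration.

```python
# Hypothetical association table: each vendor's own user id maps to a
# single unified identifier shared across applications.
vendor_to_unified = {
    ("shop_vendor", "s-1001"): "U42",
    ("trip_vendor", "t-77"):   "U42",
    ("shop_vendor", "s-2002"): "U43",
}

def records_for(unified_id, vendor_records):
    """Gather application records whose vendor-specific id maps to unified_id."""
    return [rec for key, rec in vendor_records.items()
            if vendor_to_unified.get(key) == unified_id]

# Application information as held per vendor, keyed by (vendor, vendor id).
vendor_records = {
    ("shop_vendor", "s-1001"): {"app": "shopping", "hours": 1.5},
    ("trip_vendor", "t-77"):   {"app": "travel",   "hours": 0.5},
    ("shop_vendor", "s-2002"): {"app": "shopping", "hours": 4.0},
}
u42_records = records_for("U42", vendor_records)
```

With the table in place, one lookup collects the target user's shopping and travel records even though the two vendors use unrelated id schemes.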
In other embodiments, the server may also communicate with a vendor server corresponding to each different application to obtain the application information of the different applications from the vendor server corresponding to each different application.
Of course, in the embodiment of the present application, a manner in which a specific server obtains application information of a plurality of different applications may not be limited.
Step S220: and generating an application label of the target user according to the application information.
In this embodiment, the server may generate the application tag of the target user according to the application information corresponding to the target user, where the application tag of the target user may include a basic portrait label of the target user and a behavior portrait label of the target user. The basic portrait label is used for representing basic information of the user and may be a label generated according to information such as basic attributes and social attributes in the application information; the behavior portrait label is used for representing the behavior habits of the user and may be a label generated according to information such as behavior habits, hobbies, application use preferences, and mobile phone use preferences in the application information.
Personal information includes, but is not limited to: name, age, education level, constellation, belief, marital status, and email address.
Social attributes include, but are not limited to: industry/profession, position, income level, child status, vehicle usage, housing status, mobile phone, and mobile operator. Housing status may include: renting, owning, and repaying a mortgage. The mobile phone may include: brand and price. The mobile operator may include: brand, network, traffic characteristics and mobile number. The brand may include: China Mobile, China Unicom, China Telecom, and the like. The network may include: none, 2G, 3G and 4G. The traffic characteristics may include: high, medium and low.
Behavior habits include, but are not limited to: geographic location, lifestyle, transportation, residential hotel type, economic/financial features, eating habits, shopping features, and payment. Lifestyle may include: work and rest schedule, working hours, computer internet-surfing time and shopping time. The shopping features may include: shopping item categories and shopping methods. The payment features may include: time of payment, place of payment, manner of payment, individual payment amount and total payment amount.
Hobbies include, but are not limited to: reading preferences, news preferences, video preferences, music preferences, sports preferences, and travel preferences. The reading preferences may include: reading frequency, reading time period, total reading time and reading categories.
In some embodiments, referring to fig. 3, step S220 may include:
step S221: and generating a basic portrait label of the target user according to the user information.
Step S222: and generating a behavior portrait label of the target user according to the use information.
In this embodiment, the server may generate the basic portrait label of the target user based on the user information in the application information. Specifically, the server may analyze information such as basic attributes and social attributes from the user information and generate the basic portrait label; the server may analyze information such as behavior habits, hobbies, application use preferences and mobile phone use preferences from the usage information and generate the behavior portrait label.
Step S230: and acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of the user group to which the target user belongs.
In the embodiment of the present application, step S230 may refer to the contents of the foregoing embodiments, and is not described herein again.
Step S240: and acquiring at least part of behavior characteristics, which need to be added to the user portrait of the target user, in the group label as first behavior characteristics.
In the embodiment of the application, the server generates the user portrait of the target user according to the generated application tag and the acquired group tag. Since the group tag is formed from the behavior characteristics of all users in the user group, the labels that need to be added to the user portrait can be determined from the group tag.
In particular, the server may first determine at least some of the behavioral characteristics of the community tags that need to be added to the user representation of the target user.
As an embodiment, referring to fig. 4, step S240 may include:
step S241: and acquiring a plurality of dimensions corresponding to the behavior features existing in the group label as a first dimension, and acquiring a plurality of dimensions corresponding to the behavior features existing in the application label as a second dimension.
In this embodiment, features of multiple dimensions exist in both the group tag and the application tag. The basic information and usage habits of the user can be divided into multiple dimensions; for example, usage habits can be divided into dimensions such as usage time, functions used, operation types, operation contents, and geographic location.
The server may determine, according to the generated application tag and the obtained group tag, a plurality of dimensions corresponding to behavior features (characterizing behavior habits) existing in the group tag, and take the plurality of dimensions as a first dimension. And determining a plurality of dimensions corresponding to the behavior features existing in the application label, and taking the plurality of dimensions as a second dimension.
Step S242: and acquiring a dimension different from the second dimension in the first dimension as a third dimension.
In this embodiment, after obtaining the first dimension and the second dimension, the server may obtain a dimension of the first dimension different from the second dimension, that is, a dimension of a behavior feature that does not exist in the application tag, and a dimension of a behavior feature that exists in the population tag, and use the obtained dimension as a third dimension.
Step S243: taking the behavior feature of the third dimension in the population tag as the first behavior feature.
In this embodiment, the server may use the behavior features of the third dimension in the group tag as the first behavior feature, that is, at least part of the behavior features that need to be added to the user portrait of the target user. It will be appreciated that the behavior features of the third dimension, which are not present in the application tag, are present in the group tag, and thus behavior features of dimensions not present in the application tag can be added to the user portrait.
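The dimension-selection logic of steps S241 to S243 may be sketched as follows. This is an illustrative example only, in which tags are assumed to be represented as dictionaries keyed by dimension name; all dimension and feature names are hypothetical:

```python
# Sketch of steps S241-S243: the third dimension is the set of dimensions
# present in the group tag (first dimension) but absent from the application
# tag (second dimension); its features become the first behavior feature.
def select_first_behavior_features(group_tag: dict, application_tag: dict) -> dict:
    first_dimensions = set(group_tag)          # dimensions existing in the group tag
    second_dimensions = set(application_tag)   # dimensions existing in the application tag
    third_dimensions = first_dimensions - second_dimensions
    return {dim: group_tag[dim] for dim in third_dimensions}

group_tag = {"usage_time": "evening", "location": "commuter", "function": "video"}
application_tag = {"usage_time": "evening", "operation_type": "swipe"}
first_features = select_first_behavior_features(group_tag, application_tag)
# first_features keeps only the dimensions the application tag lacks
```

In this sketch, `first_features` contains the "location" and "function" dimensions, which the application tag does not cover.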
As another embodiment, step S240 may include: and acquiring the feature of a first set dimension in the group label as the first behavior feature.
In this embodiment, the set dimension may be a dimension of the behavior features required for constructing the user portrait; that is, when the user portrait includes a tag formed by behavior features of the set dimension, the dimensions of the tags in the user portrait are broad and can embody the behavior habits of the user. The specific set dimension is not limited, and the more set dimensions there are, the more accurate the subsequently generated user portrait will be.
Step S250: and generating a user portrait of the target user according to the application label and the label formed by the first behavior characteristic.
In the embodiment of the application, after determining the first behavior feature in the group tag that needs to be added to the user portrait, the server may generate the user portrait of the target user according to the application tag and the tag formed by the first behavior feature. The generated user portrait includes both the application tag and the tag formed by the first behavior feature. The constructed user portrait can be used for information pushing, and since the tags included in the user portrait cover many dimensions, the user portrait is applicable to pushing in various scenes, which ensures the accuracy of information pushing.
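As an illustrative sketch of step S250 (not a definitive implementation), the portrait can be formed as the union of the application tag and the tag formed by the first behavior feature, with tags again assumed to be dictionaries keyed by hypothetical dimension names:

```python
# Sketch of step S250: merge the application tag with the tag formed by the
# first behavior feature so the portrait covers more dimensions than either.
def generate_user_portrait(application_tag: dict, first_feature_tag: dict) -> dict:
    portrait = dict(first_feature_tag)
    portrait.update(application_tag)  # the user's own features take precedence
    return portrait

application_tag = {"usage_time": "evening", "operation_type": "swipe"}
first_feature_tag = {"location": "commuter"}
portrait = generate_user_portrait(application_tag, first_feature_tag)
```

Letting the application tag take precedence on overlapping dimensions is one design choice; the embodiment itself does not prescribe a conflict rule.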
According to the portrait generation method provided by this embodiment of the application, the application information of at least one application used by the target user is obtained, and the application tag of the target user is generated according to the obtained application information. The group tag corresponding to the target user is obtained according to the application tag, the group tag being used to characterize the behavior features of the user group to which the target user belongs. The first behavior feature that needs to be added to the user portrait is then determined from the group tag, and finally the user portrait of the target user is generated according to the application tag and the tag formed by the first behavior feature. In this way, a user portrait with more dimensions is constructed according to the application information of the applications used by the user and the group tag of the user group to which the user belongs, which improves the accuracy of the constructed user portrait.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating a portrait generation method according to another embodiment of the present application. The method is applied to the server. The flow shown in fig. 5 will be described in detail below, and the portrait generation method may specifically include the following steps:
Step S310: Application tags for a plurality of users are obtained.
In this embodiment, the server may group users in advance. First, application tags of multiple users may be obtained. The multiple users may be users whose user portraits have been constructed in the past, or users whose application information has been acquired in the past. The application tags of the multiple users may be application tags saved when the user portraits of those users were previously constructed, or application tags regenerated from previously acquired application information.
Step S320: and according to the application label of each user in the plurality of users, dividing the plurality of users into one or more user groups.
In this embodiment, the server may divide the plurality of users into one or more user groups according to the application tag of each of the plurality of users.
In some embodiments, step S320 may include: clustering or classifying according to a set rule according to the behavior characteristics in the application label of each user to obtain one or more categories; and taking all users corresponding to the behavior characteristics in each category as a user group to obtain one or more user groups.
As an implementation manner, the server may cluster different users according to the behavior characteristics in the application tag, so as to cluster users with the same behavior habit into the same class, thereby obtaining a user group. As another embodiment, the server may group different users according to the behavior characteristics in the application tag by using a preset rule of user grouping, so as to group users with the same behavior habit into the same user group.
Of course, the specific way of grouping the plurality of users may not be limited.
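Purely as an assumption of this description rather than the method itself, the clustering or rule-based grouping of step S320 could be sketched as a greedy clustering over the behavior features in each user's application tag, using Jaccard similarity as the set rule (a production system might instead use k-means or another clustering algorithm; all user identifiers and feature names are hypothetical):

```python
# Sketch of step S320: greedily assign each user to the first group whose
# representative member's features are at least `threshold`-similar,
# otherwise start a new group.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def group_users(user_tags: dict, threshold: float = 0.5) -> list:
    groups = []  # each group is a list of user ids
    for user, features in user_tags.items():
        for group in groups:
            representative = user_tags[group[0]]
            if jaccard(features, representative) >= threshold:
                group.append(user)
                break
        else:
            groups.append([user])
    return groups

user_tags = {
    "u1": {"evening", "video", "swipe"},
    "u2": {"evening", "video", "tap"},
    "u3": {"morning", "news"},
}
groups = group_users(user_tags)
```

Here "u1" and "u2" share the same behavior habits and land in one user group, while "u3" forms its own group.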
Step S330: and generating a group label of each user group according to the application labels of the plurality of users.
In this embodiment of the present application, after the server groups a plurality of users, the server may generate a group label corresponding to each user group for each user group.
In some embodiments, step S330 may include: acquiring application labels of all users in each user group according to the application labels of the users; and generating a group label of each user group according to all the application labels corresponding to each user group. In this embodiment, the server may construct the group tag of each user group according to the application tags of the users in each user group, that is, the constructed group tag of the user group may include the application tags of all the users in the user group.
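One possible sketch of step S330, assuming each member's application tag is a dictionary keyed by dimension name (the names are hypothetical), is to union the members' features per dimension so that the group tag includes the application tags of all users in the group:

```python
# Sketch of step S330: build a group tag from the application tags of all
# members of a user group by collecting each dimension's features into a set.
def build_group_tag(member_tags: list) -> dict:
    group_tag = {}
    for tag in member_tags:
        for dimension, feature in tag.items():
            group_tag.setdefault(dimension, set()).add(feature)
    return group_tag

member_tags = [
    {"usage_time": "evening", "function": "video"},
    {"usage_time": "evening", "location": "commuter"},
]
group_tag = build_group_tag(member_tags)
```

The resulting group tag covers every dimension present in any member's application tag.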
Step S340: and generating an application label of the target user according to the application information.
In the embodiment of the present application, step S340 may refer to the contents of the foregoing embodiments, and is not described herein again.
Step S350: and acquiring a user group to which the target user belongs according to the behavior characteristics in the application label.
In this embodiment of the application, the server may obtain, according to the behavior features in the application tag, the user group to which the target user belongs; that is, the behavior features of the obtained user group match the behavior features of the target user, and the behavior habits of the users in that user group also match the behavior habits of the target user.
In some embodiments, step S350 may include: acquiring the behavior characteristics in the application label and the similarity of the behavior characteristics of each user group; and acquiring the user group with the similarity meeting the specified numerical value condition as the user group to which the target user belongs.
In this embodiment, the similarity between the behavior feature of the application tag of the target user and the behavior feature of each user group may be obtained, and then, according to the obtained multiple similarities, a user group with a similarity satisfying a specified numerical condition is determined among the multiple user groups as the user group to which the target user belongs. The specified numerical condition may be that the similarity is the highest, or that the similarity is greater than a similarity threshold, and the like, and is not limited herein.
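As an illustrative sketch of step S350 under the "highest similarity" condition, with Jaccard similarity chosen here as one possible similarity measure and all group names and features hypothetical:

```python
# Sketch of step S350: compute the similarity between the target user's
# behavior features and each user group, then pick the most similar group.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def match_user_group(user_features: set, group_features: dict) -> str:
    return max(group_features,
               key=lambda g: jaccard(user_features, group_features[g]))

group_features = {
    "night_owls": {"evening", "video", "game"},
    "commuters": {"morning", "news", "subway"},
}
best_group = match_user_group({"evening", "video"}, group_features)
```

The threshold variant of the specified numerical condition would instead return every group whose similarity exceeds a similarity threshold.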
Step S360: and acquiring the group label of the user group as the group label of the target user.
In the embodiment of the application, after determining the user group to which the target user belongs, the server may determine the group tag of the user group to which the target user belongs according to the group tag of the user group obtained in advance.
Step S370: and generating the user portrait of the target user according to the application label and the group label.
In the embodiment of the present application, for step S370, reference may be made to the contents of the foregoing embodiments, and details are not described herein again. The constructed user portrait can be used for information pushing, and since the tags included in the user portrait cover many dimensions, the user portrait is applicable to pushing in various scenes, which ensures the accuracy of information pushing.
According to the portrait generation method provided by this embodiment of the application, the application tags of multiple users are obtained in advance, the multiple users are divided into one or more user groups according to the application tag of each user, and the group tag of each user group is generated according to the application tags of the multiple users, so that the users are grouped and the group tags are generated in advance. When the user portrait of the target user is actually generated, the application information of at least one application used by the target user is obtained, the application tag of the target user is generated according to the obtained application information, the user group to which the target user belongs is obtained according to the behavior features in the application tag, the group tag corresponding to that user group is then obtained, and the user portrait is generated according to the application tag of the target user and the group tag of the user group to which the target user belongs. In this way, a user portrait with more dimensions is constructed according to the application information of the applications used by the user and the group tag of the user group to which the user belongs, which improves the accuracy of the constructed user portrait.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a portrait generation method according to still another embodiment of the present application. The method is applied to the server. The flow shown in fig. 6 will be described in detail below, and the portrait generation method may specifically include the following steps:
Step S410: Application information of at least one application used by a target user is acquired.
Step S420: and generating an application label of the target user according to the application information.
Step S430: and acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of the user group to which the target user belongs.
Step S440: and generating the user portrait of the target user according to the application label and the group label.
In the embodiment of the present application, steps S410 to S440 may refer to the contents of the foregoing embodiments, and are not described herein again.
In some embodiments, the server may further calculate a degree of association between the application tag of the target user and the group tag of the user group to which the target user belongs, and correct the application tag of the target user according to the degree of association, so as to improve the accuracy of the finally generated user portrait. The constructed user portrait can be used for information pushing, and since the tags included in the user portrait cover many dimensions, the user portrait is applicable to pushing in various scenes, which ensures the accuracy of information pushing.
Step S450: and updating the group label according to the application label, wherein the updated group label comprises at least part of the behavior characteristics in the application label.
In this embodiment of the application, after the server generates the application tag of the target user, the group tag may be updated according to the application tag of the target user, and the updated group tag may include at least part of the behavior features in the application tag of the target user.
In some embodiments, step S450 may include: acquiring, as a second behavior feature, the features of the dimensions in the application tag that do not exist in the group tag; and adding the second behavior feature to the group tag.
In this embodiment, the server may obtain the features of the dimensions in the application tag that do not exist in the group tag; that is, the server may obtain the dimensions in which the behavior features of the application tag differ from those of the group tag, and use the behavior features corresponding to those dimensions in the application tag of the target user as the second behavior feature. The server may then add the acquired second behavior feature to the group tag of the user group to which the target user belongs, so that the dimensions of the behavior features in the group tag are increased and the user portraits subsequently constructed for other users are more accurate.
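A minimal sketch of this first variant of step S450, again assuming dictionary-shaped tags with hypothetical dimension names:

```python
# Sketch of step S450 (first variant): the dimensions present in the target
# user's application tag but absent from the group tag form the second
# behavior feature, which is added to the group tag.
def update_group_tag(group_tag: dict, application_tag: dict) -> dict:
    missing = set(application_tag) - set(group_tag)  # dimensions the group tag lacks
    updated = dict(group_tag)
    for dim in missing:
        updated[dim] = application_tag[dim]          # second behavior feature
    return updated

group_tag = {"usage_time": "evening"}
application_tag = {"usage_time": "night", "location": "commuter"}
updated = update_group_tag(group_tag, application_tag)
# only the new "location" dimension is added; existing dimensions are kept
```

Note that dimensions the group tag already has are left untouched; only genuinely new dimensions enlarge the group tag.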
In other embodiments, step S450 may include: acquiring the features of the specified dimension in the application tag as a third behavior feature; and adding the third behavior feature to the group tag.
In this embodiment, the specified dimensions may be the dimensions of the behavior features required to construct the user portrait. The server may take the behavior features of all the specified dimensions in the application tag of the target user as the third behavior feature, and add the third behavior feature to the group tag of the user group to which the target user belongs, so that the group tag contains more behavior features and the user portraits subsequently constructed for other users are more accurate.
In the embodiment of the application, the server can also update the user portrait of each user in the user group to which the target user belongs according to the updated group tag, and store the updated user portrait so as to enable the user portraits of other users to be more accurate.
According to the portrait generation method provided by this embodiment of the application, the application information of at least one application used by the target user is obtained, and the application tag of the target user is generated according to the obtained application information. The group tag corresponding to the target user is obtained according to the application tag, the group tag being used to characterize the behavior features of the user group to which the target user belongs, and finally the user portrait of the target user is generated according to the application tag and the group tag. In this way, a user portrait with more dimensions is constructed according to the application information of the applications used by the user and the group tag of the user group to which the user belongs, which improves the accuracy of the constructed user portrait. In addition, the server updates the group tag of the user group to which the target user belongs according to the application tag of the target user, so that the dimensions of the behavior features in the group tag are increased and the user portraits subsequently constructed for other users are more accurate.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a portrait generation method according to yet another embodiment of the present application. The method is applied to the server. The flow shown in fig. 7 will be described in detail below, and the portrait generation method may specifically include the following steps:
Step S510: Application information of at least one application used by a target user is acquired.
Step S520: and generating an application label of the target user according to the application information.
Step S530: and acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of the user group to which the target user belongs.
Step S540: and generating the user portrait of the target user according to the application label and the group label.
In the embodiment of the present application, steps S510 to S540 may refer to the contents of the foregoing embodiments, and are not described herein again.
Step S550: and determining push content of other applications except the at least one application according to the user portrait.
In the embodiment of the application, after the server generates the user portrait of the target user, the generated user portrait can be used for pushing content in various applications to the target user. In addition to the applications corresponding to the application information acquired by the server, other applications can also be pushed to according to the user portrait. It will be appreciated that the user portrait can be used for pushing in other applications because it includes tags with behavior features of more dimensions. In particular, the server may determine push content for applications other than the at least one application based on the user portrait. For example, if the at least one application is application A, application B, and application C, the push content of application D may be determined from the user portrait.
Step S560: and pushing the push content to the target user.
In the embodiment of the application, after determining the push content of the applications other than the at least one application, the server may push the push content to the target user. When pushing content for another application, the server may send the push content to the server corresponding to that application, so that that server performs the pushing.
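As one hedged illustration of steps S550 and S560 (the content catalog, its tags, and the matching rule are all hypothetical assumptions, not part of the described method), push content for another application could be selected by matching candidate content tags against the portrait:

```python
# Sketch of steps S550-S560: pick the candidate content item whose tags
# overlap most with the feature values in the user portrait.
def select_push_content(portrait: dict, catalog: dict) -> str:
    portrait_values = set(portrait.values())
    return max(catalog, key=lambda item: len(catalog[item] & portrait_values))

portrait = {"usage_time": "evening", "function": "video"}
catalog = {
    "short_video_digest": {"video", "evening"},
    "morning_news": {"news", "morning"},
}
content = select_push_content(portrait, catalog)
```

The selected content would then be sent to the server of the other application for delivery to the target user.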
Step S570: and determining an application server corresponding to the at least one application.
Step S580: and pushing the user portrait of the target user to the application server.
In the embodiment of the application, after the server generates the user portrait of the target user, the user portrait can be fed back to the application server corresponding to the at least one application, so that the application server can push content more accurately according to the user portrait. It can be understood that an application server could previously acquire only the application information of its own application to construct a user portrait for pushing; a user portrait constructed in this way has few behavior feature dimensions and cannot accurately reflect the behavior habits of the user.
According to the portrait generation method provided by this embodiment of the application, the application information of at least one application used by the target user is obtained, and the application tag of the target user is generated according to the obtained application information. The group tag corresponding to the target user is obtained according to the application tag, the group tag being used to characterize the behavior features of the user group to which the target user belongs, and finally the user portrait of the target user is generated according to the application tag and the group tag. In this way, a user portrait with more dimensions is constructed according to the application information of the applications used by the user and the group tag of the user group to which the user belongs, which improves the accuracy of the constructed user portrait. The server also pushes content for applications other than the at least one application based on the generated user portrait of the target user, and feeds the user portrait back to the application server corresponding to the at least one application, so that the application server can push content more accurately.
Referring to FIG. 8, a block diagram of a portrait generation apparatus 400 according to an embodiment of the present disclosure is shown. The portrait generation apparatus 400 is applied to the server, and the portrait generation apparatus 400 includes: an information acquisition module 410, a tag generation module 420, a tag acquisition module 430, and a content generation module 440. The information acquisition module 410 is configured to acquire application information of at least one application used by a target user; the tag generation module 420 is configured to generate an application tag of the target user according to the application information; the tag acquisition module 430 is configured to acquire a group tag corresponding to the target user according to the application tag, where the group tag is used to characterize the behavior features of the user group to which the target user belongs; and the content generation module 440 is configured to generate a user portrait of the target user according to the application tag and the group tag.
In some embodiments, referring to fig. 9, the content generation module 440 includes: a feature obtaining unit 441, configured to obtain at least part of behavior features, which need to be added to the user portrait of the target user, in the group tag as a first behavior feature; and a representation construction unit 442, configured to generate a user representation of the target user according to the application tag and the tag formed by the first behavior feature.
As an embodiment, referring to fig. 10, the feature obtaining unit 441 includes: a first dimension obtaining subunit 4411, configured to obtain multiple dimensions corresponding to the behavior features existing in the group tag as a first dimension, and obtain multiple dimensions corresponding to the behavior features existing in the application tag as a second dimension; a second dimension obtaining subunit 4412, configured to obtain, as a third dimension, a dimension in the first dimension that differs from the second dimension; and a behavior feature obtaining subunit 4413, configured to use the behavior features of the third dimension in the group tag as the first behavior feature.
As another embodiment, the feature obtaining unit 441 may be specifically configured to: and acquiring the feature of a first set dimension in the group label as the first behavior feature.
In some embodiments, the application information includes user information for the at least one application and usage information for the at least one application, the application tags including a base representation tag and a behavior representation tag.
Under this embodiment, the tag generation module 420 includes: a first label generating unit, configured to generate a base portrait label of the target user according to the user information; and the second label generating unit is used for generating the behavior portrait label of the target user according to the use information.
In some embodiments, the tag acquisition module 430 comprises: the group acquisition unit is used for acquiring a user group to which the target user belongs according to the behavior characteristics in the application label; and the label determining unit is used for acquiring the group label of the user group as the group label of the target user.
In this embodiment, the population acquisition unit includes: the similarity obtaining subunit is used for obtaining the similarity between the behavior characteristics in the application label and the behavior characteristics of each user group; and a group determining subunit, configured to obtain a user group with the similarity satisfying a specified numerical condition, as the user group to which the target user belongs.
In some embodiments, the representation generation apparatus 400 may further include: the system comprises an application label acquisition module, a user grouping module and a group label generation module. The application label acquisition module is used for acquiring application labels of a plurality of users; the user grouping module is used for grouping the plurality of users into one or more user groups according to the application label of each user in the plurality of users; and the group label generating module is used for generating the group label of each user group according to the application labels of the plurality of users.
In this embodiment, the user grouping module includes: the classification unit is used for clustering or classifying according to a set rule according to the behavior characteristics in the application label of each user to obtain one or more classes; and the group obtaining unit is used for taking all the users corresponding to the behavior characteristics in each category as a user group to obtain one or more user groups.
Under this embodiment, the population tag generation module includes: the label classification unit is used for acquiring the application labels of all users in each user group according to the application labels of the users; and the generation execution unit is used for generating the group label of each user group according to all the application labels corresponding to each user group.
In some embodiments, the representation generation apparatus 400 may further include: and a label updating module. The label updating module is used for updating the group label according to the application label, wherein the updated group label comprises at least part of the behavior characteristics in the application label.
As an embodiment, the tag update module includes: a first feature obtaining unit, configured to obtain, as a second behavior feature, the features of the dimensions in the application tag that do not exist in the group tag; and a first feature adding unit, configured to add the second behavior feature to the group tag.
As another embodiment, the tag update module includes: a second feature obtaining unit, configured to obtain the features of the specified dimension in the application tag as a third behavior feature; and a second feature adding unit, configured to add the third behavior feature to the group tag.
In some embodiments, the representation generation apparatus 400 may further include: the device comprises a content determining module and a content pushing module. The content determining module is used for determining push content of other applications except the at least one application according to the user portrait; and the content pushing module is used for pushing the pushed content to the target user.
In some embodiments, the representation generation apparatus 400 may further include: the server determination module and the portrait pushing module. The server determining module is used for determining an application server corresponding to the at least one application; the portrait pushing module is used for pushing the user portrait of the target user to the application server.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
In summary, the application information of at least one application used by a target user is obtained, an application tag of the target user is generated according to the obtained application information, and a group tag corresponding to the target user is obtained according to the application tag, the group tag being used to characterize the behavior features of the user group to which the target user belongs. Finally, the user portrait of the target user is generated according to the application tag and the group tag. In this way, a user portrait with a large number of dimensions is constructed according to the application information of the applications used by the user and the group tag of the user group to which the user belongs, which improves the accuracy of the constructed user portrait.
Referring to fig. 11, a block diagram of a server according to an embodiment of the present disclosure is shown. The server 100 may be a conventional server, a cloud server, or the like capable of running an application. The server 100 in the present application may include one or more of the following components: a processor 110, a memory 120, a touch screen 130, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform the methods as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall server 100 using various interfaces and lines, performs various functions of the server 100 and processes data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 120, and calling data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110, but may be implemented by a communication chip.
The Memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). The memory 120 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 120 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like. The storage data area may also store data created by the server 100 in use (such as phone books, audio and video data, chat log data), and the like.
Referring to fig. 12, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 800 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application.

Claims (20)

  1. A method of portrait generation, the method comprising:
    acquiring application information of at least one application used by a target user;
    generating an application label of the target user according to the application information;
    acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of a user group to which the target user belongs;
    and generating the user portrait of the target user according to the application label and the group label.
  2. The method of claim 1, wherein the generating the user portrait of the target user according to the application label and the group label comprises:
    acquiring, from the group label, at least part of the behavior characteristics that need to be added to the user portrait of the target user as a first behavior characteristic;
    and generating the user portrait of the target user according to the application label and the label formed by the first behavior characteristic.
  3. The method of claim 2, wherein the acquiring, from the group label, at least part of the behavior characteristics that need to be added to the user portrait of the target user as the first behavior characteristic comprises:
    obtaining a plurality of dimensions corresponding to the behavior features existing in the group label as a first dimension, and obtaining a plurality of dimensions corresponding to the behavior features existing in the application label as a second dimension;
    acquiring a dimension different from the second dimension in the first dimension as a third dimension;
    taking the behavior feature of the third dimension in the population tag as the first behavior feature.
  4. The method of claim 2, wherein the acquiring, from the group label, at least part of the behavior characteristics that need to be added to the user portrait of the target user as the first behavior characteristic comprises:
    and acquiring the feature of a first set dimension in the group label as the first behavior feature.
  5. The method of any of claims 1-4, wherein the application information includes user information of the at least one application and usage information of the at least one application, and the application label includes a basic portrait label and a behavior portrait label;
    the generating an application tag of the target user according to the application information includes:
    generating a basic portrait label of the target user according to the user information;
    and generating a behavior portrait label of the target user according to the use information.
  6. The method according to any one of claims 1-5, wherein the acquiring the group label corresponding to the target user according to the application label comprises:
    acquiring a user group to which the target user belongs according to the behavior characteristics in the application label;
    and acquiring the group label of the user group as the group label of the target user.
  7. The method according to claim 6, wherein the acquiring the user group to which the target user belongs according to the behavior characteristics in the application label comprises:
    acquiring a similarity between the behavior characteristics in the application label and the behavior characteristics of each user group;
    and acquiring a user group whose similarity meets a specified numerical condition as the user group to which the target user belongs.
  8. The method according to any one of claims 1-7, wherein before the acquiring the group label corresponding to the target user according to the application label, the method further comprises:
    acquiring application labels of a plurality of users;
    dividing the plurality of users into one or more user groups according to the application label of each user in the plurality of users;
    and generating a group label of each user group according to the application labels of the plurality of users.
  9. The method of claim 8, wherein the grouping the plurality of users into one or more user groups according to the application label of each of the plurality of users comprises:
    performing clustering, or classification according to a set rule, on the behavior characteristics in the application label of each user to obtain one or more categories;
    and taking all users corresponding to the behavior characteristics in each category as a user group to obtain one or more user groups.
  10. The method of claim 8, wherein generating a group label for each user group based on the application labels of the plurality of users comprises:
    acquiring application labels of all users in each user group according to the application labels of the users;
    and generating a group label of each user group according to all the application labels corresponding to each user group.
  11. The method according to any one of claims 1-10, wherein after the acquiring the group label corresponding to the target user according to the application label, the method further comprises:
    and updating the group label according to the application label, wherein the updated group label comprises at least part of the behavior characteristics in the application label.
  12. The method of claim 11, wherein the updating the group label according to the application label comprises:
    acquiring, as a second behavior feature, the feature of a dimension that exists in the application label but does not exist in the group label;
    and adding the second behavior feature to the group label.
  13. The method of claim 11, wherein the updating the group label according to the application label comprises:
    acquiring the feature of a specified dimension in the application label as a third behavior feature;
    and adding the third behavior feature to the group label.
  14. The method of any of claims 1-13, wherein after the generating the user portrait of the target user according to the application label and the group label, the method further comprises:
    determining push content of other applications except the at least one application according to the user portrait;
    and pushing the pushed content to the target user.
  15. The method of any of claims 1-14, wherein after the generating the user portrait of the target user according to the application label and the group label, the method further comprises:
    determining an application server corresponding to the at least one application;
    and pushing the user portrait of the target user to the application server.
  16. A portrait generation apparatus, comprising: an information acquisition module, a label generation module, a label acquisition module, and a content generation module, wherein,
    the information acquisition module is used for acquiring application information of at least one application used by a target user;
    the label generation module is used for generating an application label of the target user according to the application information;
    the label acquisition module is used for acquiring a group label corresponding to the target user according to the application label, wherein the group label is used for representing the behavior characteristics of a user group to which the target user belongs;
    and the content generation module is used for generating the user portrait of the target user according to the application label and the group label.
  17. The apparatus of claim 16, wherein the content generation module comprises:
    the feature acquisition unit is used for acquiring, from the group label, at least part of the behavior characteristics that need to be added to the user portrait of the target user as a first behavior characteristic; and
    the portrait construction unit is used for generating the user portrait of the target user according to the application label and the label formed by the first behavior characteristic.
  18. The apparatus of claim 17, wherein the feature acquisition unit comprises:
    a first dimension acquisition subunit, configured to acquire, as a first dimension, a plurality of dimensions corresponding to behavior features existing in the group label, and acquire, as a second dimension, a plurality of dimensions corresponding to behavior features existing in the application label;
    a second dimension acquiring subunit, configured to acquire, as a third dimension, a dimension of the first dimension that is different from the second dimension; and
    a behavior feature obtaining subunit, configured to use the behavior feature of the third dimension in the group label as the first behavior feature.
  19. A server, comprising:
    one or more processors;
    a memory;
    one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the method of any of claims 1-15.
  20. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 15.
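For orientation only, the core flow of independent claim 1, the similarity-based group assignment of claim 7, and the dimension merge of claim 3 can be sketched as follows. This is an illustrative sketch, not the claimed implementation; all function names, data shapes, and the 0.5 similarity threshold are hypothetical choices, and labels are modelled simply as dicts mapping a dimension name to a feature value.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length behavior-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def assign_group(user_vec, group_vecs, threshold=0.5):
    """Claim 7 (sketch): pick the user group whose behavior features are most
    similar to the target user's application-label features, provided the
    similarity meets the specified numerical condition (here: >= threshold)."""
    best, best_sim = None, threshold
    for group, vec in group_vecs.items():
        sim = cosine(user_vec, vec)
        if sim >= best_sim:
            best, best_sim = group, sim
    return best

def build_portrait(app_label, group_label):
    """Claims 1-3 (sketch): the user portrait is the application label plus the
    group-label behavior features of dimensions absent from the application
    label (the 'third dimension' / first behavior characteristic)."""
    portrait = dict(app_label)
    for dim in set(group_label) - set(app_label):
        portrait[dim] = group_label[dim]
    return portrait

# Hypothetical data illustrating the flow.
user_vec = [0.9, 0.1, 0.4]
group_vecs = {"gamers": [0.8, 0.2, 0.5], "readers": [0.1, 0.9, 0.3]}
group = assign_group(user_vec, group_vecs)  # -> "gamers"

app_label = {"age_band": "18-24", "night_usage": "high"}
group_labels = {"gamers": {"night_usage": "high", "spend_level": "medium"}}
portrait = build_portrait(app_label, group_labels[group])
# "spend_level" is supplemented from the group label into the portrait
```

The merge in `build_portrait` mirrors claim 3 directly: the group label's dimensions form the first dimension, the application label's dimensions the second, and only their set difference is copied over, so the individual user's own tags are never overwritten by group-level inferences.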
CN202080084186.7A 2020-01-16 2020-01-16 Image generation method, image generation device, server and storage medium Pending CN114902212A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/072502 WO2021142719A1 (en) 2020-01-16 2020-01-16 Portrait generation method and apparatus, server and storage medium

Publications (1)

Publication Number Publication Date
CN114902212A (en) 2022-08-12

Family

ID=76863344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080084186.7A Pending CN114902212A (en) 2020-01-16 2020-01-16 Image generation method, image generation device, server and storage medium

Country Status (2)

Country Link
CN (1) CN114902212A (en)
WO (1) WO2021142719A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806638B (en) * 2021-09-29 2023-12-08 中国平安人寿保险股份有限公司 Personalized recommendation method based on user portrait and related equipment
CN113935429B (en) * 2021-10-27 2024-07-26 北京搜房科技发展有限公司 User portrait construction method and device, storage medium and electronic equipment
CN114461674A (en) * 2022-01-21 2022-05-10 浪潮卓数大数据产业发展有限公司 Implementation method and system for optimizing user portrait
CN115033565A (en) * 2022-04-20 2022-09-09 厦门市美亚柏科信息股份有限公司 User portrait method and terminal
CN114880535B (en) * 2022-06-09 2023-04-21 武汉十月科技有限责任公司 User portrait generation method based on communication big data
CN118071388B (en) * 2024-04-25 2024-07-19 深圳市奇迅新游科技股份有限公司 Control method, equipment and storage medium of Internet product operation system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105574730A (en) * 2014-10-10 2016-05-11 中兴通讯股份有限公司 Internet of Things big data platform-based intelligent user portrait method and device
CN105893407A (en) * 2015-11-12 2016-08-24 乐视云计算有限公司 Individual user portraying method and system
CN109118288A (en) * 2018-08-22 2019-01-01 中国平安人寿保险股份有限公司 Target user's acquisition methods and device based on big data analysis
CN109977308A (en) * 2019-03-20 2019-07-05 北京字节跳动网络技术有限公司 Construction method, device, storage medium and the electronic equipment of user group's portrait
CN110431585A (en) * 2018-01-22 2019-11-08 华为技术有限公司 A kind of generation method and device of user's portrait

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20160063536A1 (en) * 2014-08-27 2016-03-03 InMobi Pte Ltd. Method and system for constructing user profiles
CN110782289B (en) * 2019-10-28 2020-11-10 四川旅投数字信息产业发展有限责任公司 Service recommendation method and system based on user portrait

Also Published As

Publication number Publication date
WO2021142719A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
CN114902212A (en) Image generation method, image generation device, server and storage medium
CN112868004B (en) Resource recommendation method and device, electronic equipment and storage medium
CN110110201B (en) Content recommendation method and system
CN108416616A (en) The sort method and device of complaints and denunciation classification
CN108885624A (en) Information recommendation system and method
CN108563680A (en) Resource recommendation method and device
CN110674144A (en) User portrait generation method and device, computer equipment and storage medium
CN112464034A (en) User data extraction method and device, electronic equipment and computer readable medium
CN110727857A (en) Method and device for identifying key features of potential users aiming at business objects
WO2021081914A1 (en) Pushing object determination method and apparatus, terminal device and storage medium
CN112070542B (en) Information conversion rate prediction method, device, equipment and readable storage medium
CN111783873A (en) Incremental naive Bayes model-based user portrait method and device
CN112507218A (en) Business object recommendation method and device, electronic equipment and storage medium
CN113434755A (en) Page generation method and device, electronic equipment and storage medium
CN115659008A (en) Information pushing system and method for big data information feedback, electronic device and medium
CN112989213A (en) Content recommendation method, device and system, electronic equipment and storage medium
CN114491093B (en) Multimedia resource recommendation and object representation network generation method and device
CN112507214B (en) User name-based data processing method, device, equipment and medium
US8234893B2 (en) Cold-start in situation-aware systems
CN113792211A (en) Resource pushing processing method and device, electronic equipment and storage medium
CN112905892A (en) Big data processing method and big data server applied to user portrait mining
CN114519114B (en) Method, device, server and storage medium for constructing multimedia resource classification model
CN113052677B (en) Construction method and device of two-stage loan prediction model based on machine learning
CN114417944B (en) Recognition model training method and device, and user abnormal behavior recognition method and device
WO2021168830A1 (en) Content pushing method and apparatus, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination