WO2021142719A1 - Portrait generation method, apparatus, server, and storage medium - Google Patents

Portrait generation method, apparatus, server, and storage medium

Info

Publication number
WO2021142719A1
WO2021142719A1 (PCT/CN2020/072502)
Authority
WO
WIPO (PCT)
Prior art keywords
application
user
group
tag
label
Prior art date
Application number
PCT/CN2020/072502
Other languages
English (en)
French (fr)
Inventor
王逸峰
安琪
Original Assignee
深圳市欢太科技有限公司
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市欢太科技有限公司, Oppo广东移动通信有限公司
Priority to PCT/CN2020/072502
Priority to CN202080084186.7A (published as CN114902212A)
Publication of WO2021142719A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising

Definitions

  • This application relates to the field of data analysis technology, and more specifically, to a portrait generation method, device, server, and storage medium.
  • Information recommendation technology mainly recommends related information based on user portraits.
  • the accuracy of user portraits affects the accuracy of information recommendation. Therefore, the construction of user portraits becomes the key to information recommendation technology.
  • In view of the above problems, this application proposes a portrait generation method, device, server, and storage medium.
  • In a first aspect, an embodiment of the present application provides a portrait generation method. The method includes: obtaining application information of at least one application used by a target user; generating an application tag of the target user according to the application information; obtaining, according to the application tag, a group tag corresponding to the target user, the group tag being used to characterize the behavior characteristics of the user group to which the target user belongs; and generating a user portrait of the target user according to the application tag and the group tag.
  • In a second aspect, an embodiment of the present application provides a portrait generation device. The device includes: an information acquisition module, a label generation module, a label acquisition module, and a content generation module, where the information acquisition module is used to acquire application information of at least one application used by a target user; the label generation module is used to generate an application tag of the target user according to the application information; the label acquisition module is used to obtain, according to the application tag, a group tag corresponding to the target user, the group tag being used to characterize the behavior characteristics of the user group to which the target user belongs; and the content generation module is used to generate a user portrait of the target user according to the application tag and the group tag.
  • In a third aspect, an embodiment of the present application provides a server, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the portrait generation method provided in the first aspect described above.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium stores program code, and the program code can be invoked by a processor to execute the portrait generation method provided in the first aspect.
  • In the solution provided by this application, application information of at least one application used by the target user is obtained, the application tag of the target user is generated from that information, and the group tag corresponding to the target user is obtained according to the application tag, where the group tag is used to characterize the behavior characteristics of the user group to which the target user belongs; finally, a user portrait of the target user is generated according to the application tag and the group tag. A user portrait with more dimensions is thus constructed from the application information of the applications used by the user and the group tag of the user group to which the user belongs, improving the accuracy of the constructed user portrait.
  • Fig. 1 shows a flowchart of a portrait generation method according to an embodiment of the present application.
  • Fig. 2 shows a flowchart of a portrait generation method according to another embodiment of the present application.
  • FIG. 3 shows a flowchart of step S220 in the portrait generation method provided by another embodiment of the present application.
  • FIG. 4 shows a flowchart of step S240 in the portrait generation method provided by another embodiment of the present application.
  • Fig. 5 shows a flowchart of a portrait generation method according to another embodiment of the present application.
  • Fig. 6 shows a flowchart of a portrait generation method according to still another embodiment of the present application.
  • Fig. 7 shows a flowchart of a portrait generation method according to yet another embodiment of the present application.
  • Fig. 8 shows a block diagram of a portrait generating device according to an embodiment of the present application.
  • Fig. 9 shows a block diagram of a content generation module in a portrait generation device according to an embodiment of the present application.
  • Fig. 10 shows a block diagram of a feature acquisition unit in a content generation module according to an embodiment of the present application.
  • FIG. 11 is a block diagram of a server, according to an embodiment of the present application, for executing the portrait generation method of the embodiments of the present application.
  • Fig. 12 shows a storage unit, according to an embodiment of the present application, for storing or carrying program code that implements the portrait generation method of the embodiments of the present application.
  • As an effective tool for delineating target users and connecting user demands with design directions, user portraits have been widely used in various fields.
  • User portraits were originally applied in the field of e-commerce. In the era of big data, user information floods the Internet; each specific piece of user information is abstracted into a tag, and these tags are used to concretize the user's image so as to provide users with targeted services.
  • User portraits have become a popular field. Most user portrait systems derive APP-based user tags from information such as the user's operations on applications (APP, Application) and payment behavior, after a series of feature and algorithm transformations.
  • When performing information recommendation and marketing, user groups with tags similar to those of the target user are selected, and corresponding information recommendation and marketing are then conducted for these users.
  • Through long-term research, the inventor found that a user portrait generated solely from the information produced on the user's own APPs cannot be extended into a portrait description with more dimensions, so such a portrait system cannot achieve the expected effect when recommending information.
  • In view of the above problems, the inventors proposed the portrait generation method, device, server, and storage medium provided by the embodiments of the present application, which can construct a user portrait with more dimensions according to the application information of the applications used by the user and the group tag of the user group to which the user belongs, thereby improving the accuracy of the constructed user portrait.
  • The specific portrait generation method will be described in detail in the subsequent embodiments.
  • FIG. 2 shows a schematic flowchart of a portrait generation method provided by an embodiment of the present application.
  • the portrait generation method is used to construct a user portrait with more dimensions, so as to improve the accuracy of the constructed user portrait.
  • the portrait generation method is applied to the portrait generation device 400 shown in FIG. 8 and the server 100 (FIG. 11) equipped with the portrait generation device 400.
  • the following will take a server as an example to describe the specific process of this embodiment.
  • the server applied in this embodiment may be a traditional server, a cloud server, etc., which is not limited here.
  • the process shown in FIG. 2 will be described in detail below, and the portrait generation method may specifically include the following steps:
  • Step S110 Obtain application information of at least one application used by the target user.
  • In the embodiment of the present application, when the server constructs the user portrait of the target user, it can obtain the application information generated by the target user's use of applications, so as to analyze the target user's habits and personal characteristics from that application information and thereby construct the user portrait.
  • The application information may include user information recorded when the user logs in to or registers the application, and may also include usage information about the application. Of course, the specific application information is not limited here; for example, it may also include attribute information of the application itself, and so on.
  • The application information of the applications used by the target user may be application information of at least one application, that is, application information of a single application or application information of multiple applications.
  • Step S120 According to the application information, an application tag of the target user is generated.
  • In the embodiment of the present application, when the server generates the user portrait of the target user from the obtained application information, it may first generate the application tag of the target user according to that application information.
  • An application tag is a tag generated from the application information corresponding to the target user. Application tags can be divided into multiple types, such as basic portrait tags corresponding to user information and behavior tags corresponding to usage information; the specific tag types contained in an application tag are not limited here.
  • In some implementations, the application tag may include multiple tags, and each tag has a specific feature value. For example, the application tag may include tags in dimensions such as application type, operation preferences, consumption power, and operation time, with each tag having corresponding content. It should be noted that each type of information can be quantified or used directly as a feature value to form the content corresponding to each tag. Of course, the specific application tag is not limited here, as long as it characterizes the user's personal characteristics and behavior habits.
  • In some implementations, the server may generate the application tag of the target user from the corresponding application information by using a pre-trained tag generation model. Specifically, the server can obtain a large amount of application information and annotate the application tag that each piece of application information should produce, thereby obtaining a large number of training samples. A machine learning method is then used to train on these samples and obtain a trained tag generation model.
  • After training the tag generation model, the server can also test it with test samples (which may likewise be previously obtained application information together with the corresponding application tags) to obtain the tag-generation accuracy of the model, so that the user can adjust the model's parameters according to this accuracy and obtain a model with higher accuracy. For example, when the tested accuracy is lower than a certain value, prompt information can be generated to prompt the user to adjust the model parameters.
  • Of course, the server may also train with multiple machine learning methods to obtain multiple tag generation models, test each of them to obtain its accuracy, and finally take the model whose accuracy meets a preset accuracy condition as the final tag generation model. The preset accuracy condition may be the highest accuracy, or an accuracy greater than a preset threshold, etc., which is not limited here.
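  • As a concrete illustration of this training-and-selection flow, the following is a minimal sketch assuming the application information has already been converted into numeric feature vectors with annotated tags; the scikit-learn estimators and the accuracy threshold are illustrative assumptions, not part of the original disclosure.

```python
# Hypothetical sketch: train several candidate tag-generation models on labeled
# application-information samples and keep the one with the best accuracy.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def select_tag_model(features, tags, min_accuracy=0.8):
    """features: numeric vectors derived from application information;
    tags: annotated application-tag values (e.g. a 'consumption power' class)."""
    x_train, x_test, y_train, y_test = train_test_split(
        features, tags, test_size=0.2, random_state=0)
    candidates = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=100, random_state=0),
    ]
    best_model, best_acc = None, 0.0
    for model in candidates:
        model.fit(x_train, y_train)
        acc = accuracy_score(y_test, model.predict(x_test))
        if acc > best_acc:
            best_model, best_acc = model, acc
    if best_acc < min_accuracy:
        # Mirrors the "prompt the user to adjust model parameters" step.
        print(f"accuracy {best_acc:.2f} below threshold; adjust model parameters")
    return best_model, best_acc
```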
  • Step S130 Obtain a group tag corresponding to the target user according to the application tag, where the group tag is used to characterize the behavior characteristics of the user group to which the target user belongs.
  • In the embodiment of the present application, after generating the application tag from the application information corresponding to the target user, the server may further obtain the group tag corresponding to the target user according to the application tag.
  • The group tag is used to characterize the behavior characteristics of the user group to which the target user belongs; that is, the target user can be classified into a user group, the user group has a group tag, and the group tag can characterize the behavior characteristics of that user group.
  • In some implementations, the different user groups may be obtained in advance by clustering different users based on previously obtained application information. Clustering can be performed according to users' behavior characteristics, that is, according to the information reflecting behavior characteristics in the application information of different users, so as to obtain multiple clustering results; each clustering result can include one or more corresponding users, and each clustering result can be regarded as a user group.
  • Of course, the group tag of each user group can also be generated in advance according to the information reflecting user behavior characteristics in the application information of the users in that group.
  • the server may match the characteristics reflecting the user behavior in the application tags with the behavior characteristics in each group tag according to the application tags of the target user, and determine the group tag of the user group to which the target user belongs according to the matching result.
  • In other implementations, the server can also directly cluster by users' behavior characteristics based on the previously obtained application information of other users together with the target user's application information obtained this time, so as to obtain multiple user groups; this also yields the user group to which the target user belongs, and a group tag corresponding to that user group is generated.
  • Of course, the group tag can also be obtained from other devices.
  • For example, if the device that manages the group tags of multiple user groups is another server, the server used to generate the portrait can obtain the group tag corresponding to the target user from that server.
  • Specifically, the server may generate a tag acquisition request according to the application information corresponding to the target user and send the request to the other server, so as to obtain the group tag of the group to which the target user belongs.
  • It can be understood that, for the group tag of the user group to which the target user belongs as determined from the application tag, since the behavior habits of the user group also reflect the behavior habits of the target user, the group tag can be used to construct the target user's user portrait and thereby increase the dimensions of the tags in that portrait.
  • Step S140 Generate a user portrait of the target user according to the application tag and the group tag.
  • In the embodiment of the present application, after generating the application tag and obtaining the group tag of the user group to which the target user belongs, the server may generate a user portrait of the target user according to the application tag and the group tag.
  • The generated user portrait can include both of the above, so that the user portrait of the target user contains both the application tag formed from the target user's application information and the group tag of the user group to which the target user belongs; this broadens the dimensions of the tags in the user portrait and likewise improves the accuracy of the constructed portrait.
  • The constructed user portrait can be used for information push; because the tags included in the user portrait span many dimensions, it can be applied to push in a variety of scenarios while ensuring the accuracy of the information push.
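  • As a rough sketch of step S140, a user portrait could be assembled by merging the two kinds of tags as below; the dictionary representation and the tag names are hypothetical and only for illustration.

```python
# Hypothetical sketch of assembling a user portrait by combining the target
# user's application tag with the group tag of the group the user belongs to.
def build_user_portrait(application_tag: dict, group_tag: dict) -> dict:
    portrait = dict(application_tag)           # tags derived from the user's own app info
    for dimension, feature in group_tag.items():
        # Only borrow group-level behavior features for dimensions the
        # user's own application tag does not already cover.
        portrait.setdefault(dimension, feature)
    return portrait

application_tag = {"application_type": "shopping", "consumption_power": "medium"}
group_tag = {"consumption_power": "high", "reading_preference": "finance news"}
print(build_user_portrait(application_tag, group_tag))
# {'application_type': 'shopping', 'consumption_power': 'medium',
#  'reading_preference': 'finance news'}
```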
  • In summary, the portrait generation method provided by this embodiment obtains application information of at least one application used by the target user, generates the target user's application tag from the obtained information, and obtains the group tag corresponding to the target user according to the application tag, where the group tag is used to characterize the behavior characteristics of the user group to which the target user belongs; finally, a user portrait of the target user is generated based on the application tag and the group tag. A user portrait with more dimensions is thus constructed from the application information of the applications used by the user and the group tag of the user group to which the user belongs, which improves the accuracy of the constructed user portrait.
  • FIG. 2 shows a schematic flowchart of a portrait generation method provided by another embodiment of the present application. This method is applied to the above-mentioned server. The process shown in FIG. 2 will be described in detail below.
  • the portrait generation method may specifically include the following steps:
  • Step S210 Obtain application information of at least one application used by the target user.
  • In the embodiment of the present application, the application information obtained by the server for the at least one application used by the target user may include user information of the at least one application and usage information of the at least one application.
  • The user information can be information generated when the user logs in to or registers the application; for example, it can include name, age, birthday, education level, constellation, belief, marital status, e-mail address, etc.
  • The usage information can be information generated by the user's use of the application; for example, it may include usage time, functions used, operation type, operation content, geographic location, and other information, and it may reflect the user's usage habits.
  • In some implementations, the server may obtain application information of multiple applications used by the target user. The multiple applications can include different types of applications, such as shopping applications, food delivery applications, travel applications, and work-related applications. It is understandable that the wider the range of application types, the better the subsequently generated user portrait reflects the user's usage habits.
  • In some implementations, a unified user identifier can be used to link the applications so as to obtain application information of the different applications; for example, the manufacturer account of the mobile phone can be used.
  • In other implementations, the server may also communicate with the vendor servers corresponding to the different applications, so as to obtain each application's information from its corresponding vendor server.
  • Step S220 Generate an application tag of the target user according to the application information.
  • In the embodiment of the present application, the application tag generated by the server from the application information corresponding to the target user may include a basic portrait tag of the target user and a behavior portrait tag of the target user.
  • The basic portrait tag is used to characterize the user's basic information and can be a tag generated from information such as basic attributes and social attributes in the application information; the behavior portrait tag is used to characterize the user's behavior habits and can be a tag generated from information such as behavior habits, hobbies, application preferences, and mobile phone preferences in the application information.
  • Personal information (basic attributes) includes, but is not limited to, name, age, education level, constellation, belief, marital status, e-mail address, etc. Social attributes include, but are not limited to, industry/occupation, position, income level, child status, vehicle usage, housing situation, mobile device, and mobile operator.
  • The housing situation may include renting a house, owning a house, and repaying a housing loan. Mobile devices can be characterized by brand and price. The mobile operator can be characterized by brand, network, traffic characteristics, and mobile number, where the brand may include China Mobile, China Unicom, China Telecom, etc., the network may include none, 2G, 3G, and 4G, and the traffic characteristics can be high, medium, or low.
  • Behavior habits include, but are not limited to, geographic location, living habits, transportation, type of hotel stayed in, economic/financial characteristics, dining habits, shopping characteristics, and payment situation. Living habits may include work and rest schedule, working hours, time spent online, and shopping time. Shopping characteristics can include shopping item categories and shopping methods. The payment situation can include payment time, payment location, payment method, single payment amount, and total payment amount. Reading preferences can include reading frequency, reading time period, total reading time, and reading category.
  • step S220 may include:
  • Step S221 Generate a basic portrait tag of the target user according to the user information.
  • Step S222 Generate a behavior portrait label of the target user according to the usage information.
  • In the embodiment of the present application, the server may generate the basic portrait tag of the target user according to the user information in the application information, and generate the behavior portrait tag of the target user according to the usage information.
  • Specifically, the server can analyze basic attributes, social attributes, and other information from the user information to generate the basic portrait tag, and analyze behavior habits, hobbies, application preferences, mobile phone preferences, and other information from the usage information to generate the behavior portrait tag.
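  • A minimal sketch of steps S221 and S222 is given below, assuming the application information is available as a dictionary with user_info and usage_info parts; all field names are hypothetical.

```python
# Hypothetical sketch of step S221/S222: derive a basic portrait tag from user
# information and a behavior portrait tag from usage information.
def generate_application_tag(application_info: dict) -> dict:
    user_info = application_info.get("user_info", {})
    usage_info = application_info.get("usage_info", {})

    basic_portrait_tag = {
        "age": user_info.get("age"),
        "education": user_info.get("education_level"),
        "marital_status": user_info.get("marital_status"),
    }
    behavior_portrait_tag = {
        "usage_time": usage_info.get("usage_time"),
        "operation_type": usage_info.get("operation_type"),
        "geographic_location": usage_info.get("geographic_location"),
    }
    return {"basic": basic_portrait_tag, "behavior": behavior_portrait_tag}
```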
  • Step S230 Obtain a group tag corresponding to the target user according to the application tag, where the group tag is used to characterize the behavior characteristics of the user group to which the target user belongs.
  • step S230 can refer to the content of the foregoing embodiment, and will not be repeated here.
  • Step S240 Obtain at least part of the behavior features in the group tag that need to be added to the user portrait of the target user as the first behavior feature.
  • In the embodiment of the present application, when the server generates the user portrait of the target user from the generated application tag and the obtained group tag, since the group tag is composed of the behavior characteristics of all users in a user group, the server can identify, from the group tag, the tags that need to be added to the user portrait.
  • Specifically, the server may first determine at least part of the behavior characteristics in the group tag that need to be added to the user portrait of the target user.
  • step S240 may include:
  • Step S241 Acquire multiple dimensions corresponding to the behavior features existing in the group tag as the first dimension, and multiple dimensions corresponding to the behavior features existing in the application tag as the second dimension.
  • In the embodiment of the present application, the basic information and usage habits of users can be divided into multiple dimensions; for example, usage habits can be divided into dimensions such as usage time, functions used, operation type, operation content, and geographic location.
  • Based on the generated application tag and the obtained group tag, the server may determine the multiple dimensions corresponding to the behavior characteristics (characterizing behavior habits) present in the group tag and use them as the first dimensions, and may likewise determine the multiple dimensions corresponding to the behavior characteristics present in the application tag and use them as the second dimensions.
  • Step S242 Obtain a dimension different from the second dimension in the first dimension as the third dimension.
  • In the embodiment of the present application, the server can obtain the dimensions in the first dimensions that differ from the second dimensions, that is, the dimensions of behavior characteristics that exist in the group tag but not in the application tag, and regard the obtained dimensions as the third dimensions.
  • Step S243 Use the behavior feature of the third dimension in the group tag as the first behavior feature.
  • In the embodiment of the present application, the server may use the behavior features of the third dimensions in the group tag as the first behavior features, that is, as at least part of the behavior features to be added to the user portrait of the target user. It is understandable that because the third dimensions do not exist in the application tag but do exist in the group tag, behavior characteristics in dimensions absent from the application tag can thereby be added to the user portrait.
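  • The dimension comparison of steps S241 to S243 can be sketched as follows, assuming tags are represented as dictionaries keyed by dimension; the representation is an assumption for illustration.

```python
# Hypothetical sketch of steps S241-S243: take the dimensions present in the
# group tag but absent from the application tag (the "third dimensions"), and
# use the group tag's features in those dimensions as the first behavior features.
def first_behavior_features(application_tag: dict, group_tag: dict) -> dict:
    first_dims = set(group_tag)                  # dimensions in the group tag
    second_dims = set(application_tag)           # dimensions in the application tag
    third_dims = first_dims - second_dims        # dimensions only the group tag has
    return {dim: group_tag[dim] for dim in third_dims}

group_tag = {"commute_mode": "subway", "shopping_time": "evening"}
application_tag = {"shopping_time": "morning", "consumption_power": "high"}
print(first_behavior_features(application_tag, group_tag))
# {'commute_mode': 'subway'}
```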
  • In other implementations, step S240 may include: acquiring features of first set dimensions in the group tag as the first behavior features.
  • The set dimensions can be the dimensions of the behavior characteristics required to construct the user portrait; that is, when the user portrait includes the tags formed by the behavior characteristics of the set dimensions, the tags in the user portrait span wide dimensions and reflect the user's behavior habits. Of course, the specific set dimensions are not limited here; the more set dimensions there are, the more accurate the subsequently generated user portrait.
  • Step S250 Generate a user portrait of the target user according to the label formed by the application label and the first behavior characteristic.
  • In the embodiment of the present application, after the server determines the first behavior features in the group tag that need to be added to the user portrait, it can generate the user portrait of the target user according to the application tag and the tags constituted by the first behavior features.
  • The generated user portrait thus includes both the application tag and the tags composed of the first behavior features.
  • The constructed user portrait can be used for information push; because the tags included in the user portrait span many dimensions, it can be applied to push in a variety of scenarios while ensuring the accuracy of the information push.
  • In summary, the portrait generation method of this embodiment obtains application information of at least one application used by the target user, generates the target user's application tag from that information, obtains the group tag corresponding to the target user according to the application tag (the group tag characterizing the behavior characteristics of the user group to which the target user belongs), then determines from the group tag the first behavior features that need to be added to the user portrait, and finally generates the target user's user portrait from the application tag and the tags formed by the first behavior features. A user portrait with more dimensions is thereby constructed from the application information of the applications used by the user and the group tag of the user group to which the user belongs, improving the accuracy of the constructed portrait.
  • FIG. 5 shows a schematic flowchart of a portrait generation method provided by another embodiment of the present application. This method is applied to the above-mentioned server. The process shown in FIG. 5 will be described in detail below.
  • the portrait generation method may specifically include the following steps:
  • Step S310 Obtain application tags of multiple users.
  • In the embodiment of the present application, the server may group users in advance. First, the application tags of multiple users can be obtained. The multiple users can be users for whom user portraits have been built in the past, or users whose application information has been obtained in the past. Their application tags may be the application tags saved when their user portraits were constructed, or application tags regenerated from the previously obtained application information.
  • Step S320 Divide the multiple users into one or more user groups according to the application tag of each of the multiple users.
  • In the embodiment of the present application, the server may divide the multiple users into one or more user groups according to the application tag of each of the above multiple users.
  • In some implementations, step S320 may include: clustering, or classifying according to set rules, based on the behavior characteristics in each user's application tag to obtain one or more categories; and treating all users corresponding to the behavior characteristics in each category as one user group, thereby obtaining one or more user groups.
  • In some implementations, the server may cluster different users according to the behavior characteristics in their application tags, so that users with the same behavior habits are clustered into the same category and a user group is obtained.
  • In other implementations, the server may use preset user-grouping rules to group different users according to the behavior characteristics in their application tags, so as to classify users with the same behavior habits into the same user group.
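  • A clustering sketch for step S320 is given below, assuming each user's behavior characteristics have been encoded as a numeric vector; KMeans and the number of groups are illustrative choices, since the embodiment does not prescribe a particular clustering algorithm.

```python
# Hypothetical sketch of step S320: cluster users into groups by the behavior
# features in their application tags.
import numpy as np
from sklearn.cluster import KMeans

def group_users(behavior_vectors: np.ndarray, n_groups: int = 5) -> dict:
    """behavior_vectors: one numeric row per user, built from the behavior
    characteristics in that user's application tag."""
    kmeans = KMeans(n_clusters=n_groups, random_state=0, n_init=10)
    labels = kmeans.fit_predict(behavior_vectors)
    groups = {}
    for user_index, group_id in enumerate(labels):
        groups.setdefault(int(group_id), []).append(user_index)
    return groups  # maps group id -> indices of users in that user group
```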
  • Step S330 According to the application tags of the multiple users, a group tag of each user group is generated.
  • In the embodiment of the present application, after the user groups are obtained, the server may generate, for each user group, a corresponding group tag.
  • In some implementations, step S330 may include: obtaining, from the application tags of the multiple users, the application tags of all users in each user group; and generating the group tag of each user group according to all the application tags corresponding to that group.
  • That is, the server can construct the group tag of each user group from the application tags of the users in that group, so that the constructed group tag can include the application tags of all users in the user group.
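  • One possible way to aggregate a group tag from its members' application tags is sketched below; taking the most common feature value per dimension is an assumed aggregation rule, not one specified by the embodiment.

```python
# Hypothetical sketch of step S330: build a group tag by aggregating the
# behavior features of all users in the group.
from collections import Counter

def build_group_tag(member_application_tags: list) -> dict:
    values_per_dim = {}
    for tag in member_application_tags:
        for dimension, feature in tag.items():
            values_per_dim.setdefault(dimension, []).append(feature)
    # For each dimension, keep the feature value most common among members.
    return {dim: Counter(vals).most_common(1)[0][0]
            for dim, vals in values_per_dim.items()}
```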
  • Step S340 According to the application information, an application tag of the target user is generated.
  • step S340 can refer to the content of the foregoing embodiment, and will not be repeated here.
  • Step S350 Acquire the user group to which the target user belongs according to the behavior characteristics in the application tag.
  • In the embodiment of the present application, the server can obtain the user group to which the target user belongs based on the behavior characteristics in the application tag; that is, the behavior characteristics of the obtained user group match the behavior characteristics of the target user, and the behavior habits of the users in that group likewise match the behavior habits of the target user.
  • In some implementations, step S350 may include: acquiring the similarity between the behavior characteristics in the application tag and the behavior characteristics of each user group; and taking the user group whose similarity meets a specified numerical condition as the user group to which the target user belongs.
  • That is, the similarity between the behavior characteristics of the target user's application tag and the behavior characteristics of each user group can be obtained, and then, according to the multiple similarities obtained, the user group among the multiple user groups whose similarity meets the specified numerical condition is determined to be the user group to which the target user belongs.
  • The specified numerical condition may be that the similarity is the highest, or that the similarity is greater than a similarity threshold, etc., which is not limited here.
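  • A sketch of this similarity matching is given below, using cosine similarity and the highest-similarity condition; the vectorization of tags into numeric form is assumed to have been done beforehand.

```python
# Hypothetical sketch of step S350: match the target user to a user group by
# cosine similarity between behavior-feature vectors, taking the most similar
# group that exceeds a minimum similarity (one possible "specified condition").
import numpy as np

def assign_user_group(user_vector, group_vectors, min_similarity=0.0):
    """group_vectors: dict mapping group id -> behavior-feature vector."""
    best_group, best_sim = None, min_similarity
    for group_id, group_vector in group_vectors.items():
        sim = float(np.dot(user_vector, group_vector) /
                    (np.linalg.norm(user_vector) * np.linalg.norm(group_vector)))
        if sim > best_sim:
            best_group, best_sim = group_id, sim
    return best_group  # None if no group meets the similarity condition
```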
  • Step S360 Obtain the group label of the user group as the group label of the target user.
  • the server may determine the group label of the user group to which the target user belongs based on the group label of the user group obtained in advance.
  • Step S370 Generate a user portrait of the target user according to the application tag and the group tag.
  • step S370 can refer to the content of the foregoing embodiment, which will not be repeated here.
  • the constructed user portrait can be used for information push. Because the tags included in the user portrait have many dimensions, it can be applied to push in a variety of scenarios and ensure the accuracy of information push.
  • In this embodiment, the application tags of multiple users are obtained in advance, the multiple users are divided into one or more user groups according to each user's application tag, and group tags of the user groups are then generated from the users' application tags, so that user grouping and group-tag generation are completed in advance.
  • FIG. 6 shows a schematic flowchart of a portrait generation method provided by another embodiment of the present application. This method is applied to the above-mentioned server. The process shown in FIG. 6 will be described in detail below.
  • the portrait generation method may specifically include the following steps:
  • Step S410 Obtain application information of at least one application used by the target user.
  • Step S420 Generate an application tag of the target user according to the application information.
  • Step S430 Obtain a group tag corresponding to the target user according to the application tag, where the group tag is used to characterize the behavior characteristics of the user group to which the target user belongs.
  • Step S440 Generate a user portrait of the target user according to the application tag and the group tag.
  • steps S410 to S440 may refer to the content of the foregoing embodiment, and details are not described herein again.
  • In some implementations, the server may also calculate the degree of association between the target user's application tag and the group tag of the user group to which the target user belongs, and correct the target user's application tag according to that degree of association, so as to improve the accuracy of the finally generated user portrait.
  • The constructed user portrait can be used for information push; because the tags included in the user portrait span many dimensions, it can be applied to push in a variety of scenarios while ensuring the accuracy of the information push.
  • Step S450 Update the group tag according to the application tag, where the updated group tag includes at least part of the behavior characteristics in the application tag.
  • In the embodiment of the present application, the server may also update the group tag according to the application tag of the target user, and the updated group tag may include at least part of the behavior characteristics of the target user's application tag.
  • In some implementations, step S450 may include: acquiring, as second behavior features, the features in the application tag of dimensions that do not exist in the group tag; and adding the second behavior features to the group tag.
  • That is, the server can obtain the dimensions of behavior features in the application tag that differ from the dimensions of behavior features in the group tag, take the target user's behavior features corresponding to those dimensions in the application tag as the second behavior features, and then add the acquired second behavior features to the group tag of the user group to which the target user belongs. In this way, the dimensions of the behavior features in the group tag are increased, and subsequent user portraits of other users can be constructed more accurately.
  • In other implementations, step S450 may include: acquiring features of specified dimensions in the application tag as third behavior features; and adding the third behavior features to the group tag.
  • The specified dimensions may be the dimensions of the behavior characteristics required to construct the user portrait.
  • That is, the server can take all the behavior features of the specified dimensions in the target user's application tag as the third behavior features and add them to the group tag of the user group to which the target user belongs, so that the group tag contains more behavior characteristics and subsequent user portraits of other users can be constructed more accurately.
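  • The group-tag update of step S450 (second-behavior-feature variant) can be sketched as follows, again assuming dictionary-based tags.

```python
# Hypothetical sketch of step S450: enrich the group tag with the application
# tag's features in dimensions the group tag does not yet cover.
def update_group_tag(group_tag: dict, application_tag: dict) -> dict:
    updated = dict(group_tag)
    for dimension, feature in application_tag.items():
        if dimension not in updated:             # dimension missing from group tag
            updated[dimension] = feature         # second behavior feature
    return updated

group_tag = {"shopping_time": "evening"}
application_tag = {"shopping_time": "morning", "commute_mode": "bicycle"}
print(update_group_tag(group_tag, application_tag))
# {'shopping_time': 'evening', 'commute_mode': 'bicycle'}
```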
  • Further, the server may also update the user portrait of each user in the user group to which the target user belongs based on the updated group tag, and store the updated user portraits, so that the user portraits of other users are more accurate.
  • In summary, the portrait generation method of this embodiment obtains application information of at least one application used by the target user, generates the target user's application tag from that information, obtains the group tag corresponding to the target user according to the application tag (the group tag characterizing the behavior characteristics of the user group to which the target user belongs), and generates the target user's user portrait from the application tag and the group tag, so that a user portrait with more dimensions is constructed from the application information of the applications used by the user and the group tag of the user group to which the user belongs, improving the accuracy of the constructed portrait.
  • In addition, the server updates the group tag of the user group to which the target user belongs according to the target user's application tag, increasing the dimensions of the behavior characteristics in the group tag and making subsequent user portraits of other users more accurate.
  • FIG. 7 shows a schematic flowchart of a portrait generation method provided by yet another embodiment of the present application. This method is applied to the above-mentioned server. The process shown in FIG. 7 will be described in detail below.
  • the portrait generation method may specifically include the following steps:
  • Step S510 Obtain application information of at least one application used by the target user.
  • Step S520 Generate an application tag of the target user according to the application information.
  • Step S530 Obtain a group label corresponding to the target user according to the application label, where the group label is used to characterize the behavior characteristics of the user group to which the target user belongs.
  • Step S540 Generate a user portrait of the target user according to the application tag and the group tag.
  • steps S510 to S540 can refer to the content of the foregoing embodiment, which will not be repeated here.
  • Step S550 Determine the push content of other applications except the at least one application according to the user portrait.
  • In the embodiment of the present application, the generated user portrait may be used for push in the various applications of the target user; that is, push can also be performed for other applications according to the user portrait.
  • Specifically, the server may determine, according to the user portrait, the push content of applications other than the above at least one application. For example, if the above at least one application consists of application A, application B, and application C, the push content of application D can be determined according to the user portrait.
  • Step S560 Push the pushed content to the target user.
  • In the embodiment of the present application, after determining the push content of the applications other than the at least one application, the server may push that content to the target user.
  • In some implementations, the server may send the push content to the server corresponding to the other application, so that the server corresponding to the other application performs the push.
  • Step S570 Determine the application server corresponding to the at least one application.
  • Step S580 Push the user portrait of the target user to the application server.
  • In the embodiment of the present application, the user portrait of the target user may also be fed back to the application server corresponding to the above at least one application, so that the application server can push more accurately according to the user portrait. It is understandable that, previously, an application server could only obtain the application information of its own application to construct a user portrait for push, and a portrait constructed in this way has few dimensions of behavior characteristics and may not accurately reflect the user's behavior and habits. Constructing user portraits through the method provided in the embodiments of this application gives these application servers user portraits with more dimensions, so that accurate push and marketing can be achieved, bringing convenience to businesses and users alike.
  • In summary, the portrait generation method of this embodiment obtains application information of at least one application used by the target user, generates the target user's application tag from that information, obtains the group tag corresponding to the target user according to the application tag (the group tag characterizing the behavior characteristics of the user group to which the target user belongs), and generates the target user's user portrait from the application tag and the group tag, so that a user portrait with more dimensions is constructed from the application information of the applications used by the user and the group tag of the user group to which the user belongs, improving the accuracy of the constructed portrait.
  • In addition, the server performs content push for applications other than the above at least one application based on the generated user portrait of the target user, and also feeds the user portrait back to the application server corresponding to the above at least one application, so that the application server can push more accurately according to the user portrait.
  • FIG. 8 shows a structural block diagram of a portrait generating apparatus 400 provided by an embodiment of the present application.
  • The portrait generation device 400 is applied to the above-mentioned server, and the portrait generation device 400 includes: an information acquisition module 410, a label generation module 420, a label acquisition module 430, and a content generation module 440.
  • The information acquisition module 410 is used to obtain application information of at least one application used by the target user; the label generation module 420 is used to generate an application tag of the target user according to the application information; the label acquisition module 430 is configured to obtain, according to the application tag, a group tag corresponding to the target user, the group tag being used to characterize the behavior characteristics of the user group to which the target user belongs; and the content generation module 440 is configured to generate a user portrait of the target user according to the application tag and the group tag.
  • In some implementations, the content generation module 440 includes: a feature acquiring unit 441, configured to acquire at least part of the behavior features in the group tag that need to be added to the user portrait of the target user as first behavior features; and a portrait construction unit 442, configured to generate the user portrait of the target user according to the application tag and the tags formed by the first behavior features.
  • Further, the feature acquiring unit 441 includes: a first dimension acquiring subunit 4411, configured to acquire the multiple dimensions corresponding to the behavior characteristics existing in the group tag as the first dimensions, and the multiple dimensions corresponding to the behavior characteristics existing in the application tag as the second dimensions; a second dimension acquiring subunit 4412, configured to acquire the dimensions in the first dimensions that differ from the second dimensions as the third dimensions; and a behavior feature acquiring subunit 4413, configured to use the behavior features of the third dimensions in the group tag as the first behavior features.
  • the feature obtaining unit 441 may be specifically configured to obtain a feature of a first set dimension in the group tag as the first behavior feature.
  • In some implementations, the application information includes user information of the at least one application and usage information of the at least one application, and the application tag includes a basic portrait tag and a behavior portrait tag.
  • The label generation module 420 includes: a first label generation unit, configured to generate the basic portrait tag of the target user according to the user information; and a second label generation unit, configured to generate the behavior portrait tag of the target user according to the usage information.
  • In some implementations, the tag obtaining module 430 includes: a group obtaining unit, configured to obtain the user group to which the target user belongs according to the behavior characteristics in the application tag; and a tag determining unit, configured to use the group tag of that user group as the group tag of the target user.
  • Further, the group obtaining unit includes: a similarity acquisition subunit, configured to acquire the similarity between the behavior characteristics in the application tag and the behavior characteristics of each user group; and a group determination subunit, configured to take the user group whose similarity meets the specified numerical condition as the user group to which the target user belongs.
  • the portrait generation device 400 may further include: an application tag acquisition module, a user grouping module, and a group tag generation module.
  • The application tag obtaining module is used to obtain application tags of multiple users; the user grouping module is used to divide the multiple users into one or more user groups according to the application tag of each of the multiple users; and the group tag generation module is used to generate a group tag for each user group according to the application tags of the multiple users.
  • Further, the user grouping module includes: a classification unit, configured to cluster, or classify according to set rules, based on the behavior characteristics in each user's application tag to obtain one or more categories; and a group obtaining unit, configured to treat all users corresponding to the behavior characteristics in each category as one user group, thereby obtaining one or more user groups.
  • Further, the group tag generation module includes: a label classification unit, configured to obtain, from the application tags of the multiple users, the application tags of all users in each user group; and a generation execution unit, configured to generate the group tag of each user group according to all the application tags corresponding to that group.
  • the portrait generating device 400 may further include: a label update module.
  • the label update module is configured to update the group label according to the application label, and the updated group label includes at least part of the behavior characteristics in the application label.
  • In some implementations, the tag update module includes: a first feature acquiring unit, configured to acquire, as second behavior features, the features in the application tag of dimensions that do not exist in the group tag; and a first feature adding unit, configured to add the second behavior features to the group tag.
  • In other implementations, the tag update module includes: a second feature acquiring unit, configured to acquire features of specified dimensions in the application tag as third behavior features; and a second feature adding unit, configured to add the third behavior features to the group tag.
  • the portrait generating apparatus 400 may further include: a content determination module and a content push module.
  • the content determination module is used to determine the push content of other applications except the at least one application according to the user portrait; the content push module is used to push the push content to the target user.
  • the portrait generating apparatus 400 may further include: a server determining module and a portrait pushing module.
  • the server determination module is used to determine the application server corresponding to the at least one application; the portrait push module is used to push the user portrait of the target user to the application server.
  • the coupling between the modules may be electrical, mechanical or other forms of coupling.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
  • With the portrait generation device provided by the embodiment of the present application, application information of at least one application used by the target user is obtained, the target user's application tag is generated from the acquired application information, and the group tag corresponding to the target user is obtained according to the application tag, where the group tag is used to characterize the behavior characteristics of the user group to which the target user belongs; the user portrait of the target user is then generated according to the application tag and the group tag, so that a user portrait with more dimensions is constructed from the application information of the applications used by the user and the group tag of the user group to which the user belongs, and the accuracy of the constructed user portrait is improved.
  • the server 100 may be a server capable of running application programs, such as a traditional server or a cloud server.
  • The server 100 in this application may include one or more of the following components: a processor 110, a memory 120, a touch screen 130, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to execute the methods described in the foregoing method embodiments.
  • the processor 110 may include one or more processing cores.
  • The processor 110 uses various interfaces and lines to connect the various parts of the entire server 100, and executes the various functions of the server 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120.
  • The processor 110 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
  • the processor 110 may be integrated with one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used for rendering and drawing of display content; the modem is used for processing wireless communication. It can be understood that the above-mentioned modem may not be integrated into the processor 110, but may be implemented by a communication chip alone.
  • The memory 120 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 120 may be used to store instructions, programs, codes, code sets or instruction sets.
  • The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions for implementing the various method embodiments, and so on.
  • The data storage area can also store data created by the server 100 during use (such as a phone book, audio and video data, and chat record data).
  • FIG. 12 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 800 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium 800 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium 800 has storage space for the program code 810 for executing any method steps in the above-mentioned methods. These program codes can be read from or written into one or more computer program products.
  • For example, the program code 810 may be compressed in a suitable form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

This application discloses a portrait generation method, apparatus, server, and storage medium. The portrait generation method includes: obtaining application information of at least one application used by a target user; generating an application tag of the target user according to the application information; obtaining, according to the application tag, a group tag corresponding to the target user, the group tag being used to characterize the behavior characteristics of the user group to which the target user belongs; and generating a user portrait of the target user according to the application tag and the group tag. This method can construct a multi-dimensional user portrait.

Description

Portrait generation method, apparatus, server, and storage medium

Technical Field

This application relates to the technical field of data analysis, and more specifically, to a portrait generation method, apparatus, server, and storage medium.

Background Art

With the rapid development of network information technology, information recommendation technology based on big data has emerged. Information recommendation technology mainly recommends related information on the basis of user portraits; the accuracy of user portraits affects the accuracy of information recommendation, so the construction of user portraits has become the key to information recommendation technology.
Summary of the Invention

In view of the above problems, this application proposes a portrait generation method, apparatus, server, and storage medium.

In a first aspect, an embodiment of this application provides a portrait generation method. The method includes: obtaining application information of at least one application used by a target user; generating an application tag of the target user according to the application information; obtaining, according to the application tag, a group tag corresponding to the target user, the group tag being used to characterize the behavior characteristics of the user group to which the target user belongs; and generating a user portrait of the target user according to the application tag and the group tag.

In a second aspect, an embodiment of this application provides a portrait generation apparatus. The apparatus includes an information acquisition module, a tag generation module, a tag acquisition module, and a content generation module, where the information acquisition module is used to obtain application information of at least one application used by a target user; the tag generation module is used to generate an application tag of the target user according to the application information; the tag acquisition module is used to obtain, according to the application tag, a group tag corresponding to the target user, the group tag being used to characterize the behavior characteristics of the user group to which the target user belongs; and the content generation module is used to generate a user portrait of the target user according to the application tag and the group tag.

In a third aspect, an embodiment of this application provides a server, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the portrait generation method provided in the first aspect above.

In a fourth aspect, an embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium stores program code, and the program code can be invoked by a processor to execute the portrait generation method provided in the first aspect above.

In the solution provided by this application, application information of at least one application used by the target user is obtained; the target user's application tag is generated from the obtained application information; the group tag corresponding to the target user is obtained according to the application tag, the group tag being used to characterize the behavior characteristics of the user group to which the target user belongs; and finally the target user's user portrait is generated according to the application tag and the group tag. In this way, a user portrait with more dimensions is constructed from the application information of the applications used by the user and the group tag of the user group to which the user belongs, which improves the accuracy of the constructed user portrait.
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示出了根据本申请一个实施例的画像生成方法流程图。
图2示出了根据本申请另一个实施例的画像生成方法流程图。
图3示出了本申请另一个实施例提供的画像生成方法中步骤S220的流程图。
图4示出了本申请另一个实施例提供的画像生成方法中步骤S240的流程图。
图5示出了根据本申请又一个实施例的画像生成方法流程图。
图6示出了根据本申请再一个实施例的画像生成方法流程图。
图7示出了根据本申请又另一个实施例的画像生成方法流程图。
图8示出了根据本申请一个实施例的画像生成装置的一种框图。
图9示出了根据本申请一个实施例的画像生成装置中内容生成模块的框图。
图10示出了根据本申请一个实施例的内容生成模块中特征获取单元的框图。
图11是本申请实施例的用于执行根据本申请实施例的画像生成方法的服务器的框图。
图12是本申请实施例的用于保存或者携带实现根据本申请实施例的画像生成方法的程序代码的存储单元。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
用户画像作为一种勾画目标用户、联系用户诉求与设计方向的有效工具,在各领域得到了广泛的应用。用户画像最初是在电商领域得到应用的,在大数据时代背景下,用户信息充斥在网络中,将用户的每个具体信息抽象成标签,利用这些标签将用户形象具体化,从而为用户提供有针对性的服务。
用户画像已经成为一个热门的领域。大部分的用户画像系统是根据用户在应用程序(APP,Application)上的操作、付费行为等信息,经过一系列的特征及算法转化,得出基于APP刻画出的用户标签。在做信息推荐和营销的时候选择与目标用户标签相似的用户群体,然后对这些用户进行相应的信息推荐和营销。
发明人经过长期研究发现,目前根据用户在APP上产生的信息来生成该用户的用户画像,无法扩散以及做出更多维度的画像描述,使得画像系统本身在信息推荐时无法达到预期的效果。
针对上述问题,发明人提出了本申请实施例提供的画像生成方法、装置、服务器以及存储介质,可以根据用户使用应用的应用信息,以及所属用户群体的群体标签,来构建维度较多的用户画像,使构建的用户画像的准确性提升。其中,具体的画像生成方法在后续的实施例中进行详细的说明。
请参阅图1,图1示出了本申请一个实施例提供的画像生成方法的流程示意图。所述画像生成方法用于构建维度较多的用户画像,使构建的用户画像的准确性提升。在具体的实施例中,所述画像生成方法应用于如图8所示画像生成装置400以及配置有所述画像生成装置400的服务器100(图11)。下面将以服务器为例,说明本实施例的具体流程,当然,可以理解的,本实施例所应用的服务器可以为传统服务器、云服务器等,在此不做限定。下面将针对图1所示的流程进行详细的阐述,所述画像生成方法具体可以包括以下步骤:
步骤S110:获取目标用户使用的至少一个应用的应用信息。
在本申请实施例中,服务器在构建目标用户的用户画像时,可以获取目标用户使用应用所产生的应用信息,以便根据目标用户使用应用的应用信息,分析目标用户的习惯和个人特征,从而构建出用户画像。其中,应用信息可以包括用户在登录或者注册应用时的用户信息,还可以包括使用应用的使用信息,当然,具体的应用信息可以不作为限定,例如,应用信息也还可以包括应用本身的属性信息等。目标用户使用应用的应用信息可以为使用至少一个应用的应用信息,即可以为一个应用的应用信息,也可以为多个应用的应用信息。
步骤S120:根据所述应用信息,生成所述目标用户的应用标签。
在本申请实施例中,服务器在根据获得的应用信息生成目标用户的用户画像时,可以根据应用信息生成目标用户的应用标签。其中,应用标签为关于目标用户对应的应用信息所生成的标签,应用标签可以分为多种类型的标签,例如用户信息对应的基础画像标签,使用信息所对应的行为标签,应用标签所包含的具体的标签类型可以不作为限定。
在一些实施方式中,应用标签中可以包括多个标签,每个标签有具体的特征值。例如,应用标签中包括应用类型、操作喜好、消费能力、操作时间等维度的标签,并且每个标签有相应的内容。需要说明的是,每个类型的信息可以被量化或者直接作为特征值,从而形成每个标签所对应的内容。当然,具体的应用标签可以不作为限定,仅需表征用户的个人特征和行为习惯即可。
在一些实施方式中,服务器根据目标用户所对应的应用信息,生成目标用户的应用标签,也可以是利用预先训练好的标签生成模型进行生成。具体地,服务器可以获取大量的应用信息,并将每个应用信息所需要生成的应用标签进行标注,从而获得大量训练样本。然后根据训练样本,利用机器学习方法进行训练,获得训练好的标签生成模型。
服务器在训练完成标签生成模型之后,还可以根据测试样本(同样可以为预先获得的应用信息,以及应用信息所对应的应用标签),对生成的标签生成模型进行测试,以获得标签生成模型的标签生成的准确率,以便用户根据准确率,对标签生成模型的参数进行调整,以获得准确率更高的标签生成模型。例如,当测试出的准确率低于一定值时,可以生成提示信息,以提示用户对标签生成模型的模型参数进行调整。
当然,服务器也可以采用多种机器学习方法进行训练,从而获得多个标签生成模型。然后再对标签生成模型进行测试,获得每个标签生成模型的准确率,最后将准确率满足预设准确率条件的标签生成模型,作为最终获得的标签生成模型。其中,预设准确率条件可以为准确率最高,也可以为准确率大于预设阈值等,在此不做限定。
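By way of a non-limiting illustration of the model training and selection described above, the following Python sketch trains several candidate label generation models on pre-annotated application-information samples and keeps the one whose test accuracy satisfies a preset accuracy condition. The feature encoding, the scikit-learn estimators and the accuracy threshold are assumptions made for the example only and are not prescribed by this application.
```python
# Illustrative sketch only: train candidate label-generation models on
# annotated application-information samples and keep the most accurate one.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_label_model(features, labels, min_accuracy=0.8):
    """features: numeric vectors derived from application information;
    labels: annotated application tags. Both encodings are assumptions."""
    x_train, x_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    candidates = [LogisticRegression(max_iter=1000),
                  RandomForestClassifier(n_estimators=100)]
    best_model, best_acc = None, 0.0
    for model in candidates:
        model.fit(x_train, y_train)
        acc = accuracy_score(y_test, model.predict(x_test))
        if acc > best_acc:
            best_model, best_acc = model, acc
    if best_acc < min_accuracy:
        print("accuracy below threshold; consider adjusting model parameters")
    return best_model, best_acc
```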
步骤S130:根据所述应用标签,获取所述目标用户对应的群体标签,所述群体标签用于表征所述目标用户所属用户群体的行为特征。
在本申请实施例中,服务器在根据目标用户所对应的应用信息生成应用标签之后,还可以根据应用标签,获取目标用户对应的群体标签。其中,群体标签用于表征目标用户所属用户群体的行为特征,也就是说,目标用户可以被分类到一个用户群体中,并且用户群体具有群体标签,而群体标签则可以表征该用户群体的行为特征。
在一些实施方式中,可以预先根据以往获得的用户的应用信息,对不同的用户进行聚类,得到不同的用户群体。聚类时可以按照用户的行为特征进行聚类,也就是说,根据不同用户的应用信息中反映行为特征的信息进行聚类,从而获得多个聚类结果,每个聚类结果中可以包括一个或多个用户,每个聚类结果即可作为一个用户群体。当然,还可以预先根据每个用户群体中的用户的应用信息中反映用户行为特征的信息,来生成群体标签。
进一步地,服务器可以根据目标用户的应用标签,将应用标签中反映用户行为的特征与各个群体标签中的行为特征进行匹配,并根据匹配结果确定目标用户所属用户群体的群体标签。
在另一些实施方式中,服务器也可以直接根据以往获得的用户的应用信息,以及本次获得的目标用户的应用信息,按照用户的行为特征进行聚类,从而获得多个用户群体,也就获得了目标用户所属的用户群体,并生成该用户群体所对应的群体标签。
当然,群体标签也可以从其他设备获取,例如,对多个用户群体的群体标签进行管理的设备为其他服务器,则用于生成画像的服务器可以从其他服务器获取该目标用户对应的群体标签,具体地,该服务器可以根据目标用户所对应的应用信息生成标签获取请求,然后将标签获取请求发送至其他服务器,以从其他服务器获取到目标用户所属群体的群体标签。
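As a non-limiting sketch of obtaining the group tag from another device, the following example builds a tag acquisition request from the target user's application tag and sends it to a separate tag-managing server. The URL, payload fields and response format are hypothetical and not defined by this application.
```python
# Illustrative sketch only: request the group tag for a target user from a
# separate server that manages the group tags of multiple user groups.
import requests

def fetch_group_tag(app_tag, tag_server_url="https://tag-server.example.com/group-tag"):
    # The payload field name is an assumption for the example.
    request_body = {"behavior_features": app_tag.get("behavior_features", {})}
    response = requests.post(tag_server_url, json=request_body, timeout=5)
    response.raise_for_status()
    return response.json()  # expected to contain the group tag of the user's group
```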
可以理解的,通过应用标签确定的目标用户所属用户群体的群体标签,由于用户群体的行为习惯也能反映该目标用户的行为习惯,因此可以利用该群体标签来构建目标用户的用户画像,以提升目标用户的用户画像中的标签的维度。
步骤S140:根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像。
在本申请实施例中,服务器在生成应用标签以及获取到目标用户所属用户群体的群体标签之后,则可以根据应用标签以及群体标签,生成目标用户的用户画像。生成的用户画像中可以包括以上应用标签以及群体标签,从而使得目标用户的用户画像中既包括该目标用户使用应用的应用信息所形成的应用标签,也包括了该目标用户所属用户群体的群体标签,使得用户画像中标签的维度更加宽泛,构建的用户画像的准确率也随之提升。构建的用户画像可以用于信息的推送,由于用户画像中包括的标签的维度较多,因此可以适用于多种场景的推送,并且保证信息推送的准确性。
本申请实施例提供的画像生成方法,通过获取目标用户使用的至少一个应用的应用信息,根据获取的应用信息,生成目标用户的应用标签,根据应用标签,获取目标用户对应的群体标签,群体标签用于表征目标用户所属用户群体的行为特征,最后根据应用标签以及群体标签,生成目标用户的用户画像,从而实现根据用户使用应用的应用信息,以及所属用户群体的群体标签,来构建维度较多的用户画像,使构建的用户画像的准确性提升。
请参阅图2,图2示出了本申请另一个实施例提供的画像生成方法的流程示意图。该方法应用于上述服务器,下面将针对图2所示的流程进行详细的阐述,所述画像生成方法具体可以包括以下步骤:
步骤S210:获取目标用户使用的至少一个应用的应用信息。
在本申请实施例中,服务器获取的目标用户使用至少一个应用的应用信息可以包括至少一个应用的用户信息以及至少一个应用的使用信息。其中,用户信息可以为用户登陆或者注册应用所产生的用户信息,例如可以包括:姓名、年龄、生日、教育程度、星座、信仰、婚姻状况、电子邮件地址等;使用信息可以为用户使用应用所产生的信息,例如可以包括:使用时间、使用的功能、操作类型、操作内容、地理位置等信息,使用信息可以反映用户的使用习惯。
在本申请实施例中,服务器可以获取目标用户使用多个应用的应用信息。其中,多个应用可以包括不同类型的应用,例如可以包括购物应用、外卖应用、出行应用、工作相关的应用等,可以理解的,应用的类型越广,则后续生成的用户画像越能反映用户的使用习惯。
在一些实施方式中,由于各个软件开发商在开发的时候都是着眼与自身相关的业务,因此不同软件开发商所使用的用户标识符都有各自的规范,因此可以通过统一的用户标识符来连通各个应用,以获取到不同应用的应用信息。作为一种实施方式,可以通过将每个应用的厂商的用户标识符与一个统一的统一标识符关联,然后通过目标用户对应统一标识符,以及该关联关系,获取不同应用的应用信息。作为另一种实施方式,可以通过使各个不同的应用在登录和注册时,使用第三方账号进行登录,第三方账号由该服务器进行管控,这样,也可以通过第三方账号获取到不同应用的应用信息,例如,可以使用手机的厂商账号等。
在另一些实施方式中,服务器也可以与各个不同应用所对应的厂商服务器进行通信,以从各个不同应用所对应的厂商服务器,获取到不同应用的应用信息。
当然,在本申请实施例中,具体服务器获取多个不同应用的应用信息的方式可以不作为限定。
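The following sketch is one hypothetical way to connect different applications through a unified user identifier as mentioned above, so that application information from different vendors can be aggregated for one target user; the identifier values and record layout are assumptions for illustration only.
```python
# Illustrative sketch only: map each vendor-specific user identifier to a
# unified identifier and merge the application information of one user.
vendor_id_to_unified_id = {
    ("app_a", "a-10086"): "user-0001",   # hypothetical vendor user IDs
    ("app_b", "b-9527"):  "user-0001",
}

def collect_application_info(unified_id, per_app_records):
    """per_app_records: {(app_name, vendor_user_id): application_info_dict}"""
    merged = []
    for key, info in per_app_records.items():
        if vendor_id_to_unified_id.get(key) == unified_id:
            merged.append(info)
    return merged
```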
步骤S220:根据所述应用信息,生成所述目标用户的应用标签。
在本申请实施例中,服务器根据目标用户所对应的应用信息,生成的目标用户的应用标签可以包括目标用户的基础画像标签以及目标用户的行为画像标签。其中,基础画像标签用于表征用户的基础信息,可以为根据应用信息中基本属性、社会属性等信息所生成的标签;行为画像标签用于表征用户的行为习惯,可以为根据应用信息中的行为习惯、爱好、应用使用偏好、手机使用偏好等信息所生成的标签。
其中,个人信息包括但不限于姓名、年龄、教育程度、星座、信仰、婚姻状况和电子邮件地址等。
社会属性包括但不限于行业/职业、职位、收入水平、子女状况、车辆使用情况、住房、移动设备和移动运营商。住房情况可能包括:租用房屋、拥有房屋并偿还贷款。手机可以包括:品牌和价格。移动运营商可以包括:品牌、网络、流量特征和手机号码。品牌可能包括:移动、联通、电信等。网络可能包括:无、2G、3G和4G。流量特征可以包括:高、中和低。
行为习惯包括但不限于:地理位置、生活方式、交通、住宿酒店类型、经济/金融特征、就餐习惯、购物特征和支付情况。生活习惯可能包括:工作时间表、上班时间、工作时长、计算机上网时间和购物时间。购物特征可以包括:购物项目类别和购物方式。付款情况可以包括:付款时间、付款地点、付款方式、单笔付款金额和总付款金额。
爱好包括但不限于:阅读偏好、新闻偏好、视频偏好、音乐偏好、运动偏好和旅行偏好。阅读偏好可以包括:阅读频率、阅读时间段、总阅读时间和阅读分类。
在一些实施方式中,请参阅图3,步骤S220可以包括:
步骤S221:根据所述用户信息,生成所述目标用户的基础画像标签。
步骤S222:根据所述使用信息,生成所述目标用户的行为画像标签。
在该实施方式中,服务器可以根据应用信息中的用户信息,来生成目标用户的基础画像标签。具体地,服务器可以根据用户信息,分析出基本属性、社会属性等信息,并且生成基本画像标签;服务器可以根据使用信息,分析出行为习惯、爱好、应用使用偏好、手机使用偏好等信息,并生成行为画像标签。
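As a non-limiting illustration of steps S221 and S222, the sketch below derives a basic portrait tag from the user information and a behavior portrait tag from the usage information. The specific dimensions and thresholds shown are examples only, not a fixed tagging scheme of this application.
```python
# Illustrative sketch only: derive a basic portrait tag from user information
# and a behavior portrait tag from usage information.
def build_application_tag(user_info, usage_info):
    basic_tag = {
        "age_bracket": "18-24" if user_info.get("age", 0) < 25 else "25+",
        "education": user_info.get("education"),
        "marital_status": user_info.get("marital_status"),
    }
    behavior_tag = {
        "active_period": "evening" if usage_info.get("avg_use_hour", 12) >= 18 else "daytime",
        "preferred_function": usage_info.get("top_function"),
        "consumption_level": "high" if usage_info.get("monthly_spend", 0) > 1000 else "normal",
    }
    return {"basic": basic_tag, "behavior": behavior_tag}
```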
步骤S230:根据所述应用标签,获取所述目标用户对应的群体标签,所述群体标签用于表征所述目标用户所属用户群体的行为特征。
在本申请实施例中,步骤S230可以参阅前述实施例的内容,在此不再赘述。
步骤S240:获取所述群体标签中需要添加至所述目标用户的用户画像中的至少部分行为特征,作为第一行为特征。
在本申请实施例中,服务器在根据生成的应用标签,以及获取的群体标签,生成目标用户的用户画像时,由于群体标签为一个用户群体中所有用户的行为特征所构成的标签,因此可以从群体标签中确定需要加入到用户画像中的标签。
具体地,服务器可以先确定群体标签中需要添加至目标用户的用户画像中至少部分行为特征。
作为一种实施方式,请参阅图4,步骤S240可以包括:
步骤S241:获取所述群体标签中存在的行为特征所对应的多个维度作为第一维度,以及所述应用标签中存在的行为特征所对应的多个维度作为第二维度。
在该实施方式中,群体标签以及应用标签中均存在多个维度的特征。用户的基础信息以及使用习惯均可以分为多个维度,例如,使用习惯分为使用时间、使用的功能、操作类型、操作内容、地理位置等多个维度。
服务器可以根据生成的应用标签,以及获得的群体标签,确定群体标签中存在的行为特征(表征行为习惯)所对应的多个维度,并将该多个维度作为第一维度。还可以确定应用标签中存在的行为特征所对应的多个维度,并将该多个维度作为第二维度。
步骤S242:获取所述第一维度中与所述第二维度不同的维度,作为第三维度。
在该实施方式中,服务器在获得第一维度以及第二维度之后,则可以获取第一维度中与第二维度不同的维度,也就是获取应用标签中不存在的行为特征的维度,而群体标签中存在的行为特征的维度,并将获得的维度作为第三维度。
步骤S243:将所述群体标签中所述第三维度的行为特征作为所述第一行为特征。
在该实施方式中,服务器可以将群体标签中第三维度的行为特征作为第一行为特征,也就是需要添加至目标用户的用户画像中至少部分行为特征。可以理解的,将应用标签中不存在的第三维度的行为特征,而群体标签中存在第三维度的行为特征,因此可以将应用标签中不存在的维度的行为特征加入到用户画像中。
作为另一种实施方式,步骤S240可以包括:获取所述群体标签中第一设定维度的特征,作为所述第一行为特征。
在该实施方式中,设定维度可以为构建用户画像所要求的行为特征的维度,也就是说,当用户画像包括设定维度的行为特征所构成的标签时,才可以使用户画像中的标签的维度广泛,体现出用户的行为习惯。具体的设定维度可以不作为限定,设定维度的维度越多,则后续生成的用户画像也越准确。
步骤S250:根据所述应用标签以及所述第一行为特征所构成的标签,生成所述目标用户的用户画像。
在本申请实施例中,服务器在确定出群体标签中需要加入到用户画像中的第一行为特征之后,则可以根据应用标签以及第一行为特征所构成的标签,生成目标用户的用户画像。生成的用户画像中既包括应用标签,也包括第一行为特征所构成的标签。构建的用户画像可以用于信息的推送,由于用户画像中包括的标签的维度较多,因此可以适用于多种场景的推送,并且保证信息推送的准确性。
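As a non-limiting illustration of steps S241 to S243 and step S250, the sketch below takes the dimensions that exist in the group tag but not in the application tag as the third dimension, uses the corresponding features as the first behavior features, and merges them with the application tag into the user portrait. The flat dictionary representation of tags is an assumption made for the example.
```python
# Illustrative sketch only: select the behavior-feature dimensions that exist
# in the group tag but not in the application tag (the "third dimension"),
# take those features as the first behavior features, and merge them with the
# application tag to form the user portrait.
def build_user_portrait(app_tag_features, group_tag_features):
    first_dims = set(group_tag_features)      # dimensions present in the group tag
    second_dims = set(app_tag_features)       # dimensions present in the application tag
    third_dims = first_dims - second_dims     # dimensions only the group tag has
    first_behavior_features = {d: group_tag_features[d] for d in third_dims}
    portrait = dict(app_tag_features)
    portrait.update(first_behavior_features)
    return portrait
```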
本申请实施例提供的画像生成方法,通过获取目标用户使用的至少一个应用的应用信息,根据获取的应用信息,生成目标用户的应用标签,根据应用标签,获取目标用户对应的群体标签,群体标签用于表征目标用户所属用户群体的行为特征,然后从群体标签中确定需要加入到用户画像中的第一行为特征,最后根据应用标签以及第一行为特征所构成的标签,生成目标用户的用户画像,从而实现根据用户使用应用的应用信息,以及所属用户群体的群体标签,来构建维度较多的用户画像,使构建的用户画像的准确性提升。
请参阅图5,图5示出了本申请又一个实施例提供的画像生成方法的流程示意图。该方法应用于上述服务器,下面将针对图5所示的流程进行详细的阐述,所述画像生成方法具体可以包括以下步骤:
步骤S310:获取多个用户的应用标签。
在本申请实施例中,服务器可以预先对用户进行分群。首先,可以获取多个用户的应用标签。多个用户可以为以往已构建用户画像的用户,或者以往获取到过应用信息的用户。多个用户的应用标签可以是以往构建用户的用户画像时所保存的应用标签,也可以是根据以往获取的应用信息所重新生成的应用标签。
步骤S320:根据所述多个用户中每个用户的应用标签,将所述多个用户分为一个或多个用户群体。
在本申请实施例中,服务器可以根据以上多个用户中每个用户的应用标签,将多个用户分为一个或多个用户群体。
在一些实施方式中,步骤S320可以包括:根据每个用户的应用标签中的行为特征,进行聚类或者按照设定规则进行分类,获得一个或多个类别;将每个类别中行为特征所对应的所有用户作为一个用户群体,获得一个或多个用户群体。
作为一种实施方式,服务器可以根据应用标签中的行为特征,对不同的用户进行聚类,以将相同行为习惯的用户聚类为同一类,获得用户群体。作为另一种实施方式, 服务器可以利用预先设定的用户分群的设定规则,根据应用标签中的行为特征,对不同的用户进行分群,以将相同行为习惯的用户分为同一用户群体。
当然,具体对多个用户进行分群的方式可以不作为限定。
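As one non-limiting way to perform the clustering mentioned above, the following sketch groups users by the behavior features in their application tags using KMeans; the numeric feature vectors and the fixed number of groups are assumptions for the example, and rule-based grouping could be used instead.
```python
# Illustrative sketch only: cluster users into user groups according to the
# behavior features in their application tags.
import numpy as np
from sklearn.cluster import KMeans

def group_users(user_ids, behavior_feature_vectors, n_groups=5):
    """behavior_feature_vectors: one numeric vector per user, in user_ids order."""
    model = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    assignments = model.fit_predict(np.asarray(behavior_feature_vectors))
    groups = {}
    for user_id, group_index in zip(user_ids, assignments):
        groups.setdefault(int(group_index), []).append(user_id)
    return groups  # {group index: [user ids belonging to that group]}
```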
步骤S330:根据所述多个用户的应用标签,生成每个用户群体的群体标签。
在本申请实施例中,服务器在对多个用户进行分群之后,则可以分别对于每个用户群体,生成每个用户群体所对应的群体标签。
在一些实施方式中,步骤S330可以包括:根据所述多个用户的应用标签,获取每个用户群体中所有用户的应用标签;根据每个用户群体所对应的所有应用标签,生成每个用户群体的群体标签。在该实施方式中,服务器根据每个用户群体中用户的应用标签,即可构建出每个用户群体的群体标签,也就是说,构建出的用户群体的群体标签可以包括该用户群体中所有用户的应用标签。
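The following sketch is a hypothetical illustration of building a group tag from the application tags of all users in a group by taking, for each dimension, the most common feature value; other aggregation rules are equally possible.
```python
# Illustrative sketch only: aggregate the application tags of all users in a
# user group into one group tag, dimension by dimension.
from collections import Counter

def build_group_tag(member_app_tags):
    """member_app_tags: list of {dimension: feature value} for one user group."""
    values_per_dim = {}
    for tag in member_app_tags:
        for dim, value in tag.items():
            values_per_dim.setdefault(dim, []).append(value)
    return {dim: Counter(values).most_common(1)[0][0]
            for dim, values in values_per_dim.items()}
```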
步骤S340:根据所述应用信息,生成所述目标用户的应用标签。
在本申请实施例中,步骤S340可以参阅前述实施例的内容,在此不再赘述。
步骤S350:根据所述应用标签中的行为特征,获取所述目标用户所属的用户群体。
在本申请实施例中,服务器可以根据应用标签中的行为特征,获取目标用户所属的用户群体,也就是说,获取到的用户群体的行为特征能与该目标用户的行为特征匹配,该用户群体的用户的行为习惯也就与目标用户的行为习惯匹配。
在一些实施方式中,步骤S350可以包括:获取所述应用标签中的行为特征,与每个用户群体的行为特征的相似度;获取所述相似度满足指定数值条件的用户群体,作为所述目标用户所属的用户群体。
在该实施方式中,可以通过获取目标用户的应用标签的行为特征与每个用户群体的行为特征的相似度,然后根据获得的多个相似度,从多个用户群体中确定相似度满足指定数值条件的用户群体,作为目标用户所属的用户群体。其中,指定数值条件可以为相似度最高,也可以为相似度大于相似度阈值等,在此不做限定。
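As a non-limiting illustration of the similarity matching described above, the sketch below computes the similarity between the target user's behavior features and those of each user group, and selects the group whose similarity satisfies the specified numerical condition (here the highest similarity, optionally subject to a threshold). Cosine similarity and numeric feature vectors are assumptions for the example.
```python
# Illustrative sketch only: choose the user group whose behavior features are
# most similar to the behavior features of the target user's application tag.
import numpy as np

def choose_user_group(user_vector, group_vectors, min_similarity=None):
    """group_vectors: {group id: numeric behavior-feature vector}."""
    def cosine(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    similarities = {gid: cosine(user_vector, vec) for gid, vec in group_vectors.items()}
    best_group = max(similarities, key=similarities.get)
    if min_similarity is not None and similarities[best_group] < min_similarity:
        return None, similarities
    return best_group, similarities
```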
步骤S360:获取所述用户群体的群体标签作为所述目标用户的群体标签。
在本申请实施例中,服务器在确定出目标用户所属的用户群体之后,则可以根据预先获得的用户群体的群体标签,确定目标用户所属用户群体的群体标签。
步骤S370:根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像。
在本申请实施例中,步骤S370可以参阅前述实施例的内容,在此不再赘述。构建的用户画像可以用于信息的推送,由于用户画像中包括的标签的维度较多,因此可以适用于多种场景的推送,并且保证信息推送的准确性。
本申请实施例提供的画像生成方法,通过预先获取多个用户的应用标签,根据多个用户中每个用户的应用标签,将多个用户分为一个或多个用户群体,然后根据多个用户的应用标签,生成多个用户群体的群体标签,从而完成预先对用户的分群,以及群体标签的生成。在实际生成目标用户的用户画像时,通过获取目标用户使用的至少一个应用的应用信息,根据获取的应用信息,生成目标用户的应用标签,根据应用标签中的行为特征以及各个用户群体的群体标签中的行为特征,获取目标用户所属的用户群体,然后获取该用户群体对应的群体标签,然后根据目标用户的应用标签以及目标用户所属用户群体的群体标签生成用户画像,从而实现根据用户使用应用的应用信息,以及所属用户群体的群体标签,来构建维度较多的用户画像,使构建的用户画像的准确性提升。
请参阅图6,图6示出了本申请再一个实施例提供的画像生成方法的流程示意图。该方法应用于上述服务器,下面将针对图6所示的流程进行详细的阐述,所述画像生成方法具体可以包括以下步骤:
步骤S410:获取目标用户使用的至少一个应用的应用信息。
步骤S420:根据所述应用信息,生成所述目标用户的应用标签。
步骤S430:根据所述应用标签,获取所述目标用户对应的群体标签,所述群体标签用于表征所述目标用户所属用户群体的行为特征。
步骤S440:根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像。
在本申请实施例中,步骤S410至步骤S440可以参阅前述实施例的内容,在此不再赘述。
在一些实施方式中,服务器还可以计算目标用户的应用标签与目标用户所属用户群体的群体标签之间的关联程度,并且根据该关联程度对目标用户的应用标签进行校正,以提升最终生成的用户画像的准确性。构建的用户画像可以用于信息的推送,由于用户画像中包括的标签的维度较多,因此可以适用于多种场景的推送,并且保证信息推送的准确性。
步骤S450:根据所述应用标签,对所述群体标签进行更新,所述更新后的群体标签包括所述应用标签中的至少部分行为特征。
在本申请实施例中,服务器在生成目标用户的应用标签之后,还可以根据目标用户的应用标签,对群体标签进行更新,更新后的群体标签可以包括该目标用户的应用标签中的至少部分行为特征。
在一些实施方式中,步骤S450可以包括:获取所述应用标签中所述群体标签不存在的维度的特征,作为第二行为特征;将所述第二行为特征添加至所述群体标签中。
在该实施方式中,服务器可以获取应用标签中群体标签不存在的维度的特征,也就是,可以获取群体标签中行为特征的维度与应用标签中行为特征的维度不同的维度,然后将目标用户的应用标签中获取的维度所对应的行为特征,作为第二行为特征。然后服务器可以将获取的第二行为特征添加至该目标用户所属的用户群体的群体标签中,这样可以使群体标签中的行为特征的维度增加,使后续构建其他用户的用户画像更加准确。
在另一些实施方式中,步骤S450可以包括:获取所述应用标签中的指定维度的特征作为第三行为特征;将所述第三行为特征添加至所述群体标签中。
在该实施方式中,指定维度可以为构建用户画像所需的行为特征的维度。服务器可以将目标用户的应用标签中所有指定维度的行为特征作为第三行为特征,并将第三行为特征添加至该目标用户所属用户群体的群体标签中,从而使目标用户所属群体的群体标签中的行为特征越来越多,使后续构建其他用户的用户画像更加准确。
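As a non-limiting illustration of the first update mode of step S450 described above, the sketch below adds to the group tag the feature dimensions that exist in the application tag but not yet in the group tag (the second behavior features); the dictionary representation is an assumption, and it mirrors the dimension difference used when building the portrait.
```python
# Illustrative sketch only: update the group tag with the feature dimensions
# that the target user's application tag has but the group tag lacks.
def update_group_tag(group_tag_features, app_tag_features):
    second_behavior_features = {dim: value
                                for dim, value in app_tag_features.items()
                                if dim not in group_tag_features}
    updated = dict(group_tag_features)
    updated.update(second_behavior_features)
    return updated
```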
在本申请实施例中,服务器还可以根据更新后的群体标签,对该目标用户所属用户群体中的各个用户的用户画像进行更新,并将更新后的用户画像进行存储,以使其他用户的用户画像更为准确。
本申请实施例提供的画像生成方法,通过获取目标用户使用的至少一个应用的应用信息,根据获取的应用信息,生成目标用户的应用标签,根据应用标签,获取目标用户对应的群体标签,群体标签用于表征目标用户所属用户群体的行为特征,最后根据应用标签以及群体标签,生成目标用户的用户画像,从而实现根据用户使用应用的应用信息,以及所属用户群体的群体标签,来构建维度较多的用户画像,使构建的用户画像的准确性提升。另外,服务器还根据目标用户的应用标签,对目标用户所属用户群体的群体标签进行更新,以使群体标签中的行为特征的维度增加,使后续构建的其他用户的用户画像更加准确。
请参阅图7,图7示出了本申请又另一个实施例提供的画像生成方法的流程示意图。该方法应用于上述服务器,下面将针对图7所示的流程进行详细的阐述,所述画像生成方法具体可以包括以下步骤:
步骤S510:获取目标用户使用的至少一个应用的应用信息。
步骤S520:根据所述应用信息,生成所述目标用户的应用标签。
步骤S530:根据所述应用标签,获取所述目标用户对应的群体标签,所述群体标签用于表征所述目标用户所属用户群体的行为特征。
步骤S540:根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像。
在本申请实施例中,步骤S510至步骤S540可以参阅前述实施例的内容,在此不再赘述。
步骤S550:根据所述用户画像,确定除所述至少一个应用以外的其他应用的推送内容。
在本申请实施例中,服务器在生成目标用户的用户画像之后,则可以将生成的用户画像用于该目标用户的各个应用的推送。除了以上服务器获取的应用信息所对应的应用以外,还可以根据用户画像,进行其他应用的推送。可以理解的,由于用户画像中包括了更多维度的行为特征的标签,因此用户画像可以用于其他应用的推送。具体地,服务器可以根据用户画像,确定以上至少一个应用以外的其他应用的推送内容。例如,以上至少一个应用为应用A、应用B和应用C,则可以根据用户画像确定应用D的推送内容。
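As a non-limiting illustration of determining push content for another application from the user portrait, the sketch below scores candidate contents by how many portrait tags they match and returns the best match; the overlap-count scoring rule and the data layout are assumptions for the example.
```python
# Illustrative sketch only: pick push content for another application by
# counting how many of its tags overlap with the user portrait.
def select_push_content(user_portrait, candidate_contents):
    """candidate_contents: list of {"content_id": ..., "tags": {dim: value}}."""
    portrait_items = set(user_portrait.items())
    def score(content):
        return len(portrait_items & set(content["tags"].items()))
    ranked = sorted(candidate_contents, key=score, reverse=True)
    return ranked[0] if ranked else None
```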
步骤S560:将所述推送内容推送至所述目标用户。
在本申请实施例中,服务器在确定出除至少一个应用以外的其他应用的推送内容之后,则可以将推送内容推送至目标用户。在进行其他应用的推送的内容的推送时,服务器可以将该推送内容发送至该其他应用所对应的服务器,以便该其他应用所对应的服务器进行推送。
步骤S570:确定所述至少一个应用所对应的应用服务器。
步骤S580:将所述目标用户的用户画像推送至所述应用服务器。
在本申请实施例中,服务器在生成目标用户的用户画像之后,也可以将目标用户的用户画像反馈至以上至少一个应用对应的应用服务器,以便应用服务器根据用户画像进行更精准的推送。可以理解的,由于以往的应用服务器仅能获取到其对应的应用的应用信息,来构建用户画像,然后进行推送,这种方式构建的用户画像中行为特征的维度并不多,可能并不能准确反映用户的行为习惯,而通过本申请实施例提供的方式来构建用户画像,则可以使这些应用服务器拥有更多维度的用户画像,这样在进行推送和营销时,能做到精准推送和营销,为商家和用户均带来便利。
本申请实施例提供的画像生成方法,通过获取目标用户使用的至少一个应用的应用信息,根据获取的应用信息,生成目标用户的应用标签,根据应用标签,获取目标用户对应的群体标签,群体标签用于表征目标用户所属用户群体的行为特征,最后根据应用标签以及群体标签,生成目标用户的用户画像,从而实现根据用户使用应用的应用信息,以及所属用户群体的群体标签,来构建维度较多的用户画像,使构建的用户画像的准确性提升。另外,服务器还根据生成的目标用户的用户画像,进行除以上至少一个应用以外的其他应用的内容推送。也还将用户画像反馈至以上至少一个应用对应的应用服务器,以便应用服务器根据用户画像进行更精准的推送。
请参阅图8,其示出了本申请实施例提供的一种画像生成装置400的结构框图。该画像生成装置400应用上述服务器,该画像生成装置400包括:信息获取模块410、标签生成模块420、标签获取模块430以及内容生成模块440。其中,所述信息获取模块410用于获取目标用户使用的至少一个应用的应用信息;所述标签生成模块420用于根据所述应用信息,生成所述目标用户的应用标签;所述标签获取模块430用于根据所述应用标签,获取所述目标用户对应的群体标签,所述群体标签用于表征所述目标用户所属用户群体的行为特征;所述内容生成模块440用于根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像。
在一些实施方式中,请参阅图9,所述内容生成模块440包括:特征获取单元441,用于获取所述群体标签中需要添加至所述目标用户的用户画像中的至少部分行为特征,作为第一行为特征;以及画像构建单元442,用于根据所述应用标签以及所述第一行为特征所构成的标签,生成所述目标用户的用户画像。
作为一种实施方式,请参阅图10,所述特征获取单元441包括:第一维度获取子单元4411,用于获取所述群体标签中存在的行为特征所对应的多个维度作为第一维度,以及所述应用标签中存在的行为特征所对应的多个维度作为第二维度;第二维度获取子单元4412,用于获取所述第一维度中与所述第二维度不同的维度,作为第三维度;以及行为特征获取子单元4413,用于将所述群体标签中所述第三维度的行为特征作为所述第一行为特征。
作为另一种实施方式,所述特征获取单元441可以具体用于:获取所述群体标签中第一设定维度的特征,作为所述第一行为特征。
在一些实施方式中,应用信息包括所述至少一个应用的用户信息以及所述至少一个应用的使用信息,所述应用标签包括基础画像标签以及行为画像标签。
在该实施方式下,标签生成模块420包括:第一标签生成单元,用于根据所述用户信息,生成所述目标用户的基础画像标签;第二标签生成单元,用于根据所述使用信息,生成所述目标用户的行为画像标签。
在一些实施方式中,标签获取模块430包括:群体获取单元,用于根据所述应用标签中的行为特征,获取所述目标用户所属的用户群体;标签确定单元,用于获取所述用户群体的群体标签作为所述目标用户的群体标签。
在该实施方式下,群体获取单元包括:相似度获取子单元,用于获取所述应用标签中的行为特征,与每个用户群体的行为特征的相似度;以及群体确定子单元,用于获取所述相似度满足指定数值条件的用户群体,作为所述目标用户所属的用户群体。
在一些实施方式中,该画像生成装置400还可以包括:应用标签获取模块、用户分群模块以及群体标签生成模块。应用标签获取模块用于获取多个用户的应用标签;用户分群模块用于根据所述多个用户中每个用户的应用标签,将所述多个用户分为一个或多个用户群体;群体标签生成模块用于根据所述多个用户的应用标签,生成每个用户群体的群体标签。
在该实施方式下,用户分群模块包括:分类单元,用于根据每个用户的应用标签中的行为特征,进行聚类或者按照设定规则进行分类,获得一个或多个类别;群体获得单元,用于将每个类别中行为特征所对应的所有用户作为一个用户群体,获得一个或多个用户群体。
在该实施方式下,群体标签生成模块包括:标签分类单元,用于根据所述多个用户的应用标签,获取每个用户群体中所有用户的应用标签;生成执行单元,用于根据每个用户群体所对应的所有应用标签,生成每个用户群体的群体标签。
在一些实施方式中,该画像生成装置400还可以包括:标签更新模块。标签更新模块用于根据所述应用标签,对所述群体标签进行更新,所述更新后的群体标签包括所述应用标签中的至少部分行为特征。
作为一种实施方式,标签更新模块包括:第一特征获取单元,用于获取所述应用标签中所述群体标签不存在的维度的特征,作为第二行为特征;第一特征添加单元,用于将所述第二行为特征添加至所述群体标签中。
作为另一种实施方式,标签更新模块包括:第二特征获取单元,用于获取所述应用标签中的指定维度的特征作为第三行为特征;第二特征添加单元,用于将所述第三行为特征添加至所述群体标签中。
在一些实施方式中,该画像生成装置400还可以包括:内容确定模块以及内容推送模块。内容确定模块用于根据所述用户画像,确定除所述至少一个应用以外的其他应用的推送内容;内容推送模块用于将所述推送内容推送至所述目标用户。
在一些实施方式中,该画像生成装置400还可以包括:服务器确定模块以及画像推送模块。服务器确定模块用于确定所述至少一个应用所对应的应用服务器;画像推送模块用于将所述目标用户的用户画像推送至所述应用服务器。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,模块相互之间的耦合可以是电性,机械或其它形式的耦合。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
综上所述,通过获取目标用户使用的至少一个应用的应用信息,根据获取的应用信息,生成目标用户的应用标签,根据应用标签,获取目标用户对应的群体标签,群体标签用于表征目标用户所属用户群体的行为特征,最后根据应用标签以及群体标签,生成目标用户的用户画像,从而实现根据用户使用应用的应用信息,以及所属用户群体的群体标签,来构建维度较多的用户画像,使构建的用户画像的准确性提升。
请参考图11,其示出了本申请实施例提供的一种服务器的结构框图。该服务器100可以是传统服务器、云服务器等能够运行应用程序的服务器。本申请中的服务器100可以包括一个或多个如下部件:处理器110、存储器120、触摸屏130以及一个或多个应用程序,其中一个或多个应用程序可以被存储在存储器120中并被配置为由一个或多个处理器110执行,一个或多个程序配置用于执行如前述方法实施例所描述的方法。
处理器110可以包括一个或者多个处理核。处理器110利用各种接口和线路连接整个服务器100内的各个部分,通过运行或执行存储在存储器120内的指令、程序、代码集或指令集,以及调用存储在存储器120内的数据,执行服务器100的各种功能和处理数据。可选地,处理器110可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器110可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器110中,单独通过一块通信芯片进行实现。
存储器120可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。存储器120可用于存储指令、程序、代码、代码集或指令集。存储器120可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等。存储数据区还可以存储服务器100在使用中所创建的数据(比如电话本、音视频数据、聊天记录数据)等。
请参考图12,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读介质800中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。
计算机可读存储介质800可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质800包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质800具有执行上述方法中的任何方法步骤的程序代码810的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码810可以例如以适当形式进行压缩。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (20)

  1. 一种画像生成方法,其特征在于,所述方法包括:
    获取目标用户使用的至少一个应用的应用信息;
    根据所述应用信息,生成所述目标用户的应用标签;
    根据所述应用标签,获取所述目标用户对应的群体标签,所述群体标签用于表征所述目标用户所属用户群体的行为特征;
    根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像,包括:
    获取所述群体标签中需要添加至所述目标用户的用户画像中的至少部分行为特征,作为第一行为特征;
    根据所述应用标签以及所述第一行为特征所构成的标签,生成所述目标用户的用户画像。
  3. 根据权利要求2所述的方法,其特征在于,所述获取所述群体标签中需要添加至所述目标用户的用户画像中的至少部分行为特征,作为第一行为特征,包括:
    获取所述群体标签中存在的行为特征所对应的多个维度作为第一维度,以及所述应用标签中存在的行为特征所对应的多个维度作为第二维度;
    获取所述第一维度中与所述第二维度不同的维度,作为第三维度;
    将所述群体标签中所述第三维度的行为特征作为所述第一行为特征。
  4. 根据权利要求2所述的方法,其特征在于,所述获取所述群体标签中需要添加至所述目标用户的用户画像中的至少部分行为特征,作为第一行为特征,包括:
    获取所述群体标签中第一设定维度的特征,作为所述第一行为特征。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述应用信息包括所述至少一个应用的用户信息以及所述至少一个应用的使用信息,所述应用标签包括基础画像标签以及行为画像标签;
    所述根据所述应用信息,生成所述目标用户的应用标签,包括:
    根据所述用户信息,生成所述目标用户的基础画像标签;
    根据所述使用信息,生成所述目标用户的行为画像标签。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述根据所述应用标签,获取所述目标用户的群体标签,包括:
    根据所述应用标签中的行为特征,获取所述目标用户所属的用户群体;
    获取所述用户群体的群体标签作为所述目标用户的群体标签。
  7. 根据权利要求6所述的方法,其特征在于,所述根据所述应用标签中的行为特征,获取所述目标用户所属的用户群体,包括:
    获取所述应用标签中的行为特征,与每个用户群体的行为特征的相似度;
    获取所述相似度满足指定数值条件的用户群体,作为所述目标用户所属的用户群体。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,在所述根据所述应用标签,获取所述目标用户对应的群体标签之前,所述方法还包括:
    获取多个用户的应用标签;
    根据所述多个用户中每个用户的应用标签,将所述多个用户分为一个或多个用户群体;
    根据所述多个用户的应用标签,生成每个用户群体的群体标签。
  9. 根据权利要求8所述的方法,其特征在于,所述根据所述多个用户中每个用户的应用标签,将所述多个用户分为一个或多个用户群体,包括:
    根据每个用户的应用标签中的行为特征,进行聚类或者按照设定规则进行分类,获得一个或多个类别;
    将每个类别中行为特征所对应的所有用户作为一个用户群体,获得一个或多个用户群体。
  10. 根据权利要求8所述的方法,其特征在于,所述根据所述多个用户的应用标签,生成每个用户群体的群体标签,包括:
    根据所述多个用户的应用标签,获取每个用户群体中所有用户的应用标签;
    根据每个用户群体所对应的所有应用标签,生成每个用户群体的群体标签。
  11. 根据权利要求1-10任一项所述的方法,其特征在于,在所述根据所述应用标签,获取所述目标用户对应的群体标签之后,所述方法还包括:
    根据所述应用标签,对所述群体标签进行更新,所述更新后的群体标签包括所述应用标签中的至少部分行为特征。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述应用标签,对所述群体标签进行更新,包括:
    获取所述应用标签中所述群体标签不存在的维度的特征,作为第二行为特征;
    将所述第二行为特征添加至所述群体标签中。
  13. 根据权利要求11所述的方法,其特征在于,所述根据所述应用标签,对所述群体标签进行更新,包括:
    获取所述应用标签中的指定维度的特征作为第三行为特征;
    将所述第三行为特征添加至所述群体标签中。
  14. 根据权利要求1-13任一项所述的方法,其特征在于,在所述根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像之后,所述方法还包括:
    根据所述用户画像,确定除所述至少一个应用以外的其他应用的推送内容;
    将所述推送内容推送至所述目标用户。
  15. 根据权利要求1-14任一项所述的方法,其特征在于,在所述根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像之后,所述方法还包括:
    确定所述至少一个应用所对应的应用服务器;
    将所述目标用户的用户画像推送至所述应用服务器。
  16. 一种画像生成装置,其特征在于,所述装置包括:信息获取模块、标签生成模块、标签获取模块以及内容生成模块,其中,
    所述信息获取模块用于获取目标用户使用的至少一个应用的应用信息;
    所述标签生成模块用于根据所述应用信息,生成所述目标用户的应用标签;
    所述标签获取模块用于根据所述应用标签,获取所述目标用户对应的群体标签,所述群体标签用于表征所述目标用户所属用户群体的行为特征;
    所述内容生成模块用于根据所述应用标签以及所述群体标签,生成所述目标用户的用户画像。
  17. 根据权利要求16所述的装置,其特征在于,所述内容生成模块包括:
    特征获取单元,用于获取所述群体标签中需要添加至所述目标用户的用户画像中的至少部分行为特征,作为第一行为特征;以及
    画像构建单元,用于根据所述应用标签以及所述第一行为特征所构成的标签,生成所述目标用户的用户画像。
  18. 根据权利要求17所述的装置,其特征在于,所述特征获取单元包括:
    第一维度获取子单元,用于获取所述群体标签中存在的行为特征所对应的多个维度作为第一维度,以及所述应用标签中存在的行为特征所对应的多个维度作为第二维度;
    第二维度获取子单元,用于获取所述第一维度中与所述第二维度不同的维度,作为第三维度;以及
    行为特征获取子单元,用于将所述群体标签中所述第三维度的行为特征作为所述第一行为特征。
  19. 一种服务器,其特征在于,包括:
    一个或多个处理器;
    存储器;
    一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于执行如权利要求1-15任一项所述的方法。
  20. 一种计算机可读取存储介质,其特征在于,所述计算机可读取存储介质中存储有程序代码,所述程序代码可被处理器调用执行如权利要求1-15任一项所述的方法。
PCT/CN2020/072502 2020-01-16 2020-01-16 画像生成方法、装置、服务器及存储介质 WO2021142719A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/072502 WO2021142719A1 (zh) 2020-01-16 2020-01-16 画像生成方法、装置、服务器及存储介质
CN202080084186.7A CN114902212A (zh) 2020-01-16 2020-01-16 画像生成方法、装置、服务器及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/072502 WO2021142719A1 (zh) 2020-01-16 2020-01-16 画像生成方法、装置、服务器及存储介质

Publications (1)

Publication Number Publication Date
WO2021142719A1 true WO2021142719A1 (zh) 2021-07-22

Family ID: 76863344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/072502 WO2021142719A1 (zh) 2020-01-16 2020-01-16 画像生成方法、装置、服务器及存储介质

Country Status (2)

Country Link
CN (1) CN114902212A (zh)
WO (1) WO2021142719A1 (zh)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160063536A1 (en) * 2014-08-27 2016-03-03 InMobi Pte Ltd. Method and system for constructing user profiles
CN105893407A (zh) * 2015-11-12 2016-08-24 乐视云计算有限公司 个体用户画像方法和系统
WO2019140703A1 (zh) * 2018-01-22 2019-07-25 华为技术有限公司 一种用户画像的生成方法及装置
CN110782289A (zh) * 2019-10-28 2020-02-11 方文珠 一种基于用户画像的业务推荐方法和系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806638A (zh) * 2021-09-29 2021-12-17 中国平安人寿保险股份有限公司 基于用户画像的个性化推荐方法及相关设备
CN113806638B (zh) * 2021-09-29 2023-12-08 中国平安人寿保险股份有限公司 基于用户画像的个性化推荐方法及相关设备
CN114461674A (zh) * 2022-01-21 2022-05-10 浪潮卓数大数据产业发展有限公司 一种优化用户画像的实现方法及系统
CN114880535A (zh) * 2022-06-09 2022-08-09 昕新讯飞科技(北京)有限公司 一种基于通讯大数据的用户画像生成方法

Also Published As

Publication number Publication date
CN114902212A (zh) 2022-08-12

Legal Events

  • 121: The EPO has been informed by WIPO that EP was designated in this application (ref document number: 20913393; country of ref document: EP; kind code of ref document: A1)
  • NENP: Non-entry into the national phase (ref country code: DE)
  • 32PN: Public notification in the EP bulletin, as the address of the addressee cannot be established (free format text: noting of loss of rights pursuant to Rule 112(1) EPC, EPO Form 1205A dated 07.12.2022)
  • 122: PCT application non-entry into the European phase (ref document number: 20913393; country of ref document: EP; kind code of ref document: A1)