CN108154401B - User portrait depicting method, device, medium and computing equipment - Google Patents


Info

Publication number
CN108154401B
CN108154401B (application CN201810037604.1A)
Authority
CN
China
Prior art keywords
data
user
dimension
consumption
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810037604.1A
Other languages
Chinese (zh)
Other versions
CN108154401A (en)
Inventor
傅凌进
苏英敏
范启弘
严言
袁博
吴珂
刘潇然
王迎宾
毛成军
沈琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Shenzhen Technology Co ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN201810037604.1A priority Critical patent/CN108154401B/en
Publication of CN108154401A publication Critical patent/CN108154401A/en
Application granted granted Critical
Publication of CN108154401B publication Critical patent/CN108154401B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data

Abstract

An embodiment of the invention provides a user portrait depicting method, comprising: for a target user, acquiring original data used for depicting the user portrait of the target user; determining, based on the original data, user features expressed by the target user in a plurality of first preset dimensions; and depicting the user portrait of the target user through the user features expressed by the target user in the plurality of first preset dimensions. In addition, embodiments of the invention provide a user portrait depicting apparatus, a computer-readable storage medium and a computing device.

Description

User portrait depicting method, device, medium and computing equipment
Technical Field
Embodiments of the present invention relate to the field of computers, and more particularly, to a user portrait depicting method, a user portrait depicting apparatus, a computer storage medium and a computing device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
A user portrait is a virtual representation of a real user on the Internet: a user model built on top of a series of real data. In general, a user portrait makes it possible to understand users through user research and data mining. Specifically, users can be grouped into different types according to differences in their behaviors, viewpoints, targets of interest and the like; typical features are then extracted from each type and applied to systems such as recommendation, search, advertisement and push systems, and tag features can be further abstracted and applied to various marketing systems.
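The grouping-and-typical-feature idea above can be sketched in a few lines. The behavioral threshold, field names, and the "heavy"/"light" types below are invented for illustration and do not come from the patent:

```python
# Toy illustration of grouping users into types by a behavioral difference and
# extracting a typical feature per type; all thresholds and fields are invented.

from statistics import mean

users = [
    {"id": "u1", "sessions_per_week": 14, "favorite": "sports"},
    {"id": "u2", "sessions_per_week": 1,  "favorite": "news"},
    {"id": "u3", "sessions_per_week": 12, "favorite": "sports"},
]

def user_type(u):
    # Distinguish users into types according to a behavioral difference.
    return "heavy" if u["sessions_per_week"] >= 7 else "light"

groups = {}
for u in users:
    groups.setdefault(user_type(u), []).append(u)

# Extract a typical feature (mean activity) per type, usable as a tag feature.
typical = {t: mean(u["sessions_per_week"] for u in members)
           for t, members in groups.items()}
# typical == {"heavy": 13, "light": 1}
```

A recommendation or marketing system could then key its behavior on the type label rather than on each individual user.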
Currently, some user portrayal methods have emerged in the related art.
However, in the course of implementing the inventive concept, the inventors found at least the following drawback in the related art: an existing user portrait can only reflect a user's features in one specific field and cannot reflect the user's features in other fields. For example, some user portraits can only be applied in the communication field, while others can only be applied in the Internet field, and those applied in the Internet field can only reflect the association relationships of the user's family attributes.
In view of the above problems in the related art, no effective solution has been proposed at present.
Disclosure of Invention
An existing user portrait can only reflect a user's features in one specific field and cannot reflect the user's features in other fields, so the application field of the user portrait is very limited.
As a result, existing user portraits cannot be applied in wider fields, which is highly undesirable.
For this reason, there is a strong need for an improved user portrait depicting method that can be applied in a wider range of fields.
In this context, embodiments of the present invention desire to provide an improved user portrait depicting method and apparatus.
In a first aspect of embodiments of the present invention, there is provided a user portrait depicting method, comprising: for a target user, acquiring original data used for depicting the user portrait of the target user; determining, based on the original data, user features expressed by the target user in a plurality of first preset dimensions; and depicting the user portrait of the target user through the user features expressed by the target user in the plurality of first preset dimensions.
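The three steps of the first aspect can be illustrated with a minimal sketch; `fetch_raw_data` and the two example dimensions are hypothetical stand-ins for whatever data source and dimension set a real deployment would use, not part of the claimed method:

```python
# Hypothetical sketch of the acquire -> determine -> depict flow; all names and
# the two dimensions are illustrative assumptions.

def fetch_raw_data(user_id):
    # Stand-in for acquiring the original data used to depict the portrait.
    return {"age": 34, "last_login": "2018-01-10", "orders": 57}

FEATURE_EXTRACTORS = {
    "attribute": lambda raw: {"age_band": "30-39" if 30 <= raw["age"] < 40 else "other"},
    "activity":  lambda raw: {"order_count": raw["orders"]},
}

def depict_user_portrait(user_id):
    raw = fetch_raw_data(user_id)                   # step 1: acquire original data
    features = {dim: extract(raw)                   # step 2: features per preset dimension
                for dim, extract in FEATURE_EXTRACTORS.items()}
    return {"user": user_id, "portrait": features}  # step 3: depict the portrait

portrait = depict_user_portrait("u123")
```

Adding a dimension is then just adding an entry to the extractor table, which is what lets the portrait span multiple fields at once.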
In one embodiment of the present invention, in the process of depicting the user portrait of the target user: the user features expressed by the target user in at least one dimension of the plurality of first preset dimensions are labeled to obtain corresponding user feature tags; and the user portrait of the target user is depicted through the user feature tags and other user features, wherein the other user features include: user features expressed by the target user in dimensions of the plurality of first preset dimensions other than the at least one dimension.
In another embodiment of the present invention, tagging a user feature expressed by the target user in at least one of the plurality of first preset dimensions to obtain a corresponding user feature tag includes: for each dimension in the at least one dimension, determining a category of user features expressed by the target user in the dimension; performing semantic analysis on the user characteristics expressed by the target user in the dimension to obtain semantics corresponding to the user characteristics of the category; and associating the category with semantics corresponding to the user features of the category to obtain the user feature tag on the dimension.
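A toy sketch of this tagging step, with the semantic analysis reduced to a hypothetical lookup table (a production system might use real NLP here); the rule entries are invented:

```python
# Illustrative tagging: determine the category of a feature, derive its
# semantics, and associate the two into a user feature tag. The keyword table
# is a hypothetical stand-in for semantic analysis.

SEMANTIC_RULES = {
    "preference": {"sci-fi": "prefers science-fiction titles"},
    "activity":   {"daily": "logs in every day"},
}

def tag_feature(category, feature_value):
    # Semantic analysis reduced to a lookup for illustration.
    semantics = SEMANTIC_RULES.get(category, {}).get(feature_value, "unknown")
    # Associate the category with the semantics to form the tag.
    return f"{category}:{feature_value} ({semantics})"

tag = tag_feature("preference", "sci-fi")
# tag == "preference:sci-fi (prefers science-fiction titles)"
```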
In another embodiment of the present invention, tagging a user feature expressed by the target user in at least one of the plurality of first preset dimensions to obtain a corresponding user feature tag includes: for each dimension in the at least one dimension, counting the user characteristics expressed by the target user in the dimension to obtain a corresponding statistical result; acquiring additional information input from outside; and labeling the user characteristics expressed by the target user in the dimension based on the statistical result and the additional information to obtain the user characteristic label in the dimension.
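The statistics-plus-additional-information variant might look roughly as follows; the frequency threshold and the `label_prefix` field standing in for the externally input additional information are assumptions for illustration:

```python
# Hedged sketch: count a user's feature occurrences in one dimension, then
# combine the statistical result with externally supplied additional
# information to produce the dimension's tags. Threshold is illustrative.

from collections import Counter

def tag_from_stats(events, extra_info, threshold=3):
    counts = Counter(events)  # statistical result over the dimension
    frequent = [e for e, n in counts.items() if n >= threshold]
    # The additional information input from outside (here, an operator-supplied
    # label prefix) is folded into the final tag.
    prefix = extra_info.get("label_prefix", "tag")
    return [f"{prefix}:{e}" for e in sorted(frequent)]

tags = tag_from_stats(["sports", "sports", "news", "sports"],
                      {"label_prefix": "pref"})
# tags == ["pref:sports"]
```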
In another embodiment of the present invention, determining, based on the original data, the user features expressed by the target user in the plurality of first preset dimensions includes: extracting multiple types of target data related to the depiction of the user portrait from the original data; and determining, based on the multiple types of target data, the user features expressed by the target user in the plurality of first preset dimensions, wherein one type of target data corresponds to one first preset dimension.
In another embodiment of the present invention, determining, based on the multiple types of target data, the user features expressed by the target user in the plurality of first preset dimensions includes: translating each type of target data in the multiple types of target data according to a preset rule to obtain corresponding multiple types of structured data; performing data analysis on each type of structured data in the multiple types of structured data to obtain multiple types of data objects, wherein one type of data object corresponds to one first preset dimension; and determining, based on the multiple types of data objects, the user features expressed by the target user in the plurality of first preset dimensions.
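The translate-parse-determine pipeline can be sketched as below, assuming a toy preset rule that turns `key=value` records into dictionaries; the `DataObject` shape and the rule itself are invented illustrations, not taken from the patent:

```python
# Minimal sketch of translating target data into structured data, parsing it
# into data objects (one per dimension), and determining features from them.

from dataclasses import dataclass

@dataclass
class DataObject:
    dimension: str
    payload: dict

def translate(record):
    # Hypothetical preset rule: "k1=v1;k2=v2" -> structured dict.
    return dict(pair.split("=", 1) for pair in record.split(";"))

def parse(dimension, structured):
    return DataObject(dimension=dimension, payload=structured)

def determine_features(target_records):
    objects = [parse(dim, translate(rec)) for dim, rec in target_records.items()]
    return {obj.dimension: obj.payload for obj in objects}

features = determine_features({"access": "page=home;count=12"})
# features == {"access": {"page": "home", "count": "12"}}
```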
In still another embodiment of the present invention, the plurality of target data includes at least two of the following data: basic data, original record data, fact data, statistical data, and prediction data.
In a further embodiment of the present invention, in a case where the target data includes the basic data, the method further includes: analyzing the basic data to obtain one or more of the following analyzed data: attribute data, role data, association data, terminal data and registration data, wherein the first preset dimensions corresponding to the basic data include: an attribute dimension, a role dimension, an association dimension, a terminal dimension and a registration dimension; when the analyzed data of the basic data includes the attribute data, depicting the attribute portrait of the target user in the attribute dimension based on the attribute data; when the analyzed data of the basic data includes the role data, depicting the role portrait of the target user in the role dimension based on the role data; when the analyzed data of the basic data includes the association data, depicting the association portrait of the target user in the association dimension based on the association data; when the analyzed data of the basic data includes the terminal data, depicting the terminal portrait of the target user in the terminal dimension based on the terminal data; and when the analyzed data of the basic data includes the registration data, depicting the registration portrait of the target user in the registration dimension based on the registration data.
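The conditional depiction over the five basic-data dimensions reduces to a dispatch over whichever parsed data are present; the dictionary shapes below are hypothetical:

```python
# Sketch of the conditional depiction: each kind of parsed basic data, when
# present, yields a portrait on its matching dimension. Field names invented.

BASIC_DIMENSIONS = ["attribute", "role", "association", "terminal", "registration"]

def depict_basic(parsed):
    # Only dimensions whose parsed data is actually present are depicted.
    return {dim: {"portrait": parsed[dim]}
            for dim in BASIC_DIMENSIONS if dim in parsed}

portraits = depict_basic({"attribute": {"gender": "f"},
                          "terminal": {"os": "iOS"}})
# portraits has exactly the "attribute" and "terminal" dimensions
```

The same dispatch pattern applies unchanged to the record, fact, statistical, and prediction data described in the following embodiments.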
In a further embodiment of the present invention, in a case where the target data includes the original record data, the method further includes: analyzing the original record data to obtain one or more of the following analyzed data: access data, operation data, first consumption data, second consumption data and feedback data, wherein the first preset dimensions corresponding to the original record data include: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; when the analyzed data of the original record data includes the access data, depicting the access portrait of the target user in the access dimension based on the access data; when the analyzed data of the original record data includes the operation data, depicting the operation portrait of the target user in the operation dimension based on the operation data; when the analyzed data of the original record data includes the first consumption data, depicting the first consumption portrait of the target user in the first consumption dimension based on the first consumption data; when the analyzed data of the original record data includes the second consumption data, depicting the second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and when the analyzed data of the original record data includes the feedback data, depicting the feedback portrait of the target user in the feedback dimension based on the feedback data.
In a further embodiment of the present invention, in a case where the target data includes the fact data, the method further includes: analyzing the fact data to obtain one or more of the following analyzed data: the earliest M consumption data, the last N consumption data, the first L consumption data and consumption data within a preset time period, wherein the first preset dimensions corresponding to the fact data include: an earliest-M consumption dimension, a last-N consumption dimension, a first-L consumption dimension and a preset-time-period consumption dimension; when the analyzed data of the fact data includes the earliest M consumption data, depicting the earliest-M consumption portrait of the target user in the earliest-M consumption dimension based on the earliest M consumption data; when the analyzed data of the fact data includes the last N consumption data, depicting the last-N consumption portrait of the target user in the last-N consumption dimension based on the last N consumption data; when the analyzed data of the fact data includes the first L consumption data, depicting the first-L consumption portrait of the target user in the first-L consumption dimension based on the first L consumption data; and when the analyzed data of the fact data includes consumption data within the preset time period, depicting the preset-time-period consumption portrait of the target user in the preset-time-period consumption dimension based on the consumption data within the preset time period.
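The four fact-data views might be computed as follows. The record shape (`ts`, `amount`) and the reading of "first L" as top-L-by-amount are assumptions made for illustration; the patent does not fix either:

```python
# Illustrative slicing of fact (consumption) data into four views: earliest M,
# last N, top L (assumed: by amount), and a preset time window.

def slice_consumption(records, m=2, n=2, l=2, window=(10, 20)):
    by_time = sorted(records, key=lambda r: r["ts"])
    return {
        "earliest_m": by_time[:m],       # earliest M consumption records
        "last_n": by_time[-n:],          # most recent N consumption records
        "top_l": sorted(records, key=lambda r: r["amount"],
                        reverse=True)[:l],  # assumed reading of "first L"
        "in_window": [r for r in by_time
                      if window[0] <= r["ts"] <= window[1]],  # preset period
    }

recs = [{"ts": 5, "amount": 9}, {"ts": 15, "amount": 30}, {"ts": 25, "amount": 1}]
views = slice_consumption(recs)
# views["in_window"] contains only the ts=15 record
```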
In a further embodiment of the present invention, in a case where the target data includes the statistical data, the method further includes: analyzing the statistical data to obtain one or more of the following analyzed data: preference data, geographic data and rule data, wherein the first preset dimensions corresponding to the statistical data include: a preference dimension, a geographic dimension, and a rule dimension; when the analyzed data of the statistical data includes the preference data, depicting the preference portrait of the target user in the preference dimension based on the preference data; when the analyzed data of the statistical data includes the geographic data, depicting the geographic portrait of the target user in the geographic dimension based on the geographic data; and when the analyzed data of the statistical data includes the rule data, depicting the rule portrait of the target user in the rule dimension based on the rule data.
In a further embodiment of the present invention, in a case where the target data includes the prediction data, the method further includes: analyzing the prediction data to obtain one or more of the following analyzed data: probability prediction data, risk prediction data and strategy prediction data, wherein the first preset dimensions corresponding to the prediction data include: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; when the analyzed data of the prediction data includes the probability prediction data, depicting the probability prediction portrait of the target user in the probability prediction dimension based on the probability prediction data; when the analyzed data of the prediction data includes the risk prediction data, depicting the risk prediction portrait of the target user in the risk prediction dimension based on the risk prediction data; and when the analyzed data of the prediction data includes the strategy prediction data, depicting the strategy prediction portrait of the target user in the strategy prediction dimension based on the strategy prediction data.
In a second aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processing module, are used to implement the user portrait depicting method according to any one of the above embodiments.
In a third aspect of embodiments of the present invention, there is provided a user portrait depicting apparatus, comprising: an acquisition module, configured to acquire, for a target user, original data used for depicting the user portrait of the target user; a determining module, configured to determine, based on the original data, user features expressed by the target user in a plurality of first preset dimensions; and a first depicting module, configured to depict the user portrait of the target user through the user features expressed by the target user in the plurality of first preset dimensions.
In an embodiment of the present invention, the apparatus further includes: a labeling module, configured to label, in the process of depicting the user portrait of the target user, the user features expressed by the target user in at least one dimension of the plurality of first preset dimensions to obtain corresponding user feature tags; and a second depicting module, configured to depict the user portrait of the target user through the user feature tags and other user features, wherein the other user features include: user features expressed by the target user in dimensions of the plurality of first preset dimensions other than the at least one dimension.
In another embodiment of the present invention, the labeling module includes: a first determining unit, configured to determine, for each of the at least one dimension, a category of a user feature expressed by the target user in the dimension; the analysis unit is used for performing semantic analysis on the user characteristics expressed by the target user in the dimension to obtain the semantics corresponding to the user characteristics of the category; and a first generating unit, configured to associate the category with a semantic corresponding to the user feature of the category to obtain a user feature tag in the dimension.
In another embodiment of the present invention, the labeling module includes: a counting unit, configured to count, for each dimension of the at least one dimension, user characteristics expressed by the target user in the dimension to obtain a corresponding statistical result; an acquisition unit for acquiring additional information input externally; and a second generating unit, configured to label, based on the statistical result and the additional information, a user feature expressed by the target user in the dimension, so as to obtain a user feature label in the dimension.
In yet another embodiment of the present invention, the determining module includes: an extraction unit for extracting a plurality of target data related to the portrayal of the user from the original data; and a second determining unit, configured to determine, based on the multiple types of target data, user characteristics that the target user exhibits in the multiple first preset dimensions, where one type of target data corresponds to one first preset dimension.
In still another embodiment of the present invention, the second determination unit includes: the translation subunit is used for translating each target data in the multiple target data according to a preset rule to obtain corresponding multiple structured data; the analysis subunit is configured to perform data analysis on each of the plurality of types of structured data to obtain a plurality of types of data objects, where one type of data object corresponds to a first preset dimension; and a determining subunit, configured to determine, based on the multiple data objects, user characteristics that the target user exhibits in the multiple first preset dimensions.
In still another embodiment of the present invention, the plurality of target data includes at least two of the following data: basic data, original record data, fact data, statistical data, and prediction data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the basic data, analyze the basic data to obtain one or more of the following analyzed data: attribute data, role data, association data, terminal data and registration data, wherein the first preset dimensions corresponding to the basic data include: an attribute dimension, a role dimension, an association dimension, a terminal dimension and a registration dimension; and the first depicting module is further configured to: when the analyzed data of the basic data includes the attribute data, depict the attribute portrait of the target user in the attribute dimension based on the attribute data; when the analyzed data of the basic data includes the role data, depict the role portrait of the target user in the role dimension based on the role data; when the analyzed data of the basic data includes the association data, depict the association portrait of the target user in the association dimension based on the association data; when the analyzed data of the basic data includes the terminal data, depict the terminal portrait of the target user in the terminal dimension based on the terminal data; and when the analyzed data of the basic data includes the registration data, depict the registration portrait of the target user in the registration dimension based on the registration data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the original record data, analyze the original record data to obtain one or more of the following analyzed data: access data, operation data, first consumption data, second consumption data and feedback data, wherein the first preset dimensions corresponding to the original record data include: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; and the first depicting module is further configured to: when the analyzed data of the original record data includes the access data, depict the access portrait of the target user in the access dimension based on the access data; when the analyzed data of the original record data includes the operation data, depict the operation portrait of the target user in the operation dimension based on the operation data; when the analyzed data of the original record data includes the first consumption data, depict the first consumption portrait of the target user in the first consumption dimension based on the first consumption data; when the analyzed data of the original record data includes the second consumption data, depict the second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and when the analyzed data of the original record data includes the feedback data, depict the feedback portrait of the target user in the feedback dimension based on the feedback data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the fact data, analyze the fact data to obtain one or more of the following analyzed data: the earliest M consumption data, the last N consumption data, the first L consumption data and consumption data within a preset time period, wherein the first preset dimensions corresponding to the fact data include: an earliest-M consumption dimension, a last-N consumption dimension, a first-L consumption dimension and a preset-time-period consumption dimension; and the first depicting module is further configured to: when the analyzed data of the fact data includes the earliest M consumption data, depict the earliest-M consumption portrait of the target user in the earliest-M consumption dimension based on the earliest M consumption data; when the analyzed data of the fact data includes the last N consumption data, depict the last-N consumption portrait of the target user in the last-N consumption dimension based on the last N consumption data; when the analyzed data of the fact data includes the first L consumption data, depict the first-L consumption portrait of the target user in the first-L consumption dimension based on the first L consumption data; and when the analyzed data of the fact data includes consumption data within the preset time period, depict the preset-time-period consumption portrait of the target user in the preset-time-period consumption dimension based on the consumption data within the preset time period.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the statistical data, analyze the statistical data to obtain one or more of the following analyzed data: preference data, geographic data and rule data, wherein the first preset dimensions corresponding to the statistical data include: a preference dimension, a geographic dimension, and a rule dimension; and the first depicting module is further configured to: when the analyzed data of the statistical data includes the preference data, depict the preference portrait of the target user in the preference dimension based on the preference data; when the analyzed data of the statistical data includes the geographic data, depict the geographic portrait of the target user in the geographic dimension based on the geographic data; and when the analyzed data of the statistical data includes the rule data, depict the rule portrait of the target user in the rule dimension based on the rule data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the prediction data, analyze the prediction data to obtain one or more of the following analyzed data: probability prediction data, risk prediction data and strategy prediction data, wherein the first preset dimensions corresponding to the prediction data include: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; and the first depicting module is further configured to: when the analyzed data of the prediction data includes the probability prediction data, depict the probability prediction portrait of the target user in the probability prediction dimension based on the probability prediction data; when the analyzed data of the prediction data includes the risk prediction data, depict the risk prediction portrait of the target user in the risk prediction dimension based on the risk prediction data; and when the analyzed data of the prediction data includes the strategy prediction data, depict the strategy prediction portrait of the target user in the strategy prediction dimension based on the strategy prediction data.
In a fourth aspect of embodiments of the present invention, there is provided a computing device comprising: a processing module; and a storage module, on which executable instructions are stored, wherein the instructions, when executed by the processing module, are used for implementing the user portrait depicting method according to any one of the embodiments.
According to the user portrait depicting method and apparatus of the embodiments of the present invention, the portrait of any user is depicted in a plurality of dimensions that reflect different features of the user. The user portrait can thus be depicted from multiple angles and at multiple levels, which at least partially overcomes the drawback in the related art that the application field of a user portrait is limited because it is depicted only at the communication level or the family-attribute association level, and significantly expands the fields in which a user portrait can be applied.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates an exemplary application scenario suitable for the user portrait depicting method and apparatus according to an embodiment of the present invention;
FIG. 2 schematically illustrates a flow diagram of a user portrait depicting method according to an embodiment of the present invention;
FIG. 3A schematically illustrates a diagram of depicting a user portrait according to an embodiment of the invention;
FIG. 3B schematically illustrates a flow diagram of tagging user features according to an embodiment of the invention;
FIG. 3C schematically illustrates a flow diagram of tagging user features according to another embodiment of the invention;
FIG. 3D schematically illustrates a flow chart for determining user characteristics according to an embodiment of the invention;
FIG. 3E schematically illustrates a flow chart for determining user characteristics according to another embodiment of the present invention;
FIG. 3F is a schematic diagram that schematically illustrates the composition of target data, in accordance with an embodiment of the present invention;
FIG. 3G schematically shows a diagram of the composition of the underlying data according to an embodiment of the invention;
FIG. 3H is a schematic diagram that schematically illustrates the composition of raw recorded data, in accordance with an embodiment of the present invention;
FIG. 3I schematically shows a block diagram of the composition of fact data according to an embodiment of the invention;
FIG. 3J schematically illustrates a diagram of the composition of statistical data according to an embodiment of the invention;
FIG. 3K schematically shows a diagram of the composition of prediction data according to an embodiment of the invention;
FIG. 4 schematically illustrates a block diagram of a user portrait depicting apparatus in accordance with an embodiment of the present invention;
FIG. 5 schematically shows a diagram of a computer-readable storage medium product according to an embodiment of the invention; and
FIG. 6 schematically shows a block diagram of a computing device according to an embodiment of the invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiment of the invention, a user portrait depicting method, a medium, a device and a computing device are provided.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
In the process of implementing the embodiments of the present invention, the inventors found that existing user portraits can only reflect the characteristics of a user in one specific field and cannot reflect the characteristics of that user in other fields. For example, some user portraits can only be applied in the communication field, while others can only be applied in the internet field, and those applied in the internet field can only reflect the association relationships of the user's family attributes.
The embodiment of the invention provides a method and a device for depicting a user portrait. The method comprises the following steps: for a target user, acquiring original data used for depicting the user portrait of the target user; determining, based on the original data, the user characteristics expressed by the target user in a plurality of first preset dimensions; and depicting the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions. The invention depicts any user portrait in a plurality of dimensions that reflect different characteristics of the user, so that the user portrait is depicted from multiple angles and at multiple levels. This at least partially overcomes the defect in the related art that the application field of a user portrait is limited because it is depicted only from a communication level or a family-attribute association level, and significantly expands the fields to which the user portrait can be adapted.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
An exemplary application scenario of the user portrait depicting method and apparatus according to the embodiment of the present invention is first described in detail with reference to fig. 1.
As the internet has gradually entered the big data era, a series of changes have inevitably come to products and to user behavior, and all user behavior needs to become "visible" to the product. With the deepening research and application of big data technology, enterprises increasingly focus on how to use big data to serve precision marketing and, further, to deeply mine potential business value.
The user portrait is now basic production data for both online and offline products. Comprehensively depicting a user through a user portrait helps an enterprise understand the characteristics of its users more thoroughly and deeply, adjust its marketing strategy, or use the processing capacity of a server to automatically produce a better-matched personalized marketing scheme, thereby increasing the probability of a transaction.
User representations are used in a wide variety of applications, and as shown in FIG. 1, user representation 110 may be used in advertising system 120, push system 130, recommendation system 140, and search system 150.
In particular, a user portrait may be used primarily for: (1) Precision marketing: cutting user groups into finer granularity, assisted by means such as pushing short messages and e-mails and running sales promotions, with care, win-back, and incentive actions tailored to different users. (2) Data applications: in systems such as recommendation, advertising, and search, customizing strategies according to user data, or performing machine learning on the feature data of the user portrait, can improve the product conversion rate or the user experience. (3) Product analysis: the data warehouse and the various service-level labels are natural elements of multi-dimensional analysis, so the relevant data can be queried through a data query platform.
In short, among tens of millions of users, hardly anyone truly knows a given user; a product that really understands its users can win them over and occupy a larger market. The importance of the user portrait is therefore self-evident. The user portrait is a necessary data warehouse for a good product and is also an important model in AI systems.
It should be understood that the application scenarios of the present embodiment are merely illustrative and are not intended to limit or otherwise narrow the scope of the present disclosure.
Exemplary method
A user representation portrayal method in accordance with an exemplary embodiment of the present invention is described below with reference to fig. 2. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
The embodiment of the invention provides a user portrait depicting method.
FIG. 2 schematically illustrates a flow diagram of a method of user representation characterization, in accordance with an embodiment of the present invention. As shown in fig. 2, the method may include the operations of:
operation S210, for the target user, obtaining original data of a user portrait depicting the target user;
operation S220, based on the original data, determining user characteristics expressed by the target user in a plurality of first preset dimensions; and
in operation S230, the user portrait of the target user is depicted through the user features expressed by the target user in the plurality of first preset dimensions.
It should be noted that the target user related to the present invention may be any user. Further, for any user, the raw data may be all data relevant to that user, including both online and offline data. In addition, the manner of obtaining the raw data may include various manners, such as crawling from a network, reading from a corresponding database, a storage (such as the file storage 301 and the data storage 302 shown in fig. 3A) through various interfaces, and the like, which are not limited herein.
For a target user, after the original data of the target user is acquired, the acquired original data can be analyzed and the analysis results classified, with different types of original data corresponding to different preset dimensions; the original data of the corresponding type is then analyzed in each preset dimension to determine the user characteristics it expresses.
After the user characteristics expressed by various data are determined, the user portrait of the user can be described on each preset dimension.
For example, as shown in FIG. 3A, a user representation system 305 for representing a user representation may retrieve raw data of a user from a file store 301 and a data store 302 and perform a corresponding user representation process based on the retrieved raw data, and after the user representation is represented, the user representation may be used in a marketing system 303 and other online services 304.
According to the embodiment of the invention, since the portrait of any user is depicted in a plurality of dimensions that reflect different characteristics of the user, the user portrait can be depicted from multiple angles and at multiple levels. This at least partially overcomes the defect in the related art that the application field of a user portrait is limited because it is depicted only from a communication level or a family-attribute association level, and remarkably expands the application field of the user portrait.
The user portrait depicting method shown in FIG. 2 will be described in detail with reference to FIGS. 3B to 3K.
Generally, the user features expressed by a user in each first preset dimension are relatively abstract, lengthy, and complex. To overcome these defects, the user features expressed in each preset dimension may be simplified so that these abstract, lengthy, and complex features become concrete and simple; for example, they may be tagged.
The user features in each preset dimension can be tagged (some user features in certain preset dimensions can also be used directly without tagging). In the tagging process, user features that are similar within a preset dimension can be summarized using techniques such as clustering, or through manual tagging, for convenience of use. The tagging of user features may include at least Mode 1 and Mode 2 described below.
Mode 1: as shown in fig. 3B, the user features represented by each type of raw data may be clustered, and a description may be formed for each category in combination with natural language processing (NLP).
Specifically, as an optional embodiment, the user portrait depicting method may further include, in the process of depicting the user portrait of the target user: tagging the user features expressed by the target user in at least one of the plurality of first preset dimensions to obtain corresponding user feature tags; and depicting the user portrait of the target user through the user feature tags together with the other user features, where the other user features are the user features expressed by the target user in the dimensions, among the plurality of first preset dimensions, other than the at least one dimension.
It should be noted that, in the process of depicting the user portrait, tagging the user feature is a preferable scheme, and for a plurality of first preset dimensions, all of the first preset dimensions may be selected to be tagged, or a part of the first preset dimensions may be selected to be tagged, and how to select the first preset dimensions may be determined according to actual needs, which is not limited herein.
Further, as an optional embodiment, tagging a user feature expressed by the target user in at least one of the plurality of first preset dimensions to obtain a corresponding user feature tag includes: for each dimension in the at least one dimension, determining a category of user features expressed by the target user in the dimension; performing semantic analysis on the user characteristics expressed by the target user on the dimension to obtain semantics corresponding to the user characteristics of the category; and associating the category with semantics corresponding to the user features of the category to obtain the user feature tag on the dimension.
Specifically, for the user features in any preset dimension, the classification of the user features can be obtained through technical means such as clustering, the semantics of the user features are obtained through technologies such as natural language processing, the classification is associated with human understanding, and the user features in the preset dimension and the actual meanings are associated to form a label.
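As a rough illustration of Mode 1, the sketch below groups the values observed in one dimension into categories and associates each category with a human-readable semantic description. The trivial bucketing stands in for a real clustering algorithm, and the semantic strings stand in for NLP-derived descriptions; all names are hypothetical.

```python
# Sketch of Mode 1 tagging: cluster user features in one dimension, then
# attach a human-readable semantic label to each resulting category.
# The "clustering" here is a toy stand-in: split sorted values into k
# contiguous buckets. A real system would use k-means or similar.

def cluster_features(values, k=2):
    """Toy 1-D clustering: split sorted values into k contiguous buckets."""
    ordered = sorted(values)
    size = max(1, len(ordered) // k)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)][:k]

def tag_dimension(dimension, values, semantics):
    """Associate each cluster (category) with its semantic description."""
    clusters = cluster_features(values, k=len(semantics))
    return [
        {"dimension": dimension, "category": i,
         "semantic": semantics[i], "members": cluster}
        for i, cluster in enumerate(clusters)
    ]

# Hypothetical spending values for one user group, two semantic labels.
tags = tag_dimension("spending", [12, 15, 80, 95],
                     ["low spender", "high spender"])
```

The association step at the end is what turns an opaque cluster index into a tag a human can read, which is the point of Mode 1.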
Mode 2: as shown in fig. 3C, the user features represented by each type of raw data may be statistically sorted, and a description may be formed for each category with the addition of a manual annotation step.
Specifically, as an optional embodiment, tagging a user feature expressed by the target user in at least one of the plurality of first preset dimensions, and obtaining a corresponding user feature tag may include: for each dimension in the at least one dimension, counting the user characteristics expressed by the target user on the dimension to obtain a corresponding statistical result; acquiring additional information input from outside; and labeling the user characteristics expressed by the target user on the dimension based on the statistical result and the additional information to obtain the user characteristic label on the dimension.
After the corresponding user characteristics are tagged, the tagged and untagged (i.e., structured) data can be delivered to a user (e.g., an online service, marketing system, etc.) for storage in a corresponding format for use. When the data is used for the online service, the data is mainly used for an automatic processing flow, such as machine learning, online pushing, recommendation, search, pricing and the like, and when the data is used for the marketing system, the data is mainly used for a CRM system, a push system and the like.
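A minimal sketch of Mode 2: statistics are first computed over the observed features, and externally supplied (manual) annotations are then folded in to form the final tag. The function and field names are illustrative assumptions, not the patent's schema.

```python
from collections import Counter

def tag_by_statistics(dimension, observations, manual_notes):
    """Mode 2 sketch: count feature occurrences, then merge in external
    (manually supplied) annotation text to form the final tag."""
    stats = Counter(observations)
    top_feature, top_count = stats.most_common(1)[0]
    return {
        "dimension": dimension,
        "statistic": {"top": top_feature, "count": top_count},
        # The manual annotation step: a human-written note keyed by feature.
        "note": manual_notes.get(top_feature, ""),
    }

tag = tag_by_statistics(
    "category",
    ["shoes", "shoes", "books"],
    {"shoes": "footwear enthusiast"},
)
```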
As an alternative embodiment, as shown in fig. 3D, the operation S220 of determining, based on the raw data, the user characteristics that the target user exhibits in the first preset dimensions may include:
operation S221, extracting various target data related to portraying the user portrait from the original data; and
in operation S222, based on the multiple target data, user characteristics expressed by the target user in the multiple first preset dimensions are determined, where one target data corresponds to one first preset dimension.
Specifically, the above process of the embodiment of the present invention can be subdivided into the following three steps. Analysis: extracting the relevant data from the raw data and translating it into a structured data language, for example by deserializing the data stream into recognizable key-value pairs for subsequent processing. Treatment: further analyzing and processing the user data obtained in the previous analysis, for example by statistics, mining, and prediction. Structuring: assembling the data processed in the previous step into a data form with a fixed format according to the content of the specific data, so that it can be conveniently inserted into a database or written into a corresponding file; the purpose is to facilitate subsequent use (online use or offline viewing).
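The three steps above can be sketched as a small pipeline, assuming for illustration that the raw records arrive as a stream of JSON lines; all function and field names are hypothetical.

```python
import json

def parse(raw_lines):
    """Analysis step: deserialize each raw record into key-value pairs."""
    return [json.loads(line) for line in raw_lines]

def process(records):
    """Treatment step: simple statistics over the parsed records."""
    total = sum(r.get("amount", 0) for r in records)
    return {"orders": len(records), "total_amount": total}

def structure(summary, user_id):
    """Structuring step: assemble a fixed-format row ready for a database
    insert or a file write."""
    return {"user_id": user_id, **summary}

raw = ['{"amount": 30}', '{"amount": 70}']
row = structure(process(parse(raw)), user_id="u1")
```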
As an alternative embodiment, as shown in fig. 3E, the operation S222, based on the plurality of target data, determining the user characteristics that the target user exhibits in the plurality of first preset dimensions may include:
operation S2221, translate each target data in the multiple target data according to a preset rule, to obtain corresponding multiple structured data;
operation S2222, performing data analysis on each of the multiple kinds of structured data to obtain multiple kinds of data objects, where one kind of data object corresponds to a first preset dimension; and
in operation S2223, based on the plurality of data objects, user characteristics expressed by the target user in the plurality of first preset dimensions are determined.
In the user portrait depicting method provided by the invention, the original data associated with a user is constructed into a corresponding user portrait through the above process. The user portrait is structured data representing the user characteristics expressed by the user in the various preset dimensions, forming a model-like file. It can be loaded into a marketing system, an advertising system, a push system, a recommendation system, or a search system, or used for strategy purposes. A user portrait depicted with the method provided by the invention is readable by both machines and humans, and can also serve as a user model.
In addition, in embodiments of the present invention, the data source of the user representation generally includes a data repository, which may be generally divided into data files such as files, hdfs, documents, and other electronic files, or structured databases such as mysql, ddb, hive, redis, tair, ncr, and other databases. In the embodiment, the original data is taken out from the data warehouse, and then the related data is extracted from the original data to be used as the target data, so that the user portrait can be further constructed and used. The original data can be roughly divided into four types, namely online basic data, in-product behavior data, user content preference data and user transaction data.
Specifically, the files may be imported into a computing unit, and user features of each preset dimension may be computed or extracted according to a specific rule, which will be described below.
Through the embodiment of the invention, the user portrait can be depicted systematically and robustly, providing abundant user data and making it convenient to adopt an online automatic system or a semi-automatic operation system, to perform personalized recommendation and crowd segmentation, and thereby to achieve precision marketing. In addition, the user portrait depicted by the scheme provided by the invention carries relatively comprehensive information, so subjective assumptions can be reduced as much as possible in product operation; one can get closer to users and understand their real needs, and thereby learn how to better serve different types of users.
Further, as an alternative embodiment, the plurality of target data includes at least two of the following: basic data, raw recorded data, fact data, statistical data, and prediction data. As shown in FIG. 3F, for completeness, the target data may include all five: basic data, raw recorded data, fact data, statistical data, and prediction data.
In particular, the content contained in the target data, and the order in which that content is obtained, may be adjusted in the specific implementation according to the product's positioning and the richness of its data. Basic data, also called basic information, is the acquired and processed information about the user, including both the user's own information and the information of the user's virtual character roles in the platform. Raw recorded data is the most primitive data record of the user in the product and related products, including but not limited to behavior records such as purchase history. Fact data, also called a fact model, acquires and processes user information based on a fact dimension (state) and obtains the user characteristics under a meaningful description. Statistical data, also called a statistical model, acquires and processes the user portrait based on statistical information, mining potential user characteristics such as taste, preference, and frequent locations by statistical means. Prediction data, also called a prediction model, acquires and processes relevant data and predicts certain characteristics of the user through a model, usually a machine learning model, such as risk prediction or taste prediction; it also covers user information missing from the basic data, raw recorded data, fact data, and statistical data, such as predictions of occupation, age, and gender.
Further, as an optional embodiment, in the case that the target data includes the basic data, the method may further include: analyzing the basic data to obtain one or more of the following analyzed data: attribute data, role data, associated data, terminal data, and registration data (as shown in fig. 3G, basic data may include attribute data, role data, associated data, terminal data, and registration data at the same time), where the first preset dimension corresponding to the basic data includes: attribute dimension, role dimension, association dimension, terminal dimension and registration dimension; in the case that the analytical data of the basic data comprises the attribute data, depicting the attribute portrait of the target user on the attribute dimension based on the attribute data; in the case that the parsed data of the base data includes the role data, characterizing a role representation of the target user in the role dimension based on the role data; in the case that the parsed data of the base data includes the associated data, depicting an associated representation of the target user in the associated dimension based on the associated data; under the condition that the analysis data of the basic data comprises the terminal data, depicting the terminal portrait of the target user on the terminal dimension based on the terminal data; and in the case that the parsed data of the base data includes the registration data, characterizing a registration portrait of the target user in the registration dimension based on the registration data.
In the present invention, different types of data in the basic data are used to generate different parts of the user portrait, and the generation order is not limited when the different types of basic data are used to generate those parts.
Specifically, the attribute data herein includes, but is not limited to, one or more of the following feature data: identifiers (such as account number, user id, device id, cookie id, and the like), user age, user gender, user's native place, commonly used mobile phone number, primary account number, year of birth, constellation, user's educational background, user's personality, user's ethnicity, user's religion, the user's "age" within the product, and the like. The role data includes, but is not limited to, one or more of the following feature data: in-station roles, out-of-station roles, professional roles, family roles, community roles, and the like. The associated data includes, but is not limited to, one or more of the following feature data: whether the user has a baby (if so, recording the child's age, weight, and gender), for whom the user purchases, the vehicle the user owns, the user's real estate, the user's company (i.e., company location and company industry), and the user's income level. The terminal data includes, but is not limited to, one or more of the following feature data: the parameters of the terminals used, including cpu, memory, screen, system, brand, time on market, and the like. The registration data includes, but is not limited to, one or more of the following feature data: the information supplied at registration, including registration channel, registration mode, contact information, time, reading interests, life interests, and the like; the registration portrait also includes information that the user actively fills in or modifies in the product after registration.
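For illustration, the five basic-data dimensions enumerated above could be held in a simple structured container such as the following sketch; the field names are assumptions for readability, not the patent's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class BasicPortrait:
    """Hypothetical container mirroring the five basic-data dimensions."""
    attributes: dict = field(default_factory=dict)    # age, gender, identifiers, ...
    roles: dict = field(default_factory=dict)         # in-station / family / community roles
    associations: dict = field(default_factory=dict)  # vehicle, real estate, income level
    terminal: dict = field(default_factory=dict)      # cpu, memory, screen, brand
    registration: dict = field(default_factory=dict)  # channel, time, declared interests

p = BasicPortrait(attributes={"age": 28, "gender": "F"},
                  terminal={"brand": "X"})
```

Keeping each dimension as its own field lets the attribute, role, association, terminal, and registration portraits be generated independently, in any order, as the text above requires.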
Further, as an optional embodiment, in a case that the target data includes the original record data, the method further includes: analyzing the original record data to obtain one or more of the following analyzed data: access data, operation data, first consumption data, second consumption data, and feedback data (as shown in fig. 3H, the original record data may simultaneously include access data, operation data, first consumption data, second consumption data, and feedback data), wherein the first preset dimensions corresponding to the original record data include: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; in a case that the parsed data of the original record data includes the access data, depicting an access portrait of the target user in the access dimension based on the access data; in a case that the parsed data of the original record data includes the operation data, depicting an operation portrait of the target user in the operation dimension based on the operation data; in a case that the parsed data of the original record data includes the first consumption data, depicting a first consumption portrait of the target user in the first consumption dimension based on the first consumption data; in a case that the parsed data of the original record data includes the second consumption data, depicting a second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and in a case that the parsed data of the original record data includes the feedback data, depicting a feedback portrait of the target user in the feedback dimension based on the feedback data.
In addition, different types of data in the raw record data are used to generate different parts of the user portrait, and the present invention does not limit the generation order when the different types of raw record data are used to generate those parts.
Specifically, acquiring and processing the raw record data includes, but is not limited to, capturing data from five aspects, namely access, operation, consumption, feedback, and recording, to complete this part of the user portrait. The objects accessed, operated on, consumed, fed back on, and recorded are elements including one or more of goods, articles, categories, brands, albums, activities, and so on, as described in detail below.
The access data includes, but is not limited to, one or more of the following feature data: element access traces, and the user's display and dwell traces in the product. The operational data includes, but is not limited to, one or more of the following feature data: traces of browsing, liking, purchasing, collecting, sharing, group-buying, and searching behavior toward elements in the product. The first consumption data includes, but is not limited to, one or more of the following feature data: actual consumption information, including the reason, channel, and source of the consumption, the financial details, and the use of discounts. The second consumption data, used to generate a feedback portrait, includes one or more of the following feature data: customer service feedback content, comment content, rating content, negative feedback content, not-interested content, and complaint content. The feedback data, used to generate a recorded portrait, includes, but is not limited to, one or more of the following feature data: current status values such as the current total amount consumed, the current goods in transit, and the current pending payments.
Further, as an optional embodiment, in a case that the target data includes the fact data, the method further includes: analyzing the fact data to obtain one or more of the following analyzed data: the earliest M consumption data, the last N consumption data, the top L consumption data, and the consumption data within a preset time period (as shown in fig. 3I, the fact data may include all of the earliest M consumption data, the last N consumption data, the top L consumption data, and the consumption data within a preset time period), wherein the first preset dimensions corresponding to the fact data include: an earliest-M consumption dimension, a last-N consumption dimension, a top-L consumption dimension, and a consumption dimension within a preset time period; in a case that the parsed data of the fact data includes the earliest M consumption data, depicting the earliest-M consumption portrait of the target user in the earliest-M consumption dimension based on the earliest M consumption data; in a case that the parsed data of the fact data includes the last N consumption data, depicting the last-N consumption portrait of the target user in the last-N consumption dimension based on the last N consumption data; in a case that the parsed data of the fact data includes the top L consumption data, depicting the top-L consumption portrait of the target user in the top-L consumption dimension based on the top L consumption data; and in a case that the parsed data of the fact data includes the consumption data within the preset time period, depicting the consumption portrait of the target user in the consumption dimension within the preset time period based on the consumption data within the preset time period.
In addition, different types of data in the fact data are used to generate different parts of the user portrait, and the present invention does not limit the generation order when the different types of fact data are used to generate those parts.
Specifically, in acquiring and processing the fact data, user feature extraction is performed on any one or more of at least four models: First, Last, Top, and Time. Each model includes one or more items, and the steps for establishing the four models (in no particular order) are described in detail below.
Generating the FirstN model: recording and processing, for the user's first N purchases after registering in the product, the time, amount, category, discount strength, and number of items, the order distribution across categories, the amount distribution across categories, and the item count per category, as well as the payment account, identity information, and consignee information used in those first N purchases; the content and tendency of the first N reviews; and the items collected, liked, and followed in the first N such actions.
Generating the LastN model: recording and processing, for the user's last N purchases from registration in the product up to the present, the time, amount, category, discount strength, and number of items, the order distribution across categories, the amount distribution across categories, and the item count per category, as well as the payment account, identity information, and consignee information used in those last N purchases; the content and tendency of the last N reviews; and the items collected, liked, and followed in the last N such actions.
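Assuming the purchase records are sorted by time, the FirstN and LastN models reduce, at their core, to slicing the ordered history before aggregating the listed fields; a minimal sketch with hypothetical record fields:

```python
def first_n(purchases, n):
    """FirstN sketch: earliest n purchases after registration
    (assumes time-sorted input)."""
    return purchases[:n]

def last_n(purchases, n):
    """LastN sketch: most recent n purchases up to the present."""
    return purchases[-n:]

# Hypothetical time-ordered purchase history.
history = [{"t": 1, "amount": 10},
           {"t": 2, "amount": 20},
           {"t": 3, "amount": 30}]
```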
Generating the TopN model: recording and processing, from the time the user registered in the product to the present, the TopN brands by order count, the TopN brands by purchase amount, the TopN countries by cross-border order count, the TopN countries by purchase amount, the TopN most frequently used payment accounts, and the TopN consignee information (including address tags, name tags, and administrative regions), where N is a parameter representing the count and is generally chosen from 10, 20, 50, and 100.
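The TopN statistics described above (most frequent brands, countries, payment accounts, and so on) can be sketched as a frequency count; `top_n` is an illustrative helper, not the patent's implementation.

```python
from collections import Counter

def top_n(items, n=10):
    """TopN sketch: the n most frequent values among the observed items
    (brands, countries, payment accounts, consignee addresses, ...)."""
    return [value for value, _ in Counter(items).most_common(n)]

# Hypothetical brand observations from a user's order history.
brands = ["A", "B", "A", "C", "A", "B"]
```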
Generating the TimeN model: TimeN is the statistical data recorded within a time window. The time window typically includes 10 minutes, 60 minutes, 4 hours, 12 hours, 24 hours, the current day, the last week, the last month, the current quarter, 180 days, the current year, and so on. Within a specific time window N, one or more of the following user behavior items are recorded and processed: the amount paid for purchases, the number of purchases, the number of orders, the number of consumptions, the breadth of consumption, the number of clicks, the number of collections, the number of add-to-carts, the user's preference for categories, the user's preference for brands, and so on.
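A minimal sketch of the TimeN idea: keep only the events whose timestamp falls within the window ending now, then aggregate the behavior items over them. Timestamps are plain seconds here for simplicity, and the field names are assumptions.

```python
def time_n(events, now, window_seconds):
    """TimeN sketch: aggregate behavior items inside the time window."""
    recent = [e for e in events if now - e["ts"] <= window_seconds]
    return {
        "count": len(recent),                         # e.g. number of purchases
        "amount": sum(e["amount"] for e in recent),   # e.g. amount paid
    }

# Hypothetical events; only the last two fall inside a 60-second window.
events = [{"ts": 100, "amount": 5},
          {"ts": 990, "amount": 7},
          {"ts": 995, "amount": 3}]
stats = time_n(events, now=1000, window_seconds=60)
```

Running the same aggregation with different `window_seconds` values yields the real-time and long-term variants mentioned later for the statistical models.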
Further, as an optional embodiment, in the case that the target data includes the statistical data, the method further includes: analyzing the statistical data to obtain one or more of the following analyzed data: preference data, geographic data, and rule data (as shown in fig. 3J, the statistical data may include the preference data, the geographic data, and the rule data at the same time), wherein the first preset dimension corresponding to the statistical data includes: a preference dimension, a geographic dimension, and a rule dimension; in the case that the parsed data of the statistical data includes the preference data, characterizing a preference portrait of the target user in the preference dimension based on the preference data; in the case that the parsed data of the statistical data includes the geographic data, depicting a geographic representation of the target user in the geographic dimension based on the geographic data; and in the case that the parsed data of the statistical data includes the rule data, depicting the rule representation of the target user in the rule dimension based on the rule data.
In addition, different types of data in the statistical data are used to generate different parts of the user portrait, and when different types of statistical data are used to generate different parts of the user portrait, the generation order is not limited by the invention.
Specifically, the process of obtaining and processing the statistical model includes processing one or more of a preference model, a geographic model, and a rule model. Further, each model may include a time parameter, so that model data over different time windows, including a real-time model and a long-term model, may be obtained. The models are as follows:
Processing the user preference model: this includes processing the user's category preference, brand preference, price preference, color preference, style preference, rights-and-interests preference, service preference, and time preference; the corresponding preference weights are calculated by the model from the user's click and purchase distributions.
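One simple way to turn click and purchase distributions into normalized preference weights is sketched below. The 3:1 purchase-to-click weighting is an illustrative assumption; the patent leaves the weighting model unspecified.

```python
def preference_weights(clicks, purchases, purchase_weight=3.0):
    """Blend per-category click and purchase counts into normalized weights.

    `purchase_weight` (assumed value 3.0) expresses that a purchase signals
    preference more strongly than a click.
    """
    categories = set(clicks) | set(purchases)
    raw = {c: clicks.get(c, 0) + purchase_weight * purchases.get(c, 0)
           for c in categories}
    total = sum(raw.values()) or 1.0
    return {c: raw[c] / total for c in raw}

w = preference_weights({"shoes": 6, "books": 2}, {"shoes": 1, "books": 3})
print(w)  # {'shoes': 0.45, 'books': 0.55} (order may vary)
```

The same scoring applies to brand, price, or color preferences by swapping the counting key.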
Processing the user geographic model: this includes calculating the user's geographic characteristics, such as GPS coordinates, IP network address, shopping-location habits, and city/county level.
Processing the user rule model, including processing one or more of the following: (a) a price sensitivity model of the user, for example, first calculating the average price of each item purchased in the last 60 days, and then calculating the average discount of each category purchased by the user from that average price; (b) a user purchasing power model, where the average price grade of the categories the user buys is calculated from the price grades marked in the item model and used as the user's purchasing power model; (c) coupon dependence, i.e., the degree to which the user depends on coupons and promotions; (d) a user life cycle, such as dormancy (e.g., a purchase in the last 90 days but none in the last 60 days), activity (e.g., clustered into high-, medium-, and low-frequency groups according to the number of purchase days in the last 60 days), churn (e.g., no purchase in the last 90 days despite a previous purchase; temporarily not counted), registered-but-never-purchased (temporarily not counted), and new user (e.g., recently registered with a first order placed or no purchase yet); (e) a search model, i.e., the user's search list along the time dimension; and (f) an RFM model, covering Recency (e.g., the time of the most recent consumption within the last month), Frequency (e.g., the number of consumptions within the last month), and Monetary (e.g., the amount consumed within the last month).
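The RFM computation in item (f) above can be sketched directly from order records; the (date, amount) pair representation is an illustrative assumption.

```python
from datetime import date

def rfm(orders, today):
    """Compute Recency (days since the most recent order), Frequency (order
    count in the window), and Monetary (total spend) from (date, amount) pairs.

    The caller is assumed to have already restricted `orders` to the desired
    window (e.g., the last month)."""
    if not orders:
        return None
    last = max(d for d, _ in orders)
    return {
        "recency_days": (today - last).days,
        "frequency": len(orders),
        "monetary": sum(a for _, a in orders),
    }

orders = [(date(2018, 1, 1), 50.0), (date(2018, 1, 10), 20.0)]
print(rfm(orders, date(2018, 1, 15)))
# {'recency_days': 5, 'frequency': 2, 'monetary': 70.0}
```

The price sensitivity and purchasing power models in items (a) and (b) follow the same pattern: an aggregate over a windowed slice of the user's orders.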
Further, as an optional embodiment, in the case that the target data includes the prediction data, the method further includes: analyzing the prediction data to obtain one or more of the following analysis data: probability prediction data, risk prediction data, and strategy prediction data (as shown in fig. 3K, prediction data may include probability prediction data, risk prediction data, and strategy prediction data at the same time), where the first preset dimension corresponding to the prediction data includes: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; in the case that the analytic data of the prediction data includes the probabilistic prediction data, characterizing the probabilistic prediction image of the target user in the probabilistic prediction dimension based on the probabilistic prediction data; depicting a risk prediction image of the target user in the risk prediction dimension based on the risk prediction data if the analytic data of the prediction data includes the risk prediction data; and in the case that the parsed data of the prediction data includes the policy prediction data, characterizing the policy prediction image of the target user in the policy prediction dimension based on the policy prediction data.
In the present invention, the different types of data in the prediction data are used to generate different parts of the user portrait, and the generation order is not limited when different types of prediction data are used to generate different parts of the user portrait.
Specifically, the flow of obtaining and processing the prediction model herein includes processing one or more of a probability model, a policy model and a risk model, specifically:
Processing the probability model, including processing one or more characteristics of the user in the product, such as activity, loyalty, shopping type, promotion sensitivity, and purchasing attribute preferences.
Processing the risk model, including processing one or more characteristics of the user in the product, such as credit risk, churn risk, satisfaction risk, order abandonment risk, fraud risk, and scalper/crawler identification.
Processing the policy model includes processing one or more of the following characteristics of the user:
Shopping stage prediction, including random browsing, focused searching, comparison shopping, waiting for a price reduction, holding in reserve, and the like.
Missing information policies, including, for example, gender prediction, occupation prediction, consumption capability prediction for categories not yet consumed, and the like.
Operation strategy prediction, including user indexes to be developed, to be recovered, to be activated, to be maintained, and the like.
Customer value prediction, including predicting the value a user brings to a product, such as monetary value, reputation value, impact value, and the like.
Intrinsic demand forecasting, including forecasting current demand and potential demand of a user, and the like.
Aversion degree prediction, including predicting the user's degree of aversion to brands, categories, commodities, and the like.
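The operation strategy indexes listed above can be illustrated by a rule-based sketch mapping simple activity facts to an index. The thresholds echo the 60/90-day lifecycle examples earlier in the text, but the mapping itself is an illustrative assumption, not the patent's method.

```python
def operation_strategy(days_since_last_purchase, lifetime_orders):
    """Map basic activity facts to an operation strategy index.

    Thresholds (60/90 days) mirror the lifecycle examples in the text; the
    rule set is a hypothetical simplification of a learned policy model."""
    if lifetime_orders == 0:
        return "to_develop"           # registered but never purchased
    if days_since_last_purchase > 90:
        return "to_recover"           # churned, try win-back
    if days_since_last_purchase > 60:
        return "to_activate"          # dormant, try re-activation
    return "to_maintain"              # active, retain

print(operation_strategy(10, 5))   # to_maintain
print(operation_strategy(70, 5))   # to_activate
```

In practice such indexes would come from a trained model rather than fixed thresholds, but the output shape — one strategy label per user — is the same.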
According to the embodiment of the invention, because the raw data are used to depict the user portrait on a plurality of preset dimensions, a relatively comprehensive user portrait can be generated. This urges team members of related enterprises to set aside personal preference during product design and focus on the motivation and behavior of the target user, so that the service objects of products become more focused and concentrated. Moreover, because designing a product for a specific persona is far better than designing for something fictional imagined in one's head, when everyone participating in product design discusses and makes decisions on the basis of a consistent user, all parties can easily be kept aligned in the same general direction, improving decision-making efficiency.
In addition, the invention takes the various constituent data of the raw data, i.e., various data samples, as machine learning samples, which facilitates enriching and training sample data on a given machine learning instance, thereby improving the commercial effect of online automatic services and further forming a normalized, standardized user portrait generation scheme that is convenient to implement quickly.
It should be noted that the whole process, the user sub-portrait obtaining methods involved in each process, and some features of the embodiment of the present invention may include a general parameter, such as a time parameter.
Exemplary devices
Having described the method of an exemplary embodiment of the present invention, an apparatus for implementing the user portrait depicting method is described in detail with reference to FIG. 4.
The embodiment of the invention provides a user portrait depicting device.
FIG. 4 schematically illustrates a block diagram of a user representation apparatus in accordance with an embodiment of the present invention. As shown in FIG. 4, the user representation characterization device 400 may include: an obtaining module 410, configured to obtain, for a target user, original data of a user portrait used for depicting the target user; a determining module 420, configured to determine, based on the raw data, user characteristics that the target user exhibits in a plurality of first preset dimensions; and a first depiction module 430, configured to depict the user portrait of the target user according to the user characteristics expressed by the target user in the plurality of first preset dimensions.
According to the embodiment of the invention, since the portrait of any user is depicted on a plurality of dimensions capable of reflecting different characteristics of the user, the aim of depicting the user portrait in a multi-angle and multi-level manner can be achieved, so that the defect in the related art that the application field of the user portrait is limited because it is depicted only from a communication level or a family-attribute association level can be at least partially overcome, significantly expanding the application field of the user portrait.
As an alternative embodiment, the apparatus further comprises: the labeling module is used for labeling the user characteristics expressed by the target user on at least one dimension of the plurality of first preset dimensions in the process of depicting the user portrait of the target user to obtain corresponding user characteristic labels; and the second depiction module is used for depicting the user portrait of the target user through the user feature label and other user features, wherein the other user features comprise: the target user represents user characteristics in other dimensions of the plurality of first preset dimensions except the at least one dimension.
As an alternative embodiment, the tagging module comprises: a first determining unit, configured to determine, for each dimension of the at least one dimension, a category of a user feature expressed by the target user in the dimension; the analysis unit is used for performing semantic analysis on the user characteristics expressed by the target user in the dimension to obtain semantics corresponding to the user characteristics of the category; and the first generation unit is used for associating the category with the semantics corresponding to the user characteristics of the category to obtain the user characteristic label on the dimension.
As an alternative embodiment, the tagging module comprises: the statistical unit is used for counting the user characteristics expressed by the target user in the dimension aiming at each dimension in the at least one dimension to obtain a corresponding statistical result; an acquisition unit for acquiring additional information input externally; and the second generating unit is used for labeling the user characteristics expressed by the target user in the dimension based on the statistical result and the additional information so as to obtain the user characteristic label in the dimension.
As an alternative embodiment, the determining module includes: an extraction unit for extracting a plurality of target data related to portraying the user portrait from the original data; and a second determining unit, configured to determine, based on the multiple types of target data, user characteristics that the target user exhibits in the multiple first preset dimensions, where one type of target data corresponds to one first preset dimension.
As an alternative embodiment, the second determining unit includes: the translation subunit is used for translating each target data in the multiple target data according to a preset rule to obtain corresponding multiple structured data; the analysis subunit is configured to perform data analysis on each of the multiple kinds of structured data to obtain multiple kinds of data objects, where one kind of data object corresponds to a first preset dimension; and the determining subunit is used for determining the user characteristics expressed by the target user on the plurality of first preset dimensions based on the plurality of data objects.
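The translate → parse → determine pipeline of the second determining unit can be sketched as follows. JSON is used here as a stand-in for the patent's unspecified "preset rule", and the dict-shaped data object is an illustrative assumption.

```python
import json

def translate(raw, rule=json.loads):
    """Translate one piece of raw target data into structured data according
    to a preset rule (JSON decoding is an assumed example of such a rule)."""
    return rule(raw)

def parse(structured, dimension):
    """Wrap structured data as a data object bound to one preset dimension."""
    return {"dimension": dimension, "features": structured}

# One raw record per first preset dimension (hypothetical sample inputs).
raw_records = {
    "preference": '{"brand": "A"}',
    "geographic": '{"city": "Hangzhou"}',
}
objects = [parse(translate(v), dim) for dim, v in sorted(raw_records.items())]
print(objects[0])
# {'dimension': 'geographic', 'features': {'city': 'Hangzhou'}}
```

Each resulting data object corresponds to exactly one first preset dimension, matching the one-to-one correspondence the analysis subunit describes.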
As an alternative embodiment, the plurality of target data includes at least two of the following data: basic data, raw recorded data, factual data, statistical data, and predictive data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the basic data, parse the basic data to obtain one or more of the following parsed data: attribute data, role data, associated data, terminal data and registration data, wherein a first preset dimension corresponding to the basic data comprises: attribute dimension, role dimension, association dimension, terminal dimension and registration dimension; and the first depiction module is further used for: in the case that the parsed data of the base data includes the attribute data, depicting an attribute representation of the target user in the attribute dimension based on the attribute data; in the case that the parsed data of the base data includes the role data, characterizing a role representation of the target user in the role dimension based on the role data; in the case that the parsed data of the base data includes the associated data, depicting an associated representation of the target user in the associated dimension based on the associated data; under the condition that the analysis data of the basic data comprises the terminal data, depicting a terminal portrait of the target user on the terminal dimension based on the terminal data; and in the event that the parsed data of the base data includes the enrollment data, characterizing an enrollment portrait of the target user in the enrollment dimension based on the enrollment data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the original recorded data, parse the original recorded data to obtain one or more of the following parsed data: the method comprises the following steps of accessing data, operating data, first consumption data, second consumption data and feedback data, wherein a first preset dimension corresponding to the original recording data comprises: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; and the first depiction module is further used for: in the case that the parsed data of the original recorded data includes the access data, depicting an access portrait of the target user in the access dimension based on the access data; when the analysis data of the original record data comprises the operation data, depicting an operation image of the target user on the operation dimension based on the operation data; in the event that parsed data of the raw recorded data includes the first consumption data, portraying a first consumption representation of the target user in the first consumption dimension based on the first consumption data; in the event that parsed data of the raw recorded data includes the second consumption data, portraying a second consumption representation of the target user in the second consumption dimension based on the second consumption data; and in the case that the parsed data of the raw recorded data includes the feedback data, characterizing a feedback representation of the target user in the feedback dimension based on the feedback data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the fact data, parse the fact data to obtain one or more of the following parsed data: the method comprises the following steps of earliest M consumption data, latest N consumption data, former L consumption data and consumption data in a preset time period, wherein a first preset dimension corresponding to fact data comprises the following steps: the method comprises the following steps of (1) obtaining an earliest M-time consumption dimension, a last N-time consumption dimension, a first L-time consumption dimension and a consumption dimension in a preset time period; and the first depiction module is further used for: in a case that parsed data of the fact data includes the earliest M-times consumption data, portraying an earliest M-times consumption portrait of the target user in the earliest M-times consumption dimension based on the earliest M-times consumption data; in a case that parsed data of the fact data includes the last N consumption data, portraying the last N consumption figures of the target user in the last N consumption dimensions based on the last N consumption data; in the case that the parsed data of the fact data includes the previous L consumption data, depicting the previous L consumption figures of the target user in the previous L consumption dimensions based on the previous L consumption data; and under the condition that the analysis data of the fact data comprises consumption data in the preset time period, describing the consumption portrait of the target user in the preset time period on the basis of the consumption data in the preset time period in the consumption dimension.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the statistical data, parse the statistical data to obtain one or more of the following parsed data: preference data, geographic data, rule data, wherein a first preset dimension corresponding to the statistical data comprises: a preference dimension, a geographic dimension, and a rule dimension; and the first depiction module is further used for: in the case that the parsed data of the statistical data includes the preference data, depicting a preference portrait of the target user in the preference dimension based on the preference data; in the event that parsed data of the statistical data includes the geographic data, portraying a geographic portrayal of the target user in the geographic dimension based on the geographic data; and in the case that the analysis data of the statistical data comprises the rule data, depicting the rule portrait of the target user in the rule dimension based on the rule data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the predicted data, parse the predicted data to obtain one or more of the following parsed data: probability prediction data, risk prediction data and strategy prediction data, wherein the first preset dimensionality corresponding to the prediction data comprises: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; and the first depiction module is further used for: in the case that the parsed data of the prediction data includes the probabilistic prediction data, characterizing a probabilistic prediction image of the target user in the probabilistic prediction dimension based on the probabilistic prediction data; in the case that analytical data of the prediction data includes the risk prediction data, characterizing a risk prediction image of the target user in the risk prediction dimension based on the risk prediction data; and in the case that the parsed data of the prediction data includes the strategy prediction data, characterizing a strategy prediction image of the target user in the strategy prediction dimension based on the strategy prediction data.
It should be noted that, the embodiment of the apparatus is the same as or similar to the embodiment of the method, and the technical problems to be solved, the specific technical means to be used, and the technical effects to be achieved are also the same as or similar to each other.
Exemplary Medium
Embodiments of the present invention provide a computer-readable storage medium having stored thereon executable instructions which, when executed by a processing module, implement the user portrait depicting method of any one of the method embodiments.
In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps in the user representation depicting method according to various exemplary embodiments of the present invention described in the above section "exemplary method" of this specification when the program product is run on the terminal device, for example, the terminal device may perform operation S210 shown in fig. 2, for obtaining, for a target user, original data of a user representation depicting the target user; operation S220, based on the original data, determining user characteristics expressed by the target user in a plurality of first preset dimensions; and operation S230, depicting the user portrait of the target user through the user features expressed by the target user in the plurality of first preset dimensions.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 5, a program product 50 for user representation portrayal according to an embodiment of the invention is depicted, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary computing device
Having described the method, medium, and system of exemplary embodiments of the present invention, a computing device for user representation portrayal according to an exemplary embodiment of the present invention is described next.
The embodiment of the invention also provides the computing equipment. The computing device includes: a processing module; and a storage unit storing computer-executable instructions that, when executed by the processing module, are configured to implement the user representation portrayal method of any one of the method embodiments.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as an apparatus, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" device.
In some possible embodiments, a computing device according to the present invention may include at least one processing module, and at least one memory unit. Wherein the storage unit stores program code that, when executed by the processing module, causes the processing module to perform the steps of the user representation method according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of this specification. For example, the processing module may perform operation S210 as shown in fig. 2, for a target user, obtaining raw data for characterizing a user representation of the target user; operation S220, based on the original data, determining user characteristics expressed by the target user in a plurality of first preset dimensions; and operation S230, depicting the user portrait of the target user through the user features expressed by the target user in the plurality of first preset dimensions.
A computing device 60 for user portrayal depiction according to this embodiment of the invention is described below with reference to FIG. 6. Computing device 60 as shown in FIG. 6 is only one example and should not be taken to limit the scope of use and functionality of embodiments of the present invention.
As shown in fig. 6, computing device 60 is embodied in a general purpose computing device. Components of computing device 60 may include, but are not limited to: the at least one processing module 601, the at least one memory unit 602, and a bus 603 connecting the various system components (including the memory unit 602 and the processing module 601).
Bus 603 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The storage unit 602 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)6021 and/or cache memory 6022, and may further include read-only memory (ROM) 6023.
The memory unit 602 may also include a program/utility 6025 having a set (at least one) of program modules 6024, such program modules 6024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 60 may also communicate with one or more external devices 604 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with computing device 60, and/or with any devices (e.g., router, modem, etc.) that enable computing device 60 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 605. Moreover, computing device 60 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through network adapter 606. As shown, network adapter 606 communicates with the other modules of computing device 60 over bus 603. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 60, including but not limited to: microcode, device drivers, redundant processing modules, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
It should be noted that although in the above detailed description reference is made to several units/modules or sub-units/modules of the user representation apparatus, such division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor is the division of aspects, which is for convenience only as the features in such aspects may not be combined to benefit. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (26)

1. A user portrait depicting method, comprising:
for a target user, acquiring raw data used to depict a user portrait of the target user; the raw data includes both online and offline data, and the user portrait is used for at least one of: precision marketing, data application, and product analysis;
determining, based on the raw data, user characteristics that the target user exhibits in a plurality of first preset dimensions; and
depicting the user portrait of the target user through the user characteristics that the target user exhibits in the plurality of first preset dimensions; wherein determining, based on the raw data, the user characteristics that the target user exhibits in the plurality of first preset dimensions comprises:
extracting, from the raw data, multiple types of target data related to depicting the user portrait, the multiple types of target data including prediction data; and
determining, based on the multiple types of target data, the user characteristics that the target user exhibits in the plurality of first preset dimensions;
wherein the first preset dimension corresponding to the prediction data comprises: a probability prediction dimension, a risk prediction dimension, and a policy prediction dimension; the policy prediction dimension comprises at least one of: a shopping-stage policy prediction dimension, a missing-information policy prediction dimension, an operation policy prediction dimension, a customer-value policy prediction dimension, an internal-demand policy prediction dimension, and an aversion-degree policy prediction dimension;
and wherein determining, based on the multiple types of target data, the user characteristics that the target user exhibits in the plurality of first preset dimensions comprises:
for each first preset dimension, analyzing the target data of the corresponding type and determining the user characteristics that the target user exhibits in that first preset dimension.
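As an editor's illustration, the extract-then-analyze-per-dimension flow recited in claim 1 can be sketched as follows. The data-type names, the dimension analyzers, and all field names are hypothetical assumptions, not part of the claim:

```python
# Minimal sketch of claim 1: extract typed target data from the raw data,
# then, for each first preset dimension, analyze the target data of the
# corresponding type. Type names and analyzer rules are illustrative.

def extract_target_data(raw_data):
    """Keep only the record types relevant to depicting the portrait."""
    relevant = {"basic", "record", "fact", "statistical", "prediction"}
    return {k: v for k, v in raw_data.items() if k in relevant}

# One analyzer per (prediction-related) first preset dimension.
ANALYZERS = {
    "probability_prediction": lambda d: f"purchase_prob={d['purchase_prob']:.2f}",
    "risk_prediction": lambda d: f"churn_risk={d['churn_risk']}",
    "policy_prediction": lambda d: f"stage={d['shopping_stage']}",
}

def depict_portrait(raw_data):
    target = extract_target_data(raw_data)
    prediction = target["prediction"]
    # Each dimension analyzes the target data of its corresponding type.
    return {dim: fn(prediction) for dim, fn in ANALYZERS.items()}

portrait = depict_portrait({
    "prediction": {"purchase_prob": 0.73, "churn_risk": "low",
                   "shopping_stage": "compare"},
    "noise": {"ignored": True},   # irrelevant data is filtered out
})
print(portrait["risk_prediction"])  # churn_risk=low
```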
2. The method of claim 1, wherein, in depicting the user portrait of the target user:
labeling the user characteristics expressed by the target user in at least one dimension of the plurality of first preset dimensions to obtain corresponding user characteristic labels; and
depicting the user portrait of the target user through the user characteristic labels and other user characteristics, wherein the other user characteristics comprise: user characteristics that the target user exhibits in dimensions of the plurality of first preset dimensions other than the at least one dimension.
3. The method of claim 2, wherein labeling the user characteristics expressed by the target user in at least one of the plurality of first preset dimensions to obtain the corresponding user characteristic labels comprises:
for each dimension in the at least one dimension, determining a category of the user characteristics expressed by the target user in that dimension;
performing semantic analysis on the user characteristics expressed by the target user in that dimension to obtain semantics corresponding to the user characteristics of that category; and
associating the category with the semantics corresponding to the user characteristics of that category to obtain the user characteristic label in that dimension.
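The category-plus-semantics labeling of claim 3 can be sketched as below; the keyword table is a hypothetical stand-in for a real semantic analyzer, and all field names are assumptions:

```python
# Sketch of claim 3: determine the category of a user characteristic,
# derive its semantics, and associate the two into a characteristic label.

SEMANTIC_RULES = {  # keyword -> human-readable semantics (assumed rules)
    "night": "active late at night",
    "discount": "price sensitive",
}

def categorize(feature):
    # The category here is simply the dimension the characteristic came from.
    return feature["dimension"]

def analyze_semantics(feature):
    for keyword, meaning in SEMANTIC_RULES.items():
        if keyword in feature["value"]:
            return meaning
    return feature["value"]  # fall back to the literal characteristic

def make_label(feature):
    # Associate category with semantics to form the label.
    return f"{categorize(feature)}:{analyze_semantics(feature)}"

label = make_label({"dimension": "preference", "value": "browses discount pages"})
print(label)  # preference:price sensitive
```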
4. The method of claim 2, wherein labeling the user characteristics expressed by the target user in at least one of the plurality of first preset dimensions to obtain the corresponding user characteristic labels comprises:
for each dimension in the at least one dimension, counting the user characteristics expressed by the target user in that dimension to obtain a corresponding statistical result;
acquiring externally input additional information; and
labeling the user characteristics expressed by the target user in that dimension based on the statistical result and the additional information to obtain the user characteristic label in that dimension.
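Claim 4's alternative labeling path, statistics combined with externally supplied additional information, might look like the following sketch; the threshold field and event schema are invented for illustration:

```python
# Sketch of claim 4: count the characteristics expressed in a dimension,
# then refine the label with externally input additional information
# (here, an assumed operator-supplied frequency threshold).
from collections import Counter

def label_from_stats(events, extra_info, dimension="operation"):
    counts = Counter(e["action"] for e in events if e["dimension"] == dimension)
    top_action, n = counts.most_common(1)[0]
    # The additional information decides how the statistic is interpreted.
    qualifier = "heavy" if n >= extra_info["heavy_threshold"] else "light"
    return f"{dimension}:{qualifier}_{top_action}"

events = [{"dimension": "operation", "action": "search"}] * 3 + \
         [{"dimension": "operation", "action": "share"}]
print(label_from_stats(events, {"heavy_threshold": 3}))  # operation:heavy_search
```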
5. The method of claim 1, wherein each type of target data corresponds to one first preset dimension.
6. The method of claim 5, wherein determining, based on the multiple types of target data, the user characteristics that the target user exhibits in the plurality of first preset dimensions comprises:
translating each type of target data in the multiple types of target data according to a preset rule to obtain corresponding multiple types of structured data;
performing data parsing on each type of structured data in the multiple types of structured data to obtain multiple types of data objects, wherein one type of data object corresponds to one first preset dimension; and
determining, based on the multiple types of data objects, the user characteristics that the target user exhibits in the plurality of first preset dimensions.
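The translate-parse-determine pipeline of claim 6 can be sketched as follows; the pipe-separated record format, the preset rule, and the data-object fields are all illustrative assumptions:

```python
# Sketch of claim 6: translate each kind of target data into structured
# data by a preset rule, parse each into a data object (one per first
# preset dimension), and read user characteristics off the objects.
from dataclasses import dataclass

@dataclass
class DataObject:
    dimension: str  # the first preset dimension this object corresponds to
    value: str

def translate(record):
    # Preset rule (assumed): "type|key|value" -> structured dict.
    kind, key, value = record.split("|")
    return {"type": kind, "key": key, "value": value}

def parse(structured):
    return DataObject(dimension=structured["type"], value=structured["value"])

def characteristics(records):
    objects = [parse(translate(r)) for r in records]
    return {o.dimension: o.value for o in objects}

chars = characteristics(["geographic|city|Hangzhou", "preference|category|books"])
print(chars["preference"])  # books
```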
7. The method of claim 6, wherein the multiple types of target data comprise the prediction data and at least one of: basic data, original recorded data, fact data, and statistical data; the basic data is the user's own data; the original recorded data is the most original recorded data of the user in the product and related products; the fact data is user data based on a fact dimension; and the statistical data is user data dominated by statistical information.
8. The method of claim 7, wherein, in the case that the target data comprises the base data, the method further comprises:
parsing the basic data to obtain one or more of the following parsed data: attribute data, role data, association data, terminal data, and registration data, wherein the first preset dimension corresponding to the basic data comprises: an attribute dimension, a role dimension, an association dimension, a terminal dimension, and a registration dimension; the attribute data includes at least one of: a user identifier, the user's age, the user's gender, the user's native place, the commonly used mobile phone, the birth year and month of the primary account, the constellation, the user's educational background, the user's character, the user's ethnicity, the user's religion, and the user's growth age in the product; the association data includes at least one of: whether the user has a baby, for whom the user purchases, the vehicle the user holds, the user's real estate, the user's company, and the user's income level; the terminal data includes parameters of the terminal used; the registration data includes at least one of: a registration channel, a registration mode, a contact mode, an event, reading interests, and life interests;
in the case that the parsed data of the basic data includes the attribute data, depicting an attribute portrait of the target user in the attribute dimension based on the attribute data;
in the case that the parsed data of the basic data includes the role data, depicting a role portrait of the target user in the role dimension based on the role data;
in the case that the parsed data of the basic data includes the association data, depicting an association portrait of the target user in the association dimension based on the association data;
in the case that the parsed data of the basic data includes the terminal data, depicting a terminal portrait of the target user in the terminal dimension based on the terminal data; and
in the case that the parsed data of the basic data includes the registration data, depicting a registration portrait of the target user in the registration dimension based on the registration data.
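Claim 8's pattern, parse the basic data into parts and depict a sub-portrait only for the parts actually present, can be sketched like this; the part names follow the claim, while the field values are invented:

```python
# Sketch of claim 8: split basic data into attribute/role/association/
# terminal/registration parts; each part present yields a portrait in
# its own dimension, and absent parts yield no portrait.

PARTS = ("attribute", "role", "association", "terminal", "registration")

def parse_basic(basic):
    return {p: basic[p] for p in PARTS if p in basic}

def depict_basic_portraits(basic):
    parsed = parse_basic(basic)
    return {f"{p}_portrait": parsed[p] for p in parsed}

portraits = depict_basic_portraits({
    "attribute": {"age": 30, "gender": "F"},   # hypothetical attribute data
    "terminal": {"device": "phone"},           # hypothetical terminal data
})
print(sorted(portraits))  # ['attribute_portrait', 'terminal_portrait']
```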
9. The method of claim 7, wherein, in the case that the target data includes the raw record data, the method further comprises:
parsing the original recorded data to obtain one or more of the following parsed data: access data, operation data, first consumption data, second consumption data, and feedback data, wherein the first preset dimension corresponding to the original recorded data comprises: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; the access data includes at least one of: element access traces, user display traces, and stay traces in the product; the operation data includes at least one of: traces of browsing, liking, adding-to-cart, favoriting, purchasing, sharing, group-buying, and searching behavior on elements in the product; the first consumption data includes information on actual consumption; the second consumption data includes at least one of: the current total consumption amount, the current goods in transit, and the current amount awaiting payment; the feedback data includes at least one of: customer service feedback content, comment content, rating content, negative feedback content, not-interested content, and complaint content;
in the case that the parsed data of the original recorded data includes the access data, depicting an access portrait of the target user in the access dimension based on the access data;
in the case that the parsed data of the original recorded data includes the operation data, depicting an operation portrait of the target user in the operation dimension based on the operation data;
in the case that the parsed data of the original recorded data includes the first consumption data, depicting a first consumption portrait of the target user in the first consumption dimension based on the first consumption data;
in the case that the parsed data of the original recorded data includes the second consumption data, depicting a second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and
in the case that the parsed data of the original recorded data includes the feedback data, depicting a feedback portrait of the target user in the feedback dimension based on the feedback data.
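The per-stream handling of original recorded data in claim 9 can be sketched as a simple event classifier; the event names and classification sets are illustrative assumptions, not the claim's taxonomy:

```python
# Sketch of claim 9: split raw recorded events into access/operation/
# feedback streams and depict one portrait per stream actually present.

ACCESS = {"view_page", "dwell"}                          # assumed access events
OPERATION = {"browse", "like", "favorite", "share", "search"}  # assumed ops

def classify(event):
    if event in ACCESS:
        return "access"
    if event in OPERATION:
        return "operation"
    return "feedback"  # everything else (e.g. complaints) treated as feedback

def depict_record_portraits(events):
    streams = {}
    for e in events:
        streams.setdefault(classify(e), []).append(e)
    return {f"{kind}_portrait": trace for kind, trace in streams.items()}

p = depict_record_portraits(["view_page", "search", "like", "complaint"])
print(p["operation_portrait"])  # ['search', 'like']
```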
10. The method of claim 7, wherein, in the case that the target data includes the fact data, the method further comprises:
parsing the fact data to obtain one or more of the following parsed data: earliest-M-consumptions data, last-N-consumptions data, first-L-consumptions data, and consumption data within a preset time period, wherein the first preset dimension corresponding to the fact data comprises: an earliest-M-consumptions dimension, a last-N-consumptions dimension, a first-L-consumptions dimension, and a consumption dimension within the preset time period;
in the case that the parsed data of the fact data includes the earliest-M-consumptions data, depicting an earliest-M-consumptions portrait of the target user in the earliest-M-consumptions dimension based on the earliest-M-consumptions data;
in the case that the parsed data of the fact data includes the last-N-consumptions data, depicting a last-N-consumptions portrait of the target user in the last-N-consumptions dimension based on the last-N-consumptions data;
in the case that the parsed data of the fact data includes the first-L-consumptions data, depicting a first-L-consumptions portrait of the target user in the first-L-consumptions dimension based on the first-L-consumptions data; and
in the case that the parsed data of the fact data includes the consumption data within the preset time period, depicting a consumption portrait of the target user for the preset time period in that consumption dimension based on the consumption data within the preset time period.
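The time-window slicing of fact data in claim 10 reduces to a few list operations; the values of M and N and the period bounds below are assumed parameters:

```python
# Sketch of claim 10: slice a user's consumption history into the
# earliest M consumptions, the last N consumptions, and a preset time
# period, yielding one portrait per slice.

def fact_portraits(consumptions, m=2, n=2, period=(100, 200)):
    # consumptions: list of (timestamp, amount) pairs
    ordered = sorted(consumptions)
    return {
        "earliest_m": ordered[:m],
        "last_n": ordered[-n:],
        "in_period": [c for c in ordered if period[0] <= c[0] <= period[1]],
    }

history = [(50, 9.9), (120, 30.0), (180, 12.5), (400, 88.0)]
p = fact_portraits(history)
print(len(p["in_period"]))  # 2
```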
11. The method of claim 7, wherein, in the event that the target data comprises the statistical data, the method further comprises:
parsing the statistical data to obtain one or more of the following parsed data: preference data, geographic data, and rule data, wherein the first preset dimension corresponding to the statistical data comprises: a preference dimension, a geographic dimension, and a rule dimension;
in the case that the parsed data of the statistical data includes the preference data, depicting a preference portrait of the target user in the preference dimension based on the preference data;
in the case that the parsed data of the statistical data includes the geographic data, depicting a geographic portrait of the target user in the geographic dimension based on the geographic data; and
in the case that the parsed data of the statistical data includes the rule data, depicting a rule portrait of the target user in the rule dimension based on the rule data.
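Claim 11's preference/geographic/rule portraits are statistics over user data; one hypothetical aggregation, with invented inputs and a made-up spending rule, might be:

```python
# Sketch of claim 11: derive preference, geographic and rule portraits
# from simple statistics. All aggregation rules are assumptions.
from collections import Counter

def statistical_portraits(visits, cities, orders):
    preference = Counter(visits).most_common(1)[0][0]  # dominant category
    geographic = Counter(cities).most_common(1)[0][0]  # most frequent city
    avg = sum(orders) / len(orders)                    # average order amount
    rule = "high_spender" if avg > 50 else "regular"   # assumed threshold
    return {"preference": preference, "geographic": geographic, "rule": rule}

p = statistical_portraits(
    visits=["books", "books", "games"],
    cities=["Hangzhou", "Hangzhou", "Beijing"],
    orders=[20.0, 35.0],
)
print(p)  # {'preference': 'books', 'geographic': 'Hangzhou', 'rule': 'regular'}
```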
12. The method of claim 7, wherein, in the event that the target data comprises the prediction data, the method further comprises:
parsing the prediction data to obtain one or more of the following parsed data: probability prediction data, risk prediction data, and policy prediction data;
in the case that the parsed data of the prediction data includes the probability prediction data, depicting a probability prediction portrait of the target user in the probability prediction dimension based on the probability prediction data;
in the case that the parsed data of the prediction data includes the risk prediction data, depicting a risk prediction portrait of the target user in the risk prediction dimension based on the risk prediction data; and
in the case that the parsed data of the prediction data includes the policy prediction data, depicting a policy prediction portrait of the target user in the policy prediction dimension based on the policy prediction data.
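Claim 12's three-way split of prediction data can be sketched as below; the score names and thresholds are illustrative assumptions, not values from the patent:

```python
# Sketch of claim 12: split prediction data into probability, risk and
# policy parts and depict only the prediction portraits whose data is
# present. Score-to-portrait thresholds are assumed.

def prediction_portraits(pred):
    out = {}
    if "purchase_prob" in pred:
        out["probability"] = ("likely_buyer" if pred["purchase_prob"] >= 0.5
                              else "browser")
    if "churn_score" in pred:
        out["risk"] = "at_risk" if pred["churn_score"] >= 0.7 else "stable"
    if "shopping_stage" in pred:
        out["policy"] = f"target_stage:{pred['shopping_stage']}"
    return out

p = prediction_portraits({"purchase_prob": 0.8, "churn_score": 0.2,
                          "shopping_stage": "checkout"})
print(p["policy"])  # target_stage:checkout
```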
13. A user portrait depicting apparatus, comprising:
an acquisition module, configured to, for a target user, acquire raw data used to depict a user portrait of the target user; the raw data includes both online and offline data, and the user portrait is used for at least one of: precision marketing, data application, and product analysis;
a determining module, configured to determine, based on the raw data, user characteristics that the target user exhibits in a plurality of first preset dimensions; and
a first depicting module, configured to depict the user portrait of the target user through the user characteristics that the target user exhibits in the plurality of first preset dimensions;
wherein the determining module comprises:
an extraction unit, configured to extract, from the raw data, multiple types of target data related to depicting the user portrait, the multiple types of target data including prediction data; and
a second determining unit, configured to determine, based on the multiple types of target data, the user characteristics that the target user exhibits in the plurality of first preset dimensions;
wherein the first preset dimension corresponding to the prediction data comprises: a probability prediction dimension, a risk prediction dimension, and a policy prediction dimension; the policy prediction dimension comprises at least one of: a shopping-stage policy prediction dimension, a missing-information policy prediction dimension, an operation policy prediction dimension, a customer-value policy prediction dimension, an internal-demand policy prediction dimension, and an aversion-degree policy prediction dimension;
and wherein determining, based on the multiple types of target data, the user characteristics that the target user exhibits in the plurality of first preset dimensions comprises:
for each first preset dimension, analyzing the target data of the corresponding type and determining the user characteristics that the target user exhibits in that first preset dimension.
14. The apparatus of claim 13, wherein the apparatus further comprises:
a labeling module, configured to, in the process of depicting the user portrait of the target user, label the user characteristics expressed by the target user in at least one dimension of the plurality of first preset dimensions to obtain corresponding user characteristic labels; and
a second depicting module, configured to depict the user portrait of the target user through the user characteristic labels and other user characteristics, wherein the other user characteristics comprise: user characteristics that the target user exhibits in dimensions of the plurality of first preset dimensions other than the at least one dimension.
15. The apparatus of claim 14, wherein the tagging module comprises:
a first determining unit, configured to determine, for each dimension of the at least one dimension, a category of the user characteristics expressed by the target user in that dimension;
an analysis unit, configured to perform semantic analysis on the user characteristics expressed by the target user in that dimension to obtain semantics corresponding to the user characteristics of that category; and
a first generating unit, configured to associate the category with the semantics corresponding to the user characteristics of that category to obtain the user characteristic label in that dimension.
16. The apparatus of claim 14, wherein the tagging module comprises:
a statistics unit, configured to, for each dimension in the at least one dimension, count the user characteristics expressed by the target user in that dimension to obtain a corresponding statistical result;
an acquisition unit, configured to acquire externally input additional information; and
a second generating unit, configured to label the user characteristics expressed by the target user in that dimension based on the statistical result and the additional information to obtain the user characteristic label in that dimension.
17. The apparatus of claim 13, wherein each type of target data corresponds to one first preset dimension.
18. The apparatus of claim 17, wherein the second determining unit comprises:
a translation subunit, configured to translate each type of target data in the multiple types of target data according to a preset rule to obtain corresponding multiple types of structured data;
a parsing subunit, configured to perform data parsing on each type of structured data in the multiple types of structured data to obtain multiple types of data objects, wherein one type of data object corresponds to one first preset dimension; and
a determining subunit, configured to determine, based on the multiple types of data objects, the user characteristics that the target user exhibits in the plurality of first preset dimensions.
19. The apparatus of claim 18, wherein the multiple types of target data comprise the prediction data and at least one of: basic data, original recorded data, fact data, and statistical data; the basic data is the user's own data; the original recorded data is the most original recorded data of the user in the product and related products; the fact data is user data based on a fact dimension; and the statistical data is user data dominated by statistical information.
20. The apparatus of claim 19, wherein:
the second determining unit is further configured to, in the case that the target data includes the basic data, parse the basic data to obtain one or more of the following parsed data: attribute data, role data, association data, terminal data, and registration data, wherein the first preset dimension corresponding to the basic data comprises: an attribute dimension, a role dimension, an association dimension, a terminal dimension, and a registration dimension; the attribute data includes at least one of: a user identifier, the user's age, the user's gender, the user's native place, the commonly used mobile phone, the birth year and month of the primary account, the constellation, the user's educational background, the user's character, the user's ethnicity, the user's religion, and the user's growth age in the product; the association data includes at least one of: whether the user has a baby, for whom the user purchases, the vehicle the user holds, the user's real estate, the user's company, and the user's income level; the terminal data includes parameters of the terminal used; the registration data includes at least one of: a registration channel, a registration mode, a contact mode, an event, reading interests, and life interests; and
the first depicting module is further configured to:
in the case that the parsed data of the basic data includes the attribute data, depict an attribute portrait of the target user in the attribute dimension based on the attribute data;
in the case that the parsed data of the basic data includes the role data, depict a role portrait of the target user in the role dimension based on the role data;
in the case that the parsed data of the basic data includes the association data, depict an association portrait of the target user in the association dimension based on the association data;
in the case that the parsed data of the basic data includes the terminal data, depict a terminal portrait of the target user in the terminal dimension based on the terminal data; and
in the case that the parsed data of the basic data includes the registration data, depict a registration portrait of the target user in the registration dimension based on the registration data.
21. The apparatus of claim 19, wherein:
the second determining unit is further configured to, in the case that the target data includes the original recorded data, parse the original recorded data to obtain one or more of the following parsed data: access data, operation data, first consumption data, second consumption data, and feedback data, wherein the first preset dimension corresponding to the original recorded data comprises: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; the access data includes at least one of: element access traces, user display traces, and stay traces in the product; the operation data includes at least one of: traces of browsing, liking, adding-to-cart, favoriting, purchasing, sharing, group-buying, and searching behavior on elements in the product; the first consumption data includes information on actual consumption; the second consumption data includes at least one of: the current total consumption amount, the current goods in transit, and the current amount awaiting payment; the feedback data includes at least one of: customer service feedback content, comment content, rating content, negative feedback content, not-interested content, and complaint content; and
the first depicting module is further configured to:
in the case that the parsed data of the original recorded data includes the access data, depict an access portrait of the target user in the access dimension based on the access data;
in the case that the parsed data of the original recorded data includes the operation data, depict an operation portrait of the target user in the operation dimension based on the operation data;
in the case that the parsed data of the original recorded data includes the first consumption data, depict a first consumption portrait of the target user in the first consumption dimension based on the first consumption data;
in the case that the parsed data of the original recorded data includes the second consumption data, depict a second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and
in the case that the parsed data of the original recorded data includes the feedback data, depict a feedback portrait of the target user in the feedback dimension based on the feedback data.
22. The apparatus of claim 19, wherein:
the second determining unit is further configured to, in the case that the target data includes the fact data, parse the fact data to obtain one or more of the following parsed data: earliest-M-consumptions data, last-N-consumptions data, first-L-consumptions data, and consumption data within a preset time period, wherein the first preset dimension corresponding to the fact data comprises: an earliest-M-consumptions dimension, a last-N-consumptions dimension, a first-L-consumptions dimension, and a consumption dimension within the preset time period; and
the first depicting module is further configured to:
in the case that the parsed data of the fact data includes the earliest-M-consumptions data, depict an earliest-M-consumptions portrait of the target user in the earliest-M-consumptions dimension based on the earliest-M-consumptions data;
in the case that the parsed data of the fact data includes the last-N-consumptions data, depict a last-N-consumptions portrait of the target user in the last-N-consumptions dimension based on the last-N-consumptions data;
in the case that the parsed data of the fact data includes the first-L-consumptions data, depict a first-L-consumptions portrait of the target user in the first-L-consumptions dimension based on the first-L-consumptions data; and
in the case that the parsed data of the fact data includes the consumption data within the preset time period, depict a consumption portrait of the target user for the preset time period in that consumption dimension based on the consumption data within the preset time period.
23. The apparatus of claim 19, wherein:
the second determining unit is further configured to, in the case that the target data includes the statistical data, parse the statistical data to obtain one or more of the following parsed data: preference data, geographic data, and rule data, wherein the first preset dimension corresponding to the statistical data comprises: a preference dimension, a geographic dimension, and a rule dimension; and
the first depicting module is further configured to:
in the case that the parsed data of the statistical data includes the preference data, depict a preference portrait of the target user in the preference dimension based on the preference data;
in the case that the parsed data of the statistical data includes the geographic data, depict a geographic portrait of the target user in the geographic dimension based on the geographic data; and
in the case that the parsed data of the statistical data includes the rule data, depict a rule portrait of the target user in the rule dimension based on the rule data.
24. The apparatus of claim 19, wherein:
the second determining unit is further configured to, in the case that the target data includes the prediction data, parse the prediction data to obtain one or more of the following parsed data: probability prediction data, risk prediction data, and policy prediction data; and
the first depicting module is further configured to:
in the case that the parsed data of the prediction data includes the probability prediction data, depict a probability prediction portrait of the target user in the probability prediction dimension based on the probability prediction data;
in the case that the parsed data of the prediction data includes the risk prediction data, depict a risk prediction portrait of the target user in the risk prediction dimension based on the risk prediction data; and
in the case that the parsed data of the prediction data includes the policy prediction data, depict a policy prediction portrait of the target user in the policy prediction dimension based on the policy prediction data.
25. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processing module, implement the user portrait depicting method of any one of claims 1 to 12.
26. A computing device, comprising:
a processing module; and
a storage module having stored thereon executable instructions which, when executed by the processing module, implement the user portrait depicting method of any one of claims 1 to 12.
CN201810037604.1A 2018-01-15 2018-01-15 User portrait depicting method, device, medium and computing equipment Active CN108154401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810037604.1A CN108154401B (en) 2018-01-15 2018-01-15 User portrait depicting method, device, medium and computing equipment


Publications (2)

Publication Number Publication Date
CN108154401A CN108154401A (en) 2018-06-12
CN108154401B true CN108154401B (en) 2022-03-29

Family

ID=62461413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810037604.1A Active CN108154401B (en) 2018-01-15 2018-01-15 User portrait depicting method, device, medium and computing equipment

Country Status (1)

Country Link
CN (1) CN108154401B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118270B (en) * 2018-07-12 2021-04-06 北京猫眼文化传媒有限公司 Data extraction method and device
CN109255522A (en) * 2018-08-14 2019-01-22 殷肇良 A kind of user management method
CN110020196B (en) * 2018-08-22 2021-08-06 龙凯 User analysis method and device based on different data sources and computing equipment
CN109318902A (en) * 2018-09-27 2019-02-12 上海蔚来汽车有限公司 User's operation bootstrap technique, system and controller based on user's operation behavior
CN109409936A (en) * 2018-09-28 2019-03-01 深圳壹账通智能科技有限公司 Customer consumption portrait generation method, device, equipment and readable storage medium storing program for executing
CN109359244B (en) * 2018-10-30 2021-07-20 中国科学院计算技术研究所 Personalized information recommendation method and device
CN109919437B (en) * 2019-01-29 2020-01-31 特斯联(北京)科技有限公司 big data-based intelligent tourism target matching method and system
CN110020201B (en) * 2019-03-26 2021-05-25 中国科学院软件研究所 User type automatic labeling system based on user portrait clustering
CN111966885B (en) * 2019-05-20 2023-10-31 腾讯科技(深圳)有限公司 User portrait construction method and device
CN110472145B (en) * 2019-07-25 2022-11-29 维沃移动通信有限公司 Content recommendation method and electronic equipment
CN112287208B (en) * 2019-09-30 2024-03-01 北京沃东天骏信息技术有限公司 User portrait generation method, device, electronic equipment and storage medium
CN111046263A (en) * 2019-11-22 2020-04-21 广东机电职业技术学院 Student learning interest portrait generation system, method and device and storage medium
CN111160604A (en) * 2019-11-22 2020-05-15 深圳壹账通智能科技有限公司 Missing information prediction method and device, computer equipment and storage medium
CN111104596A (en) * 2019-12-17 2020-05-05 腾讯科技(深圳)有限公司 Information processing method and device, electronic equipment and storage medium
CN111223235A (en) * 2019-12-27 2020-06-02 合肥美的智能科技有限公司 Commodity putting method of unmanned cabinet, unmanned cabinet and control device of unmanned cabinet
CN113641408A (en) * 2020-04-23 2021-11-12 百度在线网络技术(北京)有限公司 Method and device for generating shortcut entrance
CN111709813B (en) * 2020-06-19 2021-04-16 省广营销集团有限公司 Commodity recommendation method based on big data line
CN111861550B (en) * 2020-07-08 2023-09-08 上海视九信息科技有限公司 Family portrait construction method and system based on OTT equipment
CN112364222B (en) * 2021-01-13 2021-04-27 北京云真信科技有限公司 Regional portrait method of user age, computer equipment and storage medium
CN113781151A (en) * 2021-01-29 2021-12-10 北京京东拓先科技有限公司 Target data determination method and device, electronic equipment and storage medium
CN113011968B (en) * 2021-03-30 2023-07-14 腾讯科技(深圳)有限公司 Account state detection method and device, storage medium and electronic equipment
CN113297479A (en) * 2021-04-29 2021-08-24 上海淇玥信息技术有限公司 User portrait generation method and device and electronic equipment
CN114969558B (en) * 2022-08-03 2022-11-08 安徽商信政通信息技术股份有限公司 User portrait generation method and system based on user behavior habit analysis
CN116401464B (en) * 2023-06-02 2023-08-04 深圳市一览网络股份有限公司 Professional user portrait construction method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741134A (en) * 2016-01-26 2016-07-06 北京百分点信息科技有限公司 Method and apparatus for applying cross-data-source marketing crowds to marketing
CN106504099A (en) * 2015-09-07 2017-03-15 国家计算机网络与信息安全管理中心 A kind of system for building user's portrait
CN106529177A (en) * 2016-11-12 2017-03-22 杭州电子科技大学 Patient portrait drawing method and device based on medical big data
CN106874266A (en) * 2015-12-10 2017-06-20 中国电信股份有限公司 User's portrait method and the device for user's portrait

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003258556A1 (en) * 2003-08-01 2005-03-16 Centrum Fur Ertragsoptimierung Aktiengesellschaft Measuring method and pattern recognition machine for identifying a vector characteristic of business management of a subject of knowledge and method and machine for automatically characterizing a subject of knowledge from the point of view of business management
CN103593690B (en) * 2013-11-25 2017-08-08 北京光年无限科技有限公司 User's intelligent tagging systems
CN106339891A (en) * 2015-07-13 2017-01-18 上海银橙文化传媒股份有限公司 Intelligent analysis method and system based on large data acquisition
CN106503015A (en) * 2015-09-07 2017-03-15 国家计算机网络与信息安全管理中心 A kind of method for building user's portrait
CN107526754A (en) * 2016-09-26 2017-12-29 广州速鸿信息科技有限公司 A kind of user's portrait platform method for building up based on big data
CN107423442B (en) * 2017-08-07 2020-09-25 火烈鸟网络(广州)股份有限公司 Application recommendation method and system based on user portrait behavior analysis, storage medium and computer equipment

Also Published As

Publication number Publication date
CN108154401A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
CN108154401B (en) User portrait depicting method, device, medium and computing equipment
US10846617B2 (en) Context-aware recommendation system for analysts
Jerath et al. Consumer click behavior at a search engine: The role of keyword popularity
Neslin et al. Defection detection: Measuring and understanding the predictive accuracy of customer churn models
Luo et al. Recovering hidden buyer–seller relationship states to measure the return on marketing investment in business-to-business markets
US11295375B1 (en) Machine learning based computer platform, computer-implemented method, and computer program product for finding right-fit technology solutions for business needs
WO2017190610A1 (en) Target user orientation method and device, and computer storage medium
US20090319365A1 (en) System and method for assessing marketing data
US20190213194A1 (en) System and method for information recommendation
Wang et al. A strategy-oriented operation module for recommender systems in E-commerce
Wang et al. Database submission—market dynamics and user-generated content about tablet computers
CN115002200B (en) Message pushing method, device, equipment and storage medium based on user portrait
CN111902837A (en) Apparatus, method, and program for analyzing attribute information of customer
CN112700271A (en) Big data image drawing method and system based on label model
CN111429161B (en) Feature extraction method, feature extraction device, storage medium and electronic equipment
Byrne The digital economy and productivity
US20150142782A1 (en) Method for associating metadata with images
CN115563176A (en) Electronic commerce data processing system and method
Shanahan et al. Digital advertising: An information scientist’s perspective
KR102429104B1 (en) Product catalog automatic classification system based on artificial intelligence
KR102238438B1 (en) System for providing commercial product transaction service using price standardization
KR102405503B1 (en) Method for creating predictive market growth index using transaction data and social data, system for creating predictive market growth index using the same and computer program for the same
Osaysa Improving the quality of marketing analytics systems
US20210350224A1 (en) Methods and systems for evaluating a new application
CN113254775A (en) Credit card product recommendation method based on client browsing behavior sequence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191106

Address after: 310012 Block G, 10th Floor, Building A, Paradise Software Park, No. 3 Xidoumen Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 310051 room 803, Qianlong building, 1786 Jianghan Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: NETEASE KOALA (HANGZHOU) TECH CO.,LTD.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221114

Address after: 518067 Room 501, building S1, Alibaba cloud building, No. 3239, Keyuan South Road, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Alibaba (Shenzhen) Technology Co.,Ltd.

Address before: 310012 Block G, 10th Floor, Building A, Paradise Software Park, No. 3 Xidoumen Road, Xihu District, Hangzhou City, Zhejiang Province

Patentee before: Alibaba (China) Co.,Ltd.