Disclosure of Invention
A prior user portrait can only reflect a user's characteristics in one specific field and cannot reflect the user's characteristics in other fields, so the application field of the user portrait is very limited.
Consequently, existing user portraits cannot be applied in wider fields.
For this reason, there is a strong need for an improved user portrait depicting method that can be applied in a wider range of fields.
In this context, embodiments of the present invention aim to provide an improved user portrait depicting method and apparatus.
In a first aspect of embodiments of the present invention, there is provided a user portrait depicting method, comprising: for a target user, acquiring raw data used for depicting a user portrait of the target user; determining, based on the raw data, user characteristics expressed by the target user in a plurality of first preset dimensions; and depicting the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions.
In one embodiment of the present invention, in the process of depicting the user portrait of the target user: the user characteristics expressed by the target user in at least one of the plurality of first preset dimensions are labeled to obtain corresponding user feature tags; and the user portrait of the target user is depicted through the user feature tags and other user features, wherein the other user features include: user features expressed by the target user in dimensions of the plurality of first preset dimensions other than the at least one dimension.
In another embodiment of the present invention, labeling the user characteristics expressed by the target user in at least one of the plurality of first preset dimensions to obtain corresponding user feature tags includes: for each of the at least one dimension, determining a category of the user characteristics expressed by the target user in that dimension; performing semantic analysis on the user characteristics expressed by the target user in that dimension to obtain semantics corresponding to the user characteristics of the category; and associating the category with the semantics corresponding to the user characteristics of the category to obtain the user feature tag in that dimension.
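The tagging flow of this embodiment — determine a category, run semantic analysis, then associate category and semantics into a tag — can be sketched minimally in Python. All function and variable names here are hypothetical illustrations, not part of the embodiments, and the semantic-analysis step is stubbed out with simple text normalization.

```python
def semantic_analysis(feature: str) -> str:
    """Stand-in for a real semantic-analysis step: here it merely
    normalizes the raw feature text to a short description."""
    return feature.strip().lower()

def build_feature_tags(features_by_dimension: dict) -> dict:
    """For each dimension, associate the feature's category with its
    semantics to form a user feature tag in that dimension."""
    tags = {}
    for dimension, (category, feature) in features_by_dimension.items():
        semantics = semantic_analysis(feature)
        tags[dimension] = f"{category}:{semantics}"
    return tags

tags = build_feature_tags({
    "preference": ("genre", "  Science Fiction "),
    "geographic": ("region", "East Coast"),
})
print(tags)  # {'preference': 'genre:science fiction', 'geographic': 'region:east coast'}
```

In practice the semantic-analysis step would be replaced by a real NLP component; only the category-plus-semantics association pattern is the point here.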
In another embodiment of the present invention, labeling the user characteristics expressed by the target user in at least one of the plurality of first preset dimensions to obtain corresponding user feature tags includes: for each of the at least one dimension, counting the user characteristics expressed by the target user in that dimension to obtain a corresponding statistical result; acquiring externally input additional information; and labeling, based on the statistical result and the additional information, the user characteristics expressed by the target user in that dimension to obtain the user feature tag in that dimension.
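The statistics-plus-additional-information variant above can be sketched as follows; the names and the label structure are hypothetical, chosen only to illustrate combining a statistical result with externally supplied information.

```python
from collections import Counter

def label_with_statistics(observed_features, additional_info):
    """Count the features expressed in one dimension and combine the
    dominant one with externally input additional information."""
    counts = Counter(observed_features)
    top_feature, top_count = counts.most_common(1)[0]
    # The additional information (e.g. an operator-supplied note) is
    # folded into the label alongside the statistical result.
    return {"label": top_feature, "count": top_count, "note": additional_info}

label = label_with_statistics(
    ["sports", "news", "sports", "sports"],
    additional_info="weekend-heavy viewer")
print(label)  # {'label': 'sports', 'count': 3, 'note': 'weekend-heavy viewer'}
```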
In another embodiment of the present invention, determining, based on the raw data, the user characteristics that the target user exhibits in the plurality of first preset dimensions includes: extracting multiple types of target data related to depicting the user portrait from the raw data; and determining, based on the multiple types of target data, the user characteristics expressed by the target user in the plurality of first preset dimensions, wherein one type of target data corresponds to one first preset dimension.
In another embodiment of the present invention, determining, based on the multiple types of target data, the user characteristics that the target user exhibits in the plurality of first preset dimensions includes: converting each type of target data among the multiple types of target data according to a preset rule to obtain corresponding multiple types of structured data; performing data analysis on each type of structured data among the multiple types of structured data to obtain multiple types of data objects, wherein one type of data object corresponds to one first preset dimension; and determining, based on the multiple types of data objects, the user characteristics expressed by the target user in the plurality of first preset dimensions.
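The three-stage pipeline of this embodiment — target data converted by a preset rule into structured data, structured data analyzed into per-dimension data objects, then user characteristics derived from the data objects — can be sketched as below. The rule, the record format and the "access" dimension are assumptions made purely for illustration.

```python
def to_structured(target_data, rule):
    # Convert one type of target data into structured records
    # according to a preset rule (here, the rule is a parser function).
    return [rule(item) for item in target_data]

def to_data_object(structured, dimension):
    # One data object per first preset dimension.
    return {"dimension": dimension, "records": structured}

def extract_features(data_object):
    # Derive the user characteristic expressed in this dimension
    # (here trivially: the number of records).
    return {data_object["dimension"]: len(data_object["records"])}

raw_visits = ["2024-01-01 /home", "2024-01-02 /search"]
structured = to_structured(raw_visits, rule=lambda line: line.split(" ", 1))
obj = to_data_object(structured, dimension="access")
print(extract_features(obj))  # {'access': 2}
```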
In still another embodiment of the present invention, the multiple types of target data include at least two of the following: basic data, original record data, fact data, statistical data, and prediction data.
In a further embodiment of the present invention, in a case where the target data includes the basic data, the method further includes: parsing the basic data to obtain one or more of the following parsed data: attribute data, role data, association data, terminal data and registration data, wherein the first preset dimensions corresponding to the basic data include: an attribute dimension, a role dimension, an association dimension, a terminal dimension and a registration dimension; when the parsed data of the basic data includes the attribute data, depicting an attribute portrait of the target user in the attribute dimension based on the attribute data; when the parsed data of the basic data includes the role data, depicting a role portrait of the target user in the role dimension based on the role data; when the parsed data of the basic data includes the association data, depicting an association portrait of the target user in the association dimension based on the association data; when the parsed data of the basic data includes the terminal data, depicting a terminal portrait of the target user in the terminal dimension based on the terminal data; and when the parsed data of the basic data includes the registration data, depicting a registration portrait of the target user in the registration dimension based on the registration data.
In a further embodiment of the present invention, in a case where the target data includes the original record data, the method further includes: parsing the original record data to obtain one or more of the following parsed data: access data, operation data, first consumption data, second consumption data and feedback data, wherein the first preset dimensions corresponding to the original record data include: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; when the parsed data of the original record data includes the access data, depicting an access portrait of the target user in the access dimension based on the access data; when the parsed data of the original record data includes the operation data, depicting an operation portrait of the target user in the operation dimension based on the operation data; when the parsed data of the original record data includes the first consumption data, depicting a first consumption portrait of the target user in the first consumption dimension based on the first consumption data; when the parsed data of the original record data includes the second consumption data, depicting a second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and when the parsed data of the original record data includes the feedback data, depicting a feedback portrait of the target user in the feedback dimension based on the feedback data.
In a further embodiment of the present invention, in a case where the target data includes the fact data, the method further includes: parsing the fact data to obtain one or more of the following parsed data: consumption data of the earliest M times, consumption data of the last N times, consumption data of the first L times, and consumption data within a preset time period, wherein the first preset dimensions corresponding to the fact data include: an earliest-M-consumptions dimension, a last-N-consumptions dimension, a first-L-consumptions dimension, and a consumption-within-a-preset-time-period dimension; when the parsed data of the fact data includes the consumption data of the earliest M times, depicting an earliest-M-consumptions portrait of the target user in the earliest-M-consumptions dimension based on that data; when the parsed data of the fact data includes the consumption data of the last N times, depicting a last-N-consumptions portrait of the target user in the last-N-consumptions dimension based on that data; when the parsed data of the fact data includes the consumption data of the first L times, depicting a first-L-consumptions portrait of the target user in the first-L-consumptions dimension based on that data; and when the parsed data of the fact data includes the consumption data within the preset time period, depicting a consumption-within-the-preset-time-period portrait of the target user in the corresponding dimension based on that data.
In a further embodiment of the present invention, in a case where the target data includes the statistical data, the method further includes: parsing the statistical data to obtain one or more of the following parsed data: preference data, geographic data and rule data, wherein the first preset dimensions corresponding to the statistical data include: a preference dimension, a geographic dimension, and a rule dimension; when the parsed data of the statistical data includes the preference data, depicting a preference portrait of the target user in the preference dimension based on the preference data; when the parsed data of the statistical data includes the geographic data, depicting a geographic portrait of the target user in the geographic dimension based on the geographic data; and when the parsed data of the statistical data includes the rule data, depicting a rule portrait of the target user in the rule dimension based on the rule data.
In a further embodiment of the present invention, in a case where the target data includes the prediction data, the method further includes: parsing the prediction data to obtain one or more of the following parsed data: probability prediction data, risk prediction data and strategy prediction data, wherein the first preset dimensions corresponding to the prediction data include: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; when the parsed data of the prediction data includes the probability prediction data, depicting a probability prediction portrait of the target user in the probability prediction dimension based on the probability prediction data; when the parsed data of the prediction data includes the risk prediction data, depicting a risk prediction portrait of the target user in the risk prediction dimension based on the risk prediction data; and when the parsed data of the prediction data includes the strategy prediction data, depicting a strategy prediction portrait of the target user in the strategy prediction dimension based on the strategy prediction data.
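The correspondence between the five types of target data and their first preset dimensions, as enumerated across the embodiments above, can be summarized in a small lookup structure. The dimension names come from the embodiments; the dictionary itself and the helper function are only an illustrative sketch.

```python
# Map each target-data type to the first preset dimensions the
# embodiments associate with it.
DIMENSIONS_BY_TARGET_DATA = {
    "basic data": ["attribute", "role", "association", "terminal", "registration"],
    "original record data": ["access", "operation", "first consumption",
                             "second consumption", "feedback"],
    "fact data": ["earliest M consumptions", "last N consumptions",
                  "first L consumptions", "consumption in preset period"],
    "statistical data": ["preference", "geographic", "rule"],
    "prediction data": ["probability prediction", "risk prediction",
                        "strategy prediction"],
}

def dimensions_for(target_types):
    """Collect the preset dimensions covered by a chosen set of
    target-data types (the embodiments use at least two types)."""
    assert len(target_types) >= 2, "embodiments use at least two types"
    return [d for t in target_types for d in DIMENSIONS_BY_TARGET_DATA[t]]

print(dimensions_for(["statistical data", "prediction data"]))
```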
In a second aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processing module, implement the user portrait depicting method according to any one of the above embodiments.
In a third aspect of embodiments of the present invention, there is provided a user portrait depicting apparatus, comprising: an acquisition module, configured to acquire, for a target user, raw data used for depicting a user portrait of the target user; a determining module, configured to determine, based on the raw data, user characteristics expressed by the target user in a plurality of first preset dimensions; and a first depicting module, configured to depict the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions.
In an embodiment of the present invention, the apparatus further includes: a labeling module, configured to label, in the process of depicting the user portrait of the target user, the user characteristics expressed by the target user in at least one of the plurality of first preset dimensions to obtain corresponding user feature tags; and a second depicting module, configured to depict the user portrait of the target user through the user feature tags and other user features, wherein the other user features include: user features expressed by the target user in dimensions of the plurality of first preset dimensions other than the at least one dimension.
In another embodiment of the present invention, the labeling module includes: a first determining unit, configured to determine, for each of the at least one dimension, a category of the user characteristics expressed by the target user in that dimension; an analysis unit, configured to perform semantic analysis on the user characteristics expressed by the target user in that dimension to obtain semantics corresponding to the user characteristics of the category; and a first generating unit, configured to associate the category with the semantics corresponding to the user characteristics of the category to obtain the user feature tag in that dimension.
In another embodiment of the present invention, the labeling module includes: a counting unit, configured to count, for each of the at least one dimension, the user characteristics expressed by the target user in that dimension to obtain a corresponding statistical result; an acquisition unit, configured to acquire externally input additional information; and a second generating unit, configured to label, based on the statistical result and the additional information, the user characteristics expressed by the target user in that dimension to obtain the user feature tag in that dimension.
In yet another embodiment of the present invention, the determining module includes: an extraction unit, configured to extract multiple types of target data related to depicting the user portrait from the raw data; and a second determining unit, configured to determine, based on the multiple types of target data, the user characteristics that the target user exhibits in the plurality of first preset dimensions, wherein one type of target data corresponds to one first preset dimension.
In still another embodiment of the present invention, the second determining unit includes: a conversion subunit, configured to convert each type of target data among the multiple types of target data according to a preset rule to obtain corresponding multiple types of structured data; an analysis subunit, configured to perform data analysis on each type of structured data among the multiple types of structured data to obtain multiple types of data objects, wherein one type of data object corresponds to one first preset dimension; and a determining subunit, configured to determine, based on the multiple types of data objects, the user characteristics that the target user exhibits in the plurality of first preset dimensions.
In still another embodiment of the present invention, the multiple types of target data include at least two of the following: basic data, original record data, fact data, statistical data, and prediction data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the basic data, parse the basic data to obtain one or more of the following parsed data: attribute data, role data, association data, terminal data and registration data, wherein the first preset dimensions corresponding to the basic data include: an attribute dimension, a role dimension, an association dimension, a terminal dimension and a registration dimension; and the first depicting module is further configured to: when the parsed data of the basic data includes the attribute data, depict an attribute portrait of the target user in the attribute dimension based on the attribute data; when the parsed data of the basic data includes the role data, depict a role portrait of the target user in the role dimension based on the role data; when the parsed data of the basic data includes the association data, depict an association portrait of the target user in the association dimension based on the association data; when the parsed data of the basic data includes the terminal data, depict a terminal portrait of the target user in the terminal dimension based on the terminal data; and when the parsed data of the basic data includes the registration data, depict a registration portrait of the target user in the registration dimension based on the registration data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the original record data, parse the original record data to obtain one or more of the following parsed data: access data, operation data, first consumption data, second consumption data and feedback data, wherein the first preset dimensions corresponding to the original record data include: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; and the first depicting module is further configured to: when the parsed data of the original record data includes the access data, depict an access portrait of the target user in the access dimension based on the access data; when the parsed data of the original record data includes the operation data, depict an operation portrait of the target user in the operation dimension based on the operation data; when the parsed data of the original record data includes the first consumption data, depict a first consumption portrait of the target user in the first consumption dimension based on the first consumption data; when the parsed data of the original record data includes the second consumption data, depict a second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and when the parsed data of the original record data includes the feedback data, depict a feedback portrait of the target user in the feedback dimension based on the feedback data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the fact data, parse the fact data to obtain one or more of the following parsed data: consumption data of the earliest M times, consumption data of the last N times, consumption data of the first L times, and consumption data within a preset time period, wherein the first preset dimensions corresponding to the fact data include: an earliest-M-consumptions dimension, a last-N-consumptions dimension, a first-L-consumptions dimension, and a consumption-within-a-preset-time-period dimension; and the first depicting module is further configured to: when the parsed data of the fact data includes the consumption data of the earliest M times, depict an earliest-M-consumptions portrait of the target user in the earliest-M-consumptions dimension based on that data; when the parsed data of the fact data includes the consumption data of the last N times, depict a last-N-consumptions portrait of the target user in the last-N-consumptions dimension based on that data; when the parsed data of the fact data includes the consumption data of the first L times, depict a first-L-consumptions portrait of the target user in the first-L-consumptions dimension based on that data; and when the parsed data of the fact data includes the consumption data within the preset time period, depict a consumption-within-the-preset-time-period portrait of the target user in the corresponding dimension based on that data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the statistical data, parse the statistical data to obtain one or more of the following parsed data: preference data, geographic data and rule data, wherein the first preset dimensions corresponding to the statistical data include: a preference dimension, a geographic dimension, and a rule dimension; and the first depicting module is further configured to: when the parsed data of the statistical data includes the preference data, depict a preference portrait of the target user in the preference dimension based on the preference data; when the parsed data of the statistical data includes the geographic data, depict a geographic portrait of the target user in the geographic dimension based on the geographic data; and when the parsed data of the statistical data includes the rule data, depict a rule portrait of the target user in the rule dimension based on the rule data.
In a further embodiment of the present invention, the second determining unit is further configured to, in a case where the target data includes the prediction data, parse the prediction data to obtain one or more of the following parsed data: probability prediction data, risk prediction data and strategy prediction data, wherein the first preset dimensions corresponding to the prediction data include: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; and the first depicting module is further configured to: when the parsed data of the prediction data includes the probability prediction data, depict a probability prediction portrait of the target user in the probability prediction dimension based on the probability prediction data; when the parsed data of the prediction data includes the risk prediction data, depict a risk prediction portrait of the target user in the risk prediction dimension based on the risk prediction data; and when the parsed data of the prediction data includes the strategy prediction data, depict a strategy prediction portrait of the target user in the strategy prediction dimension based on the strategy prediction data.
In a fourth aspect of embodiments of the present invention, there is provided a computing device comprising: a processing module; and a storage module having stored thereon executable instructions which, when executed by the processing module, implement the user portrait depicting method according to any one of the above embodiments.
According to the user portrait depicting method and apparatus of embodiments of the present invention, the portrait of any user is depicted in a plurality of dimensions capable of reflecting different characteristics of the user. The user portrait can thus be depicted from multiple angles and at multiple levels, which at least partially overcomes the defect in the related art that the application field of the user portrait is limited because it is depicted only at the communication level or the family-attribute-association level, and significantly expands the application field of the user portrait.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiment of the invention, a user portrait depicting method, a medium, a device and a computing device are provided.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of the Invention
In the process of implementing the embodiments of the present invention, the inventors found that: an existing user portrait can only reflect a user's characteristics in a specific field and cannot reflect the user's characteristics in other fields. For example, some user portraits can only be applied in the communication field and some only in the internet field; moreover, the user portraits applied in the internet field can only reflect the association relationships of the user's family attributes.
The embodiments of the present invention provide a user portrait depicting method and apparatus, the method comprising: for a target user, acquiring raw data used for depicting a user portrait of the target user; determining, based on the raw data, user characteristics expressed by the target user in a plurality of first preset dimensions; and depicting the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions. By depicting any user portrait in a plurality of dimensions capable of reflecting different characteristics of the user, the invention depicts the user portrait from multiple angles and at multiple levels, at least partially overcomes the defect in the related art that the application field of the user portrait is limited because it is depicted only at the communication level or the family-attribute-association level, and significantly expands the application field of the user portrait.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application Scenario Overview
An exemplary application scenario of the user portrait depicting method and apparatus according to the embodiments of the present invention is first described in detail with reference to FIG. 1.
As the internet has moved into the big data era, products and user behaviors have inevitably changed: all user behaviors need to be "visible" to the products. With the deepening research and application of big data technology, enterprises increasingly focus on how to use big data for precise marketing and, further, to mine latent commercial value.
User portraits are now basic production data for both online and offline products. Comprehensively depicting a user through a user portrait helps an enterprise understand user characteristics more thoroughly and deeply, adjust its marketing strategies, or use server processing capacity to automatically produce a better-matched personalized marketing plan, thereby increasing the probability of a transaction.
User portraits are used in a wide variety of applications; as shown in FIG. 1, the user portrait 110 may be used in an advertising system 120, a push system 130, a recommendation system 140, and a search system 150.
In particular, a user portrait may be used primarily for: (1) precise marketing: segmenting user groups at a finer granularity and, assisted by means such as SMS and email pushes and promotional activities, caring for, winning back, or incentivizing different users; (2) data applications: for example recommendation systems, advertising systems and search systems, where strategies customized from user data, or machine learning over the feature data of a user portrait, can improve product conversion rates or user experience; (3) product analysis: the data warehouse and the various service-level labels are natural elements of multi-dimensional analysis, so relevant data can be queried through a data query platform.
In short, only a product that truly understands its users can win them over and capture a larger market; the importance of the user portrait is therefore self-evident. The user portrait is a necessary data warehouse for a good product and also an important model in AI systems.
It should be understood that the application scenarios of the present embodiment are merely illustrative and are not intended to limit or otherwise narrow the scope of the present disclosure.
Exemplary method
A user portrait depicting method in accordance with an exemplary embodiment of the present invention is described below with reference to FIG. 2. It should be noted that the above application scenarios are illustrated merely for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, the embodiments of the present invention may be applied to any applicable scenario.
The embodiment of the invention provides a user portrait depicting method.
FIG. 2 schematically illustrates a flow diagram of a user portrait depicting method according to an embodiment of the present invention. As shown in FIG. 2, the method may include the following operations:
operation S210, for a target user, acquiring raw data used for depicting a user portrait of the target user;
operation S220, determining, based on the raw data, user characteristics expressed by the target user in a plurality of first preset dimensions; and
operation S230, depicting the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions.
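The three operations above can be sketched end to end in Python. The data sources, the record format and the per-dimension summary are hypothetical placeholders; the sketch only shows the S210 → S220 → S230 flow, not an actual implementation.

```python
def acquire_raw_data(user_id):
    # S210: stand-in for reading the target user's raw data
    # from a file store / data store.
    return {"visits": ["..."], "purchases": ["..."]}

def determine_features(raw_data):
    # S220: derive one user characteristic per first preset dimension.
    return {dim: f"{len(vals)} records" for dim, vals in raw_data.items()}

def depict_portrait(user_id, features):
    # S230: the portrait is assembled from the per-dimension characteristics.
    return {"user": user_id, "portrait": features}

portrait = depict_portrait("u42", determine_features(acquire_raw_data("u42")))
print(portrait)  # {'user': 'u42', 'portrait': {'visits': '1 records', 'purchases': '1 records'}}
```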
It should be noted that the target user in the present invention may be any user. Further, for any user, the raw data may be all data relevant to that user, including both online and offline data. In addition, the raw data may be obtained in various manners, such as crawling it from a network or reading it through various interfaces from a corresponding database or storage (such as the file storage 301 and the data storage 302 shown in FIG. 3A), which is not limited herein.
For a target user, after the raw data of the target user is acquired, it can be analyzed and the analysis results classified, with different types of raw data corresponding to different preset dimensions; the corresponding type of raw data is then analyzed in each preset dimension to determine the user characteristics it expresses.
After the user characteristics expressed by the various types of data are determined, the user portrait of the user can be depicted in each preset dimension.
For example, as shown in FIG. 3A, a user portrait system 305 for depicting user portraits may retrieve a user's raw data from a file store 301 and a data store 302 and perform the corresponding user portrait processing based on the retrieved raw data; after the user portrait is depicted, it may be used in a marketing system 303 and other online services 304.
According to the embodiment of the invention, since the portrait of any user is depicted in a plurality of dimensions capable of reflecting different characteristics of the user, the aim of depicting the user portrait from multiple angles and at multiple levels can be achieved. This at least partially overcomes the defect in the related art that the application field of the user portrait is limited because the portrait is depicted only at the communication level or the family-attribute-association level, and thereby remarkably expands the application field of the user portrait.
The user portrait depicting method shown in FIG. 2 will be described in detail with reference to FIGS. 3B to 3K.
Generally, the user features expressed by the user in each first preset dimension are relatively abstract, lengthy and complex. To overcome these defects, the user features expressed in each preset dimension may be simplified, so that the abstract, lengthy and complex user features become concrete and simple. For example, they may be labeled.
The user features in each preset dimension can be labeled (some user features in certain preset dimensions can also be used directly without labeling). In the labeling process, similar user features in some preset dimensions can be summarized using techniques such as clustering, or through manual labeling, for convenience of use. Further, the manner of labeling the user features may include at least Mode 1 and Mode 2 described below.
Mode 1: as shown in fig. 3B, the user features represented by each type of raw data may be clustered, and a description may be formed for each cluster using natural language processing (NLP).
Specifically, as an optional embodiment, the user portrait depicting method may further include, in the process of depicting the user portrait of the target user: labeling the user features expressed by the target user in at least one dimension of the plurality of first preset dimensions to obtain corresponding user feature tags; and depicting the user portrait of the target user through the user feature tags and other user features, wherein the other user features include: the user features expressed by the target user in dimensions of the plurality of first preset dimensions other than the at least one dimension.
It should be noted that, in the process of depicting the user portrait, tagging the user feature is a preferable scheme, and for a plurality of first preset dimensions, all of the first preset dimensions may be selected to be tagged, or a part of the first preset dimensions may be selected to be tagged, and how to select the first preset dimensions may be determined according to actual needs, which is not limited herein.
Further, as an optional embodiment, tagging a user feature expressed by the target user in at least one of the plurality of first preset dimensions to obtain a corresponding user feature tag includes: for each dimension in the at least one dimension, determining a category of user features expressed by the target user in the dimension; performing semantic analysis on the user characteristics expressed by the target user on the dimension to obtain semantics corresponding to the user characteristics of the category; and associating the category with semantics corresponding to the user features of the category to obtain the user feature tag on the dimension.
Specifically, for the user features in any preset dimension, categories of the user features can be obtained through technical means such as clustering, the semantics of the user features can be obtained through technologies such as natural language processing, and each category is then associated with its human-understandable meaning, linking the user features in that preset dimension to their actual significance to form a tag.
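As an illustrative sketch of Mode 1 (the spend values, the tiny one-dimensional clustering routine and the semantic descriptions below are all assumptions for illustration, standing in for a real clustering library and NLP component), the category-plus-semantics tagging described above might look like:

```python
# Minimal sketch of Mode 1 labeling: cluster user-feature values, then
# attach a human-readable semantic description to each cluster (category)
# to form a user feature tag. All data and labels are illustrative.

def kmeans_1d(values, k=2, iters=10):
    """Tiny 1-D k-means; stands in for a real clustering library."""
    centroids = [min(values), max(values)][:k]
    assignment = [0] * len(values)
    for _ in range(iters):
        # assign each value to its nearest centroid
        assignment = [min(range(k), key=lambda c: abs(v - centroids[c]))
                      for v in values]
        # recompute each centroid as the mean of its members
        for c in range(k):
            members = [v for v, a in zip(values, assignment) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return assignment, centroids

# Hypothetical per-user monthly spend amounts (one first preset dimension).
spend = [12.0, 15.0, 14.0, 980.0, 1020.0]
labels, centroids = kmeans_1d(spend, k=2)

# Associate each cluster (category) with a semantic description -> tag.
semantics = {0: "low-spend user", 1: "high-spend user"}
low_c = min(range(2), key=lambda c: centroids[c])
tags = [semantics[0 if a == low_c else 1] for a in labels]
print(tags)
```

In a production system the clustering and the semantic descriptions would of course come from real clustering and NLP components rather than this toy routine.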
Mode 2: as shown in fig. 3C, the user features represented by each type of raw data may be statistically sorted, and a description may be formed for each type with the addition of a manual review step.
Specifically, as an optional embodiment, tagging a user feature expressed by the target user in at least one of the plurality of first preset dimensions, and obtaining a corresponding user feature tag may include: for each dimension in the at least one dimension, counting the user characteristics expressed by the target user on the dimension to obtain a corresponding statistical result; acquiring additional information input from outside; and labeling the user characteristics expressed by the target user on the dimension based on the statistical result and the additional information to obtain the user characteristic label on the dimension.
After the corresponding user features are tagged, the tagged and untagged (i.e., structured) data can be delivered to a consumer (e.g., an online service or a marketing system) and stored in a corresponding format for use. When used for online services, the data mainly serves automated processing flows such as machine learning, online pushing, recommendation, search and pricing; when used for the marketing system, it mainly serves a CRM system, a push system and the like.
As an alternative embodiment, as shown in fig. 3D, the operation S220 of determining, based on the raw data, the user characteristics that the target user exhibits in the first preset dimensions may include:
operation S221, extracting various target data related to portraying the user portrait from the original data; and
in operation S222, based on the multiple target data, user characteristics expressed by the target user in the multiple first preset dimensions are determined, where one target data corresponds to one first preset dimension.
Specifically, the above process of the embodiment of the present invention can be subdivided into the following three steps. Parsing: extracting relevant data from the raw data and translating it into a structured data language, for example by deserializing the data stream into recognizable key-value pairs for subsequent processing. Processing: further analyzing and processing the user data obtained by the parsing, such as statistics, mining and prediction. Structuring: assembling the processed data into a data form with a certain format according to the content of the specific data, so that it can be conveniently inserted into a database or written to a corresponding file, the purpose being to facilitate subsequent use (online or offline viewing).
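The three steps above can be outlined in code. The line format, field names and the JSON output below are assumptions chosen for illustration, not part of the claimed method:

```python
# Illustrative sketch of the parse / process / structure steps.
import json

raw_stream = "user_id=u1;event=purchase;amount=30\n" \
             "user_id=u1;event=purchase;amount=70"

# Parse: deserialize each record into recognizable key-value pairs.
records = [dict(kv.split("=") for kv in line.split(";"))
           for line in raw_stream.splitlines()]

# Process: simple statistics over the parsed records.
total = sum(float(r["amount"]) for r in records if r["event"] == "purchase")

# Structure: assemble a fixed-format form, ready to be inserted into a
# database or written to a file for online or offline use.
profile_fragment = json.dumps({"user_id": "u1", "total_spend": total})
print(profile_fragment)
```

Each step feeds the next: the structured fragment is what a downstream marketing or recommendation system would consume.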
As an alternative embodiment, as shown in fig. 3E, the operation S222, based on the plurality of target data, determining the user characteristics that the target user exhibits in the plurality of first preset dimensions may include:
operation S2221, translate each target data in the multiple target data according to a preset rule, to obtain corresponding multiple structured data;
operation S2222, performing data analysis on each of the multiple kinds of structured data to obtain multiple kinds of data objects, where one kind of data object corresponds to a first preset dimension; and
in operation S2223, based on the plurality of data objects, user characteristics expressed by the target user in the plurality of first preset dimensions are determined.
In the user portrait depicting method provided by the invention, the original data associated with a user is constructed into a corresponding user portrait through the above process. The user portrait is structured data representing the user characteristics expressed by the user in the various preset dimensions, forming a model-like file. It can be loaded into a marketing system, an advertising system, a push system, a recommendation system or a search system, or used for policy purposes. A user portrait depicted with the method provided by the invention is readable by both machines and humans, and can also serve as a user model.
In addition, in embodiments of the present invention, the data source of the user representation generally includes a data repository, which may be generally divided into data files such as files, hdfs, documents, and other electronic files, or structured databases such as mysql, ddb, hive, redis, tair, ncr, and other databases. In the embodiment, the original data is taken out from the data warehouse, and then the related data is extracted from the original data to be used as the target data, so that the user portrait can be further constructed and used. The original data can be roughly divided into four types, namely online basic data, in-product behavior data, user content preference data and user transaction data.
Specifically, the files may be imported into a computing unit, and user features of each preset dimension may be computed or extracted according to a specific rule, which will be described below.
By the embodiment of the invention, the user portrait can be depicted systematically and robustly, thereby providing abundant user data, facilitating the adoption of online automated or semi-automated operation systems, supporting personalized recommendation and crowd segmentation, and achieving the goal of precise marketing. In addition, the user portrait depicted by the scheme provided by the invention is relatively comprehensive in information, so that subjective assumptions can be minimized in product operation; operators can get close to users and understand their real needs, thereby knowing how to better serve different types of users.
Further, as an alternative embodiment, the plurality of target data includes at least two of the following data: basic data, raw recorded data, fact data, statistical data and prediction data. As shown in FIG. 3F, for completeness, the target data may include all of the basic data, raw recorded data, fact data, statistical data and prediction data at the same time.
In particular, the content contained in the target data, and the order in which that content is obtained, may be adjusted as appropriate in a specific implementation according to the product positioning and the richness of the product data. Wherein: the basic data, also called basic information, is the user information acquired and processed for the user, including the user's own information and the information of the user's virtual character roles in the platform; the raw record data is the most primitive data record of the user in the product and related products, including but not limited to behavior records such as purchase history; the fact data, also called the fact model, is user information acquired and processed based on fact dimensions (states), where meaningful descriptions yield the user characteristics under each description; the statistical data, also called the statistical model, is the user portrait acquired and processed based on statistical information, mining potential user characteristics such as taste, preference and frequent location by statistical means; and the prediction data, also called the prediction model, is obtained by acquiring and processing some relevant data and predicting certain characteristics of the user through a model, usually a machine learning model, such as risk prediction and taste prediction, and also covers user information missing from the basic data, raw record data, fact data and statistical data, such as occupation, age and gender prediction.
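One way to picture the five kinds of target data travelling together is a simple container, with one field per kind; the class and field names below are illustrative assumptions, not names used by the invention:

```python
# Sketch: carrying the five kinds of target data for one user.
from dataclasses import dataclass, field

@dataclass
class TargetData:
    basic: dict = field(default_factory=dict)        # basic information
    raw_records: list = field(default_factory=list)  # primitive behavior records
    facts: dict = field(default_factory=dict)        # fact model data
    statistics: dict = field(default_factory=dict)   # statistical model data
    predictions: dict = field(default_factory=dict)  # prediction model data

# A user may have only some kinds of target data populated.
td = TargetData(basic={"age": 30}, raw_records=[{"event": "purchase"}])
print(td.basic["age"])
```

Because each field maps to one first preset dimension group, a depicting pipeline can process whichever fields are present and skip the rest.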
Further, as an optional embodiment, in the case that the target data includes the basic data, the method may further include: analyzing the basic data to obtain one or more of the following analyzed data: attribute data, role data, associated data, terminal data, and registration data (as shown in fig. 3G, basic data may include attribute data, role data, associated data, terminal data, and registration data at the same time), where the first preset dimension corresponding to the basic data includes: attribute dimension, role dimension, association dimension, terminal dimension and registration dimension; in the case that the analytical data of the basic data comprises the attribute data, depicting the attribute portrait of the target user on the attribute dimension based on the attribute data; in the case that the parsed data of the base data includes the role data, characterizing a role representation of the target user in the role dimension based on the role data; in the case that the parsed data of the base data includes the associated data, depicting an associated representation of the target user in the associated dimension based on the associated data; under the condition that the analysis data of the basic data comprises the terminal data, depicting the terminal portrait of the target user on the terminal dimension based on the terminal data; and in the case that the parsed data of the base data includes the registration data, characterizing a registration portrait of the target user in the registration dimension based on the registration data.
In the present invention, different types of data in the basic data are used to generate different parts of the user portrait, and the generation order is not limited when different types of basic data are used to generate different parts of the user portrait.
Specifically, attribute data herein includes, but is not limited to, one or more of the following feature data: identifiers (such as account number, user id, device id, cookie id and the like), user age, user gender, user native place, commonly used mobile phone number, primary account number, year of birth, constellation, user education background, user character, user ethnicity, user religion, the user's length of membership in the product, and the like; the role data includes, but is not limited to, one or more of the following feature data: in-station roles, out-of-station roles, professional roles, family roles, community roles, and the like; the associated data includes, but is not limited to, one or more of the following feature data: whether the user has a baby (if so, recording the child's age, weight and gender), for whom the user purchases, the vehicle the user owns, the user's real estate, the user's company (e.g., company location, company industry), and the user's income level; the terminal data includes, but is not limited to, one or more of the following feature data: parameters of the terminals used, including cpu, memory, screen, system, brand, time on market and the like; the registration data includes, but is not limited to, one or more of the following feature data: information provided at registration, including registration channel, registration mode, contact details, time, reading interests, life interests and the like, and the registration portrait also includes information the user actively fills in or modifies in the product after registration is completed.
Further, as an optional embodiment, in a case that the target data includes the original recording data, the method further includes: analyzing the original record data to obtain one or more of the following analyzed data: the method includes accessing data, operating data, first consumption data, second consumption data, and feedback data (as shown in fig. 3H, the original record data may include simultaneous access data, operating data, first consumption data, second consumption data, and feedback data), wherein a first preset dimension corresponding to the original record data includes: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; in the case that the parsed data of the original recorded data includes the access data, depicting an access representation of the target user in the access dimension based on the access data; in the case that the analysis data of the original record data comprises the operation data, depicting an operation portrait of the target user on the operation dimension based on the operation data; in the event that parsed data of the raw record data includes the first consumption data, portraying a first consumption representation of the target user in the first consumption dimension based on the first consumption data; in the event that parsed data of the original recorded data includes the second consumption data, portraying a second consumption representation of the target user in the second consumption dimension based on the second consumption data; and in the case that the parsed data of the raw recorded data includes the feedback data, characterizing a feedback representation of the target user in the feedback dimension based on the feedback data.
In addition, different types of data in the raw record data are used for generating different parts of the user representation, and when different types of raw record data are used for generating different parts of the user representation, the generation sequence is not limited by the invention.
Specifically, acquiring and processing the raw record data herein includes, but is not limited to, acquiring data from five aspects, namely access, operation, consumption, feedback and recording, to complete this portion of the user portrait. The objects accessed, operated on, consumed, fed back on and recorded here are elements including one or more of "goods, articles, categories, brands, albums, activities," and so forth, as described in detail below.
Wherein the access data includes, but is not limited to, one or more of the following feature data: element access traces, user impression traces and stay traces in the product; the operation data includes, but is not limited to, one or more of the following feature data: browsing, liking, purchasing, collecting, adding to cart, sharing, group-buying and searching behavior traces for elements in the product; the first consumption data includes, but is not limited to, one or more of the following feature data: actual consumption information, including the reason, channel and source of the consumption, financial details and coupon usage; the second consumption data, used to generate the feedback representation, includes one or more of the following feature data: customer service feedback content, comment content, rating content, negative feedback content, not-interested content and complaint content; and the feedback data, used to generate the recorded representation, includes, but is not limited to, one or more of the following feature data: current status values such as the current total amount consumed, current goods in transit and current pending payments.
Further, as an optional embodiment, in a case that the target data includes the fact data, the method further includes: analyzing the fact data to obtain one or more of the following analyzed data: the earliest M consumption data, the last N consumption data, the top L consumption data, and the consumption data within a preset time period (as shown in fig. 3I, the fact data may include the earliest M consumption data, the last N consumption data, the top L consumption data and the consumption data within a preset time period at the same time), wherein the first preset dimensions corresponding to the fact data include: an earliest-M-consumption dimension, a last-N-consumption dimension, a top-L-consumption dimension and a consumption dimension within the preset time period; in a case that the parsed data of the fact data includes the earliest M consumption data, depicting the earliest-M consumption portrait of the target user in the earliest-M-consumption dimension based on the earliest M consumption data; in a case that the parsed data of the fact data includes the last N consumption data, depicting the last-N consumption portrait of the target user in the last-N-consumption dimension based on the last N consumption data; in a case that the parsed data of the fact data includes the top L consumption data, depicting the top-L consumption portrait of the target user in the top-L-consumption dimension based on the top L consumption data; and in a case that the parsed data of the fact data includes the consumption data within the preset time period, depicting the consumption portrait of the target user in the consumption dimension within the preset time period based on the consumption data within the preset time period.
In addition, different types of data in the fact data are used to generate different parts of the user portrait, and the generation order is not limited by the present invention when different types of fact data are used to generate different parts of the user portrait.
Specifically, in the process of acquiring and processing the fact data, user feature extraction needs to be performed with any one or more of at least four models, namely First/Last/Top/Time, each model including one or more items. The steps for establishing the four models (in no particular order) are described in detail as follows.
Generating a FirstN model: recording and processing, for the user's first N purchases after registering with the product, the time, amount, category, discount strength and number of items of the purchased goods, the order distribution, amount distribution and item counts across categories, and the payment account, identity information and consignee information used in the first N purchases; the content and tendency of the first N reviews; and the items collected, liked and followed in the first N instances.
Generating a LastN model: recording and processing, for the user's last N purchases from registration with the product up to the current time, the time, amount, category, discount strength and number of items of the purchased goods, the order distribution, amount distribution and item counts across categories, and the payment account, identity information and consignee information used in the last N purchases; the content and tendency of the last N reviews; and the items collected, liked and followed in the last N instances.
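A LastN model reduces, at its core, to sorting the user's order records by time and keeping the most recent N (FirstN is the symmetric case, keeping the earliest N). The order-record fields below are illustrative assumptions:

```python
# Sketch of a LastN model: the user's most recent N purchase records.
orders = [
    {"ts": 1, "amount": 10.0, "category": "books"},
    {"ts": 5, "amount": 80.0, "category": "phones"},
    {"ts": 3, "amount": 25.0, "category": "books"},
    {"ts": 9, "amount": 60.0, "category": "food"},
]

def last_n(orders, n):
    """Most recent n orders, newest first; sort ascending for FirstN."""
    return sorted(orders, key=lambda o: o["ts"], reverse=True)[:n]

recent = last_n(orders, 2)
print([o["ts"] for o in recent])
```

The same slice is then the input for the per-model statistics (amount, category distribution, and so on) enumerated above.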
Generating a TopN model: recording and processing, from the user's registration with the product up to the current time, the TopN categories by order count, the TopN brands by purchase amount, the TopN countries by purchase order count and by purchase amount, the TopN most frequently used payment accounts, and the TopN consignee information (including address tag, name tag and administrative region), where N is a frequency parameter, generally chosen from 10, 20, 50 and 100.
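A frequency-based TopN item, such as the TopN categories by order count, can be sketched with a counter over the user's order history (the category values are illustrative):

```python
# Sketch of one TopN item: the user's most frequent purchase categories.
from collections import Counter

categories = ["books", "phones", "books", "food", "books", "phones"]
top2 = Counter(categories).most_common(2)  # N = 2 here for brevity
print(top2)
```

Amount-based TopN items (e.g., TopN brands by purchase amount) would sum amounts per key instead of counting occurrences, but the shape of the computation is the same.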
Generating a TimeN model: TimeN is statistical data recorded within a time window. The time window typically includes descriptions such as 10 minutes, 60 minutes, 4 hours, 12 hours, 24 hours, the current day, the last week, the last month, the current quarter, 180 days and the current year. During a specific time period N, one or more of the following user behavior items are recorded and processed: the amount paid for purchases, the number of purchases, the order amount, the number of consumptions, the breadth of consumption, the number of clicks, the number of collections, the number of cart additions, the user's preference for categories, the user's preference for brands, etc.
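The TimeN model amounts to filtering events into the chosen window and aggregating; timestamps, the window length and the two statistics below are illustrative assumptions:

```python
# Sketch of a TimeN model: statistics over one time window.
events = [
    {"ts": 100, "amount": 10.0},
    {"ts": 500, "amount": 20.0},
    {"ts": 900, "amount": 5.0},
]

def window_stats(events, now, window):
    """Count and total the events falling inside [now - window, now]."""
    inside = [e for e in events if now - window <= e["ts"] <= now]
    return {"count": len(inside),
            "total": sum(e["amount"] for e in inside)}

stats = window_stats(events, now=1000, window=600)
print(stats)
```

Running the same aggregation with several window sizes (10 minutes, 24 hours, 180 days, ...) yields the family of TimeN statistics described above.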
Further, as an optional embodiment, in the case that the target data includes the statistical data, the method further includes: analyzing the statistical data to obtain one or more of the following analyzed data: preference data, geographic data, and rule data (as shown in fig. 3J, the statistical data may include the preference data, the geographic data, and the rule data at the same time), wherein the first preset dimension corresponding to the statistical data includes: a preference dimension, a geographic dimension, and a rule dimension; in the case that the parsed data of the statistical data includes the preference data, characterizing a preference portrait of the target user in the preference dimension based on the preference data; in the case that the parsed data of the statistical data includes the geographic data, depicting a geographic representation of the target user in the geographic dimension based on the geographic data; and in the case that the parsed data of the statistical data includes the rule data, depicting the rule representation of the target user in the rule dimension based on the rule data.
In addition, different types of data in the statistical data are used to generate different parts of the user portrait, and the generation order is not limited by the invention when different types of statistical data are used to generate different parts of the user portrait.
Specifically, the process of obtaining and processing the statistical model includes processing one or more of a preference model, a geographic model and a rule model. Further, each model may include a time parameter, so that model data for different time windows, including a real-time model and a long-term model, can be obtained. The models are specifically:
processing the user preference model: the method comprises the steps of processing user category preference, brand preference, price preference, color preference, style preference, rights and interests preference, service preference and time preference, and obtaining the weights through calculation of models according to different click and purchase distribution.
Processing the user geographic model: including calculating the user's geographic characteristics, such as GPS coordinates, IP network address, shopping location habits, city tier, county level and the like.
Processing the user rule model, including processing one or more of the following: a, the user price sensitivity model, for example first calculating the average price of each item purchased in the last 60 days, and then calculating from it the average discount of each category purchased by the user; b, the user purchasing power model, wherein the user's average price grade for a category is calculated from the price grade of items marked in the item model and used as the user's purchasing power model; c, coupon dependence, i.e., the degree to which the user depends on coupons and promotions; d, the user life cycle, such as dormant (e.g., purchased in the last 90 days but not in the last 60 days), active (e.g., clustered into high-, medium- and low-frequency groups according to the number of purchase days in the last 60 days), churned (e.g., no purchase in the last 90 days despite having purchased before; temporarily not counted), registered-but-unpurchased (e.g., registered without ever purchasing; temporarily not counted), or new user (e.g., recently registered with a first-order purchase or no purchase yet); e, the search model, i.e., the user's search list along the time dimension; f, the RFM model, with Recency (e.g., the time of the most recent consumption), Frequency (e.g., the number of consumptions in the last month) and Monetary (e.g., the amount consumed in the last month).
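The RFM item of the rule model can be sketched directly from a purchase history; the day numbers and the 30-day window are illustrative assumptions:

```python
# Sketch of the RFM rule model: Recency (days since last purchase),
# Frequency (purchases in the window), Monetary (amount in the window).
purchases = [{"day": 58, "amount": 40.0},
             {"day": 45, "amount": 15.0},
             {"day": 10, "amount": 99.0}]

def rfm(purchases, today, window=30):
    recent = [p for p in purchases if today - p["day"] <= window]
    last_day = max(p["day"] for p in purchases)
    return {"recency": today - last_day,
            "frequency": len(recent),
            "monetary": sum(p["amount"] for p in recent)}

result = rfm(purchases, today=60)
print(result)
```

Life-cycle labels such as dormant or churned follow the same pattern: compare the recency and frequency values against the rule thresholds given above.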
Further, as an optional embodiment, in the case that the target data includes the prediction data, the method further includes: analyzing the prediction data to obtain one or more of the following analysis data: probability prediction data, risk prediction data, and strategy prediction data (as shown in fig. 3K, prediction data may include probability prediction data, risk prediction data, and strategy prediction data at the same time), where the first preset dimension corresponding to the prediction data includes: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; in the case that the analytic data of the prediction data includes the probabilistic prediction data, characterizing the probabilistic prediction image of the target user in the probabilistic prediction dimension based on the probabilistic prediction data; depicting a risk prediction image of the target user in the risk prediction dimension based on the risk prediction data if the analytic data of the prediction data includes the risk prediction data; and in the case that the parsed data of the prediction data includes the policy prediction data, characterizing the policy prediction image of the target user in the policy prediction dimension based on the policy prediction data.
In the present invention, the different types of data in the prediction data are used to generate different parts of the user portrait, and the generation order is not limited when different types of prediction data are used to generate different parts of the user portrait.
Specifically, the flow of obtaining and processing the prediction model herein includes processing one or more of a probability model, a policy model and a risk model, specifically:
processing the probabilistic model including processing one or more characteristics of the user's activity in the product, loyalty, shopping type, promotional sensitivity, purchasing attribute preferences.
Processing the risk model, including processing one or more characteristics of the user in the product, such as credit risk, churn risk, satisfaction risk, order abandonment risk, fraud risk, and scalper/crawler identification.
Processing the policy model includes processing one or more of the following items for the user.
Shopping stage prediction, including stages such as casual browsing, focused searching, comparison shopping, waiting for a price drop, wait-and-see, and the like.
Missing information policies including, for example, gender prediction, job prediction, consumption capability prediction for non-consumed classes, and the like.
Operation strategy prediction, including indicators for users to be developed, users to be won back, users to be activated, users to be retained, and the like.
Customer value prediction, including predicting the value a user brings to a product, such as monetary value, reputation value, impact value, and the like.
Intrinsic demand forecasting, including forecasting current demand and potential demand of a user, and the like.
And the aversion degree prediction comprises the prediction of the aversion degree of the user to brands, categories, commodities and the like.
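As a toy illustration of the missing-information strategy (predicting an absent attribute such as gender from similar users), a nearest-neighbour lookup can stand in for the machine learning model the invention describes; the user data and similarity measure below are assumptions, and a real system would use a trained classifier:

```python
# Sketch of a missing-information strategy: predict an absent attribute
# (here, gender) from behaviorally similar users. Illustrative data only.
known = [
    ({"beauty": 5, "gadgets": 0}, "F"),
    ({"beauty": 4, "gadgets": 1}, "F"),
    ({"beauty": 0, "gadgets": 6}, "M"),
]

def predict(features, known):
    """1-nearest-neighbour vote over users whose attribute is known."""
    def dist(a, b):
        keys = set(a) | set(b)
        return sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys)
    return min(known, key=lambda kv: dist(features, kv[0]))[1]

guess = predict({"beauty": 4, "gadgets": 0}, known)
print(guess)
```

The predicted value would then fill the corresponding gap in the basic data, raw record data, fact data or statistical data, as described above.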
According to the embodiment of the invention, since the user portrait is depicted from the original data in a plurality of preset dimensions, a relatively comprehensive user portrait can be generated. This urges the team members of related enterprises to set aside personal preference during product design and to focus on the motivations and behaviors of the target user, so that the user portrait makes the service objects of the product more focused and concentrated. And because product design for a specific persona is far better than product design for something imagined, when everyone participating in product design discusses and decides based on a consistent user, it is easy to keep all directions aligned in the same general direction, improving decision-making efficiency.
In addition, the invention takes the various constituent data of the original data, that is, various data samples, as machine learning samples, which facilitates enriching and training the sample data on a given machine learning instance, thereby improving the commercial effect of the online automatic service. This further forms a normalized, standardized user portrait generation scheme that is convenient to implement rapidly.
It should be noted that the whole process, the method for obtaining each user sub-portrait involved in each stage, and some features of the embodiment of the present invention may include a general parameter, such as a time parameter.
Exemplary devices
Having described the method of an exemplary embodiment of the present invention, an apparatus for implementing the user portrait depicting method is described in detail next with reference to FIG. 4.
The embodiment of the invention provides a user portrait depicting device.
FIG. 4 schematically illustrates a block diagram of a user portrait depicting apparatus according to an embodiment of the present invention. As shown in FIG. 4, the user portrait depicting apparatus 400 may include: an obtaining module 410, configured to obtain, for a target user, original data used for depicting the user portrait of the target user; a determining module 420, configured to determine, based on the original data, user characteristics expressed by the target user in a plurality of first preset dimensions; and a first depiction module 430, configured to depict the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions.
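The three-module apparatus of FIG. 4 can be sketched, purely for illustration, as plain Python classes. The module names mirror the description above; the internal logic and data shapes are placeholder assumptions, not the claimed implementation:

```python
# Minimal sketch of the obtaining / determining / depiction modules.
class ObtainingModule:                       # module 410
    def obtain(self, target_user, data_source):
        """Fetch the original data used to depict the target user's portrait."""
        return data_source.get(target_user, {})

class DeterminingModule:                     # module 420
    def determine(self, raw_data, dimensions):
        """Map original data onto user features in the first preset dimensions."""
        return {d: raw_data[d] for d in dimensions if d in raw_data}

class DepictionModule:                       # module 430
    def depict(self, target_user, features):
        """Assemble the user portrait from the per-dimension features."""
        return {"user": target_user, "portrait": features}

source = {"alice": {"attribute": "age:30", "access": "daily"}}
features = DeterminingModule().determine(
    ObtainingModule().obtain("alice", source), ["attribute", "access"])
portrait = DepictionModule().depict("alice", features)
```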
According to the embodiment of the invention, because the portrait of any user is depicted along a plurality of dimensions that reflect different characteristics of the user, the user portrait can be depicted from multiple angles and at multiple levels. This at least partially overcomes the drawback in the related art that the application field of the user portrait is limited because the portrait is depicted only from a communication level or a family-attribute association level, and thus significantly expands the application field of the user portrait.
As an alternative embodiment, the apparatus further includes: a tagging module, configured to tag, in the process of depicting the user portrait of the target user, the user features expressed by the target user in at least one dimension of the plurality of first preset dimensions to obtain corresponding user feature tags; and a second depiction module, configured to depict the user portrait of the target user through the user feature tags and other user features, wherein the other user features include: user features expressed by the target user in dimensions of the plurality of first preset dimensions other than the at least one dimension.
As an alternative embodiment, the tagging module includes: a first determining unit, configured to determine, for each dimension of the at least one dimension, a category of the user features expressed by the target user in the dimension; an analysis unit, configured to perform semantic analysis on the user features expressed by the target user in the dimension to obtain semantics corresponding to the user features of the category; and a first generation unit, configured to associate the category with the semantics corresponding to the user features of the category to obtain the user feature tag in the dimension.
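The first tagging variant above (determine category, derive semantics, associate the two into a tag) can be sketched as follows. The lookup-table "semantic analysis" is a hypothetical stand-in for whatever analyzer an implementation would actually use, and all rule entries are invented for illustration:

```python
# Hypothetical category and semantics rules (first determining unit and
# analysis unit would be far richer in practice).
CATEGORY_RULES = {"sneakers": "footwear", "novel": "books"}
SEMANTIC_RULES = {"footwear": "interested in sports apparel",
                  "books": "interested in reading"}

def make_feature_tag(feature):
    category = CATEGORY_RULES.get(feature, "other")      # first determining unit
    semantics = SEMANTIC_RULES.get(category, "unknown")  # analysis unit
    return f"{category}:{semantics}"                     # first generation unit

tag = make_feature_tag("sneakers")
```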
As an alternative embodiment, the tagging module includes: a statistical unit, configured to, for each dimension of the at least one dimension, count the user features expressed by the target user in the dimension to obtain a corresponding statistical result; an acquisition unit, configured to acquire externally input additional information; and a second generation unit, configured to tag the user features expressed by the target user in the dimension based on the statistical result and the additional information to obtain the user feature tag in the dimension.
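The second tagging variant (count the features, then combine the statistic with externally supplied additional information) might look like the following sketch; the tag format and field names are assumptions for illustration only:

```python
from collections import Counter

def tag_with_statistics(observed_features, additional_info):
    counts = Counter(observed_features)                   # statistical unit
    top_feature, top_count = counts.most_common(1)[0]
    note = additional_info.get(top_feature, "")           # acquisition unit
    return f"{top_feature}(x{top_count}) {note}".strip()  # second generation unit

tag = tag_with_statistics(
    ["coffee", "coffee", "tea"], {"coffee": "verified by survey"})
```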
As an alternative embodiment, the determining module includes: an extraction unit, configured to extract a plurality of types of target data related to depicting the user portrait from the original data; and a second determining unit, configured to determine, based on the plurality of types of target data, the user characteristics expressed by the target user in the plurality of first preset dimensions, where one type of target data corresponds to one first preset dimension.
As an alternative embodiment, the second determining unit includes: a translation subunit, configured to translate each type of target data in the plurality of types of target data according to a preset rule to obtain a corresponding plurality of types of structured data; an analysis subunit, configured to perform data analysis on each of the plurality of types of structured data to obtain a plurality of types of data objects, where one type of data object corresponds to one first preset dimension; and a determining subunit, configured to determine, based on the plurality of types of data objects, the user characteristics expressed by the target user in the plurality of first preset dimensions.
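The translate/analyze/determine pipeline of the second determining unit can be sketched as below. The "key=value;key=value" text format is a hypothetical stand-in for the unspecified preset rule, and the data-object shape is likewise an assumption:

```python
def translate(raw):                          # translation subunit
    """Preset rule (assumed): 'key=value;key=value' text -> dict."""
    return dict(pair.split("=") for pair in raw.split(";"))

def analyze(structured, dimension):          # analysis subunit
    """Wrap structured data as a data object for one preset dimension."""
    return {"dimension": dimension, "data": structured}

def determine_features(target_data):         # determining subunit
    objects = [analyze(translate(raw), dim) for dim, raw in target_data.items()]
    return {obj["dimension"]: obj["data"] for obj in objects}

features = determine_features({"attribute": "age=30;city=Beijing"})
```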
As an alternative embodiment, the plurality of types of target data includes at least two of the following: basic data, original recorded data, fact data, statistical data, and prediction data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the basic data, parse the basic data to obtain one or more of the following parsed data: attribute data, role data, associated data, terminal data, and registration data, wherein the first preset dimensions corresponding to the basic data include: an attribute dimension, a role dimension, an association dimension, a terminal dimension, and a registration dimension; and the first depiction module is further configured to: in the case that the parsed data of the basic data includes the attribute data, depict an attribute portrait of the target user in the attribute dimension based on the attribute data; in the case that the parsed data of the basic data includes the role data, depict a role portrait of the target user in the role dimension based on the role data; in the case that the parsed data of the basic data includes the associated data, depict an associated portrait of the target user in the association dimension based on the associated data; in the case that the parsed data of the basic data includes the terminal data, depict a terminal portrait of the target user in the terminal dimension based on the terminal data; and in the case that the parsed data of the basic data includes the registration data, depict a registration portrait of the target user in the registration dimension based on the registration data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the original recorded data, parse the original recorded data to obtain one or more of the following parsed data: access data, operation data, first consumption data, second consumption data, and feedback data, wherein the first preset dimensions corresponding to the original recorded data include: an access dimension, an operation dimension, a first consumption dimension, a second consumption dimension, and a feedback dimension; and the first depiction module is further configured to: in the case that the parsed data of the original recorded data includes the access data, depict an access portrait of the target user in the access dimension based on the access data; in the case that the parsed data of the original recorded data includes the operation data, depict an operation portrait of the target user in the operation dimension based on the operation data; in the case that the parsed data of the original recorded data includes the first consumption data, depict a first consumption portrait of the target user in the first consumption dimension based on the first consumption data; in the case that the parsed data of the original recorded data includes the second consumption data, depict a second consumption portrait of the target user in the second consumption dimension based on the second consumption data; and in the case that the parsed data of the original recorded data includes the feedback data, depict a feedback portrait of the target user in the feedback dimension based on the feedback data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the fact data, parse the fact data to obtain one or more of the following parsed data: earliest M-time consumption data, last N-time consumption data, first L-time consumption data, and consumption data within a preset time period, wherein the first preset dimensions corresponding to the fact data include: an earliest M-time consumption dimension, a last N-time consumption dimension, a first L-time consumption dimension, and a consumption dimension within the preset time period; and the first depiction module is further configured to: in the case that the parsed data of the fact data includes the earliest M-time consumption data, depict an earliest M-time consumption portrait of the target user in the earliest M-time consumption dimension based on the earliest M-time consumption data; in the case that the parsed data of the fact data includes the last N-time consumption data, depict a last N-time consumption portrait of the target user in the last N-time consumption dimension based on the last N-time consumption data; in the case that the parsed data of the fact data includes the first L-time consumption data, depict a first L-time consumption portrait of the target user in the first L-time consumption dimension based on the first L-time consumption data; and in the case that the parsed data of the fact data includes the consumption data within the preset time period, depict a consumption portrait of the target user within the preset time period in the corresponding consumption dimension based on the consumption data within the preset time period.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the statistical data, parse the statistical data to obtain one or more of the following parsed data: preference data, geographic data, and rule data, wherein the first preset dimensions corresponding to the statistical data include: a preference dimension, a geographic dimension, and a rule dimension; and the first depiction module is further configured to: in the case that the parsed data of the statistical data includes the preference data, depict a preference portrait of the target user in the preference dimension based on the preference data; in the case that the parsed data of the statistical data includes the geographic data, depict a geographic portrait of the target user in the geographic dimension based on the geographic data; and in the case that the parsed data of the statistical data includes the rule data, depict a rule portrait of the target user in the rule dimension based on the rule data.
As an optional embodiment, the second determining unit is further configured to, in a case that the target data includes the prediction data, parse the prediction data to obtain one or more of the following parsed data: probability prediction data, risk prediction data, and strategy prediction data, wherein the first preset dimensions corresponding to the prediction data include: a probability prediction dimension, a risk prediction dimension, and a strategy prediction dimension; and the first depiction module is further configured to: in the case that the parsed data of the prediction data includes the probability prediction data, depict a probability prediction portrait of the target user in the probability prediction dimension based on the probability prediction data; in the case that the parsed data of the prediction data includes the risk prediction data, depict a risk prediction portrait of the target user in the risk prediction dimension based on the risk prediction data; and in the case that the parsed data of the prediction data includes the strategy prediction data, depict a strategy prediction portrait of the target user in the strategy prediction dimension based on the strategy prediction data.
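The parallel "parse, then depict one sub-portrait per present dimension" logic of the preceding optional embodiments can be sketched as a single dispatch table. The kind and dimension names below follow the description loosely but are illustrative; a real system would attach parsers and renderers per dimension:

```python
# Hypothetical mapping from each kind of target data to the parsed-data
# keys (first preset dimensions) it may yield.
DIMENSIONS_BY_KIND = {
    "basic": ["attribute", "role", "associated", "terminal", "registration"],
    "statistical": ["preference", "geographic", "rule"],
    "predicted": ["probability", "risk", "strategy"],
}

def depict(target_data):
    """Produce one sub-portrait for every dimension actually parsed out."""
    portraits = {}
    for kind, parsed in target_data.items():
        for dim in DIMENSIONS_BY_KIND.get(kind, []):
            if dim in parsed:  # only depict dimensions present in the parse
                portraits[f"{dim}_portrait"] = parsed[dim]
    return portraits

result = depict({"basic": {"attribute": "age:30"},
                 "statistical": {"preference": "coffee"}})
```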
It should be noted that the apparatus embodiments are the same as or similar to the method embodiments; the technical problems solved, the specific technical means employed, and the technical effects achieved are likewise the same or similar.
Exemplary Medium
Embodiments of the present invention provide a computer-readable storage medium having executable instructions stored thereon which, when executed by a processing module, implement the user portrait depicting method of any one of the method embodiments.
In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product including program code. When the program product is run on a terminal device, the program code causes the terminal device to perform the steps of the user portrait depicting method according to the various exemplary embodiments of the present invention described in the "Exemplary method" section above. For example, the terminal device may perform operation S210 shown in FIG. 2, obtaining, for a target user, original data used for depicting the user portrait of the target user; operation S220, determining, based on the original data, user characteristics expressed by the target user in a plurality of first preset dimensions; and operation S230, depicting the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 5, a program product 50 for user representation portrayal according to an embodiment of the invention is depicted, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary computing device
Having described the method, apparatus, and medium of exemplary embodiments of the present invention, a computing device for user portrait depiction according to an exemplary embodiment of the present invention is described next.
The embodiment of the invention also provides a computing device. The computing device includes: a processing module; and a storage unit storing computer-executable instructions that, when executed by the processing module, implement the user portrait depicting method of any one of the method embodiments.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as an apparatus, a method, or a program product. Thus, various aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "device."
In some possible embodiments, a computing device according to the present invention may include at least one processing module and at least one storage unit. The storage unit stores program code that, when executed by the processing module, causes the processing module to perform the steps of the user portrait depicting method according to the various exemplary embodiments of the present invention described in the "Exemplary method" section above. For example, the processing module may perform operation S210 shown in FIG. 2, obtaining, for a target user, original data used for depicting the user portrait of the target user; operation S220, determining, based on the original data, user characteristics expressed by the target user in a plurality of first preset dimensions; and operation S230, depicting the user portrait of the target user through the user characteristics expressed by the target user in the plurality of first preset dimensions.
A computing device 60 for user portrayal depiction according to this embodiment of the invention is described below with reference to FIG. 6. Computing device 60 as shown in FIG. 6 is only one example and should not be taken to limit the scope of use and functionality of embodiments of the present invention.
As shown in fig. 6, computing device 60 is embodied in a general purpose computing device. Components of computing device 60 may include, but are not limited to: the at least one processing module 601, the at least one memory unit 602, and a bus 603 connecting the various system components (including the memory unit 602 and the processing module 601).
Bus 603 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The storage unit 602 may include readable media in the form of volatile memory, such as a random access memory (RAM) 6021 and/or a cache memory 6022, and may further include a read-only memory (ROM) 6023.
The storage unit 602 may also include a program/utility 6025 having a set (at least one) of program modules 6024, such program modules 6024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these, or some combination thereof, may include an implementation of a network environment.
Computing device 60 may also communicate with one or more external devices 604 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with computing device 60, and/or with any devices (e.g., router, modem, etc.) that enable computing device 60 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 605. Moreover, computing device 60 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through network adapter 606. As shown, network adapter 606 communicates with the other modules of computing device 60 over bus 603. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 60, including but not limited to: microcode, device drivers, redundant processing modules, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
It should be noted that although in the above detailed description reference is made to several units/modules or sub-units/modules of the user portrait depicting apparatus, such division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor does the division into aspects imply that features in those aspects cannot be combined to advantage; this division is merely for convenience of presentation. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.