CN113010795A - User dynamic portrait generation method, system, storage medium and electronic device - Google Patents

User dynamic portrait generation method, system, storage medium and electronic device

Info

Publication number
CN113010795A
CN113010795A (application CN202110388225.9A)
Authority
CN
China
Prior art keywords
user
portrait
subclass
user portrait
constructing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110388225.9A
Other languages
Chinese (zh)
Inventor
张茂洪 (Zhang Maohong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mininglamp Software System Co ltd
Original Assignee
Beijing Mininglamp Software System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mininglamp Software System Co ltd filed Critical Beijing Mininglamp Software System Co ltd
Priority to CN202110388225.9A priority Critical patent/CN113010795A/en
Publication of CN113010795A publication Critical patent/CN113010795A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9536Search customisation based on social or collaborative filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking

Abstract

The application discloses a user dynamic portrait generation method and system, a storage medium and an electronic device. The method comprises a candidate set generation step: constructing a user portrait system with multiple user portrait dimensions according to historical information, where the subclasses of the user portrait system comprise: a demographic attribute subclass, an account information subclass, a social feature subclass, an environmental feature subclass, an interactive interest feature subclass and a cross feature subclass; and a user portrait construction step: extracting the user's attributes and behaviors from user information according to the user portrait content of each subclass of the user portrait system, and using them for prediction so as to construct the user portrait of the user. The invention can be adapted to the user portrait requirements of any news scenario, is automatic and efficient, and greatly reduces development and time costs.

Description

User dynamic portrait generation method, system, storage medium and electronic device
Technical Field
The invention belongs to the field of user dynamic portrait generation, and in particular relates to a user dynamic portrait generation method, system, storage medium and electronic device.
Background
The concept of the user portrait (user persona) was first proposed by Alan Cooper, the father of interaction design: a target user model built on a series of attribute data. It is the typical user that product designers and operators abstract from a user group, and is essentially a tool for describing user requirements. The user portrait has since taken on a new connotation: a tagged user model abstracted from information such as the user's demographic characteristics, web browsing content, online social activity and consumption behavior. Its core work is to analyze and mine the massive logs stored on servers and the massive data in databases, and to attach tags to the user, where a tag is an identifier that can represent a characteristic of the user in some dimension.
In a news scenario, most news platforms (apps or websites) distribute and recommend news content to users according to user portraits. However, the common user portrait system mainly counts the distribution of age, gender, region, device model and the like based on user attributes, and counts aggregate quantities such as exposures, clicks, favorites, comments and likes based on the user's interaction data. A user portrait that leans so heavily on aggregate statistics reflects the average level and average preference of the whole user base, and can hardly reflect the personalized needs and preferences of an individual user.
For example, the prior art patent published as CN109978630A collects static and dynamic user information based on network probes. Its feature processing actually splits the user data into dimensions that can be aggregated and counted; for example, age is counted by segment, dividing the user group into under 18, 19-24, 25-30, 31-35, 36-40, 41-45 and over 45. The result of such simple segmented aggregation statistics can only show the interval a user falls into and the overall distribution, so the personalization effect for a single user is poor.
In summary, the feature processing in existing schemes actually splits user data into dimensions suitable for aggregation statistics, and such segmented aggregation statistics only reveal the interval a user belongs to and the overall distribution, so the personalization effect for a single user is poor. It is therefore desirable to develop a user dynamic portrait generation method, system, storage medium and electronic device that overcome the above drawbacks.
Disclosure of Invention
The embodiments of the present application provide a user dynamic portrait generation method, system, storage medium and electronic device, aiming at least to solve the problem that existing segmented aggregation statistics only show the interval a user belongs to and the overall distribution, so that the personalization effect for a single user is poor.
The invention provides a user dynamic portrait generation method, comprising:
a candidate set generation step: constructing a user portrait system with multiple user portrait dimensions according to historical information;
a user portrait construction step: extracting the user's attributes and behaviors from user information according to the user portrait content of each subclass of the user portrait system, and using them for prediction so as to construct the user portrait of the user.
In the above user dynamic portrait generation method, the user portrait construction step comprises:
constructing a user portrait based on the environmental feature subclass: extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the environmental feature subclass of the user portrait system, so as to construct the user portrait of the user.
In the above user dynamic portrait generation method, the user portrait construction step comprises:
constructing a user portrait based on the interactive interest feature subclass: extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, calculating the user's demand index from these attributes and time-series features, and constructing the user portrait of the user from the demand index together with the attributes and time-series features; or:
constructing a user portrait based on the interactive interest feature subclass: extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, extracting keywords from these attributes and time-series features, clustering the keywords to obtain medium-granularity labels, and constructing the user portrait of the user from the labels together with the attributes and time-series features.
In the above user dynamic portrait generation method, the user portrait construction step comprises:
constructing a user portrait based on the cross feature subclass: cross-combining the user portrait contents of the demographic attribute subclass, the account information subclass, the social feature subclass, the environmental feature subclass and the interactive interest feature subclass of the user portrait system to obtain the user portrait content of the cross feature subclass, and extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass, so as to construct the user portrait of the user.
The invention also provides a user dynamic portrait generation system, comprising:
a candidate set generation module for constructing a user portrait system with multiple user portrait dimensions according to historical information;
a user portrait construction module, which extracts the user's attributes and behaviors from user information according to the user portrait content of each subclass of the user portrait system, and uses them for prediction so as to construct the user portrait of the user.
In the above user dynamic portrait generation system, the user portrait construction module comprises:
a unit for constructing a user portrait based on the environmental feature subclass, which extracts the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the environmental feature subclass of the user portrait system, so as to construct the user portrait of the user.
In the above user dynamic portrait generation system, the user portrait construction module comprises:
a unit for constructing a user portrait based on the interactive interest feature subclass, which extracts the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, calculates the user's demand index from these attributes and time-series features, and constructs the user portrait of the user from the demand index together with the attributes and time-series features; or:
a unit for constructing a user portrait based on the interactive interest feature subclass, which extracts the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, extracts keywords from these attributes and time-series features, clusters the keywords to obtain medium-granularity labels, and constructs the user portrait of the user from the labels together with the attributes and time-series features.
In the above user dynamic portrait generation system, the user portrait construction module comprises:
a unit for constructing a user portrait based on the cross feature subclass, which cross-combines the user portrait contents of the demographic attribute subclass, the account information subclass, the social feature subclass, the environmental feature subclass and the interactive interest feature subclass of the user portrait system to obtain the user portrait content of the cross feature subclass, and extracts the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass, so as to construct the user portrait of the user.
The present invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements any of the above user dynamic portrait generation methods when executing the computer program.
The present invention further provides a storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements any of the above user dynamic portrait generation methods.
The invention belongs to the technical field of recommendation. The beneficial effects of the invention are:
1. The invention can be adapted to the user portrait requirements of any news scenario, is automatic and efficient, and greatly reduces development and time costs.
2. User portrait modeling must pay attention to granularity: labels that are too fine-grained have no generalization ability or use value, while labels that are too coarse-grained have no discriminative power. To give the interest portrait both accuracy and good generalization, a hierarchical interest label system is constructed, and labels of several granularities are combined for matching in use, ensuring both the accuracy and the generalization of the labels.
3. The invention uses the comparison between a user's behavior and the average behavior of the whole platform as the label of the user's interest, which avoids interference from hot news or platform-pushed news on the user portrait and is more accurate and personalized.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application.
In the drawings:
FIG. 1 is a flowchart of the user dynamic portrait generation method of the present invention;
FIG. 2 is a flow chart illustrating the substeps of step S2 in FIG. 1;
FIG. 3 is a schematic diagram of the user dynamic portrait generation system of the present invention;
FIG. 4 is a frame diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. The words "a", "an", "the" and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. The terms "including", "comprising", "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to such process, method, article or apparatus. The words "connected", "coupled" and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects. The terms "first", "second", "third" and the like merely distinguish similar objects and do not denote a particular ordering.
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Before describing in detail the various embodiments of the present invention, the core inventive concepts of the present invention are summarized and described in detail by the following several embodiments.
The first embodiment is as follows:
referring to FIG. 1, FIG. 1 is a flow chart of a method for generating a user dynamic representation. As shown in FIG. 1, the method for generating a user dynamic representation of the present invention comprises:
Candidate set generation step S1: constructing a user portrait system with multiple user portrait dimensions according to historical information, where the subclasses of the user portrait system comprise: a demographic attribute subclass, an account information subclass, a social feature subclass, an environmental feature subclass, an interactive interest feature subclass and a cross feature subclass.
Specifically, since the dimensions of a user portrait can be subdivided without limit, the user portraits required by different application scenarios and services differ; for example, if the index of concern is user churn rate, the user portrait focuses more on indexes of active and churned users. The present invention focuses its scheme design on the user portrait dimensions required by news recommendation scenarios. The core of news recommendation is to recommend content matching the user's current interests at an appropriate time and on an appropriate occasion. To meet this requirement, the user portrait must be designed to satisfy both personalization and dynamic change. The user portrait system of the present invention therefore comprises 6 subclasses, detailed in Table 1 below. The demographic attribute information, account information and social features are factual data, mainly supported by various classification and aggregation statistics. According to the user's region attribute, the invention distinguishes the user's hometown (native place), the user's frequent place and the user's current location, to better capture the dynamic change of the user portrait. For a specific user, the hometown is unchanged; the frequent place is the place that appears most often within the last 3 months; and the current location is the geographic position at the moment the user triggers a request.
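For illustration, the hometown / frequent place / current location distinction described above could be derived from a request log as in the following sketch (the function name, the data shapes, and the reading of "3 months" as a 90-day window are assumptions, not specified by the patent):

```python
from collections import Counter
from datetime import datetime, timedelta

def region_attributes(hometown, location_log, now):
    """Derive the three region attributes distinguished above.

    location_log: list of (timestamp, city) pairs, one per past request.
    All names and shapes here are illustrative assumptions.
    """
    window_start = now - timedelta(days=90)  # "frequent place" looks back ~3 months
    recent = [city for ts, city in location_log if ts >= window_start]
    # frequent place: most common city in the window; fall back to hometown
    frequent = Counter(recent).most_common(1)[0][0] if recent else hometown
    # current location: the city attached to the most recent request
    current = max(location_log, key=lambda p: p[0])[1] if location_log else hometown
    return {"hometown": hometown,
            "frequent_place": frequent,
            "current_location": current}
```

Only the hometown is static; the other two attributes change as new requests arrive, which is what makes this part of the portrait dynamic.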
Table 1 user representation system:
(Table 1 is reproduced as an image in the original publication.)
It should be noted that the user portrait contents in Table 1 are only for illustration and do not limit the present invention.
User portrait construction step S2: extracting the user's attributes and behaviors from user information according to the user portrait content of each subclass of the user portrait system, and using them for prediction so as to construct the user portrait of the user.
In this way, in a news scenario, the needs and preferences in which the user rises above the average level are calculated for each individual user, the user's behavior trajectory through news content is calculated from the time-series features of the user's behavior, and the user's behavior preferences are predicted well.
Referring to FIG. 2, FIG. 2 is a flowchart illustrating the substeps of step S2 in FIG. 1. As shown in FIG. 2, the user portrait construction step S2 comprises:
a step S21 of constructing a user portrait based on the environmental feature subclass: extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the environmental feature subclass, so as to construct the user portrait of the user.
Specifically, the environmental features are the sum of the various scenes at the moment the user initiates a request. For example: whether the time node triggering the request falls in the early morning, morning, noon, afternoon or evening; whether the time node triggering the request is a workday or a holiday; the time difference between the current request and the previous one; whether the user's requests show some periodicity; and so on. All user portraits are updated in real time according to the user's real-time behavior. For example, suppose a user logs in to the app every morning and browses news for about 30 minutes, browses briefly for 20 minutes at noon, and browses for about 30 minutes in the late afternoon; the morning and afternoon sessions follow the rhythm of workdays, while the noon browsing is not fixed. Combining information such as the user's current location and occupation, it can be comprehensively inferred that the user is a white-collar worker with a 30-minute commute each day who reads news on the way to and from work and browses briefly during the one-hour noon break. A user attribute thus becomes more accurate and definite, and the content recommended to the user can better fit the scene the user is in.
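The environmental features enumerated above could be extracted per request as in this sketch (the bucket boundaries and names are illustrative assumptions; a real deployment would also consult a public-holiday calendar rather than only the weekday):

```python
from datetime import datetime

def request_context(ts, last_ts=None):
    """Bucket a request timestamp into the environmental features listed
    above. Bucket boundaries are assumptions, not fixed by the patent."""
    if ts.hour < 6:
        period = "early_morning"
    elif ts.hour < 11:
        period = "morning"
    elif ts.hour < 14:
        period = "noon"
    elif ts.hour < 18:
        period = "afternoon"
    else:
        period = "evening"
    context = {
        "period": period,
        "workday": ts.weekday() < 5,   # Mon-Fri; public holidays ignored here
        "hours_since_last": None,      # time gap to the previous request
    }
    if last_ts is not None:
        context["hours_since_last"] = (ts - last_ts).total_seconds() / 3600
    return context
```

Periodicity (e.g. the commute-time sessions in the example above) would then be detected over a sequence of such contexts rather than a single one.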
For another example, a user browses news on a PC during working hours every day and on an iPad during rest hours at night; this user's scene features are obvious. Professional and timely news content should be recommended as much as possible during working hours, and entertaining, relaxing content during rest hours.
A step S22 of constructing a user portrait based on the interactive interest feature subclass: extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass, calculating the user's demand index from these attributes and time-series features, and constructing the user portrait of the user from the demand index together with the attributes and time-series features; or:
a step S22' of constructing a user portrait based on the interactive interest feature subclass: extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass, extracting keywords from these attributes and time-series features, clustering the keywords to obtain medium-granularity labels, and constructing the user portrait of the user from the labels together with the attributes and time-series features.
Specifically, take as an example the interactive features of click counts and click preferences (contents) over the last 1/3/7/14 days. The click count statistics for each period do not act directly on the user portrait. In existing schemes the user's click count cannot directly reflect the user's interest, because exposure directly affects the click count: news exposure is decided by the platform and is passively accepted by the user, and high-heat news is exposed more than ordinary information, so measuring user interest simply by click count is inaccurate. The present invention instead counts the user's click rate on different columns and topics over a period of time and compares it with the platform-wide average click rate for the same columns and topics; if the user's click rate on a column or topic is higher than the platform average, that column or topic is one the user is interested in. The same applies to likes.
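The above-average comparison could be sketched as follows (the dictionary shapes and the function name are illustrative assumptions):

```python
def interest_columns(user_clicks, user_exposures, platform_ctr):
    """Return the columns/topics where the user's click rate exceeds the
    platform-wide average click rate, per the comparison described above."""
    interests = []
    for topic, exposures in user_exposures.items():
        if exposures == 0:
            continue
        user_ctr = user_clicks.get(topic, 0) / exposures
        # only above-platform-average topics become interest labels, so
        # heavily exposed hot news does not dominate the portrait
        if user_ctr > platform_ctr.get(topic, float("inf")):
            interests.append(topic)
    return interests
```

Because the score is a ratio against the user's own exposures, a topic the platform pushes at everyone does not become an interest label unless this user clicks it more often than average.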
The click preference is obtained by extracting content keywords and counting the user's clicks under different columns, topics and keywords. Content tags need to describe the content the user is interested in as accurately as possible, so both topics and keywords are essential.
For example, from the topic perspective, take a piece of sports news: the news classification "sports" can indicate the user's interest, but the label is too coarse; the user may only be interested in football, so the "sports" label is not accurate enough. As another example, keywords in news, especially the proper nouns inside it (names of people and organizations), such as "morse", "asena" and "ezuel", also indicate the user's interests. The main problem with keywords is that the granularity is too fine: if those keywords do not appear in a given day's news, no content can be recommended to the user. What is needed is a label of medium granularity, with both a certain accuracy and a certain generalization ability. This scheme therefore tries to cluster keywords and treat each class of keywords as one label, or to split the news under one classification, generating topic labels such as "football" whose granularity lies between keywords and classifications.

A step S23 of constructing a user portrait based on the cross feature subclass: cross-combining the user portrait contents of the demographic attribute subclass, the account information subclass, the social feature subclass, the environmental feature subclass and the interactive interest feature subclass to obtain the user portrait content of the cross feature subclass, and extracting the time-series features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass, so as to construct the user portrait of the user.
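The clustering of keywords into medium-granularity labels described in this embodiment could, under the assumption that keywords which frequently co-occur in documents belong to one label, be sketched as a small union-find pass (the patent does not fix a particular clustering algorithm; the threshold and names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def cluster_keywords(doc_keywords, min_cooccur=2):
    """Merge keywords that co-occur in at least `min_cooccur` documents into
    one medium-granularity label cluster (toy union-find sketch)."""
    pair_counts = defaultdict(int)
    for kws in doc_keywords:
        for a, b in combinations(sorted(set(kws)), 2):
            pair_counts[(a, b)] += 1

    parent = {}
    def find(x):                      # union-find with path compression
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), n in pair_counts.items():
        if n >= min_cooccur:
            parent[find(a)] = find(b)  # union the two keywords

    clusters = defaultdict(set)
    for kws in doc_keywords:
        for k in kws:
            clusters[find(k)].add(k)
    return sorted(sorted(c) for c in clusters.values())
```

Each resulting cluster can then serve as one label: coarser than a single proper noun, finer than a whole news classification.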
Specifically, the cross feature is a configurable feature option: new features with business significance are formed by cross-combining the above 5 classes of features, and they need to be extended according to actual business requirements. Cross features bound to time are especially valuable. Consider two failure modes: if the user's interest score accumulates linearly, the value grows very large and old interests carry particularly high weight; on the other hand, the user's interest is strongly time-sensitive (yesterday's clicks matter more than clicks from a month ago), and linear superposition cannot highlight recent interest. To solve this, frequency attenuation and time attenuation must be applied to the user's interest score, which ensures that earlier interests become very weak after a while and later interests are weighted more heavily. Likewise, the interest may be attenuated at the weekly, monthly or hourly level, depending on the rate of change of user interest, user activity and other factors.
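One common way to realize the time attenuation described above is an exponential half-life decay (a sketch only: the patent does not prescribe a specific decay function, and the 7-day half-life is an illustrative parameter that could equally be set at the hourly, weekly or monthly level):

```python
from datetime import datetime, timedelta

def decayed_interest(events, now, half_life_days=7.0):
    """Time-attenuated interest score: each click's weight halves every
    `half_life_days`, so recent behavior outweighs old behavior instead
    of accumulating linearly."""
    score = 0.0
    for ts, weight in events:
        age_days = (now - ts).total_seconds() / 86400.0
        score += weight * 0.5 ** (age_days / half_life_days)
    return score
```

With this scoring, a month-old burst of clicks contributes far less than a single fresh click, which is exactly the recency behavior linear superposition fails to provide.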
In this way, the invention designs the label granularity so as to eliminate the problem of the user's interest labels being interfered with by external factors.
The second embodiment is as follows:
referring to FIG. 3, FIG. 3 is a schematic diagram of a user dynamic image generation system according to the present invention. FIG. 3 shows a user animation generation system of the present invention, which comprises:
the generation candidate set module is used for constructing a user portrait system with multi-user portrait dimensions according to historical information, and subclasses of the user portrait system comprise: the system comprises a population property subclass, an account information subclass, a social characteristic subclass, an environmental characteristic subclass, an interactive interest characteristic subclass and a cross characteristic subclass;
and the user portrait construction module, which extracts the user's attributes and behaviors from the user information according to the user portrait content of the subclasses of the user portrait system for prediction, so as to construct the user portrait of the user.
Wherein the user portrait construction module comprises:
the unit for constructing a user portrait based on the environmental feature subclass, which extracts the time sequence features of the user's attributes and behaviors from the user information according to the user portrait content of the environmental feature subclass, so as to construct the user portrait of the user.
Wherein the user portrait construction module comprises:
the unit for constructing a user portrait based on the interactive interest feature subclass, which extracts the user's attributes and the time sequence features of the user's behaviors from the user information according to the user portrait content of the interactive interest feature subclass, calculates a demand index of the user according to these attributes and time sequence features, and constructs the user portrait of the user according to the demand index together with the attributes and time sequence features; or
the unit for constructing a user portrait based on the interactive interest feature subclass, which extracts the user's attributes and time sequence features from the user information according to the user portrait content of the interactive interest feature subclass, extracts keywords according to these attributes and time sequence features, clusters the keywords to obtain labels of medium granularity, and constructs the user portrait of the user according to the labels together with the attributes and time sequence features.
Wherein the user portrait construction module comprises:
the unit for constructing a user portrait based on the cross feature subclass, which cross-combines the user portrait contents in the population attribute subclass, the account information subclass, the social feature subclass, the environmental feature subclass and the interactive interest feature subclass to obtain the user portrait content of the cross feature subclass, and extracts the user's attributes and the time sequence features of the user's behaviors from the user information according to the user portrait content of the interactive interest feature subclass to construct the user portrait of the user.
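The cross-combination performed by this unit can be sketched as below. This is an illustrative toy under assumed feature names (`age_band`, `time_of_day`, `topic`, etc.), not the patent's configuration; it only shows how configured pairs of subclass fields are joined into new named features.

```python
# Hypothetical per-subclass feature values for one user.
subclasses = {
    "population_attribute": {"age_band": "25-34", "gender": "m"},
    "environmental_feature": {"time_of_day": "evening", "device": "mobile"},
    "interactive_interest": {"topic": "football"},
}

def cross_features(subclasses, combos):
    """Build named cross features from configured (subclass, field) pairs.

    `combos` is the configurable part: each entry lists which subclass fields
    to join into one new feature with business significance.
    """
    out = {}
    for combo in combos:
        key = "x".join(field for _, field in combo)
        value = "|".join(subclasses[sub][field] for sub, field in combo)
        out[key] = value
    return out

combos = [
    [("population_attribute", "gender"), ("interactive_interest", "topic")],
    [("environmental_feature", "time_of_day"), ("interactive_interest", "topic")],
]
crossed = cross_features(subclasses, combos)
print(crossed)  # {'genderxtopic': 'm|football', 'time_of_dayxtopic': 'evening|football'}
```

Because `combos` is plain data, new business-specific cross features can be added by configuration without changing code, matching the "configurable feature option" described above.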
Example three:
referring to fig. 4, this embodiment discloses a specific implementation of an electronic device. The electronic device may include a processor 81 and a memory 82 storing computer program instructions.
Specifically, the processor 81 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 82 may include, among other things, mass storage for data or instructions. By way of example and not limitation, memory 82 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 82 may include removable or non-removable (or fixed) media, where appropriate. The memory 82 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 82 is a non-volatile memory. In particular embodiments, memory 82 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate. The RAM may be Static Random-Access Memory (SRAM) or Dynamic Random-Access Memory (DRAM), where the DRAM may be Fast Page Mode DRAM (FPM DRAM), Extended Data Out DRAM (EDO DRAM), Synchronous DRAM (SDRAM), and the like.
The memory 82 may be used to store or cache various data files for processing and/or communication use, as well as possible computer program instructions executed by the processor 81.
The processor 81 reads and executes the computer program instructions stored in the memory 82 to implement any of the user dynamic portrait generation methods of the above embodiments.
In some of these embodiments, the electronic device may also include a communication interface 83 and a bus 80. As shown in fig. 4, the processor 81, the memory 82, and the communication interface 83 are connected via the bus 80 to complete communication therebetween.
The communication interface 83 is used for implementing communication between modules, devices, units and/or equipment in the embodiments of the present application. The communication interface 83 may also implement data communication with other components, such as external devices, image/data acquisition equipment, databases, external storage, image/data processing workstations, and the like.
The bus 80 includes hardware, software, or both, coupling the components of the electronic device to one another. Bus 80 includes, but is not limited to, at least one of the following: a Data Bus, an Address Bus, a Control Bus, an Expansion Bus, and a Local Bus. By way of example and not limitation, Bus 80 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) Bus, a Front-Side Bus (FSB), a HyperTransport (HT) Interconnect, an Industry Standard Architecture (ISA) Bus, an InfiniBand Interconnect, a Low Pin Count (LPC) Bus, a memory bus, a Micro Channel Architecture (MCA) Bus, a Peripheral Component Interconnect (PCI) Bus, a PCI-Extended (PCI-X) Bus, a Serial Advanced Technology Attachment (SATA) Bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 80 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The electronic device may execute the user dynamic portrait generation methods described in conjunction with fig. 1 to fig. 2.
In addition, in combination with the user dynamic portrait generation method of the foregoing embodiments, an embodiment of the present application may be implemented by providing a computer-readable storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the user dynamic portrait generation methods of the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but all such combinations should be considered within the scope of this specification as long as they are not contradictory.
In conclusion, the present invention can be adapted to the user portrait requirements of any news scene, is automatic and efficient, and greatly saves development cost and time cost. User portrait modeling must pay attention to granularity: a label that is too fine-grained has no generalization ability or practical value, while a label that is too coarse-grained has no discriminative power. The invention takes the comparison between a certain behavior of the user and the average behavior of the whole platform as the label of the user's interest, which avoids interference from hot news or platform-promoted news on the user portrait, and is more accurate and personalized.
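The platform-average comparison described above can be sketched as a simple lift score; this is an illustrative computation with hypothetical click counts, not the patent's exact formula.

```python
def interest_lift(user_clicks, user_total, platform_clicks, platform_total):
    """Ratio of the user's click rate on a topic to the platform-wide rate.

    Lift > 1 means the user engages with the topic more than the average user,
    so the topic qualifies as a personal interest label even when hot news
    inflates everyone's click counts on that topic.
    """
    user_rate = user_clicks / user_total
    platform_rate = platform_clicks / platform_total
    return user_rate / platform_rate

# Hypothetical counts: the user clicked football news 30 times out of 100
# total clicks; platform-wide, football drew 50,000 of 1,000,000 clicks (5%).
lift = interest_lift(30, 100, 50_000, 1_000_000)
print(lift)  # 6.0 -> football is a genuine personal interest for this user
```

A platform-wide hot story raises both the numerator and the denominator rates, so the lift of users who are not genuinely interested stays near 1, which is how the comparison resists hot-news interference.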
The above embodiments express only several embodiments of the present application; their description is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present invention shall be subject to the appended claims.

Claims (10)

1. A user dynamic portrait generation method, comprising:
generating a candidate set: constructing a user portrait system with multi-user portrait dimensions according to historical information;
constructing a user portrait: extracting the user's attributes and behaviors from the user information according to the user portrait content of the subclasses of the user portrait system for prediction, so as to construct the user portrait of the user.
2. The user dynamic portrait generation method as claimed in claim 1, wherein the step of constructing a user portrait comprises:
constructing a user portrait based on the environmental feature subclass: extracting the time sequence features of the user's attributes and behaviors from the user information according to the user portrait content of the environmental feature subclass of the user portrait system, so as to construct the user portrait of the user.
3. The user dynamic portrait generation method as claimed in claim 1, wherein the step of constructing a user portrait comprises:
constructing a user portrait based on the interactive interest feature subclass: extracting the user's attributes and the time sequence features of the user's behaviors from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, calculating a demand index of the user according to these attributes and time sequence features, and constructing the user portrait of the user according to the demand index together with the attributes and time sequence features; or
constructing a user portrait based on the interactive interest feature subclass: extracting the user's attributes and time sequence features from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, extracting keywords according to these attributes and time sequence features, clustering the keywords to obtain labels of medium granularity, and constructing the user portrait of the user according to the labels together with the attributes and time sequence features.
4. The user dynamic portrait generation method as claimed in claim 1, wherein the step of constructing a user portrait comprises:
constructing a user portrait based on the cross feature subclass: cross-combining the user portrait contents in the population attribute subclass, the account information subclass, the social feature subclass, the environmental feature subclass and the interactive interest feature subclass of the user portrait system to obtain the user portrait content of the cross feature subclass, and extracting the time sequence features of the user's attributes and behaviors from the user information according to the user portrait content of the interactive interest feature subclass to construct the user portrait of the user.
5. A user dynamic portrait generation system, comprising:
the candidate set generating module is used for constructing a user portrait system with multi-user portrait dimensions according to historical information;
and the user portrait construction module, which extracts the user's attributes and behaviors from the user information according to the user portrait content of the subclasses of the user portrait system for prediction, so as to construct the user portrait of the user.
6. The user dynamic portrait generation system of claim 5, wherein the user portrait construction module comprises:
the unit for constructing a user portrait based on the environmental feature subclass, which extracts the user's attributes and the time sequence features of the user's behaviors from the user information according to the user portrait content of the environmental feature subclass of the user portrait system, so as to construct the user portrait of the user.
7. The user dynamic portrait generation system of claim 5, wherein the user portrait construction module comprises:
the unit for constructing a user portrait based on the interactive interest feature subclass, which extracts the user's attributes and the time sequence features of the user's behaviors from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, calculates a demand index of the user according to these attributes and time sequence features, and constructs the user portrait of the user according to the demand index together with the attributes and time sequence features; or
the unit for constructing a user portrait based on the interactive interest feature subclass, which extracts the user's attributes and time sequence features from the user information according to the user portrait content of the interactive interest feature subclass of the user portrait system, extracts keywords according to these attributes and time sequence features, clusters the keywords to obtain labels of medium granularity, and constructs the user portrait of the user according to the labels together with the attributes and time sequence features.
8. The user dynamic portrait generation system of claim 5, wherein the user portrait construction module comprises:
the user portrait construction method comprises the steps of constructing a user portrait unit based on a cross feature subclass, and performing cross combination on user portrait contents in the user portrait system, the account information subclass, the social feature subclass, the environmental feature subclass and the interaction interest feature subclass to obtain user portrait contents of the cross feature subclass, and extracting user attributes and time sequence characteristics of behaviors in user information according to the user portrait contents of the interaction interest feature subclass to construct a user portrait of a user.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the user dynamic representation generation method as claimed in any one of claims 1 to 4 when executing the computer program.
10. A storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the user dynamic portrait generation method as claimed in any one of claims 1 to 4.
CN202110388225.9A 2021-04-12 2021-04-12 User dynamic portrait generation method, system, storage medium and electronic device Pending CN113010795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110388225.9A CN113010795A (en) 2021-04-12 2021-04-12 User dynamic portrait generation method, system, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN113010795A true CN113010795A (en) 2021-06-22

Family

ID=76388310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110388225.9A Pending CN113010795A (en) 2021-04-12 2021-04-12 User dynamic portrait generation method, system, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN113010795A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091112A (en) * 2022-12-29 2023-05-09 江苏玖益贰信息科技有限公司 Consumer portrait generating device and portrait analyzing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503015A (en) * 2015-09-07 2017-03-15 国家计算机网络与信息安全管理中心 A kind of method for building user's portrait
WO2017157146A1 (en) * 2016-03-15 2017-09-21 平安科技(深圳)有限公司 User portrait-based personalized recommendation method and apparatus, server, and storage medium
CN111178950A (en) * 2019-12-19 2020-05-19 车智互联(北京)科技有限公司 User portrait construction method and device and computing equipment
CN111444236A (en) * 2020-03-23 2020-07-24 华南理工大学 Mobile terminal user portrait construction method and system based on big data



Similar Documents

Publication Publication Date Title
CN106030571B (en) Dynamically modifying elements of a user interface based on a knowledge graph
US9310879B2 (en) Methods and systems for displaying web pages based on a user-specific browser history analysis
CN102054003B (en) Methods and systems for recommending network information and creating network resource index
US9158850B2 (en) Personal trends module
CN102932206B (en) The method and system of monitoring website access information
CN109511015B (en) Multimedia resource recommendation method, device, storage medium and equipment
US20180307733A1 (en) User characteristic extraction method and apparatus, and storage medium
CN106021583B (en) Statistical method and system for page flow data
Lee et al. Leveraging microblogging big data with a modified density-based clustering approach for event awareness and topic ranking
CN107229754B (en) Information sorting method and device, electronic equipment and storage medium
CN112288464A (en) Commodity recommendation method and device, computer equipment and storage medium
US11449553B2 (en) Systems and methods for generating real-time recommendations
CN112613938B (en) Model training method and device and computer equipment
WO2015185020A1 (en) Information category obtaining method and apparatus
CN111597449B (en) Candidate word construction method and device for search, electronic equipment and readable medium
CN104881774A (en) Method and apparatus for automatically establishing schedule
CN112699295A (en) Webpage content recommendation method and device and computer readable storage medium
CN110020273B (en) Method, device and system for generating thermodynamic diagram
CN103595747A (en) User-information recommending method and system
CN100555283C (en) A kind of directly at the dissemination method and the system of user's relevant information
CN113010795A (en) User dynamic portrait generation method, system, storage medium and electronic device
CN104281581A (en) Method and system for monitoring exposure of content at recommendation position of webpage
CN108319622A (en) A kind of media content recommendations method and device
US11630817B2 (en) Method and system for data indexing and reporting
CN113297436A (en) User policy distribution method and device based on relational graph network and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination