CN112465565A - User portrait prediction method and device based on machine learning - Google Patents



Publication number
CN112465565A
CN112465565A
Authority
CN
China
Prior art keywords
information
user
target user
portrait
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011460997.0A
Other languages
Chinese (zh)
Other versions
CN112465565B (en)
Inventor
行康泽
余承乐
彭喜喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Addnewer Corp
Original Assignee
Addnewer Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Addnewer Corp
Priority: CN202011460997.0A
Publication of CN112465565A
Application granted
Publication of CN112465565B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0269Targeted advertisements based on user profile or attribute
    • G06Q30/0271Personalized advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Abstract

The embodiment of the application discloses a user portrait prediction method based on machine learning. The method comprises the following steps: acquiring action hotspot information, geographical position information and time information of a target user and a first user within a preset time, wherein the first user is a user who already possesses portrait information; determining first portrait information of the target user by associating the action hotspot information, the geographical position information and the time information of the target user and the first user; calculating second portrait information of the target user by using a nearest neighbor method; determining a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; and determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information. In this way, the basic data for the user portrait is obtained from multiple dimensions and reliable data sources are enriched, thereby reducing inaccurate user portrait prediction results.

Description

User portrait prediction method and device based on machine learning
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to a user portrait prediction method and device based on machine learning.
Background
As advertisement data volumes grow and user numbers accumulate, the demand for accurately targeted advertising keeps rising and the user portrait becomes increasingly important. Only by accurately depicting a user's portrait and clearly analyzing the user's behavior track can the value of the data be fully exploited, and hence the value of the advertising be maximized.
In the prior art, users are mainly depicted in several ways. One draws the user portrait with rules, depicting the portrait from the user's daily geographical position patterns over the last month of behavior data. Another predicts the user's portrait data with a machine learning or deep learning method. In either case, if the rule definitions are inaccurate or the learning samples are inaccurate, the user portrait prediction result may be inaccurate.
Disclosure of Invention
The embodiment of the application provides a user portrait prediction method and device based on machine learning, which are used for reducing inaccurate user portrait prediction results.
In a first aspect, an embodiment of the present application provides a method for user portrait prediction based on machine learning, including:
acquiring action hotspot information, geographical position information and time information of a target user and a first user within a preset time, wherein the first user is a user who already possesses portrait information;
determining first portrait information of the target user by associating the action hotspot information, the geographical position information and the time information of the target user and the first user;
calculating second portrait information of the target user by using a nearest neighbor method;
determining a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; and determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information.
Optionally, the determining the first portrait information of the target user by associating the action hotspot information, the geographical position information and the time information of the target user and the first user includes:
associating the action hotspot information, the geographical position information and the time information of the target user with those of the first user;
determining a relationship between the target user and the first user;
and calculating the portrait information of the first user through a rule to determine the first portrait information of the target user.
Optionally, the determining a result predicted by deep learning of partial tag information of the first user as the third portrait information of the target user includes:
performing model training on partial tag information of the first user to generate a training model, wherein the partial tags are tags not carried by the target user;
and predicting the third portrait information of the target user through the training model, wherein the third portrait information comprises the partial tag information.
Optionally, the acquiring of the action hotspot information, the geographical position information and the time information of the target user and the first user within a preset time, wherein the first user is a user who already possesses portrait information, includes:
acquiring portrait basic data information of the target user within a first preset time;
extracting action hotspot information, geographical position information and time information from the portrait basic data information;
and calculating and counting the action hotspot information, the geographical position information and the time information of the target user and the first user within the preset time, wherein the first user is a user who already possesses user portrait information.
Optionally, the first preset time is shorter than the preset time.
Optionally, before determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information, the method further includes:
screening and integrating similar tag information among the first portrait information, the second portrait information and the third portrait information.
In a second aspect, an embodiment of the present application provides an apparatus for user portrait prediction based on machine learning, including:
an acquisition unit, configured to acquire action hotspot information, geographical position information and time information of a target user and a first user within a preset time, wherein the first user is a user who already possesses portrait information;
a first determining unit, configured to determine first portrait information of the target user by associating the action hotspot information, the geographical position information and the time information of the target user and the first user;
a calculating unit, configured to calculate second portrait information of the target user by using a nearest neighbor method;
a second determining unit, configured to determine a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user;
and a third determining unit, configured to determine the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information.
Optionally, the first determining unit includes:
an association module, configured to associate the action hotspot information, the geographical position information and the time information of the target user and the first user;
a first determining module, configured to determine a relationship between the target user and the first user;
and a second determining module, configured to calculate the portrait information of the first user through a rule to determine the first portrait information of the target user.
Optionally, the second determining unit includes:
a generating module, configured to perform model training on partial tag information of the first user to generate a training model, wherein the partial tags are tags not carried by the target user;
and a prediction module, configured to predict the third portrait information of the target user through the training model, wherein the third portrait information comprises the partial tag information.
Optionally, the obtaining unit includes:
an acquisition module, configured to acquire portrait basic data information of the target user within a first preset time;
an extraction module, configured to extract action hotspot information, geographical position information and time information from the portrait basic data information;
and a statistics module, configured to calculate and count the action hotspot information, the geographical position information and the time information of the target user and the first user within the preset time, wherein the first user is a user who already possesses user portrait information.
Optionally, the apparatus further includes:
an integration unit, configured to screen and integrate similar tag information among the first portrait information, the second portrait information and the third portrait information before the third determining unit operates.
A third aspect of the embodiments of the present application provides an apparatus for user portrait prediction based on machine learning, including:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the processor specifically performs the following operations:
acquiring action hotspot information, geographical position information and time information of a target user and a first user within a preset time, wherein the first user is a user who already possesses portrait information;
determining first portrait information of the target user by associating the action hotspot information, the geographical position information and the time information of the target user and the first user;
calculating second portrait information of the target user by using a nearest neighbor method;
determining a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; and determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information.
Optionally, the processor is further configured to perform the operations of any of the alternatives of the first aspect.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium for user portrait prediction based on machine learning, including:
the computer-readable storage medium stores a program that, when executed on a computer, performs the aforementioned method for machine-learning-based user portrait prediction.
According to the technical scheme, the action hotspot information, the geographical position information and the time information of a target user and a first user within a preset time are acquired, wherein the first user is a user who already possesses user portrait information. The first portrait information of the target user is obtained by associating the action hotspot information, the geographical position information and the time information of the target user and the first user; the second portrait information of the target user is calculated by a nearest neighbor method; and the third portrait information of the target user is determined by deep learning of partial tags of the first user. The portrait information of the target user is then predicted by statistically combining the first portrait information, the second portrait information and the third portrait information. In this way, the basic data for the user portrait is obtained from multiple dimensions and reliable data sources are enriched, thereby reducing inaccurate user portrait prediction results.
Drawings
FIG. 1 is a flow chart illustrating an embodiment of a method for machine learning based user profile prediction in an embodiment of the present application;
FIG. 2-1 is a flow chart illustrating another embodiment of a method for machine-learning-based user portrait prediction in an embodiment of the present application;
FIG. 2-2 is a flow chart illustrating another embodiment of a method for machine-learning-based user portrait prediction in an embodiment of the present application;
FIG. 3 is a block diagram of an embodiment of an apparatus for machine-learning-based user portrait prediction in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an apparatus for machine-learning-based user portrait prediction in another embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for machine-learning-based user portrait prediction in another embodiment of the present application.
Detailed Description
The embodiment of the application provides a user portrait prediction method and device based on machine learning, which are used for reducing inaccurate user portrait prediction results.
In the present application, the method for predicting a user portrait based on machine learning may be implemented in a system, a server, or a terminal, and is not specifically limited.
Referring to fig. 1, taking a system as the implementation subject as an example, an embodiment of a method for machine-learning-based user portrait prediction according to the present application includes:
101. acquiring action hotspot information, geographical position information and time information of a target user and a first user within a preset time, wherein the first user is a user who already possesses portrait information;
In an actual scene, there are currently several main ways of depicting a user portrait. For example, the user fills in his or her own portrait data, which raises the problem that the user may fill it in incorrectly or incompletely. Alternatively, the user portrait is carved through rules after obtaining the geographical positions the user comes and goes every day over the last month, so the whole portrait is inaccurate if the rule definitions are inaccurate. Finally, the user's portrait data may be predicted with a machine learning method, where inaccurate samples lead to inaccurate prediction results.
Therefore, in this embodiment, before obtaining more accurate portrait information of the target user, the system collects the user's portrait information through different dimensions. In one dimension, it first acquires the action hotspot information, geographical position information and time information, within a preset time, of the target user and of users who already possess portrait information. For example, when user A watches a video, the action hotspot information in a certain period is {"ip": "192.168.0.1", "wifi_name": "Xiaoming's home wifi"}, the geographical position information is {"wm629s9": "place name and address information; traffic place name; road name"}, and the time information is 20:00-21:00.
102. Determining first portrait information of the target user by associating the action hotspot information, the geographical position information and the time information of the target user and the first user;
In this embodiment, the system associates the action hotspot information, the geographical position information and the time information of the target user with those of the first user. For example, the system acquires the action hotspot information of user A as {"ip": "192.168.0.1", "wifi_name": "Xiaoming's home wifi"} and {"ip": "192.168.0.2", "wifi_name": "company A's wifi"}, then extracts the behavior hotspot information of users already stored in the system and compares the data to find similar characteristic information. From the relevance of these associated data, the first portrait information of the target user can be predicted.
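The association in step 102 can be sketched as matching users that share an action hotspot. This is a minimal illustrative sketch; the field names (`ip`, `wifi_name`) and the sample records are assumptions for illustration, not data from the patent.

```python
def associate(target_records, known_users):
    """Return the IDs of known (profiled) users that share an action
    hotspot (the same ip / wifi_name pair) with the target user."""
    target_keys = {(r["ip"], r["wifi_name"]) for r in target_records}
    matches = []
    for user_id, records in known_users.items():
        if any((r["ip"], r["wifi_name"]) in target_keys for r in records):
            matches.append(user_id)
    return matches

# Illustrative records (hypothetical data).
target = [{"ip": "192.168.0.1", "wifi_name": "xiaoming_home"}]
known = {
    "user_b": [{"ip": "192.168.0.1", "wifi_name": "xiaoming_home"}],
    "user_c": [{"ip": "10.0.0.7", "wifi_name": "office_a"}],
}
print(associate(target, known))  # ['user_b']
```

In practice the association would also compare geographical position and time windows; here only the hotspot pair is matched to keep the idea visible.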
103. Calculating second portrait information of the target user by using a nearest neighbor method;
In another dimension, the second portrait information of the target user is calculated by a nearest neighbor method. Specifically, the K-Nearest Neighbor (KNN) algorithm is one of the simplest methods in data mining classification. Its core idea is that if most of the K nearest samples of a sample in the feature space belong to a certain class, the sample also belongs to that class and shares the features of the samples in that class.
For example, among the K users with portrait information nearest to the target user, Hubei Province accounts for the largest proportion of the province attribute, so the province in the target user's portrait information is Hubei Province.
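The majority vote at the heart of this KNN step can be sketched in a few lines. The neighbor list below is hypothetical sample data, not from the patent.

```python
from collections import Counter

def knn_label(neighbor_labels):
    """Majority vote among the K nearest profiled users (KNN core idea):
    the target user takes the most common label of its neighbors."""
    return Counter(neighbor_labels).most_common(1)[0][0]

# Provinces of the K = 5 nearest users with existing portraits (illustrative).
neighbors = ["Hubei", "Hubei", "Hubei", "Hunan", "Guangdong"]
print(knn_label(neighbors))  # Hubei
```

A full KNN implementation would first rank profiled users by distance in feature space and take the top K; the vote itself is exactly this counting step.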
104. Determining a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; in this embodiment, after the system has obtained portrait information of the target user through the associated-data and nearest-neighbor dimensions, some tags may still be incomplete. The system can determine third portrait information of the target user by deep learning on the partial tags of users with existing portrait information. For example, if the target user lacks interest characteristic information while users with portrait information carry an interest tag, the system can deep learn that interest tag.
105. Determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information.
In this embodiment, the first portrait information of the target user is determined through the associated-data dimension, the second portrait information is determined by nearest neighbor calculation, and the third portrait information is obtained by deep learning the characteristic information the target user lacks. By integrating the portrait information calculated across multiple dimensions, the portrait information of the target user can be predicted, and the rich, reliable data sources reduce inaccuracies in the constructed user portrait.
Referring to fig. 2-1 and fig. 2-2, taking a system as the implementation subject as an example, another embodiment of the method for machine-learning-based user portrait prediction according to the embodiment of the present application includes:
201. acquiring portrait basic data information of a target user within a first preset time;
in an actual scene, when a user watches a video advertisement, the media push the user's traffic to an advertisement delivery system, and the system preferentially exposes advertisements to the user. In this process, user logs are generated that include some of the user's portrait basic data information.
In this embodiment, when the user watches the video advertisement, the system obtains the user's basic data information, for example, the user's name, gender, telephone number, and the like. It should be noted that the first preset time is set according to the amount of push data acquired by the system and other external factors, and may be set to one day here.
202. Extracting action hotspot information, geographical position information and time information in the portrait basic data information;
in this embodiment, the basic data acquired by the system is relatively cluttered and does not consist entirely of the data the system requires to predict the target user's portrait information. The system therefore extracts the action hotspot information, the geographical position information and the time information from the portrait basic data information as the basis data for the subsequent multi-dimensional prediction of the target user's portrait.
203. Calculating and counting the action hotspot information, the geographical position information and the time information of the target user and the first user within a preset time, wherein the first user is a user who already possesses user portrait information;
It should be noted that the preset time is not equal to the first preset time. The preset time may be set according to the basic data information of the target user, for example to one month, and it is certainly greater than the first preset time, so that the data is richer and more accurate.
After acquiring the action hotspot information, geographical position information and time information of the target user within one day, the system computes these over one month. Meanwhile, the system extracts the stored basic data information of users with portrait information and computes their action hotspot information, geographical position information and time information over one month.
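The counting in step 203 amounts to aggregating a period's records into frequency statistics. A minimal sketch, under the assumption that each log record is a dict with `wifi_name`, `geo` and `hour` fields (illustrative names, not from the patent):

```python
from collections import Counter

def top_hotspots(records, n=3):
    """Count the most frequent (wifi, location, hour) combinations
    over a period, e.g. one month of log records."""
    counts = Counter((r["wifi_name"], r["geo"], r["hour"]) for r in records)
    return counts.most_common(n)

# A tiny illustrative "month" of records.
month = [
    {"wifi_name": "xiaoming_home", "geo": "street_a", "hour": 20},
    {"wifi_name": "xiaoming_home", "geo": "street_a", "hour": 20},
    {"wifi_name": "office_a", "geo": "street_b", "hour": 10},
]
print(top_hotspots(month, n=1))  # [(('xiaoming_home', 'street_a', 20), 2)]
```

The same aggregation would be run both for the target user and for each user with existing portrait information before the two are associated.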
204. Associating action hotspot information, geographical position information and time information of the target user and the first user;
the system carries out data association on the calculated action hotspot information, the geographical position information and the time information of the target user and the user who already possesses the portrait information, wherein the association is one-to-one.
205. Determining a relationship between the target user and the first user;
After the system associates the data, the relationship between the target user and the first user can be determined from the association result. For example, by associating their action hotspot information, the system discovers that both connect to Xiaoming's home wifi; by associating their geographical position information, it discovers that their locations are the same. If 5 such similar first users are found, it can be roughly predicted that the target user and these 5 users are relatives.
206. Calculating the portrait information of the first user through a rule to determine the first portrait information of the target user;
Having predicted the relationship between the target user and the first user, the system can roughly predict some of the target user's portrait data by applying rules to the portrait information of the 5 users. For example, if the 5 users are under 10 or over 60 years old, the younger and elder generations of the target user can be roughly predicted, and the target user's home address and family-member portrait information can then be determined from their address and time calculations.
207. Calculating second portrait information of the target user by using a nearest neighbor method;
step 207 in this embodiment is similar to step 103, and is not described herein.
208. Performing model training on partial tag information of the first user to generate a training model, wherein the partial tags are tags not carried by the target user;
In this embodiment, the system has predicted most of the target user's portrait information through multiple dimensions, and the rest can be predicted through deep learning. Specifically, a model is generated by deep learning on partial tag information of users with existing portrait information, where the partial tags are tags the target user does not carry. For example, the system has predicted portrait information such as the target user's height, weight, mobile phone number and home address, but not the interest characteristic; deep learning can then be performed on users associated with the target user.
209. Predicting third portrait information of the target user through the training model, wherein the third portrait information comprises the partial tag information;
In this embodiment, the process of model training is similar to that in practical applications, except that the sample data is smaller: only the partial-tag sample data is trained, so the training process is simpler and the interpretability is strong.
210. Screening and integrating similar tag information among the first portrait information, the second portrait information and the third portrait information;
In this embodiment, the portrait information of the target user obtained by the system through multi-dimensional calculation may contain similar data information. Specifically, the system extracts the similar data information, comprehensively sorts it, and keeps the most accurate data information.
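The integration in step 210 can be sketched as a per-tag majority merge over the three predicted portraits. The tag names and values below are illustrative assumptions; the patent does not specify the exact merge rule, so a simple most-common-value vote is used here.

```python
from collections import Counter

def merge_portraits(portraits):
    """For each tag appearing in any of the predicted portraits, keep the
    most common value across them (a simple majority-style integration)."""
    merged = {}
    all_keys = {k for p in portraits for k in p}
    for key in all_keys:
        values = [p[key] for p in portraits if key in p]
        merged[key] = Counter(values).most_common(1)[0][0]
    return merged

# Hypothetical first, second and third portrait information.
first = {"province": "Hubei", "interest": "sports"}
second = {"province": "Hubei"}
third = {"province": "Hunan", "interest": "sports"}
print(merge_portraits([first, second, third]))
```

A real system might weight the three sources differently (e.g. trust the rule-based dimension more for address tags); the vote here treats them equally for simplicity.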
211. Determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information.
Step 211 in this embodiment is similar to step 105, and is not described herein again.
The above describes a method for predicting a user portrait based on machine learning in the embodiment of the present application, and the following describes an apparatus for predicting a user portrait based on machine learning in the embodiment of the present application:
referring to fig. 3, an embodiment of an apparatus for user portrait prediction based on machine learning according to the present application includes:
the acquiring unit 301 is configured to acquire action hotspot information, geographical position information and time information of a target user and a first user within a preset time, where the first user is a user who already possesses portrait information;
a first determining unit 302, configured to determine first portrait information of the target user by associating the action hotspot information, the geographical position information and the time information of the target user and the first user;
a calculating unit 303, configured to calculate second portrait information of the target user by using a nearest neighbor method;
a second determining unit 304, configured to determine a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; a third determining unit 305, configured to determine the predicted portrait information of the target user by combining the first portrait information, the second portrait information and the third portrait information.
In this embodiment, after the acquiring unit 301 acquires the action hotspot information, the geographical position information and the time information of the target user and of users who already possess portrait information, the first determining unit 302 determines the first portrait information of the target user by associating that information of the target user with that of the first user. The calculating unit 303 then calculates the second portrait information of the target user by a nearest neighbor method. Since the associated data and the nearest neighbor algorithm cannot completely represent the user portrait information, the second determining unit 304 determines the third portrait information by deep learning on the partial tag information of users with existing portrait information. Finally, the third determining unit 305 predicts the portrait information of the target user by combining the first, second and third portrait information. In this way, the user portrait information obtained by multi-dimensional calculation is more accurate.
Referring to fig. 4, the apparatus for predicting a user portrait based on machine learning in the embodiment of the present application is described in detail below, and another embodiment of the apparatus for predicting a user portrait based on machine learning in the embodiment of the present application includes:
an obtaining unit 401, configured to obtain action hotspot information, geographic position information, and time information of a target user and a first user within a preset time, where the first user is a user who already has portrait information;
a first determining unit 402, configured to determine first portrait information of the target user by associating the action hotspot information, the geographical position information, and the time information of the target user with those of the first user;
a calculating unit 403, configured to calculate second portrait information of the target user by using a nearest neighbor method;
a second determining unit 404, configured to determine a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; and a third determining unit 405, configured to determine the predicted portrait information of the target user by combining the first portrait information, the second portrait information, and the third portrait information.
In this embodiment, the obtaining unit 401 may include:
the obtaining module 4011, configured to obtain portrait basic data information of the target user within a first preset time;
the extraction module 4012, configured to extract action hotspot information, geographical position information, and time information from the portrait basic data information;
the statistics module 4013, configured to calculate and count the action hotspot information, the geographical position information, and the time information of the target user and the first user within the preset time, where the first user is a user who already has user portrait information.
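As an illustration only (the patent does not specify a record schema), the extraction and statistics steps above might aggregate raw usage records into per-user hotspot, location, and time features like this; the field names `hotspot`, `geo`, and `hour` are hypothetical:

```python
from collections import Counter

def extract_features(records):
    """Aggregate raw usage records into the per-user action-hotspot,
    geographical-position, and time statistics the patent describes.
    Each record is assumed to be a dict with hypothetical keys
    'hotspot', 'geo', and 'hour'."""
    hotspots = Counter(r["hotspot"] for r in records)
    locations = Counter(r["geo"] for r in records)
    hours = Counter(r["hour"] for r in records)
    return {
        "top_hotspot": hotspots.most_common(1)[0][0],
        "top_geo": locations.most_common(1)[0][0],
        "peak_hour": hours.most_common(1)[0][0],
    }

records = [
    {"hotspot": "office-wifi", "geo": "district-A", "hour": 9},
    {"hotspot": "office-wifi", "geo": "district-A", "hour": 9},
    {"hotspot": "home-wifi",   "geo": "district-B", "hour": 21},
]
print(extract_features(records))
```

The same aggregation would be run both for the target user (over the first preset time) and for the portrait-bearing first users (over the preset time).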
The first determining unit 402 in this embodiment may include:
the association module 4021 is configured to associate the action hotspot information, the geographic location information, and the time information of the target user and the first user;
a first determining module 4022, configured to determine a relationship between a target user and a first user;
a second determining module 4023, configured to determine the first portrait information of the target user by calculating the portrait information of the first user according to a rule.
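The rule-based association above can be sketched as follows. The patent does not state the rule; this sketch assumes, purely for illustration, that sharing a minimum number of (hotspot, location, hour) observations with a portrait-bearing user is enough to adopt that user's portrait labels:

```python
def first_portrait_by_association(target_obs, known_users, min_overlap=2):
    """Determine first portrait information for the target user: if the
    target shares at least `min_overlap` (hotspot, geo, hour)
    observations with a user who already has a portrait, adopt that
    user's labels. The overlap rule and threshold are assumptions."""
    target_set = set(target_obs)
    for obs, portrait in known_users:
        if len(target_set & set(obs)) >= min_overlap:
            return dict(portrait)  # copy the associated user's labels
    return {}  # no association found; first portrait stays empty

known = [
    ([("office-wifi", "district-A", 9), ("cafe-wifi", "district-A", 13)],
     {"occupation": "office-worker"}),
]
target = [("office-wifi", "district-A", 9), ("cafe-wifi", "district-A", 13)]
print(first_portrait_by_association(target, known))
```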
The second determining unit 404 in this embodiment may include:
the generating module 4041, configured to perform deep learning on a part of the tags of the first user to generate a training model, where the partial tags are tags that the target user does not carry;
the predicting module 4042 is configured to predict third portrait information of the target user through the training model, where the third portrait information includes part of the label information.
In this embodiment, the functions of each unit and each module correspond to the steps in the embodiment shown in fig. 2, and are not described herein again.
Referring to fig. 5, another embodiment of the apparatus for predicting a user portrait based on machine learning in the embodiment of the present application is described in detail below; the apparatus includes:
a processor 501, a memory 502, an input/output unit 503, and a bus 504;
the processor 501 is connected with the memory 502, the input/output unit 503 and the bus 504;
the processor 501 performs the following operations:
acquiring action hotspot information, geographical position information and time information of a target user and a first user within preset time, wherein the first user is a user who already has portrait information;
determining first portrait information of the target user by associating the action hotspot information, the geographical position information, and the time information of the target user with those of the first user;
calculating second portrait information of the target user by using a nearest neighbor method;
determining a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; and determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information, and the third portrait information.
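The final combination step can be sketched as follows. The patent's later claim speaks of "screening similar label information" before integration without fixing a rule; this sketch assumes, for illustration, a 2-of-3 majority screen across the three portrait sources, keeping labels only one source proposes as a separate low-confidence set:

```python
from collections import Counter

def combine_portraits(first, second, third):
    """Merge the three portrait label sets: labels agreed on by at
    least two of the three sources are kept as the predicted portrait;
    labels proposed by only one source are returned separately.
    The 2-of-3 screening rule is an illustrative assumption."""
    votes = Counter()
    for portrait in (first, second, third):
        votes.update(set(portrait))
    agreed = {tag for tag, n in votes.items() if n >= 2}
    singles = {tag for tag, n in votes.items() if n == 1}
    return agreed, singles

first  = {"commuter", "sports-fan"}   # from rule-based association
second = {"commuter", "gamer"}        # from nearest neighbor
third  = {"commuter", "sports-fan"}   # from the trained tag model
agreed, singles = combine_portraits(first, second, third)
print(sorted(agreed), sorted(singles))
```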
Optionally, the functions of the processor 501 correspond to the steps in the embodiments shown in fig. 1 to fig. 2, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.

Claims (10)

1. A method for user portrait prediction based on machine learning, comprising:
acquiring action hotspot information, geographical position information and time information of a target user and a first user within preset time, wherein the first user is a user who already has portrait information;
determining first portrait information of the target user by associating the action hotspot information, the geographical position information, and the time information of the target user with those of the first user;
calculating second portrait information of the target user by using a nearest neighbor method;
determining a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user; and determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information, and the third portrait information.
2. The method of claim 1, wherein the determining first portrait information of the target user by associating the action hotspot information, the geographical location information, and the time information of the target user and the first user comprises:
associating the action hotspot information, the geographic location information, and the time information of the target user with those of the first user;
determining a relationship of the target user to the first user;
and calculating the portrait information of the first user through a rule to determine the first portrait information of the target user.
3. The method according to claim 1, wherein the determining, as the third portrait information of the target user, a result predicted by deep learning of the partial tag information of the first user comprises:
performing deep learning on partial label information of the first user to generate a training model, wherein the partial labels are labels that the target user does not carry;
and predicting the third portrait information of the target user through the training model, wherein the third portrait information comprises the partial label information.
4. The method of claim 1, wherein the obtaining of the action hotspot information, the geographic location information and the time information of the target user and a first user within a preset time, the first user being a user already having portrait information, comprises:
acquiring portrait basic data information of a target user within a first preset time;
extracting action hotspot information, geographical position information and time information in the portrait basic data information;
calculating and counting the action hotspot information, the geographical position information and the time information of the target user and the first user within a preset time, wherein the first user is a user who already has user portrait information.
5. The method of claim 4, wherein the first predetermined time is less than the predetermined time.
6. The method of any of claims 1-4, wherein before the determining the predicted portrait information of the target user by combining the first portrait information, the second portrait information, and the third portrait information, the method further comprises:
integrating the first portrait information, the second portrait information, and the third portrait information by screening similar label information among them.
7. An apparatus for machine learning-based user profile prediction, comprising:
the system comprises an acquisition unit, a storage unit and a display unit, wherein the acquisition unit is used for acquiring action hotspot information, geographical position information and time information of a target user and a first user within preset time, and the first user is a user who already has portrait information;
the first determining unit is used for determining first portrait information of the target user by associating the action hotspot information, the geographical position information, and the time information of the target user and the first user;
a calculating unit, configured to calculate second portrait information of the target user by using a nearest neighbor method;
a second determining unit, configured to determine a result predicted by deep learning of partial tag information of the first user as third portrait information of the target user;
and a third determining unit, configured to determine the predicted portrait information of the target user by combining the first portrait information, the second portrait information, and the third portrait information.
8. The apparatus according to claim 7, wherein the first determining unit comprises:
an association module, configured to associate the action hotspot information, the geographic location information, and the time information of the target user and the first user;
a first determining module, configured to determine a relationship between the target user and the first user;
and the second determining module is used for calculating the portrait information of the first user through a rule to determine the first portrait information of the target user.
9. The apparatus according to claim 7, wherein the second determining unit comprises:
the generating module is used for performing deep learning on partial labels of the first user to generate a training model, wherein the partial labels are labels that the target user does not carry;
and the prediction module is used for predicting the third portrait information of the target user through the training model, wherein the third portrait information comprises the partial label information.
10. The apparatus of claim 7, wherein the obtaining unit comprises:
the acquisition module is used for acquiring portrait basic data information of a target user within a first preset time;
the extraction module is used for extracting action hotspot information, geographical position information and time information in the portrait basic data information;
and the counting module is used for counting the action hotspot information, the geographical position information and the time information of the target user and the first user in preset time through calculation, wherein the first user is a user who already has user portrait information.
CN202011460997.0A 2020-12-11 2020-12-11 User portrait prediction method and device based on machine learning Active CN112465565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011460997.0A CN112465565B (en) 2020-12-11 2020-12-11 User portrait prediction method and device based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011460997.0A CN112465565B (en) 2020-12-11 2020-12-11 User portrait prediction method and device based on machine learning

Publications (2)

Publication Number Publication Date
CN112465565A true CN112465565A (en) 2021-03-09
CN112465565B CN112465565B (en) 2023-09-26

Family

ID=74803586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011460997.0A Active CN112465565B (en) 2020-12-11 2020-12-11 User portrait prediction method and device based on machine learning

Country Status (1)

Country Link
CN (1) CN112465565B (en)


Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170372225A1 (en) * 2016-06-28 2017-12-28 Microsoft Technology Licensing, Llc Targeting content to underperforming users in clusters
CN107807997A (en) * 2017-11-08 2018-03-16 北京奇虎科技有限公司 User's portrait building method, device and computing device based on big data
CN107862053A (en) * 2017-11-08 2018-03-30 北京奇虎科技有限公司 User's portrait building method, device and computing device based on customer relationship
CN108021929A (en) * 2017-11-16 2018-05-11 华南理工大学 Mobile terminal electric business user based on big data, which draws a portrait, to establish and analysis method and system
CN108769198A (en) * 2018-05-29 2018-11-06 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN109086377A (en) * 2018-07-24 2018-12-25 江苏通付盾科技有限公司 Generation method, device and the calculating equipment of equipment portrait
CN109345348A (en) * 2018-09-30 2019-02-15 重庆誉存大数据科技有限公司 The recommended method of multidimensional information portrait based on travel agency user
CN109727077A (en) * 2019-01-22 2019-05-07 深圳魔数智擎科技有限公司 User's future draws a portrait generation method, computer storage medium and computer equipment
CN109858953A (en) * 2019-01-02 2019-06-07 深圳壹账通智能科技有限公司 User's portrait method, apparatus, computer equipment and storage medium
CN109934619A (en) * 2019-02-13 2019-06-25 北京三快在线科技有限公司 User's portrait tag modeling method, apparatus, electronic equipment and readable storage medium storing program for executing
CN110431585A (en) * 2018-01-22 2019-11-08 华为技术有限公司 A kind of generation method and device of user's portrait
CN110782289A (en) * 2019-10-28 2020-02-11 方文珠 Service recommendation method and system based on user portrait
CN111079056A (en) * 2019-10-11 2020-04-28 深圳壹账通智能科技有限公司 Method, device, computer equipment and storage medium for extracting user portrait
CN111191092A (en) * 2019-12-31 2020-05-22 腾讯科技(深圳)有限公司 Portrait data processing method and portrait model training method
CN111190939A (en) * 2019-12-27 2020-05-22 深圳市优必选科技股份有限公司 User portrait construction method and device
WO2020114135A1 (en) * 2018-12-06 2020-06-11 西安光启未来技术研究院 Feature recognition method and apparatus
CN111915366A (en) * 2020-07-20 2020-11-10 上海燕汐软件信息科技有限公司 User portrait construction method and device, computer equipment and storage medium
CN112749323A (en) * 2019-10-31 2021-05-04 北京沃东天骏信息技术有限公司 Method and device for constructing user portrait
WO2022100518A1 (en) * 2020-11-12 2022-05-19 北京沃东天骏信息技术有限公司 User profile-based object recommendation method and device
CN115098583A (en) * 2022-06-28 2022-09-23 国网湖南省电力有限公司 User portrait depicting method for energy user
CN115660725A (en) * 2022-07-21 2023-01-31 国网上海市电力公司 Method for depicting multi-dimensional energy user portrait


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HS Ooi, et al.: "ANNIE: integrated de novo protein sequence annotation", Nucleic Acids Research, vol. 37, pages 435 - 5 *
ZHANG Han: "Research on Precision Recommendation Services in Digital Libraries Based on User Portraits", China Master's Theses Full-text Database, Information Science and Technology, no. 11, pages 143 - 3 *
WANG Zhenjun, et al.: "Recommendation Algorithm Based on Matrix Factorization and Nearest Neighbor Fusion on Spark", Computer Systems & Applications, vol. 26, no. 04, pages 124 - 129 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819593A (en) * 2021-04-19 2021-05-18 平安科技(深圳)有限公司 Data analysis method, device, equipment and medium based on position information
CN113840392A (en) * 2021-09-17 2021-12-24 杭州云深科技有限公司 Method and device for determining user intimacy, computer equipment and storage medium
CN113840392B (en) * 2021-09-17 2023-09-22 杭州云深科技有限公司 User intimacy determination method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112465565B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN108595461B (en) Interest exploration method, storage medium, electronic device and system
CN107678800B (en) Background application cleaning method and device, storage medium and electronic equipment
CN111148018B (en) Method and device for identifying and positioning regional value based on communication data
CN109104688A (en) Wireless network access point model is generated using aggregation technique
CN112465565B (en) User portrait prediction method and device based on machine learning
CN110472154A (en) A kind of resource supplying method, apparatus, electronic equipment and readable storage medium storing program for executing
EP3537365A1 (en) Method, device, and system for increasing users
CN104239327A (en) Location-based mobile internet user behavior analysis method and device
CN111026969B (en) Content recommendation method and device, storage medium and server
CN110855487B (en) Network user similarity management method, device and storage medium
CN110675179A (en) Marketing information processing method and device, electronic equipment and readable storage medium
CN109451334B (en) User portrait generation processing method and device and electronic equipment
CN107547626B (en) User portrait sharing method and device
CN109697224B (en) Bill message processing method, device and storage medium
CN103093213A (en) Video file classification method and terminal
CN112035736B (en) Information pushing method, device and server
CN111368858A (en) User satisfaction evaluation method and device
CN112291625B (en) Information quality processing method, information quality processing device, electronic equipment and storage medium
CN109919197B (en) Random forest model training method and device
CN113887518A (en) Behavior detection method and device, electronic equipment and storage medium
CN113792211A (en) Resource pushing processing method and device, electronic equipment and storage medium
CN113065894A (en) Data collection method and device based on user portrait and order analysis and storage medium
CN113556368A (en) User identification method, device, server and storage medium
CN111552850A (en) Type determination method and device, electronic equipment and computer readable storage medium
CN115757049B (en) Multi-service module log recording method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant