CN116450952A - Internet user portrait generation method and system based on deep learning technology - Google Patents

Internet user portrait generation method and system based on deep learning technology

Info

Publication number
CN116450952A
Authority
CN
China
Prior art keywords
user
data
deep learning
behavior
industry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310716358.3A
Other languages
Chinese (zh)
Inventor
周丽娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Xingyao Jiuzhou Technology Co ltd
Original Assignee
Tianjin Xingyao Jiuzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Xingyao Jiuzhou Technology Co ltd filed Critical Tianjin Xingyao Jiuzhou Technology Co ltd
Priority to CN202310716358.3A
Publication of CN116450952A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/953 - Querying, e.g. by the use of web search engines
    • G06F 16/9535 - Search customisation based on user profiles and personalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/21 - Design, administration or maintenance of databases
    • G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention relates to the technical field of data processing, and in particular to an Internet user portrait generation method and system based on deep learning technology. The method comprises collecting behavior data of a user. According to the invention, features are extracted by a deep learning model, which greatly reduces the influence of subjective factors on the portrait generation result and improves the accuracy and stability of portrait generation. Multi-model fusion effectively handles the diversity and complexity of the data, improving the accuracy and robustness of portrait generation. Feature selection screens out features with significant influence and discrimination, improving the accuracy and efficiency of user portrait generation. Predicting accurate user portraits provides an important basis for enterprise marketing and advertising, giving the method high commercial value and broad application prospects. By introducing a bias control technique, the generated user portrait is prevented from being overly skewed toward certain users, so that user characteristics are described more objectively and comprehensively.

Description

Internet user portrait generation method and system based on deep learning technology
Technical Field
The invention relates to the technical field of data processing, in particular to an Internet user portrait generation method and system based on a deep learning technology.
Background
In the Internet age, user data has become a precious resource. By analyzing and mining user data, an Internet enterprise can better understand user needs and provide personalized services. An Internet user portrait, as a user description model, organizes and analyzes user data and can provide enterprises with more accurate user analysis and precision-marketing tools. However, when analyzing and predicting user behavior, abnormal user behavior may be attributed to the data, and the generated user portrait tends to be skewed toward certain factors. Therefore, an Internet user portrait generation method and system based on deep learning technology are provided.
Disclosure of Invention
The invention aims to provide an Internet user portrait generation method and system based on a deep learning technology, which are used for solving the problems in the background technology.
In order to solve the above technical problems, one of the purposes of the present invention is to provide an internet user portrait generating method based on deep learning technology, comprising:
s1, collecting behavior data of a user, and sending a questionnaire to the user;
s2, establishing an industry data acquisition function, and acquiring other data characteristics of a user according to the industry data acquisition function;
s3, preprocessing based on the behavior characteristics acquired by the S1 and other data acquired by the S2;
s4, carrying out combined evaluation based on the data processed in the S3, and correcting the data according to an evaluation result;
s5, predicting the corrected data in the S4, and generating a user personalized image.
As a further improvement of the technical scheme, the steps of S1 for collecting behavior data and sending the questionnaire to the user are as follows:
s1.1, detecting the behavior of a user on the Internet, and collecting behavior data of the user at the same time;
s1.2, sending a behavior questionnaire to the user, and extracting information from the fed-back questionnaire.
As a further improvement of the technical scheme, in S2 the data characteristics of the user's industry are collected as follows:
s2.1, establishing an industry data acquisition function to acquire the industry data of a user;
S2.2, analyzing the industry data of the user acquired in S2.1 to obtain the frequency with which the user needs items in that industry.
As a further improvement of the technical scheme, the step of preprocessing the behavior characteristics and the data in S3 is as follows:
s3.1, cleaning the behavior data acquired by the S1.1, the information extracted by the S1.2 and the user industry data acquired by the S2.2;
and S3.2, screening the data after the S3.1 is cleaned by a characteristic selection algorithm to obtain characteristic data with obvious influence and distinguishing degree.
As a further improvement of the technical scheme, the expressions used in S3.2 to screen, by the feature selection algorithm, the characteristic data with obvious influence and discrimination are as follows:
Significant influence formula:
r = Σ(x_i - x̄)(y_i - ȳ) / √( Σ(x_i - x̄)² · Σ(y_i - ȳ)² ), where the sums run over the n samples;
wherein x and y are the two variables, n is the number of samples, x̄ and ȳ are their respective means, and r is the influence value; the larger the value, the more obvious the relationship;
Discrimination formula:
χ² = Σ_i Σ_j (O_ij - E_ij)² / E_ij, where i runs over the m rows and j over the k columns;
wherein O_ij and E_ij are respectively the actual observed value and the expected value, m and k are the numbers of rows and columns, and χ² is the discrimination value; the larger the value, the larger the deviation.
As a further improvement of the present technical solution, the step of correcting the data according to the evaluation result in S4 is as follows:
s4.1, carrying out combination evaluation according to the data screened in the S3.2 to obtain a user portrait;
s4.2, introducing a deviation control algorithm to the user portrait acquired in the S4.1 to correct the user portrait, and updating the user portrait.
As a further improvement of the technical scheme, the expression with which the deviation control algorithm introduced in S4.2 corrects the user portrait is defined by the following quantities:
wherein y is the original predicted value, namely the deviation value in the user portrait;
t is the true value, namely the collected user data; w is the weight of a deviation factor and reflects the influence of different deviation factors;
n is the data volume; σ² is the variance of the deviation noise and needs to be estimated by a statistical method;
and ŷ is the corrected predicted value, namely the updated user portrait.
As a further improvement of the technical scheme, the step of simulating and training the user portrait in S5 is as follows:
s5.1, predicting the user portrait updated in the step S4.2 to obtain the behavior mode and interest preference data of the user;
s5.2, fusing the data acquired in the S5.1 to generate a comprehensive three-dimensional user personality portrait.
The second object of the invention is to provide an Internet user portrait generation system based on the deep learning technology, which applies the above Internet user portrait generation method based on the deep learning technology and comprises a behavior acquisition unit, an industry acquisition unit, a data processing unit, a data correction unit and a portrait generation unit;
the behavior acquisition unit is used for acquiring behavior data of a user and sending a questionnaire to the user;
the industry acquisition unit is used for establishing an industry data acquisition function and acquiring other data characteristics of a user according to the industry data acquisition function;
the data processing unit is used for preprocessing the collected behavior characteristics and other collected data;
the data correction unit is used for carrying out combined evaluation on the processed data and correcting the data according to an evaluation result;
the portrait generation unit is used for predicting corrected data and generating a personalized portrait of a user.
Compared with the prior art, the invention has the following beneficial effects: features are extracted by a deep learning model, which greatly reduces the influence of subjective factors on the portrait generation result and improves the accuracy and stability of portrait generation. Multi-model fusion effectively handles the diversity and complexity of the data, improving the accuracy and robustness of portrait generation. Feature selection screens out features with significant influence and discrimination, improving the accuracy and efficiency of user portrait generation. Predicting accurate user portraits provides an important basis for enterprise marketing and advertising, giving the method high commercial value and broad application prospects. By introducing a deviation control technique, the generated user portrait is prevented from being overly skewed toward certain users, so that user characteristics are described more objectively and comprehensively.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a block flow diagram of the present invention for sending a questionnaire to a user;
FIG. 3 is a block flow diagram of the present invention for collecting other data features of a user;
FIG. 4 is a block flow diagram of preprocessing other data in accordance with the present invention;
FIG. 5 is a block flow diagram of the present invention for correcting data based on evaluation results;
FIG. 6 is a block flow diagram of the present invention for generating a user personalized portrait;
fig. 7 is a block flow diagram of a behavior acquisition unit of the present invention.
The meaning of each reference sign in the figure is:
10. a behavior acquisition unit; 20. an industry acquisition unit; 30. a data processing unit; 40. a data correction unit; 50. a portrait generation unit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples: as shown in fig. 1 to 7, one of the purposes of the present invention is to provide an internet user portrait generating method based on deep learning technology, comprising the following steps:
s1, collecting behavior data of a user, and sending a questionnaire to the user;
the step of sending the questionnaire to the user by the S1 is as follows:
s1.1, detecting the behavior of a user on the Internet, and collecting behavior data of the user at the same time; obtaining user behavior data from multiple channels (e.g., websites, mobile applications, social media) of an internet user, including user browsing records, search records, social media activity records; the method comprises the following steps:
Deploying tracking code:
corresponding tracking code, such as the Google Analytics tracking code or the Facebook Pixel code, is deployed for websites, applications, RSS feeds and social media;
Observing user behavior:
all data related to user behavior are collected, such as page visits, search records, link clicks, purchase behavior and form submissions;
Performing data analysis:
the collected data are classified, sorted, calculated and analyzed to understand the user's behavior trends and characteristics and to provide a basis for subsequent optimization choices;
Information disclosure:
the user is informed of what behavior data are collected, the purpose and manner of their use, and the information security risks that may be involved.
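The behavior-collection workflow above can be illustrated with a small sketch. The snippet below is a minimal, hypothetical Python example of gathering raw tracking events and grouping them per user before analysis; the BehaviorEvent fields and the grouping helper are illustrative assumptions and are not prescribed by the patent.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class BehaviorEvent:
    """One raw tracking event, e.g. emitted by an analytics tag on a page."""
    user_id: str
    event_type: str   # "page_view", "search", "click", "purchase", "form_submit"
    target: str       # visited URL, search query, clicked link, product id, ...
    timestamp: float  # Unix time of the event


def group_events_by_user(events: List[BehaviorEvent]) -> Dict[str, List[BehaviorEvent]]:
    """Bucket raw events per user so later steps can build per-user features."""
    buckets: Dict[str, List[BehaviorEvent]] = defaultdict(list)
    for event in events:
        buckets[event.user_id].append(event)
    return dict(buckets)


if __name__ == "__main__":
    sample = [
        BehaviorEvent("u1", "page_view", "/laptops", 1717900000.0),
        BehaviorEvent("u1", "search", "gaming laptop", 1717900060.0),
        BehaviorEvent("u2", "purchase", "sku-42", 1717900120.0),
    ]
    per_user = group_events_by_user(sample)
    print({uid: len(evs) for uid, evs in per_user.items()})  # {'u1': 2, 'u2': 1}
```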
S1.2, sending a behavior questionnaire to the user, and extracting information from the fed-back questionnaire. The method comprises the following steps:
Designing a questionnaire: according to the information to be obtained and the target audience, questionnaire questions and options are designed using an online questionnaire tool such as Google Forms;
Sending the questionnaire: the designed questionnaire is sent to the target audience by e-mail, social media or chat applications, and the respondents can be informed in advance of the expected answering time and the purpose of the questionnaire;
Collecting data: the answers of all respondents are collected, and the questionnaire data are cleaned and merged;
for example: "How much time do you spend on your mobile phone every day?"
"How many hours per week do you use a computer on average?"
"What books or articles have you read recently?"
"How often do you use a particular social media platform?"
Data analysis: carrying out investigation data analysis on the questionnaire results by using a statistical analysis method to obtain related data reports and conclusions;
report summary: integrating and summarizing analysis results to form a questionnaire feedback report, and providing basis for decision making.
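A questionnaire-processing step of this kind can be sketched as follows. The question keys and numeric coding below are hypothetical; the patent only requires that answers be collected, cleaned and statistically summarized.

```python
import statistics
from typing import Dict, List

# Hypothetical cleaned answers, one dict per respondent, coded as numbers.
responses: List[Dict[str, float]] = [
    {"phone_hours_per_day": 3.0, "pc_hours_per_week": 20.0, "social_media_uses_per_day": 5.0},
    {"phone_hours_per_day": 5.5, "pc_hours_per_week": 8.0, "social_media_uses_per_day": 12.0},
    {"phone_hours_per_day": 2.0, "pc_hours_per_week": 35.0, "social_media_uses_per_day": 1.0},
]


def summarize(rows: List[Dict[str, float]]) -> Dict[str, Dict[str, float]]:
    """Aggregate each question into mean and standard deviation for the feedback report."""
    report: Dict[str, Dict[str, float]] = {}
    for key in rows[0]:
        values = [row[key] for row in rows]
        report[key] = {
            "mean": round(statistics.mean(values), 2),
            "stdev": round(statistics.stdev(values), 2) if len(values) > 1 else 0.0,
        }
    return report


print(summarize(responses))
```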
S2, establishing an industry data acquisition function, and acquiring other data characteristics of a user according to the industry data acquisition function;
s2, collecting data characteristics of the user industry:
s2.1, establishing an industry data acquisition function to acquire the industry data of a user;
Determining the data acquisition targets: the types of industry data to be collected are made clear, such as market size, industry growth rate and the competitive landscape;
Making an acquisition scheme: a data acquisition scheme is formulated, including the acquisition method, data sources, and data verification and screening methods;
Developing the data acquisition function: a programming language and tools are used to develop the industry data acquisition function, compatible with common data formats and protocols such as CSV, XML and JSON;
Testing and verifying: the acquired data are compared with the original data to verify the accuracy and integrity of the acquisition function;
Going online: the industry data acquisition function is deployed into an online environment to ensure sustainable support for data acquisition and updating.
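As a rough illustration of such an acquisition function, the sketch below loads industry records from a CSV or JSON file and keeps only validated rows. The file layout, field names and validation rule are assumptions; the patent only asks for compatibility with common formats such as CSV, XML and JSON.

```python
import csv
import json
from pathlib import Path
from typing import Any, Dict, List


def acquire_industry_data(path: str,
                          required_fields=("market_size", "growth_rate")) -> List[Dict[str, Any]]:
    """Load industry records from a CSV or JSON file and drop rows missing required fields."""
    source = Path(path)
    if source.suffix == ".json":
        records = json.loads(source.read_text(encoding="utf-8"))
    elif source.suffix == ".csv":
        with source.open(newline="", encoding="utf-8") as handle:
            records = list(csv.DictReader(handle))
    else:
        raise ValueError(f"unsupported format: {source.suffix}")
    # Simple verification step: keep only rows where every required field is present.
    return [row for row in records if all(row.get(f) not in (None, "") for f in required_fields)]


# Usage (assuming an 'industry.csv' with market_size and growth_rate columns exists):
# rows = acquire_industry_data("industry.csv")
```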
S2.2, analyzing according to the industry data of the user acquired in the S2.1, and acquiring the frequency required by the user for the industry objects. And (3) data processing: cleaning the acquired industry data of the user, and screening out data related to the use frequency of the article;
data analysis: analyzing and calculating related industry data by using a statistical analysis method to obtain a data result aiming at the frequency required by the industry articles;
conclusion summary: and according to the analysis result, summarizing the conclusion of the user on the frequency required by the articles in the industry.
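In the simplest case the frequency analysis in S2.2 reduces to counting how often each item appears in the user's cleaned industry records, as in this small sketch (the record layout is an assumption):

```python
from collections import Counter
from typing import Dict, List


def item_frequency(records: List[Dict[str, str]]) -> Dict[str, int]:
    """Count how often each industry item appears in one user's cleaned records."""
    return dict(Counter(record["item"] for record in records))


records = [{"item": "laptop"}, {"item": "laptop"}, {"item": "monitor"}]
print(item_frequency(records))  # {'laptop': 2, 'monitor': 1}
```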
S3, preprocessing based on the behavior characteristics acquired by the S1 and other data acquired by the S2;
the step of preprocessing the behavior characteristics and the data in the S3 is as follows:
s3.1, cleaning the behavior data acquired by the S1.1, the information extracted by the S1.2 and the user industry data acquired by the S2.2;
and S3.2, screening the data after the S3.1 is cleaned by a characteristic selection algorithm to obtain characteristic data with obvious influence and distinguishing degree. The method comprises the following steps:
De-duplication formula: =IF(COUNTIF(C1:B1,A1)>1,"",1). Within the given range C1:B1, if the value of the current cell A1 has already appeared in this range (i.e. its count is greater than 1), the formula returns an empty value (the duplicate is removed); otherwise it returns 1 (the value of the current cell is not repeated). This formula can be used to find and delete duplicate rows;
Null-filling formula: =IF(ISBLANK(A1),"Unknown",A1). For the given cell A1, if the cell is blank (i.e. it contains no value or formula), it is filled with "Unknown"; otherwise the original value or formula is retained. This formula is often used in data-entry processing to replace null values with a default value, improving the completeness and readability of the data.
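The same cleaning can also be done outside a spreadsheet. The sketch below reproduces the de-duplication and null-filling steps with pandas; the column names are illustrative assumptions.

```python
import pandas as pd

# Hypothetical raw table combining behavior, questionnaire and industry data.
raw = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3"],
    "industry": ["retail", "retail", None, "finance"],
})

cleaned = (
    raw.drop_duplicates()                 # same effect as the COUNTIF-based de-duplication
       .fillna({"industry": "Unknown"})   # same effect as the ISBLANK-based null filling
       .reset_index(drop=True)
)
print(cleaned)
```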
In S3.2, the characteristic data with obvious influence and discrimination are screened by the feature selection algorithm using the following expressions:
Significant influence formula:
r = Σ(x_i - x̄)(y_i - ȳ) / √( Σ(x_i - x̄)² · Σ(y_i - ȳ)² ), where the sums run over the n samples;
wherein x and y are the two variables, n is the number of samples, x̄ and ȳ are their respective means, and r is the influence value; the larger the value, the more obvious the relationship;
Discrimination formula:
χ² = Σ_i Σ_j (O_ij - E_ij)² / E_ij, where i runs over the m rows and j over the k columns;
wherein O_ij and E_ij are respectively the actual observed value and the expected value, m and k are the numbers of rows and columns, and χ² is the discrimination value; the larger the value, the larger the deviation.
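Assuming the two statistics above are the Pearson correlation coefficient and the chi-square statistic (which is what the surrounding definitions describe), they can be computed directly, as in this sketch; the helper names and sample arrays are made up for illustration.

```python
import math
from typing import List, Sequence


def influence(x: Sequence[float], y: Sequence[float]) -> float:
    """Influence value: Pearson correlation between a feature and the target."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    numerator = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    denominator = math.sqrt(sum((xi - mean_x) ** 2 for xi in x) *
                            sum((yi - mean_y) ** 2 for yi in y))
    return numerator / denominator if denominator else 0.0


def discrimination(observed: List[List[float]]) -> float:
    """Discrimination value: chi-square statistic of an observed contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (o - expected) ** 2 / expected
    return stat


print(round(influence([1, 2, 3, 4], [2, 4, 5, 9]), 3))    # ~0.965
print(round(discrimination([[10, 20], [30, 5]]), 3))      # ~18.7
```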
S4, carrying out combined evaluation based on the data processed in the S3, and correcting the data according to an evaluation result;
the step of correcting the data according to the evaluation result is as follows:
s4.1, carrying out combination evaluation according to the data screened in the S3.2 to obtain a user portrait; the method comprises the following steps:
data analysis: combining different features to perform data analysis, such as clustering and regression analysis, to find potential user groups and features;
User portrait creation: a user portrait model is established according to the data analysis results to describe the user's characteristics and needs.
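The combination evaluation can be illustrated with a clustering sketch: screened feature vectors are grouped into candidate user segments, which then seed the portrait model. The feature columns and the number of clusters are assumptions for illustration, and scikit-learn is used here although the patent does not name a specific library.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user feature matrix: rows are users, columns are screened
# features such as browsing frequency, purchase count and a questionnaire score.
features = np.array([
    [12.0, 3.0, 0.8],
    [11.0, 2.0, 0.7],
    [2.0, 15.0, 0.1],
    [1.0, 14.0, 0.2],
])

# Group users into candidate segments; two clusters is an illustrative choice.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
for user_index, segment in enumerate(model.labels_):
    print(f"user {user_index} -> segment {segment}")
```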
S4.2, introducing a deviation control algorithm to the user portrait acquired in the S4.1 to correct the user portrait, and updating the user portrait;
The expression with which the deviation control algorithm introduced in S4.2 corrects the user portrait is defined by the following quantities:
wherein y is the original predicted value, namely the deviation value in the user portrait;
t is the true value, namely the collected user data; w is the weight of a deviation factor and reflects the influence of different deviation factors;
n is the data volume; σ² is the variance of the deviation noise and needs to be estimated by a statistical method;
and ŷ is the corrected predicted value, namely the updated user portrait.
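The exact correction expression is not reproduced in the retrievable text, so the sketch below shows only one plausible reading of the quantities listed above (prediction y, true values t, deviation-factor weights w, data volume n, noise variance σ²): a weighted shrinkage of the prediction toward the observed data, damped by the estimated noise variance. It is an assumption, not the patented formula.

```python
from typing import Sequence


def correct_prediction(y_pred: float,
                       y_true: Sequence[float],
                       weights: Sequence[float],
                       noise_var: float) -> float:
    """Assumed correction: y_hat = y_pred + (sum_i w_i * (t_i - y_pred) / n) / (1 + noise_var).

    This is only one plausible interpretation of the quantities named in the text,
    not the formula claimed by the patent.
    """
    n = len(y_true)
    weighted_residual = sum(w * (t - y_pred) for w, t in zip(weights, y_true)) / n
    return y_pred + weighted_residual / (1.0 + noise_var)


print(correct_prediction(0.9, [0.5, 0.6, 0.55], [1.0, 1.0, 1.0], noise_var=0.2))  # ~0.608
```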
S5, predicting the corrected data of the S4 to generate a user personalized image;
the step S5 of simulating and training the user portrait is as follows:
S5.1, predicting the user portrait updated in step S4.2 to obtain the behavior pattern and interest preference data of the user; the trained deep learning model is used to predict and analyze new user behavior data to obtain the user's behavior patterns and interest preferences, and the basic formula of the deep learning model is:
y = f(Wx + b)
wherein y is the output, W is the weight, x is the input, b is the bias and f is the activation function;
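The prediction formula y = f(Wx + b) can be applied layer by layer, as in this small NumPy sketch; the layer sizes, ReLU activation and random weights are illustrative assumptions (a trained model would supply its own weights).

```python
import numpy as np


def relu(z: np.ndarray) -> np.ndarray:
    """Activation function f."""
    return np.maximum(0.0, z)


def forward(x: np.ndarray, W1: np.ndarray, b1: np.ndarray,
            W2: np.ndarray, b2: np.ndarray) -> np.ndarray:
    """Two stacked applications of y = f(Wx + b)."""
    hidden = relu(W1 @ x + b1)
    return relu(W2 @ hidden + b2)


rng = np.random.default_rng(0)
x = rng.random(4)                       # screened, corrected feature vector for one user
W1, b1 = rng.random((8, 4)), rng.random(8)
W2, b2 = rng.random((3, 8)), rng.random(3)
scores = forward(x, W1, b1, W2, b2)     # e.g. scores for three interest categories
print(scores)
```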
s5.2, fusing the data acquired in the S5.1 to generate a comprehensive three-dimensional user personality portrait. And generating personalized portraits of the user according to the prediction result, wherein the personalized portraits comprise interest preferences, purchasing trends and social circle information of the user. These representations can be used in personalized recommendation, precision marketing applications, as follows:
model prediction: predicting and analyzing different features using a plurality of deep learning models, such as convolutional neural networks, recurrent neural networks;
fusion of results: fusing the prediction results of the multiple models, and optimizing and adjusting according to weights of different models;
image creation: and combining different characteristics and the fused prediction results to generate a comprehensive three-dimensional personalized portrait of the user, and describing habits, interests, demands and consumption behaviors of the user.
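The result-fusion step above can be sketched as a weighted average of per-model outputs; the two probability vectors and the weights are made up for illustration, and the weighting scheme itself is an assumption.

```python
import numpy as np
from typing import List


def fuse_predictions(predictions: List[np.ndarray], weights: List[float]) -> np.ndarray:
    """Weighted average of per-model probability vectors; weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(predictions)        # shape: (n_models, n_labels)
    return (w[:, None] * stacked).sum(axis=0)


# Hypothetical label probabilities from two models (e.g. a CNN and an RNN).
cnn_probs = np.array([0.7, 0.2, 0.1])
rnn_probs = np.array([0.5, 0.4, 0.1])
fused = fuse_predictions([cnn_probs, rnn_probs], weights=[0.6, 0.4])
print(fused)   # [0.62 0.28 0.1 ]
```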
The second object of the present invention is to provide an Internet user portrait generation system based on the deep learning technique, which applies any one of the above Internet user portrait generation methods based on the deep learning technique and includes a behavior acquisition unit 10, an industry acquisition unit 20, a data processing unit 30, a data correction unit 40, and a portrait generation unit 50;
the behavior acquisition unit 10 is used for acquiring behavior data of a user and sending a questionnaire to the user;
the industry acquisition unit 20 is used for establishing an industry data acquisition function and acquiring other data characteristics of a user according to the industry data acquisition function;
the data processing unit 30 is used for preprocessing the collected behavior characteristics and other collected data;
the data correction unit 40 is used for performing combined evaluation on the processed data and correcting the data according to the evaluation result;
the portrait generation unit 50 is used for predicting the corrected data and generating the user's personalized portrait.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. An Internet user portrait generation method based on a deep learning technology is characterized in that: the method comprises the following steps:
s1, collecting behavior data of a user, and sending a questionnaire to the user;
s2, establishing an industry data acquisition function, and acquiring other data characteristics of a user according to the industry data acquisition function;
s3, preprocessing based on the behavior characteristics acquired by the S1 and other data acquired by the S2;
s4, carrying out combined evaluation based on the data processed in the S3, and correcting the data according to an evaluation result;
s5, predicting the corrected data in the S4, and generating a user personalized image.
2. The internet user portrayal generating method based on the deep learning technique according to claim 1, characterized in that: the step of sending the questionnaire to the user by the S1 is as follows:
s1.1, detecting the behavior of a user on the Internet, and collecting behavior data of the user at the same time;
s1.2, sending a behavior questionnaire to the user, and extracting information from the fed-back questionnaire.
3. The internet user portrayal generating method based on the deep learning technique according to claim 2, characterized in that: s2, collecting data characteristics of the user industry:
s2.1, establishing an industry data acquisition function to acquire the industry data of a user;
s2.2, analyzing according to the industry data of the user acquired in the S2.1, and acquiring the frequency required by the user for the industry objects.
4. The internet user portrayal generating method based on the deep learning technique according to claim 3, characterized in that: the step of preprocessing the behavior characteristics and the data in the S3 is as follows:
s3.1, cleaning the behavior data acquired by the S1.1, the information extracted by the S1.2 and the user industry data acquired by the S2.2;
and S3.2, screening the data after the S3.1 is cleaned by a characteristic selection algorithm to obtain characteristic data with obvious influence and distinguishing degree.
5. The method for generating Internet user portraits based on deep learning technology of claim 4, wherein: in S3.2, the characteristic data with obvious influence and discrimination are screened by the feature selection algorithm using the following expressions:
Significant influence formula:
r = Σ(x_i - x̄)(y_i - ȳ) / √( Σ(x_i - x̄)² · Σ(y_i - ȳ)² ), where the sums run over the n samples;
wherein x and y are the two variables, n is the number of samples, x̄ and ȳ are their respective means, and r is the influence value; the larger the value, the more obvious the relationship;
Discrimination formula:
χ² = Σ_i Σ_j (O_ij - E_ij)² / E_ij, where i runs over the m rows and j over the k columns;
wherein O_ij and E_ij are respectively the actual observed value and the expected value, m and k are the numbers of rows and columns, and χ² is the discrimination value; the larger the value, the larger the deviation.
6. The method for generating internet user portraits based on deep learning technology of claim 4, wherein: the step of correcting the data according to the evaluation result is as follows:
s4.1, carrying out combination evaluation according to the data screened in the S3.2 to obtain a user portrait;
s4.2, introducing a deviation control algorithm to the user portrait acquired in the S4.1 to correct the user portrait, and updating the user portrait.
7. The method for generating Internet user portraits based on deep learning technology of claim 6, wherein: the expression with which the deviation control algorithm introduced in S4.2 corrects the user portrait is defined by the following quantities:
wherein y is the original predicted value, namely the deviation value in the user portrait;
t is the true value, namely the collected user data; w is the weight of a deviation factor and reflects the influence of different deviation factors;
n is the data volume; σ² is the variance of the deviation noise and needs to be estimated by a statistical method;
and ŷ is the corrected predicted value, namely the updated user portrait.
8. The internet user portrayal generating method based on the deep learning technique according to claim 7, characterized in that: the step S5 of simulating and training the user portrait is as follows:
s5.1, predicting the user portrait updated in the step S4.2 to obtain the behavior mode and interest preference data of the user;
s5.2, fusing the data acquired in the S5.1 to generate a comprehensive three-dimensional user personality portrait.
9. An internet user portrayal generating system based on the deep learning technology, comprising the internet user portrayal generating method based on the deep learning technology as defined in any one of claims 1-8, characterized in that: comprises a behavior acquisition unit (10), an industry acquisition unit (20), a data processing unit (30), a data correction unit (40) and an image generation unit (50);
the behavior acquisition unit (10) is used for acquiring behavior data of a user and sending a questionnaire to the user;
the industry acquisition unit (20) is used for establishing an industry data acquisition function and acquiring other data characteristics of a user according to the industry data acquisition function;
the data processing unit (30) is used for preprocessing the collected behavior characteristics and other collected data;
the data correction unit (40) is used for carrying out combined evaluation on the processed data and correcting the data according to an evaluation result;
the image generation unit (50) is used for predicting corrected data and generating a user personalized image.
CN202310716358.3A 2023-06-16 2023-06-16 Internet user portrait generation method and system based on deep learning technology Pending CN116450952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310716358.3A CN116450952A (en) 2023-06-16 2023-06-16 Internet user portrait generation method and system based on deep learning technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310716358.3A CN116450952A (en) 2023-06-16 2023-06-16 Internet user portrait generation method and system based on deep learning technology

Publications (1)

Publication Number Publication Date
CN116450952A true CN116450952A (en) 2023-07-18

Family

ID=87128877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310716358.3A Pending CN116450952A (en) 2023-06-16 2023-06-16 Internet user portrait generation method and system based on deep learning technology

Country Status (1)

Country Link
CN (1) CN116450952A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651424A (en) * 2016-09-28 2017-05-10 国网山东省电力公司电力科学研究院 Electric power user figure establishment and analysis method based on big data technology
CN109903097A (en) * 2019-03-05 2019-06-18 云南电网有限责任公司信息中心 A kind of user draws a portrait construction method and user draws a portrait construction device
CN110782289A (en) * 2019-10-28 2020-02-11 方文珠 Service recommendation method and system based on user portrait
CN112052270A (en) * 2020-08-26 2020-12-08 南京越扬科技有限公司 Method and system for carrying out user portrait depth analysis through big data



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination