CN112131475A - Interpretable and interactive user portrait method and device - Google Patents

Interpretable and interactive user portrait method and device Download PDF

Info

Publication number
CN112131475A
Authority
CN
China
Prior art keywords
user
label
feedback
portrait
user portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011024688.9A
Other languages
Chinese (zh)
Other versions
CN112131475B (en)
Inventor
郑驰
蔡苗
夏燕
张金凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202011024688.9A priority Critical patent/CN112131475B/en
Publication of CN112131475A publication Critical patent/CN112131475A/en
Application granted granted Critical
Publication of CN112131475B publication Critical patent/CN112131475B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/216Parsing using statistical methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to an interpretable and interactive user portrait method and device, and belongs to the technical field of computers. The method first constructs user portrait labels in an interpretable manner, then stores the label data in Hive, uses ECharts to make the user portrait visible to the user and open to feedback, feeds the user's opinions back into the system for optimization according to the user's adjustments to the portrait, and finally checks the performance of the user portrait with anti-discrimination and accuracy tests. By constructing the labels in an interpretable way, the method improves the understandability of the user portrait; by supporting user adjustment of the portrait, it protects the user's right to know, autonomy, and privacy, and helps prevent problems such as big-data-enabled price discrimination and regional discrimination.

Description

Interpretable and interactive user portrait method and device
Technical Field
The invention belongs to the technical field of computers, and relates to an interpretable and interactive user portrait method and device.
Background
For user attribute judgment, behavior prediction, and risk assessment, data must be collected and analyzed in combination so as to obtain new knowledge, optimize processes, and improve decision-making. User portrait technology can profile user characteristics, mine potentially valuable information, improve decision-making, and enable precise services and modern governance. Building a user portrait generally requires a user portrait label system. A prior patent specification discloses a privacy-preserving user portrait generation method, which protects user privacy by processing the count value of each rectangular cell of label data when clustering the user label data set.
However, even if privacy is protected by such a method, user portrait applications can still impair the user's information autonomy, right to know, and right to equal treatment. The user portrait is generally not disclosed to the user: the user directly receives personalized pushes and risk-assessment results based on the portrait, without active participation, interaction, or supervision in its construction. Unaware of the portrait, the user can only passively accept push and assessment results, which undermines the user's autonomy in selecting information, and the accuracy of the portrait is hard to verify directly. An inaccurate user portrait can lead to poor user experience and reduced user stickiness, and unfair portrait rules can even cause social problems such as regional and ethnic discrimination.
Disclosure of Invention
In view of the foregoing, an object of the present invention is to provide an interpretable and interactive user portrait method and device.
In order to achieve the purpose, the invention provides the following technical scheme:
an interpretable and interactive user portrait method, the method comprising:
s10: constructing a user portrait label in an interpretable manner;
s20: using Hive to store user portrait label related data;
s30: using ECharts to make the user portrait visible to the user and open to feedback;
s40: feeding back user opinions to a system for optimization according to the adjustment of the user on the user image;
s50: verifying the performance of the user portrait through anti-discrimination and accuracy testing.
Optionally, the S10 specifically includes:
determining the label types used, including statistical class labels, rule class labels and mining class labels;
when constructing the user portrait, natural language interpretation is carried out on the label of the user portrait, including the interpretation of the label category, the label data source and the label inference rule;
and determining, according to interpretation difficulty, the label proportions as 50% statistical class labels, 30% rule class labels and 20% mining class labels.
Optionally, among the label types determined to be used, the mining class labels use the latent factor model (LFM) and TF-IDF to mine the data, and a Spark task is submitted for the calculation.
Optionally, the S20 specifically includes:
establishing a Hive user label table, and determining the name, content and explanation columns of the label;
and inserting the calculated user label vector values into the content column of the Hive user label table, and publishing the natural-language explanation of each label in the explanation column of the table.
Optionally, the S30 specifically includes:
introducing the ECharts file and specifying a radar chart;
the indicators of the radar chart are the user labels stored in the Hive data warehouse, and the data of the radar chart are each user's scores on the corresponding labels;
setting an expansion field on the axes of the radar chart, the keyword of which is 'explanation' and the content of which is the corresponding explanation, drawn from the explanation column of the Hive user label table;
setting a click event on the radar chart indicators, after which a message form appears in which the user can enter objections and other feedback on the user portrait;
laying out the feedback interface with a linear layout and specifying that the feedback box fits the screen.
Optionally, the S40 specifically includes:
performing text word segmentation on the user's feedback opinions with a Python tool;
extracting subject words from the feedback opinions with the TF-IDF algorithm to obtain the user's feedback data file;
adjusting the user's label vector values based on the feedback data and updating the user portrait;
making recommendations based on the updated user portrait.
Optionally, the S50 specifically includes:
testing the anti-discrimination, accuracy and feedback mechanism of the user portrait, wherein equal-sized samples of a normal group and a vulnerable group are drawn, the average differences between the two groups on sensitive labels such as a violence index, a credit index and a crime-likelihood index are compared, and the anti-discrimination of the user portrait toward the vulnerable group is thereby tested;
the accuracy of the statistical class labels and the rule class labels is tested using cross validation, and the accuracy of the mining class labels is tested using sampling validation;
classifying the feedback contents of users, drawing for each category a sample of feedback users and dividing it into groups A and B, where group A is the users' activity before feedback and group B their activity after feedback; comparing groups A and B tests the operation of the feedback mechanism.
An interpretable and interactive user portrait apparatus, the apparatus comprising:
the user portrait interpretation module, used for performing natural-language interpretation of the user portrait labels when the portrait is constructed, and comprising an interpretation content unit and a proportion unit; the interpretation content unit performs natural-language interpretation of each label, covering the label category, the label data source and the label inference rule, and determines the label categories used, namely statistical class labels, rule class labels and mining class labels; the proportion unit fixes the label proportions according to interpretation difficulty as 50% statistical class labels, 30% rule class labels and 20% mining class labels;
the user portrait storage module is used for establishing a Hive user tag table and storing the name, content and explanation of the tag;
the user-oriented visualization and feedback module is used for establishing an ECharts radar map, making a user portrait and explanation thereof visible to a user, and supporting the user to input objection and other feedback to the user portrait;
the user portrait optimization module is used for adjusting the label vector value of the user by using a python tool and a TF-IDF algorithm according to the feedback of the user to the user portrait and updating the user portrait;
the verification module, used for testing the anti-discrimination, accuracy and feedback mechanism of the user portrait, and comprising an anti-discrimination verification unit, an accuracy verification unit and a feedback-mechanism verification unit; the anti-discrimination verification unit tests anti-discrimination by the average difference on sensitive labels between the normal group and the vulnerable group; the accuracy verification unit tests the accuracy of statistical class labels and rule class labels by cross validation and the accuracy of mining class labels by sampling validation; and the feedback-mechanism verification unit tests the effect of the feedback mechanism by A/B group comparison.
An electronic device having stored therein computer program instructions which, when read and executed by a processor, perform the steps of the method of any of claims 1-7.
The invention has the beneficial effects that:
the interpretable and interactive user portrait method provided by the invention can ensure that the user portrait and the explanation thereof are visible to the user by constructing the user portrait label system with strong interpretability, thereby protecting the right of knowledge of the user and facilitating the user to understand the reason for making the decision. Meanwhile, by supporting user feedback and adopting feedback-based optimization, the user can adjust the portrait result of the user, so that the information autonomy of the user is protected, the user is not trapped in an information cocoon room, and the user can supervise problems such as discrimination possibly occurring in the portrait and the like, and risks are avoided in advance; on the other hand, the effects of personalized pushing and accurate service can be improved through interaction, and the cost is reduced.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flowchart of an interpretable interactive user representation method according to a first embodiment of the invention;
FIG. 2 is a flowchart of constructing a user portrait label in an interpretable manner according to the first embodiment of the invention;
FIG. 3 is a detailed flowchart of user-oriented visualization and feedback of the user portrait using ECharts according to the first embodiment of the present invention;
FIG. 4 is a flowchart of the anti-discrimination test used in verifying user portrait performance through anti-discrimination and accuracy testing according to the present invention;
FIG. 5 is a block diagram of an interpretable interactive user-portrait apparatus according to a second embodiment of the invention;
fig. 6 is a block diagram of an electronic device according to a third embodiment of the present invention.
Reference numerals: 100-an interpretable, interactive user-rendering device; 110-a user representation interpretation module; 120-a user representation storage module; 130-user oriented visualization and feedback module; 140-a user representation optimization module; 150-a verification module; 200-an electronic device; 201-a CPU; 202-a memory; 203-an input device; 204-output means.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; and it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
First embodiment
The applicant has found that user portraits are used in many scenarios, such as personalized recommendation, credit scoring and crime risk assessment. Because a user portrait distills a large amount of user data into predictive and judgmental knowledge, embodied as the series of labels that make up the portrait, an organization can make more scientific and fine-grained decisions on its basis, which is why user portraits are widely applied. For example, if a user frequently browses automobile videos, the video website judges that the user's preference is automobiles, attaches a "likes watching automobiles" preference label, and recommends more automobile-related videos based on that preference portrait. However, existing user portraits are mainly for an organization's internal use and are not disclosed to the user; their construction lacks the user's active participation, interaction and supervision, so their accuracy is hard to verify directly. An inaccurate user portrait may cause poor user experience and reduced user stickiness, and an unfair portrait scheme may even infringe the user's right to equal treatment. To solve these problems, the first embodiment of the present invention provides an interpretable and interactive user portrait method and apparatus.
Referring to fig. 1, fig. 1 is a flowchart of the interpretable and interactive user portrait method according to the first embodiment of the invention. The method specifically comprises the following steps:
step S10: the user portrait label is constructed in an interpretable manner.
Step S20: hive is used to store relevant data such as user portrait labels.
Step S30: the use of ECharts allows user portrayal to be user-oriented and visualized and feedback-able.
Step S40: and feeding back user opinions to the system for optimization according to the selection and adjustment of the user to the user portrait.
Step S50: the performance of the user representation is verified by inverse discrimination and accuracy testing.
With respect to step S10, the user portrait label is constructed in an interpretable manner: when the portrait is built, each label is given a natural-language interpretation, and the proportions of the label types are determined by how difficult they are to interpret. Referring to fig. 2, fig. 2 is a flowchart illustrating the steps of constructing a user portrait label in an interpretable manner according to the first embodiment of the present invention. Specifically, constructing a user portrait label in an interpretable manner (S10) may include the following sub-steps:
S11: determine the label types used, including statistical class labels, rule class labels and mining class labels. A statistical class label is obtained by quantitative calculation over the data and objectively describes the user; a rule class label is designed from platform business requirements with artificially set rules and may carry discrimination; a mining class label is obtained automatically by data mining and machine learning and carries the black-box problem.
S12: interpret the user portrait label, covering the label category, the label data source and the label inference rule. For example, a "shopped 3 times in the last week" label is interpreted as a statistical class label. The label data come from the user's shopping behavior: records whose payment date lies within the last seven days up to yesterday are extracted from the user log and counted. The inference rule is that a statistical class label objectively describes the user; here the count shows the user made 3 purchases in the last week. The interpretation of a rule class label likewise covers the label category, data source and inference rule. For example, a user whose cumulative consumption in the last week reaches 6100 yuan is given a "high-consumption user" label, interpreted as a rule class label, i.e. a label designed from platform business requirements with artificially set rules. The "high-consumption user" label means the user's spending power is high. The label data come from the user's shopping behavior. The inference rule is that cumulative consumption records whose payment date lies within the last seven days up to yesterday are extracted from the user consumption order table, and users whose cumulative consumption in the last week exceeds 5000 yuan are given the "high-consumption user" label. The reason for the rule is that, following user consumption trends, the business ranks all users' weekly average consumption from high to low, averages the amounts of the top five users to obtain a weekly consumption standard of 5000 yuan, and accordingly sets the rule that a cumulative consumption of more than 5000 yuan in the last week marks a high-consumption user.
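The "high-consumption user" rule above can be sketched in Python. The 5000-yuan threshold and the seven-day window come from the example; the order-record layout and the function name are illustrative assumptions, not part of the patent.

```python
from datetime import date, timedelta

# Illustrative sketch of the "high-consumption user" rule class label.
# The threshold and 7-day window come from the example in the text.
HIGH_CONSUMPTION_THRESHOLD = 5000  # yuan, derived from top-five weekly averages

def tag_high_consumption(orders, today=None):
    """Return the user ids whose cumulative spend over the last seven full
    days (payment date >= today - 7 days and <= yesterday) exceeds the
    threshold."""
    today = today or date.today()
    start, end = today - timedelta(days=7), today - timedelta(days=1)
    totals = {}
    for o in orders:
        if start <= o["pay_date"] <= end:
            totals[o["user_id"]] = totals.get(o["user_id"], 0) + o["amount"]
    return {uid for uid, amt in totals.items() if amt > HIGH_CONSUMPTION_THRESHOLD}

orders = [
    {"user_id": "u1", "pay_date": date(2020, 9, 20), "amount": 6100},
    {"user_id": "u2", "pay_date": date(2020, 9, 21), "amount": 800},
]
print(tag_high_consumption(orders, today=date(2020, 9, 25)))  # {'u1'}
```

The rule itself stays a transparent one-line condition, which is what makes the label's natural-language explanation straightforward to write.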
S13: to ensure interpretability of the result, the proportions of the three label types are fixed at 50% statistical class labels, 30% rule class labels and 20% mining class labels. Because the interpretability of statistical, rule and mining class labels decreases in that order, their proportions decrease in the same order. Statistical class labels come from objective data statistics; rule class labels involve the designer's subjective choices; mining class labels involve machine learning. Rule class labels must be designed by developers according to business needs and retain some interpretability, but they carry a risk of latent developer bias: for example, a developer might take nationality as a factor in a violence-index label without being able to give a justifiable reason, so that the violence index of certain nationalities is inflated and those users are treated unequally. Mining class labels involve machine learning and therefore a "black box" risk: neither developers nor users can follow the system's operation or its results, deviations are hard to detect and correct, and the user's rights to know, object, correct and delete are hard to exercise. The decreasing proportions of 50%, 30% and 20% therefore guard against unfair and hard-to-interpret portrait results.
S14: the data mining technique used for the mining class labels is the latent factor model (LFM). The LFM mines the user's implicit interests. The formula used is:

preference(u, i) = Σ_f p_uf · q_if

where p_uf measures the relation between user u's interest or risk and the f-th hidden factor, and q_if measures the relation between item i and the f-th hidden factor. Both parameters are learned from a training set containing positive and negative samples, and the user portrait score with respect to i is finally obtained.
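As a minimal numeric sketch of this dot product (the factor values here are invented for illustration, not learned from data as in the method):

```python
# LFM preference score: dot product of the user's latent-factor vector p_u
# and the item's latent-factor vector q_i. In the method both vectors are
# learned from a training set of positive and negative samples.
def lfm_score(p_u, q_i):
    """preference(u, i) = sum over f of p_uf * q_if"""
    return sum(p * q for p, q in zip(p_u, q_i))

p_u = [0.9, 0.1, 0.3]   # user u's affinity to each hidden factor (invented)
q_i = [0.8, 0.2, 0.5]   # item i's loading on each hidden factor (invented)
print(round(lfm_score(p_u, q_i), 2))  # 0.89
```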
S15: the interpretation of a mining class label likewise covers the label category, the label data source and the label inference rule. For example, a viewing-preference "comedy" label, interpreted as a mining class label, indicates that the user prefers comedies. The label data come from the user's historical viewing data. The inference rule is that a mining class label is obtained automatically through data mining and machine learning; the LFM is used to mine the user's implicit interests, i.e. the hidden factors, with the formula:

preference(u, i) = Σ_f p_uf · q_if

where p_uf measures the relation between user u's interest and the f-th hidden factor, and q_if measures the relation between film i and the f-th hidden factor. Both parameters are learned from a training set containing positive and negative samples, finally yielding the user's degree of preference for each film. With the hidden-factor range set to 0-10, the user's preference for comedy scores 9, while other genres such as horror and action score 4 and 3, so the viewing-preference "comedy" label is assigned.
For step S20: hive is used to store relevant data such as user portrait labels. The Hive is a data warehouse tool based on Hadoop, is used for data extraction, transformation and loading, and can store, inquire and analyze large-scale data stored in Hadoop. The Hive data warehouse tool can map the structured data file into a database table, provide SQL query function and convert SQL sentences into MapReduce tasks for execution. As an alternative embodiment, the present invention uses Hive as a data repository for storing relevant data such as user portrait tags. Firstly, establishing a Hive user label table, determining the name, content meaning and an explanation column of a label, then extracting data from a user log table related to the label, submitting a Spark task for calculation, finally inserting a calculated vector value into the Hive data table, and releasing a natural language solution of the label in the explanation column in the user label table.
For step S30: the use of ECharts allows user portrayal to be user-oriented and visualized and feedback-able. The visualization can guarantee the right of knowledge of the user, the feedback can guarantee the dissimilarity right of the user, and through the interchangeability of the system, the stakeholders can know the right and feedback, find potential hazards to rights and interests possibly caused by the system, solve the problems before damage occurs, and reduce the cost of each party. In one embodiment, a user representation is presented to the user in the form of a radar map, corresponding user representation tags are displayed at the location of each vertex of the radar map, the tags may reveal content, and an interpretation page appears, i.e., a natural language interpretation of the user representation tags. The tag can be clicked, after the tag is clicked, a user can input objection and other feedback to the user portrait, and the user portrait is sent to the system after clicking is finished. Referring to fig. 3 as an alternative implementation, fig. 3 is a specific flowchart of a user representation visualized and feedbackable by using ECharts according to a first embodiment of the present invention. In particular, the user representation user-oriented visualization using EChats, feedbackable step S30 may include the sub-steps of:
S31: implement the user portrait as a radar chart with ECharts. The ECharts file is imported, a div container with a set width and height is prepared for it, and the chart's configuration items and data are specified, including the chart title and the position, size and orientation of the legend component and the radar coordinate-system component.
S32: the indicators of the radar chart are the user labels stored in the database, and the data of the radar chart are each user's scores on the corresponding labels.
S33: set a collapsible expansion field on the axes of the radar chart; the keyword of the field is "explanation" and its content is the corresponding explanation.
S34: set a click event on the radar chart indicators; after a click a message form appears, in which the user can enter objections and other feedback on the user portrait.
S35: the feedback interface is laid out with LinearLayout, and the feedback box is specified to adapt to the screen so that it is never covered by the soft keyboard. A user feedback table is created in the data warehouse with five fields, "user id", "user nickname", "feedback content", "feedback date" and "feedback time", whose column names are "user_id", "user_name", "user_content", "model_date" and "model_time"; user feedback is stored in this table.
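The radar-chart configuration that ECharts consumes in S31-S33 is a JSON option object; the sketch below builds it as a Python dict ready to serialize. The indicator names, scores, and the custom "explanation" field follow the description above but are illustrative assumptions, not the patent's actual data.

```python
import json

# The ECharts radar-chart "option" object, sketched as a Python dict. In the
# method it would be serialized to JSON and passed to echarts.setOption() in
# the page. "explanation" is the custom expansion field described in S33,
# carried on each indicator so a tooltip/click handler can display it.
option = {
    "title": {"text": "User portrait"},
    "radar": {
        "indicator": [
            # one entry per user label stored in the label table
            {"name": "comedy_preference", "max": 10,
             "explanation": "Mining class label derived via LFM"},
            {"name": "high_consumption", "max": 10,
             "explanation": "Rule class label: weekly spend over 5000 yuan"},
        ]
    },
    "series": [{
        "type": "radar",
        "data": [{"value": [9, 7], "name": "user u1"}],  # per-label scores
    }],
}
print(len(option["radar"]["indicator"]))  # 2
```

Serializing with `json.dumps(option)` yields the object the front end would hand to the chart; the extra `explanation` key rides along untouched for the click handler to read.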
For step S40: user opinions are fed back to the system for optimization according to the user's selections and adjustments to the user portrait. The user's feedback must first be converted into a data basis for adjusting the portrait; text word segmentation can be used, where segmentation is the process of recombining a continuous character sequence into a word sequence according to certain specifications. Using statistical probabilities built from a corpus, the optimal segmentation of a new sentence can be found by computing the joint probability of each candidate segmentation and taking the one with the maximum probability. In one implementation, the user's feedback is segmented with a python tool, and subject words are extracted from the feedback with the TF-IDF algorithm to obtain the user's feedback data file. Python is a cross-platform, high-level scripting language that combines interpreted, compiled, interactive, and object-oriented features. The TF-IDF algorithm measures the importance of a word by term frequency; statistically speaking, each word is assigned an importance weight on the basis of its frequency. After the feedback data file is obtained, the feedback text is represented by word vectors according to term frequency, the user's label vector values are adjusted accordingly, and the user portrait is updated.
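As a minimal sketch of the TF-IDF step, the snippet below extracts the top-weighted terms from one feedback document. It tokenizes on whitespace for brevity (the described system would first segment Chinese text with a python word-segmentation tool), and the corpus and scoring formula are illustrative, not the patent's exact implementation.

```python
import math
from collections import Counter

# Toy feedback corpus (illustrative English stand-ins for segmented feedback).
feedback_docs = [
    "label wrong I rarely buy luxury goods",
    "credit label unfair my payments are on time",
    "luxury label wrong please remove luxury",
]

def tfidf_top_terms(docs, doc_index, k=3):
    """Return the k highest-weighted TF-IDF terms of one document.

    TF is the raw term count in the document; IDF is log(N / df), where
    df is the number of documents containing the term.
    """
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    tf = Counter(tokenized[doc_index])
    scores = {t: c * math.log(n / df[t]) for t, c in tf.items()}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

top = tfidf_top_terms(feedback_docs, 2)
# "label" occurs in every document, so its IDF (and hence its weight) is
# zero and it is pushed out of the top terms, while document-specific
# words such as "luxury" surface as subject words.
```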
For step S50: the performance of the user portrait is verified through anti-discrimination and accuracy tests, i.e. the portrait's anti-discrimination, accuracy, and feedback mechanism are tested. For accuracy, cross-validation is used to test the accuracy of statistical-class and rule-class labels; the base rates and proportions should be checked before cross-validation. To further verify accuracy, reverse derivation is used: a user group sharing one or more labels is found via label-based category search and precise search, the group's subsequent purchasing behavior is analyzed, and the result is derived backwards. If the result is consistent with the rationale used to establish the label, that rationale is correct and the user portrait model is accurate. For mining-class labels, after the portrait system is built, sampling verification is used: a designated set of experiment users takes part in activities of specific categories, the conformity of their behavior with the labels is observed, the experimental results are analyzed, and the system-building process is further optimized. To test the feedback mechanism, the users' feedback content is classified, a sample of feedback users is drawn for each category, and each category is split into groups A and B, where group A is the users' activity before feedback and group B their activity after feedback; comparing the two, a change in activity between groups A and B demonstrates that the user portrait feedback mechanism is effective.
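A dependency-free sketch of the cross-validation accuracy test for statistical-class and rule-class labels follows; the fold count, the rule, and the sample data are illustrative assumptions, not the patent's own values.

```python
from statistics import mean

def k_fold_accuracy(samples, predict, k=5):
    """Estimate label accuracy with k-fold cross-validation.

    samples: list of (features, true_label) pairs.
    predict: function(train_samples, features) -> predicted label.
    Each fold is held out once; the remaining folds act as training data.
    """
    folds = [samples[i::k] for i in range(k)]
    accuracies = []
    for i, test_fold in enumerate(folds):
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        correct = sum(predict(train, x) == y for x, y in test_fold)
        accuracies.append(correct / len(test_fold))
    return mean(accuracies)

# Illustrative rule-class label: "high spender" if monthly spend > 1000
# (the threshold and the sample data are hypothetical).
data = [(1500, "high"), (200, "low"), (1800, "high"), (90, "low"),
        (1100, "high"), (300, "low"), (2500, "high"), (50, "low"),
        (1300, "high"), (400, "low")]
rule = lambda train, spend: "high" if spend > 1000 else "low"
accuracy = k_fold_accuracy(data, rule, k=5)
```

Checking base rates and class proportions before running the folds, as the text notes, guards against folds that contain only one class.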
As an alternative implementation, please refer to fig. 4, which is a specific flowchart of the anti-discrimination test of the user portrait according to the first embodiment of the present invention. Specifically, the step of testing the anti-discrimination of the user portrait may include the following sub-steps:
S51: extract equal-size samples of the normal group and the vulnerable group, and query their user portrait labels;
S52: compute the normal group's average values h1, h2, h3, etc. of sensitive indices such as the violence index, credit index, and crime likelihood;
S53: compute the vulnerable group's average values H1, H2, H3, etc. of the same sensitive indices;
S54: compare h and H; if the difference is too large, the user portrait system may be discriminatory.
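Sub-steps S51-S54 amount to comparing group means of sensitive indices. The sketch below implements that comparison; the index names, sample values, and tolerance threshold are illustrative assumptions.

```python
from statistics import mean

def discrimination_gap(normal_group, vulnerable_group, threshold=0.1):
    """Compare average sensitive-label indices of two equal-size samples.

    Each group is a list of per-user dicts sharing the same sensitive
    indices (violence index, credit index, crime likelihood, ...).
    Returns the per-index absolute gaps (|H - h| in S52-S54) and whether
    any gap exceeds the tolerance threshold.
    """
    indices = normal_group[0].keys()
    gaps = {
        k: abs(mean(u[k] for u in vulnerable_group) - mean(u[k] for u in normal_group))
        for k in indices
    }
    return gaps, any(g > threshold for g in gaps.values())

# Hypothetical equal-size samples (h1..h3 vs H1..H3 in S52/S53).
normal = [{"violence": 0.20, "credit": 0.80, "crime": 0.10},
          {"violence": 0.25, "credit": 0.78, "crime": 0.12}]
vulnerable = [{"violence": 0.50, "credit": 0.55, "crime": 0.30},
              {"violence": 0.45, "credit": 0.60, "crime": 0.28}]
gaps, possibly_discriminatory = discrimination_gap(normal, vulnerable)
```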
Second embodiment
To complement the interpretable and interactive user portrait method provided by the first embodiment of the invention, a second embodiment of the invention also provides an interpretable and interactive user portrait apparatus.
Referring to FIG. 5, FIG. 5 is a block diagram of an interpretable, interactive user portrait apparatus according to a second embodiment of the invention.
The interpretable, interactive user portrait apparatus 100 includes a user portrait interpretation module 110, a user portrait storage module 120, a user-oriented visualization and feedback module 130, a user portrait optimization module 140, and a verification module 150.
The user portrait interpretation module 110 is used to give natural-language interpretations of the user portrait labels when the portrait is constructed; each interpretation covers three items, namely the label category, the label data source, and the label inference rule. The module determines the label categories used, which comprise statistical-class, rule-class, and mining-class labels, and sets the label proportions according to interpretation difficulty: 50% statistical-class, 30% rule-class, and 20% mining-class labels.
And the user portrait storage module 120 is used for establishing a Hive user tag table and storing the name, content and explanation of the tag.
The user-oriented visualization and feedback module 130 is used to determine the data of an ECharts radar chart as the user's label scores; the expansion field on the radar chart's axes holds the corresponding explanation, drawn from the explanation column of the Hive user label table; a click event is set on the radar chart indicators, and after a click a message form appears in which the user can enter objections and other feedback on the user portrait.
The user portrait optimization module 140 is used to obtain the user's feedback data file with a python tool and the TF-IDF algorithm according to the user's feedback on the portrait, adjust the user's label vector values based on the feedback data, and update the user portrait.
Further, since the established interpretation scheme and the optimized user portrait may not accurately reflect the user's characteristics, and the label inference rules may contain unfair designs, the interpretable, interactive user portrait apparatus should further include a verification module 150 for verifying whether the portrait is accurate and whether discrimination exists.
The verification module comprises an anti-discrimination verification unit, an accuracy verification unit, and a feedback-mechanism verification unit. The anti-discrimination verification unit tests for discrimination via the difference in average sensitive-label indices between the normal group and the vulnerable group. The accuracy verification unit uses cross-validation to test the accuracy of statistical-class and rule-class labels and sampling verification to test the accuracy of mining-class labels, and the feedback-mechanism verification unit tests the effect of the feedback mechanism via A/B group comparison.
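The A/B feedback-mechanism check used by the verification module can be sketched as a simple before/after comparison of mean activity; the activity metric, the sample values, and the lift threshold are illustrative assumptions.

```python
from statistics import mean

def feedback_effect(activity_before, activity_after, min_lift=0.05):
    """A/B comparison for the feedback mechanism.

    Group A holds per-user activity measured before portrait feedback was
    applied, group B the same users' activity afterwards. The mechanism is
    judged effective if mean activity shifts by more than min_lift.
    """
    delta = mean(activity_after) - mean(activity_before)
    return delta, abs(delta) > min_lift

# Hypothetical daily interaction counts for one feedback category.
group_a = [3.0, 4.5, 2.0, 5.0]   # before user feedback
group_b = [4.0, 5.5, 3.5, 6.0]   # after user feedback
delta, effective = feedback_effect(group_a, group_b)
```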
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method, and will not be described in too much detail herein.
Third embodiment
To implement the above interpretable and interactive user portrait method, a third embodiment of the present invention provides an electronic device 200. Referring to fig. 6, fig. 6 is a schematic view of an electronic device according to a third embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention, and as shown in fig. 6, the electronic device includes a CPU, a memory, an input device, and an output device.
The memory stores programs that control the operation of the computer, where the memory may be, but is not limited to, Random Access Memory (RAM), Read Only Memory (ROM). The memory stores program instructions or modules corresponding to interpretable, interactable user representations in embodiments of the invention, such as a user representation interpretation module in an interpretable, interactable user representation device; a user representation storage module; a user-oriented visualization and feedback module; a user representation optimization module; and a verification module.
The CPU is provided with a control unit, an Arithmetic Logic Unit (ALU), a register (small memory area) realized by using a D flip-flop, and a program counter. The execution operation is that the control unit fetches a program instruction or module from the memory, and determines the position of the instruction or module using the program counter. An instruction or module is decoded into a language understood by an Arithmetic Logic Unit (ALU), the operands required to execute the instruction or module are fetched from memory into registers, the Arithmetic Logic Unit (ALU) executes the instruction or module, and the results are placed in memory.
The input device is used for inputting data into the system and realizing the interaction between a user and the system. And signal conversion between the input device and the bus is realized through the external interface. The input device may be, but is not limited to, a mouse, a keyboard, a touch screen, etc.
The output device is used to output system data to the user, realizing interaction between the user and the system. Signal conversion between the output device and the bus is realized through the external interface. The output device may be, but is not limited to, a display, a speaker, a printer, etc. In this embodiment, the input/output devices may adopt an interrupt-driven I/O control mode, which communicates through an intermediate interrupt controller: when a process wants to start an input/output device, the CPU sends an input/output command to the controller and then immediately returns to continue its original task, while the controller drives the designated input/output device according to the command's requirements.
It is to be understood that the configuration shown in fig. 6 is merely exemplary, and the electronic device 200 may include more or fewer components than shown in fig. 6, or may have a different configuration than shown in fig. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination thereof.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method, and will not be described in too much detail herein.
In summary, embodiments of the present invention provide an interpretable and interactive user portrait method and apparatus. The method constructs a user portrait label system with strong interpretability and makes the portrait and its interpretation visible to the user, thereby protecting the user's right to know and helping the user understand the reasons behind decisions. By supporting user feedback and feedback-based optimization, users can adjust their own portrait results, which protects their information autonomy, keeps them out of information cocoons, and lets them supervise problems such as possible discrimination in the portrait and avert risks in advance; on the other hand, interaction improves the effect of personalized pushing and precise services and reduces costs.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (9)

1. An interpretable, interactive user portrayal method, characterized by comprising the following steps:
s10: constructing a user portrait label in an interpretable manner;
s20: using Hive to store user portrait label related data;
s30: using ECharts to make the user portrait visible and feedback-enabled for the user;
s40: feeding back user opinions to a system for optimization according to the adjustment of the user on the user image;
s50: verifying the performance of the user portrait through anti-discrimination and accuracy testing.
2. An interpretable, interactable user representation method as claimed in claim 1, wherein: the S10 specifically includes:
determining the used label types including a statistic label, a rule label and a mining label;
when constructing the user portrait, natural language interpretation is carried out on the label of the user portrait, including the interpretation of the label category, the label data source and the label inference rule;
and determining the proportion of the tags to be 50% of the statistical class tags, 30% of the rule class tags and 20% of the mining class tags according to the difficulty of explanation.
3. An interpretable, interactable user representation method as claimed in claim 2, wherein: among the determined label categories, the mining-class labels mine the data using the latent factor model (LFM) and TF-IDF, and submit Spark tasks for calculation.
4. An interpretable, interactable user representation method as claimed in claim 1, wherein: the S20 specifically includes:
establishing a Hive user label table, and determining the name, content and explanation columns of the label;
and inserting the calculated user label vector values into the content column of the Hive user label table, and publishing the natural-language explanation of each label in the explanation column of the table.
5. An interpretable, interactable user representation method as claimed in claim 1, wherein: the S30 specifically includes:
introducing an ECharts file, and appointing to use a radar chart;
the indicator of the radar map is user tags stored in a Hive data warehouse, and the data of the radar map is the score of each user on the corresponding user tag;
setting an expansion field on an axis of the radar map, wherein a keyword of the expansion field is 'explanation', and the content is corresponding explanation and is derived from an explanation column in a Hive user label table;
a click event is set on the radar map indicator, a message table appears after the click event, and a user can input objection and other feedback to the user portrait;
the feedback interface is laid out using linear layout and specifies the feedback box to fit the screen.
6. An interpretable, interactable user representation method as claimed in claim 1, wherein: the S40 specifically includes:
performing text word segmentation on the feedback opinions of the user by using a python tool;
performing subject word extraction on the feedback opinions based on a TF-IDF algorithm to obtain a feedback data file of the user;
based on the feedback data, adjusting a label vector value of the user and updating the user portrait;
recommendations are made based on the updated user profile.
7. An interpretable, interactable user representation method as claimed in claim 1, wherein: the S50 specifically includes:
testing the anti-discrimination, accuracy, and feedback mechanism of the user portrait, wherein equal-size samples of the normal group and the vulnerable group are extracted, the average index differences between the two groups on sensitive labels such as the violence index, credit index, and crime-likelihood index are compared, and the portrait's anti-discrimination toward the vulnerable group is tested;
the accuracy of the statistical-class labels and the rule-class labels is tested by cross-validation, and the accuracy of the mining-class labels is tested by sampling verification;
the users' feedback content is classified, a sample of feedback users is extracted for each category, each category is divided into groups A and B, where group A is the users' activity before feedback and group B the users' activity after feedback, and the two groups are compared to test the operating effect of the feedback mechanism.
8. An interpretable, interactive user-portrayal apparatus, comprising: the device includes:
the user portrait interpretation module is used to give natural-language interpretations of the user portrait labels when the portrait is constructed, and comprises an interpretation content unit and a proportion unit; the interpretation content unit performs natural-language interpretation of the labels, determining three items, namely the label category, the label data source, and the label inference rule, and determines the label categories used, which comprise statistical-class, rule-class, and mining-class labels; the proportion unit determines the label proportions according to interpretation difficulty as 50% statistical-class, 30% rule-class, and 20% mining-class labels;
the user portrait storage module is used for establishing a Hive user tag table and storing the name, content and explanation of the tag;
the user-oriented visualization and feedback module is used for establishing an ECharts radar map, making a user portrait and explanation thereof visible to a user, and supporting the user to input objection and other feedback to the user portrait;
the user portrait optimization module is used for adjusting the label vector value of the user by using a python tool and a TF-IDF algorithm according to the feedback of the user to the user portrait and updating the user portrait;
the verification module is used to test the anti-discrimination, accuracy, and feedback mechanism of the user portrait and comprises an anti-discrimination verification unit, an accuracy verification unit, and a feedback-mechanism verification unit; the anti-discrimination verification unit tests for discrimination via the difference in average sensitive-label indices between the normal group and the vulnerable group; the accuracy verification unit uses cross-validation to test the accuracy of the statistical-class and rule-class labels and sampling verification to test the accuracy of the mining-class labels, and the feedback-mechanism verification unit tests the effect of the feedback mechanism via A/B group comparison.
9. An electronic device having computer program instructions stored therein, which when read and executed by a processor, perform the steps of the method of any one of claims 1 to 7.
CN202011024688.9A 2020-09-25 2020-09-25 Interpretable and interactive user portrayal method and device Active CN112131475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011024688.9A CN112131475B (en) 2020-09-25 2020-09-25 Interpretable and interactive user portrayal method and device


Publications (2)

Publication Number Publication Date
CN112131475A true CN112131475A (en) 2020-12-25
CN112131475B CN112131475B (en) 2023-10-10

Family

ID=73839435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011024688.9A Active CN112131475B (en) 2020-09-25 2020-09-25 Interpretable and interactive user portrayal method and device

Country Status (1)

Country Link
CN (1) CN112131475B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114840286A (en) * 2021-06-16 2022-08-02 杨永飞 Service processing method based on big data and server
CN117807190A (en) * 2024-02-28 2024-04-02 青岛他坦科技服务有限公司 Intelligent identification method for sensitive data of energy big data

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016054908A1 (en) * 2014-10-10 2016-04-14 中兴通讯股份有限公司 Internet of things big data platform-based intelligent user profiling method and apparatus
WO2017041372A1 (en) * 2015-09-07 2017-03-16 百度在线网络技术(北京)有限公司 Man-machine interaction method and system based on artificial intelligence
CN106651424A (en) * 2016-09-28 2017-05-10 国网山东省电力公司电力科学研究院 Electric power user figure establishment and analysis method based on big data technology
WO2017080176A1 (en) * 2015-11-12 2017-05-18 乐视控股(北京)有限公司 Individual user profiling method and system
CN109992982A (en) * 2019-04-11 2019-07-09 北京信息科技大学 Big data access authorization methods, device and big data platform
WO2019232891A1 (en) * 2018-06-06 2019-12-12 平安科技(深圳)有限公司 Method and device for acquiring user portrait, computer apparatus and storage medium
CN110796470A (en) * 2019-08-13 2020-02-14 广州中国科学院软件应用技术研究所 Market subject supervision and service oriented data analysis system
CN111159276A (en) * 2018-11-08 2020-05-15 北京航天长峰科技工业集团有限公司 Holographic image system construction method based on hybrid storage mode
CN111191125A (en) * 2019-12-24 2020-05-22 长威信息科技发展股份有限公司 Data analysis method based on tagging
CN111210326A (en) * 2019-12-27 2020-05-29 大象慧云信息技术有限公司 Method and system for constructing user portrait
CN111339409A (en) * 2020-02-20 2020-06-26 深圳壹账通智能科技有限公司 Map display method and system
CN111368548A (en) * 2018-12-07 2020-07-03 北京京东尚科信息技术有限公司 Semantic recognition method and device, electronic equipment and computer-readable storage medium
CN111444368A (en) * 2020-03-25 2020-07-24 平安科技(深圳)有限公司 Method and device for constructing user portrait, computer equipment and storage medium
CN111444236A (en) * 2020-03-23 2020-07-24 华南理工大学 Mobile terminal user portrait construction method and system based on big data

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FRED HOHMAN et al.: "Summit: scaling deep learning interpretability by visualizing activation and attribution summarizations", IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, pages 1096-1106 *
Zhang Yu; Ruan Xueling: "Research on construction methods of mobile user portraits in a big data environment", China Informatization, no. 04, pages 65-68 *
Zhang Yuhang et al.: "A survey of personalized recommendation systems", Value Engineering, vol. 39, no. 2, pages 287-292 *
Qin Zhaojing: "Design of a customer label computing system for the securities industry based on big data", China Science and Technology Information, no. 12, pages 84-85 *
Zheng Chi: "Research on user portrait risks and legal regulation", China Masters' Theses Full-text Database, Social Sciences I, no. 7 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant