CN112214680A - Method and apparatus for obscuring a user representation - Google Patents

Method and apparatus for obscuring a user representation

Info

Publication number
CN112214680A
Authority
CN
China
Prior art keywords
application
user
usage data
data
script
Prior art date
Legal status
Pending
Application number
CN202011133207.8A
Other languages
Chinese (zh)
Inventor
颜晨雁
杨逸文
Current Assignee
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Guangzhou Mobile R&D Center and Samsung Electronics Co Ltd
Priority to CN202011133207.8A
Publication of CN112214680A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data

Abstract

The present disclosure provides a method and apparatus for obscuring a user representation. The method may comprise: monitoring an application used by a user to acquire user usage data of the application; in response to a request by the user to blur the user representation, generating a simulation script for running the application based on the user usage data of the application; and executing the simulation script when the user is not using the application. The above-described method of obscuring a user representation, performed by an electronic device, may also be carried out using an artificial intelligence model.

Description

Method and apparatus for obscuring a user representation
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a method and an apparatus for blurring a user portrait.
Background
Major news, entertainment, and e-commerce platforms such as Toutiao (Today's Headlines), Douyin, and Taobao increasingly analyze users' usage behavior in order to "feed" them precisely targeted content. This "feeding" is based on the user's usage traces, and as these merchants accumulate collections of user data, their analysis and prediction of user behavior become more and more accurate. This, however, is not necessarily what the user expects.
Disclosure of Invention
The present disclosure provides a method and an apparatus for blurring a user portrait that can overcome the shortcomings of the prior art and better satisfy users' needs and expectations.
According to an embodiment of the present disclosure, there is provided a method of obscuring a user representation, the method including: monitoring an application used by a user to acquire user usage data of the application; in response to a request by a user to blur a representation of the user, generating a simulation script for running the application based on user usage data for the application; and when the user does not use the application, executing the simulation script.
Optionally, the monitored application is a monitoring target set by a user.
Optionally, the step of monitoring the application used by the user to obtain user usage data of the application comprises:
the method further includes identifying which user usage data from among the user usage data of the application is to be collected by the application and recording the user usage data that is to be collected by the application.
Optionally, the step of identifying which of the user usage data of the application will be collected by the application comprises: according to the privacy terms of the application, user usage data indicated in the privacy terms that the application will collect is identified as user usage data that will be collected by the application.
Optionally, the step of identifying which of the user usage data of the application will be collected by the application comprises: generating application recommendation data of the application according to the content recommended to the user by the application; user usage data associated with the application recommendation data among the user usage data of the application is identified as user usage data to be collected by the application by comparing the user usage data with the application recommendation data over a period of time.
Optionally, the method further comprises: generating application recommendation data of the application according to the content recommended to the user by the application; and generating a current portrait of the user according to the application recommendation data.
Optionally, the method further comprises: providing the user with at least one of the user usage data to be collected by the application and the user's current portrait; and asking the user whether the user wants to blur the user portrait.
Optionally, the method further comprises: the user's desired portrait is set by the user selecting a character type for the desired portrait, or by the user selecting the content categories and/or the content category proportions contained in the application.
Optionally, the simulation script includes an action frame and a category list for running the application, and the step of generating the simulation script for running the application includes: the action framework is formulated according to user usage data that will be collected by the application, and the category list is formulated according to the type of application.
Optionally, the step of executing the simulation script comprises: when the simulation script is executed, determining and executing the execution actions of the simulation script in real time according to the difference between the user's current portrait and desired portrait.
Optionally, the step of generating a simulation script for running the application comprises: training an artificial intelligence model using user usage data that would be collected by an application; and generating the simulation script by using the trained artificial intelligence model.
Optionally, the method further comprises: recording the execution process of the simulation script to generate a recorded video; generating blurred portrait data according to the recorded execution process, and recording the blurred portrait data; generating an analysis report based on the user usage data to be collected by the application and the blurred portrait data; and providing at least one of the recorded video and the analysis report to the user.
Optionally, the type of the application includes: video type, shopping type, news type, music type, and social type.
According to an embodiment of the present disclosure, there is provided an apparatus to blur a user representation, the apparatus including: the application monitoring unit is used for monitoring the application used by the user to acquire the user use data of the application; a script generating unit which generates a simulation script for running an application according to user usage data of the application in response to a request that a user wants to blur a user representation; and the script execution unit is used for executing the simulation script when the user does not use the application.
Optionally, the monitored application is a monitoring target set by a user.
Optionally, the application monitoring unit identifies which user usage data among the user usage data of the application are to be collected by the application and records the user usage data that are to be collected by the application.
Optionally, the application monitoring unit identifies, according to the privacy term of the application, the user usage data that the application is indicated to collect in the privacy term as the user usage data that the application is to collect.
Optionally, the application monitoring unit generates application recommendation data of the application according to the content recommended to the user by the application; the application monitoring unit identifies user usage data associated with the application recommendation data among the user usage data of the application as user usage data to be collected by the application by comparing the user usage data with the application recommendation data over a period of time.
Optionally, the script generation unit is further configured to generate application recommendation data of the application according to the content recommended to the user by the application;
the script generation unit generates a current portrait of the user according to the application recommendation data.
Optionally, the device further comprises a display unit providing at least one of the user usage data collected by the application and the user's current representation to the user; the display unit asks the user whether the user wants to blur the user representation.
Optionally, the device further comprises a display unit, and the user sets the user's desired portrait through the display unit by selecting a character type for the desired portrait or by selecting the content categories and/or the content category proportions contained in the application.
Optionally, the simulation script includes an action frame for running the application and a category list, and the script execution unit formulates the action frame according to user usage data to be collected by the application and formulates the category list according to the type of the application.
Optionally, when executing the simulation script, the script execution unit determines and executes the execution actions of the simulation script in real time according to the difference between the user's current portrait and desired portrait.
Optionally, the script generation unit trains the artificial intelligence model using user usage data that would be collected by the application; and the script generation unit generates the simulation script by using the trained artificial intelligence model.
Optionally, the apparatus further comprises: a script execution recording unit which records the execution process of the simulation script and generates a recorded video; the script execution recording unit generates blurred portrait data according to the recorded execution process and records the blurred portrait data; the script execution recording unit generates an analysis report based on the user usage data to be collected by the application and the blurred portrait data; and the display unit provides at least one of the recorded video and the analysis report to the user.
Optionally, the type of the application includes: video type, shopping type, news type, music type, and social type.
According to an embodiment of the present disclosure, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the method of obscuring a user representation as described above.
According to an embodiment of the present disclosure, there is provided an electronic apparatus including: a processor and a memory, the memory storing a computer program which, when executed by the processor, implements a method of obscuring a user representation as described above.
According to the method and device for blurring a user portrait of the present disclosure, the usage behaviors that a merchant analyzes when constructing the user portrait can be inferred and the user's operations simulated, thereby creating confusion and blurring the user portrait derived by the merchant.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the general inventive concept.
Drawings
The above and other objects and features of exemplary embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings which illustrate exemplary embodiments, wherein:
FIG. 1 is a flow diagram of a method of obscuring a user representation in accordance with an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart of a process of generating a simulation script according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flow diagram of determining an action to perform in real time during execution of a simulation script;
FIG. 4 is a flowchart of a process of generating a simulation script according to another exemplary embodiment of the present disclosure;
FIG. 5 is a flowchart of operations in executing a simulation script according to an example embodiment of the present disclosure;
FIG. 6 is a block diagram of an apparatus to blur a user representation in accordance with an exemplary embodiment of the present disclosure;
fig. 7 to 11 are schematic diagrams of display contents provided to a user according to an exemplary embodiment of the present disclosure.
The present invention will hereinafter be described in detail with reference to the drawings, wherein like or similar elements are designated by like or similar reference numerals throughout.
Detailed Description
As merchants accumulate collections of user data, their analysis and prediction of user behavior become more accurate, but at the same time the user's privacy is intruded upon and the recommended content becomes more and more monotonous. From the user's perspective, privacy is being pried into, and in some scenarios the user does not want to be profiled so precisely. Moreover, the recommended content becomes increasingly narrow: the user may not want the content recommended by the merchant to track the user's past usage behavior ever more closely, because such recommendations become one-sided rather than objective.
The present disclosure infers the usage behaviors that a merchant analyzes when constructing a user portrait and simulates the user's operations, thereby creating confusion and blurring the user portrait derived by the merchant. When the mobile phone is idle, a script that simulates screen-tapping behavior is executed to generate blurred portrait data. The whole process is screen-recorded, so that the simulated screen-tapping behavior is captured on video and provided to the user. An analysis report is also generated and provided to the user, including, for example, which data the application (APP) may have collected and which blurred portrait data the script produced.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the present disclosure as defined by the claims and their equivalents. The description includes various specific details to aid understanding, but these details are to be regarded as illustrative only. Thus, one of ordinary skill in the art will recognize that: various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present invention. Moreover, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
A method of obscuring a user portrait according to the present disclosure may include: monitoring an application used by a user to acquire user usage data of the application; in response to a request that the user wants to blur the user portrait, generating a simulation script for running the application based on the user usage data of the application and a desired portrait set by the user; and executing the simulation script when the user is not using the application.
The method and apparatus for obscuring a user portrait according to the present disclosure may be applied to any electronic device on which an application (APP) is installed, such as a mobile device, a tablet computer, a vehicle-mounted device, and the like. When the user of the electronic device so desires, the method and device for obscuring a user portrait can be used to confuse the application merchant's judgment about the user's usage data, so that the recommended content the user receives when using the application is more diversified and does not become one-sided as usage time increases.
An application on an electronic device used by a user is taken as an exemplary scenario, but the invention is not limited thereto; the method and device for blurring a user portrait according to the invention can be applied, according to actual needs, to any scenario in which a user uses an application.
FIG. 1 is an exemplary flow chart of a method of obscuring a user representation according to an exemplary embodiment of the present disclosure. The present invention is briefly described herein with reference to the example of fig. 1, but is not limited thereto.
In step S11, the application to be monitored may be specified by the user. For example, icons of all or part of applications installed on the electronic device may be provided to the user through the electronic device to prompt the user to select any one or more or all of the applications on the electronic device as the monitoring target, i.e., the monitored application may be the monitoring target set by the user. Also, the monitored application may be flagged for monitoring.
In step S12, the applications used by the user may be monitored. For example, application usage may be continuously monitored as the user uses or runs an application to be monitored, in order to obtain user usage data for the application. The user usage data may include data related to the user's usage behavior during use of the application, such as browsing, click-to-view, search queries, favorites, additions to the shopping cart, transactions, after-sales consultations, following and sharing of information, publishing of information, the IP address of the electronic device, the browser type, the telecommunications carrier, the language used, and the date and time of access to the application.
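As a concrete illustration (not part of the original disclosure; the field names and example values below are assumptions), the user usage data gathered in this step could be represented as timestamped event records, for example:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UsageEvent:
    """One record of user usage data for a monitored application (illustrative fields)."""
    app: str                    # monitored application, e.g. "Mobile Taobao"
    action: str                 # "browse", "search", "favorite", "add_to_cart", "purchase", ...
    content: str                # item, query, or page the action was applied to
    category: str               # content category, e.g. "women's clothing"
    timestamp: datetime = field(default_factory=datetime.now)
    duration_s: float = 0.0     # how long the user stayed on the content

# Example of data that might be recorded while monitoring the application.
usage_log = [
    UsageEvent("Mobile Taobao", "search", "white T-shirt", "women's clothing", duration_s=12.0),
    UsageEvent("Mobile Taobao", "browse", "slim-fit jeans", "women's clothing", duration_s=25.0),
]
```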
At step S13, it may be identified whether the application will collect user usage data, i.e., whether the application is likely to collect user usage data.
For example, whether an application will collect user usage data may be identified based on the privacy terms of the application. For example, the privacy terms of the "Mobile Taobao" application state: "We may automatically collect your usage information and store it as web log information, including: device information: according to the specific operations you perform during software installation and/or use, we receive and record information related to the device you use (including software and hardware characteristics such as device model, operating system version, device settings, unique device identifiers, and device environment) and information related to the device's location (including your authorized GPS position and sensor information such as WLAN access points, Bluetooth, and base stations). Service log information: when you use the products or services provided by our website or client, we automatically collect your detailed usage of our services and save it as service logs, including browsing, click-to-view, search queries, favorites, additions to the shopping cart, transactions, after-sales activity, following and sharing of information, publishing of information, as well as IP address, browser type, telecom operator, language used, and date and time of access." In this way, it may be determined from the privacy terms of the "Mobile Taobao" application that the application automatically collects user usage data, where the collected user usage data may include device information, service log information, and the like.
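One simple way to realize this privacy-terms check is keyword matching over the policy text. The sketch below is an assumption made for illustration only; the keyword table and function name are not taken from the disclosure:

```python
# Sketch: match the kinds of user usage data named in an application's privacy
# terms against the kinds of data the device can record. Keyword table is illustrative.
COLLECTIBLE_KEYWORDS = {
    "browsing": ["browsing", "click-to-view"],
    "search": ["search query", "search"],
    "favorites": ["favorite", "collection"],
    "purchases": ["transaction", "shopping cart"],
    "device_info": ["device model", "operating system", "unique device identifier"],
    "network_info": ["ip address", "browser type", "telecom"],
}

def data_collected_per_privacy_terms(privacy_text: str) -> set:
    """Return the usage-data kinds that the privacy terms indicate will be collected."""
    text = privacy_text.lower()
    return {
        kind
        for kind, keywords in COLLECTIBLE_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    }

terms = ("We will automatically collect the detailed usage of our services, "
         "including browsing, click-to-view, search queries, transactions, "
         "IP address, browser type and telecom operator.")
print(data_collected_per_privacy_terms(terms))
# e.g. {'browsing', 'search', 'purchases', 'network_info'}
```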
Alternatively, application recommendation data for the application may be generated based on content recommended to the user by the application, and the user usage data and the application recommendation data over a period of time may be compared to determine whether the application has collected the user usage data based on whether the user usage data is associated with the application recommendation data. For example, the user usage data acquired through step S12 may be recorded in real time, and the application recommendation content recommended to the user by the application may be recorded in real time while the user usage data is recorded in real time. Alternatively, the real-time recording of the application recommendation content may continue until a period of time after the real-time recording of the user usage data, i.e., the duration of the real-time recording of the application recommendation content may be longer than the duration of the real-time recording of the user usage data. The recorded user usage data is then compared to the recorded application recommendation content to identify which of the user usage data is associated with the application recommendation content, the user usage data associated with the application recommendation content being identified as user usage data to be collected by the application. The user usage data contains historical usage behaviors of the user, and if the application recommended content is coincident or partially coincident with the historical usage behaviors, the application is determined to possibly recommend the content according to the historical usage behaviors of the user, namely the user usage data associated with the application recommended content exists in the user usage data.
For example, the user usage data recorded when a user uses the "Mobile Taobao" application may include user browsing data related to which goods the user browsed, user search data related to which goods the user searched for, user favorites data related to which goods the user added to favorites, user purchase data related to which goods the user purchased, and other data related to the user's usage behavior. Correspondingly, the recorded application recommendation data may include data related to which goods were recommended by the application, which categories the recommended goods belong to, and so forth. The user usage data of the "Mobile Taobao" application may be compared with the application recommendation data to determine whether the goods browsed, searched for, added to favorites, and/or purchased by the user are associated with the goods recommended by the application, for example, whether they are the same or similar goods, whether they belong to the same or similar categories, or whether they are matching or related goods (e.g., toothpaste and toothbrush, mobile phone and phone case, shoes and socks). In this way, it can be determined whether the "Mobile Taobao" application will collect user usage data.
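The comparison described here could be sketched as follows; this is a purely illustrative simplification (not the disclosed algorithm) in which "associated with" is reduced to sharing a category with the recommended goods:

```python
def usage_data_collected_by_app(usage_events, recommended_items):
    """Identify usage events whose category also appears among the application's
    recommendations over the same (or a slightly longer) period of time.

    usage_events:      list of (action, item, category) tuples recorded for the user
    recommended_items: list of (item, category) tuples recommended by the application
    """
    recommended_categories = {category for _, category in recommended_items}
    return [
        event for event in usage_events
        if event[2] in recommended_categories  # category overlap -> likely collected
    ]

usage = [("browse", "slim-fit jeans", "women's clothing"),
         ("search", "white T-shirt", "women's clothing"),
         ("browse", "desk lamp", "home")]
recommended = [("denim jacket", "women's clothing"), ("summer dress", "women's clothing")]
print(usage_data_collected_by_app(usage, recommended))
# prints the two women's-clothing events, which the application likely collected
```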
If it is recognized that the application will collect user usage data, then step S14 is performed, otherwise the user portrait obscuring operation is skipped and the process ends.
At step S14, it may be identified which user usage data from among the user usage data of the application is to be collected by the application. For example, the user usage data that the application is indicated to collect in the privacy clauses may be identified as the user usage data that the application is to collect according to the privacy clauses of the application, or the user usage data associated with the application recommendation data among the user usage data of the application may be identified as the user usage data that the application is to collect by comparing the user usage data with the application recommendation data for a period of time.
For example, for the "mobile phone panning" application, it can be recognized that device information, service log information, and the like in the user usage data of the "mobile phone panning" application are collected by the application according to the privacy terms of the "mobile phone panning" application. Alternatively, the user usage data of the "panning for mobile phone" application may be compared with the application recommendation data, for example, by comparing, if it is determined that the goods browsed by the user are associated with the goods recommended by the application, the user browsing data among the user usage data is identified as the user usage data to be collected by the application; identifying user search data among the user usage data as user usage data to be collected by the application if it is determined that the goods searched by the user are associated with goods recommended by the application; identifying user collection data among the user usage data as user usage data to be collected by the application if it is determined that the goods collected by the user are associated with goods recommended by the application; if it is determined that the goods purchased by the user are associated with the goods recommended by the application, user purchase data among the user usage data is identified as user usage data to be collected by the application.
At step S15, user usage data that would be collected by the application may be recorded. For example, user usage data that would be collected by an application may be continuously recorded in real-time as the user uses the application until the user stops using the application or the application stops running.
Further, a current portrait of the user may be generated based on the application recommendation data. For example, the application recommendation data may be statistically analyzed, including, but not limited to, summarizing the categories of the application recommendation data and calculating the proportion that each category occupies within the application recommendation data, where these category proportions may represent the current portrait of the user. The generated current portrait may be the same as or similar to the user portrait drawn by the merchant of the application.
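For illustration only (the function and data below are assumptions, not taken from the disclosure), the category-proportion statistics that stand for the current portrait could be computed like this:

```python
from collections import Counter

def current_portrait(recommended_items):
    """Summarize the application's recommendations as category proportions.

    recommended_items: list of (item, category) tuples recommended to the user.
    Returns a dict mapping category -> share of all recommendations, which here
    stands in for the user's current portrait.
    """
    counts = Counter(category for _, category in recommended_items)
    total = sum(counts.values())
    return {category: count / total for category, count in counts.items()}

recommended = [("summer dress", "women's clothing"), ("denim jacket", "women's clothing"),
               ("desk lamp", "home"), ("snack box", "food")]
print(current_portrait(recommended))
# {"women's clothing": 0.5, "home": 0.25, "food": 0.25}
```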
At step S16, at least one of the user usage data to be collected by the application and the user's current portrait may be provided to the user, and the user may also be asked whether the user wants to blur the user portrait; that is, it may be determined at step S16 whether a request that the user wants to blur the user portrait has been received. For example, when the user finishes using a monitored or flagged application and closes it, the user may be asked through the display of the electronic device, "Do you want to perform the portrait-blurring operation?" If the user clicks "Yes" on the display, it is determined that the user wants to blur the user portrait; otherwise the user portrait blurring operation is skipped.
In addition, the user may be queried while being informed of user usage data and/or the user's current representation that may be collected by the application, thereby facilitating the user in determining whether a user representation needs to be blurred.
For example, referring to fig. 7, after the user has used the "Mobile Taobao" application for a period of time, it can be recognized from the user usage data that "this session browsed 25 items of women's clothing, searched for 'white T-shirt', and stayed in the 'jeans' category for more than nine minutes. This data may have been collected by Taobao, which is presumed to have drawn the following analysis: you are a user inclined to purchase women's clothing." The user may then be notified via the display of the user's electronic device and asked, "Do you want to blur your user portrait?"
If it is determined that the user wants to blur the user representation, step S17 is performed to generate a simulation script for running the application. The process of generating the simulation script will be described in detail below with reference to fig. 2.
In step S18, the simulation script may be executed when the user is not using the application. For example, after the user exits the application, or when the electronic device is idle (i.e., the user is not using the electronic device), the simulation script may be executed on the electronic device to simulate the user's operation of the application, thereby creating confusion and blurring the user portrait analyzed by the merchant of the application.
FIG. 2 is a flowchart of a process of generating a simulation script according to an exemplary embodiment of the present disclosure. In this embodiment, the simulation script may include an action frame and a category list for running the application, but the present invention is not limited thereto, and the simulation script may further include any other operation elements for simulating the operation of the user using the application.
At step S21, an action framework for the simulation script may be formulated based on the user usage data that will be collected by the application (i.e., the monitored or flagged application). For example, the user's habitual operations in using the application may be derived statistically from the user usage data to be collected by the application, including, but not limited to, which pages the user browses, how fast pages are browsed, what proportion of the presented content the user clicks to view, how long the user stays on each page, and so forth.
At step S22, a category list for the simulation script may be formulated according to the type of the application. Types of applications may include, but are not limited to, video types (e.g., the Douyin APP, the Kuaishou APP, etc.), shopping types (e.g., the Taobao APP, the Jingdong APP, etc.), news types (e.g., the Toutiao APP, etc.), music types (e.g., the NetEase Cloud Music APP, etc.), social types (e.g., the Sina Weibo APP, etc.), and so forth. Because the content categories in different types of applications differ, the category lists of simulation scripts for running different types of applications also differ. This is described below in connection with examples of different application types.
In step S23, a current portrait of the user may be generated based on the application recommendation data. For example, the application recommendation data may be statistically analyzed, including, but not limited to, summarizing the categories of the application recommendation data and calculating the proportion that each category occupies within the application recommendation data, where these category proportions may represent the current portrait of the user. The generated current portrait may be the same as or similar to the user portrait drawn by the merchant of the application. The user's current portrait may also be obtained via step S15.
In step S24, the user may be prompted to set a desired portrait, which is received in response to the user's setting input. The desired portrait corresponds to the content categories and/or the content category proportions that the user wishes the application to recommend.
For example, a desired portrait may be set by the user selecting a character type for the desired portrait, where the character type may include, but is not limited to: gender, age, hobbies, favorite styles, personality type, categories of interest, and the like. In this example, each character type corresponds to the categories to be recommended or actively presented by the application and/or the proportions of those categories. In addition, a desired portrait may also be set by the user selecting the content categories and/or the content category proportions contained in the application.
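As an illustration of how a character-type choice could translate into category proportions, a minimal sketch follows; the mapping table, function name, and values are assumptions made for this example only:

```python
# Sketch: each selectable character type maps to the category proportions the user
# would like the application to recommend. Table values are purely illustrative.
CHARACTER_TYPE_PORTRAITS = {
    "outdoor enthusiast": {"sports": 0.4, "travel": 0.3, "food": 0.2, "clothing": 0.1},
    "home cook":          {"food": 0.5, "kitchenware": 0.3, "home": 0.2},
}

def desired_portrait(character_type=None, custom_proportions=None):
    """Return the desired portrait, either from a chosen character type or from
    category proportions the user entered directly (normalized to sum to 1)."""
    if custom_proportions is not None:
        total = sum(custom_proportions.values())
        return {cat: share / total for cat, share in custom_proportions.items()}
    return CHARACTER_TYPE_PORTRAITS[character_type]

print(desired_portrait(character_type="home cook"))
print(desired_portrait(custom_proportions={"food": 50, "home": 30, "women's clothing": 20}))
```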
Further, step S21, step S22, step S23, and step S24 may be performed in any sequential order or may be performed in parallel. A simulation script may then be generated and output (step S25).
FIG. 3 is a flow diagram for determining actions to perform in real time during execution of a simulation script.
After the simulation script starts to run the flagged application (step S31), the size of the gap between the user's current portrait and desired portrait can be computed in real time (step S32). When the simulation script is executed, the execution actions of the simulation script can be determined in real time according to the difference between the user's current portrait and desired portrait (step S33). For example, the handling of a content category in the simulation script may be determined based on the size of the gap between that category's proportion in the current portrait and its proportion in the desired portrait. If the gap for a given content category is large, the execution of the simulation script may change the frequency of use of that category. If the gap for a given content category is small, the execution of the simulation script may keep the frequency of use of that category unchanged or reduce it appropriately.
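A minimal sketch of this gap-driven decision follows, assuming both portraits are expressed as category-proportion dictionaries; the thresholds and multipliers are illustrative assumptions:

```python
def action_frequency_multiplier(category, current, desired,
                                boost=2.0, damp=0.5, threshold=0.1):
    """Decide how often the simulation script should interact with a category.

    current, desired: dicts mapping category -> proportion (current vs. desired portrait).
    Returns a multiplier applied to the baseline click/browse frequency for the category.
    """
    gap = desired.get(category, 0.0) - current.get(category, 0.0)
    if gap > threshold:      # desired share much higher: use the category more often
        return boost
    if gap < -threshold:     # desired share much lower: use it less often
        return damp
    return 1.0               # small gap: keep the current frequency

current = {"food": 0.10, "women's clothing": 0.60, "home": 0.30}
desired = {"food": 0.50, "women's clothing": 0.20, "home": 0.30}
for cat in desired:
    print(cat, action_frequency_multiplier(cat, current, desired))
# food -> 2.0 (click more), women's clothing -> 0.5 (click less), home -> 1.0 (unchanged)
```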
Examples are given below in connection with different types of applications.
For shopping-type applications (e.g., the Taobao APP, the Jingdong APP, etc.), an action framework for the simulation script may be formulated from the user usage data that will be collected by the application. The user's usage habits need to be analyzed based on the user usage data to be collected by the application, including, but not limited to, which pages are browsed, the browsing speed, the proportion of displayed merchandise that is clicked to view, the length of stay on each page, and so forth. Because every person's usage habits are different, the action framework of the simulation script needs to be customized by compiling statistics on the user's action habits.
For example, suppose a user has used an application for a period of time and acted as follows: the user clicks the application icon, enters the application's home page, and begins sliding downward, with a sliding operation occurring roughly once every 1-2 seconds; the user pauses for 2-3 seconds on some of the goods, clicks individual goods among them to enter the goods description page, browses for 15-30 seconds, and for some goods switches to the details page to continue browsing; while browsing the details page, the user reads some goods' details in full and exits others after only partial reading; the user then returns to the home page, clicks the "Daily Picks" tab, browses at an average rate of one item every 0.5 seconds, and pauses for 4-6 seconds on some goods; as browsing time increases, the probability of clicking to view details becomes lower.
Statistical analysis is performed on the user usage data corresponding to these user operations to generate a corresponding action framework, for example as follows: a. click the APP icon and wait 1 second; b. perform a downward slide every 1-2 seconds; c. select a certain item while sliding down, pause for 2-3 seconds with 50% probability, and after pausing, click to view with 80% probability; d. if clicked to view, enter the goods description page, pause for 1-2 seconds, slide down to begin browsing, wait 15-30 seconds, click the details-page control to enter the details page and begin sliding, and after another 1-20 seconds click the return control with 50% probability to go back; e. after returning to the home page, click the "Daily Picks" control to enter the Daily Picks page; f. continue sliding and select some goods to view their details.
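The a-f framework above could, for instance, be encoded as a list of steps with probabilities and delays. The sketch below is an assumption made for illustration: its values mirror the example, and the runner only prints the sampled actions in a sped-up dry run rather than driving a real application:

```python
import random
import time

# Action framework derived from the habitual operations listed in steps a-f above.
ACTION_FRAME = [
    {"action": "open_app",         "wait_s": 1},
    {"action": "scroll_down",      "wait_s": (1, 2)},
    {"action": "pause_on_item",    "probability": 0.5, "wait_s": (2, 3)},
    {"action": "click_item",       "probability": 0.8},
    {"action": "read_description", "wait_s": (15, 30)},
    {"action": "open_details",     "probability": 0.5, "wait_s": (1, 20)},
    {"action": "open_daily_picks"},
]

def dry_run(frame, speedup=1000.0):
    """Sample probabilities and delays for each step and print what would be simulated."""
    for step in frame:
        if random.random() > step.get("probability", 1.0):
            continue                                  # step skipped by its probability
        wait = step.get("wait_s", 0)
        if isinstance(wait, tuple):
            wait = random.uniform(*wait)
        time.sleep(wait / speedup)                    # scaled down so the dry run is fast
        print(f"simulated: {step['action']} (simulated wait {wait:.1f}s)")

dry_run(ACTION_FRAME)
```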
A category list for the simulation script can be formulated and entered according to the shopping application. Shopping applications require clear category generalization: categories or classifications of merchandise can be custom-defined, or the existing merchandise categories in the application can be referenced (e.g., the detailed classification in the Jingdong APP includes 31 first-level categories, 258 second-level categories, and 1708 third-level categories). A corresponding category list may be formulated using a uniform classification standard, or by adaptively adjusting the classification standard for a particular shopping application, and the category list is then entered into the database.
A current portrait of the user may be generated based on the application recommendation data. For example, the goods currently recommended to the user by the shopping application can be analyzed and classified according to the text and images of the goods, and finally the proportions of the goods categories in the currently recommended content are computed and summarized; these proportions represent the user's current portrait.
The user may then be prompted to set a desired portrait, which is received in response to the user's setting input. For example, referring to FIG. 8, after the user exits the application and requests that the user portrait be blurred, an interface as shown in FIG. 8 may be displayed on the display of the electronic device, prompting the user to "please select the categories of goods contained in your desired portrait" and asking the user to set the proportion of each category in the desired portrait. Alternatively, the interface shown in FIG. 9 may be displayed on the display of the electronic device after the user exits the application and requests that the user portrait be blurred, asking the user to set a desired portrait, i.e., to set tags for the desired portrait.
After the simulation script starts to run the shopping application, the size of the gap between the user's current portrait and desired portrait can be computed in real time (step S32). When the simulation script is executed, the execution actions of the simulation script can be determined in real time according to the difference between the user's current portrait and desired portrait. For example, while a recommended-page browsing action is being executed, the text and images of the displayed goods can be analyzed in real time, the gap between the current portrait and the desired portrait computed, and a decision made as to whether the "click to view" action in the simulation script's action framework should be executed. For example, if the food proportion of the current portrait is 10% and the food proportion of the desired portrait is 50%, the execution of the simulation script can increase the frequency of clicks on the food category; if the food proportion of the current portrait is 50% and the food proportion of the desired portrait is 10%, the execution of the simulation script can reduce the frequency of clicks on the food category. If the food proportion of the current portrait is 45% and the food proportion of the desired portrait is 50%, the execution of the simulation script can keep the click frequency of the food category unchanged, or, because both proportions are already high and close, reduce the click frequency of the food category appropriately in order to blur the user portrait; if the food proportion of the current portrait is 5% and the food proportion of the desired portrait is 6%, the click frequency of the food category can be kept unchanged. The "click to view" action is merely an illustrative example; whether to perform an action may also be determined according to the gap size corresponding to any other action in the action framework.
For news-type applications (e.g., the Toutiao APP, etc.), an action framework for the simulation script can be formulated based on the user usage data that will be collected by the application. The user usage data to be collected by the application may include the speed at which the user browses the recommendation page, the reading speed for news bodies, the probability of reading an entire article, the content searched for, and so on. An action framework for the application's simulation script can thus be formulated, for example: a. click the application icon and wait 1 second; b. perform a downward slide every 5 seconds; c. select an interesting news item while sliding down, pause for 2-3 seconds with 50% probability, and after pausing, click to view with 80% probability; d. read the news body at 500 words per minute, read the full article with 80% probability, and then click a news item in the related recommendations with 50% probability to continue reading.
A category list of simulation scripts may be formulated and entered according to the news application. Traditional classifications of news may include: political news, economic news, legal news, military news, scientific news, cultural and educational news, sports news, social news, and the like. After the mainstream news applications are investigated, a unified classification standard in the news applications is established, a corresponding category list is formulated, and the category list is recorded into a database.
A current portrait of the user may be generated based on the application recommendation data. For example, the news currently recommended to the user can be analyzed by news category, the news can be classified according to its headlines, and finally the proportions of the news categories in the currently recommended content are computed; these proportions represent the current portrait.
In addition, the user may be prompted to set a desired portrait, which is received in response to the user's setting input.
When the simulation script is executed and a recommended-page browsing action occurs, the news headlines on the page can be analyzed in real time, the gap between the current portrait and the desired portrait computed, and a decision made, according to that gap, as to whether the "click to view" action in the script's action framework should be executed. The "click to view" action is merely an illustrative example; whether to perform an action may also be determined according to the gap size corresponding to any other action in the action framework.
For video-type applications (e.g., the Douyin APP, etc.), an action framework for the simulation script may be formulated from the user usage data that will be collected by the application. The action framework of a video-type application may include the probability of clicking on the author's details page, the probability and duration of viewing comments, the probability of interaction, and the like. An action framework for the application's simulation script can thus be formulated, for example: a. click the application icon and wait 1 second; b. start watching a video; with 80% probability the video is watched to the end, with 10% probability the comments are viewed, and with 5% probability the author's page is opened to watch other videos.
A category list for the simulation script can be formulated and entered according to the video-type application. After the mainstream video applications are surveyed, a unified classification standard for video applications is established, a corresponding category list is formulated, and the category list is entered into the database.
A current representation of the user may be generated based on the application recommendation data. For example, videos may be classified according to information such as the title, author information, and comment content of the video viewed by the user, and the proportion of the category of the currently recommended video may be statistically applied, which may represent the current portrait.
In addition, a user may be prompted to set a wish profile, which is received in response to a user's setting input.
When the simulation script is executed, during video browsing the gap between the current portrait and the desired portrait is computed, and the time spent watching each video, as well as whether other actions in the action framework are executed, is determined according to that gap.
According to another exemplary embodiment of the present disclosure, the simulation script may also be generated by an artificial intelligence algorithm (e.g., a deep reinforcement learning algorithm). For example, an artificial intelligence model may be trained using user usage data that would be collected by an application; and running an application by using the trained artificial intelligence model to generate the simulation script.
FIG. 4 is a flowchart of a process of generating a simulation script according to another exemplary embodiment of the present disclosure. Referring to FIG. 4, in step S41, user usage data that would be collected by an application may be obtained as training samples. At step S42, an Artificial Intelligence (AI) model is built using an AI algorithm (e.g., a deep reinforcement learning algorithm), the AI model is trained using user usage data that will be collected by the application, and a simulation script is generated and output using the trained AI model (step S43).
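As a highly simplified, purely illustrative stand-in for this idea (not the deep reinforcement learning model of the disclosure), the sketch below uses an epsilon-greedy bandit update to learn, per content category, whether "click" or "skip" moves a simulated portrait toward a desired portrait; all names and values are assumptions:

```python
import random
from collections import defaultdict

CATEGORIES = ["food", "women's clothing", "home"]
DESIRED = {"food": 0.5, "women's clothing": 0.2, "home": 0.3}   # target category shares

def portrait_for(choices, steps=50):
    """Simulate one episode: categories chosen for 'click' accumulate interactions."""
    clicks = {c: 1 for c in CATEGORIES}          # start at 1 to avoid division by zero
    for _ in range(steps):
        for c in CATEGORIES:
            if choices[c] == "click":
                clicks[c] += 1
    total = sum(clicks.values())
    return {c: clicks[c] / total for c in CATEGORIES}

q = defaultdict(float)                           # estimated value of (category, action)
epsilon, alpha = 0.2, 0.3
for _ in range(300):
    # Epsilon-greedy choice of an action per category.
    choices = {
        c: (random.choice(["click", "skip"]) if random.random() < epsilon
            else max(["click", "skip"], key=lambda a: q[(c, a)]))
        for c in CATEGORIES
    }
    portrait = portrait_for(choices)
    # Reward each chosen action by how close its category ended up to the target share.
    for c in CATEGORIES:
        reward = -abs(DESIRED[c] - portrait[c])
        q[(c, choices[c])] += alpha * (reward - q[(c, choices[c])])

# Greedy policy learned by the toy agent.
print({c: max(["click", "skip"], key=lambda a: q[(c, a)]) for c in CATEGORIES})
```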
In accordance with the present invention, a method of obscuring a user representation may be performed using an artificial intelligence model by utilizing user usage data that may be collected by an application. A pre-processing operation may be performed on user usage data to be collected by an application using a processor of an electronic device to convert to a form suitable for use as an artificial intelligence model input.
The artificial intelligence model may be obtained through training. Here, "obtained by training" means that a basic artificial intelligence model having a plurality of training data is trained by a training algorithm so as to obtain a predefined operation rule or artificial intelligence model configured to perform a desired feature (or purpose).
The functions associated with the AI may be performed by the non-volatile memory, the volatile memory, and the processor.
The processor may include one or more processors. The one or more processors may be general-purpose processors such as a Central Processing Unit (CPU) or an Application Processor (AP), graphics-dedicated processors such as a Graphics Processing Unit (GPU) or a Vision Processing Unit (VPU), and/or AI-dedicated processors such as a Neural Processing Unit (NPU).
The one or more processors control the processing of the input data according to predefined operating rules or Artificial Intelligence (AI) models stored in the non-volatile memory and the volatile memory. Predefined operating rules or artificial intelligence models may be provided through training or learning. Here, the provision by learning means that a predefined operation rule or AI model having a desired characteristic is formed by applying a learning algorithm to a plurality of learning data. The learning may be performed in the device itself performing the AI according to the embodiment, and/or may be implemented by a separate server/device/system.
As an example, the artificial intelligence model may be composed of multiple neural network layers. Each layer has a plurality of weight values, and each layer's operation is performed using the output of the previous layer and that layer's weight values. Examples of neural networks include, but are not limited to, Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), Restricted Boltzmann Machines (RBMs), Deep Belief Networks (DBNs), Bidirectional Recurrent Deep Neural Networks (BRDNNs), Generative Adversarial Networks (GANs), and deep Q-networks.
A learning algorithm is a method of training a predetermined target device (e.g., a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
FIG. 5 is a flowchart of operations in executing a simulation script according to an example embodiment of the present disclosure. When the user is not using the application, the simulation script may be executed to run the application.
In step S51, the execution process of the simulation script may be recorded, for example by screen recording or a similar mechanism. A recorded video may then be generated (step S52). Fig. 10 shows some screenshots from a recorded video generated by recording the simulation script's execution while running the Taobao APP. The circles in FIG. 10 represent the click positions of the simulation script. Screenshot A represents "opening the Taobao home page"; screenshot B represents "simulating search behavior"; screenshot C represents "simulating browsing of the search results"; and screenshot D represents "simulating viewing of a product".
At step S53, blurred portrait data may be generated according to the recorded execution process; the blurred portrait data may include the traces of the simulation script's operations in the application, the frequency and duration of the operations, the proportions of the operation counts, the proportions of the operation durations, the content categories corresponding to the operations, and so forth. The blurred portrait data can also be recorded.
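For illustration only (the field names and the aggregation function are assumptions, not taken from the disclosure), the blurred portrait data recorded in this step could take a form such as:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class SimulatedOperation:
    """One operation performed by the simulation script (illustrative fields)."""
    action: str        # e.g. "click_item", "search", "scroll_down"
    category: str      # content category the operation touched
    duration_s: float  # how long the operation lasted

def blurred_portrait_data(operations):
    """Aggregate simulated operations into the statistics named above:
    per-category operation counts, count proportions, and duration proportions."""
    counts = Counter(op.category for op in operations)
    durations = Counter()
    for op in operations:
        durations[op.category] += op.duration_s
    n, total_t = sum(counts.values()), sum(durations.values())
    return {
        cat: {"count": counts[cat],
              "count_share": counts[cat] / n,
              "duration_share": durations[cat] / total_t}
        for cat in counts
    }

ops = [SimulatedOperation("click_item", "food", 20.0),
       SimulatedOperation("search", "home", 8.0),
       SimulatedOperation("click_item", "food", 12.0)]
print(blurred_portrait_data(ops))
```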
Then, at step S54, an analysis report may be generated based on the user usage data to be collected by the application and the blurred portrait data. The analysis report may include: a statistical chart of the user's usage traces, the simulation script's operation traces in the application, the frequency or number of operations, the duration of the operations, the proportions of the operation counts, the proportions of the operation durations, the content categories corresponding to the operations, the operations executed by the simulation script that differ from the user's previous usage, the new portrait that the simulation script may produce, the portrait-blurring effect that executing the simulation script may achieve, and so forth. For example, after the user exits a shopping application (e.g., the Taobao APP), the simulation script for the application may be executed, and the generated analysis report is shown in FIG. 11, which includes: a statistical bar chart of the user's recent usage traces (for example, 25 items of jeans, 22 pairs of shoes, and 16 daily necessities browsed); the simulation script's operation traces (for example, in the current simulated operation, usage traces were simulated for the four major categories of fresh produce, food, furniture, and mother-and-baby products, obscuring the user's previous women's-clothing usage traces); the new portrait that the simulation script may produce (for example, the user portrait at the current stage is predicted to comprise the five major categories of clothing, fresh produce, food, home furnishings, and mother-and-baby products); a comparison of the durations of the various simulated operations corresponding to the application's content categories; a comparison of the numbers of the various search operations; and the portrait-blurring effect that executing the simulation script may achieve (for example, through the current simulated operation, the women's-clothing category tends to be weakened in the user portrait while other categories tend to grow), and so forth.
At step S55, at least one of the recorded video and the analysis report may be provided to a user. For example, recorded videos and analysis reports may be presented to a user via a display of the electronic device.
Taking Taobao as an example, after the user browses "home" goods for a period of time, Taobao recommends many home goods. By using this scheme to simulate browsing behavior, Taobao's statistics about the user's behavior are no longer limited to the home category, so Taobao cannot clearly conclude that the user wants to purchase household items, and the categories of goods recommended to the user become richer.
FIG. 6 is a block diagram of an apparatus 10 that blurs a user representation according to an exemplary embodiment of the present disclosure.
The apparatus 10 for obscuring a user representation may include an application monitoring unit 101, a script generation unit 102, a script execution unit 103, a script execution recording unit 104, and a display unit 105. Among other things, the display unit 105 may be a display in the user's electronic device or may be a separate display device.
The application monitoring unit 101 is configured to monitor an application used by a user to obtain user usage data of the application. The script generation unit 102 is configured to generate a simulation script for running an application according to user usage data of the application in response to a request that a user wants to blur a user representation. The script execution unit 103 is used for executing the simulation script when the user does not use the application.
As an example, the monitored application may be a monitoring target set by a user. For example, the user may select and specify the application to be monitored through the display unit 105 and/or a setting button or the like.
As an example, the application monitoring unit 101 may identify which user usage data among the user usage data of the application will be collected by the application and record the user usage data that will be collected by the application.
As an example, the application monitoring unit 101 may identify, according to the privacy terms of the application, the user usage data that indicates that the application will collect in the privacy terms as the user usage data that will be collected by the application.
As an example, the application monitoring unit 101 may generate application recommendation data of an application according to content recommended to the user by the application; the application monitoring unit 101 may identify user usage data associated with the application recommendation data among the user usage data of the application as user usage data to be collected by the application by comparing the user usage data with the application recommendation data for a period of time.
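One possible reading of this comparison, sketched in Python under the assumption that both the usage records and the recommended content are tagged with content categories; correlating them by shared category is an illustrative simplification, not the only correlation the disclosure would cover.

def infer_collected_usage(usage_log, recommendation_log):
    # Usage entries whose category later appears in the application's recommendations
    # are treated as data the application evidently collects.
    recommended_categories = {rec["category"] for rec in recommendation_log}
    return [entry for entry in usage_log if entry["category"] in recommended_categories]

usage_log = [{"action": "browse", "category": "jeans"},
             {"action": "search", "category": "camping tents"}]
recommendation_log = [{"item": "slim-fit jeans", "category": "jeans"}]
print(infer_collected_usage(usage_log, recommendation_log))  # only the jeans entry remains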
As an example, the script generation unit 102 may generate application recommendation data of the application according to the content recommended to the user by the application; the script generation unit 102 may generate a current portrait of the user based on the application recommendation data.
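A minimal sketch of deriving the current portrait from the application recommendation data, assuming the portrait is represented as a normalized category distribution (one convenient choice, not a representation mandated by the disclosure):

from collections import Counter

def current_portrait(recommendation_log):
    # Approximate the portrait the application holds as the distribution of
    # categories it is currently recommending to the user.
    counts = Counter(rec["category"] for rec in recommendation_log)
    total = sum(counts.values()) or 1
    return {cat: n / total for cat, n in counts.items()}

print(current_portrait([{"category": "women's clothing"}] * 7 + [{"category": "shoes"}] * 3))
# -> {"women's clothing": 0.7, "shoes": 0.3}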
As an example, the display unit 105 may provide the user with at least one of the user usage data to be collected by the application and the current portrait of the user, and may ask the user whether the user wants to blur the user representation.
As an example, the user may set the user's desired portrait through the display unit 105 by selecting a character type of the desired portrait, or by selecting a content category and/or a content category proportion included in the application.
By way of example, the simulation script may include an action framework and a category list for running the application. The script execution unit 103 may formulate the action framework according to the user usage data to be collected by the application, and formulate the category list according to the type of the application.
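The disclosure does not fix a concrete data layout for the action framework or the category list; the Python sketch below shows one hypothetical arrangement, and the mapping from application type to categories is invented purely for illustration.

def build_simulation_script(collected_fields, app_type):
    # Action framework: which kinds of operations to simulate, derived from the
    # kinds of data the application collects.
    action_framework = []
    if "search history" in collected_fields:
        action_framework.append("search")
    if "browsing history" in collected_fields:
        action_framework.extend(["browse", "dwell"])
    # Category list: which content to touch, derived from the application type.
    category_lists = {
        "shopping": ["fresh food", "food", "furniture", "mother and baby"],
        "video": ["documentary", "sports", "science"],
        "news": ["technology", "finance", "culture"],
    }
    return {"actions": action_framework, "categories": category_lists.get(app_type, [])}

print(build_simulation_script(["search history", "browsing history"], "shopping"))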
As an example, when executing the simulation script, the script execution unit 103 may determine and execute the execution actions of the simulation script in real time according to the difference between the current portrait and the desired portrait of the user.
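A minimal sketch of such real-time action selection, assuming both portraits are normalized category distributions as above; choosing the category with the largest positive gap is one simple policy among the many the embodiment could use.

def next_simulated_action(current, desired):
    # Pick the category where the desired portrait exceeds the current portrait
    # by the largest margin, so each simulated operation narrows the gap.
    gaps = {cat: desired.get(cat, 0.0) - current.get(cat, 0.0)
            for cat in set(current) | set(desired)}
    target = max(gaps, key=gaps.get)
    return None if gaps[target] <= 0 else {"action": "browse", "category": target}

current = {"women's clothing": 0.7, "shoes": 0.3}
desired = {"women's clothing": 0.2, "shoes": 0.2, "fresh food": 0.3, "furniture": 0.3}
print(next_simulated_action(current, desired))  # browses "fresh food" or "furniture" next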
By way of example, the script generation unit 102 may train an artificial intelligence model with the user usage data to be collected by the application, and may generate the simulation script using the trained artificial intelligence model.
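The disclosure does not specify the model architecture. As a toy stand-in, the Python sketch below learns transition frequencies between observed operations and samples a plausible operation sequence as the simulation script; all class and method names are hypothetical.

import random
from collections import defaultdict, Counter

class MarkovActionModel:
    # Learn transition frequencies between observed operations, then sample
    # plausible sequences that resemble, but do not copy, the user's behavior.
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, usage_sequences):
        for seq in usage_sequences:
            for a, b in zip(seq, seq[1:]):
                self.transitions[a][b] += 1

    def generate(self, start, length=5):
        action, script = start, [start]
        for _ in range(length - 1):
            nxt = self.transitions.get(action)
            if not nxt:
                break
            action = random.choices(list(nxt), weights=list(nxt.values()))[0]
            script.append(action)
        return script

model = MarkovActionModel()
model.train([["open", "search", "browse", "add_to_cart"],
             ["open", "browse", "browse", "exit"]])
print(model.generate("open"))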
As an example, the script execution recording unit 104 may record the execution process of the simulation script and generate a recorded video; the script execution recording unit 104 may generate blurred portrait data according to the recorded execution process and record the blurred portrait data; and the script execution recording unit 104 may generate an analysis report based on the user usage data to be collected by the application and the blurred portrait data. The display unit 105 may provide at least one of the recorded video and the analysis report to the user.
It should be understood that the specific processing performed by the apparatus for obscuring a user representation according to the exemplary embodiment of the present disclosure has been described in detail with reference to FIGS. 1 to 5 and FIGS. 7 to 11, and the details thereof will not be repeated here.
According to the method and apparatus for obscuring the user portrait described above, a merchant's judgment based on the user's usage data can be confused when the user needs it; the content received by the user is more diversified, and does not become increasingly narrow and scenario-targeted as the usage time increases.
Further, it should be understood that the various units in the apparatus for obscuring a user representation according to the exemplary embodiments of the present disclosure may be implemented as hardware components and/or software components. Those skilled in the art may implement the individual units using, for example, Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs), according to the processing performed by each unit.
A computer-readable storage medium according to an exemplary embodiment of the present disclosure stores a computer program that, when executed by a processor, causes the processor to perform the method of blurring a user representation of the above-described exemplary embodiment. The computer readable storage medium is any data storage device that can store data which can be read by a computer system. Examples of computer-readable storage media include: read-only memory, random access memory, read-only optical disks, magnetic tapes, floppy disks, optical data storage devices, and carrier waves (such as data transmission through the internet via wired or wireless transmission paths).
Although a few exemplary embodiments of the present disclosure have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (28)

1. A method of obscuring a user representation, wherein the method comprises:
monitoring an application used by a user to acquire user usage data of the application;
in response to a request by a user to blur a representation of the user, generating a simulation script for running the application based on user usage data for the application;
and when the user does not use the application, executing the simulation script.
2. The method of claim 1, wherein the monitored application is a monitoring target set by the user.
3. The method of claim 1, wherein monitoring the application used by the user to obtain user usage data for the application comprises:
the method further includes identifying which user usage data from among the user usage data of the application is to be collected by the application and recording the user usage data that is to be collected by the application.
4. The method of claim 3, wherein identifying which of the user usage data of the application will be collected by the application comprises:
identifying, according to the privacy terms of the application, the user usage data that the privacy terms indicate the application will collect as the user usage data to be collected by the application.
5. The method of claim 3, wherein identifying which of the user usage data of the application will be collected by the application comprises:
generating application recommendation data of the application according to the content recommended to the user by the application;
identifying, by comparing the user usage data with the application recommendation data over a period of time, user usage data associated with the application recommendation data among the user usage data of the application as the user usage data to be collected by the application.
6. The method of claim 3, wherein the method further comprises:
generating application recommendation data of the application according to the content recommended to the user by the application;
and generating a current portrait of the user according to the application recommendation data.
7. The method of claim 6, wherein the method further comprises:
providing the user with at least one of the user usage data to be collected by the application and the current portrait of the user;
asking the user whether the user wants to blur the user representation.
8. The method of claim 6, wherein the method further comprises: setting the user's desired portrait by the user selecting a character type of the desired portrait, or by the user selecting a content category and/or a content category proportion included in the application.
9. The method of claim 8, wherein the simulation script comprises an action framework and a category list for running the application,
the step of generating the simulation script for running the application comprises: formulating the action framework according to the user usage data to be collected by the application, and formulating the category list according to the type of the application.
10. The method of claim 9, wherein executing the simulation script comprises:
when the simulation script is executed, determining and executing the execution actions of the simulation script in real time according to the difference between the current portrait and the desired portrait of the user.
11. The method of claim 3, wherein generating a simulation script for running an application comprises:
training an artificial intelligence model by using the user usage data to be collected by the application;
and generating the simulation script by using the trained artificial intelligence model.
12. The method of claim 3, wherein the method further comprises:
recording the execution process of the simulation script to generate a recorded video;
generating blurred portrait data according to the recorded execution process, and recording the blurred portrait data;
generating an analysis report based on the user usage data to be collected by the application and the blurred portrait data;
providing at least one of the recorded video and the analysis report to a user.
13. The method of claim 1, wherein the type of application comprises: video type, shopping type, news type, music type, and social type.
14. An apparatus that blurs a user representation, wherein the apparatus comprises:
an application monitoring unit, configured to monitor an application used by a user to acquire user usage data of the application;
a script generation unit, configured to generate, in response to a user's request to blur the user representation, a simulation script for running the application according to the user usage data of the application;
and a script execution unit, configured to execute the simulation script when the user does not use the application.
15. The apparatus of claim 14, wherein the monitored application is a monitoring target set by the user.
16. The apparatus of claim 14, wherein,
the application monitoring unit identifies which user usage data among the user usage data of the application are to be collected by the application and records the user usage data that are to be collected by the application.
17. The apparatus of claim 16, wherein,
the application monitoring unit identifies, according to the privacy terms of the application, the user usage data that the privacy terms indicate the application will collect as the user usage data to be collected by the application.
18. The apparatus of claim 16, wherein,
the application monitoring unit generates application recommendation data of the application according to the content recommended to the user by the application;
the application monitoring unit identifies user usage data associated with the application recommendation data among the user usage data of the application as user usage data to be collected by the application by comparing the user usage data with the application recommendation data over a period of time.
19. The apparatus of claim 16, wherein,
the script generation unit is also used for generating application recommendation data of the application according to the content recommended to the user by the application;
the script generation unit generates a current portrait of the user according to the application recommendation data.
20. The apparatus of claim 19, wherein the apparatus further comprises a display unit,
the display unit provides the user with at least one of the user usage data to be collected by the application and the current portrait of the user;
the display unit asks the user whether the user wants to blur the user representation.
21. The apparatus of claim 19, wherein the apparatus further comprises a display unit, and the user sets the user's desired portrait through the display unit by selecting a character type of the desired portrait or by selecting a content category and/or a content category proportion included in the application.
22. The apparatus of claim 21, wherein the simulation script comprises an action framework and a category list for running the application,
the script execution unit formulates the action framework according to the user usage data to be collected by the application, and formulates the category list according to the type of the application.
23. The apparatus of claim 22, wherein,
when executing the simulation script, the script execution unit determines and executes the execution actions of the simulation script in real time according to the difference between the current portrait and the desired portrait of the user.
24. The apparatus of claim 16, wherein,
the script generation unit trains an artificial intelligence model by using the user usage data to be collected by the application;
and the script generation unit generates the simulation script by using the trained artificial intelligence model.
25. The apparatus of claim 16, wherein the apparatus further comprises: a script execution recording unit and a display unit,
the script execution recording unit records the execution process of the simulation script and generates a recorded video;
the script execution recording unit generates blurred portrait data according to the recorded execution process and records the blurred portrait data;
the script execution recording unit generates an analysis report based on the user usage data to be collected by the application and the blurred portrait data;
the display unit provides at least one of the recorded video and the analysis report to a user.
26. The device of claim 14, wherein the type of application comprises: video type, shopping type, news type, music type, and social type.
27. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of obscuring a user representation as claimed in any of claims 1 to 13.
28. An electronic device, wherein the electronic device comprises:
a processor;
a memory storing a computer program which, when executed by the processor, implements a method of obscuring a user representation as claimed in any of claims 1 to 13.
CN202011133207.8A 2020-10-21 2020-10-21 Method and apparatus for obscuring a user representation Pending CN112214680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011133207.8A CN112214680A (en) 2020-10-21 2020-10-21 Method and apparatus for obscuring a user representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011133207.8A CN112214680A (en) 2020-10-21 2020-10-21 Method and apparatus for obscuring a user representation

Publications (1)

Publication Number Publication Date
CN112214680A true CN112214680A (en) 2021-01-12

Family

ID=74056386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011133207.8A Pending CN112214680A (en) 2020-10-21 2020-10-21 Method and apparatus for obscuring a user representation

Country Status (1)

Country Link
CN (1) CN112214680A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060136294A1 (en) * 2004-10-26 2006-06-22 John Linden Method for performing real-time click fraud detection, prevention and reporting for online advertising
CN110766489A (en) * 2018-07-25 2020-02-07 北京三星通信技术研究有限公司 Method for requesting content and providing content and corresponding device
CN109871478A (en) * 2018-12-24 2019-06-11 阿里巴巴集团控股有限公司 Network search method and device
CN111353091A (en) * 2018-12-24 2020-06-30 北京三星通信技术研究有限公司 Information processing method and device, electronic equipment and readable storage medium
CN109871713A (en) * 2019-02-12 2019-06-11 重庆邮电大学 A kind of method for secret protection based on Internet robot
CN110889133A (en) * 2019-11-07 2020-03-17 中国科学院信息工程研究所 Anti-network tracking privacy protection method and system based on identity behavior confusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU HUI: "Research on Big-Data Price Discrimination and Its Countermeasure Techniques", China Master's Theses Full-text Database, Information Science and Technology, pages 138-246 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination