CN111198960A - Method and device for determining user portrait data, electronic equipment and storage medium - Google Patents
- Publication number
- CN111198960A (application CN201911381002.9A)
- Authority
- CN
- China
- Prior art keywords
- picture
- key information
- user
- module
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Abstract
The embodiments of the present invention provide a method and an apparatus for determining user portrait data, an electronic device, and a storage medium. The method comprises: acquiring at least one picture, or a webpage containing pictures, clicked or browsed by a user while using an application program (APP); recognizing each picture, or each picture in the webpage, to obtain the corresponding key information in each picture; and using the key information from all the pictures as user portrait data. In the embodiments of the present invention, one or more pictures clicked or browsed by the user while using the application program are recognized to obtain the key information in each picture, and that key information is used as user portrait data. Because the key information in pictures serves as an additional collection source for user portrait data, the user portrait data is made more complete and its accuracy is improved.
Description
Technical Field
The present invention relates to the field of terminal technologies, and in particular, to a method and an apparatus for determining user portrait data, an electronic device, and a storage medium.
Background
A user portrait (persona) is a virtual representation of a real user: a target user model built on top of a series of real data. Currently, the user portrait data in a user portrait is collected in two ways. In the first way, the user portrait data is obtained through active participation of the user; for example, the user actively fills in information when registering an account. In the second way, the browsing data of the user is collected; for example, the 58 Tongcheng (58.com) APP records the footprints of the user, a background server analyzes these footprints to obtain fuzzy user portrait data, and the fuzzy user portrait data is then used to recommend corresponding information to the user, that is, to display information the user may like.
However, the user portrait data collected in the first way is incomplete: user portrait data gathered through a survey is far from perfect, and asking a user to answer dozens of questions at registration cannot yield sufficiently complete data either. The user portrait data collected in the second way, in turn, depends too heavily on text; for example, when a post displays three pictures and the user clicks one of them to enter a detail page, the content of the clicked picture itself is not captured.
Therefore, how to acquire accurate user portrait data is a technical problem to be solved at present.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a method for determining user portrait data, so as to solve the technical problem in the related art that inaccurate collection reduces the accuracy of user portrait data.
Correspondingly, the embodiments of the present invention further provide an apparatus for determining user portrait data, an electronic device, and a storage medium, so as to ensure the implementation and application of the method.
In order to solve the above problems, the present invention is realized by the following technical solutions:
a first aspect provides a method for determining user portrait data, the method comprising:
acquiring at least one picture, or a webpage containing pictures, clicked or browsed by a user while using an application program (APP);
identifying each picture or the pictures in the webpage to obtain the corresponding key information in each picture;
and using key information in all pictures as user portrait data.
Optionally, after obtaining the key information in each corresponding picture, the method further includes:
judging whether the key information in all the pictures is repeated;
if yes, performing cumulative weighting processing on the repeated key information;
and using the key information subjected to cumulative weighting processing as user portrait data.
Optionally, the identifying each picture or the picture in the web page to obtain the key information in each corresponding picture includes:
and identifying each picture or the pictures in the webpage by using a picture identification machine learning model to obtain the corresponding key information in each picture.
Optionally, the identifying each picture or the pictures in the web page by using the picture identification machine learning model to obtain the corresponding key information in each picture includes:
detecting a parameter input by a user, wherein the parameter is the picture;
and calling a prediction Application Program Interface (API) function of the picture recognition machine learning model to perform prediction analysis on each picture to obtain corresponding key information in each picture.
Optionally, before the obtaining of at least one picture or a webpage with a picture clicked or browsed by the user when using the application program, the method further includes:
receiving an operation instruction input by a user;
and integrating a picture recognition machine learning model in the application program APP according to the operation instruction.
A second aspect provides an apparatus for determining user portrait data, comprising:
the acquisition module is used for acquiring at least one picture or a webpage with the picture clicked or browsed by a user when the user uses the application program;
the identification module is used for identifying each picture or the pictures in the webpage to obtain the corresponding key information in each picture;
and the first determining module is used for taking the key information in all the pictures as user portrait data.
Optionally, the apparatus further comprises:
the judging module is used for judging whether the key information in all the pictures obtained by the identifying module is repeated or not;
the weighting module is used for performing accumulated weighting processing on the repeated key information when the judging module judges that the repeated key information exists;
and the second determination module is used for using the key information cumulatively weighted by the weighting module as the user portrait data.
Optionally, the identification module is specifically configured to identify each picture or the pictures in the web page by using a picture identification machine learning model, so as to obtain the corresponding key information in each picture.
Optionally, the identification module includes:
the detection module is used for detecting a parameter input by a user, wherein the parameter is the picture;
and the prediction module is used for calling a prediction Application Program Interface (API) function of the picture recognition machine learning model to perform prediction analysis on each picture so as to obtain the corresponding key information in each picture.
Optionally, the apparatus further comprises:
the receiving module is used for receiving an operation instruction input by a user before the acquiring module acquires at least one picture or a webpage with the picture clicked or browsed by the user when the user uses the application program;
and the integration module is used for integrating the image recognition machine learning model into the application program APP according to the operation instruction.
A third aspect provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the method for determining user portrait data described above.
A fourth aspect provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for determining user portrait data described above.
Compared with the prior art, the embodiment of the invention has the following advantages:
in the embodiments of the present invention, after at least one picture, or a webpage containing pictures, clicked or browsed by a user while using an application program (APP) is acquired, each picture, or each picture in the webpage, is recognized to obtain the corresponding key information in each picture; the key information from all the pictures is then used as user portrait data. That is, one or more pictures clicked or browsed by the user while using the application program are recognized to obtain their key information, and that key information is used as user portrait data. Because the key information in pictures serves as a collection source for user portrait data, the user portrait data is made more complete and its accuracy is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
FIG. 1 is a flow chart of a method for determining user portrait data according to an embodiment of the present invention;
FIG. 2 is another flow chart of a method for determining user portrait data according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an apparatus for determining user portrait data according to an embodiment of the present invention;
FIG. 4 is another schematic structural diagram of an apparatus for determining user portrait data according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an identification module according to an embodiment of the present invention;
FIG. 6 is yet another schematic structural diagram of an apparatus for determining user portrait data according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, which is a flowchart of a method for determining user portrait data according to an embodiment of the present invention, the method may include the following steps:
step 101: acquiring at least one picture, or a webpage containing pictures, clicked or browsed by a user while using an application program;
step 102: identifying each picture or the pictures in the webpage to obtain the corresponding key information in each picture;
step 103: and using key information in all pictures as user portrait data.
In this embodiment, the key information (more precisely, the name of the key information) recognized from a picture may be used as an information source for the user portrait data; of course, the user portrait data in the user portrait may also include the gender, age, preferences, and the like of the user, so that the APP can subsequently make personalized recommendations for the user based on more accurate user portrait data.
The method for determining user portrait data provided by the embodiment of the present invention may be applied to a mobile terminal, a server, a client (including an APP client), a backend, or a system, which is not limited here; the device implementing the method may be an electronic device such as a smart phone, a laptop, or a tablet computer, which is likewise not limited here.
The following describes in detail the specific implementation steps of the method for determining user portrait data according to the embodiment of the present invention with reference to fig. 1.
Firstly, step 101 is executed: acquiring at least one picture, or a webpage containing pictures, clicked or browsed by the user while using an application program.
in this step, while using an application program (APP), for example when opening the APP or entering a certain channel in it, the user may click or browse a favorite picture or a webpage containing pictures (e.g., a post with pictures). The APP thus obtains at least one picture, or a webpage containing pictures, clicked or browsed by the user, and takes these pictures as target pictures. It should be noted that a picture recognition machine learning model is integrated into the APP of this embodiment in advance, and the one or more clicked or browsed pictures are recognized to obtain the key information in each picture, such as the names of the people or things it contains.
Secondly, executing step 102, identifying each picture or the pictures in the webpage to obtain corresponding key information in each picture;
in this step, each picture, or each picture in the webpage, may be recognized by the picture recognition machine learning model to obtain the recognition result features (Features) of each picture. The Features comprise the key information name (name) and a value (value), where the value contains the region (bounds) of the key information in the picture, i.e., its position and size. One or more pieces of key information may be recognized in each picture, which is not limited in this embodiment. It should be noted that different picture recognition machine learning models may output differently structured result Features.
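As a rough illustration of the result structure just described (the class and field names here are hypothetical, chosen only for this sketch, and are not defined by the patent), the Features of one picture, that is, key information name, value, and region bounds, can be modeled as follows:

```python
from dataclasses import dataclass

@dataclass
class Bounds:
    # Region of the key information inside the picture: position and size.
    x: int
    y: int
    width: int
    height: int

@dataclass
class Feature:
    # One piece of key information recognized in a picture.
    name: str       # e.g. "barbecue" or "woman"
    value: float    # a numeric value, e.g. model confidence
    bounds: Bounds  # where the key information appears in the picture

# A recognition result for one picture may contain several Features.
result = [
    Feature("barbecue", 0.92, Bounds(10, 20, 200, 150)),
    Feature("woman", 0.88, Bounds(120, 5, 80, 160)),
]

# For the user portrait, only the names of the key information are needed.
names = [f.name for f in result]
```

As the text notes, only the `name` fields are carried forward into the user portrait data; the bounds are part of the raw recognition result.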
The picture recognition machine learning model in the embodiment of the present invention uses a machine-learning-based picture recognition technology, which is essentially an image classification technology: through a trained machine learning model, it recognizes the key information in a picture, performs classification, and determines what object, face, or other item that key information represents; of course, the model may also output the region (position and size) of the object or face within the picture.
The process of recognizing a picture with the picture recognition machine learning model is as follows:
firstly, the APP integrated with the picture recognition machine learning model detects a parameter input by the user, where the parameter is the picture; then, a prediction application program interface (API) function of the picture recognition machine learning model is called to perform prediction analysis on each picture, obtaining the corresponding prediction result Features, which comprise the key information name (name) and value (value) in the picture, where the value contains the region (bounds) of the key information in the picture, i.e., its position and size.
That is, in this embodiment, a picture is used as the input parameter of the picture recognition machine learning model, the prediction API function (or API method) of the model is called to perform prediction, machine learning analysis is performed on the prediction result, and the recognized result is output. It should be noted that different picture recognition machine learning models may produce different output results.
Taking a Resnet50 training model as an example: the Resnet50 model is obtained first (there are various ways to obtain it, not described in this embodiment); then, after the Resnet50 model is integrated into the APP, a class and its methods are automatically generated by the integrated development environment (IDE, such as Apple's Xcode). The generated class and methods are then invoked to recognize the key information in the picture. For example, the model predicts the key information in the picture through a prediction method (func prediction); the input of prediction is of the Resnet50Input type, whose concrete content is the image parameter, and the output is of the Resnet50Output type, whose concrete content is the predicted result Features, which may include the name and value of the key information. In this embodiment, only the name of the recognized key information is needed; for example, the key information in the picture may be elements such as a woman, a man and/or a barbecue.
For another example, taking a WBLogoImageClassifier model: the model is obtained first and then integrated into the APP, and the class and methods automatically generated by the integrated development environment (IDE, such as Apple's Xcode) are invoked to recognize the key information in the picture. Here the model is named WBLogoImageClassifier, and the input and output class names are formed by appending keywords to the model name as a prefix: WBLogoImageClassifierInput is the input class of the model, and WBLogoImageClassifierOutput is the output class. The model may employ a predictionFromFeatures method, whose input parameter is a WBLogoImageClassifierInput instance (object) and whose output is the predicted result, a WBLogoImageClassifierOutput instance (object).
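The generated-class convention described above can be mimicked in a minimal Python stub. Note that Core ML actually generates Swift/Objective-C classes in the IDE; the class names below only mirror the naming convention, and the prediction result is stubbed rather than produced by a real model:

```python
class WBLogoImageClassifierInput:
    # Input class: model name plus the "Input" suffix.
    def __init__(self, image):
        self.image = image

class WBLogoImageClassifierOutput:
    # Output class: model name plus the "Output" suffix.
    def __init__(self, features):
        self.features = features  # predicted key information: name -> value

class WBLogoImageClassifier:
    # Stub standing in for the IDE-generated model class.
    def prediction_from_features(self, model_input):
        # A real model would run inference on model_input.image here;
        # this stub returns a fixed, illustrative result.
        return WBLogoImageClassifierOutput(features={"barbecue": 0.9})

model = WBLogoImageClassifier()
out = model.prediction_from_features(WBLogoImageClassifierInput(image="photo.jpg"))
```

The point of the convention is that the caller only ever touches the three generated names: the model class, its input class, and its output class.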
Finally, step 103 is executed to use the key information in all pictures as the user portrait data.
In this step, if the user clicked or browsed only one picture this time, the one or more pieces of key information recognized in that picture are used as user portrait data; that is, a single collection is performed, and the recognized key information is saved into the user portrait as user portrait data.
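A minimal sketch of this single-collection step, with a hypothetical in-memory list standing in for the stored user portrait:

```python
# User portrait data collected from pictures (names of key information).
user_portrait_data: list[str] = []

def collect_single(recognized_key_info: list[str]) -> None:
    # Single collection: each piece of key information recognized in the
    # one clicked or browsed picture becomes an entry of user portrait data.
    user_portrait_data.extend(recognized_key_info)

# The user clicked one picture in which two pieces of key info were recognized.
collect_single(["barbecue", "woman"])
```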
In the embodiment of the present invention, after at least one picture, or a webpage containing pictures, clicked or browsed by a user while using an application program (APP) is acquired, each picture, or each picture in the webpage, is recognized to obtain the corresponding key information in each picture; the key information from all the pictures is then used as user portrait data. That is, one or more pictures clicked or browsed by the user while using the application program are recognized to obtain their key information, and that key information is used as user portrait data. Because the key information in pictures serves as a collection source for user portrait data, the user portrait data is made more complete and its accuracy is improved.
Referring to fig. 2, another flow chart of a method for determining user portrait data according to an embodiment of the present invention is shown, where the method includes the following steps:
step 201: acquiring at least one picture or a webpage with the picture clicked or browsed by a user when the user uses an application program APP;
step 202: identifying each picture or the pictures in the webpage to obtain the corresponding key information in each picture;
in this embodiment, step 201 and step 202 are the same as step 101 and step 102, and the specific implementation process thereof is described above and will not be described herein again.
Step 203: judging whether the key information in all the pictures is repeated, if so, executing a step 204; otherwise, returning to step 203;
in this step, when the user clicks or browses multiple pictures (for example, several similar pictures), the application program APP performs recognition multiple times and uses each recognition result (i.e., the key information) as user portrait data. Some key information in the user portrait data is therefore duplicated; to handle this, the result of each recognition (i.e., the key information) must be checked for duplicates.
Step 204: performing accumulative weighting processing on the repeated key information;
based on step 203, when it is determined that repeated key information exists, the repeated key information needs to be cumulatively weighted; for example, if there are two pieces of key information both named barbecue, the cumulatively weighted key information is barbecue (2), and so on. Table 1 shows an example; in practical applications, the present invention is not limited thereto.
TABLE 1
It should be noted that the higher the weight in table 1, the stronger the user's preference: the key information with the highest weight indicates the strongest user preference, and the key information with the lowest weight indicates the weakest.
Step 205: and using the key information subjected to accumulative weighting processing as user portrait data.
In the embodiment of the present invention, the recognition result of one or a few pictures clicked by the user may not be accurate enough as user portrait data; however, as more pictures clicked by the user are recognized and the identical key information in the recognition results is continuously cumulatively weighted, the obtained user portrait data becomes more and more accurate.
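The duplicate judgment and cumulative weighting of steps 203 and 204 can be sketched with a counter. This is a simplification under the assumption that a key information item's weight is simply its occurrence count, as in the barbecue (2) example above:

```python
from collections import Counter

def weight_key_info(recognitions: list[list[str]]) -> Counter:
    """Merge the key information recognized from multiple pictures.

    Repeated key information is cumulatively weighted: the higher the
    resulting count, the stronger the inferred user preference.
    """
    weights: Counter = Counter()
    for key_info in recognitions:
        weights.update(key_info)  # duplicates accumulate weight
    return weights

# Two of the user's pictures both contain "barbecue" -> barbecue gets weight 2.
portrait = weight_key_info([["barbecue", "woman"], ["barbecue", "man"]])
```

With this representation, the "judging whether the key information is repeated" step is implicit: updating the counter both detects duplicates and applies the cumulative weighting in one pass.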
Optionally, in another embodiment, on the basis of the above embodiment, before the obtaining of at least one picture or a web page with a picture clicked or browsed by the user when using the application program, the method may further include: 1) receiving an operation instruction input by a user; 2) and integrating a picture recognition machine learning model in the application program APP according to the operation instruction.
That is, the picture recognition machine learning model is integrated into the application program APP in advance according to an operation instruction of the implementer or developer; this is preliminary preparation work by the implementer or developer, who first needs to obtain the picture recognition machine learning model and then integrate it.
There are many ways to obtain a picture recognition machine learning model; this embodiment takes three as examples.
In the first way, the implementer or developer directly downloads an existing picture recognition machine learning model from the Internet; which type of picture recognition model is needed can be chosen according to requirements.
In the second way, the implementer or developer creates and trains the model according to requirements. This process requires a large amount of work, and extensive training is needed to ensure the accuracy of the trained model. For example, for Apple's Core ML models, there are corresponding tools for creating and generating a training model: Create ML is used for creation, and Turi Create is the training tool. Of course, a model can also be created and trained with other companies' products and then converted into a model supported by the current platform.
In the third way, a machine learning framework already integrated in the operating system is used. For example, the iOS operating system integrates the Vision framework, which supports Face Detection, Human and Animal Detection, Text Recognition, Object Recognition, Machine-Learning Image Analysis, and the like. With such a mature framework, no other ML model needs to be relied upon, as long as the Vision framework supports the current requirements.
2) Integrating a picture recognition machine learning model in the application APP.
In this step, the integration of a picture recognition machine learning model into the application program APP is well known to those skilled in the art and is not described again here.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 3, a schematic structural diagram of a device for determining user portrait data according to an embodiment of the present invention may include: an acquisition module 301, a recognition module 302 and a first determination module 303, wherein,
the acquiring module 301 is configured to acquire at least one picture or a web page with a picture clicked or browsed by a user when using an application program;
the identification module 302 is configured to identify each picture or a picture in the web page to obtain key information in each corresponding picture;
the first determining module 303 is configured to use key information in all pictures as user portrait data.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus may further include: a judgment module 401, a weighting module 402 and a second determination module 403, which are schematically shown in fig. 4, wherein,
the determining module 401 is configured to determine whether the key information in all the pictures obtained by the identifying module 302 is repeated;
the weighting module 402 is configured to perform cumulative weighting processing on the repeated key information when the determining module 401 determines that the repeated key information exists;
the second determining module 403 is configured to use the key information cumulatively weighted by the weighting module 402 as user portrait data.
Optionally, in another embodiment, on the basis of the above embodiment, the identification module is specifically configured to identify each picture or a picture in the web page by using a picture identification machine learning model, so as to obtain key information in each corresponding picture.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the identifying module 302 includes: the detection module 501 and the prediction module 502 are schematically shown in fig. 5, wherein,
the detecting module 501 is configured to detect a parameter input by a user, where the parameter is the picture;
the prediction module 502 is configured to call a prediction application program interface API function of the picture recognition machine learning model to perform prediction analysis on each picture, so as to obtain key information in each corresponding picture.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus may further include: a receiving module 601 and an integrating module 602, as schematically shown in fig. 6, wherein,
the receiving module 601 is configured to receive an operation instruction input by a user before the acquisition module 301 acquires at least one picture, or a web page with pictures, clicked or browsed by the user when using the application program;
the integrating module 602 is configured to integrate the picture recognition machine learning model into the application program APP according to the operation instruction.
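One way to picture the receive-and-integrate step of modules 601 and 602: on a user's operation instruction, the APP loads and attaches the recognition model before any pictures are acquired. The instruction string and the `load_recognition_model` hook are illustrative assumptions only, not part of the patent.

```python
def load_recognition_model():
    # Placeholder loader; a real APP would load trained model weights.
    return object()

class App:
    """Minimal APP that integrates the picture recognition model on demand."""
    def __init__(self):
        self.model = None

    def receive_instruction(self, instruction):
        # Integrate the model only when the user explicitly asks for it.
        if instruction == "enable_picture_recognition":
            self.model = load_recognition_model()

app = App()
app.receive_instruction("enable_picture_recognition")
```

Gating integration on an explicit instruction matches the claim's ordering: the model is in place before pictures are acquired and identified.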
Since the device embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiment.
Optionally, an embodiment of the present invention further provides an electronic device including a processor, a memory, and a computer program stored in the memory and capable of running on the processor. When executed by the processor, the computer program implements each process of the foregoing method for determining user portrait data and achieves the same technical effects; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the foregoing method for determining user portrait data and achieves the same technical effects; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminals (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element preceded by the phrase "comprising an … …" does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The foregoing describes in detail a method, apparatus, electronic device and storage medium for determining user portrait data according to the present invention. Specific examples are applied herein to illustrate the principles and embodiments of the present invention, and the description of the above examples is provided only to help understand the method and its core ideas. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (12)
1. A method for determining user portrait data, comprising:
acquiring at least one picture or a webpage with the picture clicked or browsed by a user when the user uses an application program APP;
identifying each picture or the pictures in the webpage to obtain the corresponding key information in each picture;
and using key information in all pictures as user portrait data.
2. The method of claim 1, wherein after obtaining the key information in each corresponding picture, the method further comprises:
judging whether the key information in all the pictures is repeated;
if so, performing cumulative weighting processing on the repeated key information;
and using the key information subjected to cumulative weighting processing as the user portrait data.
3. The method according to claim 1 or 2, wherein the identifying each picture or the picture in the web page to obtain the key information in each corresponding picture comprises:
and identifying each picture or the pictures in the webpage by using a picture identification machine learning model to obtain the corresponding key information in each picture.
4. The method according to claim 3, wherein the identifying each picture or the pictures in the web page by using the picture recognition machine learning model to obtain the corresponding key information in each picture comprises:
detecting a parameter input by a user, wherein the parameter is the picture;
and calling a prediction Application Program Interface (API) function of the picture recognition machine learning model to perform prediction analysis on each picture to obtain corresponding key information in each picture.
5. The method according to claim 1 or 2, wherein before the acquiring at least one picture or web page with a picture clicked or browsed by the user when using the application program, the method further comprises:
receiving an operation instruction input by a user;
and integrating a picture recognition machine learning model in the application program APP according to the operation instruction.
6. An apparatus for determining user portrait data, comprising:
the acquisition module is used for acquiring at least one picture or a webpage with the picture clicked or browsed by a user when the user uses the application program;
the identification module is used for identifying each picture or the pictures in the webpage to obtain the corresponding key information in each picture;
and the first determining module is used for taking the key information in all the pictures as user portrait data.
7. The apparatus of claim 6, further comprising:
the judging module is used for judging whether the key information in all the pictures obtained by the identifying module is repeated or not;
the weighting module is used for performing cumulative weighting processing on the repeated key information when the judging module judges that repeated key information exists;
and the second determination module is used for taking the key information subjected to cumulative weighting processing by the weighting module as the user portrait data.
8. The apparatus according to claim 6 or 7,
the identification module is specifically configured to identify each picture or the pictures in the web page by using a picture identification machine learning model to obtain the corresponding key information in each picture.
9. The apparatus of claim 8, wherein the identification module comprises:
the detection module is used for detecting a parameter input by a user, wherein the parameter is the picture;
and the prediction module is used for calling a prediction Application Program Interface (API) function of the picture recognition machine learning model to perform prediction analysis on each picture so as to obtain the corresponding key information in each picture.
10. The apparatus of claim 6 or 7, further comprising:
the receiving module is used for receiving an operation instruction input by a user before the acquiring module acquires at least one picture or a webpage with the picture clicked or browsed by the user when the user uses the application program;
and the integration module is used for integrating the image recognition machine learning model into the application program APP according to the operation instruction.
11. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method for determining user portrait data according to any one of claims 1 to 5.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the method for determining user portrait data according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911381002.9A CN111198960A (en) | 2019-12-27 | 2019-12-27 | Method and device for determining user portrait data, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911381002.9A CN111198960A (en) | 2019-12-27 | 2019-12-27 | Method and device for determining user portrait data, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111198960A true CN111198960A (en) | 2020-05-26 |
Family
ID=70744457
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911381002.9A Pending CN111198960A (en) | 2019-12-27 | 2019-12-27 | Method and device for determining user portrait data, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111198960A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106095884A (en) * | 2016-06-03 | 2016-11-09 | 深圳码隆科技有限公司 | A kind of relative article information processing method based on picture and device |
CN106790366A (en) * | 2016-11-22 | 2017-05-31 | 东软集团股份有限公司 | Access website identification method and device and build the method and server of user's portrait |
US20180033040A1 (en) * | 2007-02-01 | 2018-02-01 | Iii Holdings 4, Llc | Dynamic reconfiguration of web pages based on user behavioral portrait |
CN109063542A (en) * | 2018-06-11 | 2018-12-21 | 平安科技(深圳)有限公司 | Image identification method, device, computer equipment and storage medium |
CN109815381A (en) * | 2018-12-21 | 2019-05-28 | 平安科技(深圳)有限公司 | User's portrait construction method, system, computer equipment and storage medium |
CN109815386A (en) * | 2018-12-21 | 2019-05-28 | 厦门市美亚柏科信息股份有限公司 | A kind of construction method, device and storage medium based on user's portrait |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112395498A (en) * | 2020-11-02 | 2021-02-23 | 北京五八信息技术有限公司 | Topic recommendation method and device, electronic equipment and storage medium |
CN116628153A (en) * | 2023-05-10 | 2023-08-22 | 上海任意门科技有限公司 | Method, device, equipment and medium for controlling dialogue of artificial intelligent equipment |
CN116628153B (en) * | 2023-05-10 | 2024-03-15 | 上海任意门科技有限公司 | Method, device, equipment and medium for controlling dialogue of artificial intelligent equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107690657B (en) | Trade company is found according to image | |
RU2631770C2 (en) | Method and device for return to previously viewed page control | |
US20200202226A1 (en) | System and method for context based deep knowledge tracing | |
KR20190016653A (en) | System and method for providing intelligent counselling service | |
US10769196B2 (en) | Method and apparatus for displaying electronic photo, and mobile device | |
CN103988202A (en) | Image attractiveness based indexing and searching | |
CN110827236B (en) | Brain tissue layering method, device and computer equipment based on neural network | |
CN107735766A (en) | The system and method for providing recommendation for the proactive property of the user to computing device | |
CN111191133B (en) | Service search processing method, device and equipment | |
TWI791176B (en) | Method, system, device and computer program carrier for automatically identifying effective data collection modules | |
EP3739470A1 (en) | Method and apparatus for performing categorised matching of videos, and selection engine | |
CN109903127A (en) | Group recommendation method and device, storage medium and server | |
CN107992602A (en) | Search result methods of exhibiting and device | |
KR101450453B1 (en) | Method and apparatus for recommending contents | |
CN111198960A (en) | Method and device for determining user portrait data, electronic equipment and storage medium | |
CN111294620A (en) | Video recommendation method and device | |
US20190324778A1 (en) | Generating contextual help | |
CN117349515A (en) | Search processing method, electronic device and storage medium | |
CN106445934A (en) | Data processing method and apparatus | |
US20230169527A1 (en) | Utilizing a knowledge graph to implement a digital survey system | |
Lu et al. | An emotional-aware mobile terminal accessibility-assisted recommendation system for the elderly based on haptic recognition | |
Fernández et al. | Estimating context aware human-object interaction using deep learning-based object recognition architectures | |
CN111915637A (en) | Picture display method and device, electronic equipment and storage medium | |
CN111209501B (en) | Picture display method and device, electronic equipment and storage medium | |
CN109902531B (en) | User management method, device, medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200526 |