CN108804704A - Deep user portrait method and device - Google Patents
Deep user portrait method and device
- Publication number
- CN108804704A (application CN201810633410.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- user
- portrait
- depth
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
Landscapes
- Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Engineering & Computer Science (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Finance (AREA)
- Entrepreneurship & Innovation (AREA)
- Game Theory and Decision Science (AREA)
- Data Mining & Analysis (AREA)
- Economics (AREA)
- Marketing (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a deep user portrait method and device, relating to the technical field of big-data user profiling. The deep user portrait method first obtains a user's raw data, determines the data type of the raw data and, according to that data type, performs abstraction conversion on the raw data to obtain abstract data. It then determines the representation-learning neural network corresponding to the type of the abstract data and models the abstract data with that network to obtain a data representation model. Based on the data representation model, it designs and trains an objective function of the representation-learning model and uses the objective function to build a deep portrait of the user. By mapping user data into a high-dimensional space for modeling and deep profiling, the method retains more of the user data while shielding private data, and is better suited to complex data mining and multi-service scenarios.
Description
Technical field
The present invention relates to the technical field of big-data user profiling, and in particular to a deep user portrait method and device.
Background technology
With the rapid development and popularization of the Internet, more and more human behavior is being informatized and digitized. Big-data technology is an information-processing technology that takes the entire data resource of a system as its object and discovers the correlations among the data; it is already widely used as the powerful background support behind Internet process optimization, targeted messaging and advertisement pushing, and the personalization and improvement of user services. Processing the data of a large number of users with big-data techniques yields the group behavior trends of those users, whereas manually screening and analyzing user data consumes a great deal of manpower and material resources and carries a high operating cost; moreover, manually produced user-analysis results are inaccurate and position the user imprecisely. For these reasons, a user-portrait approach is now mostly adopted to abstract and refine a user's big data into a user model, which offers a high degree of automation, high speed and accurate user analysis.
However, the data modeling of current user portraits mainly takes the form of data labels. Labels are formulated for a given business scenario and identify user-dimension data such as gender, education and consumption propensity. Although label portraits are widely used in business strategies because they are highly interpretable and easy to apply, they suffer from poor reusability across different businesses, weak expressive power over the data, and labels that easily leak data information.
Invention content
In view of this, embodiments of the present invention aim to provide a deep user portrait method and device, to solve the above problems of prior-art user portraits: weak data expression capability and labels that easily leak data information.
In a first aspect, an embodiment of the present invention provides a deep user portrait method. The method includes: obtaining a user's raw data and determining the data type of the raw data; performing abstraction conversion on the raw data according to the data type to obtain abstract data; determining a representation-learning neural network corresponding to the type of the abstract data and modeling the abstract data with that network to obtain a data representation model; and designing and training an objective function of the representation-learning model based on the data representation model, then using the objective function to build a deep portrait of the user, thereby obtaining the deep user portrait.
With reference to the first aspect, designing and training the objective function of the representation-learning model based on the representation data includes: determining the label annotation status of the representation data; and, according to the label annotation status, deciding whether to use supervised learning, unsupervised learning, semi-supervised learning or reinforcement learning to design and train the objective function of the representation-learning model from the representation data.
With reference to the first aspect, after designing and training the objective function of the representation-learning model with the representation data, the deep user portrait method further includes: determining a loss function corresponding to the representation-learning model; and using the loss function to verify the convergence of the representation data during the training of the representation-learning model.
With reference to the first aspect, after the deep portrait of the user has been built with the objective function, the deep user portrait method further includes: performing data-visualization analysis on the deep user portrait, and evaluating the application effect of the trained representation-learning model according to the result of the data-visualization analysis.
In a second aspect, an embodiment of the present invention provides a deep user portrait device comprising a raw-data processing module, a data abstraction module, a data-representation-model building module and a portrait acquisition module. The raw-data processing module obtains the user's raw data and determines its data type. The data abstraction module performs abstraction conversion on the raw data according to the data type to obtain abstract data. The data-representation-model building module determines the representation-learning neural network corresponding to the type of the abstract data and models the abstract data with that network to obtain a data representation model. The portrait acquisition module designs and trains the objective function of the representation-learning model based on the data representation model and uses the objective function to build the deep portrait of the user.
With reference to the second aspect, the portrait acquisition module includes a label-status determination unit and a model training unit. The label-status determination unit determines the label annotation status of the representation data. The model training unit decides, according to the label annotation status, whether to use supervised, unsupervised, semi-supervised or reinforcement learning to design and train the objective function of the representation-learning model from the representation data.
With reference to the second aspect, the deep user portrait device further includes a verification module, which verifies whether the data representation model converges during model training and also verifies whether the representation-learning model is effective in application.
With reference to the second aspect, the verification module includes a loss-function determination unit and a convergence verification unit. The loss-function determination unit determines the loss function corresponding to the representation-learning model. The convergence verification unit uses the loss function to verify the convergence of the representation data during the training of the representation-learning model.
With reference to the second aspect, the verification module further includes an application-effect verification unit, which performs data-visualization analysis on the deep user portrait and evaluates the application effect of the trained representation-learning model according to the result of the data-visualization analysis.
In a third aspect, an embodiment of the present invention further provides a storage medium stored in a computer. The storage medium contains a plurality of instructions configured to cause the computer to execute the above method.
The advantageous effects provided by the invention are as follows:
The present invention provides a deep user portrait method and device. The method converts the user's raw data into abstract data and models the abstract data with a representation-learning neural network, thereby retaining more of the information in the user's raw data while encapsulating the data during modeling, which shields the private parts of the user data and improves data security. Furthermore, the data representation model obtained from the modeling is used to design and train the representation-learning model to obtain an objective function, and the objective function is used to build the deep user portrait, so that the deep user portrait becomes more usable across a variety of complex scenarios and its accuracy in positioning the user is greatly improved.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the embodiments of the invention. The objectives and other advantages of the invention can be realized and attained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only certain embodiments of the invention and are therefore not to be regarded as limiting its scope; a person of ordinary skill in the art can derive other relevant drawings from these drawings without creative effort.
Fig. 1 is a flow chart of the deep user portrait method provided by the first embodiment of the invention;
Fig. 2 is a detailed flow chart of the objective-function obtaining step provided by the first embodiment of the invention;
Fig. 3 is a block diagram of the deep user portrait device provided by the second embodiment of the invention;
Fig. 4 is a block diagram of an electronic device provided by the third embodiment of the invention.
Reference numerals: 100 - deep user portrait device; 110 - raw-data processing module; 120 - data abstraction module; 130 - data-representation-model building module; 140 - portrait acquisition module; 150 - verification module; 200 - electronic device; 201 - memory; 202 - storage controller; 203 - processor; 204 - peripheral interface; 205 - input/output unit; 206 - audio unit; 207 - display unit.
Specific implementation mode
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. The components of the embodiments, as generally described and illustrated in the drawings, can be arranged and designed in a variety of different configurations. The following detailed description of the embodiments provided in the drawings is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by a person skilled in the art on the basis of the embodiments of the invention without creative effort fall within the protection scope of the invention.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item has been defined in one drawing, it need not be further defined or explained in subsequent drawings. In the description of the invention, the terms "first", "second" and the like are used only to distinguish the description and are not to be understood as indicating or implying relative importance.
First embodiment
Through study, the applicant has found that when analyzing users whose data volume is extremely large, a user-portrait approach is usually chosen to analyze the user data. A user portrait is a labeled user model abstracted from information such as the user's social attributes, living habits and consumption behavior. The core work of building a user portrait is to "label" the user, a label being a highly refined characteristic identifier obtained by analyzing the user's information. For example, if a user often buys toys, an e-commerce site can attach the label "has a child" based on the purchase record, and may even estimate the child's approximate age and attach a more specific label such as "has a child aged 5-10". Once all such labels are unified, they form that user's portrait; a user portrait can therefore also be described as judging what kind of person someone is. As noted above, however, the data modeling of existing user portraits mainly takes the form of data labels; label portraits reuse poorly across businesses, have weak expressive power over the data, and easily leak data information. To solve these problems, the first embodiment of the invention provides a deep user portrait method.
Please refer to Fig. 1, which is a flow chart of the deep user portrait method provided by the first embodiment of the invention. The deep user portrait method proceeds as follows:
Step S10: obtain the user's raw data and determine the data type of the raw data.
Step S20: perform abstraction conversion on the raw data according to the data type to obtain abstract data.
Step S30: determine the representation-learning neural network corresponding to the type of the abstract data, and model the abstract data with that network to obtain a data representation model.
Step S40: design and train the objective function of the representation-learning model based on the data representation model, and use the objective function to build a deep portrait of the user, thereby obtaining the deep user portrait.
For step S10, as an optional embodiment, the raw data can be almost any kind of data obtained through channels such as user accounts, user databases and consumption records. The raw data may include the user's basic information, income and expenditure, products held, channel usage, historical transactions, financial transactions, risk level, social network, pictures sent and received, operation records, and so on.
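Purely for illustration (none of this code appears in the patent), the raw-data record and data-type determination of step S10 could be sketched in Python as follows; every field name and the `detect_data_type` helper are hypothetical choices of ours.

```python
from enum import Enum, auto

class DataType(Enum):
    BEHAVIOR_RECORD = auto()   # transactions, clickstreams, operation logs
    IMAGE = auto()             # pictures sent or received by the user
    SOCIAL_RELATION = auto()   # edges of the user's social network

# Hypothetical raw-data record assembled from accounts, databases and consumption logs.
raw_record = {
    "user_id": "u_000123",
    "basic_info": {"age_band": "30-39", "region": "CN-44"},
    "transactions": [("2018-05-01", "toy_store", 56.0),
                     ("2018-05-07", "bookshop", 23.5)],
    "avatar_image": "u_000123.png",
    "friends": ["u_000456", "u_000789"],
}

def detect_data_type(field_name: str) -> DataType:
    """Very rough type routing for step S10; a real system would inspect the data itself."""
    if field_name in ("transactions", "operation_log", "channel_usage"):
        return DataType.BEHAVIOR_RECORD
    if field_name in ("avatar_image", "received_pictures"):
        return DataType.IMAGE
    if field_name in ("friends", "social_graph"):
        return DataType.SOCIAL_RELATION
    raise ValueError(f"no abstraction rule for field {field_name!r}")
```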
For step S20, i.e. performing abstraction conversion on the raw data according to the data type to obtain abstract data: data abstraction means extracting, from the real-world people, objects, events and concepts of interest, the common characteristics that accurately describe them while ignoring inessential details, and forming a model from these concepts. Only the key information is exposed while the implementation details behind it are hidden; that is, only the necessary information is presented, not the details. As an implementation, the data type can generally be divided into categories such as user behavior records, images and social-relationship data. For example, after abstraction conversion a user behavior record becomes a behavior sequence, an image becomes a two-dimensional signal, and social-relationship data become a relational network. A behavior sequence, also called "time-series user behavior", is a chronological record of each step a person takes in a given activity within a certain period of time.
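A minimal sketch of the abstraction conversion of step S20, assuming the three abstract forms named above (behavior sequence, 2-D signal, relational network); the function names and input shapes are our own illustrative assumptions, not defined by the patent.

```python
import numpy as np

def to_behavior_sequence(transactions):
    """Behavior record -> time-ordered sequence of (merchant, amount) events."""
    return [(merchant, amount) for _, merchant, amount in sorted(transactions)]

def to_2d_signal(image):
    """Image -> normalized 2-D float array (the '2-D signal' in the text).
    `image` is assumed to be an already-decoded grayscale pixel array."""
    return np.asarray(image, dtype=np.float32) / 255.0

def to_relational_network(user_id, friends):
    """Social-relationship data -> edge list of a relational network."""
    return [(user_id, friend) for friend in friends]
```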
According to the deep user portrait method provided by the invention, step S30 is executed next, i.e. determining the representation-learning neural network corresponding to the type of the abstract data and modeling the abstract data with that network to obtain a data representation model. Representation learning is a set of techniques for learning features: it converts raw data into a form that machine learning can exploit effectively. It avoids the trouble of extracting features manually and lets the computer learn how to extract features while learning to use them — learning how to learn. For machine-learning tasks such as classification, the input is usually required to be convenient to handle both mathematically and computationally, and under this premise representation learning is indispensable. A neural network is a computational model consisting of a large number of interconnected nodes (neurons); each node represents a specific output function, called an activation function. Each connection between two nodes carries a weight for the signal passing through it, which is equivalent to the memory of the artificial neural network. The output of the network differs according to the connection pattern, the weights and the activation functions; the network itself usually approximates some natural algorithm or function, or expresses a logical strategy. The representation-learning neural network is the network corresponding to the type of the abstract data: for example, behavior sequences correspond to recurrent networks, two-dimensional signals correspond to convolutional networks, and relational networks correspond to DeepWalk networks. Optionally, in other embodiments, the representation-learning neural network can also be a Node2Vec network, a GCN network or the like corresponding to other types of abstract data. Modeling here means data modeling: for the abstract data of each kind, determining the scope the database needs to govern, the organization of the data, and so on, until the actual database, i.e. the data representation model, is obtained; the data representation model obtained by modeling contains the weight information of the abstract data. DeepWalk is a novel method for learning latent representations of the nodes of a network; these latent representations encode social relations in a continuous vector space that statistical models can easily use. DeepWalk uses local information obtained from truncated random walks and learns the latent representations by treating the walks as the equivalent of sentences. DeepWalk is also scalable: it is an online learning algorithm that builds results incrementally and can be parallelized, which makes it widely applicable in practice, for example to network classification or anomaly detection.
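The following sketch illustrates the per-type choice of representation-learning network described above, assuming PyTorch for the recurrent and convolutional branches; the `random_walks` helper is a simplified, DeepWalk-style stand-in (truncated random walks whose output would then be fed to a skip-gram trainer), not the original DeepWalk implementation, and all sizes are arbitrary.

```python
import random
import torch.nn as nn

def build_encoder(abstract_type: str) -> nn.Module:
    """Pick the representation-learning network that matches the abstract data type."""
    if abstract_type == "behavior_sequence":
        # Recurrent network for time-ordered behavior (sizes chosen for illustration).
        return nn.LSTM(input_size=16, hidden_size=64, batch_first=True)
    if abstract_type == "2d_signal":
        # Convolutional network for image-like 2-D signals.
        return nn.Sequential(nn.Conv2d(1, 16, kernel_size=3, padding=1),
                             nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1),
                             nn.Flatten())
    raise ValueError("relational networks are handled by the random-walk sketch below")

def random_walks(adjacency, walk_length=10, walks_per_node=5):
    """Truncated random walks over a {node: [neighbors]} dict, DeepWalk-style.
    The walks can be handed to any skip-gram implementation to learn node embeddings."""
    walks = []
    for start in adjacency:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_length - 1):
                neighbors = adjacency.get(node, [])
                if not neighbors:
                    break
                node = random.choice(neighbors)
                walk.append(node)
            walks.append(walk)
    return walks
```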
For step S40: design and train the objective function of the representation-learning model based on the data representation model, and use the objective function to build a deep portrait of the user, thereby obtaining the deep user portrait. The objective function is the objective being pursued, expressed in terms of the design variables; it is therefore a function of the design variables and a scalar. In this embodiment, the user portrait (which can be a user behavior prediction, a user preference prediction, etc.) is obtained through the objective function. As an optional embodiment, please refer to Fig. 2, which is a detailed flow chart of the objective-function obtaining step provided by the first embodiment of the invention. Specifically, the objective-function obtaining step S40 may include the following sub-steps:
Step S41: determine the label annotation status of the representation data.
Step S42: according to the label annotation status, decide whether to use supervised learning, unsupervised learning, semi-supervised learning or reinforcement learning to design and train the objective function of the representation-learning model from the representation data.
Here, supervised learning trains the network using examples with known correct answers. Unsupervised learning suits the case where a data set exists but carries no labels. Semi-supervised learning combines a large amount of unlabeled data with a small amount of labeled data in the training stage; compared with a model trained on labeled data alone, such a model can be more accurate while the training cost is lower. Reinforcement learning learns an optimal policy by building a model of the environment while exploring an unknown environment, and suits the case where there are neither labels nor a data set.
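As an illustration of sub-step S42, the sketch below maps the label annotation status to a training objective; the concrete losses (cross-entropy when labels exist, a reconstruction term when they do not, and a weighted combination for the semi-supervised case) are our own illustrative choices rather than anything prescribed by the patent.

```python
import torch.nn as nn

def choose_objective(label_status: str):
    """Map the label annotation status to an objective (loss) for training the encoder.
    The particular losses below are illustrative choices only."""
    if label_status == "fully_labeled":         # supervised learning
        return nn.CrossEntropyLoss()
    if label_status == "unlabeled":             # unsupervised learning: reconstruct the input
        return nn.MSELoss()
    if label_status == "partially_labeled":     # semi-supervised: combine both terms
        supervised, unsupervised = nn.CrossEntropyLoss(), nn.MSELoss()
        def combined(logits, targets, recon, inputs, alpha=0.5):
            return supervised(logits, targets) + alpha * unsupervised(recon, inputs)
        return combined
    raise ValueError("reinforcement learning would instead optimize an expected reward")
```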
Optionally, after the design and training of the objective function have been completed, it is also necessary to verify whether the representation data converge during model training. The step specifically includes: determining a loss function corresponding to the representation-learning model, and using the loss function to verify the convergence of the representation data during the training of the representation-learning model.
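A minimal sketch of this convergence check, assuming a PyTorch-style `model`, `loader`, `criterion` and `optimizer` already exist; the stopping rule (epoch-to-epoch loss change below a tolerance) is one simple possibility, not the patent's own criterion.

```python
def train_and_check_convergence(model, loader, criterion, optimizer,
                                epochs=20, tolerance=1e-3):
    """Train the representation-learning model and verify that the loss converges.
    Convergence is declared when the mean epoch loss stops improving by more than `tolerance`."""
    history = []
    for _ in range(epochs):
        epoch_loss = 0.0
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        history.append(epoch_loss / len(loader))
        if len(history) > 1 and abs(history[-2] - history[-1]) < tolerance:
            return True, history      # converged
    return False, history             # did not converge within the epoch budget
```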
Further, in order to obtain a better user portrait, this embodiment also analyzes the deep user portrait. The specific step may include: performing data-visualization analysis on the deep user portrait, and evaluating the application effect of the trained representation-learning model according to the result of the data-visualization analysis.
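One possible form of this visualization analysis, assuming scikit-learn and matplotlib are available and that `embeddings` is an (n_users, d) array of learned user representations; t-SNE is our illustrative choice of projection, not one named by the patent.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualize_portraits(embeddings, labels=None):
    """Project the learned user representations to 2-D and plot them,
    so the application effect of the trained model can be inspected visually."""
    points = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
    plt.scatter(points[:, 0], points[:, 1], c=labels, s=8)
    plt.title("Deep user portraits (t-SNE projection)")
    plt.show()
```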
The deep user portrait method provided by this embodiment abstracts the user's raw data into abstract data and performs data modeling with the abstract data, retaining more user information while shielding the user's private data, which makes the deep user portrait suitable for more complex application scenarios.
Second embodiment
To complement the deep user portrait method provided by the first embodiment of the invention, the second embodiment of the invention further provides a deep user portrait device 100.
Please refer to Fig. 3, which is a block diagram of the deep user portrait device provided by the second embodiment of the invention. The deep user portrait device 100 includes a raw-data processing module 110, a data abstraction module 120, a data-representation-model building module 130 and a portrait acquisition module 140.
The raw-data processing module 110 obtains the user's raw data and determines the data type of the raw data.
The data abstraction module 120 performs abstraction conversion on the raw data according to the data type to obtain abstract data.
The data-representation-model building module 130 determines the representation-learning neural network corresponding to the type of the abstract data and models the abstract data with that network to obtain a data representation model.
The portrait acquisition module 140 designs and trains the objective function of the representation-learning model based on the data representation model, and uses the objective function to build a deep portrait of the user, thereby obtaining the deep user portrait.
As an optional embodiment, the portrait acquisition module 140 may include a label-status determination unit and a model training unit. The label-status determination unit determines the label annotation status of the representation data. The model training unit decides, according to the label annotation status, whether to use supervised, unsupervised, semi-supervised or reinforcement learning to design and train the objective function of the representation-learning model from the representation data.
Further, considering that the established model and objective function may not accurately reflect the user's characteristics, the deep user portrait device 100 should also include a verification module 150 for verifying whether the data representation model converges during model training and whether the representation-learning model is effective in application.
The verification module 150 includes a loss-function determination unit, a convergence verification unit and an application-effect verification unit. The loss-function determination unit determines the loss function corresponding to the representation-learning model. The convergence verification unit uses the loss function to verify the convergence of the representation data during the training of the representation-learning model. The application-effect verification unit performs data-visualization analysis on the deep user portrait and evaluates the application effect of the trained representation-learning model according to the result of the data-visualization analysis.
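To make the division of labor concrete, the sketch below composes hypothetical counterparts of modules 110-150 into a single device object; all class and method names are our own assumptions for illustration only, not identifiers used by the patent.

```python
class DeepUserPortraitDevice:
    """Illustrative composition of modules 110-150; each collaborator is assumed
    to expose the single method shown here."""
    def __init__(self, raw_processor, abstractor, model_builder, portrait_builder, verifier):
        self.raw_processor = raw_processor        # module 110
        self.abstractor = abstractor              # module 120
        self.model_builder = model_builder        # module 130
        self.portrait_builder = portrait_builder  # module 140
        self.verifier = verifier                  # module 150

    def run(self, user_id):
        raw, data_type = self.raw_processor.fetch(user_id)
        abstract = self.abstractor.convert(raw, data_type)
        representation = self.model_builder.build(abstract)
        portrait = self.portrait_builder.build(representation)
        self.verifier.check(representation, portrait)
        return portrait
```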
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method and is not repeated here.
3rd embodiment
To implement the above method, the third embodiment of the invention provides an electronic device 200. Please refer to Fig. 4, which is a block diagram of the electronic device provided by the third embodiment of the invention.
The electronic device 200 may include the deep user portrait device 100, a memory 201, a storage controller 202, a processor 203, a peripheral interface 204, an input/output unit 205, an audio unit 206 and a display unit 207.
The memory 201, storage controller 202, processor 203, peripheral interface 204, input/output unit 205, audio unit 206 and display unit 207 are electrically connected to one another, directly or indirectly, to transmit or exchange data; for example, these elements can be electrically connected through one or more communication buses or signal lines. The deep user portrait device 100 includes at least one software function module that can be stored in the memory 201 in the form of software or firmware, or solidified in the operating system (OS) of the deep user portrait device 100. The processor 203 executes the executable modules stored in the memory 201, such as the software function modules or computer programs included in the deep user portrait device 100.
The memory 201 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like. The memory 201 stores a program, and the processor 203 executes the program after receiving an execution instruction; the method performed by the server and defined by the flow disclosed in any of the foregoing embodiments of the invention may be applied to the processor 203 or implemented by the processor 203.
The processor 203 may be an integrated circuit chip with signal-processing capability. It may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the invention. The general-purpose processor may be a microprocessor, or the processor 203 may be any conventional processor.
The peripheral interface 204 couples various input/output devices to the processor 203 and the memory 201. In some embodiments, the peripheral interface 204, the processor 203 and the storage controller 202 can be implemented in a single chip; in other examples, they can each be implemented by a separate chip.
The input/output unit 205 provides the user with input data so as to realize the interaction between the user and the server (or the local terminal). The input/output unit 205 may be, but is not limited to, a mouse, a keyboard and the like.
The audio unit 206 provides the user with an audio interface and may include one or more microphones, one or more loudspeakers and an audio circuit.
The display unit 207 provides an interactive interface (for example, a user-operation interface) between the electronic device 200 and the user, or displays image data for the user's reference. In this embodiment, the display unit 207 may be a liquid-crystal display or a touch display. If it is a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operation. Supporting single-point and multi-point touch operation means that the touch display can sense touch operations generated simultaneously at one or more positions on the display and hand the sensed touch operations to the processor 203 for calculation and processing.
It will be appreciated that the structure shown in Fig. 4 is only illustrative; the electronic device 200 may include more or fewer components than shown in Fig. 4, or have a configuration different from that shown in Fig. 4. Each component shown in Fig. 4 may be implemented in hardware, software or a combination thereof.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method and is not repeated here.
In conclusion an embodiment of the present invention provides a kind of user's depth portrait method and device, user's depth is drawn
Image space method carries out representative learning nerve net by converting the initial data of user to abstract data for the abstract data
The modeling of network remains more information in the initial data of user, while being encapsulated to data in modeling process, from
And the Private Parts in user data is shielded, improve Information Security.Further, the data characterization obtained using modeling
Model is designed and is trained to representative learning model to obtain object function, is used user using the object function
Family depth portrait makes durability of user's depth portrait under various complex scenes get a promotion, and makes user's depth
Portrait has obtained great promotion to the positional accuracy of user.
In the several embodiments provided in this application, it should be understood that the disclosed device and method can also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flow charts and block diagrams in the drawings show the possible architectures, functions and operations of the devices, methods and computer program products of multiple embodiments of the invention. In this regard, each box in a flow chart or block diagram can represent a module, a program segment or a part of the code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes can occur in an order different from that marked in the drawings; for example, two consecutive boxes can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, can be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the invention can be integrated to form an independent part, or each module can exist separately, or two or more modules can be integrated to form an independent part. If the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the invention, in essence or in the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may have various modifications and variations. Any modification, equivalent replacement, improvement or the like made within the spirit and principles of the invention shall be included in the protection scope of the invention.
The above description is merely a specific embodiment of the invention, but the protection scope of the invention is not limited thereto; any change or substitution that a person familiar with the technical field can readily conceive within the technical scope disclosed by the invention shall be covered by the protection scope of the invention. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
Claims (10)
- 1. A deep user portrait method, characterized in that the deep user portrait method includes: obtaining a user's raw data and determining the data type of the raw data; performing abstraction conversion on the raw data according to the data type to obtain abstract data; determining a representation-learning neural network corresponding to the type of the abstract data, and modeling the abstract data with the representation-learning neural network to obtain a data representation model; and designing and training an objective function of the representation-learning model based on the data representation model, and using the objective function to build a deep portrait of the user, thereby obtaining the deep user portrait.
- 2. The deep user portrait method according to claim 1, characterized in that designing and training the objective function of the representation-learning model based on the representation data includes: determining the label annotation status of the representation data; and, according to the label annotation status, deciding whether to use supervised learning, unsupervised learning, semi-supervised learning or reinforcement learning to design and train the objective function of the representation-learning model from the representation data.
- 3. The deep user portrait method according to claim 1 or 2, characterized in that, after designing and training the objective function of the representation-learning model with the representation data, the deep user portrait method further includes: determining a loss function corresponding to the representation-learning model; and using the loss function to verify the convergence of the representation data during the training of the representation-learning model.
- 4. The deep user portrait method according to claim 1 or 2, characterized in that, after using the objective function to build the deep portrait of the user and obtaining the deep user portrait, the deep user portrait method further includes: performing data-visualization analysis on the deep user portrait, and evaluating the application effect of the trained representation-learning model according to the result of the data-visualization analysis.
- 5. A deep user portrait device, characterized in that the deep user portrait device includes: a raw-data processing module for obtaining a user's raw data and determining the data type of the raw data; a data abstraction module for performing abstraction conversion on the raw data according to the data type to obtain abstract data; a data-representation-model building module for determining a representation-learning neural network corresponding to the type of the abstract data and modeling the abstract data with the representation-learning neural network to obtain a data representation model; and a portrait acquisition module for designing and training an objective function of the representation-learning model based on the data representation model and using the objective function to build a deep portrait of the user, thereby obtaining the deep user portrait.
- 6. The deep user portrait device according to claim 5, characterized in that the portrait acquisition module includes: a label-status determination unit for determining the label annotation status of the representation data; and a model training unit for deciding, according to the label annotation status, whether to use supervised learning, unsupervised learning, semi-supervised learning or reinforcement learning to design and train the objective function of the representation-learning model from the representation data.
- 7. The deep user portrait device according to claim 5, characterized in that the deep user portrait device further includes: a verification module for verifying whether the data representation model converges during model training and for verifying whether the representation-learning model is effective in application.
- 8. The deep user portrait device according to claim 7, characterized in that the verification module includes: a loss-function determination unit for determining a loss function corresponding to the representation-learning model; and a convergence verification unit for using the loss function to verify the convergence of the representation data during the training of the representation-learning model.
- 9. The deep user portrait device according to claim 8, characterized in that the verification module further includes: an application-effect verification unit for performing data-visualization analysis on the deep user portrait and evaluating the application effect of the trained representation-learning model according to the result of the data-visualization analysis.
- 10. A storage medium, characterized in that computer program instructions are stored in the storage medium, and when the computer program instructions are read and executed by a processor, the steps of the method according to any one of claims 1-4 are performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810633410.8A CN108804704A (en) | 2018-06-19 | 2018-06-19 | A kind of user's depth portrait method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810633410.8A CN108804704A (en) | 2018-06-19 | 2018-06-19 | A kind of user's depth portrait method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108804704A true CN108804704A (en) | 2018-11-13 |
Family
ID=64083796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810633410.8A Pending CN108804704A (en) | 2018-06-19 | 2018-06-19 | A kind of user's depth portrait method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108804704A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858953A (en) * | 2019-01-02 | 2019-06-07 | 深圳壹账通智能科技有限公司 | User's portrait method, apparatus, computer equipment and storage medium |
CN110134860A (en) * | 2019-04-12 | 2019-08-16 | 阿里巴巴集团控股有限公司 | User's portrait generation method, device and equipment |
CN110298529A (en) * | 2019-01-10 | 2019-10-01 | 广州市易纬电子有限公司 | Labour effect portrait method, equipment, readable storage medium storing program for executing and computer equipment |
CN110399404A (en) * | 2019-07-25 | 2019-11-01 | 北京明略软件系统有限公司 | A kind of the user's expression generation method and device of computer |
CN110737730A (en) * | 2019-10-21 | 2020-01-31 | 腾讯科技(深圳)有限公司 | Unsupervised learning-based user classification method, unsupervised learning-based user classification device, unsupervised learning-based user classification equipment and storage medium |
CN111091410A (en) * | 2019-11-04 | 2020-05-01 | 南京光普信息技术有限公司 | Node embedding and user behavior characteristic combined net point sales prediction method |
CN111475851A (en) * | 2020-01-16 | 2020-07-31 | 支付宝(杭州)信息技术有限公司 | Privacy data processing method and device based on machine learning and electronic equipment |
WO2020192460A1 (en) * | 2019-03-25 | 2020-10-01 | 华为技术有限公司 | Data processing method, terminal-side device, cloud-side device, and terminal-cloud collaboration system |
CN111737688A (en) * | 2020-06-08 | 2020-10-02 | 上海交通大学 | Attack defense system based on user portrait |
CN111767930A (en) * | 2019-04-01 | 2020-10-13 | 北京百度网讯科技有限公司 | Method for detecting abnormal time series data of Internet of things and related equipment thereof |
CN111784301A (en) * | 2020-07-02 | 2020-10-16 | 中国银行股份有限公司 | User portrait construction method and device, storage medium and electronic equipment |
CN111861065A (en) * | 2019-04-30 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | User data management method and device, electronic equipment and storage medium |
CN112052270A (en) * | 2020-08-26 | 2020-12-08 | 南京越扬科技有限公司 | Method and system for carrying out user portrait depth analysis through big data |
CN113191700A (en) * | 2021-07-01 | 2021-07-30 | 成都飞机工业(集团)有限责任公司 | Professional technology employee portrait construction method for enterprise interior |
CN113672818A (en) * | 2020-05-13 | 2021-11-19 | 中南大学 | Method and system for obtaining user portrait of social media |
CN118378152A (en) * | 2024-06-24 | 2024-07-23 | 浙江聚米为谷信息科技有限公司 | User portrait classification method and system based on behavior data analysis |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446988A (en) * | 2016-09-18 | 2017-02-22 | 青岛文潮电子有限公司 | Asset security management system based on RFID and GSM |
CN106489159A (en) * | 2016-06-29 | 2017-03-08 | 深圳狗尾草智能科技有限公司 | A kind of user's portrait based on deep neural network represents learning system and method |
CN107240395A (en) * | 2017-06-16 | 2017-10-10 | 百度在线网络技术(北京)有限公司 | A kind of acoustic training model method and apparatus, computer equipment, storage medium |
CN107301199A (en) * | 2017-05-17 | 2017-10-27 | 北京融数云途科技有限公司 | A kind of data label generation method and device |
CN107729560A (en) * | 2017-11-08 | 2018-02-23 | 北京奇虎科技有限公司 | User's portrait building method, device and computing device based on big data |
CN107862053A (en) * | 2017-11-08 | 2018-03-30 | 北京奇虎科技有限公司 | User's portrait building method, device and computing device based on customer relationship |
- 2018-06-19: CN application CN201810633410.8A filed; published as CN108804704A (en), status Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106489159A (en) * | 2016-06-29 | 2017-03-08 | 深圳狗尾草智能科技有限公司 | A kind of user's portrait based on deep neural network represents learning system and method |
CN106446988A (en) * | 2016-09-18 | 2017-02-22 | 青岛文潮电子有限公司 | Asset security management system based on RFID and GSM |
CN107301199A (en) * | 2017-05-17 | 2017-10-27 | 北京融数云途科技有限公司 | A kind of data label generation method and device |
CN107240395A (en) * | 2017-06-16 | 2017-10-10 | 百度在线网络技术(北京)有限公司 | A kind of acoustic training model method and apparatus, computer equipment, storage medium |
CN107729560A (en) * | 2017-11-08 | 2018-02-23 | 北京奇虎科技有限公司 | User's portrait building method, device and computing device based on big data |
CN107862053A (en) * | 2017-11-08 | 2018-03-30 | 北京奇虎科技有限公司 | User's portrait building method, device and computing device based on customer relationship |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858953A (en) * | 2019-01-02 | 2019-06-07 | 深圳壹账通智能科技有限公司 | User's portrait method, apparatus, computer equipment and storage medium |
CN110298529A (en) * | 2019-01-10 | 2019-10-01 | 广州市易纬电子有限公司 | Labour effect portrait method, equipment, readable storage medium storing program for executing and computer equipment |
WO2020192460A1 (en) * | 2019-03-25 | 2020-10-01 | 华为技术有限公司 | Data processing method, terminal-side device, cloud-side device, and terminal-cloud collaboration system |
CN111767930A (en) * | 2019-04-01 | 2020-10-13 | 北京百度网讯科技有限公司 | Method for detecting abnormal time series data of Internet of things and related equipment thereof |
CN110134860A (en) * | 2019-04-12 | 2019-08-16 | 阿里巴巴集团控股有限公司 | User's portrait generation method, device and equipment |
CN110134860B (en) * | 2019-04-12 | 2023-04-07 | 创新先进技术有限公司 | User portrait generation method, device and equipment |
CN111861065A (en) * | 2019-04-30 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | User data management method and device, electronic equipment and storage medium |
CN110399404A (en) * | 2019-07-25 | 2019-11-01 | 北京明略软件系统有限公司 | A kind of the user's expression generation method and device of computer |
CN110737730A (en) * | 2019-10-21 | 2020-01-31 | 腾讯科技(深圳)有限公司 | Unsupervised learning-based user classification method, unsupervised learning-based user classification device, unsupervised learning-based user classification equipment and storage medium |
CN110737730B (en) * | 2019-10-21 | 2024-03-26 | 腾讯科技(深圳)有限公司 | User classification method, device, equipment and storage medium based on unsupervised learning |
CN111091410B (en) * | 2019-11-04 | 2022-03-11 | 南京光普信息技术有限公司 | Node embedding and user behavior characteristic combined net point sales prediction method |
CN111091410A (en) * | 2019-11-04 | 2020-05-01 | 南京光普信息技术有限公司 | Node embedding and user behavior characteristic combined net point sales prediction method |
CN111475851A (en) * | 2020-01-16 | 2020-07-31 | 支付宝(杭州)信息技术有限公司 | Privacy data processing method and device based on machine learning and electronic equipment |
CN113672818A (en) * | 2020-05-13 | 2021-11-19 | 中南大学 | Method and system for obtaining user portrait of social media |
CN113672818B (en) * | 2020-05-13 | 2023-11-14 | 中南大学 | Method and system for acquiring social media user portraits |
CN111737688A (en) * | 2020-06-08 | 2020-10-02 | 上海交通大学 | Attack defense system based on user portrait |
CN111737688B (en) * | 2020-06-08 | 2023-10-20 | 上海交通大学 | Attack defense system based on user portrait |
CN111784301A (en) * | 2020-07-02 | 2020-10-16 | 中国银行股份有限公司 | User portrait construction method and device, storage medium and electronic equipment |
CN112052270A (en) * | 2020-08-26 | 2020-12-08 | 南京越扬科技有限公司 | Method and system for carrying out user portrait depth analysis through big data |
CN113191700A (en) * | 2021-07-01 | 2021-07-30 | 成都飞机工业(集团)有限责任公司 | Professional technology employee portrait construction method for enterprise interior |
CN118378152A (en) * | 2024-06-24 | 2024-07-23 | 浙江聚米为谷信息科技有限公司 | User portrait classification method and system based on behavior data analysis |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108804704A (en) | A kind of user's depth portrait method and device | |
Liu et al. | Modelling urban change with cellular automata: Contemporary issues and future research directions | |
CN108416198B (en) | Device and method for establishing human-machine recognition model and computer readable storage medium | |
CN110245213A (en) | Questionnaire generation method, device, equipment and storage medium | |
CN107291715A (en) | Resume appraisal procedure and device | |
CN107563429A (en) | A kind of sorting technique and device of network user colony | |
CN108256568A (en) | A kind of plant species identification method and device | |
US11086754B2 (en) | Automated feedback-based application optimization | |
CN105117460A (en) | Learning resource recommendation method and system | |
CN109299258A (en) | A kind of public sentiment event detecting method, device and equipment | |
CN108062377A (en) | The foundation of label picture collection, definite method, apparatus, equipment and the medium of label | |
CN104346408B (en) | A kind of method and apparatus being labeled to the network user | |
CN110348895A (en) | A kind of personalized recommendation method based on user tag, device and electronic equipment | |
Albatayneh et al. | Image retraining using TensorFlow implementation of the pretrained inception-v3 model for evaluating gravel road dust | |
CN107978189A (en) | Intelligent exercise pushing method and system and terminal equipment | |
CN110580489B (en) | Data object classification system, method and equipment | |
CN107229731A (en) | Method and apparatus for grouped data | |
CN109978619A (en) | Method, system, equipment and the medium of air ticket pricing Policy Filtering | |
CN107944026A (en) | A kind of method, apparatus, server and the storage medium of atlas personalized recommendation | |
CN115391670B (en) | Knowledge graph-based internet behavior analysis method and system | |
CN110009012A (en) | A kind of risk specimen discerning method, apparatus and electronic equipment | |
CN105843793A (en) | Detection and creation of appropriate row concept during automated model generation | |
CN111126629B (en) | Model generation method, brush list identification method, system, equipment and medium | |
CN110717101B (en) | User classification method and device based on application behaviors and electronic equipment | |
CN108733784B (en) | Teaching courseware recommendation method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181113 |
RJ01 | Rejection of invention patent application after publication | |