CN110175298A - User matching method - Google Patents
- Publication number
- CN110175298A CN110175298A CN201910295641.7A CN201910295641A CN110175298A CN 110175298 A CN110175298 A CN 110175298A CN 201910295641 A CN201910295641 A CN 201910295641A CN 110175298 A CN110175298 A CN 110175298A
- Authority
- CN
- China
- Prior art keywords
- user
- target
- candidate
- matching
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9536—Search customisation based on social or collaborative filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Abstract
The present application relates to a user matching method, comprising: obtaining target biological feature data and target user preference data corresponding to a target user, and candidate biological feature data and candidate user preference data corresponding to a candidate user; determining, according to the target biological feature data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user; determining, according to the candidate biological feature data and the target user preference data, a second matching degree of the candidate user with respect to the target user; and, when a comprehensive matching degree determined from the first matching degree and the second matching degree satisfies a matching condition, determining the candidate user as a matching user corresponding to the target user. The solution provided by the present application can improve the accuracy of user matching.
Description
Technical field
The present application relates to the field of computer technology, and more particularly to a user matching method.
Background technique
The rapid development of computer technology and communication network technology has made people's lifestyles increasingly intelligent and convenient. For example, people can now perform intelligent matching through friend-making software and carry out operations such as conversing or sharing information with users recommended by the software, so as to socialize.
However, traditional user matching based on friend-making software usually performs matching recommendation according to a user's basic profile or the result of a simple test administered to the user. The recommended users are often not users the user is really interested in, resulting in low user matching accuracy.
Summary of the invention
In view of this, it is necessary to provide a user matching method, apparatus, computer-readable storage medium, and computer device to address the technical problem of low user matching accuracy.
A user matching method, comprising:
obtaining target biological feature data and target user preference data corresponding to a target user, and candidate biological feature data and candidate user preference data corresponding to a candidate user;
determining, according to the target biological feature data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user;
determining, according to the candidate biological feature data and the target user preference data, a second matching degree of the candidate user with respect to the target user; and
when a comprehensive matching degree determined from the first matching degree and the second matching degree satisfies a matching condition, determining the candidate user as a matching user corresponding to the target user.
A user matching apparatus, the apparatus comprising:
an obtaining module, configured to obtain target biological feature data and target user preference data corresponding to a target user, and candidate biological feature data and candidate user preference data corresponding to a candidate user;
a determining module, configured to determine, according to the target biological feature data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user;
the determining module being further configured to determine, according to the candidate biological feature data and the target user preference data, a second matching degree of the candidate user with respect to the target user; and
a matching module, configured to determine the candidate user as a matching user corresponding to the target user when a comprehensive matching degree determined from the first matching degree and the second matching degree satisfies a matching condition.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
obtaining target biological feature data and target user preference data corresponding to a target user, and candidate biological feature data and candidate user preference data corresponding to a candidate user;
determining, according to the target biological feature data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user;
determining, according to the candidate biological feature data and the target user preference data, a second matching degree of the candidate user with respect to the target user; and
when a comprehensive matching degree determined from the first matching degree and the second matching degree satisfies a matching condition, determining the candidate user as a matching user corresponding to the target user.
A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining target biological feature data and target user preference data corresponding to a target user, and candidate biological feature data and candidate user preference data corresponding to a candidate user;
determining, according to the target biological feature data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user;
determining, according to the candidate biological feature data and the target user preference data, a second matching degree of the candidate user with respect to the target user; and
when a comprehensive matching degree determined from the first matching degree and the second matching degree satisfies a matching condition, determining the candidate user as a matching user corresponding to the target user.
With the above user matching method, apparatus, computer-readable storage medium, and computer device, a first matching degree of the target user with respect to the candidate user can be determined according to the target biological feature data corresponding to the target user and the candidate user preference data corresponding to the candidate user. Correspondingly, a second matching degree of the candidate user with respect to the target user can be determined according to the candidate biological feature data corresponding to the candidate user and the target user preference data corresponding to the target user. According to the first matching degree and the second matching degree, a matching user matching the target user is screened out from the candidate users. In this way, by mutually matching the biological feature data and the respective user preference data of different users, the target user can be matched with other users whom the target user likes and who like the target user, which greatly improves the accuracy of user matching.
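The mutual-matching principle summarized above can be sketched in a few lines. This is only an illustrative sketch: the threshold value and the use of a plain average to combine the two directional matching degrees are assumptions, since the text leaves the exact matching condition open.

```python
def comprehensive_match(first_degree: float, second_degree: float) -> float:
    # Combine the two directional matching degrees into one comprehensive
    # degree; a plain average is one possible (assumed) combination.
    return (first_degree + second_degree) / 2


def is_match(first_degree: float, second_degree: float,
             threshold: float = 0.5) -> bool:
    # The candidate counts as a matching user when the comprehensive
    # matching degree meets the (assumed) threshold condition.
    return comprehensive_match(first_degree, second_degree) >= threshold


# Target likes the candidate strongly (0.9); candidate likes the target
# moderately (0.6): both directions contribute, so the pair matches.
print(is_match(0.9, 0.6))  # True
```

Because both directions enter the comprehensive degree, a one-sided attraction (one degree high, the other near zero) does not produce a match.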
A user matching method, comprising:
obtaining collected biological feature data corresponding to a target user;
when a user matching instruction is generated, obtaining user information of a matching user matched with the target user, wherein user preference data corresponding to the matching user matches the biological feature data corresponding to the target user, and biological feature data corresponding to the matching user matches user preference data corresponding to the target user; and
outputting the user information of the matching user.
A user matching apparatus, the apparatus comprising:
an obtaining module, configured to obtain collected biological feature data corresponding to a target user;
the obtaining module being further configured to obtain, when a user matching instruction is generated, user information of a matching user matched with the target user, wherein user preference data corresponding to the matching user matches the biological feature data corresponding to the target user, and biological feature data corresponding to the matching user matches user preference data corresponding to the target user; and
an output module, configured to output the user information of the matching user.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
obtaining collected biological feature data corresponding to a target user;
when a user matching instruction is generated, obtaining user information of a matching user matched with the target user, wherein user preference data corresponding to the matching user matches the biological feature data corresponding to the target user, and biological feature data corresponding to the matching user matches user preference data corresponding to the target user; and
outputting the user information of the matching user.
A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining collected biological feature data corresponding to a target user;
when a user matching instruction is generated, obtaining user information of a matching user matched with the target user, wherein user preference data corresponding to the matching user matches the biological feature data corresponding to the target user, and biological feature data corresponding to the matching user matches user preference data corresponding to the target user; and
outputting the user information of the matching user.
With the above user matching method, apparatus, computer-readable storage medium, and computer device, collected biological feature data corresponding to a target user is obtained, and, when a user matching instruction is generated, user information of a matching user matched with the target user is obtained and output. The user preference data corresponding to the matching user matches the biological feature data corresponding to the target user, and the biological feature data corresponding to the matching user matches the user preference data corresponding to the target user. In this way, by mutually matching users' biological feature data and user preference data, the target user can be matched with other users whom the target user likes and who like the target user, which greatly improves the accuracy of user matching.
Detailed description of the invention
Fig. 1 is an application environment diagram of the user matching method in one embodiment;
Fig. 2 is a flow diagram of the user matching method in one embodiment;
Fig. 3 is a processing framework diagram for classifying user speech in one embodiment;
Fig. 4 is a flow diagram of the user matching method in one embodiment;
Fig. 5(a) is an interface schematic diagram of the target user performing a preference test on test user images in one embodiment;
Fig. 5(b) is an interface schematic diagram of the target user performing a preference test on test user speech in one embodiment;
Fig. 6 is a flow diagram of the step of obtaining collected biological feature data corresponding to the target user in one embodiment;
Fig. 7 is an interface schematic diagram of the user interface in one embodiment;
Fig. 8(a) is an interface schematic diagram of a user image collection interface in one embodiment;
Fig. 8(b) is an interface schematic diagram of a user image display list in one embodiment;
Fig. 8(c) is an interface schematic diagram of a user uploading user images in one embodiment;
Fig. 8(d) is an interface schematic diagram of the user image display list after the user image upload is completed in one embodiment;
Fig. 9(a) is an interface schematic diagram of a user speech collection interface in one embodiment;
Fig. 9(b) is an interface schematic diagram of a user recording voice in one embodiment;
Fig. 9(c) is an interface schematic diagram of completed user voice recording in one embodiment;
Fig. 9(d) is an interface schematic diagram of the user speech display list after the voice data upload is completed in one embodiment;
Fig. 10(a) is an interface schematic diagram of user speech analysis in one embodiment;
Fig. 10(b) is a schematic diagram of presenting a voice recognition result in one embodiment;
Fig. 11(a) is an interface schematic diagram of a user match triggering interface in one embodiment;
Fig. 11(b) is an interface schematic diagram of the terminal waiting for a matching result after initiating a matching request corresponding to the target user in one embodiment;
Fig. 11(c) is an interface schematic diagram of a matching result display interface in one embodiment;
Fig. 12 is a content schematic diagram of a user evaluation report in one embodiment;
Fig. 13 is a sequence diagram of the user matching method in a specific embodiment;
Fig. 14 is a structural block diagram of a user matching apparatus in one embodiment;
Fig. 15 is a structural block diagram of a user matching apparatus in another embodiment;
Fig. 16 is a structural block diagram of a user matching apparatus in yet another embodiment;
Fig. 17 is a structural block diagram of a computer device in one embodiment.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present application and are not intended to limit it.
Fig. 1 is an application environment diagram of the user matching method in one embodiment. Referring to Fig. 1, the user matching method is applied to a user matching system. The user matching system includes a terminal 110 and a server 120, which are connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as an independent server or as a server cluster composed of multiple servers.
As shown in Fig. 2, in one embodiment, a user matching method is provided. This embodiment is mainly illustrated by applying the method to the server 120 in Fig. 1 above. Referring to Fig. 2, the user matching method specifically includes the following steps:
S202: obtain target biological feature data and target user preference data corresponding to a target user, and candidate biological feature data and candidate user preference data corresponding to a candidate user.
Here, the target user is the user for whom other users are to be matched, and a candidate user is another user to be matched against the target user. It can be understood that both the target user and the candidate user may be referred to as users; in this application, "target" and "candidate" are only used to distinguish different users. For example, when a user triggers a matching request, that user may be referred to as the target user, and the other users different from that user may be referred to as candidate users. Correspondingly, when another user also triggers a matching request, the former user may in turn be a candidate user for that other user. The users mentioned in the embodiments of this application include users in a computer program.
The target biological feature data is the biological feature data of the target user, and the candidate biological feature data is the biological feature data of the candidate user. Biological feature data is physiological feature data or behavioral feature data inherent to the human body. A user's biological feature data is, for example, the user's voice, image, or handwriting. A user's behavioral feature data refers to characteristic data of the user performing certain behaviors, such as how frequently the user browses web pages, which kinds of users the user often interacts with, or the places the user often goes.
The target user preference data is data reflecting the preferences of the target user, and the candidate user preference data is data reflecting the preferences of the candidate user. Data reflecting a user's preferences may specifically be the user's preference evaluations of the biological feature data of other users, or user preference data obtained by analyzing the user's historical behavior data.
Specifically, a terminal may collect a user's biological feature data and corresponding user preference data and report them to the server. When the server receives a matching request triggered by the target user, it determines the other users different from the target user as candidate users. The server can then look up locally the target biological feature data and target user preference data corresponding to the target user, and the candidate biological feature data and candidate user preference data corresponding to the candidate users.
In one embodiment, the server may take every other user different from the target user as a candidate user. In another embodiment, when the number of users is very large, the amount of computation in matching the target user against the candidate users is also very large. To relieve the computational pressure on the server, when receiving a matching request, the server may perform preliminary screening according to the user information of the target user, screening out, from the other users different from the target user, candidate users whose user information matches that of the target user. The user information is, for example, information such as the user's avatar, gender, age, or native place. The screened-out candidate users are, for example, users in the same age bracket as the target user.
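The preliminary screening step described above can be sketched as a simple profile filter. The `UserInfo` fields and the ten-year age bracketing are illustrative assumptions; the embodiment only requires that candidates' user information match the target's in some way.

```python
from dataclasses import dataclass


@dataclass
class UserInfo:
    user_id: str
    gender: str
    age: int


def age_bracket(age: int) -> int:
    # Group ages into ten-year brackets (an assumed bracketing scheme).
    return age // 10


def prescreen(target: UserInfo, others: list) -> list:
    # Keep only users whose user information matches the target's;
    # here "matches" is illustrated as being in the same age bracket.
    return [u for u in others
            if u.user_id != target.user_id
            and age_bracket(u.age) == age_bracket(target.age)]


target = UserInfo("t1", "F", 25)
others = [UserInfo("c1", "M", 27), UserInfo("c2", "M", 41)]
print([u.user_id for u in prescreen(target, others)])  # ['c1']
```

Only the pre-screened candidates then enter the more expensive matching-degree calculation.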
In one embodiment, when the target user triggers a matching request, the server may send an information collection instruction to the terminal corresponding to the target user, so as to collect the target biological feature data and target user preference data corresponding to the target user.
In one embodiment, when the user preference data is the user's preference evaluation of the biological feature data of other users, the user may enter an interaction interface through the terminal, and biological feature test data of other users is presented in the interaction interface. The biological feature test data is used to test the user's preferences. The user can perform a preference test on the biological feature test data through the interaction interface, and the terminal obtains user preference data according to the user's preference test result and reports it to the server. For example, the user may score the biological feature test data: the higher the score, the more the user likes it, and the lower the score, the less the user likes it.
In one embodiment, the server may collect the user historical behavior data reported by the terminal and analyze it to obtain user preference data. For example, the terminal may collect user behavior data such as the user's web page browsing information, location information, device model, or online time periods and report them to the server. The server may analyze information such as the frequency of the images the user accesses, the user's interactions with different users on social networks, and the places the user often goes, so as to analyze the appearance, personality, or behavior of the people the user prefers.
S204: determine, according to the target biological feature data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user.
Specifically, the server may perform a matching degree calculation according to the target biological feature data and the candidate user preference data, so as to determine the first matching degree of the target user with respect to the candidate user. The first matching degree reflects how much the candidate user likes the target user: the higher the first matching degree, the more the candidate user likes the target user; the lower the first matching degree, the more the candidate user dislikes the target user.
In one embodiment, the type of the target biological feature data matches the type of the user preference data. When there is more than one type of target biological feature data and more than one type of candidate user preference data, the server may determine, for each type, a matching degree corresponding to that type according to the target biological feature data and candidate user preference data of that type. The matching degrees corresponding to the more-than-one types are then weighted and summed to obtain the first matching degree, where the weighting coefficients can be configured and adjusted according to the respective importance of the types. In one embodiment, the server may instead average the matching degrees corresponding to the more-than-one types and take the average as the first matching degree. It can be understood that the second matching degree is calculated in a manner corresponding to that of the first matching degree, and details are not described here again.
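The weighted-sum and averaging combinations described above can be sketched as follows. The per-type weight values shown are illustrative assumptions; the embodiment only says the coefficients are set according to each type's importance.

```python
def combine_type_degrees(degrees: dict, weights: dict = None) -> float:
    # degrees maps a data type (e.g. "image", "voice") to the matching
    # degree computed for that type. With weights, return a normalized
    # weighted sum; without, fall back to the plain average the
    # embodiment also allows.
    if weights is None:
        return sum(degrees.values()) / len(degrees)
    total = sum(weights[t] for t in degrees)
    return sum(degrees[t] * weights[t] for t in degrees) / total


per_type = {"image": 0.8, "voice": 0.6}
print(combine_type_degrees(per_type))                                # ~0.7
print(combine_type_degrees(per_type, {"image": 0.7, "voice": 0.3}))  # ~0.74
```

The same combiner serves for both the first and the second matching degree, since the two are calculated in corresponding ways.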
In one embodiment, the server determines, according to the class labels corresponding to the target biological feature data and to the candidate biological feature data respectively, a target label vector corresponding to the target user and a candidate label vector corresponding to the candidate user; and determines, according to the target user preference data and the candidate user preference data respectively, a target user preference vector corresponding to the target user and a candidate user preference vector corresponding to the candidate user. The step of determining, according to the target biological feature data and the candidate user preference data, the first matching degree of the target user with respect to the candidate user then comprises: determining the first matching degree of the target user with respect to the candidate user according to the similarity between the target label vector and the candidate user preference vector. The step of determining, according to the candidate biological feature data and the target user preference data, the second matching degree of the candidate user with respect to the target user comprises: determining the second matching degree of the candidate user with respect to the target user according to the similarity between the candidate label vector and the target user preference vector.
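The embodiment above only requires some similarity measure between a label vector and a user preference vector. Cosine similarity, used in the sketch below, is one common choice and is an assumption here, not something the text mandates; the example preference vector is likewise invented for illustration.

```python
import math


def cosine_similarity(a, b):
    # Similarity between a label vector and a preference vector over the
    # same set of categories; cosine similarity is one (assumed) measure.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


# A target label vector over eight categories against an assumed candidate
# preference vector over the same eight categories.
target_labels = [0, 0, 0, 0, 0.2, 0.2, 0.6, 0]
candidate_pref = [0.5, 0.5, 0.5, 0.5, 0.3, 0.4, 0.9, 0.5]
first_matching_degree = cosine_similarity(target_labels, candidate_pref)
print(0.0 <= first_matching_degree <= 1.0)  # True
```

Swapping the arguments (candidate label vector against target preference vector) yields the second matching degree in the same way.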
Specifically, before performing the matching degree calculation, the server may classify the biological feature data corresponding to the target user and the candidate user in advance to obtain corresponding class labels. The server then determines the target label vector corresponding to the target user according to the class labels corresponding to the target user's target biological feature data, and determines the candidate label vector corresponding to the candidate user according to the class labels corresponding to the candidate user's candidate biological feature data.
In one embodiment, the step of determining, according to the class labels corresponding to the target biological feature data and the candidate biological feature data, the target label vector corresponding to the target user and the candidate label vector corresponding to the candidate user respectively specifically includes: classifying the target biological feature data and the candidate biological feature data respectively to obtain target class labels corresponding to the target user and candidate class labels corresponding to the candidate user; determining the target label vector corresponding to the target user according to the target class labels; and determining the candidate label vector corresponding to the candidate user according to the candidate class labels.
In one embodiment, classifying the biological feature data of a user (including the target user and the candidate user) yields multiple class labels, and the server may calculate the label vector corresponding to the user according to the frequency with which each class label occurs.
For example, when the biological feature data includes user images, the server may perform classification processing on the user images through a pre-trained image classification model to obtain the class label of each user image. The class labels are, for example: royal sister, loli, sweet girl, ordinary female, little fresh meat, uncle, stylish male, and ordinary male. When the user uploads multiple images, say 5, the server classifies these 5 images and obtains their class labels as: uncle, stylish male, stylish male, stylish male, little fresh meat. The image class labels corresponding to the user are then: royal sister (0), loli (0), sweet girl (0), ordinary female (0), little fresh meat (0.2), uncle (0.2), stylish male (0.6), ordinary male (0). The label vector of the user can accordingly be established as V1 = [0, 0, 0, 0, 0.2, 0.2, 0.6, 0], where each number in the label vector represents the frequency with which the corresponding class label occurs.
Similarly, when the biological feature data includes user speech, the server may classify the user speech to obtain the class label of each piece of user speech. The class labels are, for example: royal sister voice, loli voice, maiden voice, gloomy female voice, boy voice, young uncle voice, warm male voice, and son voice. When the user uploads multiple voice segments, say 10, the server classifies these 10 segments: 1 segment is assigned to the "royal sister voice" class, 7 segments are assigned to the "loli voice" class, and 2 segments are assigned to the "maiden voice" class. The voice class labels corresponding to the user are then: royal sister voice (0.1), loli voice (0.7), maiden voice (0.2), gloomy female voice (0.0), boy voice (0.0), young uncle voice (0.0), warm male voice (0.0), and son voice (0.0). The label vector of the user can accordingly be established as V2 = [0.1, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0], where each number in the label vector represents the frequency with which the corresponding class label occurs.
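Building a label vector from class-label frequencies, as in the image and voice examples above, can be sketched as follows. The fixed ordering of the eight voice labels is an assumption for illustration.

```python
from collections import Counter

# Assumed fixed ordering of the eight voice class labels.
VOICE_LABELS = ["royal sister voice", "loli voice", "maiden voice",
                "gloomy female voice", "boy voice", "young uncle voice",
                "warm male voice", "son voice"]


def label_vector(assigned_labels, label_order):
    # Each component is the frequency with which the corresponding class
    # label occurs among the user's classified samples.
    counts = Counter(assigned_labels)
    n = len(assigned_labels)
    return [counts[label] / n for label in label_order]


# 10 voice segments: 1 royal sister, 7 loli, 2 maiden (the V2 example).
clips = ["royal sister voice"] + ["loli voice"] * 7 + ["maiden voice"] * 2
print(label_vector(clips, VOICE_LABELS))
# [0.1, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
```

The same function reproduces the image example V1 = [0, 0, 0, 0, 0.2, 0.2, 0.6, 0] given the corresponding image-label ordering.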
It can be understood that the target class labels corresponding to the target user and the candidate class labels corresponding to the candidate user can be calculated in the above manner. The target class labels can be used to reflect some features of the target user, and the candidate class labels can be used to reflect some features of the candidate user.
In one embodiment, after the server classifies the target biological attribute data and the candidate biological attribute data respectively and obtains the target category label corresponding to the target user and the candidate category label corresponding to the candidate user, it can feed back the target category label to the terminal of the target user and the candidate category label to the terminal of the candidate user.
In the above embodiment, class labels are obtained by classifying biological attribute data. From the class labels corresponding to a user, the corresponding label vector can be determined accurately, and the user can then be represented by that label vector.
In one embodiment, the server can determine a user preference vector for a user according to the user's degree of liking for different categories of biological attribute data. According to the target user's degree of liking for different categories of biological attribute data, a target user preference vector corresponding to the target user is determined. According to the candidate user's degree of liking for different categories of biological attribute data, a candidate user preference vector corresponding to the candidate user is determined.
For example, the user preference data may be evaluations a user gives to the biological attribute data of other users, such as scoring the images of other users by degree of liking: the higher the score, the stronger the liking; the lower the score, the stronger the dislike. For instance, when a user scores the voices of other users, the server can average the scores given to voices that share the same class label and use the average as the score for that class label. When the user has not scored any voice of a certain category, the preference weight for that category can be set to a preset value of 0.5 (indicating neither preference nor dislike). Suppose the user's voice preference labels are: imperial elder sister voice (0.8), Loli voice (0.1), maiden voice (0.8), gloomy female voice (0.2), juvenile voice (0.5), green uncle voice (0.5), warm male voice (0.5), boy voice (0.5). The dimensionality equals the number of labels, and each value is the user's preference weight for the label: the larger the weight, the stronger the user's preference for that label; the smaller the weight, the weaker the preference. The user's voice preference vector is then U = [0.8, 0.1, 0.8, 0.2, 0.5, 0.5, 0.5, 0.5].
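The per-label averaging with the 0.5 fallback can be sketched as follows; the function name and label list are illustrative assumptions, while the neutral default 0.5 follows the description above.

```python
VOICE_LABELS = ["imperial elder sister", "loli", "maiden", "gloomy female",
                "juvenile", "green uncle", "warm male", "boy"]

def preference_vector(scores, labels=VOICE_LABELS, default=0.5):
    """Average the user's marks per class label; labels the user never
    marked fall back to the neutral preset value (neither like nor dislike)."""
    vec = []
    for label in labels:
        marks = scores.get(label, [])
        vec.append(sum(marks) / len(marks) if marks else default)
    return vec

u = preference_vector({"imperial elder sister": [0.8], "loli": [0.1],
                       "maiden": [0.7, 0.9], "gloomy female": [0.2]})
# u is approximately [0.8, 0.1, 0.8, 0.2, 0.5, 0.5, 0.5, 0.5]
```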
When the user preference data is obtained by the server analyzing the user's historical behavioral data, the server can likewise build the corresponding user preference vector. For example, the server can count, over the last month, how long the user dwells when browsing the images of other users, and judge from the dwell time whether the user prefers the corresponding label: the longer the dwell time, the more the user likes users with that label, so a higher preference value can be assigned. In this way, the server can collect the user's preference values for the user images that share the same class label, average them, and use the average as the preference score of that class label. The user preference vector is then established from the preference scores of all class labels.
Further, the server can determine the first matching degree of the target user for the candidate user according to the similarity between the target label vector and the candidate user preference vector, and determine the second matching degree of the candidate user for the target user according to the similarity between the candidate label vector and the target user preference vector.
The similarity between two vectors can be computed as a "distance" between them, and there are many such distance measures, for example the Euclidean distance, Manhattan distance, Chebyshev distance, cosine distance, or Hamming distance between the two vectors. This application is not limited in this respect.
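A few of the distance measures listed above can be sketched directly; this is a minimal illustration, not tied to any particular library.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```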
In one embodiment, the server multiplies the values of corresponding dimensions of the target label vector and the candidate user preference vector and sums the products to obtain the first matching degree, and likewise multiplies and sums corresponding dimensions of the candidate label vector and the target user preference vector to obtain the second matching degree. For example, suppose the target label vector is Va = [0.1, 0.7, 0.2, 0.0] and the candidate user preference vector is Ub = [0.8, 0.1, 0.8, 0.2]. Then the first matching degree P1 = Va*Ub = 0.1*0.8 + 0.7*0.1 + 0.2*0.8 + 0.0*0.2 = 0.31. Correspondingly, the second matching degree can be computed in the same manner.
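The dimension-wise multiply-and-sum above is simply a dot product; a sketch using the example's numbers (the function name is an assumption):

```python
def matching_degree(label_vec, preference_vec):
    """Multiply corresponding dimensions and sum the products."""
    return sum(l * p for l, p in zip(label_vec, preference_vec))

va = [0.1, 0.7, 0.2, 0.0]   # target label vector
ub = [0.8, 0.1, 0.8, 0.2]   # candidate user preference vector
p1 = matching_degree(va, ub)
# p1 is approximately 0.08 + 0.07 + 0.16 + 0.0 = 0.31
```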
In the above embodiment, the first matching degree can be determined quickly and accurately from the similarity between the target label vector of the target user and the preference vector of the candidate user; it reflects the degree to which the candidate user likes the target user. Likewise, the second matching degree can be determined quickly and accurately from the similarity between the candidate label vector of the candidate user and the preference vector of the target user; it reflects the degree to which the target user likes the candidate user.
In one embodiment, the server can perform feature extraction on the target biological attribute data and the candidate user preference data through a pre-trained machine learning model, and compare the extracted features to determine the similarity or difference between the target biological attribute data and the candidate user preference data. The similarity between the features of the two can be used as the first matching degree, or the inverse of the difference between the features of the two can be used as the first matching degree.
For example, the target biological attribute data may be an image of the target user, and the candidate user preference data may be a preference image representing the category of user the candidate user likes. The server can train a machine learning model in advance, extract the first image feature of the target user's image through the trained model, and extract the second image feature of the candidate user's preference image through the same model. The match between the target biological attribute data and the candidate user preference data is then determined by computing the similarity between the first image feature and the second image feature: the greater the similarity, the better the match and the larger the value of the first matching degree; the smaller the similarity, the worse the match and the smaller the value of the first matching degree. The similarity between image features can again be computed as a "distance" between the two vectors that represent the image features.
S206: determine the second matching degree of the candidate user for the target user according to the candidate biological attribute data and the target user preference data.
Specifically, the server can perform a matching degree calculation on the candidate biological attribute data and the target user preference data to determine the second matching degree of the candidate user for the target user. The second matching degree reflects the target user's preference for the candidate user: the higher the second matching degree, the more the target user likes the candidate user; the lower it is, the more the target user dislikes the candidate user.
It can be understood that in step S206, where the second matching degree of the candidate user for the target user is determined from the candidate biological attribute data and the target user preference data, the calculation of the second matching degree can follow that of the first matching degree in step S204; only the input data of the calculation differ.
S208: when the comprehensive matching degree determined from the first matching degree and the second matching degree meets the matching condition, determine the candidate user as a matching user for the target user.
The matching condition may specifically be that the comprehensive matching degree is greater than or equal to a matching degree threshold, or that, when the comprehensive matching degrees of the different candidate users are sorted by numerical value, the candidate's rank is before a preset rank. Specifically, the server can determine the comprehensive matching degree of a candidate user from the candidate user's first matching degree and second matching degree, judge whether the comprehensive matching degree meets the matching condition, and, when it does, determine the corresponding candidate user as a matching user for the target user.
In one embodiment, when there is a single candidate user, the server can compute the comprehensive matching degree from the target user's first matching degree for that candidate user and the candidate user's second matching degree for the target user. When the comprehensive matching degree meets the matching condition, the candidate user is taken as the matching user of the target user.
In one embodiment, when there is more than one candidate user, the server computes, for each candidate user, the target user's first matching degree for the candidate user and the candidate user's second matching degree for the target user, and then, from the computed first and second matching degrees, the comprehensive matching degree between that candidate user and the target user, until the comprehensive matching degree of every candidate user with the target user is obtained. From the candidate users, the server filters out those whose comprehensive matching degree with the target user meets the matching condition and takes them as matching users for the target user.
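The per-candidate loop can be sketched as follows; the plain average used for the comprehensive degree and the threshold value are illustrative assumptions (the weighted variant is described further below in the source).

```python
def select_matches(candidate_degrees, threshold=0.3):
    """candidate_degrees: {candidate_id: (first_degree, second_degree)}.
    Returns the candidates whose comprehensive matching degree (here a
    plain average of the two degrees) meets the matching condition."""
    matches = []
    for cid, (p1, p2) in candidate_degrees.items():
        if (p1 + p2) / 2 >= threshold:
            matches.append(cid)
    return matches

matched = select_matches({"A": (0.31, 0.50), "B": (0.10, 0.20)})
# matched == ["A"]
```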
In one embodiment, step S208 specifically includes: performing a weighted sum over the candidate user's first matching degree and second matching degree to obtain the comprehensive matching degree; and, when the comprehensive matching degree is greater than or equal to the matching degree threshold, determining the candidate user as a matching user for the target user.
Specifically, the server can perform a weighted sum over the candidate user's first matching degree and second matching degree to obtain the comprehensive matching degree, and then determine the candidate users whose comprehensive matching degree is greater than or equal to the matching degree threshold as matching users for the target user. The weighting coefficients of the weighted sum can be set or adjusted according to the actual situation.
For example, suppose the first matching degree has a first weighting coefficient and the second matching degree has a second weighting coefficient. Both coefficients can be set to 0.5, to match the target user with other users whom the target user likes and who like the target user equally. Alternatively, the server can set the first weighting coefficient greater than the second, for example 0.6 versus 0.4, to favor matching users who prefer the target user; or it can set the first weighting coefficient smaller than the second, for example 0.4 versus 0.6, to favor matching users whom the target user prefers.
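The weighting choices just described can be sketched as follows; the coefficient values follow the examples above, while the sample degrees are made-up inputs.

```python
def comprehensive_degree(p1, p2, w1=0.5, w2=0.5):
    """Weighted sum of the first and second matching degrees."""
    return w1 * p1 + w2 * p2

p1, p2 = 0.8, 0.4
balanced = comprehensive_degree(p1, p2)                   # equal weighting, 0.5/0.5
favor_candidate = comprehensive_degree(p1, p2, 0.6, 0.4)  # weight the first degree more
favor_target = comprehensive_degree(p1, p2, 0.4, 0.6)     # weight the second degree more
```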
In the above embodiment, a weighted sum over the candidate user's first matching degree and second matching degree yields the comprehensive matching degree, and the candidate user is determined as a matching user of the target user when the comprehensive matching degree is greater than or equal to the matching degree threshold. Matching the target user and the candidate user mutually in this way lets the target user be matched with other users whom the target user likes and who like the target user, which greatly improves the accuracy of user matching.
In one embodiment, the number of matching users the server determines for the target user may be one or more than one. The server can send the user information of the determined matching users to the terminal of the target user, and the target user can view the user information, start a conversation with a matching user based on it, or perform other interactive actions.
In the above user matching method, the first matching degree of the target user for the candidate user can be determined from the target biological attribute data of the target user and the candidate user preference data of the candidate user. Correspondingly, the second matching degree of the candidate user for the target user can be determined from the candidate biological attribute data of the candidate user and the target user preference data of the target user. From the first matching degree and the second matching degree, matching users for the target user are filtered out of the candidate users. Matching different users mutually on the basis of biological attribute data and the respective user preference data in this way lets the target user be matched with other users whom the target user likes and who like the target user, which greatly improves the accuracy of user matching.
In one embodiment, the target biological attribute data includes a target user image and the candidate biological attribute data includes a candidate user image. The step of classifying the target biological attribute data and the candidate biological attribute data respectively to obtain the target category label corresponding to the target user and the candidate category label corresponding to the candidate user specifically includes: inputting the target user image and the candidate user image separately into a pre-trained image classification model; performing image feature extraction on the target user image and the candidate user image through the image classification model; and determining, from the extracted image features, the target category label corresponding to the target user and the candidate category label corresponding to the candidate user.
The image classification model is a machine learning (ML) model that has acquired classification capability through training. A machine learning model can gain feature extraction and feature recognition capabilities by learning from samples. The image classification model may specifically be a neural network model, a support vector machine, a logistic regression model, or the like.
In one embodiment, the image classification model can be a complex network model formed by interconnecting multiple layers, and may specifically include convolutional layers and a fully connected layer. A convolutional layer is the feature extraction layer of a convolutional neural network; there can be several convolutional layers, each with its own convolution kernels, and each layer can have multiple kernels. A convolutional layer performs convolution operations on the input user image through its kernels to extract the image features of the user image. The fully connected layer is the feature classification layer of the convolutional neural network; it maps the extracted image features to the corresponding class labels according to the learned distributed feature mapping.
In one embodiment, after obtaining the target user image and the candidate user image, the server inputs them separately into the image classification model. The convolutional layers of the model perform convolution operations on the input user image layer by layer until the last convolutional layer completes its convolution operation, thereby extracting the image features; the extracted image features are then fed into the fully connected layer to obtain the target category label corresponding to the target user and the candidate category label corresponding to the candidate user.
In one embodiment, developers can train the classification model on training samples to obtain an image classification model whose model parameters give good classification performance. The training samples may specifically be user image samples together with the class label of each sample; the class label of each user image sample can be determined by labeling the samples manually.
In the above embodiment, the pre-trained image classification model can classify the target user image and the candidate user image accurately, which guarantees the accuracy of the classification.
In one embodiment, the target biological attribute data includes target user voice and the candidate biological attribute data includes candidate user voice. The step of classifying the target biological attribute data and the candidate biological attribute data respectively to obtain the target category label corresponding to the target user and the candidate category label corresponding to the candidate user specifically includes: obtaining the speech feature vector samples corresponding to more than one class label; determining, through a Gaussian mixture model, the target speech feature vector corresponding to the target user voice and the candidate speech feature vector corresponding to the candidate user voice; determining the target category label of the target user according to the similarity between the target speech feature vector and each speech feature vector sample; and determining the candidate category label of the candidate user according to the similarity between the candidate speech feature vector and each speech feature vector sample.
A speech feature vector sample is a feature vector that can represent the voices of a certain class label. A Gaussian mixture model (Gaussian Mixed Model, GMM) quantifies things precisely with Gaussian probability density functions; it is a model that decomposes things into a superposition of several Gaussian probability density functions.
Specifically, the server can use user voice samples and their class labels to determine the speech feature vector sample corresponding to each class label, and then extract through the Gaussian mixture model the target speech feature vector corresponding to the target user voice and the candidate speech feature vector corresponding to the candidate user voice. The similarity between the target speech feature vector and each speech feature vector sample is computed to determine the target category label of the target user, and the similarity between the candidate speech feature vector and each speech feature vector sample is computed to determine the candidate category label of the candidate user.
In one embodiment, the step of obtaining the speech feature vector samples corresponding to more than one class label specifically includes: obtaining user voice samples and the class label of each sample; determining through the Gaussian mixture model the speech feature vector of each user voice sample; and taking the mean vector of the speech feature vectors that share the same class label as the speech feature vector sample of that class label.
Specifically, the server can obtain user voice samples and the class label of each sample. The server can process the user voice samples with the kaldi toolkit (a speech recognition toolkit) to extract feature samples of the user voices. The feature samples are used to train a GMM (the training is unsupervised), and the trained GMM outputs a speech mean supervector. The speech mean supervector is input into an i-vector extractor (IE), which determines the speech feature vector, also called the i-vector, of each user voice sample. The i-vectors of all the user voice samples form the training set. The mean vector of the i-vectors that share the same class label in the training set is then taken as the speech feature vector sample of that class label. Taking the mean vector as the speech feature vector sample of a class label can be referred to as registration; it can be understood as registering the speech feature vector samples of the different class labels.
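The registration step, averaging i-vectors per class label, can be sketched without kaldi or a real GMM; the vectors below are toy stand-ins for i-vectors, and the function name is an assumption.

```python
def register(ivectors, labels):
    """Mean of the i-vectors sharing a class label, used as that
    label's speech feature vector sample (the 'registration' step)."""
    samples = {}
    for label in set(labels):
        group = [v for v, l in zip(ivectors, labels) if l == label]
        dim = len(group[0])
        samples[label] = [sum(v[i] for v in group) / len(group)
                          for i in range(dim)]
    return samples

samples = register([[1.0, 0.0], [3.0, 2.0], [0.0, 4.0]],
                   ["loli", "loli", "maiden"])
# samples["loli"] == [2.0, 1.0]; samples["maiden"] == [0.0, 4.0]
```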
In one embodiment, the i-vector extractor can compute the i-vector by the formula M = m + Tw, where M is the supervector of the user voice, m is the mean supervector output by the Gaussian mixture model, T is the total variability space matrix, and w is the i-vector, that is, the speech feature vector. The total variability space matrix T can specifically be computed iteratively by the EM algorithm (Expectation-Maximization algorithm) based on the maximum likelihood criterion.
In the above embodiment, the speech feature vector of each user voice sample can be determined through the Gaussian mixture model, and the mean vector of the speech feature vectors that share the same class label is taken as the speech feature vector sample of that class label. The speech feature vector samples obtained this way can serve as the reference for subsequently classifying user voices, which improves the accuracy of the voice classification.
In one embodiment, when the server needs to classify the target user voice and the candidate user voice, it can process them with the Gaussian mixture model trained in the previous embodiment. Taking the target user voice as an example: the server processes the target user voice with the kaldi toolkit to extract the features of the target user voice, inputs the extracted features into the trained GMM to obtain the target speech mean supervector, and inputs the target speech mean supervector into the i-vector extractor, which determines the target speech feature vector, that is, the i-vector, of the target user voice. It can be understood that the candidate user voice is processed into the candidate speech feature vector in the same way, which is not repeated here.
Further, after the server computes the target speech feature vector of the target user voice and the candidate speech feature vector of the candidate user voice, it can compute the similarity between the target speech feature vector and each speech feature vector sample to determine the target category label of the target user, and the similarity between the candidate speech feature vector and each speech feature vector sample to determine the candidate category label of the candidate user.
In one embodiment, the step of determining the target category label of the target user according to the similarity between the target speech feature vector and each speech feature vector sample specifically includes: determining the voice similarity between the target speech feature vector and each speech feature vector sample; when the voice similarities meet the first similarity difference condition, determining the speech feature vector sample with the largest voice similarity; and taking the class label of that speech feature vector sample as the target category label of the target user.
The first similarity difference condition means that the difference value between the largest voice similarity and the second largest voice similarity is greater than or equal to a preset threshold. The difference value may specifically be a difference, a quotient, or the like. The second largest voice similarity is the one ranked second when the voice similarities between the target speech feature vector and the speech feature vector samples are sorted by numerical value.
Specifically, the server can compute the cosine distance between the target speech feature vector and each speech feature vector sample and determine the voice similarities from the cosine distances. Alternatively, the server can compute the similarity between the target speech feature vector and each speech feature vector sample in other ways, which is not limited here.
Further, in the set of similarities between the target speech feature vector and the speech feature vector samples, the larger the difference between the largest similarity and the second largest similarity, the closer the speech feature vector sample of the largest similarity is to the target speech feature vector. The class label of that speech feature vector sample can therefore be taken as the target category label of the target user.
In the above embodiment, when the voice similarities between the target speech feature vector and the speech feature vector samples meet the first similarity difference condition, the target user voice is classified to the class label of the speech feature vector sample with the largest voice similarity.
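The first similarity difference condition amounts to a margin check between the top two similarities; a minimal sketch, where the threshold value and the None fallback are illustrative assumptions:

```python
def classify_by_margin(similarities, threshold=0.1):
    """Return the class label of the most similar sample only when it
    beats the runner-up by at least `threshold`; otherwise return None,
    signalling that a fallback (such as a composite label) is needed."""
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    (best_label, best_sim), (_, second_sim) = ranked[0], ranked[1]
    if best_sim - second_sim >= threshold:
        return best_label
    return None

label = classify_by_margin({"maiden": 0.9, "imperial elder sister": 0.5,
                            "loli": 0.3})
# label == "maiden"
```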
In one embodiment, the step of determining the target category label of the target user according to the similarity between the target speech feature vector and each speech feature vector sample further includes: when the voice similarities meet the second similarity difference condition, sorting the voice similarities by numerical value; determining a composite label from the speech feature vector samples of a preset number of voice similarities counted from the top of the ranking; and taking the composite label as the target category label of the target user.
The second similarity difference condition may specifically be that, after the voice similarities are sorted by numerical value, some or all of the pairwise difference values among the top preset number of voice similarities are smaller than the preset threshold. The pairwise difference value between voice similarities may specifically be a difference, a quotient, or the like.
Specifically, when the voice similarities meet the second similarity difference condition, the top-ranked voice similarities are close to each other. In this case, directly classifying the target user voice to the class label of the sample with the largest voice similarity would be less accurate. The server can therefore determine a composite label from the speech feature vector samples of the preset number of voice similarities counted from the top of the ranking, and classify the target user voice to that composite label.
In one embodiment, the composite label corresponding to each combination of speech feature vector samples can be preset in the server. The server can decide, from whether the pairwise difference values among the voice similarities between the target speech feature vector and the speech feature vector samples are smaller than the preset threshold, which combination of speech feature vector samples corresponds to the target speech feature vector, and thereby determine the corresponding composite label.
For example, when the speech feature vector samples include sample 1, sample 2, and sample 3, several combinations of samples exist: combination 1 of sample 1 and sample 2; combination 2 of sample 1 and sample 3; combination 3 of sample 2 and sample 3; and combination 4 of sample 1, sample 2, and sample 3. The server can set combination 1 to correspond to composite label 1, combination 2 to composite label 2, combination 3 to composite label 3, and combination 4 to composite label 4.
When the server computes the voice similarities between the target speech feature vector and the speech feature vector samples, it can sort the similarities by numerical value. Suppose, for example, that voice similarity 1 between the target speech feature vector and sample 1 is the largest and ranks first, voice similarity 2 with sample 2 is the second largest and ranks second, and voice similarity 3 with sample 3 is the smallest and ranks third.
Several situations can then arise among the voice similarities. When the difference value between voice similarity 1 and voice similarity 2 is smaller than the preset threshold while the difference values between the other voice similarities are greater than or equal to it, the corresponding composite label is composite label 1, determined from combination 1 of sample 1 and sample 2. When the difference value between voice similarity 1 and voice similarity 2 is smaller than the preset threshold and so is the difference value between voice similarity 2 and voice similarity 3, the corresponding composite label is composite label 4, determined from combination 4 of sample 1, sample 2, and sample 3.
The following illustrates a concrete case from practical application. Suppose the voice similarity between the target user's voice and a girlish voice class is the largest, and the voice similarity between the target user's voice and a mature female voice class is the second largest, and the two similarities are so close that their difference is less than the preset threshold. In that case, classifying the user's voice as either the girlish voice class or the mature female voice class alone is not accurate enough. A blended tone can therefore be set up as the composite voice category of the girlish and mature female classes, that is, a composite label, and the target user's voice can be classified under that composite label.
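The selection rule described above can be sketched in code. This is only an illustrative sketch, not the patent's implementation: the threshold value, the class names and the composite-label mapping are all assumed for the example, and the near-tie test is simplified to comparing each similarity against the top-ranked one.

```python
def pick_label(similarities, composite_labels, threshold=0.05):
    """similarities: dict mapping class label -> voice similarity.
    composite_labels: dict mapping a frozenset of class labels -> composite label."""
    # Sort the voice similarities in descending order of magnitude.
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    top_label, top_score = ranked[0]
    # Gather every class whose similarity is within `threshold` of the best.
    close = [label for label, s in ranked if top_score - s < threshold]
    if len(close) == 1:
        return top_label                       # clear winner: single class label
    return composite_labels[frozenset(close)]  # near-tie: use the composite label

sims = {"girlish": 0.82, "mature": 0.80, "boyish": 0.41}
composites = {frozenset({"girlish", "mature"}): "girlish-mature blend"}
print(pick_label(sims, composites))  # prints: girlish-mature blend
```

With these toy values, the girlish and mature similarities differ by only 0.02, so the blended composite label is returned rather than either single class.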
In the above embodiment, when the differences among the voice similarities between the target voice feature vector and the speech feature vector samples satisfy the second similarity difference condition, the voice similarities are sorted in descending order of magnitude, and a composite label is determined from the speech feature vector samples corresponding to the first preset number of voice similarities in the sorted order; the composite label is then used as the target category label corresponding to the target user. This greatly improves the accuracy of voice classification in actual classification operations, increases the applicability of voice classification, and fits actual conditions.
As for the specific process of determining the candidate category label corresponding to the candidate user according to the similarities between the candidate speech feature vector and the speech feature vector samples, reference can be made to the specific process, described in the preceding embodiments, of determining the target category label corresponding to the target user according to the similarities between the target voice feature vector and the speech feature vector samples; details are not repeated here.
With reference to Fig. 3, Fig. 3 is a processing framework diagram of classifying user speech in one embodiment. As shown in Fig. 3, the voice classification processing is divided into three stages: training, registration and testing. The dotted-arrow part indicates the training stage: features of the user speech samples (also called training features) are first extracted with the Kaldi toolkit, a GMM model is trained on the training features, an i-vector extractor is then trained, and the i-vectors corresponding to the user speech samples, that is, the i-vectors of the training set, are output through the trained GMM model and i-vector extractor. The bold solid-arrow part is the registration stage: the mean vector of the i-vectors corresponding to the same category label is taken as the speech feature vector sample corresponding to that category label, namely standard timbre 1, standard timbre 2 and standard timbre 3 in Fig. 3. Standard timbre 1, standard timbre 2 and standard timbre 3 correspond to different category labels.
The black non-bold solid-arrow part is the test stage, that is, the classification stage. Features of the user data (also called test features) are first extracted with the Kaldi toolkit, and the extracted features are passed through the trained GMM model and i-vector extractor to obtain the i-vectors of the test set. The i-vector of the test set is compared with each of the three registered standard timbres by a similarity calculation (for example, cosine distance), a matching score is determined according to each similarity, and the category label corresponding to the standard timbre with the highest matching score is taken as the category label of the test speech.
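The registration and test stages of Fig. 3 can be sketched as follows, assuming the i-vectors have already been extracted upstream (for example by Kaldi with a trained GMM and i-vector extractor, which is outside the scope of this sketch). Registration averages each class's i-vectors into a standard timbre; testing picks the class whose standard timbre has the highest cosine similarity to the test i-vector. The vectors and label names are toy values for illustration.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def register(ivectors_by_label):
    """Registration stage: mean i-vector per category label = the standard timbre."""
    return {label: [sum(col) / len(vectors) for col in zip(*vectors)]
            for label, vectors in ivectors_by_label.items()}

def classify(test_ivector, standard_timbres):
    # Test stage: label of the standard timbre with the highest similarity.
    return max(standard_timbres,
               key=lambda label: cosine(test_ivector, standard_timbres[label]))

enrolled = {
    "juvenile": [[1.0, 0.1], [0.9, 0.0]],
    "mature":   [[0.1, 1.0], [0.0, 0.9]],
}
timbres = register(enrolled)
print(classify([0.8, 0.2], timbres))  # prints: juvenile
```

Cosine distance is simply one minus this similarity, so ranking by highest similarity is equivalent to ranking by lowest distance.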
In the above embodiment, the target voice feature vector corresponding to the target user's voice and the candidate speech feature vector corresponding to the candidate user's voice are determined respectively through a Gaussian mixture model, and by performing similarity calculations between the target voice feature vector, the candidate speech feature vector and the speech feature vector samples, user speech can be classified accurately, conveniently and efficiently.
In one embodiment, the target biometric data includes a target user image and target user voice; the candidate biometric data includes a candidate user image and candidate user voice; the target user preference data includes target user image preference data and target user voice preference data; and the candidate user preference data includes candidate user image preference data and candidate user voice preference data. The step of determining the first matching degree of the target user for the candidate user according to the target biometric data and the candidate user preference data specifically includes: determining a first image matching degree of the target user for the candidate user according to the target user image and the candidate user image preference data; determining a first voice matching degree of the target user for the candidate user according to the target user voice and the candidate user voice preference data; and performing a weighted summation of the first image matching degree and the first voice matching degree to obtain the first matching degree of the target user for the candidate user. The step of determining the second matching degree of the candidate user for the target user according to the candidate biometric data and the target user preference data specifically includes: determining a second image matching degree of the candidate user for the target user according to the candidate user image and the target user image preference data; determining a second voice matching degree of the candidate user for the target user according to the candidate user voice and the target user voice preference data; and performing a weighted summation of the second image matching degree and the second voice matching degree to obtain the second matching degree of the candidate user for the target user.
In one embodiment, the biometric data specifically includes a user image and user voice; correspondingly, the user preference data specifically includes user image preference data and user voice preference data. Thus, when calculating the first matching degree, the first image matching degree and the first voice matching degree can be calculated separately and then combined by weighted summation to obtain the first matching degree. When calculating the second matching degree, the second image matching degree and the second voice matching degree can be calculated separately and then combined by weighted summation to obtain the second matching degree. The weight coefficients of the weighted summation can be configured and adjusted according to the relative importance given to the user image and the user voice, and are not limited here.
In one embodiment, the server can directly take the mean of the first image matching degree and the first voice matching degree as the first matching degree, and the mean of the second image matching degree and the second voice matching degree as the second matching degree. For example, let the first image matching degree be Pa1 and the first voice matching degree be Pb1; the first matching degree can then be calculated as P1 = (Pa1 + Pb1) / 2. Correspondingly, let the second image matching degree be Pa2 and the second voice matching degree be Pb2; the second matching degree can then be calculated as P2 = (Pa2 + Pb2) / 2.
In one embodiment, the server can calculate a comprehensive matching degree between the two users according to the first matching degree and the second matching degree between the target user and the candidate user, for example the comprehensive matching degree P3 = (P1 + P2) / 2. The server can then screen out, from the more than one candidate users, the matching user matched with the target user according to the comprehensive matching degree.
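The matching-degree calculations above can be sketched as follows. The sketch uses the equal-weight averaging of the example (P1 = (Pa1 + Pb1)/2, P2 = (Pa2 + Pb2)/2, P3 = (P1 + P2)/2); the specific weights and numeric inputs are illustrative, since in general the weight coefficients are configurable.

```python
def weighted_match(image_degree, voice_degree, w_image=0.5, w_voice=0.5):
    # Weighted summation of the per-modality matching degrees.
    return w_image * image_degree + w_voice * voice_degree

pa1, pb1 = 0.7, 0.9   # first image / first voice matching degrees (target -> candidate)
pa2, pb2 = 0.6, 0.8   # second image / second voice matching degrees (candidate -> target)

p1 = weighted_match(pa1, pb1)   # first matching degree
p2 = weighted_match(pa2, pb2)   # second matching degree
p3 = (p1 + p2) / 2              # comprehensive matching degree
print(round(p1, 2), round(p2, 2), round(p3, 2))  # prints: 0.8 0.7 0.75
```

A check-in matching degree, as in the embodiment below, would simply enter `weighted_match` as a third weighted term.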
In one embodiment, the biometric data further includes user social check-in data, and the user preference data further includes user check-in preference data. The user social check-in data may specifically be the geographic-location check-in behavior the user performs after going somewhere; the user check-in preference data may specifically be the user's preference data for different check-in locations. Thus, the server can also determine a first check-in matching degree of the target user for the candidate user according to the target user social check-in data corresponding to the target user and the candidate user check-in preference data corresponding to the candidate user, and determine a second check-in matching degree of the candidate user for the target user according to the candidate user social check-in data corresponding to the candidate user and the target user check-in preference data corresponding to the target user. The server performs a weighted summation of the first image matching degree, the first voice matching degree and the first check-in matching degree to obtain the first matching degree of the target user for the candidate user, and performs a weighted summation of the second image matching degree, the second voice matching degree and the second check-in matching degree to obtain the second matching degree of the candidate user for the target user.
In the above embodiment, the first matching degree of the target user for the candidate user is determined according to the first image matching degree and the first voice matching degree, and the second matching degree of the candidate user for the target user is determined according to the second image matching degree and the second voice matching degree. Fusing the matching situations of user images and user voices in this way makes the comprehensive matching degree determined from the first matching degree and the second matching degree reflect the matching situation of the target user and the candidate user more accurately, greatly improving the accuracy of user matching.
As shown in Fig. 4, in one embodiment, a user matching method is provided. The user matching method can specifically be applied to the terminal 110 or the server 120 in Fig. 1 above; the following description mainly takes application to the terminal 110 in Fig. 1 as an example. Referring to Fig. 4, the user matching method specifically comprises the following steps:
S402: obtain the collected biometric data corresponding to the target user.
Specifically, when the terminal detects a biometric data collection instruction, it can call a biometric collection device to collect the biometric data corresponding to the target user. The terminal can store the collected biometric data locally and retrieve the biometric data of the target user when needed.
In one embodiment, when the biometric data to be collected includes a user image of the target user, the terminal can call a camera to collect the user image of the target user, or select a user image from a local photo album. When the biometric data to be collected includes the user voice of the target user, the terminal can call a voice collection device to record the user voice of the target user, or select the user voice from local audio data.
S404: when a user matching instruction is generated, obtain the user information of the matching user matched with the target user, where the user preference data corresponding to the matching user matches the biometric data corresponding to the target user, and the biometric data corresponding to the matching user matches the user preference data corresponding to the target user.
The user matching instruction is an instruction for triggering the user matching action. The user information is identity information of the matching user, which can specifically be a user image or a user name, etc. Specifically, when the terminal generates the user matching instruction, the terminal can send the matching instruction to the server, so that the server matches the user preference data corresponding to each candidate user against the biometric data corresponding to the target user, and matches the biometric data corresponding to each candidate user against the user preference data corresponding to the target user, thereby screening out from the candidate users the matching user matched with the target user. The terminal can then receive the user information of the matching user fed back by the server.
In one embodiment, when the terminal generates the user matching instruction, the terminal can report the biometric data corresponding to the target user (also called target biometric data) to the server. After receiving the target biometric data, the server can obtain the user preference data corresponding to the target user (also called target user preference data), the biometric data corresponding to each candidate user (also called candidate biometric data), and the user preference data corresponding to each candidate user (also called candidate user preference data). Further, the server can determine the first matching degree of the target user for the candidate user according to the target biometric data and the candidate user preference data; determine the second matching degree of the candidate user for the target user according to the candidate biometric data and the target user preference data; and, when the comprehensive matching degree determined by the first matching degree and the second matching degree of a candidate user satisfies a matching condition, determine that candidate user as the matching user corresponding to the target user. For the specific process by which the server matches candidate users with the target user, reference can be made to the description in the preceding embodiments, and details are not repeated here.
In one embodiment, the user matching method further includes a step of obtaining user preference data, which specifically includes: entering a user interaction interface, and presenting biometric test data in the user interaction interface; and obtaining the user preference data obtained by the target user performing a preference test on the biometric test data.
Specifically, the terminal can present the biometric test data of other users in the user interaction interface. The biometric test data is used to test the preferences of the target user. The target user can perform a preference test on the biometric test data through the interaction interface, and the terminal obtains the user preference data according to the user's preference test results and reports it to the server.
In one embodiment, the preference test that the user performs on the biometric test data of other users can specifically be scoring the biometric test data, for example giving five stars to what is liked very much, one star to what is strongly disliked, and a middling number of stars to what is neither liked nor disliked. In this way, the terminal can quantify the user's preference for the biometric data according to the user's scoring behavior, for example: five stars correspond to 1 point, four stars to 0.8 points, three stars to 0.6 points, two stars to 0.4 points, one star to 0.2 points, and so on.
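The quantification of star ratings into preference scores can be sketched directly from the mapping above; the function name is illustrative, not from the patent.

```python
# Star rating -> quantified preference score, per the mapping in the text.
STAR_SCORES = {5: 1.0, 4: 0.8, 3: 0.6, 2: 0.4, 1: 0.2}

def preference_score(stars):
    """Quantify a user's star rating of a piece of biometric test data."""
    return STAR_SCORES[stars]

# e.g. a four-star rating contributes 0.8 to the user's preference data
# for that test image or test voice.
print(preference_score(4))  # prints: 0.8
```

Accumulating these scores per test item yields the user preference data that the terminal reports to the server.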
With reference to Fig. 5(a) and Fig. 5(b), Fig. 5(a) is an interface schematic diagram of the target user performing a preference test on a test user image in one embodiment. A test user image, which is the user image of another user, can be displayed in the interaction interface. The target user can select "heart" patterns, and the number of "heart" patterns selected corresponds to the score the target user gives the test user image: the more hearts, the higher the score. Below the test user image, the interaction interface may also display the score of the test user image after other users have scored it, for example 7.2 points. It can be understood that the test user image is the user image of a user other than the target user, and correspondingly, the user image of the target user can serve as a test user image for other users.
Fig. 5(b) is an interface schematic diagram of the target user performing a preference test on test user speech in one embodiment. A play control for playing the test user speech can be displayed in the interaction interface, and the target user can click the play control to listen to the test user speech. The target user can select "heart" patterns, and the number of "heart" patterns selected corresponds to the score the target user gives the test user speech: the more hearts, the higher the score. Below the test user speech, the interaction interface may also display the score of the test user speech after other users have scored it, for example 9.0 points. It can be understood that the test user speech is the user speech of a user other than the target user, and correspondingly, the user speech of the target user can serve as test user speech for other users.
In the above embodiment, the user preference data obtained by the target user performing preference tests on biometric test data in the interaction interface can be obtained, and in the subsequent user matching process, user preferences can be analyzed based on the user's direct impressions. Users are thereby matched based on both their actual conditions and their preferences, which greatly improves the accuracy of user matching.
In one embodiment, the server can collect historical user behavior data reported by the terminal and analyze the historical behavior data to obtain the user preference data. For example, the terminal can collect user behavior data such as the user's web browsing information, location information, device model or online time periods and report it to the server, and the server can analyze information such as how frequently the user views particular images, the user's interactions with different users in the social network, and the places the user often goes, so as to analyze the appearance, personality or behavior of the kind of person the user prefers.
In one embodiment, the biometric data includes a user image and user voice. That the user preference data corresponding to the matching user matches the biometric data corresponding to the target user comprises: the user image preference data corresponding to the matching user matches the user image corresponding to the target user, and the user voice preference data corresponding to the matching user matches the user voice corresponding to the target user. That the biometric data corresponding to the matching user matches the user preference data corresponding to the target user comprises: the user image corresponding to the matching user matches the user image preference data corresponding to the target user, and the user voice corresponding to the matching user matches the user voice preference data corresponding to the target user.
As for how the server matches the user image preference data corresponding to the matching user against the user image corresponding to the target user, matches the user voice preference data corresponding to the matching user against the user voice corresponding to the target user, matches the user image corresponding to the matching user against the user image preference data corresponding to the target user, and matches the user voice corresponding to the matching user against the user voice preference data corresponding to the target user, reference can be made to the descriptions in the preceding embodiments of the server calculating the first image matching degree, the first voice matching degree, the second image matching degree and the second voice matching degree; details are not repeated here.
S406: output the user information of the matching user.
Specifically, after obtaining the user information of the matching user fed back by the server, the terminal outputs the user information. In one embodiment, the number of matching users fed back by the server can be one or more than one. When more than one matching user is fed back, the terminal can display the user information of the matching users in different regions according to their different matching degrees with the target user. In one embodiment, the terminal can highlight the user information of the matching user with the highest matching degree with the target user, so as to recommend that matching user to the target user.
In the above user matching method, the collected biometric data corresponding to the target user is obtained, and when a user matching instruction is generated, the user information of the matching user matched with the target user is obtained and output, where the user preference data corresponding to the matching user matches the biometric data corresponding to the target user, and the biometric data corresponding to the matching user matches the user preference data corresponding to the target user. In this way, by mutually matching users' biometric data and user preference data, the target user can be matched to other users whom the target user likes and who like the target user, which greatly improves the accuracy of user matching.
In one embodiment, step S402, that is, the step of obtaining the collected biometric data corresponding to the target user, specifically includes the following steps:
S602: enter a user interaction interface, and display in the user interaction interface an opening entrance to a biometric data collection interface.
The interaction interface is the interface through which the user interacts with the computer device or with other users. The opening entrance is an entrance that triggers entry into the biometric data collection interface, and can specifically be a virtual icon or a virtual key, etc. Specifically, the user can enter the user interaction interface through the terminal, and the opening entrance to the biometric data collection interface is displayed in the user interaction interface.
With reference to Fig. 7, Fig. 7 is an interface schematic diagram of a user interaction interface in one embodiment. As shown in Fig. 7, an opening entrance 701 to the biometric data collection interface is displayed in the user interaction interface. When the biometric data to be collected is a user image, the opening entrance can be displayed as a "camera" virtual icon; when the biometric data to be collected is user speech, the opening entrance can be displayed as a "recording" virtual icon.
S604: when an opening instruction for the opening entrance is obtained, display the biometric data collection interface.
Specifically, when the terminal detects the opening instruction for the opening entrance, it can jump from the interaction interface to the biometric data collection interface. The opening instruction for the opening entrance can specifically be generated when a preset operation acting on the opening entrance is triggered.
S606: when a trigger operation on the biometric data collection interface occurs, call the corresponding biometric collection device to collect the biometric data of the target user.
The trigger operation is a preset operation acting on the biometric data collection interface; detecting the trigger operation triggers calling the corresponding biometric collection device to collect the biometric data of the target user. The trigger operation can specifically be a touch operation, a cursor operation, a key operation or a voice operation. The touch operation can be a touch-click operation, a touch-press operation or a touch-slide operation, and can be a single-touch or multi-touch operation; the cursor operation can be an operation of controlling the cursor to click or to press; the key operation can be a virtual-key operation or a physical-key operation, etc.
Specifically, when the trigger operation on the biometric data collection interface occurs, the terminal can call the corresponding biometric collection device to collect the biometric data of the target user.
The collection process of biometric data is illustrated below for collecting a user image and collecting user speech respectively. For the collection of a user image, with reference to Fig. 8(a), Fig. 8(a) is an interface schematic diagram of a user image collection interface in one embodiment. When the opening entrance in Fig. 7 is the "camera" virtual icon, the user can click the "camera" virtual icon to enter the user image collection interface shown in Fig. 8(a). In the user image collection interface, the user can click the "+" icon 801 to enter the user image display list shown in Fig. 8(b). As shown in Fig. 8(b), the user has not yet uploaded any user image, so the user image display list is empty. The user can upload a user image following the prompt in the user image display list interface of Fig. 8(b), "Upload your photo and see how many people find you attractive; remember to use a solo photo with your face showing". Fig. 8(b) also shows a "photo examples" hyperlink, which the user can click to consult photo examples. When the user clicks the "+" icon 802 in Fig. 8(b), a user image can be selected from the photo album, or collected by the camera, and uploaded. With reference to Fig. 8(c), Fig. 8(c) is an interface schematic diagram of a user uploading a user avatar in one embodiment. When the user clicks the cancel icon 803 in Fig. 8(c), the upload of the image can be cancelled and the user image collection process re-entered. The user can score their own image through the image evaluation region 804 below Fig. 8(c); more hearts indicate more liking. With reference to Fig. 8(d), Fig. 8(d) is an interface schematic diagram of the user image display list after the user image upload is completed in one embodiment. As shown in Fig. 8(d), after the user uploads a user image, the user images historically uploaded by the user can be displayed in the user image list. The scoring of the user image by other users can be displayed to the right of the user image, for example "own score: 0.0", "face-rater score: no one has scored yet" and "face raters: 0". The interface shown in Fig. 8(d) can still display the entrance for adding an image, through which the user can enter the process of adding a user image. The user can also click the "preview photos" hyperlink to preview the user images historically uploaded by the user.
For the collection of user speech, with reference to Fig. 9(a), Fig. 9(a) is an interface schematic diagram of a user speech collection interface in one embodiment. When the opening entrance in Fig. 7 is the "recording" virtual icon, the user can click the "recording" virtual icon to enter the user speech collection interface shown in Fig. 9(a). In the user speech collection interface, the user can click the "recording" icon 901 to start voice recording. With reference to Fig. 9(b), Fig. 9(b) is an interface schematic diagram of the user recording voice in one embodiment. As in Fig. 9(b), the text to be recorded can be displayed in the interface, and the user can record the voice with reference to that text. The user can click or press the recording control 902 to call the local sound collection device to collect the user speech. When the user releases the recording control 902, the voice recording is completed and the interface proceeds to Fig. 9(c). Fig. 9(c) is an interface schematic diagram of completed user speech recording in one embodiment; a play control 903, a re-record control 904 and a finish control 905 are displayed at the bottom of Fig. 9(c). The user can click the play control 903 to audition the voice data just recorded, click the re-record control to return to Fig. 9(b) and record the voice data again, or click the finish control to end this voice recording. With reference to Fig. 9(d), Fig. 9(d) is an interface schematic diagram of the user speech display list after the voice data upload is completed in one embodiment. As shown in Fig. 9(d), after the user uploads user speech, the user speech historically uploaded by the user can be displayed in the user speech list. The scoring of the user speech by other users can be displayed to the right of the user voice data, for example "voice appeal: no one has been moved yet" and "admirers: 0". The interface shown in Fig. 9(d) can still display the entrance for adding voice, through which the user can enter the process of adding user speech. The category label corresponding to the user speech, for example "juvenile voice" in Fig. 9(d), may also be displayed.
With reference to Fig. 10(a) and Fig. 10(b), Fig. 10(a) is an interface schematic diagram of user speech analysis in one embodiment, and Fig. 10(b) is a schematic diagram of the presentation of a voice recognition result in one embodiment. As shown in Fig. 10(a), when the terminal reports the user speech to the server, the server can analyze and process the user speech to obtain the classification of the voice. During this process, the terminal can display the user speech analysis interface while waiting for the return of the voice recognition result. When the terminal receives the voice recognition result, it jumps to the interface shown in Fig. 10(b) and presents the voice classification result, for example displaying that the category label corresponding to the user speech is "juvenile voice".
In the above embodiment, the biometric data collection interface is entered by obtaining the opening instruction for the opening entrance in the interaction interface, so as to trigger the biometric collection device to collect the biometric data of the target user; the biometric data of the target user can thereby be collected conveniently and quickly.
In one embodiment, step S404, that is, the step of obtaining the user information of the matching user matched with the target user when a user matching instruction is generated, specifically includes: entering a user matching trigger interface, and displaying in the user matching trigger interface an operation entrance for triggering user matching; and, when a trigger operation corresponding to the operation entrance is generated, initiating a matching request corresponding to the target user, the matching request being used to trigger screening of the matching user matched with the target user according to the first matching degree between the target biometric data corresponding to the target user and the candidate user preference data corresponding to each candidate user, and the second matching degree between the target user preference data corresponding to the target user and the candidate biometric data corresponding to each candidate user. Step S406, that is, the step of outputting the user information of the matching user, includes: entering a matching result display interface, and displaying the user information of the matching user in the matching result display interface.
Specifically, with reference to Fig. 11(a), Fig. 11(a) is an interface schematic diagram of the user matching trigger interface in one embodiment. The terminal can enter the user matching trigger interface and display in it the operation entrance for triggering user matching, such as the "start matching" control 1101 in Fig. 11(a). When the trigger operation corresponding to the operation entrance is generated, the terminal can initiate the matching request corresponding to the target user to the server. The matching request is used to instruct the server to screen out the matching user matched with the target user according to the first matching degree between the target biometric data corresponding to the target user and the candidate user preference data corresponding to each candidate user, and the second matching degree between the target user preference data corresponding to the target user and the candidate biometric data corresponding to each candidate user.
With reference to Figure 11(b), Figure 11(b) is a schematic diagram of the interface displayed while waiting for the matching result after the terminal initiates the matching request corresponding to the target user in one embodiment. After the user clicks the "start matching" control 1101 in Figure 11(a), the terminal can jump to Figure 11(b) to wait for the matching result to be returned.
With reference to Figure 11(c), Figure 11(c) is a schematic diagram of the matching result display interface in one embodiment. After the terminal receives the user information of the matching users fed back by the server, it can display that user information in the matching result display interface. As shown in Figure 11(c), the user information of the matching user with the highest matching degree to the target user is displayed in the middle of the matching result display interface, and the user information of the other matching users is displayed around it.
In the above embodiment, by displaying an operation entry for triggering user matching in the user matching trigger interface, a trigger action on the operation entry can be detected conveniently and efficiently, so that the matching request can be initiated in time, improving the operating efficiency of user matching.
In one embodiment, the user matching method further includes a step of jumping, when a trigger action on the user information occurs, to a session interface in which the target user converses with the matching user. Specifically, the terminal can detect trigger actions occurring in the matching result display interface; when a trigger action on the user information occurs, the terminal can jump to the session interface in which the target user converses with the matching user, so that the user can converse with the matched matching user in that session interface.
In the above embodiment, when a trigger action on the user information occurs, the terminal can jump directly to the session interface in which the target user converses with the matching user; the user can thus converse directly with the matching user recommended by the system, improving session efficiency.
In one embodiment, the user matching method further includes a step of displaying a user evaluation report. This step specifically includes: entering a user evaluation report access interface; when a trigger action on the user evaluation report access interface occurs, initiating a user evaluation report access request, the request being used to trigger generation of a user evaluation report according to the biometric data and user preference data corresponding to the target user; and displaying the user evaluation report.
Specifically, the terminal can enter the user evaluation report access interface and, when a trigger action on that interface occurs, initiate the user evaluation report access request. The terminal or the server can generate the user evaluation report according to the biometric data and user preference data corresponding to the target user, and the terminal displays the report.
In one embodiment, the terminal or the server can analyze the biometric data, the user preference data, the historical behavior data and the user information corresponding to the target user to generate the user evaluation report.
With reference to Figure 12, Figure 12 is a schematic diagram of the content of a user evaluation report in one embodiment. As shown in Figure 12, the user evaluation report may include keywords related to the user, how other users evaluate the user's image and voice, and the age distribution, constellation, geographic distribution, industry distribution and educational background distribution of the users interested in this user. The user evaluation report may also include friend-making suggestions for the user derived from the above information.
In the above embodiment, a user evaluation report access interface is provided so that users can consult their own dedicated evaluation reports and understand themselves better, which improves the interest and variety of the matching and friend-making process.
With reference to Figure 13, Figure 13 is a timing diagram of the user matching method in a specific embodiment. As shown in Figure 13, a friend-making software client runs on the terminal. The user can trigger a shooting instruction or a voice recording instruction through the client, and the client selects a photo or records a voice and uploads it to the server. The server can classify the uploaded photos and voices respectively to obtain the corresponding class labels, and can feed back to the client the class labels to which the photo and the voice respectively belong. Through the client, the user can rate the photos or voices uploaded by other users, and the client uploads the user's rating data. The server analyzes the user's rating data to obtain a user preference vector that represents the user's preferences: based on the user's rating behavior, the weight of the dimension corresponding to a class label the user likes is increased, and otherwise the weight is decreased. Likewise, for other users who like this user's class labels, the weights of the corresponding dimensions are increased, and otherwise decreased. The user can trigger a matching request through the client, and the client sends the matching request to the server. The server performs matching degree calculation according to the weights corresponding to the class labels the user likes and the weights corresponding to the class labels attributed to the user, obtains the matching users, and pushes them to the user.
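The preference-vector update described for this timing diagram can be sketched as follows. This is a minimal illustration only: the dimension names, the initial weight of 0.5 and the step size are assumptions for the example, not values given in this application.

```python
# Hypothetical sketch of the rating-based preference-vector update:
# raise the weight of a dimension whose class label the user rated
# favorably, lower the others, clamped to [0, 1].

def update_preference(pref, liked_labels, all_labels, step=0.1):
    """pref: {class_label: weight}. Returns an updated copy."""
    new_pref = {}
    for label in all_labels:
        w = pref.get(label, 0.5)          # assumed neutral starting weight
        if label in liked_labels:
            w = min(1.0, w + step)        # liked label: increase the weight
        else:
            w = max(0.0, w - step)        # otherwise: decrease the weight
        new_pref[label] = round(w, 4)
    return new_pref

labels = ["juvenile voice", "mature voice", "sweet voice"]
pref = {l: 0.5 for l in labels}
pref = update_preference(pref, {"juvenile voice"}, labels)
print(pref)  # the "juvenile voice" dimension rises, the others fall
```

Repeating this update over the user's rating history would accumulate a vector whose large dimensions indicate the class labels the user prefers.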
Fig. 2 is a flow diagram of the user matching method in one embodiment. It should be understood that although the steps in the flow diagram of Fig. 2 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 2 may include multiple sub-steps or stages; these sub-steps or stages need not be completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential, but may alternate or interleave with other steps or with at least some of the sub-steps or stages of other steps.
As shown in Figure 14, in one embodiment a user matching apparatus 1400 is provided, including an obtaining module 1401, a determining module 1402 and a matching module 1403, wherein:
the obtaining module 1401 is configured to obtain target biometric data and target user preference data corresponding to a target user, and candidate biometric data and candidate user preference data corresponding to a candidate user;
the determining module 1402 is configured to determine, according to the target biometric data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user;
the determining module 1402 is further configured to determine, according to the candidate biometric data and the target user preference data, a second matching degree of the candidate user with respect to the target user;
the matching module 1403 is configured to determine the candidate user as the matching user corresponding to the target user when the comprehensive matching degree determined from the first matching degree and the second matching degree meets a matching condition.
In one embodiment, the determining module 1402 is further configured to determine, according to the class labels respectively corresponding to the target biometric data and the candidate biometric data, a target label vector corresponding to the target user and a candidate label vector corresponding to the candidate user; and to determine, according to the target user preference data and the candidate user preference data, a target user preference vector corresponding to the target user and a candidate user preference vector corresponding to the candidate user. The determining module 1402 is further configured to determine the first matching degree of the target user with respect to the candidate user according to the similarity between the target label vector and the candidate user preference vector, and to determine the second matching degree of the candidate user with respect to the target user according to the similarity between the candidate label vector and the target user preference vector.
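As a rough illustration of deriving a matching degree from the similarity between a label vector and a preference vector, the sketch below uses cosine similarity over one dimension per class label. The choice of cosine similarity and the example vectors are assumptions; the application itself does not fix a particular similarity measure here.

```python
# Sketch: each vector has one dimension per class label, e.g.
# [juvenile voice, mature voice]. Cosine similarity stands in for
# "the similarity between the label vector and the preference vector".
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

target_label_vec    = [1.0, 0.0]  # target user classified as "juvenile voice"
candidate_pref_vec  = [0.8, 0.2]  # candidate prefers juvenile voices
candidate_label_vec = [0.0, 1.0]  # candidate classified as "mature voice"
target_pref_vec     = [0.3, 0.7]  # target user leans toward mature voices

first_degree  = cosine(target_label_vec, candidate_pref_vec)   # target vs. candidate's taste
second_degree = cosine(candidate_label_vec, target_pref_vec)   # candidate vs. target's taste
```

Both degrees are high here because each user's class label aligns with the other's preference vector, which is exactly the bidirectional condition the apparatus screens for.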
In one embodiment, the determining module 1402 is further configured to classify the target biometric data and the candidate biometric data respectively, obtaining a target category label corresponding to the target user and a candidate category label corresponding to the candidate user; to determine the target label vector corresponding to the target user according to the target category label; and to determine the candidate label vector corresponding to the candidate user according to the candidate category label.
In one embodiment, the target biometric data includes a target user image and the candidate biometric data includes a candidate user image. The determining module 1402 is further configured to input the target user image and the candidate user image separately into a pre-trained image classification model; the image classification model performs image feature extraction on the target user image and the candidate user image respectively, and determines, according to the extracted image features, the target category label corresponding to the target user and the candidate category label corresponding to the candidate user.
In one embodiment, the target biometric data includes target user speech and the candidate biometric data includes candidate user speech. The determining module 1402 is further configured to obtain speech feature vector samples respectively corresponding to more than one class label; to determine, through a Gaussian mixture model, a target speech feature vector corresponding to the target user speech and a candidate speech feature vector corresponding to the candidate user speech; to determine the target category label corresponding to the target user according to the similarity between the target speech feature vector and each speech feature vector sample; and to determine the candidate category label corresponding to the candidate user according to the similarity between the candidate speech feature vector and each speech feature vector sample.
In one embodiment, the determining module 1402 is further configured to obtain user speech samples and the class label corresponding to each user speech sample; to determine, through the Gaussian mixture model, the speech feature vector corresponding to each user speech sample; and to take the mean vector of the speech feature vectors corresponding to the same class label as the speech feature vector sample corresponding to that class label.
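Constructing a per-label speech feature vector sample as the mean of the feature vectors sharing that class label could be sketched as follows. The toy two-dimensional vectors stand in for GMM-derived speech features and are assumptions for illustration.

```python
# Sketch: per-label prototype vector = mean of the feature vectors
# that carry the same class label.

def label_sample_vectors(samples):
    """samples: list of (class_label, feature_vector) pairs.
    Returns {class_label: mean feature vector}."""
    sums, counts = {}, {}
    for label, vec in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x                      # accumulate component-wise
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

samples = [("juvenile voice", [1.0, 2.0]),
           ("juvenile voice", [3.0, 4.0]),
           ("mature voice",   [0.0, 1.0])]
means = label_sample_vectors(samples)
print(means["juvenile voice"])  # [2.0, 3.0]
```

A new speech feature vector would then be compared against each of these prototype vectors to pick its category label.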
In one embodiment, the determining module 1402 is further configured to determine the voice similarity between the target speech feature vector and each speech feature vector sample; when the voice similarities meet a first similarity difference condition, to determine the speech feature vector sample corresponding to the largest of the voice similarities; and to take the class label corresponding to that speech feature vector sample as the target category label corresponding to the target user.
In one embodiment, the determining module 1402 is further configured, when the voice similarities meet a second similarity difference condition, to sort the voice similarities by numerical value; to determine a composite label according to the speech feature vector samples corresponding to a preset number of the voice similarities counted from the head of the sorted sequence; and to take the determined composite label as the target category label corresponding to the target user.
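The two label-selection branches above can be sketched as follows. The gap threshold used to separate the two similarity difference conditions, the preset number of labels, and the "+"-joined form of the composite label are all assumptions for the example.

```python
# Sketch: if the best similarity clearly dominates (first condition),
# take its label; otherwise (second condition) sort the similarities
# and build a composite label from the top-N class labels.

def pick_label(similarities, gap=0.2, top_n=2):
    """similarities: {class_label: voice similarity}."""
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best[1] - runner_up[1] >= gap:              # first similarity condition
        return best[0]
    top = [label for label, _ in ranked[:top_n]]   # second similarity condition
    return "+".join(top)                           # assumed composite form

print(pick_label({"juvenile": 0.9, "sweet": 0.4, "mature": 0.2}))
print(pick_label({"juvenile": 0.55, "sweet": 0.5, "mature": 0.2}))
```

The first call returns a single label because the top similarity dominates; the second returns a composite label because the top two similarities are close.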
In one embodiment, the target biometric data includes a target user image and target user speech; the candidate biometric data includes a candidate user image and candidate user speech; the target user preference data includes target user image preference data and target user voice preference data; and the candidate user preference data includes candidate user image preference data and candidate user voice preference data. The determining module 1402 is further configured to determine a first image matching degree of the target user with respect to the candidate user according to the target user image and the candidate user image preference data; to determine a first voice matching degree of the target user with respect to the candidate user according to the target user speech and the candidate user voice preference data; and to perform a weighted sum of the first image matching degree and the first voice matching degree to obtain the first matching degree of the target user with respect to the candidate user. The determining module 1402 is further configured to determine a second image matching degree of the candidate user with respect to the target user according to the candidate user image and the target user image preference data; to determine a second voice matching degree of the candidate user with respect to the target user according to the candidate user speech and the target user voice preference data; and to perform a weighted sum of the second image matching degree and the second voice matching degree to obtain the second matching degree of the candidate user with respect to the target user.
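The weighted sum combining the image and voice matching degrees could look like the following; the application only states that a weighted sum is computed, so the particular weight values here are assumptions.

```python
# Sketch: first (or second) matching degree as a weighted sum of the
# image matching degree and the voice matching degree.

def combined_degree(image_degree, voice_degree, w_image=0.6, w_voice=0.4):
    # Weights are assumed; w_image + w_voice = 1 keeps the result in [0, 1].
    return w_image * image_degree + w_voice * voice_degree

first_degree = combined_degree(0.9, 0.5)  # ≈ 0.74 with the assumed weights
```

Skewing the weights toward the image or the voice modality lets the system tune which biometric dimension dominates the overall matching degree.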
In one embodiment, the matching module 1403 is further configured to perform a weighted sum of the first matching degree and the second matching degree of the candidate user to obtain the comprehensive matching degree, and to determine the candidate user as the matching user corresponding to the target user when the comprehensive matching degree is greater than or equal to a matching degree threshold.
With the above user matching apparatus, the first matching degree of the target user with respect to the candidate user can be determined according to the target biometric data corresponding to the target user and the candidate user preference data corresponding to the candidate user. Correspondingly, the second matching degree of the candidate user with respect to the target user can be determined according to the candidate biometric data corresponding to the candidate user and the target user preference data corresponding to the target user. According to the first matching degree and the second matching degree, a matching user matched with the target user is screened out from the candidate users. In this way, by mutually matching different users based on their biometric data and their respective user preference data, the target user can be matched with other users whom the target user likes and who like the target user, greatly improving the accuracy of user matching.
As shown in Figure 15, in one embodiment a user matching apparatus 1500 is provided, including an obtaining module 1501 and an output module 1502, wherein:
the obtaining module 1501 is configured to obtain collected biometric data corresponding to a target user;
the obtaining module 1501 is further configured to acquire, when a user matching instruction is generated, the user information of a matching user matched with the target user, where the user preference data corresponding to the matching user matches the biometric data corresponding to the target user, and the biometric data corresponding to the matching user matches the user preference data corresponding to the target user;
the output module 1502 is configured to output the user information of the matching user.
In one embodiment, the obtaining module 1501 includes a display module 15011 and a collection module 15012, wherein:
the display module 15011 is configured to enter an interactive interface and display in the interactive interface an opening entry for the biometric data acquisition interface;
the display module 15011 is further configured to display the biometric data acquisition interface when an opening instruction for the opening entry is obtained;
the collection module 15012 is configured to call, when a trigger action on the biometric data acquisition interface occurs, the corresponding biometric acquisition device to collect the biometric data of the target user.
In one embodiment, the display module 15011 is further configured to enter the interactive interface and present biometric test data in the interactive interface, and the collection module 15012 is further configured to obtain the user preference data resulting from the target user's preference test on the biometric test data.
In one embodiment, the biometric data includes a user image and user voice. The user image preference data corresponding to the matching user matches the user image corresponding to the target user, and the user voice preference data corresponding to the matching user matches the user voice corresponding to the target user; conversely, the user image corresponding to the matching user matches the user image preference data corresponding to the target user, and the user voice corresponding to the matching user matches the user voice preference data corresponding to the target user.
In one embodiment, the obtaining module 1501 is further configured to enter a user matching trigger interface and display in it an operation entry for triggering user matching, and to initiate, when a trigger action corresponding to the operation entry occurs, a matching request corresponding to the target user. The matching request is used to trigger screening for a matching user matched with the target user according to the first matching degree between the target biometric data corresponding to the target user and the candidate user preference data corresponding to the candidate user, and the second matching degree between the target user preference data corresponding to the target user and the candidate biometric data corresponding to the candidate user. The output module 1502 is further configured to enter a matching result display interface and display the user information of the matching user in the matching result display interface.
In one embodiment, the user matching apparatus 1500 further includes a jump module 1503, configured to jump, when a trigger action on the user information occurs, to a session interface in which the target user converses with the matching user.
With reference to Figure 16, in one embodiment the user matching apparatus 1500 further includes a sending module 1504, configured to enter a user evaluation report access interface and, when a trigger action on the user evaluation report access interface occurs, to initiate a user evaluation report access request, the request being used to trigger generation of a user evaluation report according to the biometric data and user preference data corresponding to the target user. The display module 15011 is further configured to display the user evaluation report.
The above user matching apparatus obtains the collected biometric data corresponding to the target user and, when a user matching instruction is generated, acquires and outputs the user information of the matching user matched with the target user, where the user preference data corresponding to the matching user matches the biometric data corresponding to the target user, and the biometric data corresponding to the matching user matches the user preference data corresponding to the target user. In this way, by mutually matching users based on their biometric data and user preference data, the target user can be matched with other users whom the target user likes and who like the target user, greatly improving the accuracy of user matching.
Figure 17 shows an internal structure diagram of a computer device in one embodiment. The computer device may specifically be the terminal 110 or the server 120 in Figure 1. As shown in Figure 17, the computer device includes a processor, a memory and a network interface connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the user matching method. A computer program may also be stored in the internal memory which, when executed by the processor, causes the processor to perform the user matching method.
Those skilled in the art will understand that the structure shown in Figure 17 is only a block diagram of part of the structure relevant to the solution of the present application, and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, the user matching apparatus provided by the present application can be implemented in the form of a computer program that can run on the computer device shown in Figure 17. The memory of the computer device can store the program modules that constitute the user matching apparatus, for example the obtaining module, the determining module and the matching module shown in Figure 14. The computer program constituted by these program modules causes the processor to perform the steps of the user matching method of each embodiment of the present application described in this specification. The same applies, for example, to the obtaining module and the output module shown in Figure 15.
For example, the computer device shown in Figure 17 can perform step S202 through the obtaining module of the user matching apparatus shown in Figure 14, steps S204 and S206 through the determining module, and step S208 through the matching module. Likewise, the computer device shown in Figure 17 can perform steps S402 and S404 through the obtaining module of the user matching apparatus shown in Figure 15, and step S406 through the output module.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above user matching method. Here the steps of the user matching method may be the steps of the user matching method of any of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above user matching method. Here the steps of the user matching method may be the steps of the user matching method of any of the above embodiments.
Those of ordinary skill in the art will understand that all or part of the processes of the methods of the above embodiments can be completed by instructing relevant hardware through a computer program; the program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of the present application. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.
Claims (15)
1. A user matching method, comprising:
obtaining target biometric data and target user preference data corresponding to a target user, and candidate biometric data and candidate user preference data corresponding to a candidate user;
determining, according to the target biometric data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user;
determining, according to the candidate biometric data and the target user preference data, a second matching degree of the candidate user with respect to the target user; and
determining the candidate user as a matching user corresponding to the target user when a comprehensive matching degree determined from the first matching degree and the second matching degree meets a matching condition.
2. The method according to claim 1, wherein the method further comprises:
determining, according to class labels respectively corresponding to the target biometric data and the candidate biometric data, a target label vector corresponding to the target user and a candidate label vector corresponding to the candidate user; and
determining, according to the target user preference data and the candidate user preference data, a target user preference vector corresponding to the target user and a candidate user preference vector corresponding to the candidate user;
wherein the determining, according to the target biometric data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user comprises:
determining the first matching degree of the target user with respect to the candidate user according to the similarity between the target label vector and the candidate user preference vector;
and the determining, according to the candidate biometric data and the target user preference data, a second matching degree of the candidate user with respect to the target user comprises:
determining the second matching degree of the candidate user with respect to the target user according to the similarity between the candidate label vector and the target user preference vector.
3. The method according to claim 2, wherein the determining, according to class labels respectively corresponding to the target biometric data and the candidate biometric data, a target label vector corresponding to the target user and a candidate label vector corresponding to the candidate user comprises:
classifying the target biometric data and the candidate biometric data respectively, obtaining a target category label corresponding to the target user and a candidate category label corresponding to the candidate user;
determining the target label vector corresponding to the target user according to the target category label; and
determining the candidate label vector corresponding to the candidate user according to the candidate category label.
4. The method according to claim 3, wherein the target biometric data comprises a target user image and the candidate biometric data comprises a candidate user image; and the classifying the target biometric data and the candidate biometric data respectively, obtaining a target category label corresponding to the target user and a candidate category label corresponding to the candidate user comprises:
inputting the target user image and the candidate user image separately into a pre-trained image classification model; and
performing, by the image classification model, image feature extraction on the target user image and the candidate user image respectively, and determining, according to the extracted image features, the target category label corresponding to the target user and the candidate category label corresponding to the candidate user.
5. The method according to claim 3, wherein the target biometric data comprises target user speech and the candidate biometric data comprises candidate user speech; and the classifying the target biometric data and the candidate biometric data respectively, obtaining a target category label corresponding to the target user and a candidate category label corresponding to the candidate user comprises:
obtaining speech feature vector samples respectively corresponding to more than one class label;
determining, through a Gaussian mixture model, a target speech feature vector corresponding to the target user speech and a candidate speech feature vector corresponding to the candidate user speech;
determining the target category label corresponding to the target user according to the similarity between the target speech feature vector and each of the speech feature vector samples; and
determining the candidate category label corresponding to the candidate user according to the similarity between the candidate speech feature vector and each of the speech feature vector samples.
6. The method according to claim 5, wherein the obtaining speech feature vector samples respectively corresponding to more than one class label comprises:
obtaining user speech samples and the class label corresponding to each user speech sample;
determining, through the Gaussian mixture model, the speech feature vector corresponding to each user speech sample; and
taking the mean vector of the speech feature vectors corresponding to the same class label as the speech feature vector sample corresponding to that class label.
7. The method according to claim 5, wherein the determining the target category label corresponding to the target user according to the similarity between the target speech feature vector and each of the speech feature vector samples comprises:
determining the voice similarity between the target speech feature vector and each of the speech feature vector samples;
when the voice similarities meet a first similarity difference condition, determining the speech feature vector sample corresponding to the largest of the voice similarities; and
taking the class label corresponding to the determined speech feature vector sample as the target category label corresponding to the target user.
8. The method according to claim 7, wherein the method further comprises:
when the speech similarities satisfy a second similarity difference condition, sorting the speech similarities by numerical value;
determining a composite label according to the speech feature vector samples corresponding to a preset number of the top-ranked speech similarities; and
taking the determined composite label as the target category label corresponding to the target user.
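Claims 7 and 8 together describe a two-branch labelling rule. A minimal sketch, assuming cosine similarity and reading the two "similarity difference conditions" as a margin between the best and second-best scores (the patent fixes neither the metric nor the conditions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(target_vec, sample_vecs, margin=0.1, k=2):
    """Claim 7: a clear winner (best similarity beats the runner-up by at
    least `margin`) yields its single label.
    Claim 8: otherwise, the labels of the top-k samples by similarity are
    joined into a composite label."""
    ranked = sorted(sample_vecs,
                    key=lambda lbl: cosine_similarity(target_vec, sample_vecs[lbl]),
                    reverse=True)
    sims = [cosine_similarity(target_vec, sample_vecs[lbl]) for lbl in ranked]
    if len(ranked) == 1 or sims[0] - sims[1] >= margin:
        return ranked[0]            # first similarity difference condition
    return "+".join(ranked[:k])     # second condition: composite label
```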
9. The method according to claim 1, wherein the target biometric data includes a target user image and a target user voice; the candidate biometric data includes a candidate user image and a candidate user voice; the target user preference data includes target user image preference data and target user voice preference data; and the candidate user preference data includes candidate user image preference data and candidate user voice preference data;
the determining, according to the target biometric data and the candidate user preference data, a first matching degree of the target user with respect to the candidate user comprises:
determining a first image matching degree of the target user with respect to the candidate user according to the target user image and the candidate user image preference data;
determining a first voice matching degree of the target user with respect to the candidate user according to the target user voice and the candidate user voice preference data; and
performing weighted summation on the first image matching degree and the first voice matching degree to obtain the first matching degree of the target user with respect to the candidate user;
the determining, according to the candidate biometric data and the target user preference data, a second matching degree of the candidate user with respect to the target user comprises:
determining a second image matching degree of the candidate user with respect to the target user according to the candidate user image and the target user image preference data;
determining a second voice matching degree of the candidate user with respect to the target user according to the candidate user voice and the target user voice preference data; and
performing weighted summation on the second image matching degree and the second voice matching degree to obtain the second matching degree of the candidate user with respect to the target user.
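The per-direction weighted summation of claim 9 reduces to a one-liner; the weights below are illustrative assumptions, since the patent does not prescribe values:

```python
def modality_matching_degree(image_degree: float, voice_degree: float,
                             w_image: float = 0.6, w_voice: float = 0.4) -> float:
    """Claim 9 sketch: each directional matching degree (first or second) is a
    weighted sum of an image matching degree and a voice matching degree."""
    return w_image * image_degree + w_voice * voice_degree
```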
10. The method according to any one of claims 1 to 9, wherein the determining the candidate user as a matching user corresponding to the target user when a comprehensive matching degree determined by the first matching degree and the second matching degree of the candidate user satisfies a matching condition comprises:
performing weighted summation on the first matching degree and the second matching degree of the candidate user to obtain the comprehensive matching degree; and
determining the candidate user as the matching user corresponding to the target user when the comprehensive matching degree is greater than or equal to a matching degree threshold.
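Claim 10's matching condition can be sketched in the same way; the weights and threshold here are assumed values, not ones fixed by the patent:

```python
def comprehensive_match(first_degree: float, second_degree: float,
                        w_first: float = 0.5, w_second: float = 0.5,
                        threshold: float = 0.6):
    """Claim 10 sketch: the comprehensive matching degree is a weighted sum of
    the two directional matching degrees; the candidate is a matching user
    when that degree reaches the threshold."""
    degree = w_first * first_degree + w_second * second_degree
    return degree >= threshold, degree
```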
11. A user matching method, comprising:
obtaining collected biometric data corresponding to a target user;
obtaining, when a user matching instruction is generated, user information of a matching user matched with the target user, wherein user preference data corresponding to the matching user matches the biometric data corresponding to the target user, and biometric data corresponding to the matching user matches user preference data corresponding to the target user; and
outputting the user information of the matching user.
12. The method according to claim 11, wherein the obtaining collected biometric data corresponding to a target user comprises:
entering a user interface, an opening entry of a biometric data collection interface being displayed in the user interface;
displaying the biometric data collection interface when an opening instruction for the opening entry is obtained; and
invoking a corresponding biometric collection device to collect the biometric data of the target user when a trigger operation on the biometric data collection interface occurs.
13. The method according to claim 11, wherein the method further comprises:
entering a user interface, biometric test data being presented in the user interface; and
obtaining user preference data resulting from a preference test performed by the target user on the biometric test data.
14. The method according to claim 11, wherein the biometric data includes a user image and a user voice; the user preference data corresponding to the matching user matching the biometric data corresponding to the target user comprises:
user image preference data corresponding to the matching user matching the user image corresponding to the target user, and user voice preference data corresponding to the matching user matching the user voice corresponding to the target user; and
the biometric data corresponding to the matching user matching the user preference data corresponding to the target user comprises:
the user image corresponding to the matching user matching user image preference data corresponding to the target user, and the user voice corresponding to the matching user matching user voice preference data corresponding to the target user.
15. The method according to any one of claims 11 to 14, wherein the method further comprises:
entering a user evaluation report viewing interface;
initiating a user evaluation report viewing request when a trigger operation on the user evaluation report viewing interface takes effect, the user evaluation report viewing request triggering generation of a user evaluation report according to the biometric data and the user preference data corresponding to the target user; and
displaying the user evaluation report.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910295641.7A CN110175298B (en) | 2019-04-12 | 2019-04-12 | User matching method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110175298A true CN110175298A (en) | 2019-08-27 |
CN110175298B CN110175298B (en) | 2023-11-14 |
Family
ID=67689978
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910295641.7A Active CN110175298B (en) | 2019-04-12 | 2019-04-12 | User matching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110175298B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765162A (en) * | 2018-05-10 | 2018-11-06 | 阿里巴巴集团控股有限公司 | A kind of finance data output method, device and electronic equipment |
CN110519061A (en) * | 2019-09-02 | 2019-11-29 | 国网电子商务有限公司 | A kind of identity identifying method based on biological characteristic, equipment and system |
CN111209490A (en) * | 2020-04-24 | 2020-05-29 | 深圳市爱聊科技有限公司 | Friend-making recommendation method based on user information, electronic device and storage medium |
CN111540361A (en) * | 2020-03-26 | 2020-08-14 | 北京搜狗科技发展有限公司 | Voice processing method, device and medium |
CN112270927A (en) * | 2020-09-27 | 2021-01-26 | 青岛海尔空调器有限总公司 | Intelligent interaction method based on environment adjusting equipment and intelligent interaction equipment |
CN113014564A (en) * | 2021-02-19 | 2021-06-22 | 提亚有限公司 | User matching method and device, computer equipment and storage medium |
WO2021174699A1 (en) * | 2020-03-04 | 2021-09-10 | 平安科技(深圳)有限公司 | User screening method, apparatus and device, and storage medium |
CN113449754A (en) * | 2020-03-26 | 2021-09-28 | 百度在线网络技术(北京)有限公司 | Method, device, equipment and medium for training and displaying matching model of label |
JP7377583B1 (en) | 2023-07-21 | 2023-11-10 | 淳 山本 | program |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103984775A (en) * | 2014-06-05 | 2014-08-13 | 网易(杭州)网络有限公司 | Friend recommending method and equipment |
CN104601659A (en) * | 2014-12-17 | 2015-05-06 | 深圳市腾讯计算机系统有限公司 | Application recommendation method and system |
CN109284675A (en) * | 2018-08-13 | 2019-01-29 | 阿里巴巴集团控股有限公司 | A kind of recognition methods of user, device and equipment |
CN109408708A (en) * | 2018-09-25 | 2019-03-01 | 平安科技(深圳)有限公司 | Method, apparatus, computer equipment and the storage medium that user recommends |
Also Published As
Publication number | Publication date |
---|---|
CN110175298B (en) | 2023-11-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||