US20200012969A1 - Model training method, apparatus, and device, and data similarity determining method, apparatus, and device

Publication number: US20200012969A1
Authority: US (United States)
Prior art keywords: user data, user, similarity, data pair, features
Legal status: Pending
Application number: US16/577,100
Other languages: English (en)
Inventors: Nan Jiang, Hongwei Zhao
Original assignee: Alibaba Group Holding Ltd.
Current assignee: Advanced New Technologies Co., Ltd.
Application filed by Alibaba Group Holding Ltd.; assigned to Alibaba Group Holding Limited (assignors: Jiang, Nan; Zhao, Hongwei), then to Advantageous New Technologies Co., Ltd., then to Advanced New Technologies Co., Ltd.
Related application: priority to US16/777,659 (issued as US11288599B2)


Classifications

    • G06N20/00 Machine learning; G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]; G06N20/20 Ensemble learning
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G06F18/214 Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06K9/00268; G06K9/00288; G06K9/6256
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/764 Recognition using classification, e.g. of video objects; G06V10/765 Classification using rules for classification or partitioning the feature space
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions; G06V40/168 Feature extraction; face representation; G06V40/172 Classification, e.g. identification
    • G10L15/02 Feature extraction for speech recognition; selection of recognition unit
    • G10L15/063 Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/08 Speech classification or search
    • G10L17/04 Speaker identification or verification: training, enrolment or model building

Definitions

  • the present application relates to the field of computer technologies, and more particularly to a model training method, apparatus, and device, and a data similarity determining method, apparatus, and device.
  • As a novel identity verification method, face recognition provides convenience to users but also creates new risks. For users with very similar looks (such as twins), face recognition may fail to distinguish them reliably, which can lead to account mis-registration and misappropriation of account funds when users cannot be correctly identified. Twins, especially identical twins, are the most typical case of very similar looks; they are closely related to each other and are most likely to engage in behaviors associated with the above risks. Determining which user data in a large data set belongs to twins has therefore become an important problem to be solved.
  • In an existing approach, a recognition model is constructed using pre-selected sample data. Specifically, an investigator conducts a social survey through questionnaires, prize-winning questions, or manual observation, collects user data, and obtains and labels an association or twin relationship between users through manual observation or by asking the people being investigated. Based on the manually labeled association or twin relationship, the recognition model is constructed by using the corresponding user data as sample data.
  • The above recognition model, constructed using a supervised machine learning method, requires manual labeling of the sample data. The labeling process consumes a large amount of manpower and time, making the model training inefficient and leading to high resource consumption.
  • a model training method includes: acquiring a plurality of user data pairs, where data fields of two sets of user data in each user data pair have an identical part; acquiring a user similarity corresponding to each user data pair, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair; determining, according to the user similarity corresponding to each user data pair and the plurality of user data pairs, sample data for training a preset classification model; and training the classification model based on the sample data to obtain a similarity classification model.
  • a data similarity determining method includes: acquiring a to-be-detected user data pair, the to-be-detected user data pair including two sets of to-be-detected user data; performing feature extraction on each set of to-be-detected user data in the to-be-detected user data pair to obtain to-be-detected user features; and determining a similarity between users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair according to the to-be-detected user features and a pre-trained similarity classification model.
  • a model training apparatus includes: a data acquiring module acquiring a plurality of user data pairs, wherein data fields of two sets of user data in each user data pair have an identical part; a similarity acquiring module acquiring a user similarity corresponding to each user data pair, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair; a sample data determining module determining, according to the user similarity corresponding to each user data pair and the plurality of user data pairs, sample data for training a preset classification model; and a model training module training the classification model based on the sample data to obtain a similarity classification model.
  • a data similarity determining apparatus includes: a to-be-detected data acquiring module acquiring a to-be-detected user data pair, the to-be-detected user data pair including two sets of to-be-detected user data; a feature extraction module performing feature extraction on each set of to-be-detected user data in the to-be-detected user data pair to obtain to-be-detected user features; and a similarity determining module determining a similarity between users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair according to the to-be-detected user features and a pre-trained similarity classification model.
  • a model training device includes: a processor; and a memory configured to store instructions, wherein the processor is configured to execute the instructions to: acquire a plurality of user data pairs, wherein data fields of two sets of user data in each user data pair have an identical part; acquire a user similarity corresponding to each user data pair, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair; determine, according to the user similarity corresponding to each user data pair and the plurality of user data pairs, sample data for training a preset classification model; and train the classification model based on the sample data to obtain a similarity classification model.
  • a data similarity determining device includes: a processor; and a memory configured to store instructions, wherein the processor is configured to execute the instructions to: acquire a to-be-detected user data pair, the to-be-detected user data pair including two sets of to-be-detected user data; perform feature extraction on each set of to-be-detected user data in the to-be-detected user data pair, to obtain to-be-detected user features; and determine a similarity between users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair according to the to-be-detected user features and a pre-trained similarity classification model.
  • a plurality of user data pairs may be acquired, wherein data fields of two sets of user data in each user data pair may have an identical part; a user similarity corresponding to each user data pair may be acquired; sample data for training a preset classification model may be determined; and the classification model may be trained based on the sample data to obtain a similarity classification model, so that a similarity between users corresponding to the two sets of to-be-detected user data in a to-be-detected user data pair can be determined according to the similarity classification model.
  • a plurality of user data pairs may be obtained through the same data field, and an association between users corresponding to the two sets of user data in each user data pair may be determined according to the user similarity to obtain sample data for training a preset classification model. Accordingly, the sample data can be obtained without manual labeling, so that rapid training of a model can be implemented, the model training efficiency can be improved, and the resource consumption can be reduced.
  • FIG. 1 is a flowchart of a model training method according to an embodiment.
  • FIG. 2 is a flowchart of a data similarity determining method according to an embodiment.
  • FIG. 3 is a schematic diagram of an interface of a detection application according to an embodiment.
  • FIG. 4 is a flowchart of a data similarity determining method according to an embodiment.
  • FIG. 5 is a schematic diagram of a data similarity determining process according to an embodiment.
  • FIG. 6 is a schematic diagram of a model training apparatus according to an embodiment.
  • FIG. 7 is a schematic diagram of a data similarity determining apparatus according to an embodiment.
  • FIG. 8 is a schematic diagram of a model training device according to an embodiment.
  • FIG. 9 is a schematic diagram of a data similarity determining device according to an embodiment.
  • Embodiments of the specification provide a model training method, apparatus, and device, and a data similarity determining method, apparatus, and device.
  • FIG. 1 is a flowchart of a model training method 100 according to an embodiment.
  • the method 100 may be executed by a terminal device or a server.
  • the terminal device may be a personal computer or the like.
  • the server may be an independent single server or may be a server cluster formed by a plurality of servers.
  • the method 100 may be executed by a server to improve the model training efficiency.
  • the method 100 may include the following steps.
  • In step S102, a plurality of user data pairs are acquired, wherein data fields of two sets of user data in each user data pair have an identical part.
  • each user data pair may include user data of a plurality of different users.
  • For example, the plurality of user data pairs may include a user data pair A and a user data pair B, where the user data pair A includes user data 1 and user data 2, and the user data pair B includes user data 3 and user data 4.
  • the user data may be data related to a user, which may include, for example, identity information such as a name, age, height, address, identity card number, and social security card number of the user, and may also include information such as an interest, a purchased product, travel, etc. of the user.
  • the data field may be a field or character capable of representing identities of users corresponding to the two sets of different user data in the user data pair as well as an association between the users.
  • the data field may include a surname, a preset quantity of digits in the identity card number (e.g., the first 14 digits in the identity card number), a social security card number or other identity numbers capable of determining the user identity or information, and so on.
  • the user data may be acquired in various manners.
  • the user data may be purchased from different users, or may be information entered by the user when the user registers with a website or application, such as information entered when the user registers with Alipay®, or may be user data actively uploaded by the user.
  • the specific manner in which the user data is acquired is not limited in the embodiments of the specification.
  • data fields included in the acquired user data may be compared to find user data having data fields that share an identical part.
  • the user data having data fields that share an identical part may be grouped together to form a user data pair.
  • a plurality of user data pairs may be obtained, in which data fields of user data in each user data pair have an identical part.
  • Considering that one or more digits of the identity card number (e.g., the first 14 digits) can represent the relationship between two users, the data fields may be set as the identity card number and the surname, and information such as the identity card number and the name of the user may be looked up in the user data.
  • the first 14 digits of the identity card number may be used as a basis for determining whether data fields have an identical part. For example, the first 14 digits of the identity card number and surname of each user may be acquired, and the first 14 digits of the identity card numbers and the surnames of different users may be compared.
  • Two sets of user data that have the same surname and the same first 14 digits of the identity card number may be grouped into one user data pair.
  • The user data pair may be stored in the form of a user pair, such as {identity card number of user 1, identity card number of user 2, name of user 1, name of user 2, other data of user 1, other data of user 2} or the like.
  • The statement that the data fields of the two sets of user data have an identical part may mean that some contents of the data fields are identical (e.g., the first 14 digits of 18-digit identity card numbers), or that all contents of the data fields are identical.
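The pairing step described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the record keys `name` and `id_number` are hypothetical, and for simplicity the leading character of the name stands in for the surname.

```python
from itertools import combinations

def build_user_data_pairs(users):
    """Group user records into user data pairs whose data fields share an
    identical part: the same surname (here, the leading name character) and
    the same first 14 digits of the 18-digit identity card number."""
    buckets = {}
    for user in users:
        # data-field key: surname + first 14 digits of the identity card number
        key = (user["name"][0], user["id_number"][:14])
        buckets.setdefault(key, []).append(user)
    pairs = []
    for group in buckets.values():
        # every two records sharing the same key form one user data pair
        pairs.extend(combinations(group, 2))
    return pairs
```

Bucketing by the shared data field first keeps the pairing linear in the number of users, rather than comparing every record against every other.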
  • In step S104, a user similarity corresponding to each user data pair is acquired, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair.
  • the user similarity may be used for representing the degree of similarity between a plurality of users, for example, 99% or 50%.
  • the user similarity may also be represented in other manners.
  • For example, the user similarity may also be represented by labels such as twins and non-twins, or identical twins and fraternal twins.
  • a classification model may be trained, which requires sample data for training the classification model as well as a user similarity corresponding to the sample data.
  • the user similarity may be prestored in a server or a terminal device.
  • the user similarity may be determined in various manners.
  • One example processing method may include the following operations: images of users may be acquired in advance, wherein the images may be uploaded by the users when registering with an application or website, and the users may be users corresponding to the two sets of user data included in each user data pair.
  • the images in each user data pair may be compared, through which the similarity between the users corresponding to the two sets of user data included in the user data pair may be calculated.
  • processing methods such as image preprocessing, image feature extraction, and image feature comparison may be used, which are not limited in the embodiments of the specification.
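One simple way to realize the image-comparison step, assuming face features have already been extracted into numeric vectors (the specification leaves the preprocessing and feature-extraction methods open), is cosine similarity between the two users' feature vectors:

```python
import math

def user_similarity(embedding_a, embedding_b):
    """Illustrative user similarity: cosine similarity between two
    pre-extracted face-image feature vectors, in the range [-1, 1]
    (1.0 for identical directions)."""
    dot = sum(a * b for a, b in zip(embedding_a, embedding_b))
    norm_a = math.sqrt(sum(a * a for a in embedding_a))
    norm_b = math.sqrt(sum(b * b for b in embedding_b))
    return dot / (norm_a * norm_b)
```

Cosine similarity is only one choice; any measure that maps an image pair to a comparable score (e.g., a Euclidean-distance-based score) would serve the same role here.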
  • In step S106, sample data for training a preset classification model is determined according to the user similarity corresponding to each user data pair and the plurality of user data pairs.
  • the classification model may be any classification model, such as a naive Bayesian classification model, a logistic regression classification model, a decision tree classification model, or a support vector machine classification model.
  • When the classification model is used only for determining whether two different users are similar, the classification model may be a binary classification model.
  • the sample data may be data used for training the classification model.
  • the sample data may be the two sets of user data in the user data pair, and may also be data obtained after the above user data is processed in a certain manner. For example, feature extraction may be performed on the above user data to obtain a corresponding user feature, and data of the user feature may be used as the sample data.
  • a similarity threshold (e.g., 80% or 70%) may be set in advance.
  • the user similarity corresponding to each user data pair may be respectively compared with the similarity threshold.
  • the user data pairs corresponding to the user similarities greater than or equal to the similarity threshold may be grouped into one set, the user data pairs corresponding to the user similarities less than the similarity threshold may be grouped into one set, a predetermined quantity (e.g., 40000 or 50000) of user data pairs may be selected from each of the above two sets, and the selected user data pairs may be used as the sample data for training the preset classification model.
  • the sample data for training the preset classification model may be selected in various manners other than the above manner. For example, features of the two sets of user data included in each user data pair may be extracted to obtain corresponding user features, and the user features may be grouped into the above two sets according to the user similarity corresponding to each user data pair and the similarity threshold. Data of the two sets of user features may be used as the sample data for training the preset classification model.
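The threshold-based selection of sample data described above can be sketched as follows. The threshold and per-class quantity defaults mirror the examples in the text (80%, 40000); the tuple layout of the input is an assumption for illustration.

```python
def select_sample_data(pairs_with_similarity, threshold=0.8, per_class=40000):
    """Split user data pairs into a similar set (similarity >= threshold)
    and a dissimilar set (similarity < threshold), then take up to a
    predetermined quantity from each set as labeled sample data.
    `pairs_with_similarity` is a list of (user_data_pair, similarity)."""
    similar = [(pair, 1) for pair, sim in pairs_with_similarity if sim >= threshold]
    dissimilar = [(pair, 0) for pair, sim in pairs_with_similarity if sim < threshold]
    # cap each class so the training set stays balanced
    return similar[:per_class] + dissimilar[:per_class]
```

Taking an equal quantity from each set gives the binary classifier a balanced view of similar and dissimilar pairs.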
  • In step S108, the classification model is trained based on the sample data to obtain a similarity classification model.
  • the similarity classification model may be a model used for determining the degree of similarity between different users.
  • feature extraction may be performed on the two sets of user data in each of the selected user data pairs to obtain a corresponding user feature, and the user features of each user data pair in the sample data may be input to the classification model for calculation.
  • a calculation result may be output.
  • the calculation result may be compared with the user similarity corresponding to the corresponding user data pair to determine whether the two are the same. If the calculation result and the corresponding user similarity are not the same, a related parameter of the classification model may be changed, the user features of the user data pair may be input to the modified classification model for calculation, and whether the calculation result is the same as the user similarity may be determined.
  • the procedure may be repeated until the calculation result and the user similarity are the same. If the two are the same, the above processing procedure may be performed on the next selected user data pair. Finally, if the calculation result obtained after the user features of each user data pair are input to the classification model is the same as the user similarity corresponding to the corresponding user data pair, the obtained classification model may be used as the similarity classification model.
  • the similarity classification model may be obtained.
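The iterative procedure above (compute an output, compare it with the pair's user similarity label, adjust parameters on disagreement, repeat until all outputs match) can be sketched with a simple linear, perceptron-style classifier. This is an illustrative stand-in, not any particular model from the specification:

```python
def train_similarity_classifier(samples, lr=0.1, epochs=100):
    """Error-driven training loop: each sample is a (feature_vector, label)
    pair with label 1 for similar users and 0 for dissimilar users.
    Parameters are adjusted whenever the calculation result differs from
    the label, until all samples agree (or the epoch budget runs out)."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for features, label in samples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            predicted = 1 if score > 0 else 0
            if predicted != label:
                # calculation result differs from the user similarity label:
                # change the model parameters and continue
                errors += 1
                step = lr * (label - predicted)
                weights = [w + step * x for w, x in zip(weights, features)]
                bias += step
        if errors == 0:
            # every calculation result matches its label: model obtained
            break
    return weights, bias
```

In practice any of the classifiers named earlier (naive Bayes, logistic regression, decision tree, SVM) could replace this sketch; the loop structure of "predict, compare, adjust" is what the text describes.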
  • FIG. 2 is a flowchart of a data similarity determining method 200 according to an embodiment.
  • the method 200 may be executed by a terminal device or a server.
  • the terminal device may be a personal computer or the like.
  • the server may be an independent single server or may be a server cluster formed by a plurality of servers.
  • the method 200 may include the following steps.
  • In step S202, a to-be-detected user data pair is acquired.
  • the to-be-detected user data pair may be a user data pair formed by user data of two users to be detected.
  • FIG. 3 is a schematic diagram of an interface 300 of a detection application according to an embodiment.
  • the interface 300 may include a button for uploading data. When the similarity between two different users is to be detected, the button for uploading data may be tapped.
  • the detection application may pop up a prompt box 302 for uploading data.
  • a data uploader may input data of the to-be-detected user data pair in the prompt box 302 , and a confirmation button in the prompt box 302 may be tapped when the input is completed.
  • the detection application may acquire the to-be-detected user data pair input by the data uploader.
  • the detection application may be installed on a terminal device or a server.
  • the detection application may send the to-be-detected user data pair to the server after acquiring the to-be-detected user data pair, so that the server may acquire the to-be-detected user data pair. If the detection application is installed on the server, the server may directly acquire the to-be-detected user data pair from the detection application.
  • In step S204, feature extraction is performed on each set of to-be-detected user data in the to-be-detected user data pair to obtain to-be-detected user features.
  • the to-be-detected user features may include a feature of the user data of the user to be detected.
  • each set of to-be-detected user data in the to-be-detected user data pair may be acquired.
  • a corresponding feature may be extracted from the to-be-detected user data by using a preset feature extraction algorithm, and the extracted feature may be used as a to-be-detected user feature corresponding to the to-be-detected user data.
  • the to-be-detected user feature corresponding to each set of to-be-detected user data in the to-be-detected user data pair may be obtained.
  • the feature extraction algorithm may be any algorithm capable of extracting a predetermined feature from the user data, and specifically may be set according to actual situations.
  • In step S206, a similarity between users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair is determined according to the to-be-detected user features and a pre-trained similarity classification model.
  • The to-be-detected user feature obtained in step S204 may be input to the similarity classification model obtained through steps S102 to S108 for calculation.
  • the result output from the similarity classification model may be the similarity between the users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair.
  • The direct output result of the similarity classification model may be presented as a percentage, for example, 90% or 40%.
  • the direct output result of the similarity classification model may further be set according to actual situations, such as when identical twins and non-identical twins are to be distinguished, or when identical twins and fraternal twins are to be distinguished.
  • a classification threshold may be set. If the direct output result is greater than or equal to the classification threshold, it may be determined that the users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair are identical twins; otherwise, the users may be determined as non-identical twins or fraternal twins.
  • the similarity between the users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair may be rapidly determined according to the pre-trained similarity classification model, thereby improving the efficiency of determining the similarity between the users.
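Putting steps S204 and S206 together, inference on a to-be-detected pair might look like the following sketch. The feature combination (element-wise absolute difference), the sigmoid squashing, and the example weights are all illustrative assumptions, chosen only to match the text's shape of "score, then compare with a classification threshold":

```python
import math

def classify_pair(weights, bias, features_a, features_b, classification_threshold=0.5):
    """Score a to-be-detected user data pair with a trained linear model.
    The two users' feature vectors are combined by element-wise absolute
    difference, scored, squashed to a similarity in [0, 1], and compared
    with a preset classification threshold."""
    combined = [abs(a - b) for a, b in zip(features_a, features_b)]
    score = sum(w * x for w, x in zip(weights, combined)) + bias
    similarity = 1.0 / (1.0 + math.exp(-score))  # percentage-like output
    label = ("identical twins" if similarity >= classification_threshold
             else "non-identical twins")
    return similarity, label
```

With weights learned so that small feature differences score high, identical or near-identical feature vectors land above the threshold and are labeled as identical twins, matching the thresholding behavior described above.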
  • Although both the user data pair and the to-be-detected user data pair in the embodiments include two sets of user data, the model training method and the similarity determining method may also be applied to a user data combination and a to-be-detected user data combination including more than two sets of user data.
  • the embodiments provide a model training method and a similarity determining method, in which a plurality of user data pairs may be acquired, wherein data fields of two sets of user data in each user data pair may have an identical part; a user similarity corresponding to each user data pair may be acquired; sample data for training a preset classification model may be determined; and the classification model may be trained based on the sample data to obtain a similarity classification model, so that a similarity between users corresponding to the two sets of to-be-detected user data in a to-be-detected user data pair can be determined according to the similarity classification model.
  • a plurality of user data pairs may be obtained only through the same data field, and an association between users corresponding to the two sets of user data in each user data pair may be determined according to the user similarity to obtain sample data for training a preset classification model, that is, the sample data can be obtained without manual labeling, so that rapid training of a model can be implemented, the model training efficiency can be improved, and the resource consumption can be reduced.
  • FIG. 4 is a flowchart of a data similarity determining method 400 according to an embodiment.
  • the method 400 may be executed by a server, or jointly by a terminal device and a server.
  • the terminal device may be a personal computer or the like.
  • the server may be an independent single server or may be a server cluster formed by a plurality of servers.
  • the method 400 may be executed by a server.
  • When the method 400 is implemented jointly by a terminal device and a server, reference can be made to the following related contents.
  • an image of a user may be captured on site and compared with a user image of the user that is prestored in a database of a face recognition system, and if the value obtained through comparison reaches a predetermined threshold, it may be determined that the user is the user corresponding to the prestored user image, thus verifying the identity of the user.
  • face recognition may be at risk from twins, especially identical twins, who closely resemble each other. If there is a list including as many twin users as possible, a face recognition strategy may be designed for these users to prevent the above risks. Therefore, a model for effectively identifying twins may be constructed to output a twin list for monitoring face recognition behavior of these users while ensuring high accuracy, thus achieving risk control.
  • for the implementation of constructing the model for effectively identifying twins, reference can be made to the model training method 400, as described below.
  • in step S402, a plurality of user data pairs are acquired, wherein data fields of two sets of user data in each user data pair have an identical part.
  • in step S402, considering that twins generally have the same surname and the same portion of the identity card number, e.g., the same first 14 digits, surnames and the first 14 digits of identity card numbers may be used as data fields for selecting the user data pair.
  • for a specific implementation of step S402, reference can be made to the related content of step S102 in Embodiment 1.
  • the processing of selecting the user data pair may be implemented based on the surnames and the first 14 digits of the identity card numbers.
  • the processing of selecting the user data pair may also be implemented based on other information, for example, the surnames and the social security card numbers, or the first 14 digits of the identity card numbers and the social security card numbers, which is not limited in the embodiments of the specification.
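The selection step above can be sketched as follows: user records are grouped by the shared data fields (here, surname plus the first 14 digits of the identity card number), and every pair within a group becomes a candidate user data pair. The record layout and field names (`surname`, `id_card`) are assumptions for this example, not part of the method.

```python
from itertools import combinations

# Hypothetical user records; "surname" and "id_card" are assumed field names.
users = [
    {"name": "Zhang Jinlong", "surname": "Zhang", "id_card": "110101200001010011"},
    {"name": "Zhang Jinhu",   "surname": "Zhang", "id_card": "110101200001010037"},
    {"name": "Li Wei",        "surname": "Li",    "id_card": "310101199905054321"},
]

def candidate_pairs(users):
    """Group users whose selection fields match, then emit all pairs per group."""
    groups = {}
    for u in users:
        # Selection key: surname plus the first 14 digits of the identity card number.
        key = (u["surname"], u["id_card"][:14])
        groups.setdefault(key, []).append(u)
    pairs = []
    for members in groups.values():
        pairs.extend(combinations(members, 2))
    return pairs

pairs = candidate_pairs(users)
# Only the two "Zhang" users share a surname and the first 14 ID digits,
# so exactly one candidate user data pair is produced.
```

The same grouping works unchanged for other selection fields (e.g., social security card numbers) by swapping the key.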
  • considering that the degree of similarity between the users corresponding to the two sets of user data in the user data pair is determined during model training, the following provides a related processing manner, as described in step S404 and step S406 below.
  • in step S404, biological features of users corresponding to a first user data pair are acquired, wherein the first user data pair is any user data pair in the plurality of user data pairs.
  • the biological features may be physiological and behavioral features of a human body, such as a fingerprint feature, iris feature, facial feature, DNA, or other physiological features, or a voiceprint feature, handwriting feature, keystroke habit, or other behavioral features.
  • a user data pair (referred to herein as the first user data pair) may be arbitrarily selected from the plurality of user data pairs.
  • the user may upload one or more of the biological features of the user to the server.
  • the server may store the biological feature and an identifier of the user in an associated manner.
  • the identifier of the user may be a username or a name of the user input by the user during registration.
  • the information stored in an associated manner in the server may be as shown in Table 1.
  • the server may extract identifiers of users included in the first user data pair, and may acquire corresponding biological features according to the identifiers of the users, thus obtaining the biological features of the users corresponding to the first user data pair.
  • the identifiers of the users included in the first user data pair may be user 2 and user 3, and by querying the corresponding relationships in Table 1, it may be determined that the user 2 corresponds to a biological feature B, and the user 3 corresponds to a biological feature C, that is, the biological features of the users corresponding to the first user data pair are the biological feature B and the biological feature C.
  • in step S406, a user similarity corresponding to the first user data pair is determined according to the biological features of the users corresponding to the first user data pair.
  • similarity calculation may be respectively performed for the obtained biological features, so as to determine the degree of similarity between two corresponding users (i.e., the user similarity).
  • the similarity calculation may be implemented in various manners, for example, according to a Euclidean distance between feature vectors, which is not limited in the embodiments of the specification.
  • a threshold may be set for determining whether users are similar.
  • the threshold may be set to 70.
  • when the user similarity corresponding to two biological features is greater than or equal to 70, it may be determined that the users corresponding to the two sets of user data in the first user data pair are similar; when the user similarity corresponding to the two biological features is less than 70, it may be determined that the users corresponding to the two sets of user data in the first user data pair are not similar.
  • the processing procedure may be performed on other user data pairs in addition to the first user data pair in the plurality of user data pairs, so as to obtain the user similarity corresponding to each user data pair in the plurality of user data pairs.
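The distance-based similarity and the 70-point threshold described above can be sketched as follows. The feature vectors and the particular distance-to-score mapping (`100 / (1 + d/scale)`) are illustrative assumptions; the embodiments do not prescribe a specific mapping, only that a larger distance yields a lower similarity.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def user_similarity(feat_a, feat_b, scale=10.0):
    """Map a Euclidean distance to a 0-100 similarity score.

    The mapping 100 / (1 + d/scale) is monotonically decreasing in the
    distance, matching the stated relationship; the exact formula is an
    assumption for illustration.
    """
    return 100.0 / (1.0 + euclidean_distance(feat_a, feat_b) / scale)

SIMILAR_THRESHOLD = 70  # the example threshold from above

# Hypothetical biological feature vectors for the two users of a pair.
feat_b = [0.1, 0.4, 0.9]
feat_c = [0.2, 0.5, 1.0]

score = user_similarity(feat_b, feat_c)
similar = score >= SIMILAR_THRESHOLD
```

Running the same function over every user data pair yields the per-pair user similarity used in the later steps.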
  • the user similarity may be determined according to the biological features of the users. In some embodiments, the user similarity may be determined in other various manners. Step S 404 and step S 406 are further described below by using an example where the biological features are facial features, and reference can be made to the following step 1 and step 2 for details.
  • in step 1, facial images of the users corresponding to the first user data pair are acquired, wherein the first user data pair is any user data pair in the plurality of user data pairs.
  • a user data pair (referred to herein as the first user data pair) may be arbitrarily selected from the plurality of user data pairs.
  • the user may upload an image including the face of the user to the server.
  • the server may store the image and an identifier of the user in an associated manner.
  • the identifier of the user may be a username or a name of the user input by the user during registration.
  • the information stored in an associated manner in the server may be as shown in Table 2.
  • the server may extract identifiers of users included in the first user data pair, and may acquire corresponding images according to the identifiers of the users, thus obtaining the facial images of the users corresponding to the first user data pair.
  • the identifiers of the users included in the first user data pair may be user 2 and user 3, and by querying the corresponding relationships in the Table 2, it may be determined that the image including the face of the user and corresponding to the user 2 is an image B, and the image including the face of the user and corresponding to the user 3 is an image C, that is, the facial images of the users corresponding to the first user data pair are the image B and the image C.
  • in step 2, feature extraction is performed on the facial images to obtain facial image features, and the user similarity corresponding to the first user data pair is determined according to the facial image features of the users corresponding to the first user data pair.
  • after the facial images of the users corresponding to the first user data pair are obtained through step 1, feature extraction may be respectively performed on the obtained facial images to obtain corresponding facial image features, and a corresponding feature vector may be obtained based on an extracted feature of each facial image; a Euclidean distance between feature vectors of any two facial images may be calculated, and according to the value of the Euclidean distance between the feature vectors, the degree of similarity between the two corresponding users (i.e., the user similarity) may be determined. The larger the value of the Euclidean distance between feature vectors is, the lower the user similarity is; the smaller the value of the Euclidean distance between feature vectors is, the higher the user similarity is.
  • a threshold may be set for determining whether images are similar.
  • the threshold may be set to 70.
  • when the user similarity corresponding to two facial images is greater than or equal to 70, it may be determined that the users corresponding to the two sets of user data in the first user data pair are similar; when the user similarity corresponding to two facial images is less than 70, it may be determined that the users corresponding to the two sets of user data in the first user data pair are not similar.
  • for example, feature extraction may be respectively performed on the image B and the image C obtained in step 1, and corresponding feature vectors may be constructed respectively according to the extracted features to obtain a feature vector of the image B and a feature vector of the image C.
  • a Euclidean distance between the feature vector of the image B and the feature vector of the image C may be calculated, and the user similarity between the user 2 and the user 3 may be determined according to the value of the obtained Euclidean distance.
  • the processing procedure may be performed on other user data pairs in addition to the first user data pair in the plurality of user data pairs, so as to obtain the user similarity corresponding to each user data pair in the plurality of user data pairs.
  • for step S404 and step S406, the following further provides an optional processing manner, and reference can be made to the following step 1 and step 2 for details.
  • in step 1, speech data of the users corresponding to the first user data pair is acquired, wherein the first user data pair is any user data pair in the plurality of user data pairs.
  • a user data pair (referred to herein as the first user data pair) may be arbitrarily selected from the plurality of user data pairs.
  • the user may upload speech data having a predetermined duration (for example, 3 seconds or 5 seconds) and/or including a predetermined speech content (for example, speech of one or more words or one sentence) to the server.
  • the server may store the speech data and an identifier of the user in an associated manner.
  • the server may respectively extract the identifiers of the users included in the first user data pair, and acquire corresponding speech data according to the identifiers of the users, thus obtaining the speech data of the users corresponding to the first user data pair.
  • in step 2, feature extraction is performed on the speech data to obtain speech features, and the user similarity corresponding to the first user data pair is determined according to the speech features of the users corresponding to the first user data pair.
  • after the speech data of the users corresponding to the first user data pair is obtained through step 1 above, feature extraction may be respectively performed on the obtained speech data, and based on an extracted feature of each piece of speech data, the degree of similarity between the two corresponding users (i.e., the user similarity) may be determined.
  • the user similarity may be determined through one-by-one comparison of features; or speech spectrum analysis may be performed for any two pieces of speech data to determine the user similarity.
  • the processing procedure may be performed on other user data pairs in addition to the first user data pair in the plurality of user data pairs, so as to obtain the user similarity corresponding to each user data pair in the plurality of user data pairs.
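One possible "one-by-one comparison of features" hinted at above is a cosine similarity over fixed-length voiceprint vectors. The vectors below stand in for already-extracted speech features; the embodiments do not fix a particular comparison formula, so this is only an illustrative choice.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two voiceprint feature vectors.

    The vectors are assumed to be fixed-length embeddings already produced
    by the feature-extraction step; 1.0 means identical direction.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical voiceprint vectors for the two users of the first user data pair.
v1 = [0.2, 0.8, 0.1]
v2 = [0.25, 0.75, 0.12]

sim = cosine_similarity(v1, v2)  # close to 1.0 for similar voiceprints
```

A speech-spectrum comparison, also mentioned above, would replace this vector comparison but yield the same kind of per-pair similarity value.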
  • in step S408, feature extraction is performed on each user data pair in the plurality of user data pairs to obtain associated user features between the two sets of user data in each user data pair.
  • a user data pair (referred to herein as a third user data pair) may be arbitrarily selected from the plurality of user data pairs, and feature extraction may be respectively performed on two sets of different user data in the third user data pair.
  • the third user data pair may include user data 1 and user data 2, and feature extraction may be respectively performed on the user data 1 and the user data 2.
  • the associated user features between the two sets of user data in the third user data pair may be obtained by comparing features extracted from different user data.
  • the processing procedure may be performed on other user data pairs in addition to the third user data pair in the plurality of user data pairs, so as to obtain the associated user features between the two sets of user data in each user data pair.
  • the user feature may include, but is not limited to, a household registration dimension feature, a name dimension feature, a social feature, an interest feature, or the like.
  • the household registration dimension feature may include a feature of user identity information.
  • the household registration dimension feature may be mainly based on the household registration management system of China.
  • the identity card information included in the household registration may include the date of birth and the household registration place, and the household registration may further include the parents' names and the citizen's address.
  • some citizens' registration information may not be the same as the actual situation. For example, the registered date of birth may be earlier than the real date, two children may respectively follow the parents' surnames, or even the divorce of the parents may lead to the separation of the household registration.
  • the household registration dimension feature may serve as a reference for determining whether the two users are twins. In this way, the association between different users may be determined depending on features such as whether different users corresponding to the user data pair have the same date of birth, the same household registration place, the same parents, or the same current address.
  • the name dimension feature may include a feature of user name information and a feature of a degree of scarcity of a user surname.
  • for the name dimension feature, based on Natural Language Processing (NLP) theory and social experience, generally, if the names of two people look alike (e.g., Zhang Jinlong and Zhang Jinhu) or have a certain semantic relationship (e.g., Zhang Meimei and Zhang Lili), it may be considered that there is an association between the two.
  • the relationship between the names of the two users may be assessed using a dictionary, and the user's registered personal information and demographic data may be used to calculate the degree of scarcity of the surname as a feature.
  • the association between different users may be determined depending on features such as whether different users corresponding to the user data pair have the same surname or have the same length of name, the degree of synonym of the names, whether the combination of the names is a phrase, the degree of scarcity of the surname, or the like.
  • the social feature may include a feature of social relationship information of a user.
  • the social feature may be obtained by extracting a social relationship of the user data pair based on big data. Generally, twins would interact with each other frequently and have a highly overlapping social relationship, such as having the same relatives or classmates.
  • the user data pair may be associated based on a relationship network formed by personal information of users stored in the server and existing data, address books, or the like, to obtain corresponding features.
  • the association between different users corresponding to the user data pair may be determined depending on features such as whether the different users follow each other in a social networking application, whether the different users have transferred funds between each other, whether the different users have saved contact information of the other party to the address book, whether the different users have marked a specific appellation for the other party in the address book, a quantity of common contacts between their address books, or the like.
  • the user feature may further include features of e-commerce, tourism, entertainment, and other dimensions.
  • data related to the features of e-commerce, tourism, entertainment and other dimensions may be acquired from a predetermined database or website. In this way, the association between different users corresponding to the user data pair may be determined depending on features such as a quantity of common shopping records between the different users, whether they have traveled together, whether they have checked in at a hotel at the same time, a similarity between their shopping preferences, whether they have the same delivery address, or the like.
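Putting the dimensions above together, each user data pair can be reduced to one row of associated user features by comparing the two records field by field. The field names and the small feature set below are hypothetical; a real system would extract many more dimensions (household registration, social, e-commerce, and so on).

```python
def associated_features(u1, u2):
    """Pairwise associated user features for one user data pair.

    Field names are illustrative assumptions. For Chinese names, the
    surname is taken as the first character of the name.
    """
    return {
        "same_birth_date": int(u1["birth_date"] == u2["birth_date"]),
        "same_household_place": int(u1["household_place"] == u2["household_place"]),
        "same_surname": int(u1["name"][0] == u2["name"][0]),
        "same_name_length": int(len(u1["name"]) == len(u2["name"])),
        "common_contacts": len(set(u1["contacts"]) & set(u2["contacts"])),
        "same_delivery_address": int(u1["delivery_address"] == u2["delivery_address"]),
    }

# Hypothetical records for the two users of a pair (Zhang Jinlong / Zhang Jinhu).
u1 = {"name": "张金龙", "birth_date": "2000-01-01", "household_place": "Beijing",
      "contacts": ["c1", "c2", "c3"], "delivery_address": "addr-1"}
u2 = {"name": "张金虎", "birth_date": "2000-01-01", "household_place": "Beijing",
      "contacts": ["c2", "c3", "c4"], "delivery_address": "addr-1"}

feats = associated_features(u1, u2)
```

Rows like `feats`, one per user data pair, become the user features that the later sample-selection step filters into positive and negative samples.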
  • the processing of determining the user similarity (including step S 404 and step S 406 ) and the processing of feature extraction (including step S 408 ) may be executed in a chronological order. In some embodiments, the processing of determining the user similarity and the processing of feature extraction may be executed at the same time or in a reversed order, which is not limited in the embodiments of the specification.
  • in step S410, according to the associated user features between the two sets of user data in each user data pair and the user similarity corresponding to each user data pair, the sample data for training the classification model is determined.
  • a threshold may be set in advance. According to the threshold, user data pairs corresponding to the user similarities greater than or equal to the threshold may be selected from the plurality of user data pairs. The associated user features between the two sets of user data in each of the selected user data pairs may be used as user features for training the classification model. The selected user features and the user similarities corresponding to the selected user data pairs may be determined as the sample data for training the classification model.
  • step S 410 may be implemented in various other manners.
  • the following further provides an optional processing manner, including the following step 1 and step 2.
  • in step 1, positive sample features and negative sample features are selected from user features corresponding to the plurality of user data pairs according to the user similarity corresponding to each user data pair and a predetermined similarity threshold.
  • the user similarity between the facial images of the two users may be calculated, so as to determine whether the two users are identical twins.
  • a similarity threshold (e.g., 80% or 70%) may be set. User data pairs corresponding to the user similarities greater than or equal to the similarity threshold may be determined as user data pairs of identical twins, and user data pairs corresponding to the user similarities less than the similarity threshold may be determined as user data pairs of non-identical twins.
  • because identical twins and fraternal twins basically have the same features except in looks, user features corresponding to the user data pairs of identical twins may be used as positive sample features of the similarity classification model, and user features corresponding to the user data pairs of non-identical twins (including fraternal twins and non-twins) may be used as negative sample features.
  • the negative sample features do not mean that all features included therein are user features of fraternal twins.
  • the user features of fraternal twins may account for a small portion in the negative sample features, or the negative sample features may include a small number of positive sample features, which may not affect the training of the classification model but will improve the robustness of the similarity classification model.
  • the positive sample features may include the same quantity of features as the negative sample features.
  • 10000 user data pairs corresponding to the user similarities less than 10% may be selected from the plurality of user data pairs
  • 10000 user data pairs corresponding to the user similarities greater than 10% and less than 20% may be selected from the plurality of user data pairs
  • 10000 user data pairs corresponding to the user similarities greater than 20% and less than 30% may be selected from the plurality of user data pairs
  • 10000 user data pairs corresponding to the user similarities greater than 30% and less than 40% may be selected from the plurality of user data pairs
  • 10000 user data pairs corresponding to the user similarities greater than 40% and less than 50% may be selected from the plurality of user data pairs.
  • User features of the above 50000 user data pairs may be used as the negative sample features.
  • 40000 user data pairs corresponding to the user similarities greater than 80% and less than 90% may be selected from the plurality of user data pairs, and 10000 user data pairs corresponding to the user similarities greater than 90% and less than 100% may be selected from the plurality of user data pairs.
  • User features of the above 50000 user data pairs may be used as the positive sample features.
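The bucket-based selection just described can be sketched as follows, assuming each record is a (features, similarity) pair with the similarity on a 0-100 scale; the per-bucket counts are scaled down from the 10000-per-bucket example for brevity, but the ratio for the positive buckets (4:1) is kept.

```python
import random

random.seed(0)

# Hypothetical records: (user features, user similarity in [0, 100)).
# The modular similarity makes every 10-wide bucket hold exactly 100 records.
pairs = [({"f": i}, i % 100) for i in range(1000)]

def bucket_sample(pairs, low, high, n):
    """Sample up to n pairs whose similarity falls in [low, high)."""
    in_bucket = [p for p in pairs if low <= p[1] < high]
    return random.sample(in_bucket, min(n, len(in_bucket)))

# Negative samples: five equal buckets spanning similarities below 50.
negatives = []
for low in (0, 10, 20, 30, 40):
    negatives += bucket_sample(pairs, low, low + 10, 20)

# Positive samples: high-similarity buckets, matching the negative count
# and the 4:1 split (40000 vs 10000) of the example above.
positives = bucket_sample(pairs, 80, 90, 80) + bucket_sample(pairs, 90, 100, 20)
```

Drawing the negatives across several low-similarity buckets, rather than only from the lowest one, keeps the negative set diverse, which mirrors the selection laid out above.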
  • in step 2, the positive sample features and the negative sample features are used as the sample data for training the classification model.
  • data of the user features and the corresponding user similarities may be combined, and the combined data may be used as the sample data for training the classification model.
  • in step S412, the classification model is trained based on the sample data to obtain a similarity classification model.
  • the similarity classification model may be a binary classifier model, such as a Gradient Boosting Decision Tree (GBDT) binary classifier model.
  • positive sample features may be respectively input to the classification model for calculation.
  • the obtained calculation result may be compared with the user similarity corresponding to the positive sample feature. If the calculated result and the user similarity match each other, the next positive sample feature or negative sample feature may be selected and input to the classification model for calculation.
  • the obtained calculation result may continue to be compared with the user similarity corresponding to the positive sample feature. If the calculated result and the user similarity do not match, a related parameter of the classification model may be adjusted, the positive sample feature may be input to the adjusted classification model for calculation, and the obtained calculation result may be compared with the user similarity corresponding to the positive sample feature again. The procedure may be repeated until the two match each other.
  • all the positive sample features and all the negative sample features may be input to the classification model for calculation, thus training the classification model.
  • the final classification model obtained through training may be used as the similarity classification model.
  • the similarity classification model may be obtained through the above processing procedure.
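As a sketch of this training step, the snippet below fits scikit-learn's `GradientBoostingClassifier`, one concrete GBDT binary classifier, on toy pairwise features. The feature columns and labels are invented for illustration and are not the patent's actual feature set; scikit-learn also handles the iterative fit-and-adjust loop described above internally.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Toy rows of associated user features: [same_birth_date, common_contacts,
# same_delivery_address]; labels: 1 = positive sample (twins), 0 = negative.
X = [[1, 5, 1], [1, 4, 1], [1, 6, 1], [0, 0, 0], [0, 1, 0], [0, 0, 1]] * 5
y = [1, 1, 1, 0, 0, 0] * 5

# GBDT binary classifier as the similarity classification model.
model = GradientBoostingClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Probability that a new user data pair belongs to the positive (twins) class.
prob_twins = model.predict_proba([[1, 5, 1]])[0][1]
```

The trained model plays the role of the similarity classification model: given the associated user features of a to-be-detected pair, it outputs a twin probability.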
  • the similarity classification model may be applied to a face recognition scenario.
  • the similarity classification model can be used for separate risk control.
  • after the similarity classification model is obtained, it may be determined using the similarity classification model whether the to-be-detected users corresponding to the to-be-detected user data pair are twins, as shown in FIG. 5 .
  • in step S414, a to-be-detected user data pair is acquired, similar to or the same as step S202 in Embodiment 1. Reference can be made to the related contents of step S202 for a specific implementation of step S414.
  • in step S416, feature extraction is performed on each set of to-be-detected user data in the to-be-detected user data pair to obtain to-be-detected user features.
  • the features extracted from the to-be-detected user data may include, but are not limited to, a household registration dimension feature, a name dimension feature, a social feature, an interest feature, or the like.
  • in step S418, a similarity is determined between users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair according to the to-be-detected user features and the pre-trained similarity classification model, similar to or the same as step S206 in Embodiment 1.
  • in step S420, to-be-detected users corresponding to the to-be-detected user data pair are determined as twins if the similarity between the users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair is greater than or equal to a predetermined similarity classification threshold.
  • the similarity classification threshold may be set to a large value, for example, 95% or 97%.
  • the to-be-detected user feature may be predicted and scored by using the trained similarity classification model.
  • the scoring process may be performed to calculate a probability that the users corresponding to the corresponding user data pair are twins. For example, if the probability is 80%, the score is 80; and if the probability is 90%, the score is 90. The higher the score is, the higher the probability that the users corresponding to the user data pair may be twins.
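The probability-to-score mapping and the classification threshold described above can be sketched as follows; the `round`-based conversion is an assumption consistent with the stated examples (probability 80% gives score 80, probability 90% gives score 90), and the threshold value 95 is the example figure from above.

```python
def score_pair(prob_twins, threshold=95):
    """Convert the model's twin probability to a 0-100 score and apply the
    similarity classification threshold.

    Returns the score and whether the pair is classified as twins.
    """
    score = round(prob_twins * 100)
    return score, score >= threshold

score1, is_twins1 = score_pair(0.97)  # above the threshold: classified as twins
score2, is_twins2 = score_pair(0.80)  # below the threshold: not classified as twins
```

Setting the threshold high (e.g., 95 or 97) trades recall for precision, which suits the risk-control goal of outputting a high-accuracy twin list.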
  • FIG. 5 is a schematic diagram of a data similarity determining process of the method 400 according to an embodiment.
  • the to-be-detected user data pair includes to-be-detected user data pair 1 and to-be-detected user data pair 2.
  • the model training process and the data similarity determining process are similar to those described above in connection with FIG. 4 and will not be repeated here.
  • the embodiment of the specification provides a data similarity determining method, in which a plurality of user data pairs may be acquired, wherein data fields of two sets of user data in each user data pair have an identical part; a user similarity corresponding to each user data pair may be acquired; sample data for training a preset classification model may be determined; and the classification model may be trained based on the sample data to obtain a similarity classification model, so that a similarity between users corresponding to the two sets of to-be-detected user data in a to-be-detected user data pair can be determined subsequently according to the similarity classification model.
  • a plurality of user data pairs may be obtained only through the same data field, and an association between users corresponding to the two sets of user data in each user data pair may be determined according to the user similarity to obtain sample data for training a preset classification model, that is, the sample data can be obtained without manual labeling, so that rapid training of a model can be implemented, the model training efficiency can be improved, and the resource consumption can be reduced.
  • FIG. 6 is a schematic diagram of a model training apparatus 600 according to an embodiment.
  • the apparatus 600 corresponds to the model training method described above.
  • the apparatus 600 may be disposed in a server.
  • the apparatus 600 may include: a data acquiring module 601 , a similarity acquiring module 602 , a sample data determining module 603 , and a model training module 604 .
  • the data acquiring module 601 may be configured to acquire a plurality of user data pairs, wherein data fields of two sets of user data in each user data pair have an identical part.
  • the similarity acquiring module 602 may be configured to acquire a user similarity corresponding to each user data pair, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair.
  • the sample data determining module 603 may be configured to determine, according to the user similarity corresponding to each user data pair and the plurality of user data pairs, sample data for training a preset classification model.
  • the model training module 604 may be configured to train the classification model based on the sample data to obtain a similarity classification model.
  • the similarity acquiring module 602 may include: a biological feature acquiring unit (not shown) configured to acquire biological features of users corresponding to a first user data pair, wherein the first user data pair is any user data pair in the plurality of user data pairs; and a similarity acquiring unit (not shown) configured to determine a user similarity corresponding to the first user data pair according to the biological features of the users corresponding to the first user data pair.
  • the biological feature may include a facial image feature.
  • the biological feature acquiring unit may be configured to acquire facial images of the users corresponding to the first user data pair, and perform feature extraction on the facial images to obtain facial image features.
  • the similarity acquiring unit may be configured to determine the user similarity corresponding to the first user data pair according to the facial image features of the users corresponding to the first user data pair.
  • the biological feature may include a speech feature.
  • the biological feature acquiring unit may be configured to acquire speech data of the users corresponding to the first user data pair; and perform feature extraction on the speech data to obtain speech features.
  • the similarity acquiring unit may be configured to determine the user similarity corresponding to the first user data pair according to the speech features of the users corresponding to the first user data pair.
  • the sample data determining module 603 may include: a feature extraction unit (not shown) configured to perform feature extraction on each user data pair in the plurality of user data pairs to obtain associated user features between the two sets of user data in each user data pair; and a sample data determining unit (not shown) configured to determine, according to the associated user features between the two sets of user data in each user data pair and the user similarity corresponding to each user data pair, the sample data for training the classification model.
  • a feature extraction unit (not shown) configured to perform feature extraction on each user data pair in the plurality of user data pairs to obtain associated user features between the two sets of user data in each user data pair
  • a sample data determining unit (not shown) configured to determine, according to the associated user features between the two sets of user data in each user data pair and the user similarity corresponding to each user data pair, the sample data for training the classification model.
  • the sample data determining unit may be configured to select positive sample features and negative sample features from user features corresponding to the plurality of user data pairs according to the user similarity corresponding to each user data pair and a predetermined similarity threshold; and use the positive sample features and the negative sample features as the sample data for training the classification model.
  • the user feature may include at least one of a household registration dimension feature, a name dimension feature, a social feature, or an interest feature, wherein the household registration dimension feature may include a feature of user identity information, the name dimension feature may include a feature of user name information and a feature of a degree of scarcity of a user surname, and the social feature may include a feature of social relationship information of a user.
  • the positive sample features may include the same quantity of features as the negative sample features.
  • the similarity classification model may be a binary classifier model.
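To make the training step concrete, here is a minimal sketch in which the binary classifier is logistic regression trained by per-sample gradient descent. The model family, learning rate, and epoch count are assumptions; the specification only requires that the trained similarity classification model be a binary classifier.

```python
import math

# Minimal logistic-regression sketch of training a binary similarity
# classifier on the positive/negative sample features.
def train_binary_classifier(samples, labels, lr=0.5, epochs=500):
    dim = len(samples[0])
    weights = [0.0] * dim
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(w * xi for w, xi in zip(weights, x)) + bias
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            err = p - y                          # gradient of log-loss w.r.t. z
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias


def predict_similarity(model, x):
    """Score a pair's associated features; output is in (0, 1)."""
    weights, bias = model
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))


# One positive-sample feature vector (label 1) and one negative (label 0).
model = train_binary_classifier([[0.9, 0.8], [0.1, 0.2]], [1, 0])
```

In practice any binary classifier (gradient-boosted trees, a neural network, etc.) could fill this role; logistic regression is used here only to keep the sketch self-contained.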
  • the embodiment of the specification provides a model training apparatus, in which a plurality of user data pairs may be acquired, wherein data fields of two sets of user data in each user data pair have an identical part; a user similarity corresponding to each user data pair may be acquired; sample data for training a preset classification model may be determined; and the classification model may be trained based on the sample data to obtain a similarity classification model, so that a similarity between users corresponding to the two sets of to-be-detected user data in a to-be-detected user data pair can be determined according to the similarity classification model.
  • a plurality of user data pairs may be obtained through the same data field, and an association between users corresponding to the two sets of user data in each user data pair may be determined according to the user similarity to obtain sample data for training a preset classification model, that is, the sample data can be obtained without manual labeling, so that rapid training of a model can be implemented, the model training efficiency can be improved, and the resource consumption can be reduced.
  • FIG. 7 is a schematic diagram of a data similarity determining apparatus 700 according to an embodiment.
  • the apparatus 700 corresponds to the data similarity determining method described above.
  • the apparatus 700 may include: a to-be-detected data acquiring module 701 , a feature extraction module 702 , and a similarity determining module 703 .
  • the to-be-detected data acquiring module 701 may be configured to acquire a to-be-detected user data pair.
  • the feature extraction module 702 may be configured to perform feature extraction on each set of to-be-detected user data in the to-be-detected user data pair to obtain to-be-detected user features.
  • the similarity determining module 703 may be configured to determine a similarity between users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair according to the to-be-detected user features and a pre-trained similarity classification model.
  • the apparatus 700 may further include: a similarity classification module (not shown) configured to determine to-be-detected users corresponding to the to-be-detected user data pair as twins if the similarity between the users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair is greater than or equal to a predetermined similarity classification threshold.
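The twin-determination step above reduces to a threshold comparison on the model's similarity score. The sketch below assumes a hypothetical classification threshold of 0.9, since the specification leaves the predetermined similarity classification threshold unspecified.

```python
# Hypothetical sketch of the twin-determination step: compare the
# similarity score produced by the classification model against a
# predetermined similarity classification threshold.
TWIN_THRESHOLD = 0.9  # assumed value


def classify_as_twins(similarity_score, threshold=TWIN_THRESHOLD):
    """Return True when the users of a to-be-detected pair are judged twins."""
    return similarity_score >= threshold
```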
  • the embodiment of the specification provides a data similarity determining apparatus, in which a plurality of user data pairs may be acquired, wherein data fields of two sets of user data in each user data pair have an identical part; a user similarity corresponding to each user data pair may be acquired; sample data for training a preset classification model may be determined; and the classification model may be trained based on the sample data to obtain a similarity classification model, so that a similarity between users corresponding to the two sets of to-be-detected user data in a to-be-detected user data pair can be determined according to the similarity classification model.
  • a plurality of user data pairs may be obtained only through the same data field, and an association between users corresponding to the two sets of user data in each user data pair may be determined according to the user similarity to obtain sample data for training a preset classification model, that is, the sample data can be obtained without manual labeling, so that rapid training of a model can be implemented, the model training efficiency can be improved, and the resource consumption can be reduced.
  • FIG. 8 is a schematic diagram of a model training device 800 according to an embodiment.
  • the model training device 800 may be the server, the terminal device, or the like provided in the foregoing embodiments.
  • the model training device 800 may vary considerably in configuration or performance, and may include one or more processors 801 and a memory 802.
  • the memory 802 may store one or more applications or data.
  • the memory 802 may be a transitory or permanent storage.
  • the application stored in the memory 802 may include one or more modules (not shown), wherein each module may include a series of computer executable instructions.
  • the one or more processors 801 may be configured to communicate with the memory 802 , and execute, on the model training device 800 , the series of computer executable instructions in the memory 802 .
  • the model training device may further include one or more power supplies 803 , one or more wired or wireless network interfaces 804 , one or more input/output interfaces 805 , and one or more keyboards 806 .
  • the one or more processors 801 may be configured to execute the instructions to perform: acquiring a plurality of user data pairs, wherein data fields of two sets of user data in each user data pair have an identical part; acquiring a user similarity corresponding to each user data pair, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair; determining, according to the user similarity corresponding to each user data pair and the plurality of user data pairs, sample data for training a preset classification model; and training the classification model based on the sample data to obtain a similarity classification model.
  • the one or more processors 801 may be configured to execute the instructions to perform: acquiring biological features of users corresponding to a first user data pair, wherein the first user data pair is any user data pair in the plurality of user data pairs; and determining a user similarity corresponding to the first user data pair according to the biological features of the users corresponding to the first user data pair.
  • the biological feature may include a facial image feature.
  • the acquiring biological features of users corresponding to a first user data pair may include: acquiring facial images of the users corresponding to the first user data pair; and performing feature extraction on the facial images to obtain facial image features.
  • the determining a user similarity corresponding to the first user data pair according to the biological features of the users corresponding to the first user data pair may include: determining the user similarity corresponding to the first user data pair according to the facial image features of the users corresponding to the first user data pair.
  • the biological feature may include a speech feature.
  • the acquiring biological features of users corresponding to a first user data pair may include: acquiring speech data of the users corresponding to the first user data pair; and performing feature extraction on the speech data to obtain speech features.
  • the determining a user similarity corresponding to the first user data pair according to the biological features of the users corresponding to the first user data pair may include: determining the user similarity corresponding to the first user data pair according to the speech features of the users corresponding to the first user data pair.
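One common way to score the similarity of two biometric feature vectors (facial-image or speech embeddings) is cosine similarity. The specification only requires *a* user similarity derived from the biological features; cosine similarity is an assumed choice here, shown for illustration.

```python
import math

# Illustrative sketch: cosine similarity between two biometric feature
# vectors, e.g. facial-image features or speech features of the two users
# in a user data pair. Assumes non-zero vectors of equal length.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


sim = cosine_similarity([1.0, 0.0], [1.0, 0.0])  # identical feature vectors
```

A score near 1.0 indicates nearly identical features (e.g. candidate twins), while a score near 0.0 indicates unrelated users.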
  • the one or more processors 801 may be configured to execute the instructions to perform: performing feature extraction on each user data pair in the plurality of user data pairs to obtain associated user features between the two sets of user data in each user data pair; and determining, according to the associated user features between the two sets of user data in each user data pair and the user similarity corresponding to each user data pair, the sample data for training the classification model.
  • the one or more processors 801 may be configured to execute the instructions to perform: selecting positive sample features and negative sample features from user features corresponding to the plurality of user data pairs according to the user similarity corresponding to each user data pair and a predetermined similarity threshold; and using the positive sample features and the negative sample features as the sample data for training the classification model.
  • the user feature may include at least one of a household registration dimension feature, a name dimension feature, a social feature, or an interest feature, wherein the household registration dimension feature may include a feature of user identity information, the name dimension feature may include a feature of user name information and a feature of a degree of scarcity of a user surname, and the social feature may include a feature of social relationship information of a user.
  • the positive sample features include the same quantity of features as the negative sample features.
  • the similarity classification model is a binary classifier model.
  • the embodiment of the specification provides a model training device, in which a plurality of user data pairs may be acquired, wherein data fields of two sets of user data in each user data pair have an identical part; a user similarity corresponding to each user data pair may be acquired; sample data for training a preset classification model may be determined; and the classification model may be trained based on the sample data to obtain a similarity classification model, so that a similarity between users corresponding to the two sets of to-be-detected user data in a to-be-detected user data pair can be determined according to the similarity classification model.
  • a plurality of user data pairs may be obtained only through the same data field, and an association between users corresponding to the two sets of user data in each user data pair may be determined according to the user similarity to obtain sample data for training a preset classification model, that is, the sample data can be obtained without manual labeling, so that rapid training of a model can be implemented, the model training efficiency can be improved, and the resource consumption can be reduced.
  • FIG. 9 is a schematic diagram of a data similarity determining device 900 according to an embodiment.
  • the data similarity determining device 900 may be the server, the terminal device, or the like provided in the foregoing embodiments.
  • the data similarity determining device 900 may vary considerably in configuration or performance, and may include one or more processors 901 and a memory 902.
  • the memory 902 may store one or more applications or data.
  • the memory 902 may be a transitory or permanent storage.
  • the application stored in the memory 902 may include one or more modules (not shown), wherein each module may include a series of computer executable instructions in the data similarity determining device.
  • the one or more processors 901 may be configured to communicate with the memory 902 , and execute, on the data similarity determining device 900 , the series of computer executable instructions in the memory 902 .
  • the data similarity determining device 900 may further include one or more power supplies 903 , one or more wired or wireless network interfaces 904 , one or more input/output interfaces 905 , and one or more keyboards 906 .
  • the one or more processors 901 may be configured to execute the instructions to perform: acquiring a to-be-detected user data pair; performing feature extraction on each set of to-be-detected user data in the to-be-detected user data pair to obtain to-be-detected user features; and determining a similarity between users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair according to the to-be-detected user features and a pre-trained similarity classification model.
  • the one or more processors 901 may be configured to execute the instructions to perform: determining to-be-detected users corresponding to the to-be-detected user data pair as twins if the similarity between the users corresponding to the two sets of to-be-detected user data in the to-be-detected user data pair is greater than or equal to a predetermined similarity classification threshold.
  • the embodiment of the specification provides a data similarity determining device, in which a plurality of user data pairs may be acquired, wherein data fields of two sets of user data in each user data pair have an identical part; a user similarity corresponding to each user data pair may be acquired; sample data for training a preset classification model may be determined; and the classification model may be trained based on the sample data to obtain a similarity classification model, so that a similarity between users corresponding to the two sets of to-be-detected user data in a to-be-detected user data pair can be determined according to the similarity classification model.
  • a plurality of user data pairs may be obtained through the same data field, and an association between users corresponding to the two sets of user data in each user data pair may be determined according to the user similarity to obtain sample data for training a preset classification model, that is, the sample data can be obtained without manual labeling, so that rapid training of a model can be implemented, the model training efficiency can be improved, and the resource consumption can be reduced.
  • PLD Programmable Logic Device
  • FPGA Field Programmable Gate Array
  • the programming may be implemented by using logic compiler software, instead of manually manufacturing an integrated circuit chip.
  • the logic compiler software may be similar to a software compiler used for developing and writing a program, and the source code before compiling may be written in a specific programming language, such as a Hardware Description Language (HDL).
  • HDL: Hardware Description Language
  • ABEL: Advanced Boolean Expression Language
  • AHDL: Altera Hardware Description Language
  • CUPL: Cornell University Programming Language
  • HDCal
  • JHDL: Java Hardware Description Language
  • Lava
  • Lola
  • MyHDL
  • PALASM
  • RHDL: Ruby Hardware Description Language
  • VHDL: Very-High-Speed Integrated Circuit Hardware Description Language
  • a controller may be implemented in any suitable manner in the above described devices.
  • the controller may be in the form of, for example, a microprocessor or a processor and a computer readable medium storing a computer readable program code (for example, software or firmware) executable by the (micro)processor, a logic gate, a switch, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded micro-controller.
  • the controller may include, but is not limited to, the following micro-controllers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320.
  • a memory controller may also be implemented as a part of control logic of a memory.
  • a controller may be implemented using pure computer-readable program code, and in addition, the method steps may be logically programmed to enable the controller to implement the same functions in the form of a logic gate, a switch, an application-specific integrated circuit, a programmable logic controller, or an embedded microcontroller. Therefore, this type of controller may be considered a hardware component, and the apparatuses included therein for implementing various functions may also be considered structures inside the hardware component. Alternatively, the apparatuses for implementing various functions may even be considered both software modules for implementing the method and structures inside the hardware component.
  • the device, apparatus, module or unit illustrated in the above embodiments may be specifically implemented by using a computer chip or a material object, or a product having a certain function.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • the embodiments of the specification may be a method, a device, or a computer program product. Accordingly, the embodiments may be in the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
  • the computer program product may be implemented on one or more computer-usable storage media (including but not limited to magnetic disk memories, CD-ROMs, optical memories, etc.) including computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable storage medium that may guide a computer or another programmable data processing device to work in a specified manner, so that the instructions stored in the computer-readable storage medium may generate a product including an instruction apparatus, wherein the instruction apparatus implements functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams.
  • These computer program instructions may also be loaded into a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or another programmable data processing device to generate processing implemented by a computer, and instructions executed on the computer or another programmable data processing device may provide steps for implementing functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams.
  • the computer-readable storage medium may include volatile and non-volatile, mobile and non-mobile media, and can use any method or technology to store information.
  • the information may be a computer readable instruction, a data structure, a module of a program, or other data.
  • Examples of the computer-readable storage media of the computer may include, but are not limited to, a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of RAMs, a ROM, an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technologies, a compact disk read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a cassette tape, a tape disk storage or other magnetic storage devices, or any other non-transmission media, which may be used for storing computer accessible information.
  • the computer-readable storage medium does not include transitory computer-readable media (transitory media), such as a modulated data signal or a carrier wave.
  • the above described methods may be implemented by computer-executable instructions executed by a computer, for example, a program module.
  • the program module includes a routine, a program, an object, an assembly, a data structure, and the like used for executing a specific task or implementing a specific abstract data type.
  • the above described methods may also be implemented in distributed computing environments, in which a task may be executed by remote processing devices connected through a communications network.
  • the program module may be located in local and remote computer storage media including a storage device.
  • the embodiments in the specification are described progressively; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment emphasizes an aspect different from the other embodiments.
  • the device/apparatus embodiment is basically similar to the method embodiment, and thus is described in a relatively simple manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Complex Calculations (AREA)
US16/577,100 2017-07-19 2019-09-20 Model training method, apparatus, and device, and data similarity determining method, apparatus, and device Pending US20200012969A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/777,659 US11288599B2 (en) 2017-07-19 2020-01-30 Model training method, apparatus, and device, and data similarity determining method, apparatus, and device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201710592780.7 2017-07-19
CN201710592780.7A CN107609461A (zh) 2017-07-19 2017-07-19 模型的训练方法、数据相似度的确定方法、装置及设备
PCT/CN2018/096252 WO2019015641A1 (zh) 2017-07-19 2018-07-19 模型的训练方法、数据相似度的确定方法、装置及设备

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096252 Continuation WO2019015641A1 (zh) 2017-07-19 2018-07-19 模型的训练方法、数据相似度的确定方法、装置及设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/777,659 Continuation US11288599B2 (en) 2017-07-19 2020-01-30 Model training method, apparatus, and device, and data similarity determining method, apparatus, and device

Publications (1)

Publication Number Publication Date
US20200012969A1 true US20200012969A1 (en) 2020-01-09

Family

ID=61059789

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/577,100 Pending US20200012969A1 (en) 2017-07-19 2019-09-20 Model training method, apparatus, and device, and data similarity determining method, apparatus, and device
US16/777,659 Active US11288599B2 (en) 2017-07-19 2020-01-30 Model training method, apparatus, and device, and data similarity determining method, apparatus, and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/777,659 Active US11288599B2 (en) 2017-07-19 2020-01-30 Model training method, apparatus, and device, and data similarity determining method, apparatus, and device

Country Status (10)

Country Link
US (2) US20200012969A1 (de)
EP (1) EP3611657A4 (de)
JP (1) JP6883661B2 (de)
KR (1) KR102349908B1 (de)
CN (1) CN107609461A (de)
MY (1) MY201891A (de)
PH (1) PH12019501851A1 (de)
SG (1) SG11201907257SA (de)
TW (1) TWI735782B (de)
WO (1) WO2019015641A1 (de)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210019553A1 (en) * 2018-03-30 2021-01-21 Nec Corporation Information processing apparatus, control method, and program
US20220114491A1 (en) * 2020-10-09 2022-04-14 AquaSys LLC Anonymous training of a learning model
CN114756677A (zh) * 2022-03-21 2022-07-15 马上消费金融股份有限公司 样本生成方法、文本分类模型的训练方法及文本分类方法
US11455425B2 (en) * 2020-10-27 2022-09-27 Alipay (Hangzhou) Information Technology Co., Ltd. Methods, apparatuses, and systems for updating service model based on privacy protection
US11526552B2 (en) * 2020-08-18 2022-12-13 Lyqness Inc. Systems and methods of optimizing the use of user questions to identify similarities among a large network of users

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609461A (zh) * 2017-07-19 2018-01-19 阿里巴巴集团控股有限公司 模型的训练方法、数据相似度的确定方法、装置及设备
CN108399389B (zh) * 2018-03-01 2020-05-08 路志宏 机器视觉的多机监管系统、方法及客户机、服务器、存储介质
CN108427767B (zh) * 2018-03-28 2020-09-29 广州市创新互联网教育研究院 一种知识主题和资源文件的关联方法
CN108732559B (zh) * 2018-03-30 2021-09-24 北京邮电大学 一种定位方法、装置、电子设备及可读存储介质
CN111027994B (zh) * 2018-10-09 2023-08-01 百度在线网络技术(北京)有限公司 相似对象确定方法、装置、设备和介质
CN111274811B (zh) * 2018-11-19 2023-04-18 阿里巴巴集团控股有限公司 地址文本相似度确定方法以及地址搜索方法
CN111325228B (zh) * 2018-12-17 2021-04-06 上海游昆信息技术有限公司 一种模型训练方法及装置
CN109934275B (zh) * 2019-03-05 2021-12-14 深圳市商汤科技有限公司 图像处理方法及装置、电子设备和存储介质
CN111797878A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 数据处理方法、装置、存储介质及电子设备
CN111797869A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 模型训练方法、装置、存储介质及电子设备
CN110163655B (zh) * 2019-04-15 2024-03-05 中国平安人寿保险股份有限公司 基于梯度提升树的坐席分配方法、装置、设备及存储介质
CN110543636B (zh) * 2019-09-06 2023-05-23 出门问问创新科技有限公司 一种对话系统的训练数据选择方法
CN112488140A (zh) * 2019-09-12 2021-03-12 北京国双科技有限公司 一种数据关联方法及装置
CN110717484B (zh) * 2019-10-11 2021-07-27 支付宝(杭州)信息技术有限公司 一种图像处理方法和系统
CN110852881B (zh) * 2019-10-14 2021-04-27 支付宝(杭州)信息技术有限公司 风险账户识别方法、装置、电子设备及介质
CN110837869A (zh) * 2019-11-11 2020-02-25 深圳市商汤科技有限公司 图像分类模型训练方法、图像处理方法及装置
CN110742595A (zh) * 2019-11-12 2020-02-04 中润普达(十堰)大数据中心有限公司 基于认知云系统的异常血压监护系统
CN111046910A (zh) * 2019-11-12 2020-04-21 北京三快在线科技有限公司 图像分类、关系网络模型训练、图像标注方法及装置
CN111382403A (zh) * 2020-03-17 2020-07-07 同盾控股有限公司 用户行为识别模型的训练方法、装置、设备及存储介质
CN111739517B (zh) * 2020-07-01 2024-01-30 腾讯科技(深圳)有限公司 语音识别方法、装置、计算机设备及介质
CN112347320A (zh) * 2020-11-05 2021-02-09 杭州数梦工场科技有限公司 数据表字段的关联字段推荐方法及装置
CN112269937B (zh) * 2020-11-16 2024-02-02 加和(北京)信息科技有限公司 一种计算用户相似度的方法、系统及装置
CN112988845B (zh) * 2021-04-01 2021-11-16 湖南机械之家信息科技有限公司 在大数据业务场景下的数据信息处理方法及信息服务平台
EP4099142A4 (de) 2021-04-19 2023-07-05 Samsung Electronics Co., Ltd. Elektronische vorrichtung und betriebsverfahren dafür
CN113516165B (zh) * 2021-05-07 2023-10-10 北京惠朗时代科技有限公司 一种基于图像金字塔匹配后验的客户满意度判别方法
CN113408208B (zh) * 2021-06-25 2023-06-09 成都欧珀通信科技有限公司 模型训练方法、信息提取方法、相关装置及存储介质
CN115497633B (zh) * 2022-10-19 2024-01-30 联仁健康医疗大数据科技股份有限公司 一种数据处理方法、装置、设备及存储介质
CN115604027B (zh) * 2022-11-28 2023-03-14 中南大学 网络指纹识别模型训练方法、识别方法、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060117021A1 (en) * 2004-11-29 2006-06-01 Epic Systems Corporation Shared account information method and apparatus
US20110047163A1 (en) * 2009-08-24 2011-02-24 Google Inc. Relevance-Based Image Selection
US20120142428A1 (en) * 2010-12-01 2012-06-07 Taktak Labs, Llc Systems and methods for online, real-time, social gaming
US20150177842A1 (en) * 2013-12-23 2015-06-25 Yuliya Rudenko 3D Gesture Based User Authorization and Device Control Methods
US20150278254A1 (en) * 2014-03-31 2015-10-01 Anurag Bhardwaj Image-based retrieval and searching
US20170069062A1 (en) * 2015-09-08 2017-03-09 The Johns Hopkins University Small maritime target detector
US9846885B1 (en) * 2014-04-30 2017-12-19 Intuit Inc. Method and system for comparing commercial entities based on purchase patterns

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7321854B2 (en) * 2002-09-19 2008-01-22 The Penn State Research Foundation Prosody based audio/visual co-analysis for co-verbal gesture recognition
US7308581B1 (en) * 2003-03-07 2007-12-11 Traffic101.Com Systems and methods for online identity verification
KR20070105826A (ko) * 2006-04-27 2007-10-31 삼성전자주식회사 공개키 인증시스템 및 그 인증방법
EP2038766A4 (de) * 2006-07-12 2009-08-05 Arbitron Inc Verfahren und systeme zur bestätigungs- und bonuskonformität
US20080106370A1 (en) * 2006-11-02 2008-05-08 Viking Access Systems, Llc System and method for speech-recognition facilitated communication to monitor and control access to premises
US7696427B2 (en) * 2006-12-01 2010-04-13 Oracle America, Inc. Method and system for recommending music
US20170330029A1 (en) * 2010-06-07 2017-11-16 Affectiva, Inc. Computer based convolutional processing for image analysis
TWI437501B (zh) * 2010-11-26 2014-05-11 Egis Technology Inc 基於生物特徵之身分驗證裝置及其方法
CN102129574B (zh) * 2011-03-18 2016-12-07 广东中星电子有限公司 一种人脸认证方法及系统
WO2012139269A1 (en) * 2011-04-11 2012-10-18 Intel Corporation Tracking and recognition of faces using selected region classification
CN102663370B (zh) * 2012-04-23 2013-10-09 苏州大学 一种人脸识别的方法及系统
US20140063237A1 (en) * 2012-09-03 2014-03-06 Transportation Security Enterprises, Inc.(TSE), a Delaware corporation System and method for anonymous object identifier generation and usage for tracking
US20140250523A1 (en) * 2012-10-11 2014-09-04 Carnegie Mellon University Continuous Authentication, and Methods, Systems, and Software Therefor
JP5284530B2 (ja) 2012-11-22 2013-09-11 キヤノン株式会社 情報処理方法、情報処理装置
WO2014112375A1 (ja) * 2013-01-17 2014-07-24 日本電気株式会社 話者識別装置、話者識別方法、および話者識別用プログラム
US9036876B2 (en) * 2013-05-01 2015-05-19 Mitsubishi Electric Research Laboratories, Inc. Method and system for authenticating biometric data
US9788777B1 (en) * 2013-08-12 2017-10-17 The Neilsen Company (US), LLC Methods and apparatus to identify a mood of media
CN103745242A (zh) * 2014-01-30 2014-04-23 中国科学院自动化研究所 一种跨设备生物特征识别方法
WO2017024172A1 (en) * 2015-08-05 2017-02-09 Cronvo Llc Systems and methods for managing telecommunications
CN105224623B (zh) * 2015-09-22 2019-06-18 北京百度网讯科技有限公司 数据模型的训练方法及装置
US10027888B1 (en) * 2015-09-28 2018-07-17 Amazon Technologies, Inc. Determining area of interest in a panoramic video or photo
CN105488463B (zh) * 2015-11-25 2019-01-29 康佳集团股份有限公司 基于人脸生物特征的直系亲属关系识别方法及系统
CN105306495B (zh) * 2015-11-30 2018-06-19 百度在线网络技术(北京)有限公司 用户识别方法和装置
US10990658B2 (en) * 2016-07-11 2021-04-27 Samsung Electronics Co., Ltd. Method and apparatus for verifying user using multiple biometric verifiers
US10074371B1 (en) * 2017-03-14 2018-09-11 Amazon Technologies, Inc. Voice control of remote device by disabling wakeword detection
CN107609461A (zh) * 2017-07-19 2018-01-19 阿里巴巴集团控股有限公司 模型的训练方法、数据相似度的确定方法、装置及设备


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210019553A1 (en) * 2018-03-30 2021-01-21 Nec Corporation Information processing apparatus, control method, and program
US11526552B2 (en) * 2020-08-18 2022-12-13 Lyqness Inc. Systems and methods of optimizing the use of user questions to identify similarities among a large network of users
US20220114491A1 (en) * 2020-10-09 2022-04-14 AquaSys LLC Anonymous training of a learning model
US11455425B2 (en) * 2020-10-27 2022-09-27 Alipay (Hangzhou) Information Technology Co., Ltd. Methods, apparatuses, and systems for updating service model based on privacy protection
CN114756677A (zh) * 2022-03-21 2022-07-15 Mashang Consumer Finance Co., Ltd. Sample generation method, text classification model training method, and text classification method

Also Published As

Publication number Publication date
WO2019015641A1 (zh) 2019-01-24
PH12019501851A1 (en) 2020-06-29
MY201891A (en) 2024-03-22
US20200167693A1 (en) 2020-05-28
JP2020524315A (ja) 2020-08-13
US11288599B2 (en) 2022-03-29
EP3611657A1 (de) 2020-02-19
SG11201907257SA (en) 2019-09-27
KR102349908B1 (ko) 2022-01-12
KR20200014723A (ko) 2020-02-11
JP6883661B2 (ja) 2021-06-09
TWI735782B (zh) 2021-08-11
TW201909005A (zh) 2019-03-01
EP3611657A4 (de) 2020-05-13
CN107609461A (zh) 2018-01-19

Similar Documents

Publication Publication Date Title
US11288599B2 (en) Model training method, apparatus, and device, and data similarity determining method, apparatus, and device
EP3248143B1 (de) Reducing computational resources used to train an image-based classifier
CN107209861B (zh) Optimizing multi-class multimedia data classification using negative data
JP6894534B2 (ja) Information processing method and terminal, and computer storage medium
CN111325037B (zh) Text intent recognition method, apparatus, computer device, and storage medium
CN109284675B (zh) User identification method, apparatus, and device
US20200311145A1 (en) System and method for generating an answer based on clustering and sentence similarity
CN108227564B (zh) Information processing method, terminal, and computer-readable medium
US20160171063A1 (en) Modeling actions, consequences and goal achievement from social media and other digital traces
Rosa et al. Twitter topic fuzzy fingerprints
CN112132238A (zh) Method, apparatus, device, and readable medium for identifying private data
CN111523103B (zh) User identity recognition method and apparatus, and electronic device
Ozkan et al. A large-scale database of images and captions for automatic face naming
WO2022222942A1 (zh) Question-and-answer record generation method and apparatus, electronic device, and storage medium
Czyżewski et al. Analysis of results of large‐scale multimodal biometric identity verification experiment
WO2017000341A1 (zh) Information processing method, apparatus, and terminal
WO2015023031A1 (ko) Method and apparatus for supporting specialized-field searches
CN112015994B (zh) Drug recommendation method, apparatus, device, and medium
CN110019556B (zh) Topic news acquisition method, apparatus, and device
Dass et al. Cyberbullying detection on social networks using LSTM model
US20210117448A1 (en) Iterative sampling based dataset clustering
CN112115248B (zh) Method and system for extracting dialogue policy structures from a dialogue corpus
CN113139379B (zh) Information recognition method and system
US20190385715A1 (en) Systems and methods for facilitating computer-assisted linkage of healthcare records
Deleris et al. Probability statements extraction with constrained conditional random fields

Legal Events

Date Code Title Description
AS Assignment

Owner name: ALIBABA GROUP HOLDING LIMITED, CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JIANG, NAN;ZHAO, HONGWEI;REEL/FRAME:050442/0548

Effective date: 20190902

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALIBABA GROUP HOLDING LIMITED;REEL/FRAME:053713/0665

Effective date: 20200826

AS Assignment

Owner name: ADVANCED NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD.;REEL/FRAME:053761/0338

Effective date: 20200910

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION