WO2020215915A1 - Identity verification method and apparatus, computer device, and storage medium - Google Patents

Identity verification method and apparatus, computer device, and storage medium

Info

Publication number
WO2020215915A1
Authority
WO
WIPO (PCT)
Prior art keywords
attribute
main
identity verification
network
feature vector
Prior art date
Application number
PCT/CN2020/078777
Other languages
English (en)
French (fr)
Inventor
梁健
曹誉仁
张晨斌
白琨
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Priority to JP2021539985A (patent JP7213358B2)
Priority to EP20794930.6A (patent EP3961441B1)
Publication of WO2020215915A1
Priority to US17/359,125 (publication US20210326576A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification

Definitions

  • This application relates to the field of artificial intelligence, in particular to an identity verification method, device, computer equipment and storage medium.
  • Identity verification technology refers to a technology that confirms a user's identity through a certain method in a computer system.
  • Common identity verification technologies include: face recognition, fingerprint recognition, terminal gesture recognition and so on.
  • In the related art, a neural network model is deployed in the server.
  • the neural network model is called to verify the face image; when verification succeeds, the identity of the user to be verified is confirmed; when verification fails, an error notification is returned.
  • the neural network model is trained in advance on a training set.
  • because the samples in the training set are more or less grouped, the above neural network model may mislearn a biased prediction. For example, when the user starts to grow a beard, wears glasses, or changes clothes with the season, verification by the neural network model may fail.
  • According to various embodiments provided in the present application, an identity verification method, apparatus, computer device, and storage medium are provided.
  • the technical solution is as follows:
  • an identity verification method, executed by a computer device, the method including:
  • the main attribute feature vector is an unbiased feature representation in which m-1 domain difference features in the original features are selectively decoupled, and m is an integer greater than 2;
  • the identity verification model is called to perform feature extraction on the original features to obtain the main attribute feature vector; the identity verification model includes:
  • a first generative adversarial network; or, the first generative adversarial network and a second generative adversarial network;
  • the first generative adversarial network is a network trained by selectively decoupling the m-1 domain difference features based on causality;
  • the second generative adversarial network is a network obtained by randomly combining the attribute feature vectors of different attributes extracted by the first generative adversarial network and then performing additive adversarial training;
  • the attributes include the identity and the m-1 domain differences.
  • the first generative adversarial network includes m generators G_1 to G_m, and each generator G_j corresponds to m discriminators D_j1 to D_jm; the j-th generator G_j is used to learn the features of the j-th attribute, the attributes include identity and m-1 domain differences, i, j, j' ∈ [m], and the method includes:
  • the second generative adversarial network includes m additive space conversion networks and m recognition networks in one-to-one correspondence with m attributes.
  • the attributes include identity and m-1 domain differences, j ∈ [m], m is an integer greater than 2, and the method includes:
  • the n_r combined attribute feature vectors are divided into a first vector set and a second vector set, where the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
  • the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector;
  • the j-th recognition network is used to identify, on the sum feature vector of the m additive feature vectors, the label corresponding to the j-th attribute;
  • the second loss is backpropagated to the recognition networks and the additive space conversion networks corresponding to the other attributes.
  • an identity verification device, the device comprising:
  • a collection module, used to collect the original features of the user, where there are m-1 domain difference features in the original features, and m is an integer greater than 2;
  • an identity verification module, used to extract the main attribute feature vector from the original features;
  • the main attribute feature vector is an unbiased feature representation in which the m-1 domain difference features in the original features are selectively decoupled;
  • the identity verification module is further configured to perform unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result.
  • a computer device, including a processor and a memory, where computer-readable instructions are stored in the memory, and when executed by the processor, the computer-readable instructions cause the processor to perform the steps of the identity verification method;
  • a computer-readable storage medium storing computer-readable instructions, where when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the identity verification method.
  • FIG. 1 is a flowchart of an identity verification method provided in the related art;
  • FIG. 2 is a block diagram of an identity verification system provided by an exemplary embodiment of the present application;
  • FIG. 3 is a flowchart of an identity verification method provided by an exemplary embodiment of the present application;
  • FIG. 4 is a schematic diagram of the two stages of the first generative adversarial network and the second generative adversarial network during operation, according to an exemplary embodiment of the present application;
  • FIG. 5 is a network structure diagram of a first generative adversarial network and a second generative adversarial network provided by an exemplary embodiment of the present application;
  • FIG. 6 is a flowchart of a training method of a first generative adversarial network provided by an exemplary embodiment of the present application;
  • FIG. 7 is a schematic diagram of the interface of identity verification software provided by an exemplary embodiment of the present application;
  • FIG. 8 is a schematic diagram of a network architecture for causality-based decoupled learning provided by an exemplary embodiment of the present application;
  • FIG. 9 is a flowchart of a training method for a second generative adversarial network provided by an exemplary embodiment of the present application;
  • FIG. 10 is a schematic diagram of the training principle of the second generative adversarial network provided by an exemplary embodiment of the present application;
  • FIG. 11 is a flowchart of an identity verification method provided by an exemplary embodiment of the present application;
  • FIG. 12 is a flowchart of an identity verification method provided by an exemplary embodiment of the present application;
  • FIG. 13 is a flowchart of an identity verification method provided by an exemplary embodiment of the present application;
  • FIG. 14 is a block diagram of an identity verification device provided by an exemplary embodiment of the present application;
  • FIG. 15 is a block diagram of a training device for a first generative adversarial network provided by an exemplary embodiment of the present application;
  • FIG. 16 is a block diagram of a training device for a second generative adversarial network provided by an exemplary embodiment of the present application;
  • FIG. 17 is a block diagram of a computer device provided by an exemplary embodiment of the present application.
  • Identity verification technology: technology that confirms a user's identity through computer means.
  • Common identity verification technologies include at least one of face recognition, fingerprint recognition, voiceprint recognition, iris recognition, terminal gesture recognition, and pedestrian re-identification.
  • Identity verification model: the neural network model used for identity recognition.
  • Face recognition: technology that confirms a user's identity through the feature points on a face image.
  • the feature points on the face image include, but are not limited to: at least one of eyebrow feature points, eye feature points, mouth feature points, nose feature points, ear feature points, and cheek feature points.
  • Terminal gesture recognition: technology that confirms a user's identity from the physical characteristics of the user's operations collected by the terminal's internal sensors when the user uses a terminal (such as a mobile phone), such as pressing force, pressing frequency, pressing position, body vibration frequency, body vibration period, and body displacement.
  • Domain: a factor that causes an overall distribution deviation of a subset of samples in a training set. For example, in face recognition, users' hair color (black, blonde, or white) is a domain difference; whether users wear glasses can be regarded as a domain difference; whether users have a beard is also a domain difference.
  • Transfer learning: building a learning system that handles domain differences present in the data.
  • Negative transfer: a concept in transfer learning describing the phenomenon that accuracy on the test set drops because of a transfer learning method adopted on the training set.
  • GAN: Generative Adversarial Network.
  • Discriminator: the part of a GAN that plays against the generator and is responsible for judging whether the data generated by the generator is close to real data.
  • the identity verification model may mislearn a biased prediction due to user grouping/clustering. For example, in face recognition verification, verification may fail when the user grows a beard or starts wearing glasses. Likewise, in the field of pedestrian re-identification, verification may fail when the season changes people's clothing or when images are captured by cameras at different angles.
  • in the related art, methods are provided to eliminate the influence of domain differences on the accuracy of identity verification.
  • such methods include, but are not limited to: Transfer Component Analysis (TCA), Deep Adaptation Network (DAN), Reversing Gradient (RevGrad), and Adversarial Discriminative Domain Adaptation (ADDA).
  • In the related art shown in FIG. 1, the identity verification model includes: a generator 12, a task discriminator (Task Discriminator) 14, and a difference discriminator (Bias Discriminator) 16.
  • the generator 12 is used to extract feature vectors from the original features;
  • the task discriminator 14 is used to perform identity recognition based on the feature vectors, such as user 1, user 2, and user 3;
  • the difference discriminator 16 is used to perform device-model discrimination based on the feature vectors, such as model 1, model 2, and model 3.
  • the difference discriminator 16 eliminates, through adversarial learning, the feature information related to model discrimination in the feature vector output by the generator 12, and the task discriminator 14 is used to identify the user.
  • This application provides an unbiased identity verification scheme, which can eliminate the influence of multiple domain differences on identity verification as much as possible, and is suitable for identity verification scenarios with multiple domain differences.
  • Fig. 2 shows a block diagram of an identity verification system provided by an exemplary embodiment of the present application.
  • the identity verification system includes: a terminal 120, a network 140, and a server 160.
  • the terminal 120 may be a mobile phone, a tablet computer, a desktop computer, a notebook computer, a surveillance camera and other equipment.
  • the terminal 120 is a terminal with identity verification requirements.
  • the terminal 120 is used to collect original features required for identity verification.
  • the original features include at least one of face data, terminal sensor data, iris data, fingerprint data, and voiceprint data.
  • a user account may be logged in on the terminal 120; that is, the terminal 120 may be a private device. In other embodiments, the terminal 120 is a monitoring device.
  • the terminal 120 may be connected to the server 160 through the network 140.
  • the network 140 may be a wired network or a wireless network.
  • the terminal 120 may transmit the authentication data to the server 160, and after the server 160 completes the identity verification, the identity verification result is returned to the terminal 120.
  • the server 160 is a background server used for authentication.
  • the server 160 is provided with a neural network model for identity verification (hereinafter referred to as an identity verification model).
  • the identity verification model can perform identity verification based on unbiased representation of characteristic data.
  • Fig. 3 shows a flowchart of an identity verification method provided by an exemplary embodiment of the present application.
  • the identity verification method includes: a training phase 220 and a testing (and application) phase 240.
  • a training set for training the identity verification model is constructed.
  • the training set includes: the original feature 221 of each sample, the identity tag 222, and various domain difference tags 223.
  • each sample corresponds to a user
  • the original feature 221 is user feature data collected during the identity verification process.
  • the identity tag 222 is used to identify the identity of the user
  • the domain difference tag 223 is used to identify the domain difference of the user. Taking domain differences including hair color differences and beard differences as an example, Table 1 schematically shows two sets of samples.
  • the decoupled learning 224 takes identity verification as the main learning task and the multiple domain differences as auxiliary learning tasks. For each sample, the identity and each domain difference are each regarded as an attribute. For each attribute, adversarial learning is used to learn a decoupled representation of that attribute (that is, the feature vector of each attribute is extracted as independently as possible), so that the hidden-layer space does not contain the classification information of the other attributes. As a result, the finally learned identity verification model 242 can ignore, as much as possible, the influence of the multiple domain differences on identity verification, and thereby output accurate identity verification results.
  • In the testing (and application) phase 240, the original features 241 in the test set are input into the identity verification model 242 for unbiased identity verification, which then outputs the identity verification result (that is, the identity tag 243).
  • the identity verification model 242 is put into practical application.
  • FIG. 4 shows a structural block diagram of an identity verification model 242 provided by an exemplary embodiment of the present application.
  • the identity verification model 242 includes a first generative adversarial network 242a and a second generative adversarial network 242b.
  • the first generative adversarial network 242a is a network obtained by selectively decoupling m-1 domain difference features based on causality, and m is an integer greater than 2.
  • the second generative adversarial network 242b is a network obtained by randomly combining the attribute feature vectors of different attributes output by the first generative adversarial network 242a and then performing additive adversarial training.
  • the first generative adversarial network 242a and the second generative adversarial network 242b are used to implement two-stage decoupled learning.
  • In the first stage, the first generative adversarial network 242a is used to learn decoupled feature representations based on the asymmetric causality between attributes. That is, the first generative adversarial network 242a is obtained by training in the following way: when a first domain difference feature and a second domain difference feature with a causal relationship exist in the original features, decoupling learning between the second domain difference feature and the first domain difference feature is skipped during the adversarial learning of the second domain difference feature.
  • Because the first generative adversarial network 242a does not forcibly decouple domain differences that have causal relationships with each other, negative transfer occurs with no or extremely small probability.
  • In the second stage, the attribute feature vectors of different attributes are randomly combined to form new combinations that do not appear in the samples, and the second generative adversarial network 242b then performs decoupling based on additive adversarial learning to achieve further decoupled learning. That is, the second generative adversarial network is obtained by training in the following way: the attribute feature vectors of different attributes extracted from the training set by the first generative adversarial network 242a are randomly combined, and attribute combinations that did not appear in the training set are used for additive adversarial training.
  • the second generative adversarial network 242b can fully decouple the domain differences of unrelated attributes, thereby solving the problem that insufficient decoupling of the domain differences of unrelated attributes leaves too much attribute dependence in the learned features.
  • the first generative adversarial network 242a can also be used on its own; that is, the second generative adversarial network 242b is an optional part.
  • The first generative adversarial network 242a
  • the first generative adversarial network 242a includes: a basic generator G_0, m generators (also called attribute feature learning networks) G_1 to G_m, and m×m discriminators D_11 to D_mm.
  • the basic generator G_0 is used to transform the original feature x to obtain the global attribute feature vector f_0;
  • each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute.
  • the attributes include identity and m-1 domain differences.
  • each generator G_1 to G_m is used to extract the discriminative information associated with its attribute, so as to learn the attribute feature vector after that attribute is decoupled from the other attributes.
  • the j-th generator is associated with the j-th attribute.
  • each attribute feature vector contains only the discriminative information associated with its attribute.
  • This application considers a given matrix in R^{m×m} that contains the causal relationships between each pair of attributes. Then, for each j ∈ [m], this application constructs m discriminator networks D_j1, ..., D_jm to handle the causal relationships between the j-th attribute and each of the m attributes.
  • each D_ii is used to learn the features of the i-th attribute, and each D_ij (i ≠ j) is used to eliminate the features of the j-th attribute in the adversarial learning of the i-th attribute.
  • the generator G_1 corresponding to identity can be called the main generator, and the other generators G_2 and G_3 each correspond to one domain.
  • each generator also corresponds to m discriminators, and the discriminator D_11 can be called the main discriminator.
  • the main generator G_1 is used to perform feature extraction on the global attribute feature vector f_0 to obtain the first main attribute feature vector f_1.
  • the main discriminator D_11 is used to perform identity verification on the first main attribute feature vector f_1 to obtain the identity verification result; or, when the first generative adversarial network 242a is cascaded with the second generative adversarial network 242b, the main discriminator D_11 is used to perform a first discrimination on the first main attribute feature vector f_1 and then output the combined attribute feature vector f'_1 to the second generative adversarial network 242b.
  • [k]: the set of subscripts {1, 2, ..., k};
  • [-i]: the set of subscripts with the i-th element removed;
  • n: the number of samples;
  • m: the number of attributes;
  • Y ∈ R^{n×m}: the output/attribute/label matrix, containing n independent samples y_i, i ∈ [n];
  • X ∈ R^{n×d}: the input/feature matrix, containing n independent samples x_i, i ∈ [n].
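  • For illustration, a minimal PyTorch-style sketch of this structure is given below (this is not the patent's implementation; the use of single linear layers, the hidden sizes, and all names are assumptions). It wires up a basic generator G_0, m attribute generators G_1 to G_m, and an m×m grid of discriminators D_jk:

```python
import torch
import torch.nn as nn

class FirstAdversarialNetwork(nn.Module):
    """Sketch of the first generative adversarial network: G_0 maps the raw
    feature x to the global feature f_0, each G_j extracts the feature vector
    of the j-th attribute, and D_jk judges the k-th attribute from G_j's output."""

    def __init__(self, d_in, d_hidden, num_classes_per_attr):
        super().__init__()
        m = len(num_classes_per_attr)  # attributes: identity + m-1 domain differences
        self.G0 = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.G = nn.ModuleList(
            nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU())
            for _ in range(m)
        )
        # D[j][k] discriminates attribute k on the output of generator G_j;
        # D[j][j] is the supervised head, D[j][k] (k != j) the adversarial one.
        self.D = nn.ModuleList(
            nn.ModuleList(nn.Linear(d_hidden, k_c) for k_c in num_classes_per_attr)
            for _ in range(m)
        )

    def forward(self, x):
        f0 = self.G0(x)                                  # global feature f_0
        feats = [G_j(f0) for G_j in self.G]              # f_1 ... f_m
        logits = [[D_jk(f_j) for D_jk in D_j]            # D_jk(f_j) for all j, k
                  for f_j, D_j in zip(feats, self.D)]
        return feats, logits
```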
  • the model is trained on the corresponding features and attribute labels.
  • the training of the first generative adversarial network 242a is a typical adversarial learning training process, in which the generators G_0 to G_m are used for feature extraction.
  • the discriminators D_11 to D_mm are divided into two categories: for all i, j ∈ [m], i ≠ j, each discriminator D_ii is used to learn the features of the i-th attribute, and each discriminator D_ij is used to eliminate the features of the j-th attribute;
  • the adversarial learning process for the discriminators D_ij can be regarded as the following two alternating steps:
  • Step 601: fix all G_i and optimize the D_ij so that their outputs approximate the corresponding one-hot encoded labels y_j;
  • Step 602: fix all D_ij and optimize all G_i so that the outputs approximate the corresponding (1 - y_i).
  • for attribute pairs that are not selected for decoupling, the output loss of the corresponding discriminator is not backpropagated, i, j, j' ∈ [m].
  • Step 603: perform the above two steps alternately until the training end conditions of the generators G_i and the discriminators D_ij are met.
  • the training end condition includes: the loss function converges to the target value, or the number of training iterations reaches a preset number.
  • After training, G_i can extract the features corresponding to the i-th attribute but not the features corresponding to the other attributes. In this way, the i-th attribute is decoupled from the other attributes.
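  • The alternation of steps 601 to 603 can be sketched as follows, reusing the FirstAdversarialNetwork sketch above (an illustrative sketch only: the normalization of the elimination target and the use of a boolean causal mask are assumptions):

```python
import torch.nn.functional as F

def adversarial_round(model, opt_D, opt_G, x, labels, causal_mask):
    """One alternation of steps 601 and 602. labels[k]: one-hot labels of
    attribute k; causal_mask[j][k] is True when attribute k is selected for
    decoupling from branch j (False entries are skipped, i.e. their loss is
    not backpropagated)."""
    # Step 601: fix all G, optimize every D_jk toward the one-hot label y_k.
    feats, logits = model(x)
    m = len(feats)
    loss_D = sum(F.cross_entropy(logits[j][k], labels[k].argmax(dim=1))
                 for j in range(m) for k in range(m))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Step 602: fix all D, optimize every G_j so that D_jj still predicts
    # attribute j while each selected D_jk (k != j) is pushed toward the
    # uninformative target proportional to (1 - y_k).
    feats, logits = model(x)
    loss_G = sum(F.cross_entropy(logits[j][j], labels[j].argmax(dim=1))
                 for j in range(m))
    for j in range(m):
        for k in range(m):
            if j != k and causal_mask[j][k]:
                anti = 1.0 - labels[k]
                anti = anti / anti.sum(dim=1, keepdim=True)  # normalize to a distribution
                loss_G = loss_G + F.mse_loss(logits[j][k].softmax(dim=1), anti)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    # Step 603: the caller repeats this round until the end conditions are met.
```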
  • the optimization problems of the adversarial learning of the first generative adversarial network 242a are as follows.
  • the first is the optimization problem of attribute learning, that is, the loss function of the generator G_j, where w_j is the weight of the j-th attribute, G_0 is the basic generator, G_j is the generator corresponding to the j-th attribute, D_jj is the (j,j)-th discriminator, and j ∈ [m].
  • the second is the discriminative learning of domain differences, that is, the loss function of the discriminators, where ŷ_ij' is the one-hot encoding vector of y_ij', ℓ_adv is the adversarial loss function, w_jj' is the weight of the (j, j') attribute pair, j, j' ∈ [m], and x_i is the i-th original feature in the training set.
  • the third is the elimination of domain differences, where 1_{k_j'} is an all-ones vector of dimension k_j'.
  • the activation function of the last layer of each discriminator network is softmax, ℓ_ce is the cross entropy loss, and ℓ_mse is the mean square error loss.
  • during training, the above four optimization problems are repeated in sequence; within each cycle, the first two optimization problems are optimized for 1 step and the last two for 5 steps.
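  • Written out, one plausible reconstruction of these optimization problems, consistent with the symbol definitions above (the exact form of ℓ_adv and the normalization of the elimination target are assumptions, not the patent's verbatim formulas), is:

```latex
% Attribute learning: D_{jj} learns the j-th attribute from branch j
\min_{D_{jj}} \sum_{i=1}^{n} w_j \,
  \ell_{ce}\bigl(D_{jj}(G_j(G_0(x_i))),\, \hat{y}_{ij}\bigr), \qquad j \in [m]

% Discriminative learning of domain differences: D_{jj'} learns attribute j'
\min_{D_{jj'}} \sum_{i=1}^{n} w_{jj'} \,
  \ell_{adv}\bigl(D_{jj'}(G_j(G_0(x_i))),\, \hat{y}_{ij'}\bigr), \qquad j \neq j'

% Elimination of domain differences: G_j makes D_{jj'} uninformative
\min_{G_0,\,G_j} \sum_{i=1}^{n} w_{jj'} \,
  \ell_{mse}\Bigl(D_{jj'}(G_j(G_0(x_i))),\, \tfrac{1}{k_{j'}}\mathbf{1}_{k_{j'}}\Bigr), \qquad j \neq j'
```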
  • In the example of FIG. 8, terminal sensor data is used for identity authentication.
  • taking a smart phone as the terminal as an example, the smart phone is equipped with a gravity acceleration sensor and a gyroscope sensor.
  • when the user clicks the password "171718" on the screen, the gravity acceleration sensor and the gyroscope sensor collect the user's operating characteristics, thereby generating sensor data that can be used to verify the user's identity.
  • because the operating system and body thickness differ from terminal to terminal, different operating systems report sensor data in different formats, and different body thicknesses also affect the sensor data collected by the sensors. Therefore, in the example of FIG. 8, the first generative adversarial network 242a includes: a basic generator G_0, a main generator G_1, an auxiliary generator G_2, and an auxiliary generator G_3.
  • the network corresponding to the main generator G_1 performs supervised learning for identity recognition and adversarial learning against system discrimination and thickness discrimination, so that the features extracted by the main generator G_1 contain only identity discrimination information, not features for system discrimination or thickness discrimination.
  • similarly, the features extracted by the auxiliary generator G_2 contain only features for system discrimination, not features for identity discrimination or thickness discrimination;
  • and the features extracted by the auxiliary generator G_3 contain only features for thickness discrimination.
  • This application makes use of the causal relationships between pairs of attributes. Specifically, for each attribute, this application selects a subset of all the other attributes for decoupling. The selection is based on the causal relationship between the attribute and each other attribute: if another attribute is not a cause of changes in this attribute, then that other attribute can be decoupled from this attribute.
  • This technique enables the method of the present application to select attributes flexibly, avoiding both the negative transfer caused by forcibly decoupling all other attributes (especially attributes with causal relationships) and the attribute dependence that results from decoupling too few attributes. Taking FIG. 8 as an example, if a thickness change can cause the system to change, then the system discrimination attribute cannot be decoupled from the thickness discrimination attribute.
  • A system change, however, is not a cause of a thickness change, so the thickness discrimination attribute can be decoupled from the system discrimination attribute. The structure therefore becomes the one shown in FIG. 8: the thickness discrimination adversarial target is removed from the network of the auxiliary generator G_2, while the network of the auxiliary generator G_3 keeps the system discrimination adversarial target.
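  • A small Python sketch of this selection is given below (the attribute indices and matrix layout are assumptions; only the "thickness causes system" edge comes from the example above). It derives, for each branch, which adversarial targets are kept:

```python
# Attributes: 0 = identity, 1 = operating system, 2 = body thickness.
# cause[a][b] is True when a change in attribute b can cause a change in
# attribute a, so attribute b must NOT be adversarially removed from branch a.
cause = [
    [False, False, False],  # nothing listed causes identity to change
    [False, False, True],   # a thickness change can cause a system change
    [False, False, False],  # a system change does not cause a thickness change
]

m = len(cause)
# decouple[j][k]: branch j keeps an adversarial target that eliminates attribute k.
decouple = [[j != k and not cause[j][k] for k in range(m)] for j in range(m)]

# Result: branch 1 (system, generator G_2) drops its thickness target
# (decouple[1][2] is False), while branch 2 (thickness, generator G_3)
# keeps its system target (decouple[2][1] is True), matching FIG. 8.
```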
  • the second generative adversarial network 242b includes m additive space conversion networks T_1 to T_m and m recognition networks R_1 to R_m.
  • the combined attribute feature vectors generated by the first generative adversarial network 242a are converted by the m additive space conversion networks T_1 to T_m into m additive feature vectors s_1, ..., s_m.
  • the m additive feature vectors are added to form a sum feature vector u, which is then sent to the m recognition networks R_1, ..., R_m for recognition, corresponding to the m attributes respectively.
  • among the m additive space conversion networks, the network T_1 corresponding to identity recognition can also be called the main additive space conversion network; among the m recognition networks, the network R_1 corresponding to identity recognition can also be called the main recognition network.
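  • The following PyTorch-style sketch mirrors this structure (the linear layers and dimensions are assumptions): m additive space conversion networks T_1 to T_m, a sum feature vector u, and m recognition networks R_1 to R_m:

```python
import torch
import torch.nn as nn

class AdditiveAdversarialNetwork(nn.Module):
    """Sketch of the second generative adversarial network: T_j converts the
    j-th combined attribute feature vector into the additive vector s_j; the
    s_j are summed into u, which every recognition network R_j classifies."""

    def __init__(self, d_feat, d_add, num_classes_per_attr):
        super().__init__()
        m = len(num_classes_per_attr)
        self.T = nn.ModuleList(nn.Linear(d_feat, d_add) for _ in range(m))
        self.R = nn.ModuleList(nn.Linear(d_add, k_c) for k_c in num_classes_per_attr)

    def forward(self, feats):                    # feats: m per-attribute vectors
        s = [T_j(f_j) for T_j, f_j in zip(self.T, feats)]
        u = torch.stack(s, dim=0).sum(dim=0)     # sum feature vector u
        return [R_j(u) for R_j in self.R]        # one prediction per attribute
```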
  • FIG. 9 shows a flowchart of a training method of the second generative adversarial network 242b provided by an exemplary embodiment of the present application.
  • the method includes:
  • Step 901: randomly combine the attribute feature vectors corresponding to different attributes generated by the first generative adversarial network to generate n_r combined attribute feature vectors;
  • Step 902: divide the n_r combined attribute feature vectors into a first vector set and a second vector set, where the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
  • each combined attribute feature vector corresponds to an attribute combination and is assigned to one of two subsets according to that combination: the attribute combinations that appear in the training set and the attribute combinations that do not. Define the following two subscript sets Ω_s and Ω_u:
  • Ω_s ⊆ [n_r]: the subscripts of the attribute combinations seen in the training set;
  • Ω_u ⊆ [n_r]: the subscripts of the attribute combinations not seen in the training set.
  • Step 903: use the first vector set and the second vector set to train the additive space conversion networks and the recognition networks;
  • the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector;
  • the j-th recognition network is used to identify, on the sum feature vector of the m additive feature vectors, the label corresponding to the j-th attribute.
  • Step 904: for the first loss generated during training on the first vector set, the first loss is backpropagated to the recognition networks and the additive space conversion networks corresponding to every attribute for training;
  • where w'_j is the weight of attribute j, T_j is the additive space conversion network corresponding to the j-th attribute, R_j is the recognition network corresponding to the j-th attribute, R_j' is the recognition network corresponding to the j'-th attribute, f_ij' is the hidden-layer feature vector of the j'-th attribute of the i-th sample, the combination symbol denotes random combination, and "s.t." is the abbreviation of "subject to", meaning that u_i satisfies the constraints.
  • Step 905: for the second loss generated during training on the second vector set, the second loss is backpropagated to the recognition networks and the additive space conversion networks corresponding to the other attributes for training.
  • the symbols here are the same as those defined for the first loss above.
  • the last activation function of all the recognition networks is also a softmax function, and ℓ_ce is the cross entropy loss function.
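  • A minimal training-step sketch for this scheme, reusing the AdditiveAdversarialNetwork sketch above, is shown below (the detach-based routing is an assumption about one way to implement "backpropagate the second loss only to the branches of the other attributes"):

```python
import torch.nn.functional as F

def additive_training_step(net, opt, feats_by_attr, labels, seen):
    """feats_by_attr[j]: randomly combined feature vector of attribute j;
    labels[j]: class indices of attribute j; seen: True when this attribute
    combination appears in the training set (first vector set)."""
    if seen:
        # First loss: train the recognition network and conversion network
        # of every attribute.
        logits = net(feats_by_attr)
        loss = sum(F.cross_entropy(lg, y) for lg, y in zip(logits, labels))
    else:
        # Second loss: the error of recognizer R_j is routed only into the
        # OTHER branches T_k (k != j), which learn to remove attribute-j
        # information; branch j itself is detached from its own error.
        loss = 0.0
        for j in range(len(feats_by_attr)):
            s = [net.T[k](f) for k, f in enumerate(feats_by_attr)]
            u = sum(v.detach() if k == j else v for k, v in enumerate(s))
            loss = loss + F.cross_entropy(net.R[j](u), labels[j])
    opt.zero_grad(); loss.backward(); opt.step()
```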
  • the optimization mechanism of the additive adversarial network is shown in FIG. 10.
  • suppose the first two attributes are the object category and the color category;
  • the first two branches of the additive adversarial network correspond in turn to learning these two attributes.
  • after training, the attribute combinations that have been seen are handled correctly; for example, a white mountain can be accurately identified as the object "mountain" and the color "white".
  • for an unseen combination, such as a green mountain, this application requires the network to output the object "mountain" and the color "green".
  • if the output color is not "green", then there is reason to believe that the error comes from the residual "white" information in the first branch of the network.
  • therefore, this application backpropagates the color error generated at the output of the second branch to the first branch to eliminate the color information there. In this way, the domain difference caused by the color information in the first branch is eliminated.
  • each user group corresponds to only one domain, such as one device type; that is, the user groups are divided according to domain differences.
  • the model trained on one domain is tested on another domain, and each user group considers the difference of only one domain, as shown in Table 2.
  • the basic generator G_0 in the above embodiments, the m generators (also called attribute feature learning networks) G_1 to G_m, and the m additive space conversion networks T_1 to T_m can each be any neural network.
  • the last-layer activation functions of the m×m discriminators D_11 to D_mm and the m recognition networks R_1 to R_m in the above embodiments may be the softmax function, sigmoid function, tanh function, linear function, swish activation function, or relu activation function.
  • the loss functions (including those in phase 2) may be the cross entropy loss, logistic loss, mean square loss, square loss, l_2 norm loss, or l_1 norm loss.
  • I(·) is an indicator function; its value is taken according to the prior probability of the label on the training set.
  • FIG. 11 shows a flowchart of an identity verification method provided by an exemplary embodiment of the present application. The method may be executed by the server shown in FIG. 2. The method includes:
  • Step 1101 Collect the original features of the user, and there are m-1 domain difference features in the original features;
  • the domain is a factor that causes the overall distribution deviation of a sample subset of a training set.
  • the domains include, but are not limited to, at least two of: hair color, beard, glasses, device model, operating system, body thickness, and application type.
  • m is an integer greater than 2.
  • Step 1102: extract the main attribute feature vector from the original features;
  • the main attribute feature vector is an unbiased feature representation in which the m-1 domain difference features in the original features are selectively decoupled;
  • the server calls the identity verification model to extract the main attribute feature vector from the original features.
  • the identity verification model includes:
  • the first generative adversarial network; or, the first generative adversarial network and the second generative adversarial network;
  • the first generative adversarial network is a network trained by selectively decoupling the m-1 domain difference features based on causality;
  • the second generative adversarial network is a network obtained by randomly combining the attribute feature vectors of different attributes extracted by the first generative adversarial network and then performing additive adversarial training.
  • Step 1103 Perform identity verification according to the main attribute feature vector to obtain the identity verification result
  • the server invokes the identity verification model to perform identity verification according to the main attribute feature vector to obtain the identity verification result.
  • Step 1104 Perform a target operation according to the identity verification result.
  • the target operation can be a sensitive operation related to authentication.
  • Target operations include, but are not limited to: unlocking the lock screen interface, unlocking the confidential space, authorizing payment behavior, authorizing transfer behavior, authorizing decryption behavior, and so on.
  • the method provided in this embodiment extracts the main attribute feature vector from the original features through the identity verification model and performs identity verification according to the main attribute feature vector to obtain the identity verification result. Because the main attribute feature vector is an unbiased feature representation in which the multiple domain difference features in the original features are selectively decoupled, the influence of the multiple domain difference features on the verification process is eliminated as much as possible; even when domain differences appear in the original features (such as growing a beard or changing hairstyle), identity verification can still be performed accurately.
  • the first generative adversarial network includes a basic generator, a main generator, and a main discriminator.
  • FIG. 12 shows a flowchart of an identity verification method provided by another exemplary embodiment of the present application. The method may be executed by the server shown in FIG. 2. The method includes:
  • Step 1201 Collect the original features of the user. There are m-1 domain difference features in the original features, and m is an integer greater than 2.
  • Step 1202 Invoke the basic generator to transform the original feature into a global attribute feature vector
  • the basic generator G 0 is used to transform the original feature x into a global attribute feature vector f 0 , as shown in FIG. 5.
  • the global attribute feature vector f_0 contains a mixture of identity attribute features and the m-1 domain difference features.
  • Step 1203: call the main generator to perform feature extraction on the global attribute feature vector to obtain the first main attribute feature vector;
  • the main generator G_1 is used to perform feature extraction on the global attribute feature vector f_0 to obtain the first main attribute feature vector f_1. The first main attribute feature vector f_1 is the feature vector corresponding to the identity attribute (decoupled from the m-1 domain difference features), that is, an unbiased feature representation in which the m-1 domain difference features in the original features are selectively decoupled.
  • Step 1204: call the main discriminator to perform identity verification on the first main attribute feature vector to obtain the identity verification result;
  • the main discriminator D_11 performs identity tag prediction on the first main attribute feature vector f_1 and outputs the corresponding identity tag.
  • the identity tag is either: belonging to identity tag i, or not belonging to any existing identity tag.
  • Step 1205 Perform a target operation according to the identity verification result.
  • the target operation can be a sensitive operation related to authentication.
  • Target operations include, but are not limited to: unlocking the lock screen interface, unlocking the confidential space, authorizing payment behavior, authorizing transfer behavior, authorizing decryption behavior, and so on.
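  • A minimal inference sketch of steps 1201 to 1204, reusing the FirstAdversarialNetwork sketch from earlier (the confidence threshold used to report "not belonging to any existing identity tag" is an assumption):

```python
import torch

@torch.no_grad()
def verify_identity(model, x, threshold=0.5):
    """Steps 1202-1204 for a single-sample batch: G_0 -> G_1 -> D_11."""
    f0 = model.G0(x)                              # step 1202: global feature f_0
    f1 = model.G[0](f0)                           # step 1203: main attribute feature f_1
    probs = model.D[0][0](f1).softmax(dim=1)      # step 1204: main discriminator D_11
    conf, identity = probs.max(dim=1)
    # Below the threshold, the sample belongs to no existing identity tag.
    return identity.item() if conf.item() > threshold else None
```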
  • the method provided in this embodiment uses the first generative adversarial network to perform unbiased identity verification.
  • because the first generative adversarial network does not forcibly decouple domain differences that have causal relationships with each other, negative transfer occurs with no or extremely small probability, and domain differences with causal relationships can be decoupled appropriately, so that a better unbiased identity verification result is obtained.
  • the first generative adversarial network includes a basic generator, a main generator, and a main discriminator;
  • the second generative adversarial network includes a main additive space conversion network and a main recognition network.
  • FIG. 13 shows a flowchart of an identity verification method provided by another exemplary embodiment of the present application.
  • the method may be executed by the server shown in FIG. 2, and the method includes:
  • Step 1301 Collect the original features of the user. There are m-1 domain difference features in the original features, and m is an integer greater than 2.
  • Step 1302: call the basic generator in the first generative adversarial network to transform the original features into a global attribute feature vector;
  • the basic generator G_0 is used to transform the original feature x into the global attribute feature vector f_0, as shown in FIG. 5.
  • the global attribute feature vector f_0 contains a mixture of identity attribute features and the m-1 domain difference features.
  • Step 1303: call the main generator in the first generative adversarial network to perform feature extraction on the global attribute feature vector to obtain the first main attribute feature vector;
  • the main generator G_1 is used to perform feature extraction on the global attribute feature vector f_0 to obtain the first main attribute feature vector f_1. The first main attribute feature vector f_1 is the feature vector corresponding to the identity attribute (decoupled from the m-1 domain difference features), that is, an unbiased feature representation in which the m-1 domain difference features in the original features are selectively decoupled.
  • Step 1304: call the main discriminator in the first generative adversarial network to perform a first discrimination on the first main attribute feature vector, and then output the combined attribute feature vector to the second generative adversarial network;
  • the main discriminator D_11 performs a first discrimination on the first main attribute feature vector f_1 and then outputs the combined attribute feature vector f'_1 to the second generative adversarial network.
  • Step 1305: call the main additive space conversion network in the second generative adversarial network to convert the combined attribute feature vector output by the first generative adversarial network to obtain an additive feature vector;
  • the main additive space conversion network T_1 converts the combined attribute feature vector f'_1 output by the first generative adversarial network to obtain the additive feature vector s_1.
  • Step 1306: call the main recognition network in the second generative adversarial network to perform identity recognition on the additive feature vector to obtain the identity verification result.
  • the main recognition network R_1 performs identity tag prediction on the additive feature vector s_1 and outputs the corresponding identity tag.
  • the identity tag is either: belonging to identity tag i, or not belonging to any existing identity tag.
  • Step 1307 Perform a target operation according to the identity verification result.
  • the target operation can be a sensitive operation related to authentication.
  • Target operations include, but are not limited to: unlocking the lock screen interface, unlocking the confidential space, authorizing payment behavior, authorizing transfer behavior, authorizing decryption behavior, and so on.
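  • A minimal inference sketch of steps 1302 to 1306, cascading the two network sketches from earlier (how the combined attribute feature vector f'_1 is derived from D_11's first discrimination is not fully specified in this text, so passing the main branch feature through unchanged is an assumption):

```python
import torch

@torch.no_grad()
def verify_identity_two_stage(first_net, second_net, x):
    f0 = first_net.G0(x)                        # step 1302: global feature f_0
    f1 = first_net.G[0](f0)                     # step 1303: main attribute feature f_1
    f1_combined = f1                            # step 1304: combined feature f'_1 (assumed)
    s1 = second_net.T[0](f1_combined)           # step 1305: additive feature s_1
    probs = second_net.R[0](s1).softmax(dim=1)  # step 1306: main recognition network R_1
    return probs.argmax(dim=1)                  # predicted identity tag
```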
  • the method provided in this embodiment uses the first generative adversarial network to perform unbiased identity verification.
  • because the first generative adversarial network does not forcibly decouple domain differences that have causal relationships with each other, negative transfer occurs with no or extremely small probability, and domain differences with causal relationships can be decoupled appropriately, so that a better unbiased identity verification result is obtained.
  • the method provided in this embodiment also performs unbiased identity verification by cascading the second generative adversarial network after the first generative adversarial network. Because the second generative adversarial network fully decouples the domain differences of unrelated attributes, it solves the problem that insufficient decoupling of the domain differences of unrelated attributes leaves too much attribute dependence in the learned features. Even when there are implicit relationships among multiple domain differences, the multiple domain differences can still be decoupled well, thereby improving decoupling performance and obtaining a better unbiased identity verification result.
  • the identity verification method provided in this application can be applied to the following scenarios:
  • When face recognition technology is used for identity verification, the terminal collects the user's face image for identity recognition. For the same user, the user may choose to have a beard or no beard, long hair or short hair, and glasses or no glasses, so that there are domain differences among different face images of the same user. These domain difference features will affect whether the identity verification result is correct.
  • To eliminate the influence of these domain difference features on the identity verification process, the identity verification method in the above embodiments can be used, so that the identity verification result can be obtained more accurately even when domain difference features are present.
  • When sensor data is used for identity verification, the terminal is provided with an acceleration sensor and/or a gyroscope sensor, and the user's behavior characteristics when using the terminal are collected through the sensors. The behavior characteristics include: the force with which the user presses the terminal, the frequency with which the user presses the terminal, and the pause rhythm when the user presses the terminal continuously. Different sensors report sensor data in different formats, different operating systems impose different requirements on the format of sensor data, and terminals of different shapes and thicknesses (equipped with the same sensors) collect different behavior characteristics; moreover, users may replace their terminal (such as a mobile phone) with a new one every year or so. As a result, domain differences arise when the same user account is verified on different terminals. These domain difference features will affect whether the identity verification result is correct. To eliminate their influence on the identity verification process, the identity verification method in the above embodiments can be used, so that the identity verification result can be obtained more accurately even when domain difference features are present.
  • When fingerprint recognition is used for identity verification, a fingerprint sensor is provided in the terminal, and the fingerprint sensor collects the user's fingerprint characteristics when the user uses the terminal. Because different fingerprint sensors report fingerprint data in different formats, when users change terminals, domain differences arise when the same user account is verified on different terminals. These domain difference characteristics will affect whether the identity verification result is correct. To eliminate the influence of these domain difference features on the identity verification process, the identity verification method in the above embodiments can be used, so that the identity verification result can be obtained more accurately even when domain difference features are present.
  • When iris recognition technology is used for identity verification, the terminal collects the user's iris image for identity recognition. For the same user, the user may or may not wear contact lenses, and different contact lenses may have different patterns; the domain differences caused by contact lenses will affect whether the identity verification result is correct. To eliminate the influence of these domain difference features on the identity verification process, the identity verification method in the above embodiments can be used, so that the identity verification result can be obtained more accurately even when domain difference features are present.
  • Fig. 14 shows a block diagram of an identity verification device provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or part of the server through software, hardware or a combination of both.
  • the device includes:
  • the collection module 1420 is used to collect the original features of the user, and there are m-1 domain difference features in the original features;
  • the identity verification module 1440 is configured to extract the main attribute feature vector from the original features;
  • the main attribute feature vector is an unbiased feature representation obtained by selectively decoupling the m-1 domain difference features in the original features, and m is an integer greater than 2;
  • the identity verification module 1440 is further configured to perform unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result;
  • the operation module 1480 is configured to perform target operations according to the identity verification result.
  • the identity verification module 1440 is configured to call the identity verification model to perform feature extraction on the original features to obtain the main attribute feature vector in the original features; the identity verification model includes: the first generative adversarial network; or, the first generative adversarial network and the second generative adversarial network;
  • the first generative adversarial network includes: a basic generator, a main generator, and a main discriminator;
  • the identity verification module 1440 is configured to call the basic generator to transform the original feature into a global attribute feature vector
  • the identity verification module 1440 is configured to call the main generator to perform feature extraction on the global attribute feature vector to obtain the first main attribute feature vector;
  • the identity verification module 1440 is configured to call the main discriminator to perform identity verification on the first main attribute feature vector to obtain the identity verification result, or to call the main discriminator to perform a first discrimination on the first main attribute feature vector and then output the combined attribute feature vector to the second generative adversarial network.
  • the first generative adversarial network is obtained by training in the following manner: when a first domain difference feature and a second domain difference feature with a causal relationship exist in the original features, decoupling learning between the second domain difference feature and the first domain difference feature is skipped during the adversarial learning of the second domain difference feature.
  • the first adversarial generative network includes: m generators G_1 to G_m; each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute; the generator G_1 corresponding to identity is the main generator, and the discriminator D_11 corresponding to the generator G_1 is the main discriminator, i, j, j' ∈ [m];
  • the first adversarial generative network is obtained by training in the following manner: all generators G_i are fixed while all discriminators D_ij are optimized so that the output approximates the label y_j corresponding to the j-th attribute; then all discriminators D_ij are fixed while all generators G_i are optimized so that the output approximates (1 - y_j); the two steps are performed alternately until the training end conditions of the generators G_i and the discriminators D_ij are met, and if the j'-th attribute and the j-th attribute have a causal relationship, the output loss of the discriminator D_jj' is not backpropagated.
  • the second adversarial generative network includes: a main additive space conversion network and a main recognition network;
  • the identity verification module 1440 is configured to call the main additive space conversion network to convert the combined attribute feature vector output by the first adversarial generative network to obtain an additive feature vector;
  • the identity verification module 1440 is configured to call the main recognition network to perform identity recognition on the additive feature vector to obtain an identity verification result.
  • the second adversarial generative network is obtained by training in the following manner: the attribute feature vectors of different attributes extracted from the training set by the first adversarial generative network are randomly combined, and additive adversarial training is performed on the resulting combined attribute feature vectors; the attribute combination corresponding to at least one combined attribute feature vector is an attribute combination that does not appear in the training set.
  • the second adversarial generative network includes: m additive space conversion networks and m recognition networks in one-to-one correspondence with the m attributes, j ∈ [m];
  • the second adversarial generative network is trained using the following steps: the attribute feature vectors corresponding to different attributes generated by the first adversarial generative network are randomly combined to produce n_r combined attribute feature vectors;
  • the n_r combined attribute feature vectors are divided into a first vector set and a second vector set; the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
  • the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label recognition corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors;
  • the first loss generated by the first vector set in the prediction process is backpropagated to the recognition network and the additive space conversion network corresponding to each attribute, and the second loss generated by the second vector set is backpropagated to the recognition networks and the additive space conversion networks corresponding to the other attributes.
  • Fig. 15 shows a block diagram of a training apparatus for the first adversarial generative network provided by an exemplary embodiment of the present application.
  • the apparatus can be implemented as all or part of the server through software, hardware, or a combination of the two.
  • the first adversarial generative network includes m generators G_1 to G_m; each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute; the attributes include identity and m-1 domains, i, j, j' ∈ [m];
  • the apparatus includes:
  • the first training module 1520 is used to fix all generators G_i and optimize all discriminators D_ij so that the output approximates the label y_j corresponding to the j-th attribute;
  • the second training module 1540 is used to fix all discriminators D_ij and optimize all generators G_i so that the output approximates (1 - y_j) corresponding to the j-th attribute;
  • the alternation module 1560 is configured to control the first training module 1520 and the second training module 1540 to perform the above two steps alternately until the training end conditions of the generators G_i and the discriminators D_ij are met; wherein, if the j'-th attribute and the j-th attribute have a causal relationship, the output loss of the discriminator D_jj' is not backpropagated.
  • Fig. 16 shows a block diagram of a training apparatus for the second adversarial generative network provided by an exemplary embodiment of the present application.
  • the apparatus can be implemented as all or part of the server through software, hardware, or a combination of the two.
  • the second adversarial generative network includes m additive space conversion networks and m recognition networks in one-to-one correspondence with m attributes; the attributes include identity and m-1 kinds of domain difference, j ∈ [m], and m is an integer greater than 2; the apparatus includes:
  • the random combination module 1620 is used to randomly combine the attribute feature vectors corresponding to different attributes extracted from the training set to generate n_r combined attribute feature vectors;
  • the set dividing module 1640 is configured to divide the n_r combined attribute feature vectors into a first vector set and a second vector set; the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
  • the forward training module 1660 is configured to use the first vector set and the second vector set to perform prediction with the additive space conversion networks and the recognition networks; the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label prediction corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors;
  • the error feedback module 1680 is configured to backpropagate the first loss generated by the first vector set in the prediction process to the recognition network and the additive space conversion network corresponding to each attribute;
  • the error feedback module 1680 is configured to backpropagate the second loss generated by the second vector set in the prediction process to the recognition networks and the additive space conversion networks corresponding to the other attributes.
  • it should be noted that, when the identity verification apparatus provided in the above embodiments verifies an identity, the division into the above functional modules is used only as an example for illustration; in practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • the identity verification apparatus provided in the foregoing embodiments belongs to the same concept as the method embodiments of the identity verification method. For the specific implementation process, please refer to the method embodiments, which will not be repeated here.
  • FIG. 17 shows a structural block diagram of a computer device 1700 according to an embodiment of the present application.
  • the computer device 1700 may be an electronic device such as a mobile phone, a tablet computer, a smart TV, a multimedia playback device, a wearable device, a desktop computer, and a server.
  • the computer device 1700 can be used to implement any one of the identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network provided in the foregoing embodiments.
  • the computer device 1700 includes a processor 1701 and a memory 1702.
  • the processor 1701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1701 can be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1701 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the wake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display.
  • the processor 1701 may further include an AI (Artificial Intelligence) processor, which is used to handle computing operations related to machine learning.
  • the memory 1702 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1702 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1702 is used to store at least one instruction, and the at least one instruction is executed by the processor 1701 to implement any one of the identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network provided in the method embodiments of the present application.
  • the computer device 1700 may optionally further include: a peripheral device interface 1703 and at least one peripheral device.
  • the processor 1701, the memory 1702, and the peripheral device interface 1703 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1703 through a bus, a signal line, or a circuit board.
  • the peripheral device may include: at least one of a display screen 1704, an audio circuit 1705, a communication interface 1706, and a power supply 1707.
  • a person skilled in the art can understand that the structure shown in FIG. 17 does not constitute a limitation on the computer device 1700, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • in an exemplary embodiment, a computer device is provided, including a processor and a memory, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the processor, the processor performs any one of the aforementioned identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network.
  • in an exemplary embodiment, a computer-readable storage medium is provided, storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the above identity verification method.
  • the aforementioned computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a computer-readable instruction product is also provided; when executed, the computer-readable instruction product is used to implement any one of the above identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Security & Cryptography (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Computer And Data Communications (AREA)
  • Machine Translation (AREA)

Abstract

An identity verification method, including: collecting original features of a user; calling an identity verification model to extract a main attribute feature vector from the original features, the main attribute feature vector being an unbiased feature representation obtained by selectively decoupling m-1 kinds of domain difference features in the original features, where m is an integer greater than 2; and performing unbiased identity verification according to the main attribute feature vector to obtain an identity verification result.

Description

Identity verification method and apparatus, computer device, and storage medium
This application claims priority to Chinese Patent Application No. 2019103360374, entitled "Identity verification method, training method for adversarial generative networks, apparatus and device", filed with the Chinese Patent Office on April 24, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence, and in particular to an identity verification method and apparatus, a computer device, and a storage medium.
Background
Identity verification technology refers to techniques for confirming a user's identity by certain means in a computer system. Common identity verification technologies include face recognition, fingerprint recognition, terminal posture recognition, and so on.
Taking face recognition as an example, a neural network model is deployed on a server. After a face image of the user to be verified is collected, the neural network model is called to verify the face image; when the verification succeeds, the identity of the user to be verified is determined; when the verification fails, an error notification is returned. The neural network model is trained in advance on a training set.
However, such a neural network model may mistakenly learn biased predictions. For example, when a user grows a beard, starts wearing glasses, or changes clothing with the season, verification by the neural network model may fail.
Summary
According to various embodiments provided in this application, an identity verification method and apparatus, a computer device, and a storage medium are provided. The technical solutions are as follows:
According to one aspect of this application, an identity verification method is provided, executed by a computer device, the method including:
collecting original features of a user, where m-1 kinds of domain difference features exist in the original features, m being an integer greater than 2;
extracting a main attribute feature vector from the original features, the main attribute feature vector being an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features, m being an integer greater than 2; and
performing unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result.
In one embodiment, an identity verification model is called to perform feature extraction on the original features to obtain the main attribute feature vector in the original features, where the identity verification model includes:
a first adversarial generative network; or, the first adversarial generative network and a second adversarial generative network;
where the first adversarial generative network is a network obtained by training with selective decoupling of the m-1 kinds of domain difference features based on causal relationships, and the second adversarial generative network is a network obtained by randomly combining the attribute feature vectors of different attributes extracted by the first adversarial generative network and then performing additive adversarial training, the attributes including identity and the m-1 kinds of domain difference.
According to one aspect of this application, a training method for a first adversarial generative network is provided; the first adversarial generative network includes m generators G_1 to G_m, each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute, the attributes including identity and m-1 kinds of domain difference, i, j, j' ∈ [m]; the method includes:
fixing all generators G_i and optimizing all discriminators D_ij so that the output approximates the label y_j corresponding to the j-th attribute;
fixing all discriminators D_ij and optimizing all generators G_i so that the output approximates (1 - y_j) corresponding to the j-th attribute;
where, if the a-th attribute and the b-th attribute have a causal relationship, the output loss of the discriminator D_ab is not backpropagated.
According to one aspect of this application, a training method for a second adversarial generative network is provided; the second adversarial generative network includes m additive space conversion networks and m recognition networks in one-to-one correspondence with m attributes, the attributes including identity and m-1 kinds of domain difference, j ∈ [m], m being an integer greater than 2; the method includes:
randomly combining attribute feature vectors corresponding to different attributes extracted from the training set to produce n_r combined attribute feature vectors;
dividing the n_r combined attribute feature vectors into a first vector set and a second vector set, where the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
performing prediction on the additive space conversion networks and the recognition networks using the first vector set and the second vector set, where the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label recognition corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors;
for a first loss generated by the first vector set in the prediction process, backpropagating the first loss to the recognition network and the additive space conversion network corresponding to each attribute; and
for a second loss generated by the second vector set in the prediction process, backpropagating the second loss to the recognition networks and the additive space conversion networks corresponding to the other attributes.
According to another aspect of this application, an identity verification apparatus is provided, the apparatus including:
a collection module, used to collect original features of a user, where m-1 kinds of domain difference features exist in the original features, m being an integer greater than 2;
an identity verification module, used to extract a main attribute feature vector from the original features, the main attribute feature vector being an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features, m being an integer greater than 2; and
the identity verification module being further used to perform unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result.
According to another aspect of this application, a computer device is provided, including a processor and a memory, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the processor, the processor performs the steps of the identity verification method.
According to another aspect of this application, a computer-readable storage medium is provided, storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the identity verification method.
The details of one or more embodiments of this application are set forth in the following drawings and description. Other features, objects, and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the drawings required for describing the embodiments. Obviously, the drawings in the following description show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an identity verification method provided in the related art;
Fig. 2 is a block diagram of an identity verification system provided by an exemplary embodiment of this application;
Fig. 3 is a flowchart of an identity verification method provided by an exemplary embodiment of this application;
Fig. 4 is a two-stage schematic diagram of the first adversarial generative network and the second adversarial generative network at work, provided by an exemplary embodiment of this application;
Fig. 5 is a network structure diagram of the first adversarial generative network and the second adversarial generative network provided by an exemplary embodiment of this application;
Fig. 6 is a flowchart of a training method for the first adversarial generative network provided by an exemplary embodiment of this application;
Fig. 7 is a schematic diagram of an interface of identity verification software provided by an exemplary embodiment of this application;
Fig. 8 is a schematic diagram of a network architecture for decoupled learning based on causal relationships, provided by an exemplary embodiment of this application;
Fig. 9 is a flowchart of a training method for the second adversarial generative network provided by an exemplary embodiment of this application;
Fig. 10 is a schematic diagram of the training principle of the second adversarial generative network provided by an exemplary embodiment of this application;
Fig. 11 is a flowchart of an identity verification method provided by an exemplary embodiment of this application;
Fig. 12 is a flowchart of an identity verification method provided by an exemplary embodiment of this application;
Fig. 13 is a flowchart of an identity verification method provided by an exemplary embodiment of this application;
Fig. 14 is a block diagram of an identity verification apparatus provided by an exemplary embodiment of this application;
Fig. 15 is a block diagram of a training apparatus for the first adversarial generative network provided by an exemplary embodiment of this application;
Fig. 16 is a block diagram of a training apparatus for the second adversarial generative network provided by an exemplary embodiment of this application;
Fig. 17 is a block diagram of a computer device provided by an exemplary embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes the implementations of this application in detail with reference to the accompanying drawings.
First, several terms used in the embodiments of this application are explained:
Identity verification technology: technology for confirming a user's identity by computer means. Common identity verification technologies include at least one of face recognition, fingerprint recognition, voiceprint recognition, iris recognition, terminal posture recognition, and person re-identification.
Identity verification model: a neural network model used for identity recognition.
Face recognition: technology for confirming a user's identity through the feature points on a face image. The feature points on a face image include, but are not limited to, at least one of eyebrow feature points, eye feature points, mouth feature points, nose feature points, ear feature points, and cheek feature points.
Terminal posture recognition: technology for confirming a user's identity based on the physical characteristics of the user's operations collected by the sensors inside a terminal (such as a mobile phone) when the user uses it, for example pressing force, pressing frequency, pressing position, body vibration frequency, body vibration period, and body displacement.
Domain: a factor that produces an overall distribution bias in a subset of samples of a training set. For face recognition, for example, black, blond, or white hair of different users can be regarded as one kind of domain difference; whether different users wear glasses can also be regarded as a domain difference; and whether different users have beards is likewise regarded as a domain difference.
Transfer learning: when there are domain differences in the data, building a learning system to handle the domain differences.
Negative transfer: a concept in transfer learning describing the phenomenon that accuracy on the test set drops because some transfer learning method was adopted on the training set.
Generative Adversarial Network (GAN): a generative model that has been widely studied in recent years, capable of capturing the distribution of real data.
Generator: an important component of a GAN, responsible for generating sufficiently realistic data.
Discriminator: the part of a GAN that plays against the generator, responsible for judging whether the data generated by the generator is close to real data.
In the process of identity verification with an identity verification model, the model may mistakenly learn a biased prediction because of grouping/clustering of users. For example, in face recognition verification, verification may fail when a user grows a beard or starts wearing glasses. In the field of person re-identification, verification may also fail when the season changes people's clothing or when images are captured by cameras at different angles.
The related art provides methods for eliminating the influence of domain differences on verification accuracy, including but not limited to Transfer Component Analysis (TCA), Deep Adaptation Network (DAN), Reversing Gradient (RevGrad), and Adversarial Discriminative Domain Adaptation (ADDA).
These methods eliminate the domain differences of the learned features while learning the main classification task (such as identity verification). Suppose domain differences of different mobile phone models exist in identity verification. As shown in Fig. 1, the identity verification model includes a generator 12, a task discriminator 14, and a bias discriminator 16. The generator 12 is used to extract a feature vector from the original features; the task discriminator 14 is used to perform identity recognition from the feature vector, such as user 1, user 2, and user 3; and the bias discriminator 16 is used to discriminate phone models from the feature vector, such as model 1, model 2, and model 3. That is, the original features are learned by the generator network 12, and the output feature vector is used both for identity recognition and for model discrimination. Through adversarial learning, the bias discriminator 16 eliminates the feature information about model discrimination in the feature vector output by the generator 12, and the task discriminator 14 is used for identity recognition of users.
Since many kinds of domain difference may affect the identity verification model, such as hair color, hairstyle, glasses, beard, earrings, and so on, when multiple domain differences exist and pairwise dependencies exist between them, two problems may arise in the above technical solution: 1. domain differences with dependencies may be forcibly decoupled, leading to negative transfer; 2. insufficient decoupling of the domain differences of irrelevant attributes may leave too many attribute dependencies in the learned features.
This application provides an unbiased identity verification solution that eliminates, as much as possible, the influence of multiple domain differences on identity verification, and is suitable for identity verification scenarios in which multiple kinds of domain difference exist.
Fig. 2 shows a block diagram of an identity verification system provided by an exemplary embodiment of this application. The identity verification system includes a terminal 120, a network 140, and a server 160.
The terminal 120 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a surveillance camera, or another device. The terminal 120 is a terminal with identity verification requirements and is used to collect the original features required for identity verification; the original features include at least one of face data, terminal sensor data, iris data, fingerprint data, and voiceprint data. In some embodiments, a user account can be logged in on the terminal 120, that is, the terminal 120 can be a private device; in other embodiments, the terminal 120 is a monitoring device of a surveillance nature.
The terminal 120 can be connected to the server 160 through the network 140. The network 140 can be a wired network or a wireless network. The terminal 120 can transmit authentication data to the server 160, and after the server 160 completes the identity verification, the identity verification result is returned to the terminal 120.
The server 160 is a backend server for identity verification. A neural network model for identity verification (hereinafter referred to as the identity verification model) is deployed in the server 160. The identity verification model can perform identity verification based on unbiased feature representations.
Fig. 3 shows a flowchart of an identity verification method provided by an exemplary embodiment of this application. The identity verification method includes a training phase 220 and a testing (and application) phase 240.
In the training phase 220, a training set for training the identity verification model is constructed. The training set includes, for each sample, an original feature 221, an identity label 222, and multiple domain difference labels 223. Optionally, each sample corresponds to one user, and the original feature 221 is the user feature data collected during identity verification. The identity label 222 identifies the user's identity, and the domain difference labels 223 identify the user's domain differences. Taking hair-color difference and beard difference as the domain differences as an example, Table 1 schematically shows two groups of samples.
Table 1
Original feature | Identity label | Domain difference label 1 | Domain difference label 2
Face image 1 | Hei Xianren | White hair | Bearded
Face image 2 | Li Xiaolong (Bruce Lee) | Black hair | No beard
Decoupled learning 224 is performed on the identity verification model using this training set. Decoupled learning 224 takes identity verification as the main learning task and the multiple domain differences as auxiliary learning tasks. For each sample, the identity and each kind of domain difference are regarded as attributes. For each attribute, adversarial learning is used to learn a decoupled representation of that attribute (that is, the feature vector of each attribute is extracted as independently as possible), so that the hidden space contains no classification information of the other attributes. The finally learned identity verification model 242 can thus ignore, as much as possible, the influence of the multiple domain differences on identity verification and output accurate identity verification results. A schema sketch of such a sample is given below.
In the testing (and application) phase 240, the original features 241 in the test set are input into the identity verification model 242 for unbiased identity verification, and the identity verification result (that is, the identity label 243) is output. When the test is passed, the identity verification model 242 is put into actual application.
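Purely as an editorial illustration of the training-set schema just described (the feature values and the integer label encodings below are hypothetical and not taken from this application), a sample can be sketched in Python as follows:

    # A minimal sketch of the training-set schema: one original feature,
    # one identity label, and m-1 domain difference labels per sample.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Sample:
        original_feature: List[float]  # e.g. flattened face-image features
        identity_label: int            # attribute 1: identity
        domain_labels: List[int]       # attributes 2..m: m-1 domain differences

    # Two samples mirroring Table 1 (hair color and beard as the two domains).
    samples = [
        Sample([0.12, 0.73], identity_label=0, domain_labels=[1, 1]),  # white hair, bearded
        Sample([0.48, 0.09], identity_label=1, domain_labels=[0, 0]),  # black hair, no beard
    ]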
Fig. 4 shows a structural block diagram of the identity verification model 242 provided by an exemplary embodiment of this application. The identity verification model 242 includes a first adversarial generative network 242a and a second adversarial generative network 242b.
The first adversarial generative network 242a is a network trained by selectively decoupling the m-1 kinds of domain difference features based on causal relationships, m being an integer greater than 2. The second adversarial generative network 242b is a network obtained by randomly combining the different attribute feature vectors output by the first adversarial generative network 242a and then performing additive adversarial training.
The first adversarial generative network 242a and the second adversarial generative network 242b implement two stages of decoupled learning.
In stage 1, the first adversarial generative network 242a learns decoupled feature representations based on the asymmetric causal relationships between attributes. That is, the first adversarial generative network 242a is trained as follows: when a first domain difference feature and a second domain difference feature with a causal relationship exist in the original features, decoupling learning with respect to the first domain difference feature is ignored during adversarial learning of the second domain difference feature.
Therefore, when the first adversarial generative network 242a decouples at least two causally related domain differences, it does not forcibly decouple them, so negative transfer does not occur, or occurs only with a very small probability.
In stage 2, attribute feature vectors of different attributes are first randomly combined to produce new combinations that do not appear among the samples, and the second adversarial generative network 242b then performs decoupling based on additive adversarial learning, achieving a further stage of decoupled learning. That is, the second adversarial generative network is trained as follows: the different attribute feature vectors extracted from the training set by the first adversarial generative network 242a are randomly combined to produce attribute combinations that do not appear in the training set, and additive adversarial training is then performed.
Therefore, by randomly producing sample combinations that have never appeared in the training set, the second adversarial generative network 242b can fully decouple the domain differences of irrelevant attributes, thereby solving the problem that insufficient decoupling of the domain differences of irrelevant attributes leaves too many attribute dependencies in the learned features.
It should be noted that the first adversarial generative network 242a can be implemented alone; that is, the second adversarial generative network 242b is an optional part.
First adversarial generative network 242a
Referring to Fig. 5, the first adversarial generative network 242a includes a base generator G_0, m generators (also called attribute feature learning networks) G_1 to G_m, and m×m discriminators D_11 to D_mm (D_11 to D_33 in the m = 3 example of Fig. 5).
The base generator G_0 is used to transform the original feature x to obtain the global attribute feature vector f_0.
Each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute; the attributes include identity and m-1 domains. The number of generators equals the number of attributes m, where m is an integer greater than 2 (Fig. 5 uses m = 3 as an example, but m is not limited to 3); that is, the attributes in Fig. 5 include identity and at least two domains.
Each of the generators G_1 to G_m is used to extract the discriminative information associated with its attribute, so as to learn an attribute feature vector decoupled from the other attributes. For j ∈ [m], the j-th generator is associated with the j-th attribute.
The adversarial learning method designed in this application requires that each attribute feature vector contain only the discriminative information associated with that attribute. This application considers a given matrix Λ ∈ R^{m×m} that encodes the pairwise causal relationships between attributes. Then, for each j ∈ [m], this application constructs m discriminator networks D_j1, ..., D_jm to handle the causal relationships between the j-th attribute and the m attributes. Each D_ii is used to learn the features of the i-th attribute, and each D_ij is used to eliminate the features of the j-th attribute during the adversarial learning of the i-th attribute.
The generator G_1 corresponding to identity can be called the main generator, and the other generators G_2 and G_3 each correspond to one domain. Each generator also corresponds to m discriminators, and the discriminator D_11 can be called the main discriminator.
The main generator G_1 is used to perform feature extraction on the global attribute feature vector f_0 to obtain the first main attribute feature vector f_1. When the first adversarial generative network 242a is used alone as the identity verification model, the main discriminator D_11 is used to perform identity verification on the first main attribute feature vector f_1 to obtain the identity verification result; or, when the first adversarial generative network 242a and the second adversarial generative network 242b are cascaded as the identity verification model, the main discriminator D_11 performs a first discrimination on the first main attribute feature vector f_1 and then outputs the combined attribute feature vector f'_1 to the second adversarial generative network 242b.
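As a non-limiting sketch of the layout just described (the feature dimension, hidden size, and class counts are assumptions of the sketch; as noted later in this description, each network can be an arbitrary neural network), the generator and discriminator grid could be constructed as follows:

    import torch.nn as nn

    m, d, hidden = 3, 128, 64            # assumed: 3 attributes, input feature dim 128
    k = [10, 3, 2]                       # assumed class counts k_j per attribute

    G0 = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())   # base generator G_0
    G = nn.ModuleList(                                    # G_1..G_m, one per attribute
        [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()) for _ in range(m)]
    )
    # D[i][j]: discriminator D_ij applied to G_i's output; D_ii learns attribute i,
    # while D_ij (i != j) is used to eliminate attribute j's information adversarially.
    D = nn.ModuleList(
        [nn.ModuleList([nn.Linear(hidden, k[t]) for t in range(m)]) for _ in range(m)]
    )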
Based on Fig. 5, the following parameters are defined:
[k]: the index set {1, 2, ..., k};
[-i]: the index set with the i-th element removed;
n: the number of samples;
m: the number of attributes;
d: the feature dimension;
Y ∈ R^{n×m}: the output/attribute/label matrix, containing n independent samples y_i, i ∈ [n];
X ∈ R^{n×d}: the input/feature matrix, containing n independent samples x_i, i ∈ [n].
This application allows Y to contain missing values; define Ω = {(i, j); i ∈ [n], j ∈ [m], y_ij is an observed label value} as the index set of the observed labels. The model is trained on the corresponding features and attribute labels.
This application assumes that the entries of Y are all categorical variables, that is, for each j ∈ [m], y_ij ∈ [k_j].
Without loss of generality, assume that the first column of Y is the identity label and the other columns are the various domain difference labels.
Training of the first adversarial generative network 242a
The training of the first adversarial generative network 242a is a typical adversarial learning procedure; the generators G_0 to G_m are all used for feature extraction. The discriminators D_11 to D_mm fall into two classes: for all i, j ∈ [m], i ≠ j,
(1) each discriminator D_ii is used to learn the features of the i-th attribute, and each discriminator D_ij is used to eliminate the features of the j-th attribute;
(2) each discriminator D_ii is learned with standard supervised learning, and each discriminator D_ij is learned with adversarial learning.
The adversarial learning procedure for the discriminators D_ij can be regarded as the following two alternating steps:
Step 601: fix all G_i, and optimize D_ij so that the output approximates the corresponding one-hot encoded label y_j.
Step 602: fix all D_ij, and optimize all G_i so that the output approximates the corresponding (1 - y_j).
Here, if the a-th attribute and the b-th attribute have a causal relationship, the output loss of the discriminator D_ab is not backpropagated, i, j, j' ∈ [m].
Step 603: perform the above two steps alternately until the training end conditions of the generators G_i and the discriminators D_ij are met.
Optionally, the training end conditions include: the loss function converges to a target value, or the number of training iterations reaches a preset number.
The final goal of adversarial learning in stage 1 is that every G_i can extract the features of its corresponding i-th attribute but cannot extract the features of the other attributes. In this way, the i-th attribute is decoupled from the other attributes.
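A minimal sketch of the alternating procedure of steps 601 to 603, assuming the modules G0, G, D and the class counts k of the previous sketch, a causality mask Lambda with Lambda[i][j] = 0 when the loss of D_ij must not be backpropagated, and mean-squared-error losses against (flipped) one-hot targets; the optimizer choice and exact loss forms are assumptions of the sketch, not details fixed by this application:

    import torch
    import torch.nn.functional as F

    opt_D = torch.optim.Adam([p for row in D for Dij in row for p in Dij.parameters()], lr=1e-4)
    opt_G = torch.optim.Adam(list(G0.parameters()) + [p for Gj in G for p in Gj.parameters()], lr=1e-4)

    def one_hot(y, num):
        return F.one_hot(y, num).float()

    def train_step(x, y, Lambda):          # y: (batch, m) integer labels
        f0 = G0(x)
        feats = [Gj(f0) for Gj in G]
        # Step 601: fix all G_i (detach), push each D_ij toward the one-hot label of attribute j.
        loss_D = 0.0
        for i in range(m):
            for j in range(m):
                if Lambda[i][j] == 0:      # causally related pair: loss not backpropagated
                    continue
                out = torch.softmax(D[i][j](feats[i].detach()), -1)
                loss_D = loss_D + F.mse_loss(out, one_hot(y[:, j], k[j]))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()
        # Step 602: fix all D_ij, push G_i toward (1 - y_j) on off-diagonal pairs
        # while keeping the supervised target on the diagonal (attribute learning).
        loss_G = 0.0
        for i in range(m):
            for j in range(m):
                if Lambda[i][j] == 0:
                    continue
                out = torch.softmax(D[i][j](feats[i]), -1)
                target = one_hot(y[:, j], k[j]) if i == j else 1.0 - one_hot(y[:, j], k[j])
                loss_G = loss_G + F.mse_loss(out, target)
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()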
The optimization problems of the adversarial learning of the first adversarial generative network 242a are as follows.
First is the optimization problem of attribute learning, that is, the loss function of the generators G_i:
[formula image PCTCN2020078777-appb-000001]
where [formula image PCTCN2020078777-appb-000002] is the loss function of attribute learning, w_j is the weight of the j-th attribute, G_0 is the base generator, G_j is the generator corresponding to the j-th attribute, D_jj is the (j, j)-th discriminator, and j ∈ [m].
Next is the discriminative learning of domain differences, that is, the loss function of the discriminators:
[formula image PCTCN2020078777-appb-000003]
where [formula image PCTCN2020078777-appb-000004] is the one-hot encoding vector of y_ij', [formula image PCTCN2020078777-appb-000005] is the loss function of adversarial learning, [formula image PCTCN2020078777-appb-000006] is the weight of the attribute pair (j, j'), j, j' ∈ [m+1], and x_i is an original feature in the training set.
The third step is eliminating the domain differences:
[formula image PCTCN2020078777-appb-000007]
where [formula image PCTCN2020078777-appb-000008], and 1_{k_j'} is the all-ones vector of dimension k_j'.
In the third step, this application also strengthens attribute learning at the same time:
[formula image PCTCN2020078777-appb-000009]
According to the strategy of asymmetric causal relationships adopted earlier in this application, when a change of attribute j' causes a change of attribute j, this application sets Λ_jj' = 0, and otherwise sets Λ_jj' = 1. In other words, if the j'-th attribute and the j-th attribute have a causal relationship, the output loss of the discriminator D_jj' is not backpropagated, i, j, j' ∈ [m].
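The matrix Λ can be derived mechanically from a list of known causal pairs. In the following sketch the attribute names and the single causal pair are hypothetical, chosen to mirror the Fig. 8 example:

    import numpy as np

    attributes = ["identity", "system", "thickness"]   # hypothetical attribute order
    causes = [("thickness", "system")]                 # assumed: a thickness change causes a system change

    m = len(attributes)
    Lambda = np.ones((m, m), dtype=int)
    for cause, effect in causes:
        j_prime = attributes.index(cause)
        j = attributes.index(effect)
        Lambda[j][j_prime] = 0   # attribute j' causes attribute j: D_jj' loss is not backpropagated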
The activation function of the last layer of the discriminator networks is softmax, [formula image PCTCN2020078777-appb-000010] is the cross-entropy loss, and [formula image PCTCN2020078777-appb-000011] is the mean squared error loss. The above four optimization problems are performed cyclically in turn; in each cycle, the first two optimization problems are optimized for 1 step and the last two optimization problems for 5 steps.
In the illustrative example shown in Fig. 7, terminal sensor data is used for identity verification. Taking a smartphone as the terminal, the smartphone is equipped with a gravity acceleration sensor and a gyroscope sensor. When the user taps the password "171718" on the screen, the gravity acceleration sensor and the gyroscope sensor collect the user's operation characteristics and produce sensor data that can be used to verify the user's identity. However, every terminal's operating system and body thickness are different: different operating systems report sensor data in different data formats, and different body thicknesses also affect the sensor data collected by the sensors. Therefore, in the example of Fig. 8, assume the first adversarial generative network 242a includes the base generator G_0, the main generator G_1, and the auxiliary generators G_2 and G_3. In this example, the network corresponding to the main generator G_1 performs supervised learning on identity recognition and adversarial learning on system discrimination and thickness discrimination, so that the features extracted by the main generator G_1 contain only identity information and no features for system or thickness discrimination. Similarly, the features extracted by the auxiliary generator G_2 contain only features for system discrimination, and not for identity recognition or thickness discrimination. The auxiliary generator G_3, likewise, contains only features for thickness discrimination.
This application exploits the pairwise causal relationships between attributes. Specifically, for each attribute, this application selects a subset of all the other attributes to decouple. The selection is based on the causal relationship between each attribute and the others: if another attribute is not a cause of changes in this attribute, then it can be decoupled from this attribute. This technique gives the method of this application the flexibility to select attributes, avoiding both the negative transfer caused by forcibly decoupling all other attributes (especially causally related ones) and the attribute dependencies caused by decoupling too few attributes. Taking Fig. 8 as an example, if a change in thickness causes a change in system, then the attribute of system discrimination cannot be decoupled from thickness discrimination. For thickness discrimination, however, a system change is not a cause of a thickness change, so the attribute of thickness discrimination can be decoupled from the attribute of system discrimination. The structure then becomes the one shown in Fig. 8: the adversarial objective for thickness discrimination is removed from the network of the auxiliary generator G_2, while the network of the auxiliary generator G_3 does not remove the adversarial objective for system discrimination.
The reason this application exploits the above asymmetric causal relationship is as follows. Taking Fig. 8 as an example, if a thickness change necessarily causes a system change, then whenever the system change can be recognized from the features in the auxiliary generator G_2, it is impossible for the thickness change not to be recognized: a thickness change would necessarily cause a recognizable system change, so the thickness change would ultimately be recognized. The converse relationship, however, does not hold.
Second adversarial generative network 242b
As shown in Fig. 5, the second adversarial generative network 242b includes m additive space conversion networks T_1 to T_m and m recognition networks R_1 to R_m.
The combined attribute feature vectors produced by the first adversarial generative network 242a are converted by the m additive space conversion networks T_1 to T_m into m additive feature vectors s_1, ..., s_m. The m additive feature vectors are added to form a sum feature vector u, which is then fed into the m recognition networks R_1, ..., R_m for recognition, corresponding to the m attributes respectively.
Among the m additive space conversion networks, the additive space conversion network T_1 corresponding to identity recognition can also be called the main additive space conversion network, and among the m recognition networks, the recognition network R_1 corresponding to identity recognition can also be called the main recognition network.
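The stage-2 forward pass just described amounts to a per-branch transformation, a sum, and m recognition heads. A sketch under the same assumed dimensions as the earlier sketches (T_j and R_j can be arbitrary networks):

    import torch
    import torch.nn as nn

    T = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(m)])   # T_1..T_m
    R = nn.ModuleList([nn.Linear(hidden, k[j]) for j in range(m)])     # R_1..R_m

    def stage2_forward(f):                       # f: list of m combined attribute feature vectors
        s = [T[j](f[j]) for j in range(m)]       # additive feature vectors s_1..s_m
        u = torch.stack(s, dim=0).sum(dim=0)     # sum feature vector u
        return [R[j](u) for j in range(m)]       # one prediction per attribute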
Training of the second adversarial generative network 242b
Fig. 9 shows a flowchart of a training method for the second adversarial generative network 242b provided by an exemplary embodiment of this application. The method includes:
Step 901: randomly combine the attribute feature vectors corresponding to different attributes generated by the first adversarial generative network to produce n_r combined attribute feature vectors.
Step 902: divide the n_r combined attribute feature vectors into a first vector set and a second vector set, where the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set.
The attribute feature vectors corresponding to different attributes generated by the first adversarial generative network 242a are randomly combined to produce n_r combined attribute feature vectors; each combined attribute feature vector corresponds to an attribute combination and is assigned to one of two subsets according to that combination: attribute combinations that appear in the training set, and attribute combinations that do not. Define the following two index sets Ω_s and Ω_u:
Ω_s = {i ∈ [n_r]}: indices of attribute combinations seen in the training set;
Ω_u = {i ∈ [n_r]}: indices of attribute combinations not seen in the training set.
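Steps 901 and 902 can be sketched as follows, assuming per-attribute pools of attribute feature vectors keyed by their label and a set of label tuples observed in the training set; all names here are illustrative:

    import random

    def random_combinations(pools, labels_per_attr, seen_tuples, n_r):
        """pools[j][label] -> list of feature vectors of attribute j carrying that label.
        Returns the combined feature lists plus the index sets Omega_s / Omega_u."""
        combos, omega_s, omega_u = [], [], []
        for i in range(n_r):
            chosen = [random.choice(labels_per_attr[j]) for j in range(len(pools))]
            f = [random.choice(pools[j][chosen[j]]) for j in range(len(pools))]
            combos.append((f, chosen))
            (omega_s if tuple(chosen) in seen_tuples else omega_u).append(i)
        return combos, omega_s, omega_u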
Step 903: perform prediction on the additive space conversion networks and the recognition networks using the first vector set and the second vector set.
The j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label recognition corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors.
Step 904: for the first loss generated by the first vector set in the training process, backpropagate the first loss to the recognition network and the additive space conversion network corresponding to each attribute for training.
For each j ∈ [m], the following optimization problem is solved:
[formula image PCTCN2020078777-appb-000012]
[formula image PCTCN2020078777-appb-000013]
where [formula image PCTCN2020078777-appb-000014] is the recognition loss function and w'_j is the weight of attribute j; R_j is the recognition network corresponding to the j-th attribute, T_j is the additive space conversion network corresponding to the j-th attribute, T_j' is the additive space conversion network corresponding to the j'-th attribute, f_ij' is the hidden-layer feature vector of the j'-th attribute of the i-th sample, and the symbol "~" denotes random combination. "s.t." is the abbreviation of "subject to" and indicates that u_i satisfies the constraint.
Step 905: for the second loss generated by the second vector set in the training process, backpropagate the second loss to the recognition networks and the additive space conversion networks corresponding to the other attributes for training.
For each j ∈ [m], the following optimization problem is solved:
[formula image PCTCN2020078777-appb-000015]
[formula image PCTCN2020078777-appb-000016]
where [formula image PCTCN2020078777-appb-000017] is the recognition loss function and w'_j is the weight of attribute j; R_j is the recognition network corresponding to the j-th attribute, T_j is the additive space conversion network corresponding to the j-th attribute, T_j' is the additive space conversion network corresponding to the j'-th attribute, f_ij' is the hidden-layer feature vector of the j'-th attribute of the i-th sample, and the symbol "~" denotes random combination. "s.t." is the abbreviation of "subject to" and indicates that u_i satisfies the constraint.
The last-layer activation function of all recognition networks (R networks) is also the softmax function, and [formula image PCTCN2020078777-appb-000018] is the cross-entropy loss function.
The optimization mechanism of the additive adversarial network is shown in Fig. 10. Suppose the first two attributes are object category and color category; the first two branches of the additive adversarial network correspond to learning these two attributes. First, suppose training on the seen attribute combinations is complete; for example, a white mountain can be accurately recognized as the object "mountain" and the color "white". Then, for an unseen attribute combination, a white mountain and a green tree, this application wants the network to output the object "mountain" and the color "green". Under the assumption that the seen combinations have been fully trained, if the output color is now not "green", there is reason to believe that the error comes from the "white" information in the first branch of the network. This application therefore backpropagates the color error produced at the output of the second branch to the first branch to eliminate the color information in it. In this way, the domain difference produced by the color information in the first branch is eliminated.
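One possible realization of this selective error feedback (steps 904 and 905) is to detach the additive vector of a branch from the computation graph of its own prediction for unseen combinations, so that the resulting gradient reaches only the other branches. The detach-based routing below is one way to implement the described behavior, not the only one; it reuses T, R, m and F from the earlier sketches:

    def stage2_losses(f, y, seen):
        # f: list of m combined attribute feature vectors, each (batch, hidden)
        # y: (batch, m) integer labels; seen: True for Omega_s, False for Omega_u
        s = [T[j](f[j]) for j in range(m)]
        loss = 0.0
        for j in range(m):
            if seen:
                u = torch.stack(s, 0).sum(0)   # first loss: gradient reaches every T and R
            else:
                # second loss: detach branch j's own additive vector, so the error of
                # attribute j's prediction only updates the other branches
                u = torch.stack([s[t] if t != j else s[t].detach() for t in range(m)], 0).sum(0)
            loss = loss + F.cross_entropy(R[j](u), y[:, j])
        return loss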
It should be noted that in the above training set, each user group corresponds to only one domain, for example one device type. User groups are divided by domain differences. A model trained on one domain is tested on another domain, and each user group considers only one kind of domain difference, as shown in Table 2. In practical applications, multiple kinds of domain difference may occur; for face verification, for example, differences in glasses, hairstyle, and beard all count as domain differences.
Table 2
 | User group 1 | User group 2 | User group 3
Domain 1 | Training | Testing | Testing
Domain 2 | Testing | Testing | Training
Domain 3 | Testing | Training | Testing
As an example of this application, the base generator G_0, the m generators (also called attribute feature learning networks) G_1 to G_m, and the m additive space conversion networks T_1 to T_m in the above embodiments can be arbitrary neural networks.
As an example of this application, the last-layer activation function of the discriminators (the m×m discriminators D_11 to D_33) and of the m recognition networks R_1 to R_m in the above embodiments can be any one of the softmax function, the sigmoid function, the tanh function, a linear function, the swish activation function, and the relu activation function.
As an example of this application, the loss functions (including [formula images PCTCN2020078777-appb-000019 and PCTCN2020078777-appb-000020] in stage 1 and [formula image PCTCN2020078777-appb-000021] in stage 2) can each be the cross-entropy loss, the logistic loss, the mean square loss, the square loss, the l2-norm loss, or the l1-norm loss.
As an example of this application, in the above embodiments [formula image PCTCN2020078777-appb-000022], where [formula image PCTCN2020078777-appb-000023] is the all-ones vector of dimension k_j'. Here [formula image PCTCN2020078777-appb-000024] can also be replaced by four other kinds of vectors of dimension k_j':
(1) the all-zeros vector;
(2) the all-ones vector;
(3) the all-0.5 vector;
(4) for r ∈ [k_j'], the r-th dimension takes [formula image PCTCN2020078777-appb-000025], where I(·) is the indicator function, that is, values are taken according to the prior probabilities of the labels on the training set.
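The four alternative target vectors can be written down directly. In the sketch below, k_jp stands for the dimension k_j', and the label column used for the prior is hypothetical:

    import numpy as np

    def adversarial_targets(k_jp, train_labels_j=None):
        targets = {
            "all_zeros": np.zeros(k_jp),
            "all_ones": np.ones(k_jp),
            "all_half": np.full(k_jp, 0.5),
        }
        if train_labels_j is not None:
            # option (4): the r-th dimension is the prior probability of label r
            prior = np.bincount(train_labels_j, minlength=k_jp) / len(train_labels_j)
            targets["label_prior"] = prior
        return targets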
Identity verification phase
Fig. 11 shows a flowchart of an identity verification method provided by an exemplary embodiment of this application. The method can be executed by the server shown in Fig. 2. The method includes:
Step 1101: collect original features of a user, where m-1 kinds of domain difference features exist in the original features.
A domain is a factor that produces an overall distribution bias in a subset of samples of a training set. Domains include, but are not limited to, at least two of hair color, beard, glasses, phone model, operating system, body thickness, and application type. m is an integer greater than 2.
Step 1102: extract the main attribute feature vector from the original features; the main attribute feature vector is an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features.
Optionally, the server calls the identity verification model to extract the main attribute feature vector from the original features. The identity verification model includes:
the first adversarial generative network; or, the first adversarial generative network and the second adversarial generative network;
where the first adversarial generative network is a network obtained by training with selective decoupling of the m-1 kinds of domain difference features based on causal relationships, and the second adversarial generative network is a network obtained by randomly combining the attribute feature vectors of different attributes extracted by the first adversarial generative network and then performing additive adversarial training.
Step 1103: perform identity verification according to the main attribute feature vector to obtain an identity verification result.
Optionally, the server calls the identity verification model to perform identity verification according to the main attribute feature vector to obtain the identity verification result.
Step 1104: perform a target operation according to the identity verification result.
The target operation can be a sensitive operation related to identity verification. Target operations include, but are not limited to, unlocking the lock screen, unlocking a private space, authorizing a payment, authorizing a transfer, authorizing a decryption, and so on.
The embodiments of this application do not limit the specific form of the "target operation".
In summary, the method provided in this embodiment extracts the main attribute feature vector from the original features through the identity verification model and performs identity verification according to the main attribute feature vector to obtain the identity verification result. Since the main attribute feature vector is an unbiased feature representation that selectively decouples the multiple domain difference features in the original features, the influence of the multiple domain difference features on the identity verification process is eliminated as much as possible, and identity verification can be performed accurately even when domain differences exist in the original features (for example, a new beard or a new hairstyle).
It should be noted that, in the identity verification phase, for the first adversarial generative network, only the base generator, the main generator, and the main discriminator in the first adversarial generative network are needed; for the second adversarial generative network, only the main additive space conversion network and the main recognition network are needed.
Taking the case where the first adversarial generative network alone serves as the identity verification model as an example, the corresponding identity verification method is described in the following embodiment. The first adversarial generative network includes the base generator, the main generator, and the main discriminator.
Fig. 12 shows a flowchart of an identity verification method provided by another exemplary embodiment of this application. The method can be executed by the server shown in Fig. 2. The method includes:
Step 1201: collect original features of a user, where m-1 kinds of domain difference features exist in the original features and m is an integer greater than 2.
Step 1202: call the base generator to transform the original features into the global attribute feature vector.
The base generator G_0 is used to transform the original feature x into the global attribute feature vector f_0, as shown in Fig. 5. The global attribute feature vector f_0 mixes the features of the identity attribute with the m-1 domain difference features.
Step 1203: call the main generator to perform feature extraction on the global attribute feature vector to obtain the first main attribute feature vector.
The main generator G_1 is used to perform feature extraction on the global attribute feature vector f_0 to obtain the first main attribute feature vector f_1; the first main attribute feature vector f_1 is the feature vector corresponding to the identity attribute (decoupled from the m-1 kinds of domain difference features) and is an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features.
Step 1204: call the main discriminator to perform identity verification on the first main attribute feature vector to obtain the identity verification result.
The main discriminator D_11 is used to perform identity label prediction on the first main attribute feature vector and output the corresponding identity label. The identity label is either: belongs to identity label i, or does not belong to any existing identity label.
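The decision "belongs to identity label i, or belongs to no existing identity label" can be sketched as a thresholded prediction over the discriminator output; the threshold value below is an assumption of the sketch and is not specified by this application:

    import torch

    def predict_identity(logits, threshold=0.5):
        # logits: (num_identities,) raw scores from the main discriminator D_11
        probs = torch.softmax(logits, dim=-1)
        conf, label = probs.max(dim=-1)
        return int(label) if float(conf) >= threshold else None  # None: no existing identity label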
Step 1205: perform a target operation according to the identity verification result.
The target operation can be a sensitive operation related to identity verification. Target operations include, but are not limited to, unlocking the lock screen, unlocking a private space, authorizing a payment, authorizing a transfer, authorizing a decryption, and so on.
The embodiments of this application do not limit the specific form of the "target operation".
In summary, the method provided in this embodiment performs unbiased identity verification through the first adversarial generative network. When the first adversarial generative network decouples at least two causally related domain differences, it does not forcibly decouple them, so negative transfer does not occur, or occurs only with a very small probability; at least two causally related domain differences can be well decoupled, and a better unbiased identity verification result is obtained.
Taking the cascade of the first adversarial generative network and the second adversarial generative network as the identity verification model as an example, the corresponding identity verification method is described in the following embodiment. The first adversarial generative network includes the base generator, the main generator, and the main discriminator; the second adversarial generative network includes the main additive space conversion network and the main recognition network.
Fig. 13 shows a flowchart of an identity verification method provided by another exemplary embodiment of this application. The method can be executed by the server shown in Fig. 2, and includes:
Step 1301: collect original features of a user, where m-1 kinds of domain difference features exist in the original features and m is an integer greater than 2.
Step 1302: call the base generator in the first adversarial generative network to transform the original features into the global attribute feature vector.
The base generator G_0 is used to transform the original feature x into the global attribute feature vector f_0, as shown in Fig. 5. The global attribute feature vector f_0 mixes the features of the identity attribute with the m-1 domain difference features.
Step 1303: call the main generator in the first adversarial generative network to perform feature extraction on the global attribute feature vector to obtain the first main attribute feature vector.
The main generator G_1 is used to perform feature extraction on the global attribute feature vector f_0 to obtain the first main attribute feature vector f_1; the first main attribute feature vector f_1 is the feature vector corresponding to the identity attribute (decoupled from the m-1 kinds of domain difference features) and is an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features.
Step 1304: call the main discriminator in the first adversarial generative network to perform a first discrimination on the first main attribute feature vector, and then output the combined attribute feature vector to the second adversarial generative network.
The main discriminator D_11 is used to perform the first discrimination on the first main attribute feature vector f_1 and then output the combined attribute feature vector f'_1 to the second adversarial generative network.
Step 1305: call the main additive space conversion network in the second adversarial generative network to convert the combined attribute feature vector output by the first adversarial generative network to obtain the additive feature vector.
The main additive space conversion network T_1 converts the combined attribute feature vector f'_1 output by the first adversarial generative network to obtain the additive feature vector s_1.
Step 1306: call the main recognition network in the second adversarial generative network to perform identity recognition on the additive feature vector to obtain the identity verification result.
The main recognition network R_1 performs identity label prediction on the additive feature vector s_1 and outputs the corresponding identity label. The identity label is either: belongs to identity label i, or does not belong to any existing identity label.
It should be noted that, unlike Fig. 5, the prediction phase does not require the random combination process or the process of summing multiple additive feature vectors.
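Putting the steps of Fig. 13 together, the prediction path of the cascaded model composes the modules sketched earlier; main_discriminator_features below is a hypothetical helper standing in for the first discrimination performed by D_11, whose exact output form is not detailed here:

    # A sketch of the Fig. 13 prediction path, reusing G0, G, T, R and
    # predict_identity from the earlier sketches.
    def verify(x):
        f0 = G0(x)                                   # step 1302: base generator
        f1 = G[0](f0)                                # step 1303: main generator G_1
        f1_comb = main_discriminator_features(f1)    # step 1304: first discrimination by D_11 (assumed helper)
        s1 = T[0](f1_comb)                           # step 1305: main additive space conversion network T_1
        logits = R[0](s1)                            # step 1306: main recognition network R_1
        return predict_identity(logits[0])           # identity label, or None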
Step 1307: perform a target operation according to the identity verification result.
The target operation can be a sensitive operation related to identity verification. Target operations include, but are not limited to, unlocking the lock screen, unlocking a private space, authorizing a payment, authorizing a transfer, authorizing a decryption, and so on.
The embodiments of this application do not limit the specific form of the "target operation".
In summary, the method provided in this embodiment performs unbiased identity verification through the first adversarial generative network. When the first adversarial generative network decouples at least two causally related domain differences, it does not forcibly decouple them, so negative transfer does not occur, or occurs only with a very small probability; at least two causally related domain differences can be well decoupled, and a better unbiased identity verification result is obtained.
The method provided in this embodiment also performs unbiased identity verification by cascading the second adversarial generative network after the first adversarial generative network. Since the second adversarial generative network fully decouples the domain differences of irrelevant attributes, it solves the problem that insufficient decoupling of the domain differences of irrelevant attributes leaves too many attribute dependencies in the learned features. Even when implicit relationships exist among multiple domain differences, the multiple domain differences can still be well decoupled, improving decoupling performance and yielding a better unbiased identity verification result.
The identity verification method provided in this application can be applied to the following scenarios:
1. Identity verification scenarios based on face recognition.
When face recognition technology is used for identity verification, the terminal collects the user's face image for identity recognition. The same user may choose to grow a beard or not, wear long or short hair, and wear glasses or not, so that domain difference features exist among different face images of the same user. These domain difference features all affect whether the identity verification result is correct. To eliminate their influence on the identity verification process, the identity verification method in the above embodiments can be used, so that an accurate identity verification result can still be obtained when domain difference features exist.
2. Identity verification scenarios based on sensor data.
When sensor data is used for identity verification, the terminal is equipped with an acceleration sensor and/or a gyroscope sensor, which collect the user's behavioral characteristics when using the terminal. The behavioral characteristics include the force with which the user taps the terminal, the frequency of the taps, and the pause rhythm of consecutive taps. Sensor data reported by different sensors have different formats, different operating systems have different format requirements for sensor data, and terminals of different shapes and thicknesses (equipped with the same sensors) collect different behavioral characteristics. Since users nowadays may replace their terminal (such as a mobile phone) once a year, domain difference features exist when the same user account is verified on different terminals. These domain difference features all affect whether the identity verification result is correct. To eliminate their influence on the identity verification process, the identity verification method in the above embodiments can be used, so that an accurate identity verification result can still be obtained when domain difference features exist.
3. Identity verification scenarios based on fingerprint data.
When fingerprint data is used for identity verification, the terminal is equipped with a fingerprint sensor, which collects the user's fingerprint characteristics when using the terminal. Since the fingerprint data reported by different fingerprint sensors have different formats, domain difference features exist when the same user account is verified on different terminals after the user changes terminals. These domain difference features all affect whether the identity verification result is correct. To eliminate their influence on the identity verification process, the identity verification method in the above embodiments can be used, so that an accurate identity verification result can still be obtained when domain difference features exist.
4. Identity verification scenarios based on iris recognition.
When iris recognition technology is used for identity verification, the terminal collects the user's iris image for identity recognition. The same user may or may not wear contact lenses, and different contact lenses may have different patterns; the domain differences caused by such contact lenses affect whether the identity verification result is correct. To eliminate these domain difference features' influence on the identity verification process, the identity verification method in the above embodiments can be used, so that an accurate identity verification result can still be obtained when domain difference features exist.
The following are apparatus embodiments of this application; for details not described in the apparatus embodiments, refer to the corresponding method embodiments above.
Fig. 14 shows a block diagram of an identity verification apparatus provided by an exemplary embodiment of this application. The apparatus can be implemented as all or part of a server in software, hardware, or a combination of the two. The apparatus includes:
a collection module 1420, configured to collect original features of a user, where m-1 kinds of domain difference features exist in the original features;
an identity verification module 1440, configured to extract a main attribute feature vector from the original features, the main attribute feature vector being an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features, m being an integer greater than 2;
the identity verification module 1440 is further configured to perform unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result;
an operation module 1480, configured to perform a target operation according to the identity verification result.
In a possible implementation, the identity verification module 1440 is configured to call the identity verification model to perform feature extraction on the original features to obtain the main attribute feature vector in the original features, where the identity verification model includes: the first adversarial generative network; or, the first adversarial generative network and the second adversarial generative network.
In a possible implementation, the first adversarial generative network includes a base generator, a main generator, and a main discriminator;
the identity verification module 1440 is configured to call the base generator to transform the original features into a global attribute feature vector;
the identity verification module 1440 is configured to call the main generator to perform feature extraction on the global attribute feature vector to obtain a first main attribute feature vector;
the identity verification module 1440 is configured to call the main discriminator to perform identity verification on the first main attribute feature vector to obtain an identity verification result, or to call the main discriminator to perform a first discrimination on the first main attribute feature vector and then output a combined attribute feature vector to the second adversarial generative network.
In a possible implementation, the first adversarial generative network is obtained by training in the following manner:
when a first domain difference feature and a second domain difference feature with a causal relationship exist in the original features, decoupling learning with respect to the first domain difference feature is ignored during adversarial learning of the second domain difference feature.
In a possible implementation, the first adversarial generative network includes m generators G_1 to G_m; each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute; the generator G_1 corresponding to identity is the main generator, and the discriminator D_11 corresponding to the generator G_1 is the main discriminator, i, j, j' ∈ [m];
the first adversarial generative network is trained in the following manner:
fixing all generators G_i and optimizing all discriminators D_ij so that the output approximates the label y_j corresponding to the j-th attribute;
fixing all discriminators D_ij and optimizing all generators G_i so that the output approximates (1 - y_j) corresponding to the j-th attribute;
where, if the j'-th attribute and the j-th attribute have a causal relationship, the output loss of the discriminator D_jj' is not backpropagated, i, j, j' ∈ [m].
In a possible implementation, the second adversarial generative network includes a main additive space conversion network and a main recognition network;
the identity verification module 1440 is configured to call the main additive space conversion network to convert the combined attribute feature vector output by the first adversarial generative network to obtain an additive feature vector;
the identity verification module 1440 is configured to call the main recognition network to perform identity recognition on the additive feature vector to obtain an identity verification result.
In a possible implementation, the second adversarial generative network is obtained by training in the following manner:
randomly combining the different attribute feature vectors extracted from the training set by the first adversarial generative network; and
performing additive adversarial training on the combined attribute feature vectors obtained by the random combination;
where the attribute combination corresponding to at least one of the combined attribute feature vectors is an attribute combination that does not appear in the training set.
In a possible implementation, the second adversarial generative network includes m additive space conversion networks and m recognition networks in one-to-one correspondence with the m attributes, j ∈ [m];
the second adversarial generative network is trained using the following steps:
randomly combining the attribute feature vectors corresponding to different attributes generated by the first adversarial generative network to produce n_r combined attribute feature vectors;
dividing the n_r combined attribute feature vectors into a first vector set and a second vector set, where the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
performing prediction on the additive space conversion networks and the recognition networks using the first vector set and the second vector set, where the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label recognition corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors;
for the first loss generated by the first vector set in the prediction process, backpropagating the first loss to the recognition network and the additive space conversion network corresponding to each attribute; and
for the second loss generated by the second vector set in the prediction process, backpropagating the second loss to the recognition networks and the additive space conversion networks corresponding to the other attributes.
Fig. 15 shows a block diagram of a training apparatus for the first adversarial generative network provided by an exemplary embodiment of this application. The apparatus can be implemented as all or part of a server in software, hardware, or a combination of the two. The first adversarial generative network includes m generators G_1 to G_m; each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute, the attributes including identity and m-1 domains, i, j, j' ∈ [m]. The apparatus includes:
a first training module 1520, configured to fix all generators G_i and optimize all discriminators D_ij so that the output approximates the label y_j corresponding to the j-th attribute;
a second training module 1540, configured to fix all discriminators D_ij and optimize all generators G_i so that the output approximates (1 - y_j) corresponding to the j-th attribute;
an alternation module 1560, configured to control the first training module 1520 and the second training module 1540 to perform the above two steps alternately until the training end conditions of the generators G_i and the discriminators D_ij are met;
where, if the j'-th attribute and the j-th attribute have a causal relationship, the output loss of the discriminator D_jj' is not backpropagated, i, j, j' ∈ [m].
Fig. 16 shows a block diagram of a training apparatus for the second adversarial generative network provided by an exemplary embodiment of this application. The apparatus can be implemented as all or part of a server in software, hardware, or a combination of the two. The second adversarial generative network includes m additive space conversion networks and m recognition networks in one-to-one correspondence with m attributes, the attributes including identity and m-1 kinds of domain difference, j ∈ [m], m being an integer greater than 2. The apparatus includes:
a random combination module 1620, configured to randomly combine the attribute feature vectors corresponding to different attributes extracted from the training set to produce n_r combined attribute feature vectors;
a set dividing module 1640, configured to divide the n_r combined attribute feature vectors into a first vector set and a second vector set, where the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
a forward training module 1660, configured to perform prediction on the additive space conversion networks and the recognition networks using the first vector set and the second vector set, where the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label prediction corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors;
an error feedback module 1680, configured to backpropagate the first loss generated by the first vector set in the prediction process to the recognition network and the additive space conversion network corresponding to each attribute;
the error feedback module 1680 is further configured to backpropagate the second loss generated by the second vector set in the prediction process to the recognition networks and the additive space conversion networks corresponding to the other attributes.
It should be noted that when the identity verification apparatus provided in the above embodiments verifies an identity, the division into the above functional modules is used only as an example for illustration; in practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the identity verification apparatus provided in the above embodiments belongs to the same concept as the method embodiments of the identity verification method; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
Fig. 17 shows a structural block diagram of a computer device 1700 provided by an embodiment of this application. The computer device 1700 can be an electronic device such as a mobile phone, a tablet computer, a smart TV, a multimedia playback device, a wearable device, a desktop computer, or a server. The computer device 1700 can be used to implement any one of the identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network provided in the above embodiments.
Generally, the computer device 1700 includes a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 can be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the wake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1702 is used to store at least one instruction, and the at least one instruction is executed by the processor 1701 to implement any one of the identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network provided in the method embodiments of this application.
In some embodiments, the computer device 1700 optionally further includes a peripheral device interface 1703 and at least one peripheral device. The processor 1701, the memory 1702, and the peripheral device interface 1703 can be connected by a bus or signal lines. Each peripheral device can be connected to the peripheral device interface 1703 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices may include at least one of a display screen 1704, an audio circuit 1705, a communication interface 1706, and a power supply 1707.
A person skilled in the art can understand that the structure shown in Fig. 17 does not constitute a limitation on the computer device 1700, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer device is also provided, including a processor and a memory, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the processor, the processor performs any one of the above identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network.
In an exemplary embodiment, a computer-readable storage medium is also provided, storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the above identity verification method. Optionally, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer-readable instruction product is also provided; when executed, the computer-readable instruction product is used to implement any one of the above identity verification method, the training method of the first adversarial generative network, and the training method of the second adversarial generative network.
It should be understood that "multiple" mentioned herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
A person of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (20)

  1. An identity verification method, executed by a computer device, the method comprising:
    collecting original features of a user, wherein m-1 kinds of domain difference features exist in the original features, m being an integer greater than 2;
    extracting a main attribute feature vector from the original features, the main attribute feature vector being an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features; and
    performing unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result.
  2. The method according to claim 1, wherein the extracting a main attribute feature vector from the original features comprises:
    calling an identity verification model to perform feature extraction on the original features to obtain the main attribute feature vector in the original features, wherein the identity verification model comprises:
    a first adversarial generative network;
    wherein the first adversarial generative network is a network obtained by training with selective decoupling of the m-1 kinds of domain difference features based on causal relationships.
  3. The method according to claim 1, wherein the extracting a main attribute feature vector from the original features comprises:
    calling an identity verification model to perform feature extraction on the original features to obtain the main attribute feature vector in the original features, wherein the identity verification model comprises:
    the first adversarial generative network and a second adversarial generative network;
    wherein the first adversarial generative network is a network obtained by training with selective decoupling of the m-1 kinds of domain difference features based on causal relationships, and the second adversarial generative network is a network obtained by randomly combining the attribute feature vectors of different attributes extracted by the first adversarial generative network and then performing additive adversarial training, the attributes comprising identity and the m-1 kinds of domain difference.
  4. The method according to claim 2 or 3, wherein the first adversarial generative network comprises a base generator, a main generator, and a main discriminator;
    the calling an identity verification model to perform feature extraction on the original features comprises:
    calling the base generator to transform the original features into a global attribute feature vector; and
    calling the main generator to perform feature extraction on the global attribute feature vector to obtain a first main attribute feature vector;
    the performing unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result comprises:
    calling the main discriminator to perform identity verification on the first main attribute feature vector to obtain the identity verification result.
  5. The method according to claim 3, wherein the first adversarial generative network comprises a base generator, a main generator, and a main discriminator;
    the calling an identity verification model to perform feature extraction on the original features comprises:
    calling the base generator to transform the original features into a global attribute feature vector; and
    calling the main generator to perform feature extraction on the global attribute feature vector to obtain a first main attribute feature vector;
    the performing unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result comprises:
    calling the main discriminator to perform a first discrimination on the first main attribute feature vector and then output a combined attribute feature vector to the second adversarial generative network.
  6. The method according to any one of claims 2 to 5, wherein the first adversarial generative network is obtained by training in the following manner:
    when a first domain difference feature and a second domain difference feature with a causal relationship exist in the original features, decoupling learning with respect to the first domain difference feature is ignored during adversarial learning of the second domain difference feature.
  7. The method according to any one of claims 2 to 5, wherein the first adversarial generative network comprises m generators G_1 to G_m, each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute, the attributes comprising identity and m-1 domains, the generator G_1 corresponding to identity being the main generator, the discriminator D_11 corresponding to the generator G_1 being the main discriminator, i, j, j' ∈ [m];
    the first adversarial generative network is trained in the following manner:
    fixing all generators G_i and optimizing all discriminators D_ij so that the output approximates the label y_i corresponding to the i-th attribute;
    fixing all discriminators D_ij and optimizing all generators G_i so that the output approximates (1 - y_i) corresponding to the i-th attribute;
    performing the above two steps alternately until the training end conditions of the generators G_i and the discriminators D_ij are met;
    wherein, if the j'-th attribute and the j-th attribute have a causal relationship, the output loss of the discriminator D_jj' is not backpropagated, i, j, j' ∈ [m].
  8. The method according to claim 7, wherein the discriminators D_11 to D_mm fall into two classes: for all i, j ∈ [m], i ≠ j,
    each discriminator D_ii is used to learn the features of the i-th attribute, and each discriminator D_ij is used to eliminate the features of the j-th attribute;
    each discriminator D_ii is learned with standard supervised learning, and each discriminator D_ij is learned with adversarial learning.
  9. The method according to claim 3, wherein the second adversarial generative network comprises a main additive space conversion network and a main recognition network;
    the performing unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result comprises:
    calling the main additive space conversion network to convert the combined attribute feature vector output by the first adversarial generative network to obtain an additive feature vector; and
    calling the main recognition network to perform identity recognition on the additive feature vector to obtain the identity verification result.
  10. The method according to claim 3, 5, or 9, wherein the second adversarial generative network is obtained by training in the following manner:
    randomly combining the different attribute feature vectors extracted from the training set by the first adversarial generative network; and
    performing additive adversarial training on the combined attribute feature vectors obtained by the random combination;
    wherein the attribute combination corresponding to at least one of the combined attribute feature vectors is an attribute combination that does not appear in the training set.
  11. The method according to claim 10, wherein the second adversarial generative network comprises m additive space conversion networks and m recognition networks in one-to-one correspondence with the m attributes, j ∈ [m];
    the second adversarial generative network is trained using the following steps:
    randomly combining the attribute feature vectors corresponding to different attributes generated by the first adversarial generative network to produce n_r combined attribute feature vectors;
    dividing the n_r combined attribute feature vectors into a first vector set and a second vector set, wherein the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
    performing prediction on the additive space conversion networks and the recognition networks using the first vector set and the second vector set, wherein the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label recognition corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors;
    for a first loss generated by the first vector set in the prediction process, backpropagating the first loss to the recognition network and the additive space conversion network corresponding to each attribute; and
    for a second loss generated by the second vector set in the prediction process, backpropagating the second loss to the recognition networks and the additive space conversion networks corresponding to the other attributes.
  12. An identity verification apparatus, comprising:
    a collection module, configured to collect original features of a user, wherein m-1 kinds of domain difference features exist in the original features, m being an integer greater than 2;
    an identity verification module, configured to extract a main attribute feature vector from the original features, the main attribute feature vector being an unbiased feature representation obtained by selectively decoupling the m-1 kinds of domain difference features in the original features, m being an integer greater than 2; and
    the identity verification module being further configured to perform unbiased identity verification processing according to the main attribute feature vector to obtain an identity verification result.
  13. The apparatus according to claim 12, wherein the identity verification module is further configured to call an identity verification model to perform feature extraction on the original features to obtain the main attribute feature vector in the original features, wherein the identity verification model comprises: a first adversarial generative network; or, the first adversarial generative network and a second adversarial generative network; wherein the first adversarial generative network is a network obtained by training with selective decoupling of the m-1 kinds of domain difference features based on causal relationships, and the second adversarial generative network is a network obtained by randomly combining the attribute feature vectors of different attributes extracted by the first adversarial generative network and then performing additive adversarial training, the attributes comprising identity and the m-1 kinds of domain difference.
  14. The apparatus according to claim 13, wherein the first adversarial generative network comprises a base generator, a main generator, and a main discriminator; the identity verification module is further configured to call the base generator to transform the original features into a global attribute feature vector, call the main generator to perform feature extraction on the global attribute feature vector to obtain a first main attribute feature vector, and call the main discriminator to perform identity verification on the first main attribute feature vector to obtain the identity verification result.
  15. The apparatus according to claim 13, wherein the first adversarial generative network comprises a base generator, a main generator, and a main discriminator; the identity verification module is further configured to call the base generator to transform the original features into a global attribute feature vector, call the main generator to perform feature extraction on the global attribute feature vector to obtain a first main attribute feature vector, and call the main discriminator to perform a first discrimination on the first main attribute feature vector and then output a combined attribute feature vector to the second adversarial generative network.
  16. The apparatus according to any one of claims 13 to 15, wherein the first adversarial generative network comprises m generators G_1 to G_m, each generator G_j corresponds to m discriminators D_j1 to D_jm, and the j-th generator G_j is used to learn the features of the j-th attribute, the attributes comprising identity and m-1 domains, the generator G_1 corresponding to identity being the main generator, the discriminator D_11 corresponding to the generator G_1 being the main discriminator, i, j, j' ∈ [m];
    the first adversarial generative network is trained in the following manner:
    fixing all generators G_i and optimizing all discriminators D_ij so that the output approximates the label y_i corresponding to the i-th attribute;
    fixing all discriminators D_ij and optimizing all generators G_i so that the output approximates (1 - y_i) corresponding to the i-th attribute;
    performing the above two steps alternately until the training end conditions of the generators G_i and the discriminators D_ij are met;
    wherein, if the j'-th attribute and the j-th attribute have a causal relationship, the output loss of the discriminator D_jj' is not backpropagated, i, j, j' ∈ [m].
  17. The apparatus according to any one of claims 13 to 15, wherein the second adversarial generative network comprises a main additive space conversion network and a main recognition network; the identity verification module is further configured to call the main additive space conversion network to convert the combined attribute feature vector output by the first adversarial generative network to obtain an additive feature vector, and call the main recognition network to perform identity recognition on the additive feature vector to obtain the identity verification result.
  18. The apparatus according to any one of claims 13 to 15, wherein the second adversarial generative network comprises m additive space conversion networks and m recognition networks in one-to-one correspondence with the m attributes, j ∈ [m];
    the second adversarial generative network is trained using the following steps:
    randomly combining the attribute feature vectors corresponding to different attributes generated by the first adversarial generative network to produce n_r combined attribute feature vectors;
    dividing the n_r combined attribute feature vectors into a first vector set and a second vector set, wherein the attribute combinations of the combined attribute feature vectors in the first vector set are attribute combinations that appear in the training set, and the attribute combinations of the combined attribute feature vectors in the second vector set are attribute combinations that do not appear in the training set;
    performing prediction on the additive space conversion networks and the recognition networks using the first vector set and the second vector set, wherein the j-th additive space conversion network is used to convert the j-th combined attribute feature vector into the j-th additive feature vector, and the j-th recognition network is used to perform label recognition corresponding to the j-th attribute on the sum feature vector of the m additive feature vectors;
    for a first loss generated by the first vector set in the prediction process, backpropagating the first loss to the recognition network and the additive space conversion network corresponding to each attribute; and
    for a second loss generated by the second vector set in the prediction process, backpropagating the second loss to the recognition networks and the additive space conversion networks corresponding to the other attributes.
  19. A computer device, comprising a processor and a memory, the memory storing computer-readable instructions, wherein the computer-readable instructions, when executed by the processor, cause the processor to perform the steps of the identity verification method according to any one of claims 1 to 11.
  20. A computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of the identity verification method according to any one of claims 1 to 11.
PCT/CN2020/078777 2019-04-24 2020-03-11 Identity verification method and apparatus, computer device and storage medium WO2020215915A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2021539985A JP7213358B2 (ja) 2019-04-24 2020-03-11 Identity verification method, identity verification apparatus, computer device, and computer program
EP20794930.6A EP3961441B1 (en) 2019-04-24 2020-03-11 Identity verification method and apparatus, computer device and storage medium
US17/359,125 US20210326576A1 (en) 2019-04-24 2021-06-25 Identity verification method and apparatus, computer device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910336037.4A 2019-04-24 2019-04-24 Identity verification method, apparatus and device
CN201910336037.4 2019-04-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/359,125 Continuation US20210326576A1 (en) 2019-04-24 2021-06-25 Identity verification method and apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2020215915A1 true WO2020215915A1 (zh) 2020-10-29

Family

ID=67320600

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078777 WO2020215915A1 (zh) 2019-04-24 2020-03-11 身份验证方法、装置、计算机设备及存储介质

Country Status (5)

Country Link
US (1) US20210326576A1 (zh)
EP (1) EP3961441B1 (zh)
JP (1) JP7213358B2 (zh)
CN (1) CN110059465B (zh)
WO (1) WO2020215915A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785495A (zh) * 2021-01-27 2021-05-11 驭势科技(南京)有限公司 Image processing model training method, image generation method, apparatus and device
CN114499712A (zh) * 2021-12-22 2022-05-13 天翼云科技有限公司 Gesture recognition method, device and storage medium
CN116129473A (zh) * 2023-04-17 2023-05-16 山东省人工智能研究院 Identity-guided joint-learning method and system for clothes-changing person re-identification

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059465B (zh) 2019-04-24 2023-07-25 腾讯科技(深圳)有限公司 Identity verification method, apparatus and device
CN110598578A (zh) 2019-08-23 2019-12-20 腾讯云计算(北京)有限责任公司 Identity recognition method, training method of identity recognition system, apparatus and device
US11455531B2 (en) * 2019-10-15 2022-09-27 Siemens Aktiengesellschaft Trustworthy predictions using deep neural networks based on adversarial calibration
CN111033532B (zh) 2019-11-26 2024-04-02 驭势(上海)汽车科技有限公司 Training method and system for generative adversarial networks, electronic device and storage medium
CN111339890A (zh) 2020-02-20 2020-06-26 中国测绘科学研究院 Method for extracting information on newly added construction land based on high-resolution remote sensing images
CN112084962B (zh) 2020-09-11 2021-05-25 贵州大学 Face privacy protection method based on generative adversarial networks
CN112179503A (zh) 2020-09-27 2021-01-05 中国科学院光电技术研究所 Deep learning wavefront restoration method based on a sparse-subaperture Shack-Hartmann wavefront sensor
CN113658178B (zh) 2021-10-14 2022-01-25 北京字节跳动网络技术有限公司 Tissue image identification method and apparatus, readable medium and electronic device
CN114708667B (zh) 2022-03-14 2023-04-07 江苏东方数码系统集成有限公司 Security protection method and system based on multiple biometric technologies
CN114863213B (zh) 2022-05-11 2024-04-16 杭州电子科技大学 Domain-generalization image recognition method based on a causally decoupled generative model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229348A (zh) * 2017-12-21 2018-06-29 中国科学院自动化研究所 Recognition device for occluded face images
US20180314716A1 (en) * 2017-04-27 2018-11-01 Sk Telecom Co., Ltd. Method for learning cross-domain relations based on generative adversarial networks
CN108766444A (zh) * 2018-04-09 2018-11-06 平安科技(深圳)有限公司 User identity verification method, server and storage medium
CN109376769A (zh) * 2018-09-21 2019-02-22 广东技术师范学院 Information transfer method for multi-task classification based on generative adversarial neural networks
CN110059465A (zh) * 2019-04-24 2019-07-26 腾讯科技(深圳)有限公司 Identity verification method, training method for adversarial generative networks, apparatus and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022317A (zh) * 2016-06-27 2016-10-12 北京小米移动软件有限公司 Face recognition method and apparatus
US11625603B2 (en) * 2017-04-27 2023-04-11 Nippon Telegraph And Telephone Corporation Learning-type signal separation method and learning-type signal separation device
CN108875463B (zh) * 2017-05-16 2022-08-12 富士通株式会社 Multi-view vector processing method and device
US10579785B2 (en) * 2017-09-29 2020-03-03 General Electric Company Automatic authentification for MES system using facial recognition
CN108090465B (zh) * 2017-12-29 2020-05-01 国信优易数据有限公司 Makeup effect processing model training method and makeup effect processing method
US10699161B2 (en) * 2018-02-28 2020-06-30 Fujitsu Limited Tunable generative adversarial networks
US10825219B2 (en) * 2018-03-22 2020-11-03 Northeastern University Segmentation guided image generation with adversarial networks
CN109523463B (zh) * 2018-11-20 2023-04-07 中山大学 Face aging method based on conditional generative adversarial networks
US11275819B2 (en) * 2018-12-05 2022-03-15 Bank Of America Corporation Generative adversarial network training and feature extraction for biometric authentication

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180314716A1 (en) * 2017-04-27 2018-11-01 Sk Telecom Co., Ltd. Method for learning cross-domain relations based on generative adversarial networks
CN108229348A (zh) * 2017-12-21 2018-06-29 中国科学院自动化研究所 Recognition device for occluded face images
CN108766444A (zh) * 2018-04-09 2018-11-06 平安科技(深圳)有限公司 User identity verification method, server and storage medium
CN109376769A (zh) * 2018-09-21 2019-02-22 广东技术师范学院 Information transfer method for multi-task classification based on generative adversarial neural networks
CN110059465A (zh) * 2019-04-24 2019-07-26 腾讯科技(深圳)有限公司 Identity verification method, training method for adversarial generative networks, apparatus and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785495A (zh) * 2021-01-27 2021-05-11 驭势科技(南京)有限公司 Image processing model training method, image generation method, apparatus and device
CN114499712A (zh) * 2021-12-22 2022-05-13 天翼云科技有限公司 Gesture recognition method, device and storage medium
CN114499712B (zh) 2021-12-22 2024-01-05 天翼云科技有限公司 Gesture recognition method, device and storage medium
CN116129473A (zh) * 2023-04-17 2023-05-16 山东省人工智能研究院 Identity-guided joint-learning method and system for clothes-changing person re-identification

Also Published As

Publication number Publication date
JP2022529863A (ja) 2022-06-27
EP3961441A1 (en) 2022-03-02
US20210326576A1 (en) 2021-10-21
JP7213358B2 (ja) 2023-01-26
EP3961441B1 (en) 2023-09-27
EP3961441A4 (en) 2022-07-06
CN110059465B (zh) 2023-07-25
CN110059465A (zh) 2019-07-26

Similar Documents

Publication Publication Date Title
WO2020215915A1 (zh) Identity verification method and apparatus, computer device and storage medium
Gao et al. The labeled multiple canonical correlation analysis for information fusion
US11935298B2 (en) System and method for predicting formation in sports
CN110598019B (zh) 重复图像识别方法及装置
Hu et al. Bin ratio-based histogram distances and their application to image classification
US20220012502A1 (en) Activity detection device, activity detection system, and activity detection method
Yousaf et al. A robust and efficient convolutional deep learning framework for age‐invariant face recognition
Zhang et al. Semantically modeling of object and context for categorization
Boes et al. Audiovisual transformer architectures for large-scale classification and synchronization of weakly labeled audio events
Tan et al. Image recognition by predicted user click feature with multidomain multitask transfer deep network
Chen et al. Learning one‐to‐many stylised Chinese character transformation and generation by generative adversarial networks
Guermazi et al. Facial micro-expression recognition based on accordion spatio-temporal representation and random forests
Zhang et al. Metric learning by simultaneously learning linear transformation matrix and weight matrix for person re‐identification
Bertocco et al. Leveraging ensembles and self-supervised learning for fully-unsupervised person re-identification and text authorship attribution
Othmani et al. Kinship recognition from faces using deep learning with imbalanced data
Gao et al. Segmentation-free vehicle license plate recognition using CNN
Balgi et al. Contradistinguisher: a vapnik’s imperative to unsupervised domain adaptation
CN113723111B (zh) 一种小样本意图识别方法、装置、设备及存储介质
CN111597453B (zh) 用户画像方法、装置、计算机设备及计算机可读存储介质
Yu et al. Discrepancy-Aware Meta-Learning for Zero-Shot Face Manipulation Detection
CN112348060A (zh) 分类向量生成方法、装置、计算机设备和存储介质
Tan et al. The impact of data correlation on identification of computer-generated face images
Liu et al. Construction of a smart face recognition model for university libraries based on FaceNet-MMAR algorithm
US20240185604A1 (en) System and method for predicting formation in sports
Han et al. Mvff: multi-view feature fusion for few-shot remote sensing image scene classification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20794930

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021539985

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020794930

Country of ref document: EP

Effective date: 20211124