WO2021223738A1 - Method, apparatus, device, and storage medium for updating model parameters - Google Patents

Method, apparatus, device, and storage medium for updating model parameters

Info

Publication number
WO2021223738A1
WO2021223738A1 PCT/CN2021/092130 CN2021092130W WO2021223738A1 WO 2021223738 A1 WO2021223738 A1 WO 2021223738A1 CN 2021092130 W CN2021092130 W CN 2021092130W WO 2021223738 A1 WO2021223738 A1 WO 2021223738A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
model
loss value
age
face
Prior art date
Application number
PCT/CN2021/092130
Other languages
English (en)
French (fr)
Inventor
吴泽衡
朱振文
周古月
徐倩
杨强
Original Assignee
深圳前海微众银行股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2021223738A1 publication Critical patent/WO2021223738A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Definitions

  • This application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a method, apparatus, device, and storage medium for updating model parameters.
  • Cross-age face generation can be applied in many scenarios. In live-streaming software, for example, it can serve as a special face effect: after the user selects the effect, a face image of the user at an arbitrary age, such as the appearance several years later, can be generated from the user's face image. Another application is long-term criminal tracking: given only a photo of a criminal from several years ago, generating the person's current appearance through cross-age face generation would be of great help in the pursuit. The technology also has potential value for finding missing children.
  • The current cross-age face generation scheme works by rendering: first, face key-point recognition and face-region segmentation locate the eyes, nose, mouth, hair, eyebrows, forehead, cheeks, and other regions that are assumed to change with age; then traditional image processing renders these regions, for example coloring the hair an age-appropriate color such as gray or white, adding wrinkles to the forehead, or adding crow's feet at the corners of the eyes. This approach requires designing a large number of models, and the generation procedure is relatively fixed: every face is generated in much the same way, with only the handling of different ages varying.
  • Moreover, current cross-age face generation schemes cannot measure the similarity between the generated face image and the original face image. The usual remedy is to add a pixel-level loss function between the two, but this constraint is too strong: the generated image ends up nearly identical to the original and does not reflect the true characteristics of aging. In summary, the cross-age face images generated by existing methods have low accuracy.
  • The main purpose of this application is to provide a method, apparatus, device, and storage medium for updating model parameters, aiming to solve the technical problem that cross-age face images generated by existing methods have low accuracy.
  • To this end, the present application provides a method for updating model parameters, which includes the following steps:
  • obtaining data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first and second face images belonging to the same user and the first and third face images belonging to different users;
  • obtaining a fourth face image from the first face image and the target age, based on the generative model of a preset generative adversarial network;
  • obtaining a target loss value from the fourth face image, the second face image, and the third face image, based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
  • updating the first model parameters of the generative model according to the target loss value.
  • Optionally, the step of obtaining the target loss value from the fourth, second, and third face images based on the preset facial feature extraction model and the discriminative model of the generative adversarial network includes: obtaining a first loss value from the third and fourth face images based on the discriminative model of the preset generative adversarial network; inputting the second and fourth face images into the preset facial feature extraction model to obtain a second loss value; and obtaining the target loss value from the first loss value and the second loss value.
  • Optionally, the step of inputting the second and fourth face images into the preset facial feature extraction model to obtain the second loss value includes: inputting the two images into the model to obtain a first facial feature corresponding to the second face image and a second facial feature corresponding to the fourth face image; and calculating the second loss value from the first and second facial features.
  • Optionally, the step of obtaining the target loss value from the first and second loss values includes: obtaining a first weight corresponding to the first loss value and a second weight corresponding to the second loss value; multiplying the first loss value by the first weight to obtain a first product, and the second loss value by the second weight to obtain a second product; and adding the two products to obtain the target loss value.
  • Optionally, after the step of updating the first model parameters of the generative model according to the target loss value, the method further includes: updating the second model parameters of the discriminative model according to the target loss value.
  • Optionally, after the updating step, the method further includes: determining, according to the target loss value, whether the generative model is in a converged state; if it is not, returning to the step of obtaining the data to be trained; if it is, taking the generative model corresponding to the converged state as the target generative model.
  • Optionally, after the updating step, the method further includes: upon receiving a face image of a first age, determining a second age corresponding to that image; and inputting the face image of the first age and the corresponding second age into the generative model, which is then a model in a converged state, to obtain a face image corresponding to the second age.
  • Optionally, the step of inputting the face image of the first age and the corresponding second age into the generative model to obtain the face image corresponding to the second age includes: inputting both into the generative model and performing feature extraction through it to obtain a first feature corresponding to the face image of the first age and a second feature corresponding to the second age; and concatenating the first and second features into a target feature, which is input into a convolutional neural network to obtain the face image corresponding to the second age.
  • Optionally, the step of updating the first model parameters of the generative model according to the target loss value includes: calculating, from the target loss value, a first gradient value corresponding to each first model parameter in the generative model; and updating each first model parameter according to its first gradient value.
  • In addition, the present application provides an apparatus for updating model parameters, which includes:
  • an acquisition module for obtaining the data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first and second face images belonging to the same user and the first and third face images belonging to different users;
  • a determination module for obtaining a fourth face image from the first face image and the target age based on the generative model of a preset generative adversarial network, and for obtaining a target loss value from the fourth, second, and third face images based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
  • an update module for updating the first model parameters of the generative model according to the target loss value.
  • The present application also provides a device for updating model parameters, which includes a memory, a processor, and a model-parameter update program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the method for updating model parameters corresponding to the federated learning server.
  • The present application further provides a computer-readable storage medium storing a model-parameter update program which, when executed by a processor, implements the steps of the method for updating model parameters described above.
  • This application obtains data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image; the first and second face images belong to the same user, and the first and third face images belong to different users. Based on the generative model of a preset generative adversarial network, a fourth face image is obtained from the first face image and the target age; based on a preset facial feature extraction model and the discriminative model of the generative adversarial network, a target loss value is obtained from the fourth, second, and third face images; and the first model parameters of the generative model are updated according to the target loss value. In this way, during the training of the generative model, the facial feature extraction model guides the training, and while the generative model's parameters are being updated they are optimized through both the generative adversarial network and the facial feature extraction model, which improves the accuracy of the cross-age face images generated by the generative model.
  • FIG. 1 is a schematic flowchart of a first embodiment of the method for updating model parameters of this application;
  • FIG. 2 is a schematic structural diagram of the model framework corresponding to the method for updating model parameters in an embodiment of this application;
  • FIG. 3 is a schematic functional block diagram of a preferred embodiment of the apparatus for updating model parameters of this application;
  • FIG. 4 is a schematic structural diagram of the hardware operating environment involved in the solutions of the embodiments of this application.
  • This application provides a method for updating model parameters. Referring to FIG. 1, FIG. 1 is a schematic flowchart of the first embodiment of the method.
  • The embodiments of this application provide an embodiment of the method for updating model parameters. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in a different order.
  • The method is applied to a server or a terminal. Terminals may include mobile devices such as mobile phones, tablet computers, notebook computers, handheld computers, and personal digital assistants (PDAs), as well as fixed devices such as digital TVs and desktop computers. For ease of description, the executing entity is omitted in the following embodiments. The method for updating model parameters includes:
  • Step S10: Obtain data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image; the first and second face images belong to the same user, and the first and third face images belong to different users.
  • A group of training data includes at least the target age, the first face image, the second face image, and the third face image. The first and second face images belong to the same user; the first and third face images belong to different users. The target age is the age of the cross-age face image to be generated; for example, a target age of 60 means that a 60-year-old cross-age face image will ultimately be generated. The first and second face images are face images of a first user, and may show the first user at the same age or at different ages; the third face image is a face image of a second user at the target age, the first and second users being different users. This embodiment does not restrict the ages corresponding to the first and second face images.
  • In this embodiment, the number of groups of training data can be set as needed; for example, 30 or 100 groups may be obtained. Specifically, when an acquisition instruction is detected, the training data can be obtained according to it; the instruction may be triggered by the user as needed or by a preset scheduled task.
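  • For concreteness, the following is a minimal sketch (in Python/PyTorch, which the patent does not prescribe; all names are illustrative) of how one group of training data might be represented and batched:

```python
from dataclasses import dataclass
import torch

@dataclass
class TrainingSample:
    """One group of training data as described in step S10.

    img_a and img_b show the same user; img_c shows a different
    user photographed at the target age."""
    target_age: int          # e.g. 60
    img_a: torch.Tensor      # first face image,  shape (3, H, W)
    img_b: torch.Tensor      # second face image, same user as img_a
    img_c: torch.Tensor      # third face image,  different user, at target_age

def make_batch(samples):
    """Stack a list of TrainingSample into batched tensors."""
    ages = torch.tensor([s.target_age for s in samples], dtype=torch.float32)
    a = torch.stack([s.img_a for s in samples])
    b = torch.stack([s.img_b for s in samples])
    c = torch.stack([s.img_c for s in samples])
    return ages, a, b, c
```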
  • Step S20: Based on the generative model of a preset generative adversarial network (GAN), obtain a fourth face image from the first face image and the target age.
  • After the training data are obtained, the preset generative model of the generative adversarial network is retrieved, the first face image and the target age are input into it, and its output is obtained. For ease of description, this output is recorded as the fourth face image; it is the face image corresponding to the first face image at the target age. Specifically, after the first face image and the target age are input, the generative model extracts features from both, the features extracted from the first face image are concatenated with the features corresponding to the target age, and the fourth face image is generated through a convolutional neural network.
  • To train the generative model, a preset number of face images of one age can be collected, together with, for each, another face image of the same user at a different age; the face image of one age and the other age serve as the input of the convolutional neural network model, and the same user's face image at the other age serves as the machine-learning output, i.e., the label. The two ages may or may not be equal.
  • It should be noted that a generative adversarial network produces good output through the mutual adversarial game of (at least) two modules in its framework: the generative model and the discriminative model.
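  • The patent does not fix the network architecture. The following PyTorch-style sketch shows one way a generator g(A, a) of this kind could extract features from the image and the age, concatenate them, and decode the fourth face image; the layer shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of g(A, a): encode the face, embed the target age,
    concatenate both features, and decode a face image with a CNN."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(            # image -> feature map
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.age_embed = nn.Linear(1, feat_dim)  # scalar age -> feature vector
        self.decoder = nn.Sequential(            # fused features -> image
            nn.ConvTranspose2d(2 * feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, img, age):
        f_img = self.encoder(img)                          # (N, C, h, w)
        f_age = self.age_embed(age.unsqueeze(1))           # (N, C)
        f_age = f_age[:, :, None, None].expand_as(f_img)   # broadcast over h, w
        fused = torch.cat([f_img, f_age], dim=1)           # concatenate features
        return self.decoder(fused)                         # fourth face image A'
```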
  • Step S30: Based on a preset facial feature extraction model and the discriminative model of the generative adversarial network, obtain a target loss value from the fourth face image, the second face image, and the third face image.
  • After the fourth face image is obtained, the preset facial feature extraction model and the discriminative model of the generative adversarial network are retrieved, and the target loss value is obtained from the fourth, second, and third face images based on them. The discriminative model identifies whether a face image is a real photograph or an image produced by the generative model. The facial feature extraction model extracts the facial features of an image, mapping the face to a feature of fixed dimension that can be represented as a vector; the distance between two such features then determines whether the faces in two images belong to the same user. Understandably, the smaller the distance between two facial features, the more likely the corresponding faces belong to the same user, and vice versa. This embodiment does not restrict the feature dimension, which may be, for example, 256 or 512.
  • The discriminative model is trained by collecting real photographs and labeling them as real, and collecting images produced by the generative model and labeling them as generated; this embodiment does not restrict how the two labels are represented. The labeled images are then fed into the discriminative model's base model, which may be a machine-learning model or a neural network, to obtain the discriminative model. The facial feature extraction model is a face-recognition model used to extract the facial features of a face image.
  • Further, step S30 includes:
  • Step a: Based on the discriminative model of the preset generative adversarial network, obtain a first loss value from the third face image and the fourth face image.
  • Specifically, after the discriminative model is obtained, the third and fourth face images are input into it and its output is obtained; the output is a loss value, and to distinguish the outputs of different models it is recorded as the first loss value.
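  • As an illustration, a first loss value of this kind could be computed as below. The patent treats the discriminator's output directly as a loss value; the standard binary cross-entropy GAN formulation used here is one concrete, assumed choice:

```python
import torch
import torch.nn.functional as F

def first_loss(discriminator, img_c, img_fake):
    """Sketch of the first loss value L_GAN: score the real third face
    image against the generated fourth face image."""
    logits_real = discriminator(img_c)      # third face image: real, at target age
    logits_fake = discriminator(img_fake)   # fourth face image: generated
    loss_real = F.binary_cross_entropy_with_logits(
        logits_real, torch.ones_like(logits_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        logits_fake, torch.zeros_like(logits_fake))
    return loss_real + loss_fake
```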
  • Step b: Input the second face image and the fourth face image into the preset facial feature extraction model to obtain a second loss value.
  • After the fourth face image and the facial feature extraction model are obtained, the second and fourth face images are input into the model, and the loss value it outputs is recorded as the second loss value.
  • Further, step b includes:
  • Step b1: Input the second and fourth face images into the preset facial feature extraction model to obtain the first facial feature corresponding to the second face image and the second facial feature corresponding to the fourth face image.
  • Step b2: Calculate the second loss value from the first facial feature and the second facial feature.
  • After the second and fourth face images are obtained, they are input into the preset facial feature extraction model to obtain the facial features corresponding to each. In this embodiment, for ease of distinction, the feature corresponding to the second face image is recorded as the first facial feature and the feature corresponding to the fourth face image as the second facial feature; the second loss value is then calculated from the two.
  • Further, step b2 includes:
  • Step b21: Calculate the feature distance between the first facial feature and the second facial feature.
  • Step b22: Calculate the second loss value from the feature distance.
  • Specifically, after the first and second facial features are obtained, the feature distance between them is calculated, and the second loss value is computed from that distance; the cosine-distance algorithm, for example, can be used. If the first facial feature is denoted f(B_i), the second facial feature f(A'_i), and the second loss value L_recog, the loss function for the second loss value can be expressed as:

    L_recog = Σ_i ‖f(B_i) − f(A'_i)‖

  • where f(·) denotes the facial feature extraction model; A'_i = g(A_i, a) denotes the fourth face image, i.e., the cross-age face image obtained by applying the generative model g(·) to the first face image A_i at the target age a; and ‖·‖ denotes the feature distance (e.g., the cosine distance).
  • Further, in other embodiments, the second loss value may also be obtained by calculating the image similarity between the second face image and the fourth face image.
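  • A minimal sketch of the second loss value, assuming the cosine distance mentioned above and a fixed, pre-trained feature extractor f(·):

```python
import torch

def second_loss(feature_model, img_b, img_fake):
    """Sketch of L_recog = sum_i ||f(B_i) - f(A'_i)||, with the cosine
    distance as the feature distance. feature_model stays fixed by never
    being handed to an optimizer."""
    with torch.no_grad():                    # img_b is data; no gradient needed
        feat_b = feature_model(img_b)        # first facial feature,  (N, D)
    feat_fake = feature_model(img_fake)      # second facial feature, (N, D)
    cos = torch.nn.functional.cosine_similarity(feat_b, feat_fake, dim=1)
    return (1.0 - cos).sum()                 # distance summed over the batch
```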
  • Step c: Obtain a target loss value from the first loss value and the second loss value.
  • After the first and second loss values are obtained, the target loss value is derived from them; for example, the two may simply be added, with the sum taken as the target loss value.
  • Further, step c includes:
  • Step c1: Obtain a first weight corresponding to the first loss value, and obtain a second weight corresponding to the second loss value.
  • Step c2: Multiply the first loss value by the first weight to obtain a first product, and multiply the second loss value by the second weight to obtain a second product.
  • Step c3: Add the first product and the second product to obtain the target loss value.
  • Further, after the first and second loss values are obtained, the first weight corresponding to the first loss value and the second weight corresponding to the second loss value are retrieved; both weights are preset, and it can be understood that their sum equals 1. The first loss value is multiplied by the first weight to obtain the first product, the second loss value by the second weight to obtain the second product, and the two products are added to obtain the target loss value. If the first weight is denoted α, the second weight β, the target loss value L, and the first loss value L_GAN, the computation can be expressed as L = αL_GAN + βL_recog.
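  • The weighted combination itself is a one-liner; the 0.5/0.5 defaults below are illustrative, with only the constraint α + β = 1 taken from the text:

```python
def target_loss(l_gan, l_recog, alpha=0.5, beta=0.5):
    """L = alpha * L_GAN + beta * L_recog, with preset weights summing to 1."""
    assert abs(alpha + beta - 1.0) < 1e-6, "the two preset weights must sum to 1"
    return alpha * l_gan + beta * l_recog
```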
  • Specifically, referring to FIG. 2: A is the first face image, B the second face image, A' the fourth face image, and C the third face image; L_GAN is the first loss value obtained through the discriminative model's loss function, and L_recog is the second loss value obtained through the feature-distance loss function.
  • Step S40: Update the first model parameters of the generative model according to the target loss value.
  • It should be noted that, in the embodiments of this application, the parameters of the facial feature extraction model do not need to be updated, while those of the generative and discriminative models do. After the target loss value is obtained, the first model parameters of the generative model are updated according to it.
  • Further, the method for updating model parameters also includes:
  • Step o: Update the second model parameters of the discriminative model according to the target loss value.
  • That is, after the target loss value is obtained, the second model parameters of the discriminative model are also updated according to it.
  • Further, step S40 includes:
  • Step d: Calculate, from the target loss value, the first gradient value corresponding to each first model parameter in the generative model.
  • Specifically, once the target loss value is determined, the first gradient value of each first model parameter of the generative model is calculated from it; likewise, the second gradient value of each second model parameter of the discriminative model is calculated from it. Whether in the generative model or the discriminative model, every model parameter has a corresponding gradient value, which in this embodiment can be obtained through the chain rule for matrix derivatives.
  • Step e: Update the first model parameter corresponding to each first gradient value according to that gradient value.
  • After the first and second gradient values are obtained, each first model parameter is updated according to its first gradient value, and the updated generative model is obtained from the updated first model parameters; likewise, each second model parameter is updated according to its second gradient value, and the updated discriminative model is obtained from the updated second model parameters.
  • Specifically, the product of a gradient value and the update coefficient of the corresponding model parameter can be calculated and subtracted from the original, un-updated parameter to obtain the updated parameter; the update coefficient can be set as needed, and the coefficients of different parameters may be the same or different. In other embodiments, the gradient value may also be subtracted from the un-updated parameter directly.
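  • A sketch of steps d and e using autograd for the chain-rule gradients; the update coefficient lr is an assumed value:

```python
import torch

def sgd_step(model, loss, lr=1e-4):
    """Obtain each parameter's gradient from the target loss via the chain
    rule (autograd), then subtract gradient times update coefficient from
    the un-updated parameter."""
    model.zero_grad()
    loss.backward(retain_graph=True)  # retain the graph in case the same
                                      # loss also updates the discriminative model
    with torch.no_grad():
        for param in model.parameters():
            if param.grad is not None:
                param -= lr * param.grad   # updated = original - lr * gradient
```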
  • In this embodiment, the training data are obtained; based on the generative model of the preset generative adversarial network, the fourth face image is obtained from the first face image and the target age; based on the preset facial feature extraction model and the discriminative model of the generative adversarial network, the target loss value is obtained from the fourth, second, and third face images; and the first model parameters of the generative model are updated according to the target loss value. In this way, the facial feature extraction model guides the training of the generative model, and during parameter updates the generative adversarial network and the facial feature extraction model together optimize the generative model's parameters, improving the accuracy of the cross-age face images generated by the generative model.
  • Further, it should be noted that the facial feature extraction model ensures that the feature distance between the cross-age face image generated by the generative model and the original face image is small enough, guaranteeing the realism of the generated image, while the generative adversarial network ensures that key features of the generated cross-age face image are not lost and that image-mapping ambiguity is unlikely, thereby improving the accuracy of the generated cross-age face images.
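  • Tying the pieces together, one illustrative training iteration might look as follows. The patent updates both models from the same target loss value; this sketch uses the standard adversarial split (opposite objectives for generator and discriminator) so that it actually trains, which is an assumption beyond the text:

```python
import torch
import torch.nn.functional as F

def train_step(gen, disc, feat_model, gen_opt, disc_opt, batch,
               alpha=0.5, beta=0.5):
    """One illustrative iteration; gen, disc, feat_model, first_loss, and
    second_loss are the sketches above, and batch = (ages, a, b, c)."""
    ages, img_a, img_b, img_c = batch

    # Update the discriminative model (second model parameters).
    img_fake = gen(img_a, ages).detach()               # fourth face image A'
    d_loss = first_loss(disc, img_c, img_fake)         # first loss value
    disc_opt.zero_grad(); d_loss.backward(); disc_opt.step()

    # Update the generative model (first model parameters).
    img_fake = gen(img_a, ages)
    logits = disc(img_fake)
    g_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    l_recog = second_loss(feat_model, img_b, img_fake) # second loss value
    g_loss = alpha * g_adv + beta * l_recog            # target loss value
    gen_opt.zero_grad(); g_loss.backward(); gen_opt.step()
    return g_loss.item()                               # feat_model stays fixed
```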
  • Further, a second embodiment of the method for updating model parameters is proposed. It differs from the first embodiment in that the method further includes:
  • Step f: Determine, according to the target loss value, whether the generative model is in a converged state.
  • Step g: If it is determined that the generative model is not in the converged state, return to the step of obtaining the data to be trained.
  • Step h: If it is determined that the generative model is in the converged state, take the generative model corresponding to the converged state as the target generative model.
  • When the target loss value is obtained, whether the generative model has converged is determined from it. Specifically, it is checked whether the target loss value is below a preset threshold, whose size can be set as needed and is not specifically limited in this embodiment. If the target loss value is below the threshold, the generative model is deemed converged; if it is greater than or equal to the threshold, it is deemed not converged. When convergence is determined, the generative model corresponding to the converged state, that is, the one obtained from the target-loss update at which convergence was determined, is taken as the target generative model; otherwise, training data continue to be obtained and the model continues to be trained until it converges. The process of determining from the target loss value whether the discriminative model has converged is the same and is not repeated here.
  • Further, the number of updates of the generative model's first model parameters can also be counted: when the count exceeds a preset number, the model is deemed converged and the generative model from the last parameter update is taken as the target generative model; when the count is at or below the preset number, the model is deemed not converged, and training data continue to be obtained to update the first model parameters. The preset number can be set as needed and is not limited by this embodiment. It should be noted that each update of the first model parameters indicates one training iteration of the generative model.
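  • Both convergence tests reduce to a simple check; the threshold and the maximum update count below are illustrative assumptions:

```python
def has_converged(target_loss_value, num_updates,
                  loss_threshold=0.05, max_updates=10000):
    """The generative model is deemed converged when the target loss falls
    below a preset threshold, or when the parameter-update count exceeds
    a preset number (both values here are assumed, not from the patent)."""
    return target_loss_value < loss_threshold or num_updates > max_updates
```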
  • Further, the method for updating model parameters also includes:
  • Step i: Upon receiving a face image of a first age, determine a second age corresponding to it.
  • After the target generative model is obtained, it is detected whether a face image of a first age is received; such an image may be captured by a camera, stored in advance, or sent by other terminal equipment. If no such image is received, detection continues; if one is received, the second age corresponding to it is determined. The second age may be preset or carried with the face image of the first age. It can be understood that the first age and the second age are unequal.
  • Step j: Input the face image of the first age and the corresponding second age into the generative model to obtain the face image corresponding to the second age, where the generative model is a model in a converged state.
  • The generative model in this embodiment is a model in a converged state, that is, the target generative model. After the face image of the first age and the second age are obtained, they are input into the target generative model to obtain the face image corresponding to the second age, i.e., the aged face image. For example, if the second age is 60 and the face image of the first age shows a 25-year-old, the age of the output face image is 60; the age corresponding to the aged face image is the second age. It can be understood that the face image corresponding to the second age is the cross-age face image to be generated.
  • Further, step j includes:
  • Step j1: Input the face image of the first age and the second age into the generative model, and perform feature extraction on them through the generative model to obtain the first feature corresponding to the face image of the first age and the second feature corresponding to the second age.
  • Step j2: Concatenate the first feature and the second feature to obtain a target feature, and input the target feature into a convolutional neural network to obtain the face image corresponding to the second age.
  • Specifically, after the face image of the first age and the second age are obtained, they are input into the generative model, which performs feature extraction on them to obtain the first and second features. The two features are then concatenated, the result is recorded as the target feature, and the target feature is input into the convolutional neural network; the output of the convolutional neural network is the face image corresponding to the second age.
  • In this embodiment, after the generative model's parameters are updated, whether the model has converged is judged from the target loss value; if so, the generative model corresponding to the converged state is taken as the target generative model, and otherwise training continues, further improving the accuracy of cross-age face generation by the resulting target generative model.
  • In addition, the present application provides an apparatus for updating model parameters. Referring to FIG. 3, the apparatus includes:
  • the acquisition module 10, configured to obtain the data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image; the first and second face images belong to the same user, and the first and third face images belong to different users;
  • the determination module 20, configured to obtain a fourth face image from the first face image and the target age based on the generative model of a preset generative adversarial network, and to obtain a target loss value from the fourth, second, and third face images based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
  • the update module 30, configured to update the first model parameters of the generative model according to the target loss value.
  • Further, the determination module 20 includes:
  • a determination unit, configured to obtain a first loss value from the third face image and the fourth face image based on the discriminative model of the preset generative adversarial network;
  • a first input unit, configured to input the second face image and the fourth face image into the preset facial feature extraction model to obtain a second loss value;
  • the determination unit being further configured to obtain a target loss value from the first loss value and the second loss value.
  • Further, the first input unit includes:
  • an input subunit, configured to input the second face image and the fourth face image into the preset facial feature extraction model to obtain the first facial feature corresponding to the second face image and the second facial feature corresponding to the fourth face image;
  • a first calculation subunit, configured to calculate a second loss value from the first facial feature and the second facial feature.
  • Further, the determination unit includes:
  • an obtaining subunit, configured to obtain a first weight corresponding to the first loss value and a second weight corresponding to the second loss value;
  • a second calculation subunit, configured to multiply the first loss value by the first weight to obtain a first product, multiply the second loss value by the second weight to obtain a second product, and add the two products to obtain a target loss value.
  • Further, the update module 30 is also configured to update the second model parameters of the discriminative model according to the target loss value.
  • Further, the determination module 20 is also configured to determine, according to the target loss value, whether the generative model is in a converged state;
  • the apparatus for updating model parameters further includes:
  • an execution module, configured to return to the step of obtaining the data to be trained if it is determined that the generative model is not in the converged state;
  • the determination module 20 being further configured to take the generative model corresponding to the converged state as the target generative model if the generative model is determined to be converged.
  • Further, the determination module 20 is also configured to determine, upon receiving a face image of a first age, the second age corresponding to it;
  • the apparatus for updating model parameters further includes:
  • an input module, configured to input the face image of the first age and the corresponding second age into the generative model to obtain the face image corresponding to the second age, where the generative model is a model in a converged state.
  • Further, the input module includes:
  • a second input unit, configured to input the face image of the first age and the second age into the generative model;
  • a feature extraction unit, configured to perform feature extraction on the face image of the first age and the second age through the generative model to obtain the first feature corresponding to the face image of the first age and the second feature corresponding to the second age;
  • a connection unit, configured to concatenate the first feature and the second feature to obtain a target feature;
  • the second input unit being further configured to input the target feature into the convolutional neural network to obtain the face image corresponding to the second age.
  • Further, the update module 30 includes:
  • a calculation unit, configured to calculate, from the target loss value, the first gradient value corresponding to each first model parameter in the generative model;
  • an update unit, configured to update the first model parameter corresponding to each first gradient value according to that gradient value.
  • In addition, the present application provides a device for updating model parameters. FIG. 4 is a schematic structural diagram of the hardware operating environment involved in the solutions of the embodiments of this application; it can serve as the structural diagram of the hardware operating environment of the device for updating model parameters, which in the embodiments of this application may be a terminal device such as a PC or a portable computer.
  • As shown in FIG. 4, the device for updating model parameters may include: a processor 1001 such as a CPU, a memory 1005, a user interface 1003, a network interface 1004, and a communication bus 1002.
  • The communication bus 1002 is used to implement connection and communication between these components.
  • The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, it may also include standard wired and wireless interfaces.
  • The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface).
  • The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk memory; optionally, it may also be a storage device independent of the aforementioned processor 1001.
  • Those skilled in the art will understand that the device structure shown in FIG. 4 does not constitute a limitation on the device for updating model parameters, which may include more or fewer components than shown, a combination of certain components, or a different arrangement of components.
  • As shown in FIG. 4, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a model-parameter update program. The operating system is a program that manages and controls the hardware and software resources of the device for updating model parameters and supports the running of the model-parameter update program and other software or programs.
  • In the device shown in FIG. 4, the user interface 1003 is mainly used to connect to and communicate with terminal devices, for example receiving the training data they send; the network interface 1004 is mainly used to connect to and exchange data with a background server; and the processor 1001 can be used to call the model-parameter update program stored in the memory 1005 and execute the steps of the method for updating model parameters described above.
  • the specific implementation of the device for updating the model parameters of the present application is basically the same as the foregoing embodiments of the method for updating the model parameters, and will not be repeated here.
  • In addition, an embodiment of the present application also proposes a computer-readable storage medium storing a model-parameter update program which, when executed by a processor, implements the steps of the method for updating model parameters described above.
  • The technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of this application.


Abstract

This application discloses a method, apparatus, device, and storage medium for updating model parameters, relating to the field of financial technology. The method for updating model parameters includes the following steps: obtaining data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first and second face images belonging to the same user and the first and third face images belonging to different users; obtaining a fourth face image from the first face image and the target age based on the generative model of a preset generative adversarial network; obtaining a target loss value from the fourth face image, the second face image, and the third face image based on a preset facial feature extraction model and the discriminative model of the generative adversarial network; and updating the first model parameters of the generative model according to the target loss value.

Description

Method, apparatus, device, and storage medium for updating model parameters
This application claims priority to Chinese patent application No. 202010385753.4, filed on May 8, 2020 and entitled "Method, apparatus, device, and storage medium for updating model parameters", which is hereby incorporated by reference in its entirety.
Technical Field
This application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a method, apparatus, device, and storage medium for updating model parameters.
Background
With the development of computer technology, more and more technologies are being applied in the financial field, and traditional finance is gradually shifting toward financial technology (Fintech). Artificial intelligence is no exception, but the security and real-time requirements of the financial industry also place higher demands on it.
Cross-age face generation can be applied in many scenarios. In live-streaming software, for example, it can serve as a face effect: after the user selects the effect, a face image of the user at an arbitrary age, such as the appearance several years later, can be generated from the user's face image. Another application is long-term criminal tracking: given only a photo of a criminal from several years ago, generating the person's current appearance through cross-age face generation would be of great help in the pursuit. The technology also has potential value for finding missing children.
The current cross-age face generation scheme works by rendering: first, face key-point recognition and face-region segmentation locate the eyes, nose, mouth, hair, eyebrows, forehead, cheeks, and other regions that are assumed to change with age; then traditional image processing renders these regions, for example coloring the hair an age-appropriate color such as gray or white, adding wrinkles to the forehead, or adding crow's feet at the corners of the eyes. This approach requires designing a large number of models, and the generation procedure is relatively fixed: every face is generated in much the same way, with only the handling of different ages varying.
It follows that current cross-age face generation schemes cannot measure the similarity between the generated face image and the original face image. The usual remedy is to add a pixel-level loss function between the two, but this constraint is too strong: the generated image ends up nearly identical to the original and does not reflect the true characteristics of aging. In summary, the cross-age face images generated by current methods have low accuracy.
Summary
The main purpose of this application is to provide a method, apparatus, device, and storage medium for updating model parameters, aiming to solve the technical problem that cross-age face images generated by existing methods have low accuracy.
To achieve the above purpose, this application provides a method for updating model parameters, which includes the steps of:
obtaining data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first face image and the second face image belonging to the same user and the first face image and the third face image belonging to different users;
obtaining a fourth face image from the first face image and the target age, based on the generative model of a preset generative adversarial network;
obtaining a target loss value from the fourth face image, the second face image, and the third face image, based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
updating the first model parameters of the generative model according to the target loss value.
Optionally, the step of obtaining the target loss value from the fourth, second, and third face images based on the preset facial feature extraction model and the discriminative model of the generative adversarial network includes:
obtaining a first loss value from the third face image and the fourth face image, based on the discriminative model of the preset generative adversarial network;
inputting the second face image and the fourth face image into the preset facial feature extraction model to obtain a second loss value;
obtaining the target loss value from the first loss value and the second loss value.
Optionally, the step of inputting the second and fourth face images into the preset facial feature extraction model to obtain the second loss value includes:
inputting the second and fourth face images into the preset facial feature extraction model to obtain a first facial feature corresponding to the second face image and a second facial feature corresponding to the fourth face image;
calculating the second loss value from the first facial feature and the second facial feature.
Optionally, the step of obtaining the target loss value from the first and second loss values includes:
obtaining a first weight corresponding to the first loss value and a second weight corresponding to the second loss value;
multiplying the first loss value by the first weight to obtain a first product, and multiplying the second loss value by the second weight to obtain a second product;
adding the first product and the second product to obtain the target loss value.
Optionally, after the step of updating the first model parameters of the generative model according to the target loss value, the method further includes:
updating the second model parameters of the discriminative model according to the target loss value.
Optionally, after the step of updating the first model parameters of the generative model according to the target loss value, the method further includes:
determining, according to the target loss value, whether the generative model is in a converged state;
if it is determined that the generative model is not in the converged state, returning to the step of obtaining the data to be trained;
if it is determined that the generative model is in the converged state, taking the generative model corresponding to the converged state as the target generative model.
Optionally, after the step of updating the first model parameters of the generative model according to the target loss value, the method further includes:
upon receiving a face image of a first age, determining a second age corresponding to the face image of the first age;
inputting the face image of the first age and the corresponding second age into the generative model to obtain a face image corresponding to the second age, where the generative model is a model in a converged state.
Optionally, the step of inputting the face image of the first age and the corresponding second age into the generative model to obtain the face image corresponding to the second age includes:
inputting the face image of the first age and the second age into the generative model, and performing feature extraction on them through the generative model to obtain a first feature corresponding to the face image of the first age and a second feature corresponding to the second age;
concatenating the first feature and the second feature to obtain a target feature, and inputting the target feature into a convolutional neural network to obtain the face image corresponding to the second age.
Optionally, the step of updating the first model parameters of the generative model according to the target loss value includes:
calculating, from the target loss value, a first gradient value corresponding to each first model parameter in the generative model;
updating the first model parameter corresponding to each first gradient value according to that gradient value.
In addition, to achieve the above purpose, this application provides an apparatus for updating model parameters, which includes:
an acquisition module for obtaining data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first and second face images belonging to the same user and the first and third face images belonging to different users;
a determination module for obtaining a fourth face image from the first face image and the target age based on the generative model of a preset generative adversarial network, and for obtaining a target loss value from the fourth, second, and third face images based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
an update module for updating the first model parameters of the generative model according to the target loss value.
In addition, to achieve the above purpose, this application provides a device for updating model parameters, which includes a memory, a processor, and a model-parameter update program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the method for updating model parameters corresponding to the federated learning server.
In addition, to achieve the above purpose, this application provides a computer-readable storage medium storing a model-parameter update program which, when executed by a processor, implements the steps of the method for updating model parameters described above.
This application obtains data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first and second face images belonging to the same user and the first and third face images belonging to different users; based on the generative model of a preset generative adversarial network, obtains the fourth face image from the first face image and the target age; based on the preset facial feature extraction model and the discriminative model of the generative adversarial network, obtains the target loss value from the fourth, second, and third face images; and updates the first model parameters of the generative model according to the target loss value. Thus, during the training of the generative model, the facial feature extraction model guides the training, and while the generative model's parameters are being updated they are optimized through both the generative adversarial network and the facial feature extraction model, which improves the accuracy of the cross-age face images generated by the generative model.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a first embodiment of the method for updating model parameters of this application;
FIG. 2 is a schematic structural diagram of the model framework corresponding to the method for updating model parameters in an embodiment of this application;
FIG. 3 is a schematic functional block diagram of a preferred embodiment of the apparatus for updating model parameters of this application;
FIG. 4 is a schematic structural diagram of the hardware operating environment involved in the solutions of the embodiments of this application.
The realization of the purpose, functional characteristics, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
This application provides a method for updating model parameters. Referring to FIG. 1, FIG. 1 is a schematic flowchart of the first embodiment of the method for updating model parameters of this application.
The embodiments of this application provide an embodiment of the method for updating model parameters. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in a different order.
The method for updating model parameters is applied to a server or a terminal. Terminals may include mobile devices such as mobile phones, tablet computers, notebook computers, handheld computers, and personal digital assistants (PDAs), as well as fixed devices such as digital TVs and desktop computers. For ease of description, the executing entity is omitted in the following embodiments. The method for updating model parameters includes:
Step S10: Obtain data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image; the first and second face images belong to the same user, and the first and third face images belong to different users.
The data to be trained are obtained, where a group of training data includes at least the target age, the first face image, the second face image, and the third face image; the first and second face images belong to the same user, and the first and third face images belong to different users. The target age is the age of the cross-age face image to be generated; for example, a target age of 60 means that a 60-year-old cross-age face image will ultimately be generated. The first and second face images are face images of a first user and may show the first user at the same age or at different ages; the third face image is a face image of a second user at the target age, the first and second users being different users. In this embodiment, the ages corresponding to the first and second face images are not restricted.
In this embodiment, the number of groups of training data can be set as needed; for example, 30 or 100 groups may be obtained. Specifically, when an acquisition instruction is detected, the training data can be obtained according to it; the instruction may be triggered by the user as needed or by a preset scheduled task.
Step S20: Based on the generative model of a preset generative adversarial network, obtain a fourth face image from the first face image and the target age.
After the training data are obtained, the preset generative model of the generative adversarial network is retrieved, the first face image and the target age are input into it, and its output is obtained. For ease of description, this embodiment records the output as the fourth face image, which is the face image corresponding to the first face image at the target age. Specifically, after the first face image and the target age are input, the generative model extracts features from both, the features extracted from the first face image are concatenated with the features corresponding to the target age, and the fourth face image is generated through a convolutional neural network. To train the generative model, a preset number of face images of one age can be collected, together with, for each, another face image of the same user at a different age; the face image of one age and the other age serve as the input of the convolutional neural network model, and the same user's face image at the other age serves as the machine-learning output, i.e., the label; the two ages may or may not be equal. It should be noted that a generative adversarial network produces good output through the mutual adversarial game of (at least) two modules in its framework: the generative model and the discriminative model.
Step S30: Based on a preset facial feature extraction model and the discriminative model of the generative adversarial network, obtain a target loss value from the fourth face image, the second face image, and the third face image.
After the fourth face image is obtained, the preset facial feature extraction model and the discriminative model of the generative adversarial network are retrieved, and the target loss value is obtained from the fourth, second, and third face images based on them. The discriminative model identifies whether a face image is a real photograph or an image produced by the generative model; the facial feature extraction model extracts the facial features of an image, mapping the face to a feature of fixed dimension that can be represented as a vector, and the distance between two such features determines whether the faces in two images belong to the same user. Understandably, the smaller the distance between two facial features, the more likely the corresponding faces belong to the same user; the larger the distance, the less likely. In this embodiment, the feature dimension is not restricted; it may be, for example, 256 or 512.
It should be noted that the discriminative model is trained by collecting real photographs and labeling them as real, and collecting images produced by the generative model and labeling them as generated; this embodiment does not restrict how the two labels are represented. The labeled images are then fed into the discriminative model's base model, which may be a machine-learning model or a neural network, to obtain the discriminative model. The facial feature extraction model is a face-recognition model used to extract the facial features of a face image.
Further, step S30 includes:
Step a: Based on the discriminative model of the preset generative adversarial network, obtain a first loss value from the third face image and the fourth face image.
Further, after the discriminative model is obtained, the third and fourth face images are input into it and its output is obtained; the output is a loss value, and to distinguish the outputs of different models it is recorded as the first loss value.
Step b: Input the second face image and the fourth face image into the preset facial feature extraction model to obtain a second loss value.
After the fourth face image and the facial feature extraction model are obtained, the second and fourth face images are input into the model, and the loss value it outputs is recorded as the second loss value.
Further, step b includes:
Step b1: Input the second and fourth face images into the preset facial feature extraction model to obtain a first facial feature corresponding to the second face image and a second facial feature corresponding to the fourth face image.
Step b2: Calculate the second loss value from the first facial feature and the second facial feature.
After the second and fourth face images are obtained, they are input into the preset facial feature extraction model to obtain the facial features corresponding to each. In this embodiment, for ease of distinction, the feature corresponding to the second face image is recorded as the first facial feature and the feature corresponding to the fourth face image as the second facial feature. The second loss value is then calculated from the two.
Further, step b2 includes:
Step b21: Calculate the feature distance between the first facial feature and the second facial feature.
Step b22: Calculate the second loss value from the feature distance.
Specifically, after the first and second facial features are obtained, the feature distance between them is calculated, and the second loss value is computed from that distance; in this embodiment the cosine-distance algorithm can be used. If the first facial feature is denoted f(B_i), the second facial feature f(A'_i), and the second loss value L_recog, the loss function for the second loss value can be expressed as:

    L_recog = Σ_i ‖f(B_i) − f(A'_i)‖

where f(·) denotes the facial feature extraction model; A'_i = g(A_i, a) denotes the fourth face image, i.e., the cross-age face image obtained by applying the generative model g(·) to the first face image A_i at the target age a; and ‖·‖ denotes the feature distance.
Further, in other embodiments, the second loss value may also be obtained by calculating the image similarity between the second face image and the fourth face image.
Step c: Obtain a target loss value from the first loss value and the second loss value.
After the first and second loss values are obtained, the target loss value is derived from them; for example, the two may simply be added and the sum taken as the target loss value.
Further, step c includes:
Step c1: Obtain a first weight corresponding to the first loss value, and obtain a second weight corresponding to the second loss value.
Step c2: Multiply the first loss value by the first weight to obtain a first product, and multiply the second loss value by the second weight to obtain a second product.
Step c3: Add the first product and the second product to obtain the target loss value.
Further, after the first and second loss values are obtained, the first weight corresponding to the first loss value and the second weight corresponding to the second loss value are retrieved; the weights are preset, and it can be understood that their sum equals 1. The first loss value is multiplied by the first weight to obtain the first product, and the second loss value by the second weight to obtain the second product; the two products are then added to obtain the target loss value. If the first weight is denoted α, the second weight β, the target loss value L, and the first loss value L_GAN, the computation of the target loss value can be expressed as L = αL_GAN + βL_recog.
Specifically, referring to FIG. 2: A is the first face image, B the second face image, A' the fourth face image, and C the third face image; L_GAN is the first loss value obtained through the loss function, and L_recog is the second loss value obtained through the loss function.
Step S40: Update the first model parameters of the generative model according to the target loss value.
It should be noted that, in the embodiments of this application, the parameters of the facial feature extraction model do not need to be updated, while those of the generative and discriminative models do. After the target loss value is obtained, the first model parameters of the generative model are updated according to it.
Further, the method for updating model parameters also includes:
Step o: Update the second model parameters of the discriminative model according to the target loss value.
Further, after the target loss value is obtained, the second model parameters of the discriminative model are updated according to it.
Further, step S40 includes:
Step d: Calculate, from the target loss value, the first gradient value corresponding to each first model parameter in the generative model.
Specifically, after the target loss value is determined, the first gradient value of each first model parameter of the generative model is calculated from it; further, the second gradient value of each second model parameter of the discriminative model is calculated from it as well. Whether in the generative model or the discriminative model, every model parameter has a corresponding gradient value; in this embodiment, based on the target loss value, the gradient value of each model parameter can be obtained through the chain rule for matrix derivatives.
Step e: Update the first model parameter corresponding to each first gradient value according to that gradient value.
After the first and second gradient values are obtained, each first model parameter is updated according to its first gradient value, and the updated generative model is obtained from the updated first model parameters; further, each second model parameter is updated according to its second gradient value, and the updated discriminative model is obtained from the updated second model parameters. Specifically, the product of a gradient value and the update coefficient of the corresponding model parameter can be calculated and subtracted from the original, un-updated parameter to obtain the updated parameter; the update coefficient can be set as needed, and the coefficients of different parameters may be the same or different. In other embodiments, the gradient value may also be subtracted from the un-updated parameter directly.
In this embodiment, the training data are obtained; based on the generative model of the preset generative adversarial network, the fourth face image is obtained from the first face image and the target age; based on the preset facial feature extraction model and the discriminative model of the generative adversarial network, the target loss value is obtained from the fourth, second, and third face images; and the first model parameters of the generative model are updated according to the target loss value. In this way, the facial feature extraction model guides the training of the generative model, and during parameter updates the generative adversarial network and the facial feature extraction model together optimize the generative model's parameters, improving the accuracy of the cross-age face images generated by the generative model.
Further, it should be noted that in this embodiment the facial feature extraction model ensures that the feature distance between the cross-age face image generated by the generative model and the original face image is small enough, guaranteeing the realism of the generated image, while the generative adversarial network ensures that key features of the generated cross-age face image are not lost and that image-mapping ambiguity is unlikely, thereby improving the accuracy of the generated cross-age face images.
Further, a second embodiment of the method for updating model parameters of this application is proposed. It differs from the first embodiment in that the method further includes:
Step f: Determine, according to the target loss value, whether the generative model is in a converged state.
Step g: If it is determined that the generative model is not in the converged state, return to the step of obtaining the data to be trained.
Step h: If it is determined that the generative model is in the converged state, take the generative model corresponding to the converged state as the target generative model.
After the target loss value is obtained, whether the generative model has converged is determined from it. Specifically, it is judged whether the target loss value is below a preset threshold, whose size can be set as needed and is not specifically limited in this embodiment. If the target loss value is below the threshold, the generative model is deemed converged; if it is greater than or equal to the threshold, it is deemed not converged. When convergence is determined, the generative model corresponding to the converged state, that is, the one obtained from the target-loss update at which convergence was determined, is taken as the target generative model; when the model has not converged, training data continue to be obtained and the model continues to be trained until it converges. It should be noted that the process of determining from the target loss value whether the discriminative model has converged is the same as for the generative model and is not repeated here.
Further, the number of updates of the generative model's first model parameters can also be counted: when the count exceeds a preset number, the model is deemed converged and the generative model from the last update of the first model parameters is taken as the target generative model; when the count is at or below the preset number, the model is deemed not converged, and training data continue to be obtained to update the first model parameters. The preset number can be set as needed and is not limited by this embodiment. It should be noted that each update of the first model parameters indicates one training iteration of the generative model.
Further, the method for updating model parameters also includes:
Step i: Upon receiving a face image of a first age, determine a second age corresponding to the face image of the first age.
After the target generative model is obtained, it is detected whether a face image of a first age is received; the image may be captured by a camera, stored in advance, or sent by other terminal equipment. If no such image is received, detection continues; if one is received, the second age corresponding to it is determined. The second age may be preset or carried with the face image of the first age. It can be understood that the first age and the second age are unequal.
Step j: Input the face image of the first age and the corresponding second age into the generative model to obtain a face image corresponding to the second age, where the generative model is a model in a converged state.
After the face image of the first age and the second age are obtained, they are input into the generative model, and the face image corresponding to the second age is obtained; that is, the generative model's output is the aged face image. The generative model in this embodiment is a model in a converged state, i.e., the target generative model; after the face image of the first age and the second age are obtained, they are input into the target generative model to obtain the face image corresponding to the second age. For example, if the second age is 60 and the face image of the first age shows a 25-year-old, the age of the aged face image is 60; that is, the age corresponding to the aged face image is the second age. It can be understood that the face image corresponding to the second age is the cross-age face image to be generated.
Further, step j includes:
Step j1: Input the face image of the first age and the second age into the generative model, and perform feature extraction on them through the generative model to obtain a first feature corresponding to the face image of the first age and a second feature corresponding to the second age.
Step j2: Concatenate the first feature and the second feature to obtain a target feature, and input the target feature into a convolutional neural network to obtain the face image corresponding to the second age.
Specifically, after the face image of the first age and the second age are obtained, they are input into the generative model, which performs feature extraction on them to obtain the first feature corresponding to the face image of the first age and the second feature corresponding to the second age. The first and second features are then concatenated, the result is recorded as the target feature, and the target feature is input into the convolutional neural network to obtain the face image corresponding to the second age; it can be understood that the output of the convolutional neural network is that face image.
In this embodiment, after the generative model's parameters are updated, whether the model has converged is judged from the target loss value; if so, the generative model corresponding to the converged state is taken as the target generative model, and otherwise training continues, further improving the accuracy of cross-age face generation by the resulting target generative model.
In addition, this application provides an apparatus for updating model parameters. Referring to FIG. 3, the apparatus includes:
an acquisition module 10 for obtaining data to be trained, where the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first and second face images belonging to the same user and the first and third face images belonging to different users;
a determination module 20 for obtaining a fourth face image from the first face image and the target age based on the generative model of a preset generative adversarial network, and for obtaining a target loss value from the fourth, second, and third face images based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
an update module 30 for updating the first model parameters of the generative model according to the target loss value.
Further, the determination module 20 includes:
a determination unit for obtaining a first loss value from the third and fourth face images based on the discriminative model of the preset generative adversarial network;
a first input unit for inputting the second and fourth face images into the preset facial feature extraction model to obtain a second loss value;
the determination unit being further configured to obtain the target loss value from the first and second loss values.
Further, the first input unit includes:
an input subunit for inputting the second and fourth face images into the preset facial feature extraction model to obtain a first facial feature corresponding to the second face image and a second facial feature corresponding to the fourth face image;
a first calculation subunit for calculating the second loss value from the first and second facial features.
Further, the determination unit includes:
an obtaining subunit for obtaining a first weight corresponding to the first loss value and a second weight corresponding to the second loss value;
a second calculation subunit for multiplying the first loss value by the first weight to obtain a first product, multiplying the second loss value by the second weight to obtain a second product, and adding the two products to obtain the target loss value.
Further, the update module 30 is also configured to update the second model parameters of the discriminative model according to the target loss value.
Further, the determination module 20 is also configured to determine, according to the target loss value, whether the generative model is in a converged state;
the apparatus for updating model parameters further includes:
an execution module for returning to the step of obtaining the data to be trained if it is determined that the generative model is not in the converged state;
the determination module 20 being further configured to take the generative model corresponding to the converged state as the target generative model if the generative model is determined to be converged.
Further, the determination module 20 is also configured to determine, upon receiving a face image of a first age, the second age corresponding to it;
the apparatus for updating model parameters further includes:
an input module for inputting the face image of the first age and the corresponding second age into the generative model to obtain the face image corresponding to the second age, where the generative model is a model in a converged state.
Further, the input module includes:
a second input unit for inputting the face image of the first age and the second age into the generative model;
a feature extraction unit for performing feature extraction on the face image of the first age and the second age through the generative model to obtain the first feature corresponding to the face image of the first age and the second feature corresponding to the second age;
a connection unit for concatenating the first and second features to obtain a target feature;
the second input unit being further configured to input the target feature into the convolutional neural network to obtain the face image corresponding to the second age.
Further, the update module 30 includes:
a calculation unit for calculating, from the target loss value, the first gradient value corresponding to each first model parameter in the generative model;
an update unit for updating the first model parameter corresponding to each first gradient value according to that gradient value.
The specific implementation of the apparatus for updating model parameters of this application is basically the same as the embodiments of the method for updating model parameters described above and is not repeated here.
In addition, this application provides a device for updating model parameters. As shown in FIG. 4, FIG. 4 is a schematic structural diagram of the hardware operating environment involved in the solutions of the embodiments of this application.
It should be noted that FIG. 4 can serve as the structural diagram of the hardware operating environment of the device for updating model parameters; in the embodiments of this application, the device may be a terminal device such as a PC or a portable computer.
As shown in FIG. 4, the device for updating model parameters may include: a processor 1001 such as a CPU, a memory 1005, a user interface 1003, a network interface 1004, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, it may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk memory; optionally, it may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the device structure shown in FIG. 4 does not constitute a limitation on the device for updating model parameters, which may include more or fewer components than shown, a combination of certain components, or a different arrangement of components.
As shown in FIG. 4, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a model-parameter update program. The operating system is a program that manages and controls the hardware and software resources of the device for updating model parameters and supports the running of the model-parameter update program and other software or programs.
In the device for updating model parameters shown in FIG. 4, the user interface 1003 is mainly used to connect to and communicate with terminal devices, for example receiving the training data they send; the network interface 1004 is mainly used to connect to and exchange data with a background server; and the processor 1001 can be used to call the model-parameter update program stored in the memory 1005 and execute the steps of the method for updating model parameters described above.
The specific implementation of the device for updating model parameters of this application is basically the same as the embodiments of the method for updating model parameters described above and is not repeated here.
In addition, an embodiment of this application also proposes a computer-readable storage medium storing a model-parameter update program which, when executed by a processor, implements the steps of the method for updating model parameters described above.
The specific implementation of the computer-readable storage medium of this application is basically the same as the embodiments of the method for updating model parameters described above and is not repeated here.
It should be noted that, in this document, the terms "comprise", "include", and any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or apparatus that includes it.
The serial numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A method for updating model parameters, wherein the method for updating model parameters includes the following steps:
    obtaining data to be trained, wherein the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first face image and the second face image belonging to face images of the same user, and the first face image and the third face image belonging to face images of different users;
    obtaining a fourth face image from the first face image and the target age, based on the generative model of a preset generative adversarial network;
    obtaining a target loss value from the fourth face image, the second face image, and the third face image, based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
    updating first model parameters of the generative model according to the target loss value.
  2. The method for updating model parameters of claim 1, wherein the step of obtaining the target loss value from the fourth face image, the second face image, and the third face image based on the preset facial feature extraction model and the discriminative model of the generative adversarial network includes:
    obtaining a first loss value from the third face image and the fourth face image, based on the discriminative model of the preset generative adversarial network;
    inputting the second face image and the fourth face image into the preset facial feature extraction model to obtain a second loss value;
    obtaining the target loss value from the first loss value and the second loss value.
  3. The method for updating model parameters of claim 2, wherein the step of inputting the second face image and the fourth face image into the preset facial feature extraction model to obtain the second loss value includes:
    inputting the second face image and the fourth face image into the preset facial feature extraction model to obtain a first facial feature corresponding to the second face image and a second facial feature corresponding to the fourth face image;
    calculating the second loss value from the first facial feature and the second facial feature.
  4. The method for updating model parameters of claim 2, wherein the step of obtaining the target loss value from the first loss value and the second loss value includes:
    obtaining a first weight corresponding to the first loss value, and obtaining a second weight corresponding to the second loss value;
    multiplying the first loss value by the first weight to obtain a first product, and multiplying the second loss value by the second weight to obtain a second product;
    adding the first product and the second product to obtain the target loss value.
  5. The method for updating model parameters of claim 1, wherein after the step of updating the first model parameters of the generative model according to the target loss value, the method further includes:
    updating second model parameters of the discriminative model according to the target loss value.
  6. The method for updating model parameters of claim 1, wherein after the step of updating the first model parameters of the generative model according to the target loss value, the method further includes:
    determining, according to the target loss value, whether the generative model is in a converged state;
    if it is determined that the generative model is not in the converged state, returning to the step of obtaining the data to be trained;
    if it is determined that the generative model is in the converged state, taking the generative model corresponding to the converged state as the target generative model.
  7. The method for updating model parameters of claim 1, wherein after the step of updating the first model parameters of the generative model according to the target loss value, the method further includes:
    upon receiving a face image of a first age, determining a second age corresponding to the face image of the first age;
    inputting the face image of the first age and the corresponding second age into the generative model to obtain a face image corresponding to the second age, wherein the generative model is a model in a converged state.
  8. The method for updating model parameters of claim 7, wherein the step of inputting the face image of the first age and the corresponding second age into the generative model to obtain the face image corresponding to the second age includes:
    inputting the face image of the first age and the second age into the generative model, and performing feature extraction on the face image of the first age and the second age through the generative model to obtain a first feature corresponding to the face image of the first age and a second feature corresponding to the second age;
    concatenating the first feature and the second feature to obtain a target feature, and inputting the target feature into a convolutional neural network to obtain the face image corresponding to the second age.
  9. The method for updating model parameters of any one of claims 1 to 8, wherein the step of updating the first model parameters of the generative model according to the target loss value includes:
    calculating, from the target loss value, a first gradient value corresponding to each first model parameter in the generative model;
    updating the first model parameter corresponding to each first gradient value according to that first gradient value.
  10. An apparatus for updating model parameters, wherein the apparatus for updating model parameters includes:
    an acquisition module for obtaining data to be trained, wherein the data to be trained include at least a target age, a first face image, a second face image, and a third face image, the first face image and the second face image belonging to face images of the same user, and the first face image and the third face image belonging to face images of different users;
    a determination module for obtaining a fourth face image from the first face image and the target age based on the generative model of a preset generative adversarial network, and for obtaining a target loss value from the fourth face image, the second face image, and the third face image based on a preset facial feature extraction model and the discriminative model of the generative adversarial network;
    an update module for updating first model parameters of the generative model according to the target loss value.
  11. A device for updating model parameters, wherein the device includes a memory, a processor, and a model-parameter update program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the method for updating model parameters of claim 1.
  12. A device for updating model parameters, wherein the device includes a memory, a processor, and a model-parameter update program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the method for updating model parameters of claim 2.
  13. A device for updating model parameters, wherein the device includes a memory, a processor, and a model-parameter update program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the method for updating model parameters of claim 3.
  14. A device for updating model parameters, wherein the device includes a memory, a processor, and a model-parameter update program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the method for updating model parameters of claim 4.
  15. A device for updating model parameters, wherein the device includes a memory, a processor, and a model-parameter update program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the method for updating model parameters of claim 5.
  16. A computer-readable storage medium, wherein a model-parameter update program is stored on the computer-readable storage medium; when executed by a processor, the program implements the steps of the method for updating model parameters of claim 1.
  17. A computer-readable storage medium, wherein a model-parameter update program is stored on the computer-readable storage medium; when executed by a processor, the program implements the steps of the method for updating model parameters of claim 2.
  18. A computer-readable storage medium, wherein a model-parameter update program is stored on the computer-readable storage medium; when executed by a processor, the program implements the steps of the method for updating model parameters of claim 3.
  19. A computer-readable storage medium, wherein a model-parameter update program is stored on the computer-readable storage medium; when executed by a processor, the program implements the steps of the method for updating model parameters of claim 4.
  20. A computer-readable storage medium, wherein a model-parameter update program is stored on the computer-readable storage medium; when executed by a processor, the program implements the steps of the method for updating model parameters of claim 5.
PCT/CN2021/092130 2020-05-08 2021-05-07 Method, apparatus, device, and storage medium for updating model parameters WO2021223738A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010385753.4 2020-05-08
CN202010385753.4A CN111553838A (zh) Method, apparatus, device, and storage medium for updating model parameters

Publications (1)

Publication Number Publication Date
WO2021223738A1 (zh)

Family

ID=72007989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092130 WO2021223738A1 (zh) 2020-05-08 2021-05-07 Method, apparatus, device, and storage medium for updating model parameters

Country Status (2)

Country Link
CN (1) CN111553838A (zh)
WO (1) WO2021223738A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677566A (zh) * 2022-04-08 2022-06-28 北京百度网讯科技有限公司 Training method for a deep learning model, and object recognition method and apparatus

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553838A (zh) 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Method, apparatus, device, and storage medium for updating model parameters
CN111967412A (zh) 2020-08-21 2020-11-20 深圳前海微众银行股份有限公司 Face attribute recognition method based on federated learning, client, device, and medium
CN112287792B (zh) 2020-10-22 2023-03-31 深圳前海微众银行股份有限公司 Method and apparatus for collecting face images, and electronic device
CN113221645B (zh) 2021-04-07 2023-12-12 深圳数联天下智能科技有限公司 Target model training method, face image generation method, and related apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846350A (zh) * 2018-06-08 2018-11-20 江苏大学 Face recognition method tolerant of age variation
CN109308450A (zh) * 2018-08-08 2019-02-05 杰创智能科技股份有限公司 Face change prediction method based on a generative adversarial network
CN109523463A (zh) * 2018-11-20 2019-03-26 中山大学 Face aging method based on a conditional generative adversarial network
CN109902546A (zh) * 2018-05-28 2019-06-18 华为技术有限公司 Face recognition method, apparatus, and computer-readable medium
US10496809B1 (en) * 2019-07-09 2019-12-03 Capital One Services, Llc Generating a challenge-response for authentication using relations among objects
CN111553838A (zh) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Method, apparatus, device, and storage medium for updating model parameters

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108399379B (zh) * 2017-08-11 2021-02-12 北京市商汤科技开发有限公司 Method, apparatus, and electronic device for recognizing facial age
CN108985215B (zh) * 2018-07-09 2020-05-22 Oppo(重庆)智能科技有限公司 Picture processing method, picture processing apparatus, and terminal device
CN109472764B (zh) * 2018-11-29 2020-11-10 广州市百果园信息技术有限公司 Method, apparatus, device, and medium for image synthesis and image synthesis model training
CN110084174A (zh) * 2019-04-23 2019-08-02 杭州智趣智能信息技术有限公司 Face recognition method and system, electronic device, and storage medium
CN110110663A (zh) * 2019-05-07 2019-08-09 江苏新亿迪智能科技有限公司 Age recognition method and system based on face attributes
CN110322394A (zh) * 2019-06-18 2019-10-11 中国科学院自动化研究所 Attribute-guided adversarial generation method and apparatus for face aging images


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677566A (zh) * 2022-04-08 2022-06-28 北京百度网讯科技有限公司 Training method for a deep learning model, and object recognition method and apparatus
CN114677566B (zh) * 2022-04-08 2023-10-17 北京百度网讯科技有限公司 Training method for a deep learning model, and object recognition method and apparatus

Also Published As

Publication number Publication date
CN111553838A (zh) 2020-08-18


Legal Events

Date | Code | Title | Description
121 | EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21800812; Country of ref document: EP; Kind code of ref document: A1)
NENP | Non-entry into the national phase (Ref country code: DE)
122 | EP: PCT application non-entry in European phase (Ref document number: 21800812; Country of ref document: EP; Kind code of ref document: A1)