CN108416310B - Method and apparatus for generating information - Google Patents

Method and apparatus for generating information

Info

Publication number
CN108416310B
Authority
CN
China
Prior art keywords
face image
age
sample
generated
neural network
Prior art date
Legal status
Active
Application number
CN201810209172.8A
Other languages
Chinese (zh)
Other versions
CN108416310A (en)
Inventor
汤旭
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN201810209172.8A priority Critical patent/CN108416310B/en
Publication of CN108416310A publication Critical patent/CN108416310A/en
Application granted granted Critical
Publication of CN108416310B publication Critical patent/CN108416310B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data > G06V 40/10 Human or animal bodies)
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/178 Estimating age from face image; using age information for improving recognition
    • G06V 40/179 Metadata assisted face recognition
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting (G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 18/00 Pattern recognition > G06F 18/20 Analysing > G06F 18/21 Design or setup of recognition systems or techniques)
    • G06N 3/045 Combinations of networks (G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture, e.g. interconnection topology)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses a method and an apparatus for generating information. One embodiment of the method comprises: receiving a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image; determining an age tag vector according to the age information; and importing the face image and the age tag vector into a pre-established generation model to obtain a generated face image, wherein the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age represented by the age information, and the generation model characterizes the correspondence from a face image and an age tag vector to a generated face image. This embodiment thereby realizes the generation of information, namely an aged face image.

Description

Method and apparatus for generating information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for generating information.
Background
Face recognition, a biometric technology that identifies a person based on facial feature information, has been widely studied and applied in the field of computer vision. By contrast, comparatively few researchers have so far addressed aged face image generation and cross-age face recognition, although applications in this area are in great demand. For example, such technology can generate an adult face image of a person from a face image taken in childhood, which can help find children who went missing long ago. It can likewise predict a person's appearance several years into the future, adding functionality to electronic devices (such as mobile phones and computers) and improving the user experience.
Disclosure of Invention
The embodiment of the application provides a method and a device for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, where the method includes: receiving a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image; determining an age tag vector according to the age information; and importing the face image and the age tag vector into a pre-established generation model to obtain a generated face image, wherein the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age represented by the age information, and the generation model characterizes the correspondence from a face image and an age tag vector to a generated face image.
In some embodiments, the generation model is obtained, based on a machine learning method, by the following steps: acquiring a first sample set, wherein a first sample comprises a first sample face image and a first sample age label vector; performing the following first training step based on the first sample set: inputting the first sample face image and the first sample age label vector of at least one first sample in the first sample set into a first initial neural network model obtained in advance, to obtain a first generated face image corresponding to each of the at least one first sample; for each first generated face image, inputting the first generated face image into a pre-established discrimination model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age label vector corresponding to that image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and the first sample face image corresponding to it; determining whether the first initial neural network model reaches a preset first optimization target according to the obtained probabilities and the calculated similarities; and in response to determining that the first initial neural network model reaches the preset first optimization target, taking the first initial neural network model as the trained generation model.
In some embodiments, the step of obtaining the generation model based on a machine learning method further includes: in response to determining that the first initial neural network model does not reach the preset first optimization target, adjusting the model parameters of the first initial neural network model based on the probabilities and similarities, and continuing to execute the first training step.
In some embodiments, the discrimination model is trained by the following steps: acquiring a second sample set, wherein a second sample comprises a second sample face image, a second sample age label vector and annotation information, the second sample face images comprise positive sample face images and negative sample face images, a positive sample face image is a real face image, a negative sample face image is a face image generated by the generation model, and the annotation information is used to annotate the positive and negative sample face images; performing the following second training step based on the second sample set: inputting the second sample face image and the second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each of the at least one second sample, the probability that the age of the face object contained in the second sample face image matches the age corresponding to the second sample age label vector and that the second sample face image is a real face image; comparing the probability corresponding to each of the at least one second sample with the corresponding annotation information; determining whether the second initial neural network model reaches a preset second optimization target according to the comparison results; and in response to determining that the second initial neural network model reaches the preset second optimization target, taking the second initial neural network model as the trained discrimination model.
In some embodiments, the step of training the discrimination model further includes: in response to determining that the second initial neural network model does not reach the preset second optimization target, adjusting the model parameters of the second initial neural network model and continuing to execute the second training step.
In some embodiments, the determining an age tag vector according to the age information includes: determining the age category to which the age information belongs, wherein the age categories are obtained by dividing ages into age groups; determining the age tag vector of the age category to which the age information belongs according to a preset correspondence between age categories and age tag vectors; and using the determined age tag vector as the age tag vector of the age information.
In some embodiments, the age tag vector is a one-hot vector.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, where the apparatus includes: a receiving unit, configured to receive a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image; a determining unit, configured to determine an age tag vector according to the age information; and a generating unit, configured to import the face image and the age tag vector into a pre-established generation model to obtain a generated face image, wherein the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age represented by the age information, and the generation model characterizes the correspondence from a face image and an age tag vector to a generated face image.
In some embodiments, the apparatus further comprises a generation model training unit, the generation model training unit comprising: a first acquisition unit configured to acquire a first sample set, wherein a first sample includes a first sample face image and a first sample age label vector; and a first executing unit, configured to execute the following first training step based on the first sample set: inputting the first sample face image and the first sample age label vector of at least one first sample in the first sample set into a first initial neural network model obtained in advance, to obtain a first generated face image corresponding to each of the at least one first sample; for each first generated face image, inputting the first generated face image into a pre-established discrimination model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age label vector corresponding to that image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and the first sample face image corresponding to it; determining whether the first initial neural network model reaches a preset first optimization target according to the obtained probabilities and the calculated similarities; and in response to determining that the first initial neural network model reaches the preset first optimization target, taking the first initial neural network model as the trained generation model.
In some embodiments, the generation model training unit further comprises: a first adjusting unit, configured to, in response to determining that the first initial neural network model does not reach the preset first optimization target, adjust the model parameters of the first initial neural network model based on the probabilities and similarities and continue to execute the first training step.
In some embodiments, the apparatus further comprises a discrimination model training unit, the discrimination model training unit comprising: a second acquisition unit, configured to acquire a second sample set, wherein a second sample comprises a second sample face image, a second sample age label vector and annotation information, the second sample face images comprise positive sample face images and negative sample face images, a positive sample face image is a real face image, a negative sample face image is a face image generated by the generation model, and the annotation information is used to annotate the positive and negative sample face images; and a second executing unit, configured to execute the following second training step based on the second sample set: inputting the second sample face image and the second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each of the at least one second sample, the probability that the age of the face object contained in the second sample face image matches the age corresponding to the second sample age label vector and that the second sample face image is a real face image; comparing the probability corresponding to each of the at least one second sample with the corresponding annotation information; determining whether the second initial neural network model reaches a preset second optimization target according to the comparison results; and in response to determining that the second initial neural network model reaches the preset second optimization target, taking the second initial neural network model as the trained discrimination model.
In some embodiments, the discrimination model training unit further includes: a second adjusting unit, configured to, in response to determining that the second initial neural network model does not reach the preset second optimization target, adjust the model parameters of the second initial neural network model and continue to execute the second training step.
In some embodiments, the determining unit is further configured to: determine the age category to which the age information belongs, wherein the age categories are obtained by dividing ages into age groups; determine the age tag vector of the age category to which the age information belongs according to a preset correspondence between age categories and age tag vectors; and use the determined age tag vector as the age tag vector of the age information.
In some embodiments, the age tag vector is a one-hot vector.
In a third aspect, an embodiment of the present application provides an apparatus, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
The method and apparatus for generating information provided by the embodiments of the application first receive a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image; then determine an age tag vector according to the age information; and finally import the face image and the age tag vector into a pre-established generation model to obtain a generated face image, wherein the face image and the generated face image contain face information of the same person and the age corresponding to the face object in the generated face image matches the age represented by the age information. An aged face image is thus generated from the face image and the age information, realizing the generation of information.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating information according to the present application;
FIG. 3 is a schematic illustration of an application scenario of a method for generating information according to the present application;
FIG. 4 is a flow diagram of one embodiment of a method, within the method for generating information according to the present application, of obtaining the generation model through training based on a machine learning method;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for generating information according to the present application;
fig. 6 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which a method for generating information or an apparatus for generating information of embodiments of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal apparatuses 101, 102, 103 may have installed thereon various communication client applications, such as a camera-like application, an image processing-like application, a search-like application, and the like.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting image display, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server that provides various services, such as a background server that provides support for images or graphics displayed on the terminal devices 101, 102, 103. The background server may feed back data (e.g., image data) to the terminal device for presentation by the terminal device.
It should be noted that the method for generating information provided in the embodiment of the present application may be executed by the terminal devices 101, 102, and 103, may also be executed by the server 105, and may also be executed by the server 105 and the terminal devices 101, 102, and 103 together, and accordingly, the apparatus for generating information may be provided in the server 105, may also be provided in the terminal devices 101, 102, and 103, and may also be provided in part of the unit in the server 105 and in other units in the terminal devices 101, 102, and 103. This is not limited in this application.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating information in accordance with the present application is shown. The method for generating information comprises the following steps:
step 201, receiving a face image and age information.
In this embodiment, an execution subject of the method for generating information (for example, the terminal devices 101, 102, 103 or the server 105 shown in fig. 1) may receive a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image. Here, the age information may be any information representing an age, for example, a specific age (12 years old, 32 years old, etc.). As an example, a user who wants to predict, from a face image of a 20-year-old person, what that person will look like at the age of 40 may send the face image and the age information "age 40" to the execution subject.
Step 202, determining an age tag vector according to the age information.
In this embodiment, the execution subject may determine an age tag vector according to the age information received in step 201. As an example, the execution subject may store in advance a correspondence table between age information and age tag vectors. In this way, the execution subject may compare the age information received in step 201 with the age information in the correspondence table; if a piece of age information in the correspondence table is the same as or close to the received age information, the execution subject may use the age tag vector corresponding to that age information in the table as the age tag vector of the received age information.
In some optional implementations of this embodiment, step 202 may specifically include the following. First, the execution subject may determine the age category to which the age information belongs, wherein the age categories are obtained by dividing ages into age groups. As an example, the ages of persons may be divided in advance into several age groups, each age group being one age category, and the execution subject may determine the age category to which the age information belongs according to the age represented by the age information. Then, the execution subject may determine the age tag vector of that age category according to a preset correspondence between age categories and age tag vectors. Finally, the execution subject may use the determined age tag vector as the age tag vector of the age information.
In some alternative implementations, the age tag vector may be a one-hot vector. As an example, the ages of persons may be divided in advance into 5 age groups: 0-20, 21-30, 31-40, 41-50 and 50+; the one-hot vectors corresponding to these 5 age groups may then be: 00001, 00010, 00100, 01000, 10000. It should be noted that this manner of dividing ages is merely illustrative and not limiting; in actual use, the ages of persons may be divided into any number of age groups according to actual needs.
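For concreteness, the mapping from an age to such a one-hot age tag vector can be sketched in a few lines of Python; the five example age groups above are used as-is, while the function name and bit ordering are illustrative assumptions rather than details fixed by this embodiment:

```python
GROUP_BOUNDS = [20, 30, 40, 50]   # upper bounds of the first four example age groups
NUM_GROUPS = 5                    # 0-20, 21-30, 31-40, 41-50, 50+

def age_to_tag_vector(age: int) -> str:
    """Return the one-hot age tag vector as a bit string, with the lowest bit
    standing for the youngest group (0-20 -> 00001, ..., 50+ -> 10000)."""
    # Index of the first group whose upper bound covers the age; 50+ otherwise.
    group = next((i for i, bound in enumerate(GROUP_BOUNDS) if age <= bound),
                 NUM_GROUPS - 1)
    bits = ["0"] * NUM_GROUPS
    bits[NUM_GROUPS - 1 - group] = "1"
    return "".join(bits)

assert age_to_tag_vector(45) == "01000"   # matches the fig. 3 scenario below
```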
Step 203, importing the face image and the age tag vector into a pre-established generation model to obtain a generated face image.
In this embodiment, the execution subject may store a pre-established generation model, which may be used to characterize the correspondence from a face image and an age tag vector to a generated face image. In this way, the execution subject may import the face image received in step 201 and the age tag vector determined in step 202 into the generation model to obtain a generated face image. Here, the face image and the generated face image contain face information of the same person, and the age corresponding to the face object in the generated face image matches the age represented by the age information.
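As a non-authoritative illustration of step 203, the sketch below shows one way a face image and an age tag vector might be imported into a trained generation model using PyTorch; the generator module, the channel-wise concatenation of the tag vector, and the tensor shapes are assumptions made for illustration, since this embodiment does not fix the network architecture or conditioning scheme:

```python
import torch

def generate_aged_face(generator: torch.nn.Module,
                       face_image: torch.Tensor,      # shape (3, H, W)
                       age_tag_vector: str) -> torch.Tensor:
    """Import a face image and an age tag vector into a trained generation
    model and return the generated (aged) face image."""
    tag = torch.tensor([float(bit) for bit in age_tag_vector])      # (5,)
    _, height, width = face_image.shape
    # Assumed conditioning scheme: broadcast the tag vector over the spatial
    # dimensions and concatenate it to the image as extra input channels.
    tag_planes = tag.view(-1, 1, 1).expand(-1, height, width)       # (5, H, W)
    model_input = torch.cat([face_image, tag_planes], dim=0)        # (8, H, W)
    with torch.no_grad():
        generated = generator(model_input.unsqueeze(0))             # (1, 3, H, W)
    return generated.squeeze(0)
```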
As an example, the generation model may be obtained by the execution subject, or by another electronic device used for training the generation model, in the following manner. First, a sample set may be acquired, wherein each sample comprises a sample face image, a sample age tag vector and an aged face image; the sample face image and the aged face image contain face information of the same person, and the age corresponding to the face object in the aged face image is the same as the age corresponding to the sample age tag vector. For example, if the sample face image is a face image of Zhang San at the age of 20 and the age corresponding to the sample age tag vector is 40, the aged face image is a face image of Zhang San at the age of 40. Then, based on the sample set, the following training step may be performed: inputting the sample face image and the sample age tag vector of at least one sample in the sample set into an initial neural network model obtained in advance, obtaining a corresponding generated face image for each of the at least one sample, where the initial neural network model may include, but is not limited to, a convolutional neural network, a deep neural network, and the like; for the generated face image corresponding to each sample, calculating the similarity between the generated face image and the aged face image in that sample; and determining whether the initial neural network model reaches a preset optimization target according to the calculated similarities. As an example, the optimization target may be that the similarity between the generated face image and the aged face image is greater than a preset threshold. In response to determining that the initial neural network model reaches the preset optimization target, the initial neural network model is taken as the trained generation model. In response to determining that the initial neural network model does not reach the preset optimization target, the model parameters of the initial neural network model are adjusted based on the similarities, and the training step continues to be executed.
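This example training procedure can be summarized in the following illustrative sketch, assuming the similarity is measured as one minus the mean absolute pixel error and the parameters are adjusted with the Adam optimizer; both choices, as well as the threshold value, are assumptions, since the embodiment only requires some similarity measure and some preset optimization target:

```python
import torch

def train_generation_model(model, samples, lr=1e-4, target_similarity=0.9):
    """samples: list of (sample_input, aged_face) tensor pairs, where
    sample_input already bundles a sample face image with its age tag vector
    (e.g. as extra channels, as in the inference sketch above)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    while True:
        similarities = []
        for sample_input, aged_face in samples:
            generated = model(sample_input.unsqueeze(0)).squeeze(0)
            # Similarity taken as 1 - mean absolute pixel error (an assumption;
            # the embodiment only requires some similarity measure).
            loss = torch.nn.functional.l1_loss(generated, aged_face)
            optimizer.zero_grad()
            loss.backward()        # adjust model parameters based on similarity
            optimizer.step()
            similarities.append(1.0 - loss.item())
        # Preset optimization target: average similarity above a threshold.
        if sum(similarities) / len(similarities) >= target_similarity:
            return model           # the training-completed generation model
```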
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present embodiment. In the application scenario of fig. 3, the user first inputs a face image A of a 20-year-old person and the age information "age 45" through the terminal device 301. The terminal device 301 then determines the age tag vector 01000 according to the age information "age 45", and imports the face image A and the age tag vector 01000 into a pre-established generation model to obtain a generated face image B, where the face image A and the generated face image B contain face information of the same person and the age corresponding to the face object in the generated face image B matches the age of 45.
The method provided by the embodiment of the application generates an aged face image from the face image and the age information using the pre-established generation model, thereby realizing the generation of information.
With further reference to FIG. 4, a flow 400 of one embodiment of a method of obtaining the generation model through training based on a machine learning method is shown. The flow 400 of this embodiment comprises the following steps:
step 401, a first sample set is obtained.
In this embodiment, the execution subject, or another electronic device used for training the generation model, may obtain a first sample set, where a first sample in the first sample set may include a first sample face image and a first sample age label vector, and the age represented by the first sample age label vector may be greater than the age of the person corresponding to the face object in the first sample face image.
Step 402, a first training step is performed based on a first set of samples.
In this embodiment, the execution subject, or another electronic device used for training the generation model, may execute the following first training step based on the first sample set, where the first training step may specifically include:
step 4021, inputting a first sample face image and a first sample age label vector of at least one first sample in the first sample set to a first initial neural network model obtained in advance, and obtaining a first generated face image corresponding to each first sample in the at least one first sample.
Here, the first initial neural network model may be an untrained neural network model, or a neural network model whose training has not been completed, for example a convolutional neural network, a deep neural network, or the like.
Step 4022, for each first generated face image: inputting the first generated face image into a pre-established discrimination model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age label vector corresponding to that image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and the first sample face image corresponding to that image.
Here, the similarity may include, but is not limited to, cosine similarity, the Jaccard similarity coefficient, the Euclidean distance, and the like. The first sample age label vector corresponding to a first generated face image is the first sample age label vector that was input into the first initial neural network model when that first generated face image was generated; likewise, the first sample face image corresponding to a first generated face image is the first sample face image that was input at that time. Here, the discrimination model may be used to determine, for a first generated face image, the probability that the age of the face object contained in it matches the age corresponding to the corresponding first sample age label vector and that the first generated face image is a real face image.
As an example, the calculating of the similarity between a first generated face image and the first sample face image corresponding to it may specifically include: first, features of the first generated face image and of the corresponding first sample face image may be extracted respectively; then, the Euclidean distance between the features of the two images is calculated, yielding the similarity between the first generated face image and the corresponding first sample face image.
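A minimal sketch of this similarity computation, assuming some pretrained feature-extractor module is available (the extractor itself and the mapping from Euclidean distance to a similarity score are assumptions, not details fixed by the embodiment):

```python
import torch

def feature_similarity(feature_extractor: torch.nn.Module,
                       generated_image: torch.Tensor,
                       sample_image: torch.Tensor) -> float:
    """Similarity between a first generated face image and its corresponding
    first sample face image via the Euclidean distance between features."""
    with torch.no_grad():   # for evaluation; during training, keep gradients
        generated_features = feature_extractor(generated_image.unsqueeze(0)).flatten()
        sample_features = feature_extractor(sample_image.unsqueeze(0)).flatten()
    distance = torch.dist(generated_features, sample_features, p=2)  # Euclidean
    return 1.0 / (1.0 + distance.item())  # smaller distance -> higher similarity
```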
Step 4023, determining whether the first initial neural network model reaches a preset first optimization target according to the obtained probability and the calculated similarity.
As an example, the first optimization target may be that the obtained probabilities and calculated similarities reach preset thresholds. As another example, a first generated face image may be considered accurate when its probability and similarity reach the preset thresholds; in this case, the first optimization target may be that the accuracy rate of the first generated face images generated by the first initial neural network model is greater than a preset accuracy threshold.
Step 4024, in response to determining that the first initial neural network model reaches a preset first optimization goal, taking the first initial neural network model as a training-completed generation model.
In some optional implementations of this embodiment, the method for training the generative model may further include:
step 403, in response to determining that the first initial neural network model does not reach a preset first optimization goal, adjusting model parameters of the first initial neural network model based on the probability and the similarity, and continuing to execute the first training step. As an example, the executing entity or other electronic device for training the generating model may adjust the model parameters of the first initial neural network model by using a Back propagation Algorithm (BP Algorithm) and a gradient descent method (e.g., a random gradient descent Algorithm) based on the probability and the similarity. It should be noted that the back propagation algorithm and the gradient descent method are well-known technologies that are currently widely researched and applied, and are not described herein again.
In some optional implementations of the embodiment, the discrimination model may be obtained by the execution subject, or by another electronic device used for training the discrimination model, through the following steps:
in step S10, a second sample set is obtained.
In this implementation manner, the execution subject, or another electronic device used for training the discrimination model, may obtain a second sample set, where a second sample may include a second sample face image, a second sample age tag vector and annotation information; the second sample face images may include positive sample face images and negative sample face images, where a positive sample face image is a real face image, a negative sample face image is a face image generated by the generation model, and the annotation information is used to annotate the positive and negative sample face images. For example, the annotation information may be 1 and 0, where 1 denotes a positive sample face image and 0 denotes a negative sample face image. Here, the age represented by the second sample age tag vector may match the age of the person corresponding to the face object in the second sample face image.
In step S20, a second training step is performed based on the second sample set.
In this implementation manner, the execution subject, or another electronic device used for training the discrimination model, may execute the following second training step based on the second sample set, where the second training step may specifically include:
step S201, inputting the second sample face image and the second sample age tag vector of at least one second sample in the second sample set into a second initial neural network model established in advance, to obtain the age of the face object included in the second sample face image of each second sample in the at least one second sample matches with the age corresponding to the second sample age tag vector, and the probability that the second sample face image is a real face image. Here, the second initial neural network model may be an untrained neural network model or an untrained neural network model, such as a convolutional neural network, a deep neural network, or the like.
Step S202, comparing the probability corresponding to each second sample in the at least one second sample with the corresponding annotation information.
Step S203, determining whether the second initial neural network model reaches a preset second optimization target according to the comparison results. As an example, the second optimization target may be that the difference between the obtained probability and the annotation information is smaller than a preset difference threshold. As another example, the second initial neural network model may be considered to have predicted accurately when that difference is smaller than the preset difference threshold; in this case, the second optimization target may be that the prediction accuracy of the second initial neural network model is greater than a preset accuracy threshold.
In step S204, in response to determining that the second initial neural network model reaches the preset second optimization target, the second initial neural network model may be used as the trained discrimination model.
In some optional implementations, the step of training the discriminant model may further include:
step S30, in response to determining that the second initial neural network model does not reach a preset second optimization goal, adjusting model parameters of the second initial neural network model, and continuing to execute the second training step. As an example, the executing entity or other electronic device for training the discriminant model may adjust the model parameters of the second initial neural network model by using a Back propagation Algorithm (BP Algorithm) and a gradient descent method (e.g., a random gradient descent Algorithm) based on the obtained comparison result. It should be noted that the back propagation algorithm and the gradient descent method are well-known technologies that are currently widely researched and applied, and are not described herein again.
In the method of training the generation model according to the above embodiment of the application, during training the model parameters of the generation model are updated based on two signals: the probability that the age of the face object contained in a first generated face image matches the age corresponding to the corresponding first sample age label vector and that the first generated face image is a real face image, and the similarity between the first generated face image and the corresponding first sample face image. This ensures that the age of the face object in a face image produced by the generation model matches the age corresponding to the age label vector, improves the realism of the produced face images, and ensures the similarity between the produced face image and the input face image, i.e. that the input and output are face images of the same person.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for generating information of the present embodiment includes: a receiving unit 501, a determining unit 502 and a generating unit 503. The receiving unit 501 is configured to receive a face image and age information, where the age represented by the age information is greater than the age of the person corresponding to the face object in the face image; the determining unit 502 is configured to determine an age tag vector according to the age information; and the generating unit 503 is configured to import the face image and the age tag vector into a pre-established generation model to obtain a generated face image, where the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age represented by the age information, and the generation model characterizes the correspondence from a face image and an age tag vector to a generated face image.
In this embodiment, specific processes of the receiving unit 501, the determining unit 502, and the generating unit 503 of the apparatus 500 for generating information and technical effects brought by the specific processes can refer to related descriptions of step 201, step 202, and step 203 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of this embodiment, the apparatus 500 may further include a generation model training unit (not shown in the figure), and the generation model training unit may include: a first acquisition unit (not shown in the figure) for acquiring a first sample set, wherein a first sample includes a first sample face image and a first sample age label vector; and a first execution unit (not shown in the figure) configured to execute the following first training step based on the first sample set: inputting the first sample face image and the first sample age label vector of at least one first sample in the first sample set into a first initial neural network model obtained in advance, to obtain a first generated face image corresponding to each of the at least one first sample; for each first generated face image, inputting the first generated face image into a pre-established discrimination model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age label vector corresponding to that image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and the first sample face image corresponding to it; determining whether the first initial neural network model reaches a preset first optimization target according to the obtained probabilities and the calculated similarities; and in response to determining that the first initial neural network model reaches the preset first optimization target, taking the first initial neural network model as the trained generation model.
In some optional implementations of this embodiment, the generation model training unit may further include: a first adjusting unit (not shown in the figure), configured to, in response to determining that the first initial neural network model does not reach the preset first optimization target, adjust the model parameters of the first initial neural network model based on the probabilities and similarities and continue to execute the first training step.
In some optional implementations of this embodiment, the apparatus 500 further includes a discrimination model training unit (not shown in the figure), and the discrimination model training unit may include: a second acquisition unit (not shown in the figure), configured to acquire a second sample set, wherein a second sample includes a second sample face image, a second sample age label vector and annotation information, the second sample face images include positive sample face images and negative sample face images, a positive sample face image is a real face image, a negative sample face image is a face image generated by the generation model, and the annotation information is used to annotate the positive and negative sample face images; and a second execution unit (not shown in the figure), configured to execute the following second training step based on the second sample set: inputting the second sample face image and the second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each of the at least one second sample, the probability that the age of the face object contained in the second sample face image matches the age corresponding to the second sample age label vector and that the second sample face image is a real face image; comparing the probability corresponding to each of the at least one second sample with the corresponding annotation information; determining whether the second initial neural network model reaches a preset second optimization target according to the comparison results; and in response to determining that the second initial neural network model reaches the preset second optimization target, taking the second initial neural network model as the trained discrimination model.
In some optional implementations of this embodiment, the discrimination model training unit may further include: a second adjusting unit (not shown in the figure), configured to, in response to determining that the second initial neural network model does not reach the preset second optimization target, adjust the model parameters of the second initial neural network model and continue to execute the second training step.
In some optional implementations of this embodiment, the determining unit 502 may be further configured to: determine the age category to which the age information belongs, wherein the age categories are obtained by dividing ages into age groups; determine the age tag vector of the age category to which the age information belongs according to a preset correspondence between age categories and age tag vectors; and use the determined age tag vector as the age tag vector of the age information.
In some optional implementations of this embodiment, the age tag vector is a one-hot vector.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a terminal device or server of an embodiment of the present application. The terminal device or the server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. The drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, which may be described as: a processor comprising a receiving unit, a determining unit, and a generating unit. The names of these units do not, in some cases, limit the units themselves; for example, the receiving unit may also be described as "a unit that receives a face image and age information".
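As a brief illustration of this unit decomposition, a software implementation might group the three units as callables on a single object, mirroring the claim language. The Python sketch below is schematic and assumes nothing beyond the decomposition itself; the class and method names are illustrative, not mandated by the application.

class InformationGenerationApparatus:
    """Schematic grouping of the receiving, determining, and generating
    units; only the unit boundaries, not the class form, matter here."""

    def __init__(self, generative_model, tag_vector_fn):
        self.generative_model = generative_model  # pre-established generative model
        self.tag_vector_fn = tag_vector_fn        # maps age information to an age tag vector

    def receiving_unit(self, face_image, age_info):
        # The unit's name does not limit it; it could equally be described
        # as "a unit that receives a face image and age information".
        return face_image, age_info

    def determining_unit(self, age_info):
        return self.tag_vector_fn(age_info)

    def generating_unit(self, face_image, tag_vector):
        return self.generative_model(face_image, tag_vector)

    def run(self, face_image, age_info):
        face_image, age_info = self.receiving_unit(face_image, age_info)
        tag_vector = self.determining_unit(age_info)
        return self.generating_unit(face_image, tag_vector)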
As another aspect, the present application also provides a computer readable medium, which may be included in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: receive a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image; determine an age tag vector from the age information; and import the face image and the age tag vector into a pre-established generative model to obtain a generated face image, wherein the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age represented by the age information, and the generative model represents the correspondence from a face image and an age tag vector to a generated face image.
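To make the pipeline above concrete, the following minimal Python sketch walks the same three steps: receive the inputs, map the age information to the one-hot age tag vector of its age bracket, and invoke the generative model. The bracket boundaries and the generator's calling convention are assumptions made for illustration only; the application fixes neither.

import numpy as np

# Illustrative age brackets; the application divides ages into
# categories but does not specify these boundaries.
AGE_BRACKETS = [(0, 17), (18, 29), (30, 44), (45, 59), (60, 120)]

def age_tag_vector(age: int) -> np.ndarray:
    """Map an age to the one-hot tag vector of its age category."""
    vec = np.zeros(len(AGE_BRACKETS), dtype=np.float32)
    for i, (low, high) in enumerate(AGE_BRACKETS):
        if low <= age <= high:
            vec[i] = 1.0
            return vec
    raise ValueError(f"age {age} falls outside all brackets")

def generate_aged_face(face_image: np.ndarray, age: int, generator):
    """Run the trained generative model on a face image and a target age.
    `generator` stands in for the pre-established generative model; its
    (image, tag vector) calling convention is an assumption."""
    tag = age_tag_vector(age)
    return generator(face_image, tag)

Here generator would be the network trained as described in claims 2 and 3 below; any callable with the same signature can be substituted when experimenting.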
The above description is only a description of preferred embodiments of the present application and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention disclosed herein is not limited to the particular combination of features described above, but also covers other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, arrangements in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the present application.

Claims (14)

1. A method for generating information, comprising:
receiving a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image;
determining an age tag vector from the age information, comprising: determining the age category to which the age information belongs, wherein the age categories are obtained by dividing ages into age brackets; determining the age tag vector of the age category to which the age information belongs according to a preset correspondence between age categories and age tag vectors; and using the determined age tag vector as the age tag vector of the age information;
importing the face image and the age tag vector into a pre-established generative model to obtain a generated face image, wherein the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age represented by the age information, and the generative model represents the correspondence from a face image and an age tag vector to a generated face image; the model parameters of the generative model are updated, during training, based on the probability that the age of the face object contained in a generated face image matches the age corresponding to the sample age tag vector and that the generated face image is a real face image, and on the similarity between the generated face image and the sample face image.
2. The method of claim 1, wherein the generative model is obtained by a machine learning method comprising:
acquiring a first sample set, wherein each first sample comprises a first sample face image and a first sample age tag vector;
performing the following first training step based on the first sample set: inputting the first sample face image and the first sample age tag vector of at least one first sample in the first sample set into a first initial neural network model obtained in advance, to obtain a first generated face image corresponding to each of the at least one first sample; for each first generated face image, inputting the first generated face image into a pre-established discriminative model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age tag vector of that image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and its corresponding first sample face image; determining, according to the obtained probabilities and the calculated similarities, whether the first initial neural network model reaches a preset first optimization target; and in response to determining that the first initial neural network model reaches the preset first optimization target, using the first initial neural network model as the trained generative model.
3. The method of claim 2, wherein obtaining the generative model by the machine learning method further comprises:
in response to determining that the first initial neural network model does not reach the preset first optimization target, adjusting the model parameters of the first initial neural network model based on the probabilities and the similarities, and continuing to perform the first training step.
4. The method of claim 2, wherein the discriminative model is trained by:
acquiring a second sample set, wherein each second sample comprises a second sample face image, a second sample age tag vector, and annotation information; the second sample face images comprise positive sample face images and negative sample face images, a positive sample face image being a real face image and a negative sample face image being a face image produced by the generative model, and the annotation information labeling the face images as positive or negative;
performing the following second training step based on the second sample set: inputting the second sample face image and the second sample age tag vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each of the at least one second sample, the probability that the age of the face object contained in the second sample face image matches the age corresponding to the second sample age tag vector and that the second sample face image is a real face image; comparing the probability corresponding to each of the at least one second sample with the corresponding annotation information; determining, according to the comparison results, whether the second initial neural network model reaches a preset second optimization target; and in response to determining that the second initial neural network model reaches the preset second optimization target, using the second initial neural network model as the trained discriminative model.
5. The method of claim 4, wherein training the discriminative model further comprises:
in response to determining that the second initial neural network model does not reach the preset second optimization target, adjusting the model parameters of the second initial neural network model, and continuing to perform the second training step.
6. The method of claim 1, wherein the age tag vector is a one-hot encoded vector.
7. An apparatus for generating information, comprising:
a receiving unit for receiving a face image and age information, wherein the age represented by the age information is greater than the age of the person corresponding to the face object in the face image;
a determining unit for determining an age tag vector from the age information;
a generating unit for importing the face image and the age tag vector into a pre-established generative model to obtain a generated face image, wherein the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age represented by the age information, and the generative model represents the correspondence from a face image and an age tag vector to a generated face image; the model parameters of the generative model are updated, during training, based on the probability that the age of the face object contained in a generated face image matches the age corresponding to the sample age tag vector and that the generated face image is a real face image, and on the similarity between the generated face image and the sample face image;
wherein the determining unit is further configured to:
determine the age category to which the age information belongs, wherein the age categories are obtained by dividing ages into age brackets;
determine the age tag vector of the age category to which the age information belongs according to a preset correspondence between age categories and age tag vectors;
and use the determined age tag vector as the age tag vector of the age information.
8. The apparatus of claim 7, wherein the apparatus further comprises a generative model training unit comprising:
a first acquisition unit for acquiring a first sample set, wherein each first sample comprises a first sample face image and a first sample age tag vector;
a first execution unit for performing the following first training step based on the first sample set: inputting the first sample face image and the first sample age tag vector of at least one first sample in the first sample set into a first initial neural network model obtained in advance, to obtain a first generated face image corresponding to each of the at least one first sample; for each first generated face image, inputting the first generated face image into a pre-established discriminative model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age tag vector of that image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and its corresponding first sample face image; determining, according to the obtained probabilities and the calculated similarities, whether the first initial neural network model reaches a preset first optimization target; and in response to determining that the first initial neural network model reaches the preset first optimization target, using the first initial neural network model as the trained generative model.
9. The apparatus of claim 8, wherein the generative model training unit further comprises:
a first adjusting unit for, in response to determining that the first initial neural network model does not reach the preset first optimization target, adjusting the model parameters of the first initial neural network model based on the probabilities and the similarities, and continuing to perform the first training step.
10. The apparatus of claim 8, wherein the apparatus further comprises a discriminative model training unit comprising:
a second acquisition unit for acquiring a second sample set, wherein each second sample comprises a second sample face image, a second sample age tag vector, and annotation information; the second sample face images comprise positive sample face images and negative sample face images, a positive sample face image being a real face image and a negative sample face image being a face image produced by the generative model, and the annotation information labeling the face images as positive or negative;
a second execution unit for performing the following second training step based on the second sample set: inputting the second sample face image and the second sample age tag vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each of the at least one second sample, the probability that the age of the face object contained in the second sample face image matches the age corresponding to the second sample age tag vector and that the second sample face image is a real face image; comparing the probability corresponding to each of the at least one second sample with the corresponding annotation information; determining, according to the comparison results, whether the second initial neural network model reaches a preset second optimization target; and in response to determining that the second initial neural network model reaches the preset second optimization target, using the second initial neural network model as the trained discriminative model.
11. The apparatus of claim 10, wherein the discriminative model training unit further comprises:
a second adjusting unit for, in response to determining that the second initial neural network model does not reach the preset second optimization target, adjusting the model parameters of the second initial neural network model and continuing to perform the second training step.
12. The apparatus of claim 7, wherein the age tag vector is a one-hot encoded vector.
13. An apparatus, comprising:
one or more processors;
a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
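For readers implementing the training procedure recited in claims 2 through 5: it amounts to adversarial training of a conditional generator against an age-aware discriminator, plus a similarity term that keeps the generated face close to the input face. The PyTorch sketch below is one plausible realization, not the method as filed; the network interfaces, the L1 form of the similarity term, the loss weight, and the stopping criterion are all assumptions, since the claims specify only the optimization targets.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, real_faces, target_tags, source_tags):
    """One combined training step. G maps (face, age tag) -> aged face;
    D maps (face, age tag) -> probability that the face is real and its
    apparent age matches the tag. Both interfaces are assumed."""
    bce = nn.BCELoss()
    real_label = torch.ones(real_faces.size(0), 1)
    fake_label = torch.zeros(real_faces.size(0), 1)

    # Discriminative-model step (claims 4 and 5): positive samples are real
    # face images, negative samples are images produced by the generator.
    fake_faces = G(real_faces, target_tags).detach()
    d_loss = (bce(D(real_faces, source_tags), real_label)
              + bce(D(fake_faces, target_tags), fake_label))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generative-model step (claims 2 and 3): fool the discriminator while
    # staying similar to the input face, preserving identity.
    fake_faces = G(real_faces, target_tags)
    adv_loss = bce(D(fake_faces, target_tags), real_label)
    sim_loss = F.l1_loss(fake_faces, real_faces)  # similarity term
    g_loss = adv_loss + 10.0 * sim_loss           # weight 10.0 is an assumption
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

Under this reading, "reaching the preset optimization target" in claims 2 and 4 corresponds to the respective loss falling below a threshold or ceasing to improve, at which point the current networks are retained as the trained generative and discriminative models; claims 3, 5, 9, and 11 cover the parameter-adjustment branch taken otherwise.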
CN201810209172.8A 2018-03-14 2018-03-14 Method and apparatus for generating information Active CN108416310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810209172.8A CN108416310B (en) 2018-03-14 2018-03-14 Method and apparatus for generating information

Publications (2)

Publication Number Publication Date
CN108416310A CN108416310A (en) 2018-08-17
CN108416310B (en) 2022-01-28

Family

ID=63131350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810209172.8A Active CN108416310B (en) 2018-03-14 2018-03-14 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN108416310B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636867B * 2018-10-31 2023-05-23 Baidu Online Network Technology (Beijing) Co., Ltd. Image processing method and device and electronic equipment
CN109359626A * 2018-11-21 2019-02-19 Hefei Jinnuo Digital Technology Co., Ltd. Image-acquisition complexion prediction apparatus and prediction method
CN111259698B * 2018-11-30 2023-10-13 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for acquiring image
CN111259695B * 2018-11-30 2023-08-29 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for acquiring information
CN109829071B * 2018-12-14 2023-09-05 Ping An Technology (Shenzhen) Co., Ltd. Face image searching method, server, computer device and storage medium
CN110009018B * 2019-03-25 2023-04-18 Tencent Technology (Shenzhen) Co., Ltd. Image generation method and device and related equipment
CN110322398B * 2019-07-09 2022-10-28 Xiamen Meitu Zhijia Technology Co., Ltd. Image processing method, image processing device, electronic equipment and computer readable storage medium
JP7438690B2 * 2019-08-09 2024-02-27 Nippon Television Network Corporation Information processing device, image recognition method, and learning model generation method
CN110674748B * 2019-09-24 2024-02-13 Tencent Technology (Shenzhen) Co., Ltd. Image data processing method, apparatus, computer device, and readable storage medium
CN111145080B * 2019-12-02 2023-06-23 Beijing Dajia Internet Information Technology Co., Ltd. Training method of image generation model, image generation method and device
CN111209878A * 2020-01-10 2020-05-29 Household Registration Administration Research Center of the Ministry of Public Security Cross-age face recognition method and device
CN111581422A * 2020-05-08 2020-08-25 Alipay (Hangzhou) Information Technology Co., Ltd. Information processing method and device based on face recognition
CN111461971B * 2020-05-19 2023-04-18 Beijing ByteDance Network Technology Co., Ltd. Image processing method, device, equipment and computer readable storage medium
CN112163505A * 2020-09-24 2021-01-01 Beijing ByteDance Network Technology Co., Ltd. Method, device, equipment and computer readable medium for generating image

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN102306281B * 2011-07-13 2013-11-27 Southeast University Multi-modal automatic estimation method for human age
CN107045622B * 2016-12-30 2020-06-02 Zhejiang University Face age estimation method based on adaptive age distribution learning

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US8520906B1 * 2007-09-24 2013-08-27 Videomining Corporation Method and system for age estimation based on relative ages of pairwise facial images of people
CN101556699A * 2008-11-07 2009-10-14 Zhejiang University Face-based facial aging image synthesis method
CN105787974A * 2014-12-24 2016-07-20 Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences Method for establishing a bionic human facial aging model
CN107194868A * 2017-05-19 2017-09-22 Chengdu Tongjia Youbo Technology Co., Ltd. Face image synthesis method and apparatus

Non-Patent Citations (2)

Title
Toward Automatic Simulation of Aging Effects on Face Images; Andreas Lanitis et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2002-08-07; Vol. 24, No. 4; pp. 442-455 *
Image-based age estimation and face age image reconstruction; Hu Lan; China Master's Theses Full-text Database, Information Science and Technology Series; 2007-12-15; pp. 44-56 *

Also Published As

Publication number Publication date
CN108416310A (en) 2018-08-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant