CN108416310A - Method and apparatus for generating information - Google Patents
- Publication number
- CN108416310A (application number CN201810209172.8A)
- Authority
- CN
- China
- Prior art keywords
- age
- facial image
- sample
- mentioned
- label vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/178—Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/179—Human faces, e.g. facial parts, sketches or expressions metadata assisted face recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The embodiments of the present application disclose a method and apparatus for generating information. One specific implementation of the method includes: receiving a face image and age information, where the age characterized by the age information is greater than the age of the person corresponding to the face object in the face image; determining an age label vector according to the age information; and importing the face image and the age label vector into a pre-established generation model to obtain a generated face image, where the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age characterized by the age information, and the generation model is used to characterize the correspondence between face images with age label vectors and generated face images. This embodiment realizes the generation of information.
Description
Technical field
The embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating information.
Background
Face recognition is a biometric identification technology that performs identity recognition based on facial feature information, and it has been widely studied and applied in the field of computer vision. By contrast, fewer researchers currently focus on face aging image generation and cross-age face recognition, although research in this area has great application demand. For example, it can be used to synthesize the adult face image of a person from a face image taken in infancy, which can help find lost children. As another example, it can be used to predict a person's appearance several years later, adding functionality to electronic devices (for example, mobile phones and computers) and improving user experience.
Summary of the invention
The embodiments of the present application propose a method and apparatus for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, the method including: receiving a face image and age information, where the age characterized by the age information is greater than the age of the person corresponding to the face object in the face image; determining an age label vector according to the age information; and importing the face image and the age label vector into a pre-established generation model to obtain a generated face image, where the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age characterized by the age information, and the generation model is used to characterize the correspondence between face images with age label vectors and generated face images.
In some embodiments, the generation model is obtained based on a machine learning method through the following steps: acquiring a first sample set, where a first sample includes a first sample face image and a first sample age label vector; and executing the following first training step based on the first sample set: inputting the first sample face image and the first sample age label vector of at least one first sample in the first sample set into a previously obtained first initial neural network model to obtain a first generated face image corresponding to each first sample of the at least one first sample; for each first generated face image, inputting the first generated face image into a pre-established discrimination model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age label vector of that first generated face image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and the first sample face image corresponding to it; determining whether the first initial neural network model reaches a preset first optimization objective according to the obtained probability and the calculated similarity; and in response to determining that the first initial neural network model reaches the preset first optimization objective, using the first initial neural network model as the trained generation model.
In some embodiments, the step of obtaining the generation model based on the machine learning method further includes: in response to determining that the first initial neural network model does not reach the preset first optimization objective, adjusting the model parameters of the first initial neural network model based on the probability and the similarity, and continuing to execute the first training step.
In some embodiments, the discrimination model is obtained through the following training steps: acquiring a second sample set, where a second sample includes a second sample face image, a second sample age label vector, and annotation information; the second sample face images include positive sample face images and negative sample face images, where a positive sample face image is a real face image, a negative sample face image is a face image generated by the generation model, and the annotation information is used to label positive and negative sample face images; and executing the following second training step based on the second sample set: inputting the second sample face image and the second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model to obtain, for each second sample of the at least one second sample, the probability that the age of the face object contained in the second sample face image matches the age corresponding to the second sample age label vector and that the second sample face image is a real face image; comparing the probability corresponding to each second sample of the at least one second sample with the corresponding annotation information; determining whether the second initial neural network model reaches a preset second optimization objective according to the comparison result; and in response to determining that the second initial neural network model reaches the preset second optimization objective, using the second initial neural network model as the trained discrimination network.
In some embodiments, the step of obtaining the discrimination model further includes: in response to determining that the second initial neural network model does not reach the preset second optimization objective, adjusting the model parameters of the second initial neural network model and continuing to execute the second training step.
In some embodiments, determining the age label vector according to the age information includes: determining the age category to which the age information belongs, where age categories are obtained by division according to age brackets; determining, according to a preset correspondence between age categories and age label vectors, the age label vector of the age category to which the age information belongs; and using the determined age label vector as the age label vector of the age information.
In some embodiments, the age label vector is a one-hot vector.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, the apparatus including: a receiving unit for receiving a face image and age information, where the age characterized by the age information is greater than the age of the person corresponding to the face object in the face image; a determination unit for determining an age label vector according to the age information; and a generation unit for importing the face image and the age label vector into a pre-established generation model to obtain a generated face image, where the face image and the generated face image contain face information of the same person, the age corresponding to the face object in the generated face image matches the age characterized by the age information, and the generation model is used to characterize the correspondence between face images with age label vectors and generated face images.
In some embodiments, the apparatus further includes a generation model training unit, which includes: a first acquisition unit for acquiring a first sample set, where a first sample includes a first sample face image and a first sample age label vector; and a first execution unit for executing the following first training step based on the first sample set: inputting the first sample face image and the first sample age label vector of at least one first sample in the first sample set into a previously obtained first initial neural network model to obtain a first generated face image corresponding to each first sample of the at least one first sample; for each first generated face image, inputting the first generated face image into a pre-established discrimination model to obtain the probability that the age of the face object contained in the first generated face image matches the age corresponding to the first sample age label vector of that first generated face image and that the first generated face image is a real face image, and calculating the similarity between the first generated face image and the first sample face image corresponding to it; determining whether the first initial neural network model reaches a preset first optimization objective according to the obtained probability and the calculated similarity; and in response to determining that the first initial neural network model reaches the preset first optimization objective, using the first initial neural network model as the trained generation model.
In some embodiments, the generation model training unit further includes: a first adjustment unit for, in response to determining that the first initial neural network model does not reach the preset first optimization objective, adjusting the model parameters of the first initial neural network model based on the probability and the similarity and continuing to execute the first training step.
In some embodiments, the apparatus further includes a discrimination model training unit, which includes: a second acquisition unit for acquiring a second sample set, where a second sample includes a second sample face image, a second sample age label vector, and annotation information; the second sample face images include positive sample face images and negative sample face images, where a positive sample face image is a real face image, a negative sample face image is a face image generated by the generation model, and the annotation information is used to label positive and negative sample face images; and a second execution unit for executing the following second training step based on the second sample set: inputting the second sample face image and the second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model to obtain, for each second sample of the at least one second sample, the probability that the age of the face object contained in the second sample face image matches the age corresponding to the second sample age label vector and that the second sample face image is a real face image; comparing the probability corresponding to each second sample of the at least one second sample with the corresponding annotation information; determining whether the second initial neural network model reaches a preset second optimization objective according to the comparison result; and in response to determining that the second initial neural network model reaches the preset second optimization objective, using the second initial neural network model as the trained discrimination network.
In some embodiments, the discrimination model training unit further includes: a second adjustment unit for, in response to determining that the second initial neural network model does not reach the preset second optimization objective, adjusting the model parameters of the second initial neural network model and continuing to execute the second training step.
In some embodiments, the determination unit is further configured to: determine the age category to which the age information belongs, where age categories are obtained by division according to age brackets; determine, according to a preset correspondence between age categories and age label vectors, the age label vector of the age category to which the age information belongs; and use the determined age label vector as the age label vector of the age information.
In some embodiments, the age label vector is a one-hot vector.
In a third aspect, an embodiment of the present application provides a device, the device including: one or more processors; and a storage apparatus for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for generating information provided by the embodiments of the present application first receive a face image and age information, where the age characterized by the age information is greater than the age of the person corresponding to the face object in the face image; then determine an age label vector according to the age information; and finally import the face image and the age label vector into a pre-established generation model to obtain a generated face image, where the face image and the generated face image contain face information of the same person and the age corresponding to the face object in the generated face image matches the age characterized by the age information. A face image after face aging is thereby generated according to the face image and the age information, realizing the generation of information.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the attached drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of an embodiment of the method for generating information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present application;
Fig. 4 is a flowchart of an embodiment of the method for training the generation model based on a machine learning method in the method for generating information according to the present application;
Fig. 5 is a structural schematic diagram of an embodiment of the apparatus for generating information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted for implementing the terminal device or server of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the attached drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, rather than to limit the invention. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the attached drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the attached drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for generating information or the apparatus for generating information of the embodiments of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, and 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
A user may use the terminal devices 101, 102, and 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, and 103, such as photography and video applications, image processing applications, and search applications.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support image display, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, such as a background server providing support for images or graphics displayed on the terminal devices 101, 102, and 103. The background server may feed data (such as image data) back to the terminal devices for display.
It should be noted that the method for generating information provided by the embodiments of the present application may be executed by the terminal devices 101, 102, and 103, by the server 105, or jointly by the server 105 and the terminal devices 101, 102, and 103. Correspondingly, the apparatus for generating information may be provided in the server 105, in the terminal devices 101, 102, and 103, or partly as units in the server 105 with other units in the terminal devices 101, 102, and 103. The present application does not limit this.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are only schematic. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for generating information according to the present application is shown. The method for generating information includes the following steps:
Step 201: receive a face image and age information.
In the present embodiment, the executing body of the method for generating information (such as the terminal devices 101, 102, and 103 or the server 105 shown in Fig. 1) may receive a face image and age information, where the age characterized by the age information is greater than the age of the person corresponding to the face object in the face image. Here, the age information may be various information characterizing an age, for example, a specific age (12 years old, 32 years old, etc.). As an example, a user wishes to predict the appearance of a person at 40 years old according to a face image of that person at 20 years old; in this case, the user may send this face image and the age information "40 years old" to the executing body.
Step 202: determine an age label vector according to the age information.
In the present embodiment, the executing body may determine the age label vector according to the age information received in step 201. As an example, a correspondence table of age information and age label vectors may be stored in advance in the executing body. The executing body may then compare the age information received in step 201 with the age information in the correspondence table; if a piece of age information in the correspondence table is identical or similar to the received age information, the age label vector corresponding to that piece of age information in the correspondence table is used as the age label vector of the received age information.
In some optional implementations of the present embodiment, step 202 may specifically include the following. First, the executing body may determine the age category to which the age information belongs, where age categories are obtained by division according to age brackets. As an example, the ages of people may be divided into several age brackets in advance, with each age bracket serving as an age category; the executing body may determine the age category to which the age information belongs according to the age characterized by the age information. Then, the executing body may determine the age label vector of the age category to which the age information belongs according to a preset correspondence between age categories and age label vectors. Finally, the executing body may use the determined age label vector as the age label vector of the age information.
In some optional implementations, the age label vector may be a one-hot vector. As an example, the ages of people may be divided into 5 age brackets in advance: 0-20, 21-30, 31-40, 41-50, and 50+; the one-hot vectors corresponding to these 5 age brackets may then be, respectively: 00001, 00010, 00100, 01000, and 10000. It should be noted that the age division in this implementation is only schematic, rather than a restriction on how ages may be divided; in actual use, the ages of people may be divided into arbitrary age brackets according to actual needs.
Step 203: import the face image and the age label vector into a pre-established generation model to obtain a generated face image.
In the present embodiment, a pre-established generation model may be stored in the executing body, and the generation model may be used to characterize the correspondence between face images with age label vectors and generated face images. The executing body may thus import the face image received in step 201 and the age label vector determined in step 202 into the generation model to obtain a generated face image. Here, the face image and the generated face image contain face information of the same person, and the age corresponding to the face object in the generated face image matches the age characterized by the age information.
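Steps 201 through 203 can be tied together in a short end-to-end sketch. `generation_model` is a hypothetical stand-in for the pre-established generation model, and the bracket-to-vector mapping mirrors the one-hot example above:

```python
def generate_aged_face(face_image, target_age_years, generation_model):
    """Illustrative end-to-end flow: build the age label vector from the
    received age information, then feed it with the face image into the
    (stand-in) generation model."""
    brackets = [(0, 20), (21, 30), (31, 40), (41, 50), (51, 200)]
    age_vector = [0] * 5
    for i, (lo, hi) in enumerate(brackets):
        if lo <= target_age_years <= hi:
            age_vector[4 - i] = 1  # 0-20 -> 00001, ..., 50+ -> 10000
            break
    return generation_model(face_image, age_vector)
```

The real model would be a trained neural network; here any callable taking `(face_image, age_vector)` stands in for it.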
As an example, the generation model may be obtained by the executing body, or by another electronic device used to train the generation model, through training in the following manner. First, a sample set may be acquired, where a sample includes a sample face image, a sample age label vector, and an aging face image. The sample face image and the aging face image contain face information of the same person, and the age corresponding to the face object in the aging face image is the same as the age corresponding to the sample age label vector. For example, if the sample face image is a face image of Zhang San at 20 years old and the age corresponding to the sample age label vector is 40 years old, then the aging face image is a face image of Zhang San at 40 years old. Then, the following training step may be executed based on the sample set: the sample face image and the sample age label vector of at least one sample in the sample set are input into a previously obtained initial neural network model to obtain a generated face image corresponding to each sample of the at least one sample. Here, the initial neural network model may include, but is not limited to, a convolutional neural network, a deep neural network model, and the like. For the generated face image corresponding to each sample, the similarity between the generated face image and the aging face image in the sample is calculated. Whether the initial neural network model reaches a preset optimization objective is determined according to the calculated similarity. As an example, the optimization objective may be that the similarity between the generated face image and the aging face image is greater than a preset threshold. In response to determining that the initial neural network model reaches the preset optimization objective, the initial neural network model is used as the trained generation model. In response to determining that the initial neural network model does not reach the preset optimization objective, the model parameters of the initial neural network model are adjusted based on the similarity, and the training step is continued.
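The supervised training loop just described can be sketched as follows. `model`, `similarity`, and `adjust` are hypothetical stand-ins for the initial neural network model, an image-similarity measure, and a parameter-update rule; the threshold and round limit are illustrative assumptions, not the patent's implementation:

```python
def train_generation_model(sample_set, model, similarity, adjust,
                           threshold=0.9, max_rounds=100):
    """Repeat the training step until the preset optimization objective
    (all similarities above the threshold) is reached."""
    for _ in range(max_rounds):
        # Similarity between each generated face image and its aging face image.
        sims = [similarity(model(face, age_vec), aging)
                for face, age_vec, aging in sample_set]
        if min(sims) > threshold:        # optimization objective reached
            return model                 # use as the trained generation model
        model = adjust(model, sims)      # otherwise adjust parameters and repeat
    return model
```

In practice `adjust` would be a gradient-based update of the network's weights; any callable with this shape makes the control flow concrete.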
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to this embodiment. In the application scenario of Fig. 3, a user first inputs a facial image A of a 20-year-old person and the age information "45 years old" through a terminal device 301. The terminal device 301 then determines the age label vector 01000 according to the age information "45 years old", and imports the facial image A and the age label vector 01000 into a pre-established generation model to obtain a generated facial image B. Here, the facial image A and the generated facial image B contain the face information of the same person, and the age corresponding to the face object in the generated facial image B matches 45 years old.
The method provided by the above embodiment of the application uses a pre-established generation model to generate an aged facial image from a facial image and age information, thereby realizing the generation of information.
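As an illustration of how age information such as "45 years old" may map to a label vector such as 01000, the following sketch assumes five hypothetical age brackets; the embodiment does not fix the bracket boundaries, so `AGE_BRACKETS` is an assumption made only for this example.

```python
# Hypothetical age brackets; the embodiment does not specify their boundaries.
AGE_BRACKETS = [(0, 30), (31, 50), (51, 65), (66, 80), (81, 120)]

def age_label_vector(age: int) -> list[int]:
    """Return a one-hot vector whose hot position is the age bracket
    containing `age`, e.g. 45 -> [0, 1, 0, 0, 0] (written "01000")."""
    vec = [0] * len(AGE_BRACKETS)
    for i, (lo, hi) in enumerate(AGE_BRACKETS):
        if lo <= age <= hi:
            vec[i] = 1
            return vec
    raise ValueError(f"age {age} outside all brackets")
```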
With further reference to Fig. 4, it illustrates a flow 400 of an embodiment of a method for training the generation model based on a machine learning method. The flow 400 of this embodiment includes the following steps:
Step 401: obtain a first sample set.
In this embodiment, the above execution body, or another electronic device used for training the generation model, may obtain a first sample set. A first sample in the first sample set may include a first sample facial image and a first sample age label vector. Here, the age characterized by the first sample age label vector may be greater than the age of the person corresponding to the face object in the first sample facial image.
Step 402: execute a first training step based on the first sample set.
In this embodiment, the above execution body, or another electronic device used for training the generation model, may execute the following first training step based on the first sample set. The first training step may specifically include:
Step 4021: input the first sample facial image and first sample age label vector of at least one first sample in the first sample set into a previously obtained first initial neural network model, to obtain a first generated facial image corresponding to each first sample in the at least one first sample.
Here, the first initial neural network model may be an untrained neural network model or a neural network model whose training has not been completed, for example, a convolutional neural network, a deep neural network, or the like.
Step 4022: for each first generated facial image, input the first generated facial image into a pre-established discrimination model, to obtain the probability that the age of the face object contained in the first generated facial image matches the age corresponding to the first sample age label vector corresponding to that image and that the first generated facial image is a real facial image; and calculate the similarity between the first generated facial image and the first sample facial image corresponding to that image.
Here, the similarity may include, but is not limited to, cosine similarity, the Jaccard similarity coefficient, Euclidean distance, and the like. The first sample age label vector corresponding to a first generated facial image refers to the first sample age label vector that was input into the first initial neural network model when that first generated facial image was generated; likewise, the first sample facial image corresponding to a first generated facial image refers to the first sample facial image that was input into the first initial neural network model when that first generated facial image was generated. Here, the discrimination model may be used to judge the probability that the age of the face object contained in a first generated facial image matches the age corresponding to the first sample age label vector corresponding to that image, and that the first generated facial image is a real facial image.
As an example, calculating the similarity between a first generated facial image and the first sample facial image corresponding to that image may specifically include: first, extracting the features of the first generated facial image and of its corresponding first sample facial image, respectively; then, calculating the Euclidean distance between the two feature vectors, thereby obtaining the similarity between the first generated facial image and its corresponding first sample facial image.
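The cosine and Euclidean-distance measures mentioned above can be illustrated on feature vectors as follows. Mapping the Euclidean distance to a similarity score via `1 / (1 + distance)` is one possible convention assumed for this sketch, not one prescribed by the embodiment, and the feature extraction step itself is omitted.

```python
import numpy as np

def euclidean_similarity(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Similarity derived from the Euclidean distance between two face
    feature vectors: identical features give 1.0, and the value decays
    toward 0 as the distance grows (one possible convention)."""
    dist = float(np.linalg.norm(feat_a - feat_b))
    return 1.0 / (1.0 + dist)

def cosine_similarity(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(feat_a @ feat_b /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))
```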
Step 4023: determine, according to the obtained probability and the calculated similarity, whether the first initial neural network model reaches a preset first optimization target.
As an example, the first optimization target may be that the obtained probability and similarity both reach preset thresholds. As another example, when the probability and similarity corresponding to a first generated facial image both reach preset thresholds, that first generated facial image may be considered accurate; in this case, the first optimization target may be that the accuracy rate of the first generated facial images produced by the first initial neural network model exceeds a preset accuracy rate threshold.
Step 4024: in response to determining that the first initial neural network model reaches the preset first optimization target, use the first initial neural network model as the trained generation model.
In some optional implementations of this embodiment, the method for training the generation model may further include:
Step 403: in response to determining that the first initial neural network model does not reach the preset first optimization target, adjust the model parameters of the first initial neural network model based on the probability and the similarity, and continue to execute the first training step. As an example, the above execution body, or another electronic device used for training the generation model, may adjust the model parameters of the first initial neural network model based on the probability and the similarity using a back propagation algorithm (BP algorithm) and a gradient descent method (for example, a stochastic gradient descent algorithm). It should be noted that the back propagation algorithm and the gradient descent method are well-known techniques that are currently widely studied and applied, and are not described in detail here.
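The basic parameter-update rule of the gradient descent method referenced above can be shown on a toy one-parameter problem. This is not the BP algorithm applied to the first initial neural network model, only an illustration of the update rule; the learning rate and the toy loss are arbitrary choices for the sketch.

```python
def gradient_descent_step(params, grads, lr=0.1):
    """One gradient-descent update: move each parameter against its
    gradient, scaled by the learning rate."""
    return [p - lr * g for p, g in zip(params, grads)]

# Toy illustration: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w = [0.0]
for _ in range(200):
    w = gradient_descent_step(w, [2 * (w[0] - 3)])
# After enough steps, w converges to the minimizer w = 3.
```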
In some optional implementations of this embodiment, the discrimination model may be trained by the above execution body, or by another electronic device used for training the discrimination model, through the following steps:
Step S10: obtain a second sample set.
In this implementation, the above execution body, or another electronic device used for training the discrimination model, may obtain a second sample set. A second sample may include a second sample facial image, a second sample age label vector and annotation information. The second sample facial images may include positive sample facial images and negative sample facial images, where a positive sample facial image is a real facial image, a negative sample facial image is a facial image produced by the generation model, and the annotation information is used to mark positive and negative sample facial images. For example, the annotation information may be 1 and 0, where 1 indicates a positive sample facial image and 0 indicates a negative sample facial image. Here, the age characterized by the second sample age label vector may match the age of the person corresponding to the face object in the second sample facial image.
Step S20: execute a second training step based on the second sample set.
In this implementation, the above execution body, or another electronic device used for training the discrimination model, may execute the following second training step based on the second sample set. The second training step may specifically include:
Step S201: input the second sample facial image and second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each second sample in the at least one second sample, the probability that the age of the face object contained in the second sample facial image matches the age corresponding to the second sample age label vector and that the second sample facial image is a real facial image. Here, the second initial neural network model may be an untrained neural network model or a neural network model whose training has not been completed, for example, a convolutional neural network, a deep neural network, or the like.
Step S202: compare the probability corresponding to each second sample in the at least one second sample with the corresponding annotation information.
Step S203: determine, according to the comparison result, whether the second initial neural network model reaches a preset second optimization target. As an example, the second optimization target may be that the difference between the probability and the annotation information is less than a preset difference threshold. As another example, when the difference between an obtained probability and the corresponding annotation information is less than the preset difference threshold, the prediction of the second initial neural network model may be considered accurate; in this case, the second optimization target may be that the prediction accuracy rate of the second initial neural network model exceeds a preset accuracy rate threshold.
Step S204: in response to determining that the second initial neural network model reaches the preset second optimization target, use the second initial neural network model as the trained discrimination network.
In some optional implementations, the steps for training the discrimination model may further include:
Step S30: in response to determining that the second initial neural network model does not reach the preset second optimization target, adjust the model parameters of the second initial neural network model, and continue to execute the second training step. As an example, the above execution body, or another electronic device used for training the discrimination model, may adjust the model parameters of the second initial neural network model based on the obtained comparison result using a back propagation algorithm (BP algorithm) and a gradient descent method (for example, a stochastic gradient descent algorithm). It should be noted that the back propagation algorithm and the gradient descent method are well-known techniques that are currently widely studied and applied, and are not described in detail here.
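The comparison in the second optimization target, between each obtained probability and its 1/0 annotation information, can be sketched as follows. The function name and the concrete difference threshold are illustrative assumptions, not values given by the embodiment.

```python
def reaches_second_target(probs, labels, diff_threshold=0.2):
    """Check the second optimization target described above: every
    predicted probability must differ from its 1/0 annotation
    information by less than the preset difference threshold."""
    return all(abs(p - y) < diff_threshold for p, y in zip(probs, labels))
```

With threshold 0.2, probabilities (0.9, 0.1) against annotations (1, 0) meet the target, while (0.5, 0.1) do not.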
In the method for training the generation model provided by the above embodiment of the application, the model parameters of the generation model are updated during training based on (i) the probability that the age of the face object contained in a first generated facial image matches the age corresponding to the first sample age label vector corresponding to that image and that the first generated facial image is a real facial image, and (ii) the similarity between the first generated facial image and its corresponding first sample facial image. It can therefore be ensured that the age of the face object contained in a facial image produced by the generation model matches the age of the corresponding age label vector, that the authenticity of the produced facial image is improved, and that the similarity between the produced facial image and the input facial image is preserved, that is, that the input and the output are facial images of the same person.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the application provides an embodiment of an apparatus for generating information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of this embodiment includes a receiving unit 501, a determination unit 502 and a generation unit 503. The receiving unit 501 is configured to receive a facial image and age information, where the age characterized by the age information is greater than the age of the person corresponding to the face object in the facial image. The determination unit 502 is configured to determine an age label vector according to the age information. The generation unit 503 is configured to import the facial image and the age label vector into a pre-established generation model to obtain a generated facial image, where the facial image and the generated facial image contain the face information of the same person, the age corresponding to the face object in the generated facial image matches the age characterized by the age information, and the generation model is used to characterize the correspondence between a facial image together with an age label vector and a generated facial image.
In this embodiment, for the specific processing of the receiving unit 501, the determination unit 502 and the generation unit 503 of the apparatus 500 and the technical effects thereof, reference may be made to the descriptions of step 201, step 202 and step 203 in the embodiment corresponding to Fig. 2, respectively, which are not repeated here.
In some optional implementations of this embodiment, the apparatus 500 may further include a generation model training unit (not shown). The generation model training unit may include: a first acquisition unit (not shown), configured to obtain a first sample set, where a first sample includes a first sample facial image and a first sample age label vector; and a first execution unit (not shown), configured to execute the following first training step based on the first sample set: input the first sample facial image and first sample age label vector of at least one first sample in the first sample set into a previously obtained first initial neural network model, to obtain a first generated facial image corresponding to each first sample in the at least one first sample; for each first generated facial image, input the first generated facial image into a pre-established discrimination model, to obtain the probability that the age of the face object contained in the first generated facial image matches the age corresponding to the first sample age label vector corresponding to that image and that the first generated facial image is a real facial image, and calculate the similarity between the first generated facial image and its corresponding first sample facial image; determine, according to the obtained probability and the calculated similarity, whether the first initial neural network model reaches a preset first optimization target; and, in response to determining that the first initial neural network model reaches the preset first optimization target, use the first initial neural network model as the trained generation model.
In some optional implementations of this embodiment, the generation model training unit may further include a first adjustment unit (not shown), configured to: in response to determining that the first initial neural network model does not reach the preset first optimization target, adjust the model parameters of the first initial neural network model based on the probability and the similarity, and continue to execute the first training step.
In some optional implementations of this embodiment, the apparatus 500 further includes a discrimination model training unit (not shown). The discrimination model training unit may include: a second acquisition unit (not shown), configured to obtain a second sample set, where a second sample includes a second sample facial image, a second sample age label vector and annotation information, the second sample facial images include positive sample facial images and negative sample facial images, a positive sample facial image is a real facial image, a negative sample facial image is a facial image produced by the generation model, and the annotation information is used to mark positive and negative sample facial images; and a second execution unit (not shown), configured to execute the following second training step based on the second sample set: input the second sample facial image and second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each second sample in the at least one second sample, the probability that the age of the face object contained in the second sample facial image matches the age corresponding to the second sample age label vector and that the second sample facial image is a real facial image; compare the probability corresponding to each second sample in the at least one second sample with the corresponding annotation information; determine, according to the comparison result, whether the second initial neural network model reaches a preset second optimization target; and, in response to determining that the second initial neural network model reaches the preset second optimization target, use the second initial neural network model as the trained discrimination network.
In some optional implementations of this embodiment, the discrimination model training unit may further include a second adjustment unit (not shown), configured to: in response to determining that the second initial neural network model does not reach the preset second optimization target, adjust the model parameters of the second initial neural network model, and continue to execute the second training step.
In some optional implementations of this embodiment, the determination unit 502 may be further configured to: determine the age category to which the age information belongs, where age categories are obtained by dividing ages into age brackets; determine, according to a preset correspondence between age categories and age label vectors, the age label vector of the age category to which the age information belongs; and use the determined age label vector as the age label vector of the age information.
In some optional implementations of this embodiment, the age label vector is a one-hot vector.
Referring now to Fig. 6, it illustrates a structural schematic diagram of a computer system 600 suitable for implementing a terminal device or server of the embodiments of the application. The terminal device or server shown in Fig. 6 is merely an example, and should not impose any limitation on the functions and scope of use of the embodiments of the application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded into a random access memory (RAM) 603 from a storage section 608. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flow charts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flow charts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the methods of the application are executed. It should be noted that the computer-readable medium described in the application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in connection with, an instruction execution system, apparatus or device. In the application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The computer program code for executing the operations of the application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flow charts and block diagrams in the accompanying drawings illustrate possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flow charts, and combinations of boxes therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a receiving unit, a determination unit and a generation unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the receiving unit may also be described as "a unit that receives a facial image and age information".
As another aspect, the application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: receive a facial image and age information, where the age characterized by the age information is greater than the age of the person corresponding to the face object in the facial image; determine an age label vector according to the age information; and import the facial image and the age label vector into a pre-established generation model to obtain a generated facial image, where the facial image and the generated facial image contain the face information of the same person, the age corresponding to the face object in the generated facial image matches the age characterized by the age information, and the generation model is used to characterize the correspondence between a facial image together with an age label vector and a generated facial image.
The above description is merely a preferred embodiment of the application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the application is not limited to technical solutions formed by the specific combinations of the above technical features, but also covers, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the application.
Claims (16)
1. A method for generating information, comprising:
receiving a facial image and age information, wherein the age characterized by the age information is greater than the age of the person corresponding to the face object in the facial image;
determining an age label vector according to the age information;
importing the facial image and the age label vector into a pre-established generation model to obtain a generated facial image, wherein the facial image and the generated facial image contain the face information of the same person, the age corresponding to the face object in the generated facial image matches the age characterized by the age information, and the generation model is used to characterize the correspondence between a facial image together with an age label vector and a generated facial image.
2. The method according to claim 1, wherein the generation model is obtained based on a machine learning method through the following steps:
obtaining a first sample set, wherein a first sample includes a first sample facial image and a first sample age label vector;
executing the following first training step based on the first sample set: inputting the first sample facial image and first sample age label vector of at least one first sample in the first sample set into a previously obtained first initial neural network model, to obtain a first generated facial image corresponding to each first sample in the at least one first sample; for each first generated facial image, inputting the first generated facial image into a pre-established discrimination model, to obtain the probability that the age of the face object contained in the first generated facial image matches the age corresponding to the first sample age label vector corresponding to the first generated facial image and that the first generated facial image is a real facial image, and calculating the similarity between the first generated facial image and the first sample facial image corresponding to the first generated facial image; determining, according to the obtained probability and the calculated similarity, whether the first initial neural network model reaches a preset first optimization target; in response to determining that the first initial neural network model reaches the preset first optimization target, using the first initial neural network model as the trained generation model.
3. The method according to claim 2, wherein the step of obtaining the generation model based on the machine learning method further comprises:
in response to determining that the first initial neural network model does not reach the preset first optimization target, adjusting the model parameters of the first initial neural network model based on the probability and the similarity, and continuing to execute the first training step.
4. The method according to claim 2, wherein the discrimination model is obtained through the following training steps:
obtaining a second sample set, wherein a second sample includes a second sample facial image, a second sample age label vector and annotation information, the second sample facial images include positive sample facial images and negative sample facial images, a positive sample facial image is a real facial image, a negative sample facial image is a facial image produced by the generation model, and the annotation information is used to mark positive and negative sample facial images;
executing the following second training step based on the second sample set: inputting the second sample facial image and second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model, to obtain, for each second sample in the at least one second sample, the probability that the age of the face object contained in the second sample facial image matches the age corresponding to the second sample age label vector and that the second sample facial image is a real facial image; comparing the probability corresponding to each second sample in the at least one second sample with the corresponding annotation information; determining, according to the comparison result, whether the second initial neural network model reaches a preset second optimization target; in response to determining that the second initial neural network model reaches the preset second optimization target, using the second initial neural network model as the trained discrimination network.
5. The method according to claim 4, wherein the step of obtaining the discrimination model further comprises:
in response to determining that the second initial neural network model does not reach the preset second optimization target, adjusting model parameters of the second initial neural network model, and continuing to execute the second training step.
6. The method according to claim 1, wherein the determining an age label vector according to the age information comprises:
determining an age category to which the age information belongs, wherein age categories are obtained by division according to age brackets;
determining, according to a preset correspondence between age categories and age label vectors, the age label vector of the age category to which the age information belongs; and
using the determined age label vector as the age label vector of the age information.
7. The method according to claim 6, wherein the age label vector is a one-hot vector.
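A minimal sketch of the age-category lookup of claims 6 and 7; the bracket boundaries below are illustrative assumptions, since the patent does not specify them.

```python
# Illustrative age brackets (the patent does not specify boundaries).
AGE_BRACKETS = [(0, 18), (18, 30), (30, 45), (45, 60), (60, 200)]

def age_label_vector(age):
    """Claims 6-7: determine the age category the age belongs to (categories
    are obtained by dividing into age brackets), then return that category's
    one-hot label vector."""
    for i, (low, high) in enumerate(AGE_BRACKETS):
        if low <= age < high:
            vec = [0] * len(AGE_BRACKETS)
            vec[i] = 1        # one-hot: a single 1 in the category's slot
            return vec
    raise ValueError(f"age out of supported range: {age}")

label = age_label_vector(35)  # falls in the 30-45 bracket
```

A one-hot vector of the category (rather than the raw age) is what the claims feed into the generation model alongside the facial image.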
8. An apparatus for generating information, comprising:
a receiving unit, configured to receive a facial image and age information, wherein the age characterized by the age information is greater than the age of the person corresponding to the face object in the facial image;
a determination unit, configured to determine an age label vector according to the age information; and
a generation unit, configured to import the facial image and the age label vector into a pre-established generation model to obtain a generated facial image, wherein the facial image and the generated facial image include face information of the same person, the age corresponding to the face object in the generated facial image matches the age characterized by the age information, and the generation model is used to characterize the correspondence of a facial image and an age label vector to a generated facial image.
9. The apparatus according to claim 8, wherein the apparatus further comprises a generation model training unit, the generation model training unit comprising:
a first acquisition unit, configured to acquire a first sample set, wherein a first sample comprises a first sample facial image and a first sample age label vector; and
a first execution unit, configured to execute the following first training step based on the first sample set: inputting the first sample facial image and the first sample age label vector of at least one first sample in the first sample set into a previously obtained first initial neural network model to obtain a first generated facial image corresponding to each first sample of the at least one first sample; for each first generated facial image, inputting the first generated facial image into a pre-established discrimination model to obtain a probability that the age of the face object included in the first generated facial image matches the age corresponding to the first sample age label vector corresponding to the first generated facial image and that the first generated facial image is a real facial image, and calculating a similarity between the first generated facial image and the first sample facial image corresponding to the first generated facial image; determining, according to the obtained probability and the calculated similarity, whether the first initial neural network model reaches a preset first optimization target; and in response to determining that the first initial neural network model reaches the preset first optimization target, using the first initial neural network model as the trained generation model.
10. The apparatus according to claim 9, wherein the generation model training unit further comprises:
a first adjustment unit, configured to, in response to determining that the first initial neural network model does not reach the preset first optimization target, adjust model parameters of the first initial neural network model based on the probability and the similarity, and continue to execute the first training step.
11. The apparatus according to claim 9, wherein the apparatus further comprises a discrimination model training unit, the discrimination model training unit comprising:
a second acquisition unit, configured to acquire a second sample set, wherein a second sample comprises a second sample facial image, a second sample age label vector and annotation information, the second sample facial image comprises a positive sample facial image and a negative sample facial image, the positive sample facial image is a real facial image, the negative sample facial image is a facial image generated by the generation model, and the annotation information is used to mark the positive sample facial image and the negative sample facial image; and
a second execution unit, configured to execute the following second training step based on the second sample set: inputting the second sample facial image and the second sample age label vector of at least one second sample in the second sample set into a pre-established second initial neural network model to obtain, for each second sample of the at least one second sample, a probability that the age of the face object included in the second sample facial image matches the age corresponding to the second sample age label vector and that the second sample facial image is a real facial image; comparing the probability corresponding to each second sample of the at least one second sample with the corresponding annotation information; determining, according to the comparison result, whether the second initial neural network model reaches a preset second optimization target; and in response to determining that the second initial neural network model reaches the preset second optimization target, using the second initial neural network model as the trained discrimination model.
12. The apparatus according to claim 11, wherein the discrimination model training unit further comprises:
a second adjustment unit, configured to, in response to determining that the second initial neural network model does not reach the preset second optimization target, adjust model parameters of the second initial neural network model, and continue to execute the second training step.
13. The apparatus according to claim 8, wherein the determination unit is further configured to:
determine an age category to which the age information belongs, wherein age categories are obtained by division according to age brackets;
determine, according to a preset correspondence between age categories and age label vectors, the age label vector of the age category to which the age information belongs; and
use the determined age label vector as the age label vector of the age information.
14. The apparatus according to claim 13, wherein the age label vector is a one-hot vector.
15. A device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-7.
16. A computer-readable medium having a computer program stored thereon, wherein, when the program is executed by a processor, the method according to any one of claims 1-7 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810209172.8A CN108416310B (en) | 2018-03-14 | 2018-03-14 | Method and apparatus for generating information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108416310A true CN108416310A (en) | 2018-08-17 |
CN108416310B CN108416310B (en) | 2022-01-28 |
Family
ID=63131350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810209172.8A Active CN108416310B (en) | 2018-03-14 | 2018-03-14 | Method and apparatus for generating information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108416310B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109359626A (en) * | 2018-11-21 | 2019-02-19 | 合肥金诺数码科技股份有限公司 | Image acquisition complexion prediction device and prediction method thereof |
CN109636867A (en) * | 2018-10-31 | 2019-04-16 | 百度在线网络技术(北京)有限公司 | Image processing method, device and electronic equipment |
CN109829071A (en) * | 2018-12-14 | 2019-05-31 | 平安科技(深圳)有限公司 | Face image searching method, server, computer equipment and storage medium |
CN110009018A (en) * | 2019-03-25 | 2019-07-12 | 腾讯科技(深圳)有限公司 | A kind of image generating method, device and relevant device |
CN110322398A (en) * | 2019-07-09 | 2019-10-11 | 厦门美图之家科技有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN110674748A (en) * | 2019-09-24 | 2020-01-10 | 腾讯科技(深圳)有限公司 | Image data processing method, image data processing device, computer equipment and readable storage medium |
CN111145080A (en) * | 2019-12-02 | 2020-05-12 | 北京达佳互联信息技术有限公司 | Training method of image generation model, image generation method and device |
CN111209878A (en) * | 2020-01-10 | 2020-05-29 | 公安部户政管理研究中心 | Cross-age face recognition method and device |
CN111259695A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN111259698A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN111461971A (en) * | 2020-05-19 | 2020-07-28 | 北京字节跳动网络技术有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN111581422A (en) * | 2020-05-08 | 2020-08-25 | 支付宝(杭州)信息技术有限公司 | Information processing method and device based on face recognition |
CN112163505A (en) * | 2020-09-24 | 2021-01-01 | 北京字节跳动网络技术有限公司 | Method, device, equipment and computer readable medium for generating image |
JP2021026744A (en) * | 2019-08-09 | 2021-02-22 | 日本テレビ放送網株式会社 | Information processing device, image recognition method, and learning model generation method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101556699A (en) * | 2008-11-07 | 2009-10-14 | 浙江大学 | Face-based facial aging image synthesis method |
CN102306281A (en) * | 2011-07-13 | 2012-01-04 | 东南大学 | Multi-mode automatic estimating method for human age |
US8520906B1 (en) * | 2007-09-24 | 2013-08-27 | Videomining Corporation | Method and system for age estimation based on relative ages of pairwise facial images of people |
CN105787974A (en) * | 2014-12-24 | 2016-07-20 | 中国科学院苏州纳米技术与纳米仿生研究所 | Method for establishing a bionic human facial aging model |
CN107045622A (en) * | 2016-12-30 | 2017-08-15 | 浙江大学 | The face age estimation method learnt based on adaptive age distribution |
CN107194868A (en) * | 2017-05-19 | 2017-09-22 | 成都通甲优博科技有限责任公司 | A kind of Face image synthesis method and device |
- 2018-03-14: Application CN201810209172.8A filed; granted as patent CN108416310B (status: Active)
Non-Patent Citations (3)
Title |
---|
ANDREAS LANITIS et al.: "Toward Automatic Simulation of Aging Effects on Face Images", IEEE Transactions on Pattern Analysis and Machine Intelligence *
HONGYU YANG et al.: "Learning Face Age Progression: A Pyramid Architecture of GANs", arXiv *
胡斓 (HU Lan): "Image-based age estimation and facial age image reconstruction", China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636867A (en) * | 2018-10-31 | 2019-04-16 | 百度在线网络技术(北京)有限公司 | Image processing method, device and electronic equipment |
CN109359626A (en) * | 2018-11-21 | 2019-02-19 | 合肥金诺数码科技股份有限公司 | Image acquisition complexion prediction device and prediction method thereof |
CN111259698A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN111259695B (en) * | 2018-11-30 | 2023-08-29 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN111259698B (en) * | 2018-11-30 | 2023-10-13 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN111259695A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN109829071A (en) * | 2018-12-14 | 2019-05-31 | 平安科技(深圳)有限公司 | Face image searching method, server, computer equipment and storage medium |
CN109829071B (en) * | 2018-12-14 | 2023-09-05 | 平安科技(深圳)有限公司 | Face image searching method, server, computer device and storage medium |
CN110009018A (en) * | 2019-03-25 | 2019-07-12 | 腾讯科技(深圳)有限公司 | A kind of image generating method, device and relevant device |
CN110322398A (en) * | 2019-07-09 | 2019-10-11 | 厦门美图之家科技有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN110322398B (en) * | 2019-07-09 | 2022-10-28 | 厦门美图之家科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
JP2021026744A (en) * | 2019-08-09 | 2021-02-22 | 日本テレビ放送網株式会社 | Information processing device, image recognition method, and learning model generation method |
CN110674748B (en) * | 2019-09-24 | 2024-02-13 | 腾讯科技(深圳)有限公司 | Image data processing method, apparatus, computer device, and readable storage medium |
CN110674748A (en) * | 2019-09-24 | 2020-01-10 | 腾讯科技(深圳)有限公司 | Image data processing method, image data processing device, computer equipment and readable storage medium |
CN111145080A (en) * | 2019-12-02 | 2020-05-12 | 北京达佳互联信息技术有限公司 | Training method of image generation model, image generation method and device |
CN111145080B (en) * | 2019-12-02 | 2023-06-23 | 北京达佳互联信息技术有限公司 | Training method of image generation model, image generation method and device |
CN111209878A (en) * | 2020-01-10 | 2020-05-29 | 公安部户政管理研究中心 | Cross-age face recognition method and device |
CN111581422A (en) * | 2020-05-08 | 2020-08-25 | 支付宝(杭州)信息技术有限公司 | Information processing method and device based on face recognition |
CN111461971A (en) * | 2020-05-19 | 2020-07-28 | 北京字节跳动网络技术有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN111461971B (en) * | 2020-05-19 | 2023-04-18 | 北京字节跳动网络技术有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN112163505A (en) * | 2020-09-24 | 2021-01-01 | 北京字节跳动网络技术有限公司 | Method, device, equipment and computer readable medium for generating image |
Also Published As
Publication number | Publication date |
---|---|
CN108416310B (en) | 2022-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108416310A (en) | Method and apparatus for generating information | |
CN108022586B (en) | Method and apparatus for controlling the page | |
CN109858445A (en) | Method and apparatus for generating model | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN108446387A (en) | Method and apparatus for updating face registration library | |
CN109325541A (en) | Method and apparatus for training a model | |
CN107766940A (en) | Method and apparatus for generation model | |
CN107578017A (en) | Method and apparatus for generating image | |
CN109993150A (en) | Method and apparatus for identifying age | |
CN108960316A (en) | Method and apparatus for generating model | |
CN108830235A (en) | Method and apparatus for generating information | |
CN109981787B (en) | Method and device for displaying information | |
CN107393541A (en) | Information Authentication method and apparatus | |
CN112153460B (en) | Video dubbing method and device, electronic equipment and storage medium | |
CN108595628A (en) | Method and apparatus for pushed information | |
CN108831505A (en) | Method and apparatus for identifying a usage scenario of an application | |
CN107992478A (en) | The method and apparatus for determining focus incident | |
CN107609506A (en) | Method and apparatus for generating image | |
CN108933730A (en) | Information-pushing method and device | |
CN108280200A (en) | Method and apparatus for pushed information | |
CN109961141A (en) | Method and apparatus for generating quantization neural network | |
CN108446659A (en) | Method and apparatus for detecting facial image | |
CN108960110A (en) | Method and apparatus for generating information | |
CN109214501A (en) | Method and apparatus for identifying information | |
CN107451785A (en) | Method and apparatus for output information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||