CN108133201A - Face attribute recognition method and device - Google Patents
Face attribute recognition method and device
- Publication number
- CN108133201A CN108133201A CN201810044637.9A CN201810044637A CN108133201A CN 108133201 A CN108133201 A CN 108133201A CN 201810044637 A CN201810044637 A CN 201810044637A CN 108133201 A CN108133201 A CN 108133201A
- Authority
- CN
- China
- Prior art keywords
- image
- face
- network
- generation
- light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the present application discloses a face attribute recognition method and device. One specific embodiment of the method includes: obtaining an image to be processed, where the image to be processed is an image of a face captured under a non-frontal uniform light source; inputting the image to be processed into a pre-trained image generation model to obtain an optimized image after light adjustment, where the optimized image is the face image as it would appear under a frontal uniform light source, and the image generation model is used to adjust the lighting of images captured under a non-frontal uniform light source so as to generate images as captured under a frontal uniform light source; and inputting the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to identify the attributes of a face in an image to obtain its face attribute information. This embodiment improves the accuracy of face attribute recognition.
Description
Technical field
The present application relates to the field of computer technology, specifically to image processing, and in particular to a face attribute recognition method and device.
Background

With the development of Internet technology, face recognition has been applied in more and more fields. For example, face attribute recognition can be performed through face recognition; face attribute recognition is a technique for identifying attribute values of a face, such as gender, age, ethnicity and expression. In general, when the lighting environment is poor (for example, backlighting or side lighting), objects in an image are unclear and difficult to recognize, yet existing approaches typically perform face attribute recognition directly on the face in such an image.
Summary of the invention

Embodiments of the present application propose a face attribute recognition method and device.
In a first aspect, an embodiment of the present application provides a face attribute recognition method, including: obtaining an image to be processed, where the image to be processed is an image of a face captured under a non-frontal uniform light source; inputting the image to be processed into a pre-trained image generation model to obtain an optimized image after light adjustment, where the optimized image is the face image as it would appear under a frontal uniform light source, and the image generation model is used to adjust the lighting of images captured under a non-frontal uniform light source so as to generate images as captured under a frontal uniform light source; and inputting the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to identify the attributes of a face in an image to obtain its face attribute information.
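The two-stage flow of the first aspect — relight the image with a generation model, then classify attributes on the optimized image — can be sketched in a few lines. The toy models, shapes and threshold below are purely illustrative assumptions, not the patent's trained networks:

```python
import numpy as np

def relight(image, generation_model):
    """Apply the image generation model: produce the 'optimized' image as if
    the face were lit by a frontal uniform light source (stand-in)."""
    return generation_model(image)

def recognize_attributes(image, attribute_model):
    """Run the face attribute recognition model on an image (stand-in)."""
    return attribute_model(image)

def face_attribute_pipeline(pending_image, generation_model, attribute_model):
    # Step 1: light adjustment via the pre-trained generation model
    optimized = relight(pending_image, generation_model)
    # Step 2: attribute recognition on the optimized image
    return recognize_attributes(optimized, attribute_model)

# Hypothetical stand-ins for the two pre-trained models:
toy_generator = lambda img: np.clip(img * 1.5, 0.0, 1.0)   # simply brighten
toy_attribute = lambda img: {"gender": "female", "age": 30} if img.mean() > 0.5 else {}

dark_face = np.full((64, 64, 3), 0.4)   # a dimly lit RGB face image
result = face_attribute_pipeline(dark_face, toy_generator, toy_attribute)
```

Under these assumptions, the dark image is brightened past the toy threshold before classification, which is the point of the two-stage design.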
In some embodiments, the image generation model is trained as follows. Obtain a preset set of training samples and a pre-established generative adversarial network, where the generative adversarial network includes a generative network, a first discriminator network and a second discriminator network. The generative network is used to perform light adjustment on an input image and output the adjusted image; the first discriminator is used to determine whether an input image was produced by the generative network; and the second discriminator is used to determine whether the face attribute information of the face in an image output by the generative network matches the face attribute information of the face in the image input to the generative network. The attribute information of the face in an image output by the generative network is obtained by feeding that output image into a pre-trained face attribute recognition model, while the attribute information of the face in the image input to the generative network is obtained in advance. Then, using machine learning methods, train the generative network, the first discriminator and the second discriminator, and take the trained generative network as the image generation model.
In some embodiments, the training samples include multiple first images of a face captured under a non-frontal uniform light source, second images of the face captured under a frontal uniform light source, and the face attribute information of the face in the second images.
In some embodiments, training the networks with machine learning methods and taking the trained generative network as the image generation model includes performing the following training step. Fix the parameters of the generative network; take the first image as input to the generative network, and feed the image output by the generative network into the pre-trained face attribute recognition model to obtain the attribute information of the face to be identified. Take the image output by the generative network and the second image as input to the first discriminator, and the attribute information of the face to be identified and the attribute information of the face in the second image as input to the second discriminator, then train the first and second discriminators using machine learning methods. Next, fix the parameters of the trained first and second discriminators, take the first image as input to the generative network, and train the generative network using machine learning methods, back-propagation and gradient descent. Finally, determine the loss values of the trained first and second discriminators; in response to determining that the loss values have converged, take the trained generative network as the image generation model.
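The alternating scheme of the training step above can be sketched as a loop: with generator parameters fixed, update both discriminators; with discriminator parameters fixed, update the generator; stop once the discriminators' combined loss value converges. The classes below are toy stand-ins (a loss that simply halves each round), used only to make the control flow concrete:

```python
class ToyDiscriminator:
    """Stand-in for the first/second discriminator networks. Assumption:
    each training step halves its loss, just to drive the loop."""
    def __init__(self):
        self.loss = 1.0
    def step(self, generator, samples):
        self.loss *= 0.5
        return self.loss

class ToyGenerator:
    """Stand-in for the generative network; counts its updates."""
    def __init__(self):
        self.updates = 0
    def step(self, d1, d2, samples):
        self.updates += 1   # a real step would back-propagate gradients

def train_gan(generator, d1, d2, samples, max_rounds=100, tol=1e-3):
    """Alternate discriminator and generator updates until the
    discriminators' combined loss value converges."""
    prev = float("inf")
    for _ in range(max_rounds):
        # generator parameters fixed: train both discriminators
        loss = d1.step(generator, samples) + d2.step(generator, samples)
        # discriminator parameters fixed: train the generator
        generator.step(d1, d2, samples)
        if abs(prev - loss) < tol:   # loss value has converged
            break
        prev = loss
    return generator                 # the trained generative network

model = train_gan(ToyGenerator(), ToyDiscriminator(), ToyDiscriminator(), samples=None)
```

The returned generator is what the text calls the image generation model; in a real implementation each `step` would be a gradient update over a batch of training samples.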
In some embodiments, the training further includes: in response to determining that the loss values have not converged, re-executing the training step using the generative network, first discriminator and second discriminator obtained so far.
In some embodiments, a training sample is generated as follows: obtain a pre-established three-dimensional face model; render the three-dimensional face model with different light source parameters to obtain a first image and a second image under different lighting, where the light source parameters of the first image correspond to a non-frontal uniform light source and those of the second image to a frontal uniform light source; input the second image into the pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image; and compose the training sample from the first image, the second image and the face attribute information of the face in the second image.
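The sample-generation procedure above — render the 3-D face model twice with different light source parameters, then label the frontal-light render — might look like the following sketch. The cosine "renderer" and the attribute labeller are hypothetical stand-ins, not a real rasteriser or recognition model:

```python
import math

def render(face_model, light_angle_deg):
    # Hypothetical renderer: brightness falls off with the light's angle
    # from the frontal axis. A real implementation would rasterise the
    # 3-D face model under the given light source parameters.
    brightness = max(0.0, math.cos(math.radians(light_angle_deg)))
    return {"model": face_model, "angle": light_angle_deg, "brightness": brightness}

def make_training_sample(face_model, attribute_model, side_angle=60.0):
    first_image = render(face_model, side_angle)   # non-frontal light source
    second_image = render(face_model, 0.0)         # frontal uniform light source
    # Label the well-lit render with the pre-trained attribute model
    attributes = attribute_model(second_image)
    return (first_image, second_image, attributes)

sample = make_training_sample("3d-face-model",
                              lambda img: {"gender": "male", "age": 25})
```

Because both renders come from the same 3-D model and pose, the attribute labels obtained from the second image also describe the face in the first image, which is the property the training relies on.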
In a second aspect, an embodiment of the present application provides a face attribute recognition device, including: a first acquisition unit configured to obtain an image to be processed, where the image to be processed is an image of a face captured under a non-frontal uniform light source; a first input unit configured to input the image to be processed into a pre-trained image generation model to obtain an optimized image after light adjustment, where the optimized image is the face image as it would appear under a frontal uniform light source, and the image generation model is used to adjust the lighting of images captured under a non-frontal uniform light source so as to generate images as captured under a frontal uniform light source; and a second input unit configured to input the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to identify the attributes of a face in an image to obtain its face attribute information.
In some embodiments, the device further includes: a second acquisition unit configured to obtain a preset set of training samples and a pre-established generative adversarial network, where the generative adversarial network includes a generative network, a first discriminator network and a second discriminator network; the generative network is used to perform light adjustment on an input image and output the adjusted image, the first discriminator is used to determine whether an input image was produced by the generative network, and the second discriminator is used to determine whether the face attribute information of the face in an image output by the generative network matches that of the face in the image input to the generative network, the former being obtained by feeding the output image into a pre-trained face attribute recognition model and the latter being obtained in advance; and a training unit configured to train the generative network, the first discriminator and the second discriminator using machine learning methods, and to take the trained generative network as the image generation model.
In some embodiments, the training samples include multiple first images of a face captured under a non-frontal uniform light source, second images of the face captured under a frontal uniform light source, and the face attribute information of the face in the second images.
In some embodiments, the training unit is further configured to perform the following training step: fix the parameters of the generative network; take the first image as input to the generative network, and feed the image output by the generative network into the pre-trained face attribute recognition model to obtain the attribute information of the face to be identified; take the image output by the generative network and the second image as input to the first discriminator, and the attribute information of the face to be identified and the attribute information of the face in the second image as input to the second discriminator, and train the first and second discriminators using machine learning methods; fix the parameters of the trained first and second discriminators, take the first image as input to the generative network, and train the generative network using machine learning methods, back-propagation and gradient descent; determine the loss values of the trained first and second discriminators, and in response to determining that the loss values have converged, take the trained generative network as the image generation model.
In some embodiments, the training unit is further configured to: in response to determining that the loss values have not converged, re-execute the training step using the generative network, first discriminator and second discriminator obtained so far.
In some embodiments, the device further includes: a third acquisition unit configured to obtain a pre-established three-dimensional face model; a setting unit configured to render the three-dimensional face model with different light source parameters to obtain a first image and a second image under different lighting, where the light source parameters of the first image correspond to a non-frontal uniform light source and those of the second image to a frontal uniform light source; a third input unit configured to input the second image into the pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image; and a composition unit configured to compose a training sample from the first image, the second image and the face attribute information of the face in the second image.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the face attribute recognition method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the face attribute recognition method.
In the face attribute recognition method and device provided by the embodiments of the present application, the obtained image to be processed is input into a pre-trained image generation model to obtain an optimized image after light adjustment, and the optimized image is then input into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image. Thus, even for images captured in a poor lighting environment (for example, backlighting or side lighting), the attribute information of the face can be determined accurately, improving the accuracy of face attribute recognition.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the face attribute recognition method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the face attribute recognition method according to the present application;
Fig. 4 is a structural diagram of one embodiment of the face attribute recognition device according to the present application;
Fig. 5 is a structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of embodiments

The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.

It should further be noted that, absent conflict, the embodiments in the present application and the features in the embodiments may be combined with each other. The present application is explained in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the face attribute recognition method or face attribute recognition device of the present application may be applied.

As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links or fiber-optic cables.
Users may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as photography and video applications, image processing applications, face recognition applications and search applications.

The terminal devices 101, 102, 103 may be various electronic devices that have a camera and support information exchange, including but not limited to smartphones, tablet computers, laptop computers and desktop computers.
The server 105 may be a server providing various services, for example an image processing server that processes images uploaded by the terminal devices 101, 102, 103. The image processing server may analyze and otherwise process a received image to be processed, and feed the processing result (for example, face attribute information) back to the terminal device.
It should be noted that the face attribute recognition method provided by the embodiments of the present application is generally performed by the server 105; correspondingly, the face attribute recognition device is generally arranged in the server 105.

It should be pointed out that the server 105 may also store the image to be processed locally and perform face attribute recognition on it directly, in which case the exemplary system architecture 100 may omit the terminal devices 101, 102, 103 and the network 104.
It should also be noted that an image processing application may be installed on the terminal devices 101, 102, 103, which may then perform face attribute recognition on the image to be processed based on that application. In that case the face attribute recognition method may be performed by the terminal devices 101, 102, 103, and correspondingly the face attribute recognition device may be arranged in the terminal devices 101, 102, 103; the exemplary system architecture 100 may then omit the server 105 and the network 104.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative; there may be any number of each, according to implementation needs.
Continuing with Fig. 2, a flow 200 of one embodiment of the face attribute recognition method according to the present application is shown. The face attribute recognition method includes the following steps.
Step 201: obtain an image to be processed.
In this embodiment, the electronic device on which the face attribute recognition method runs may first obtain an image to be processed, where the image to be processed may be an image of a face captured under a non-frontal uniform light source. In practice, when photographing a target object (for example a face or an article), a point or surface light source projected from the front of the target object toward its center may be regarded as a frontal uniform light source, while a point or surface light source projected from a non-frontal direction, or toward a non-central part of the target object, may be regarded as a non-frontal uniform light source. Here, the front of the target object may be its front-facing side (for example the front of a face), its relatively principal side (for example the plane shown in the front view of a cup), or any side of the target object designated in advance by a technician. The front of the target object may be the plane shown in its front view: observed from the front of the target object, the object's image is projected backward onto a projection plane, and this projected image is called the front view. The center of the target object may be its optical center, its geometric center, the point nearest the camera, a position of the target object designated in advance by a technician (for example the tip of the nose), or a region designated in advance by a technician (for example the nose region). Here, if the light source is a point light source, a frontal uniform point light source may be understood as one for which the line from its emitting point to the center of the target object is perpendicular to the plane of the target object's front view. If the light source is a surface light source, a frontal uniform surface light source may be understood as one for which the line from the center of the light source to the center of the target object is perpendicular both to the plane of the light-emitting surface and to the plane of the target object's front view.
It should be noted that the image to be processed may be stored locally on the electronic device, in which case the electronic device can obtain it directly from local storage. Alternatively, the image to be processed may be sent to the electronic device by another electronic device connected to it via a wired or wireless connection. The wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, ZigBee, UWB (ultra wideband) and other wireless connection methods now known or developed in the future.
Step 202: input the image to be processed into the pre-trained image generation model to obtain the optimized image after light adjustment.
In this embodiment, the electronic device may input the image to be processed into the pre-trained image generation model to obtain the optimized image after light adjustment, where the optimized image may be the image as it would appear under frontal uniform lighting.
It should be noted that the image generation model may be used to adjust the lighting of images captured under a non-frontal uniform light source so as to generate images as captured under a frontal uniform light source. As an example, the image generation model may be obtained by training a model for image processing, for example a convolutional neural network (CNN), on training samples in advance using machine learning methods. The convolutional neural network may include convolutional layers, pooling layers, unpooling layers and deconvolutional layers, where the convolutional layers are used to extract image features, the pooling layers to downsample the input information, the unpooling layers to upsample the input information, and the deconvolutional layers to deconvolve the input information, processing it with the transposes of the convolutional layers' kernels as their kernels. Deconvolution is the inverse operation of convolution and reconstructs the signal. The last deconvolutional layer of the convolutional neural network can output the optimized image, which may be expressed as an RGB (red green blue) three-channel matrix and may have the same size as the image to be processed. In practice, a convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units within part of their receptive field; it performs outstandingly for image processing, and image processing can therefore be performed with a convolutional neural network. It should be noted that the electronic device may train the convolutional neural network in various ways (for example, supervised or unsupervised training) to obtain the image generation model.
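The downsample-then-upsample behaviour of the pooling and unpooling layers described above can be illustrated with a minimal NumPy sketch (max pooling and nearest-neighbour unpooling only; the convolutional and deconvolutional layers are omitted for brevity):

```python
import numpy as np

def pool2x(x):
    # 2x2 max pooling: downsamples each spatial dimension by half
    h, w, c = x.shape
    return x[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def unpool2x(x):
    # Nearest-neighbour unpooling: upsamples each spatial dimension by two
    return x.repeat(2, axis=0).repeat(2, axis=1)

rgb = np.random.rand(64, 64, 3)      # an RGB three-channel matrix
restored = unpool2x(pool2x(rgb))     # downsample, then upsample back
```

As the text notes, the encoder-decoder shape of the network means the final output can have the same spatial size as the input image; here `restored.shape` equals `rgb.shape`.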
In some optional implementations of this embodiment, the image generation model may be trained as follows.

First, obtain a preset set of training samples and a pre-established generative adversarial network (Generative Adversarial Nets, GAN). For example, the generative adversarial network may be a deep convolutional generative adversarial network (DCGAN). The generative adversarial network may include a generative network, a first discriminator network and a second discriminator network, where the generative network may be used to perform light adjustment on an input image and output the adjusted image, the first discriminator may be used to determine whether an input image was produced by the generative network, and the second discriminator may be used to determine whether the face attribute information of the face in an image output by the generative network matches the face attribute information of the face in the image input to the generative network. Here, the attribute information of the face in an image output by the generative network is obtained by feeding that output image into a pre-trained face attribute recognition model, while the attribute information of the face in an image to be input to the generative network is manually annotated face attribute information obtained in advance. The face attribute recognition model may be used to identify the attributes of a face in an image to obtain its face attribute information, and may be obtained by supervised training of an existing model (for example a convolutional neural network) using machine learning methods. The training samples used for the face attribute recognition model may include a large number of face images and the face attribute information of the face in each face image. In practice, the face image in a sample serves as the model's input and the face attribute information as the model's output; the model is trained using machine learning methods, and the trained model is taken as the face attribute recognition model.
It should be noted that face attribute information mainly includes gender, age, ethnicity, expression and the like. The generative network may be a convolutional neural network for image processing (for example, any of various convolutional neural network structures containing convolutional, pooling, unpooling and deconvolutional layers, capable of successive downsampling and upsampling). The first and second discriminator networks may be convolutional neural networks (for example, any of various structures containing fully connected layers, where the fully connected layer implements the classification function), or other model structures that implement classification, such as support vector machines (SVM). The image output by the generative network may be expressed as an RGB three-channel matrix. Here, the first discriminator may output 1 if it determines that the input image was produced by the generative network (i.e. comes from generated data), and 0 if it determines that it was not (i.e. comes from real data, namely the second image). The second discriminator may output 1 if it determines that the face attribute information of the face in the image output by the generative network matches the face attribute information of the face in the image input to the generative network, and 0 if it determines that they do not match. It should be noted that the first and second discriminators may also output other preset values, not limited to 1 and 0.
In the second step, the generation network, the first discrimination network, and the second discrimination network are trained based on the training samples using a machine learning method, and the trained generation network is determined as the image generation model. Specifically, the parameters of either the generation network or the discrimination networks (including the first discrimination network and the second discrimination network) may first be fixed (the fixed side may be referred to as the first network), and the network whose parameters are not fixed (which may be referred to as the second network) is optimized; the parameters of the second network are then fixed, and the first network is improved. The above iteration is performed continuously until the loss function values of the first discrimination network and the second discrimination network converge, at which point the generation network may be determined as the image generation model. It should be noted that the above iterative process, continued until the loss function values of the first and second discrimination networks converge, is a back-propagation process.
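The alternating scheme above (freeze one side's parameters, optimize the other, repeat until the discrimination networks' loss values converge) can be sketched as a training-loop skeleton. This is a minimal illustration under stated assumptions, not the patent's implementation: the `ToyNet` stand-ins and the convergence tolerance are placeholders for real networks and a real stopping criterion.

```python
class ToyNet:
    """Stand-in for a trainable network; its loss halves per update step
    to mimic convergence. A real implementation would hold parameters
    and run back-propagation here."""
    def __init__(self):
        self.loss = 1.0
        self.updates = 0

    def update(self, *frozen_networks_and_data):
        self.updates += 1
        self.loss *= 0.5  # pretend one optimization step reduces the loss
        return self.loss


def train_adversarially(generator, d1, d2, samples, max_iters=1000, tol=1e-4):
    """Alternate the two sides of the adversarial game:
    (a) generator fixed -> update first and second discrimination networks;
    (b) discriminators fixed -> update the generation network.
    Stop when the discriminators' combined loss stops changing."""
    prev = None
    for _ in range(max_iters):
        d1_loss = d1.update(generator, samples)   # step (a)
        d2_loss = d2.update(generator, samples)
        generator.update(d1, d2, samples)         # step (b)
        total = d1_loss + d2_loss
        if prev is not None and abs(prev - total) < tol:
            break  # discriminator losses converged
        prev = total
    return generator  # the trained generator is the image generation model
```

Once the loop exits on convergence, the returned generator plays the role of the image generation model.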
In some optional implementations of this embodiment, the training samples may include multiple first images obtained by photographing faces under non-frontal uniform light source conditions, second images obtained by photographing the same faces under frontal uniform light source conditions, and the face attribute information of the faces in the second images. In practice, a first image and a second image taken in the same light source environment have the same shooting angle, the same photographed object, and the same position of the photographed object; therefore, under the same light source environment, the face attribute information of the face in the first image is identical to that of the face in the second image. After obtaining the preset training samples and the pre-established generative adversarial network, the electronic device may train the image generation model through the following training steps:
In the first step, the parameters of the generation network are fixed, the first image is used as the input of the generation network, and the image output by the generation network is input to a face attribute recognition model trained in advance to obtain the face attribute information of the face to be recognized.
In the second step, the image output by the generation network and the second image are used as inputs of the first discrimination network, and the face attribute information of the face to be recognized and the face attribute information of the face in the second image are used as inputs of the second discrimination network; the first discrimination network and the second discrimination network are then trained using a machine learning method. It should be noted that, since the image output by the generation network is known to be generated data and the second image is known to be real data, the labels indicating whether an image input to the first discrimination network is generated data or real data can be produced automatically.
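Because it is known at training time which inputs come from the generation network (generated data) and which are second images (real data), labels for the first discrimination network can be produced without any manual annotation. A minimal sketch, using the patent's convention of 1 for generated data and 0 for real data:

```python
def label_for_first_discriminator(generated_images, real_images):
    """Auto-generate supervision for the first discrimination network:
    every image output by the generation network gets label 1, every
    real (second) image gets label 0. The data source of each image is
    known, so no human annotation is required."""
    images = list(generated_images) + list(real_images)
    labels = [1] * len(generated_images) + [0] * len(real_images)
    return images, labels
```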
In the third step, the parameters of the trained first discrimination network and second discrimination network are fixed, the first image is used as the input of the generation network, and the generation network is trained using a machine learning method, the back-propagation algorithm, and the gradient descent algorithm. In practice, the back-propagation algorithm and the gradient descent algorithm are well-known techniques that are widely studied and applied at present, and are not described in detail here.
In the fourth step, the loss function values of the trained first discrimination network and second discrimination network are determined, and in response to determining that the loss function values converge, the generation network is determined as the image generation model.
It should be noted that, in response to determining that the loss function values do not converge, the electronic device may re-execute the above training steps using the trained generation network, first discrimination network, and second discrimination network. As a result, the parameters of the image generation model obtained by training the generative adversarial network are based not only on the training samples but also on the back-propagation of the first and second discrimination networks, so the generation model can be trained without relying on a large number of labeled samples. This reduces labor cost and further improves the flexibility of image processing.
In some optional implementations of this embodiment, the training samples may be generated through the following steps:
In the first step, a pre-established three-dimensional face model is obtained. Here, the three-dimensional face model may be pre-established by a technician using various existing three-dimensional model design tools; such tools can support setting different types of light sources to render the established three-dimensional face model, and support functions such as the projective transformation from a three-dimensional model to a two-dimensional image, which are not described in detail here.
In the second step, different light source parameters are set to render the three-dimensional face model, obtaining a first image and a second image with different illumination parameters, where the light source parameters of the first image are parameters under non-frontal uniform light source conditions and the light source parameters of the second image are parameters under frontal uniform light source conditions. In practice, light sources may be set at various angles relative to the three-dimensional face model, such as above, below, behind, beside, or in front of it, and the light sources may be of various types, such as point light sources and area light sources. Here, since the three-dimensional model design tool supports projective transformation, the first image and the second image can be obtained directly with the tool, and the first image and the second image can be set to have the same viewing angle relative to the three-dimensional face model.
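One way to picture this second step is as enumerating light source parameter sets for the renderer: positions around the three-dimensional face model crossed with light source types, with the frontal configurations reserved for the second (target) image. The field names below are illustrative assumptions, not the interface of any particular design tool:

```python
from itertools import product

def make_light_source_params():
    """Enumerate light source parameter sets for rendering the 3-D face
    model: every position/type combination. Frontal configurations yield
    second images (target lighting); all others yield first images."""
    positions = ["front", "top", "bottom", "behind", "side"]
    types = ["point", "area"]
    configs = [{"position": p, "type": t} for p, t in product(positions, types)]
    frontal = [c for c in configs if c["position"] == "front"]
    non_frontal = [c for c in configs if c["position"] != "front"]
    return frontal, non_frontal
```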
In the third step, the second image is input to the face attribute recognition model trained in advance to obtain the face attribute information of the face in the second image. It should be noted that the face attribute recognition model used in this step is the same model as the one used to obtain the face attribute information of the face in the processed image, the face attribute information of the face in the image to be input to the generation network, and the face attribute information of the face to be recognized; the operation method of this step is essentially identical to the operation methods for obtaining those pieces of face attribute information, and is not repeated here.
In the fourth step, the first image, the second image, and the face attribute information of the face in the second image are composed into a training sample. Compared with directly collecting real images with a camera, establishing training samples with a three-dimensional face model can flexibly and quickly generate more samples; moreover, various angles and various types of illumination conditions can be simulated, making the data of the training samples richer and their coverage wider.
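The four steps can be tied together as a sample-assembly sketch. The `render` and `recognize_attributes` callables stand in for the design tool's renderer and the pre-trained face attribute recognition model; both, and the fixed shared viewpoint, are assumptions for illustration:

```python
def build_training_samples(render, recognize_attributes,
                           non_frontal_configs, frontal_config, view):
    """Assemble (first image, second image, attributes) triples from one
    3-D face model: render the frontal-light target image once from the
    shared viewpoint, recognize its attributes, then pair it with each
    non-frontal rendering taken from the same viewpoint."""
    second_image = render(frontal_config, view)   # frontal uniform light
    attrs = recognize_attributes(second_image)    # e.g. gender, age, race
    return [(render(cfg, view), second_image, attrs)   # same viewpoint
            for cfg in non_frontal_configs]
```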
Step 203: input the optimized image to the face attribute recognition model trained in advance to obtain the face attribute information of the face in the optimized image.

In this embodiment, the electronic device may input the optimized image to the face attribute recognition model trained in advance to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to recognize the face attributes of a face in an image to obtain face attribute information. It should be noted that the face attribute recognition model used in this step is the same model as the one used to obtain the face attribute information of the face in the processed image, the face attribute information of the face in the image to be input to the generation network, the face attribute information of the face to be recognized, and the face attribute information of the face in the second image; the operation method of this step is essentially identical to the corresponding operation methods, and is not described in detail here.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the face attribute recognition method according to this embodiment. In the application scenario of Fig. 3, the electronic device for processing images (such as a mobile phone) may first turn on its camera and photograph an object (such as a face) under the current non-frontal uniform light source conditions (such as backlight) to obtain the image to be processed (as indicated by reference numeral 301). The image to be processed may then be input to the image generation model trained in advance to obtain the optimized image resulting from performing light adjustment on the image to be processed (as indicated by reference numeral 302). It should be noted that the images indicated by reference numerals 301 and 302 are only illustrative. Finally, the optimized image (as indicated by reference numeral 302) is input to the face attribute recognition model trained in advance, and the face attribute information of the face in the optimized image (as indicated by reference numeral 303) is obtained: male, 30 years old, yellow race.
In the method provided by the above embodiment of the application, the acquired image to be processed is input to the image generation model trained in advance to obtain the optimized image resulting from performing light adjustment on the image to be processed, and the optimized image is then input to the face attribute recognition model trained in advance to obtain the face attribute information of the face in the optimized image. Therefore, for an image captured under poor illumination conditions (such as backlight or sidelight), the attribute information of its face can be determined accurately, which improves the accuracy of face attribute recognition.
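End to end, steps 201 to 203 form a two-stage inference pipeline: relight first, then recognize. A minimal sketch, with the two trained models as placeholder callables (their names are assumptions):

```python
def recognize_face_attributes(pending_image, image_generation_model,
                              attribute_model):
    """Step 202: adjust the light of an image shot under non-frontal
    uniform light; step 203: recognize attributes on the relit image."""
    optimized_image = image_generation_model(pending_image)  # light adjustment
    return attribute_model(optimized_image)                  # attribute info
```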
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the application provides an embodiment of a face attribute recognition device. The device embodiment corresponds to the method embodiment shown in Fig. 2, and the device can be specifically applied to various electronic equipment.
As shown in Fig. 4, the face attribute recognition device 400 of this embodiment includes a first acquisition unit 401, a first input unit 402, and a second input unit 403. The first acquisition unit 401 is configured to obtain an image to be processed, where the image to be processed is an image of a face shot under non-frontal uniform light source conditions. The first input unit 402 is configured to input the image to be processed to an image generation model trained in advance to obtain the optimized image resulting from performing light adjustment on the image to be processed, where the optimized image is a face image presented under frontal uniform light source conditions, and the image generation model is used to perform light adjustment on an image captured under non-frontal uniform light source conditions to generate an image under frontal uniform light source conditions. The second input unit 403 is configured to input the optimized image to a face attribute recognition model trained in advance to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to recognize the face attributes of a face in an image to obtain face attribute information.
In this embodiment, the specific processing of the first acquisition unit 401, the first input unit 402, and the second input unit 403 of the face attribute recognition device 400 may refer to step 201, step 202, and step 203 in the embodiment corresponding to Fig. 2.
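The unit decomposition of device 400 can be sketched as a thin class whose three collaborators mirror units 401 to 403; the injected callables are illustrative assumptions:

```python
class FaceAttributeRecognitionDevice:
    """Mirrors device 400: unit 401 acquires the pending image, unit 402
    relights it with the image generation model, and unit 403 runs the
    face attribute recognition model on the result."""
    def __init__(self, acquire_image, generation_model, attribute_model):
        self.acquire_image = acquire_image        # first acquisition unit 401
        self.generation_model = generation_model  # first input unit 402
        self.attribute_model = attribute_model    # second input unit 403

    def run(self):
        pending = self.acquire_image()              # step 201
        optimized = self.generation_model(pending)  # step 202
        return self.attribute_model(optimized)      # step 203
```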
In some optional implementations of this embodiment, the face attribute recognition device 400 may also include a second acquisition unit (not shown) and a training unit (not shown). In the first step, the second acquisition unit may obtain preset training samples and a pre-established generative adversarial network. For example, the generative adversarial network may be a deep convolutional generative adversarial network. The generative adversarial network may include a generation network, a first discrimination network, and a second discrimination network, where the generation network may be used to perform light adjustment on an input image and output the adjusted image, the first discrimination network may be used to determine whether an input image was output by the generation network, and the second discrimination network may be used to determine whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network. Here, the face attribute information of the face in the image output by the generation network is obtained by inputting that image to a face attribute recognition model trained in advance, while the face attribute information of the face in the image to be input to the generation network is manually annotated information obtained in advance. The face attribute recognition model may be used to recognize the face attributes of a face in an image to obtain face attribute information, and may be obtained by performing supervised training on an existing model (such as a convolutional neural network) using a machine learning method. The samples used for training the face attribute recognition model may include a large number of face images and the face attribute information of the face in each face image. In practice, the face images in the samples may be used as the input of the model and the face attribute information as the output of the model; the model is trained using a machine learning method, and the trained model is determined as the face attribute recognition model.
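The supervised recipe above (face images as model input, attribute labels as model output) can be illustrated with a deliberately tiny classifier. The patent names convolutional neural networks as one choice; the nearest-centroid toy below is only a stand-in to make the fit/predict shape concrete, not the patent's model:

```python
def train_attribute_model(face_images, attribute_labels):
    """Toy supervised training: average the feature vectors of each
    attribute label into a centroid; prediction returns the label of the
    nearest centroid. Stands in for e.g. supervised training of a CNN."""
    grouped = {}
    for img, label in zip(face_images, attribute_labels):
        grouped.setdefault(label, []).append(img)
    centroids = {
        label: [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]
        for label, vecs in grouped.items()
    }

    def predict(img):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(img, c))
        return min(centroids, key=lambda label: sq_dist(centroids[label]))

    return predict
```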
It should be noted that the face attribute information mainly includes gender, age, race, expression, and the like. The generation network may be a convolutional neural network for performing image processing; the first discrimination network and the second discrimination network may be convolutional neural networks, or other model structures capable of implementing a classification function, such as support vector machines. It should be noted that the image output by the generation network may be represented as a three-channel RGB matrix. Here, the first discrimination network may output 1 if it determines that the input image is an image output by the generation network, and may output 0 if it determines that it is not. The second discrimination network may output 1 if it determines that the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network, and may output 0 if it determines that the two do not match. It should be noted that the first and second discrimination networks may also output other preset values, and are not limited to 1 and 0.
In the second step, the training unit may train the generation network, the first discrimination network, and the second discrimination network based on the training samples using a machine learning method, and determine the trained generation network as the image generation model. Specifically, the training unit may first fix the parameters of either the generation network or the discrimination networks (the fixed side may be referred to as the first network) and optimize the network whose parameters are not fixed (which may be referred to as the second network); the parameters of the second network are then fixed, and the first network is improved. The above iteration is performed continuously until the loss function values of the first discrimination network and the second discrimination network converge, at which point the generation network may be determined as the image generation model. It should be noted that the above iterative process, continued until the loss function values of the first and second discrimination networks converge, is a back-propagation process.
In some optional implementations of this embodiment, the training samples may include multiple first images obtained by photographing faces under non-frontal uniform light source conditions, second images obtained by photographing the same faces under frontal uniform light source conditions, and the face attribute information of the faces in the second images. In practice, a first image and a second image taken in the same light source environment have the same shooting angle, the same photographed object, and the same position of the photographed object; therefore, under the same light source environment, the face attribute information of the face in the first image is identical to that of the face in the second image.
In some optional implementations of this embodiment, after the preset training samples and the pre-established generative adversarial network are obtained, the training unit may train the image generation model through the following training steps:

In the first step, the parameters of the generation network are fixed, the first image is used as the input of the generation network, and the image output by the generation network is input to a face attribute recognition model trained in advance to obtain the face attribute information of the face to be recognized.

In the second step, the image output by the generation network and the second image are used as inputs of the first discrimination network, and the face attribute information of the face to be recognized and the face attribute information of the face in the second image are used as inputs of the second discrimination network; the first discrimination network and the second discrimination network are then trained using a machine learning method. It should be noted that, since the image output by the generation network is known to be generated data and the second image is known to be real data, the labels indicating whether an image input to the first discrimination network is generated data or real data can be produced automatically.

In the third step, the parameters of the trained first discrimination network and second discrimination network are fixed, the first image is used as the input of the generation network, and the generation network is trained using a machine learning method, the back-propagation algorithm, and the gradient descent algorithm. In practice, the back-propagation algorithm and the gradient descent algorithm are well-known techniques that are widely studied and applied at present, and are not described in detail here.

In the fourth step, the loss function values of the trained first discrimination network and second discrimination network are determined, and in response to determining that the loss function values converge, the generation network is determined as the image generation model.
In some optional implementations of this embodiment, in response to determining that the loss function values do not converge, the training unit may re-execute the above training steps using the trained generation network, first discrimination network, and second discrimination network. As a result, the parameters of the image generation model obtained by training the generative adversarial network are based not only on the training samples but also on the back-propagation of the first and second discrimination networks, so the generation model can be trained without relying on a large number of labeled samples. This reduces labor cost and further improves the flexibility of image processing.
In some optional implementations of this embodiment, the face attribute recognition device 400 may also include a third acquisition unit (not shown), a setting unit (not shown), a third input unit (not shown), and a composition unit (not shown).

In the first step, the third acquisition unit may obtain a pre-established three-dimensional face model. Here, the three-dimensional face model may be pre-established by a technician using various existing three-dimensional model design tools; such tools can support setting different types of light sources to render the established three-dimensional face model, and support functions such as the projective transformation from a three-dimensional model to a two-dimensional image, which are not described in detail here.

In the second step, the setting unit may set different light source parameters to render the three-dimensional face model, obtaining a first image and a second image with different illumination parameters, where the light source parameters of the first image are parameters under non-frontal uniform light source conditions and the light source parameters of the second image are parameters under frontal uniform light source conditions. In practice, light sources may be set at various angles relative to the three-dimensional face model, such as above, below, behind, beside, or in front of it, and the light sources may be of various types, such as point light sources and area light sources. Here, since the three-dimensional model design tool supports projective transformation, the first image and the second image can be obtained directly with the tool, and the first image and the second image can be set to have the same viewing angle relative to the three-dimensional face model.

In the third step, the third input unit may input the second image to the face attribute recognition model trained in advance to obtain the face attribute information of the face in the second image. It should be noted that the face attribute recognition model used in this step is the same model as the one used to obtain the face attribute information of the face in the processed image, the face attribute information of the face in the image to be input to the generation network, and the face attribute information of the face to be recognized; the operation method of this step is essentially identical to the corresponding operation methods, and is not described in detail here.

In the fourth step, the composition unit may compose the first image, the second image, and the face attribute information of the face in the second image into a training sample. Compared with directly collecting real images with a camera, establishing training samples with a three-dimensional face model can flexibly and quickly generate more samples; moreover, various angles and various types of illumination conditions can be simulated, making the data of the training samples richer and their coverage wider.
In the device provided by the above embodiment of the application, the first input unit 402 inputs the image to be processed obtained by the first acquisition unit 401 to the image generation model trained in advance to obtain the optimized image resulting from performing light adjustment on the image to be processed, and the second input unit 403 then inputs the optimized image to the face attribute recognition model trained in advance to obtain the face attribute information of the face in the optimized image. Therefore, for an image captured under poor illumination conditions (such as backlight or sidelight), the attribute information of its face can be determined accurately, which improves the accuracy of face attribute recognition.
Referring now to Fig. 5, it shows a structural diagram of a computer system 500 suitable for implementing the electronic equipment of the embodiments of the application. The electronic equipment shown in Fig. 5 is only an example and should not impose any restriction on the function and scope of use of the embodiments of the application.

As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded into a random access memory (RAM) 503 from a storage portion 508. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

The following components are connected to the I/O interface 505: an input portion 506 including a touch screen, a touch pad, and the like; an output portion 507 including a liquid crystal display (LCD), a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage portion 508 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 509 and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above functions defined in the methods of the application are performed. It should be noted that the computer-readable medium described in the application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the application, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by, or in combination with, an instruction execution system, apparatus, or device. In the application, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. The propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising a first acquisition unit, a first input unit, and a second input unit. The names of these units do not, in certain cases, limit the units themselves; for example, the first acquisition unit may also be described as "a unit for acquiring an image to be processed".
As another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire an image to be processed, wherein the image to be processed is an image of a face captured under non-frontal uniform lighting conditions; input the image to be processed into a pre-trained image generation model to obtain an optimized image resulting from light adjustment of the image to be processed, wherein the optimized image is a face image as presented under frontal uniform lighting conditions, and the image generation model is used to perform light adjustment on images captured under non-frontal uniform lighting conditions to generate images under frontal uniform lighting conditions; and input the optimized image into a pre-trained face attribute recognition model to obtain face attribute information of the face in the optimized image, wherein the face attribute recognition model is used to recognize face attributes of faces in images to obtain face attribute information.
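The two-stage pipeline described above (image to be processed → image generation model → optimized image → face attribute recognition model) can be sketched as follows. This is a minimal illustration only: both model functions below are toy stand-ins for the pre-trained networks of the application, and the function names and brightness arithmetic are assumptions, not the patented implementation.

```python
import numpy as np

def image_generation_model(image):
    """Toy light adjustment: brighten and clamp (stand-in for the trained
    generation model mapping non-frontal to frontal uniform lighting)."""
    return np.clip(image * 1.2, 0.0, 1.0)

def face_attribute_model(image):
    """Toy attribute recognition: returns one illustrative attribute."""
    return {"mean_intensity": float(image.mean())}

def recognize_face_attributes(pending_image):
    # Step 1: light-adjust the image shot under non-frontal uniform light
    optimized_image = image_generation_model(pending_image)
    # Step 2: run attribute recognition on the optimized image
    return face_attribute_model(optimized_image)

pending = np.full((64, 64, 3), 0.5)  # image to be processed (toy data)
attrs = recognize_face_attributes(pending)
```

The key design point carried by the claims is that recognition never runs on the raw image: the light-normalizing model always runs first, so the attribute model only ever sees frontally lit inputs.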
The above description covers only the preferred embodiments of the present application and the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (14)
1. A face attribute recognition method, comprising:
acquiring an image to be processed, wherein the image to be processed is an image of a face captured under non-frontal uniform lighting conditions;
inputting the image to be processed into a pre-trained image generation model to obtain an optimized image resulting from light adjustment of the image to be processed, wherein the optimized image is a face image as presented under frontal uniform lighting conditions, and the image generation model is used to perform light adjustment on an image captured under non-frontal uniform lighting conditions to generate an image under frontal uniform lighting conditions; and
inputting the optimized image into a pre-trained face attribute recognition model to obtain face attribute information of the face in the optimized image, wherein the face attribute recognition model is used to recognize a face attribute of a face in an image to obtain face attribute information.
2. The method according to claim 1, wherein the image generation model is obtained by training as follows:
acquiring a preset training sample and a pre-established generative adversarial network, wherein the generative adversarial network comprises a generation network, a first discrimination network, and a second discrimination network; the generation network is used to perform light adjustment on an input image and output the adjusted image; the first discrimination network is used to determine whether an input image was output by the generation network; the second discrimination network is used to determine whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network, where the face attribute information of the face in the image output by the generation network is obtained by inputting that image into the pre-trained face attribute recognition model, and the face attribute information of the face in the image input to the generation network is obtained in advance; and
training based on the generation network, the first discrimination network, and the second discrimination network using a machine learning method, and determining the trained generation network as the image generation model.
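The three-network adversarial structure of this claim — one generation network plus two discrimination networks with different jobs — can be sketched with toy dense layers. The layer sizes, the single-layer "networks", and the `attributes` stand-in for the face attribute recognition model are all illustrative assumptions; the application does not fix any architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """A single random dense layer standing in for a full network."""
    W = rng.standard_normal((n_in, n_out)) * 0.1
    return lambda v: np.tanh(v @ W)

D = 32        # flattened toy image size (assumption)
N_ATTR = 4    # number of toy face attributes (assumption)

gen = dense(D, D)             # generation network: light-adjusts an image
disc1 = dense(D, 1)           # first discrimination network: generated or not?
disc2 = dense(2 * N_ATTR, 1)  # second discrimination network: attributes match?

def attributes(image):
    """Stand-in for the pre-trained face attribute recognition model."""
    return image.reshape(N_ATTR, -1).mean(axis=1)

first_image = rng.standard_normal(D)   # face under non-frontal lighting
adjusted = gen(first_image)            # generation network output
d1_score = disc1(adjusted)             # judged by the first discriminator
pair = np.concatenate([attributes(adjusted), attributes(first_image)])
d2_score = disc2(pair)                 # judged by the second discriminator
```

Note the asymmetry the claim specifies: the first discriminator sees only an image, while the second sees a pair of attribute vectors, one extracted from the generator's output and one obtained in advance for the input.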
3. The method according to claim 2, wherein the training sample comprises a plurality of first images of a face captured under non-frontal uniform lighting conditions, second images of the face captured under frontal uniform lighting conditions, and the face attribute information of the faces in the second images.
4. The method according to claim 3, wherein the training based on the generation network, the first discrimination network, and the second discrimination network using a machine learning method and determining the trained generation network as the image generation model comprises:
performing the following training step: fixing the parameters of the generation network, taking the first images as input to the generation network, and inputting the images output by the generation network into the pre-trained face attribute recognition model to obtain face attribute information of the faces to be recognized; taking the images output by the generation network and the second images as input to the first discrimination network, taking the face attribute information of the faces to be recognized and the face attribute information of the faces in the second images as input to the second discrimination network, and training the first discrimination network and the second discrimination network using a machine learning method; fixing the parameters of the trained first discrimination network and second discrimination network, taking the first images as input to the generation network, and training the generation network using a machine learning method, back propagation, and gradient descent; and determining the loss function values of the trained first discrimination network and second discrimination network and, in response to determining that the loss function values converge, determining the trained generation network as the image generation model.
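The control flow of this claim — hold the generator fixed while a loss is evaluated, then hold the loss side fixed while the generator is updated by gradient descent, then test the loss value for convergence — can be illustrated with a deliberately tiny scalar example. This is not a GAN: the discriminator losses of the real method are replaced by a plain squared residual, and the "generator" is a single brightness gain; all numbers and names are assumptions, not the patented training scheme.

```python
def training_run(x=0.5, t=0.6, lr=0.5, tol=1e-6, max_steps=500):
    g = 1.0                  # generation-network parameter (toy scalar gain)
    prev_loss = None
    for _ in range(max_steps):
        # Phase 1: g fixed -> evaluate the loss value for this step
        loss = (g * x - t) ** 2
        # Phase 2: loss side fixed -> update g by gradient descent
        g -= lr * 2 * x * (g * x - t)
        # Phase 3: convergence check on the loss value (claims 4 and 5:
        # stop when converged, otherwise re-execute the training step)
        if prev_loss is not None and abs(prev_loss - loss) < tol:
            break
        prev_loss = loss
    return g, loss

g_final, final_loss = training_run()
```

The toy generator converges to the gain `t / x = 1.2` that maps the input brightness to the target, mirroring how the real generation network is driven toward outputs that look frontally lit.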
5. The method according to claim 4, wherein the training based on the generation network, the first discrimination network, and the second discrimination network using a machine learning method and determining the trained generation network as the image generation model further comprises:
in response to determining that the loss function values do not converge, re-executing the training step using the trained generation network, first discrimination network, and second discrimination network.
6. The method according to any one of claims 3-5, wherein the training sample is generated as follows:
acquiring a pre-established three-dimensional face model;
setting different light source parameters to render the three-dimensional face model respectively, obtaining a first image and a second image with different light source parameters, wherein the light source parameters of the first image are parameters under non-frontal uniform lighting conditions and the light source parameters of the second image are parameters under frontal uniform lighting conditions;
inputting the second image into the pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image; and
composing the training sample from the first image, the second image, and the face attribute information of the face in the second image.
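The sample-generation procedure of this claim can be sketched as: "render" the same face model under two light-parameter settings, label the frontally lit render with the attribute model, and bundle the triple into one training sample. The renderer, the `intensity` parameter, and the attribute model below are all illustrative assumptions standing in for a real 3D renderer and trained network.

```python
import numpy as np

face_model = np.full((8, 8), 0.5)  # stand-in for a 3D face model's texture

def render(model, light):
    """Hypothetical renderer: scales the model by the light intensity."""
    return np.clip(model * light["intensity"], 0.0, 1.0)

def attribute_model(image):
    """Stand-in for the pre-trained face attribute recognition model."""
    return {"brightness": float(image.mean())}

non_frontal = {"intensity": 0.4}   # non-frontal uniform light source params
frontal = {"intensity": 1.0}       # frontal uniform light source params

first_image = render(face_model, non_frontal)   # darker, uneven-light proxy
second_image = render(face_model, frontal)      # well-lit ground truth
sample = (first_image, second_image, attribute_model(second_image))
```

Rendering one model twice guarantees the pair depicts the same face with only the lighting changed, which is why the claim derives both images from a single 3D model rather than from separate photographs.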
7. A face attribute recognition apparatus, comprising:
a first acquisition unit configured to acquire an image to be processed, wherein the image to be processed is an image of a face captured under non-frontal uniform lighting conditions;
a first input unit configured to input the image to be processed into a pre-trained image generation model to obtain an optimized image resulting from light adjustment of the image to be processed, wherein the optimized image is a face image as presented under frontal uniform lighting conditions, and the image generation model is used to perform light adjustment on an image captured under non-frontal uniform lighting conditions to generate an image under frontal uniform lighting conditions; and
a second input unit configured to input the optimized image into a pre-trained face attribute recognition model to obtain face attribute information of the face in the optimized image, wherein the face attribute recognition model is used to recognize a face attribute of a face in an image to obtain face attribute information.
8. The apparatus according to claim 7, wherein the apparatus further comprises:
a second acquisition unit configured to acquire a preset training sample and a pre-established generative adversarial network, wherein the generative adversarial network comprises a generation network, a first discrimination network, and a second discrimination network; the generation network is used to perform light adjustment on an input image and output the adjusted image; the first discrimination network is used to determine whether an input image was output by the generation network; the second discrimination network is used to determine whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network, where the face attribute information of the face in the image output by the generation network is obtained by inputting that image into a pre-trained face attribute recognition model, and the face attribute information of the face in the image input to the generation network is obtained in advance; and
a training unit configured to train based on the generation network, the first discrimination network, and the second discrimination network using a machine learning method, and to determine the trained generation network as the image generation model.
9. The apparatus according to claim 8, wherein the training sample comprises a plurality of first images of a face captured under non-frontal uniform lighting conditions, second images of the face captured under frontal uniform lighting conditions, and the face attribute information of the faces in the second images.
10. The apparatus according to claim 9, wherein the training unit is further configured to:
perform the following training step: fixing the parameters of the generation network, taking the first images as input to the generation network, and inputting the images output by the generation network into the pre-trained face attribute recognition model to obtain face attribute information of the faces to be recognized; taking the images output by the generation network and the second images as input to the first discrimination network, taking the face attribute information of the faces to be recognized and the face attribute information of the faces in the second images as input to the second discrimination network, and training the first discrimination network and the second discrimination network using a machine learning method; fixing the parameters of the trained first discrimination network and second discrimination network, taking the first images as input to the generation network, and training the generation network using a machine learning method, back propagation, and gradient descent; and determining the loss function values of the trained first discrimination network and second discrimination network and, in response to determining that the loss function values converge, determining the trained generation network as the image generation model.
11. The apparatus according to claim 10, wherein the training unit is further configured to:
in response to determining that the loss function values do not converge, re-execute the training step using the trained generation network, first discrimination network, and second discrimination network.
12. The apparatus according to any one of claims 9-11, wherein the apparatus further comprises:
a third acquisition unit configured to acquire a pre-established three-dimensional face model;
a setting unit configured to set different light source parameters to render the three-dimensional face model respectively, obtaining a first image and a second image with different light source parameters, wherein the light source parameters of the first image are parameters under non-frontal uniform lighting conditions and the light source parameters of the second image are parameters under frontal uniform lighting conditions;
a third input unit configured to input the second image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image; and
a composing unit configured to compose the training sample from the first image, the second image, and the face attribute information of the face in the second image.
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810044637.9A CN108133201B (en) | 2018-01-17 | 2018-01-17 | Face character recognition methods and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108133201A true CN108133201A (en) | 2018-06-08 |
CN108133201B CN108133201B (en) | 2019-10-25 |
Family
ID=62399979
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810044637.9A Active CN108133201B (en) | 2018-01-17 | 2018-01-17 | Face character recognition methods and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108133201B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108364029A (en) * | 2018-03-19 | 2018-08-03 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating model |
CN108921792A (en) * | 2018-07-03 | 2018-11-30 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling picture |
CN109102029A (en) * | 2018-08-23 | 2018-12-28 | 重庆科技学院 | Information, which maximizes, generates confrontation network model synthesis face sample quality appraisal procedure |
CN109377535A (en) * | 2018-10-24 | 2019-02-22 | 电子科技大学 | Facial attribute automatic edition system, method, storage medium and terminal |
CN109754447A (en) * | 2018-12-28 | 2019-05-14 | 上海联影智能医疗科技有限公司 | Image generating method, device, equipment and storage medium |
CN110009059A (en) * | 2019-04-16 | 2019-07-12 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating model |
CN110443244A (en) * | 2019-08-12 | 2019-11-12 | 深圳市捷顺科技实业股份有限公司 | A kind of method and relevant apparatus of graphics process |
WO2020037680A1 (en) * | 2018-08-24 | 2020-02-27 | 太平洋未来科技(深圳)有限公司 | Light-based three-dimensional face optimization method and apparatus, and electronic device |
CN111369429A (en) * | 2020-03-09 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN111369468A (en) * | 2020-03-09 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN111524216A (en) * | 2020-04-10 | 2020-08-11 | 北京百度网讯科技有限公司 | Method and device for generating three-dimensional face data |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280413B (en) * | 2018-01-17 | 2022-04-19 | 百度在线网络技术(北京)有限公司 | Face recognition method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150278997A1 (en) * | 2012-09-26 | 2015-10-01 | Korea Institute Of Science And Technology | Method and apparatus for inferring facial composite |
CN107423701A (en) * | 2017-07-17 | 2017-12-01 | 北京智慧眼科技股份有限公司 | The non-supervisory feature learning method and device of face based on production confrontation network |
CN107491771A (en) * | 2017-09-21 | 2017-12-19 | 百度在线网络技术(北京)有限公司 | Method for detecting human face and device |
Non-Patent Citations (2)
Title |
---|
HE ZHANG ET AL: "Image De-raining Using a Conditional Generative Adversarial Network", arXiv:1701.05957 *
ZHANG WEI ET AL: "Face recognition development based on generative adversarial networks", Electronic World *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108364029A (en) * | 2018-03-19 | 2018-08-03 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating model |
CN108921792A (en) * | 2018-07-03 | 2018-11-30 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling picture |
CN108921792B (en) * | 2018-07-03 | 2023-06-27 | 北京字节跳动网络技术有限公司 | Method and device for processing pictures |
CN109102029A (en) * | 2018-08-23 | 2018-12-28 | 重庆科技学院 | Information, which maximizes, generates confrontation network model synthesis face sample quality appraisal procedure |
WO2020037680A1 (en) * | 2018-08-24 | 2020-02-27 | 太平洋未来科技(深圳)有限公司 | Light-based three-dimensional face optimization method and apparatus, and electronic device |
CN109377535A (en) * | 2018-10-24 | 2019-02-22 | 电子科技大学 | Facial attribute automatic edition system, method, storage medium and terminal |
US11948314B2 (en) | 2018-12-28 | 2024-04-02 | Shanghai United Imaging Intelligence Co., Ltd. | Systems and methods for image processing |
US11270446B2 (en) | 2018-12-28 | 2022-03-08 | Shanghai United Imaging Intelligence Co., Ltd. | Systems and methods for image processing |
CN109754447B (en) * | 2018-12-28 | 2021-06-22 | 上海联影智能医疗科技有限公司 | Image generation method, device, equipment and storage medium |
CN109754447A (en) * | 2018-12-28 | 2019-05-14 | 上海联影智能医疗科技有限公司 | Image generating method, device, equipment and storage medium |
CN110009059A (en) * | 2019-04-16 | 2019-07-12 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating model |
CN110443244A (en) * | 2019-08-12 | 2019-11-12 | 深圳市捷顺科技实业股份有限公司 | A kind of method and relevant apparatus of graphics process |
CN110443244B (en) * | 2019-08-12 | 2023-12-05 | 深圳市捷顺科技实业股份有限公司 | Graphics processing method and related device |
CN111369429A (en) * | 2020-03-09 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN111369468B (en) * | 2020-03-09 | 2022-02-01 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN111369468A (en) * | 2020-03-09 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN111524216B (en) * | 2020-04-10 | 2023-06-27 | 北京百度网讯科技有限公司 | Method and device for generating three-dimensional face data |
CN111524216A (en) * | 2020-04-10 | 2020-08-11 | 北京百度网讯科技有限公司 | Method and device for generating three-dimensional face data |
Also Published As
Publication number | Publication date |
---|---|
CN108133201B (en) | 2019-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108133201B (en) | Face character recognition methods and device | |
CN108154547B (en) | Image generating method and device | |
CN108280413A (en) | Face identification method and device | |
CN108171206B (en) | Information generating method and device | |
US11010896B2 (en) | Methods and systems for generating 3D datasets to train deep learning networks for measurements estimation | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
US20190102603A1 (en) | Method and apparatus for determining image quality | |
CN110503703A (en) | Method and apparatus for generating image | |
CN107491771A (en) | Method for detecting human face and device | |
CN109816589A (en) | Method and apparatus for generating cartoon style transformation model | |
CN108491809A (en) | The method and apparatus for generating model for generating near-infrared image | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
CN108363995A (en) | Method and apparatus for generating data | |
CN107771336A (en) | Feature detection and mask in image based on distribution of color | |
CN108776786A (en) | Method and apparatus for generating user's truth identification model | |
CN108985257A (en) | Method and apparatus for generating information | |
CN108830235A (en) | Method and apparatus for generating information | |
CN108388878A (en) | The method and apparatus of face for identification | |
CN108491823A (en) | Method and apparatus for generating eye recognition model | |
CN107622240A (en) | Method for detecting human face and device | |
CN108416323A (en) | The method and apparatus of face for identification | |
CN109344752A (en) | Method and apparatus for handling mouth image | |
CN109101919A (en) | Method and apparatus for generating information | |
CN108416326A (en) | Face identification method and device | |
CN108171204A (en) | Detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||